VMAX3® Family Product Guide: VMAX 100K, VMAX 200K, VMAX 400K with HYPERMAX OS, REVISION 6.5...
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA.
HYPERMAX OS support for open systems..........58
Backup and restore to external arrays............59
Data movement................59
Typical site topology..............60
ProtectPoint solution components..........61
ProtectPoint and traditional backup..........62
Basic backup workflow..............63
Open Replicator hot (or live) pull................171
Open Replicator cold (or point-in-time) pull..............172
Migrating data and removing the original secondary array (R2)........173
Migrating data and replacing the original primary array (R1)........174
IEA480E service alert error message format (SRDF Group lost/SIM presented against unrelated resource)..................184
z/OS IEA480E service alert error message format (mirror-2 resynchronization)..185
z/OS IEA480E service alert error message format (mirror-1 resynchronization)..185
eLicensing process....................188
Environmental errors reported as SIM messages............181
VMAX3 product title capacity types.................189
VMAX3 license suites for open systems environment..........191
Individual licenses for open systems environment.............196
Individual licenses for open systems environment.............197
TABLES
License suites for mainframe environment..............198
Preface

As part of an effort to improve its product lines, EMC periodically releases revisions of its software and hardware. Therefore, some functions described in this document might not be supported by all versions of the software or hardware currently in use.
Documents the SYMCLI commands, daemons, error codes, and option file parameters provided with the Solutions Enabler man pages.

EMC Solutions Enabler Array Controls and Management for HYPERMAX OS CLI User Guide
Describes how to configure array control, management, and migration operations using SYMCLI commands for arrays running HYPERMAX OS.
Defines the versions of HYPERMAX OS and Enginuity that can make up valid SRDF replication and SRDF/Metro configurations, and can participate in Non-Disruptive Migration (NDM).

EMC Solutions Enabler TimeFinder SnapVX for HYPERMAX OS CLI User Guide
Describes how to configure and manage TimeFinder SnapVX environments using SYMCLI commands.
EMC GDDR for SRDF/A Product Guide
Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
EMC z/TPF Suite Release Notes
Describes new features and any known limitations.

Special notice conventions used in this document
EMC uses the following conventions for special notices:

DANGER Indicates a hazardous situation which, if not avoided, will result in death or serious injury.
Vertical bar indicates alternate selections; the bar means "or".
Braces enclose content that the user must specify, such as x or y or z.
Ellipses indicate nonessential information omitted from the example.
Downloads, as well as more dynamic content, such as presentations, discussion, relevant Customer Support Forum entries, and a link to EMC Live Chat.

EMC Live Chat — Open a Chat or instant message session with an EMC Support Engineer.

eLicensing support
To activate your entitlements and obtain your VMAX license files, visit the Service Center on https://support.EMC.com, as directed on your License Authorization...
HYPERMAX 5977.691.684
- Revised content: Number of CPUs required to support eManagement.
- Revised content: In SRDF/Metro, changed terminology from quorum to Witness.
- New content: New feature for FAST.X; SRDF/Metro on page 145.
- Revised content:
SRDF/Star solutions on page 109.
- New content: Embedded NAS (eNAS).

HYPERMAX OS 5977.497.471
- First release of the VMAX 100K, 200K, and 400K arrays with EMC HYPERMAX OS 5977.

HYPERMAX OS 5977.250.189
- FAST.X requires Solutions Enabler/Unisphere for VMAX version 8.0.3.
CHAPTER 1
VMAX3 with HYPERMAX OS

This chapter summarizes VMAX3 Family specifications and describes the features of HYPERMAX OS. Topics include:
Introduction to VMAX3 with HYPERMAX OS.............22
VMAX3 Family 100K, 200K, 400K arrays............23
HYPERMAX OS....................34
VMAX3 Family 100K, 200K, 400K arrays
VMAX3 arrays range in size from single-engine systems up to two-engine (100K), four-engine (200K), or eight-engine (400K) systems. Engines (consisting of two controllers) and high-capacity disk enclosures (for both 2.5" and 3.5" drives) are consolidated in the same system bay, providing a dramatic increase in floor tile density.
Vault to Flash

Feature | VMAX 100K | VMAX 200K | VMAX 400K
Vault implementation | 2 to 4 Flash I/O modules per engine | 2 to 8 Flash I/O modules per engine | 2 to 8 Flash I/O modules per engine
Table 6 Front-end I/O modules

Feature | VMAX 100K | VMAX 200K | VMAX 400K
Max front-end I/O modules/engine | | |
Front-end I/O modules and protocols | FC: 4 x 8Gb/s (FC, SRDF) | FC: 4 x 8Gb/s (FC, SRDF) | FC: 4 x 8Gb/s (FC, SRDF)
Disk drive support
The VMAX 100K, 200K, and 400K support the latest 6 Gb/s dual-ported native SAS drives. All drive families (Enterprise Flash, 10K, 15K, and 7.2K RPM) support two independent I/O channels with automatic failover and fault isolation. Mixed drive capacities and speeds are supported, depending on the configuration.
Table 21 Power consumption and heat dissipation (maximum total power consumption and maximum heat dissipation for VMAX 100K, VMAX 200K, and VMAX 400K)
An imbalance of AC input currents may exist on the three-phase power source feeding the array, depending on the configuration. The customer's electrician must be alerted to this possible condition to balance the phase-by-phase loading conditions within the customer's data center.
Radio frequency interference specifications
Electromagnetic fields, which include radio frequencies, can interfere with the operation of electronic equipment. EMC Corporation products have been certified to withstand radio frequency interference (RFI) in accordance with standard EN61000-4-3. In data centers that employ intentional radiators, such as cell phone repeaters, the maximum ambient RF field strength should not exceed 3 V/m.
This release has been validated to interoperate with the following KMIP-based key managers:
- Gemalto SafeNet KeySecure
- IBM Security Key Lifecycle Manager
Data at Rest Encryption on page 39 provides more information.
HYPERMAX OS emulations
HYPERMAX OS provides emulations (executables) that perform specific data service and control functions in the HYPERMAX environment. The following table lists the available emulations.

Table 27 HYPERMAX OS emulations

Area | Emulation | Description | Protocol | Speed
Back-end | | Back-end connection in the... | SAS | 6 Gb/s
The eNAS solution runs on standard array hardware and is typically pre-configured at the factory. In this scenario, EMC provides a one-time setup of the Control Station and Data Movers, containers, control devices, and required masking views as part of the factory eNAS pre-configuration.
Refer to Using VNX SnapSure 8.x.

eNAS replication is available as part of the Remote Replication Suite and Local Replication Suite.

Note: SRDF/A, SRDF/Metro, and TimeFinder are not available with eNAS.
(KMIP). The following external key managers are supported:
- SafeNet KeySecure by Gemalto
- IBM Security Key Lifecycle Manager

Note: For supported external key manager and HYPERMAX OS versions, refer to the EMC E-Lab Interoperability Matrix (https://www.emc.com/products/interoperability/elab.htm).

When D@RE is enabled, all configured drives are encrypted, including data drives, spares, and drives with no provisioned volumes.
D@RE is disruptive and requires reinstalling the array, and may involve a full data backup and restore. Before you upgrade, you must plan how to manage any data already on the array. EMC Professional Services offers services to help you upgrade to D@RE.
Figure 1 D@RE architecture, embedded
Figure 2 D@RE architecture, external
The self-test prevents silent data corruption due to encryption hardware failures.

Audit logs
The audit log records major activities on the VMAX3 array, including:
- Host-initiated actions
- Physical component changes
- Actions on the MMCS
- D@RE key management events
EMC offers the following data erasure services:
- EMC Data Erasure for Full Arrays — Overwrites data on all drives in the system when replacing, retiring, or re-purposing an array.
- EMC Data Erasure/Single Drives — Overwrites data on individual SAS and Flash drives.
HYPERMAX OS corrects single-bit errors and reports an error code once the single-bit errors reach a predefined threshold. In the unlikely event that physical memory replacement is required, the array notifies EMC support, and a replacement is ordered.

Drive sparing and direct member sparing
When HYPERMAX OS 5977 detects a drive is about to fail or has failed, a direct member sparing (DMS) process is initiated.
To support vault to flash, the VMAX3 arrays require the following number of flash I/O modules:
- VMAX 100K: two to four per engine
- VMAX 200K and 400K: two to eight per engine
The size of the flash module is determined by the amount of system cache and metadata required for the configuration.
ViPR suite......................52
vStorage APIs for Array Integration..............53
SRDF Adapter for VMware vCenter Site Recovery Manager......54
SRDF/Cluster Enabler..................54
EMC Product Suite for z/TPF................54
SRDF/TimeFinder Manager for IBM i..............55
AppSync......................55

Management Interfaces
SRA V6.3
VASA Provider V8.4

Unisphere for VMAX
EMC Unisphere for VMAX is a web-based application that allows you to quickly and easily provision, manage, and monitor arrays. Unisphere allows you to perform the following tasks:

Table 30 Unisphere tasks
Unisphere for VMAX is also available as a Representational State Transfer (REST) API. This robust API allows you to access performance and configuration information, and to provision storage arrays. It can be used in any programming environment that supports standard REST clients, such as web browsers and programming platforms that can issue HTTP requests.
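As a minimal sketch of what a REST client request to Unisphere might look like, the snippet below composes a URL and headers for a GET call. The base path, port, and resource name are illustrative assumptions drawn from common Unisphere deployments, not from this guide; consult the Unisphere REST API documentation for the exact endpoints your version supports.

```python
# Sketch: composing a request against the Unisphere for VMAX REST API.
# The base path "/univmax/restapi", port 8443, and the resource name are
# assumptions for illustration; verify them against your Unisphere version.
import base64

def build_rest_request(host, username, password, resource,
                       base="/univmax/restapi"):
    """Return the URL and headers for a GET against a Unisphere resource."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    url = f"https://{host}:8443{base}/{resource}"
    headers = {
        "Authorization": f"Basic {token}",   # Unisphere uses HTTP Basic auth
        "Accept": "application/json",        # responses are JSON documents
    }
    return url, headers

url, headers = build_rest_request(
    "unisphere.example.com", "smc", "smc",
    "system/symmetrix")                      # hypothetical resource path
print(url)
```

Any standard HTTP client (a browser, curl, or a library such as `requests`) can then issue the GET with these headers.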
Mainframe Enablers
The EMC Mainframe Enablers are a suite of software components that allow you to monitor and manage arrays running HYPERMAX OS. The following components are distributed and installed as a single package:
GDDR does not provide replication and recovery services itself, but rather monitors and automates the services provided by other EMC products, as well as third-party products, required for continuous operations or business restart. GDDR facilitates business continuity by generating scripts that can be run on demand;...
Integrate with VMware and Microsoft compute stacks Migrate non-ViPR volumes into the ViPR environment (ViPR Migration Services Host Migration Utility) For ViPR Controller requirements, refer to the EMC ViPR Controller Support Matrix on the EMC Online Support website. ViPR Storage Resource Management EMC ViPR SRM provides comprehensive monitoring, reporting, and analysis for heterogeneous block, file, and virtualized storage environments.
It gives you a quick overview of the overall capacity status in your environment, raw capacity usage, usable capacity, used capacity by purpose, usable capacity by pools, and service levels. EMC ViPR SRM Product Documentation Index provides links to related ViPR documentation. vStorage APIs for Array Integration VMware vStorage APIs for Array Integration (VAAI) optimize server performance by offloading virtual machine operations to arrays running HYPERMAX OS.
EMC Product Suite for z/TPF
The EMC Product Suite for z/TPF is a suite of components that monitor and manage arrays running HYPERMAX OS from a z/TPF host. z/TPF is an IBM mainframe operating system characterized by high-volume transaction rates with significant communications content.
SRDF/TimeFinder Manager for IBM i
EMC SRDF/TimeFinder Manager for IBM i is a set of host-based utilities that provides an IBM i interface to EMC's SRDF and TimeFinder. This feature allows you to configure and control SRDF or TimeFinder operations on...
Applications — Oracle, Microsoft SQL Server, Microsoft Exchange, and VMware VMFS and NFS datastores and File systems. Replication Technologies—SRDF, SnapVX, VNX Advanced Snapshots, VNXe Unified Snapshot, RecoverPoint, XtremIO Snapshot, and ViPR Snapshot.
CHAPTER 3
Open systems features

This chapter describes open systems-specific functionality provided with VMAX3 arrays.
HYPERMAX OS support for open systems............58
Backup and restore to external arrays..............59
VMware Virtual Volumes..................69
For more information on provisioning storage in an open systems environment, refer to Open Systems-specific provisioning on page 79. For the most recent information, consult the EMC Support Matrix in the E-Lab Interoperability Navigator at http://elabnavigator.emc.com.
Backup and restore to external arrays EMC ProtectPoint integrates primary storage on storage arrays running HYPERMAX OS and protection storage for backups on an EMC Data Domain system. ProtectPoint provides block movement of the data on application source LUNs to encapsulated Data Domain LUNs for incremental backups.
Operations to restore the data and make the recovery or restore devices available to the recovery host must be performed manually on the primary storage through EMC Solutions Enabler. The ProtectPoint workflow provides a copy of the data, but not any application intelligence.
This is often due to small or non-existent backup windows, demanding recovery time objective (RTO) or recovery point objective (RPO) requirements, or a combination of both.
Open systems features Unlike traditional backup and recovery, ProtectPoint does not rely on a separate process to discover the backup data and additional actions to move that data to backup storage. Instead of using dedicated hardware and network resources, ProtectPoint uses existing application and storage capabilities to create point-in-time copies of large data sets.
3. The primary storage array analyzes the data and uses FAST.X to copy the changed data to an encapsulated Data Domain storage device.
4. The Data Domain creates and stores a backup image of the snapshot.
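The backup steps above can be expressed as a plain sequence of operations. This is a conceptual sketch only; function and variable names are illustrative, and the real flow is driven by ProtectPoint and the arrays themselves.

```python
# Sketch of the ProtectPoint backup workflow described above:
# 1. take a point-in-time snapshot of the source,
# 2-3. identify changed tracks and copy only those to the encapsulated
#      Data Domain device (the FAST.X incremental copy),
# 4. store a static backup image on the Data Domain system.
# All names and data structures here are illustrative assumptions.
def protectpoint_backup(source_tracks, previous_tracks, data_domain_images):
    """Snapshot the source, copy only changed tracks, store a backup image."""
    snapshot = dict(source_tracks)                    # 1. point-in-time copy
    changed = {t: d for t, d in snapshot.items()      # 2-3. only changed
               if previous_tracks.get(t) != d}        #      tracks move
    data_domain_images.append(snapshot)               # 4. static backup image
    return changed

images = []
changed = protectpoint_backup({1: "a", 2: "b"}, {1: "a"}, images)
print(changed, len(images))   # {2: 'b'} 1
```

The point of the model is that each successive backup ships only the delta, while the Data Domain side always holds a complete, restorable image.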
Basic restore workflow
There are two types of restoration:
- Object-level restoration: One or more database objects are restored from a snapshot.
- Full-application rollback restoration: The application is restored to a previous point-in-time.
There are two types of recovery operations:
- A restore to the production database devices seen by the production host.
Storage Administrator must create a snapshot between the encapsulated Data Domain recovery devices and the restore/production devices, and then initiate the link copy operation. The following image shows the full application rollback restoration workflow.
Figure 7 Full-application rollback restoration workflow

1. The Data Domain system writes the backup image to the encapsulated storage device, making it available on the primary storage array.
To support management capabilities of VVols, the storage/vCenter environment requires the following: EMC VMAX VASA Provider – The VASA Provider (VP) is a software plug-in that uses a set of out-of-band management APIs (VASA version 2.0). The VASA Provider exports storage array capabilities and presents them to vSphere through the VASA APIs.
VASA Provider V8.2 or higher

For instructions on installing Unisphere and Solutions Enabler, refer to their respective installation guides. For instructions on installing the VASA Provider, refer to the EMC VMAX VASA Provider Release Notes.

The steps required to create a VVol-based virtual machine are broken up by role:

Procedure
1.
c. Create the VM Storage policies.
d. Create the VM in the VVol datastore, selecting one of the VM storage policies.
CHAPTER 4
Mainframe Features

This chapter describes mainframe-specific functionality provided with VMAX arrays.
HYPERMAX OS support for mainframe..............74
IBM z Systems functionality support..............74
IBM 2107 support....................75
Logical control unit capabilities................75
Disk drive emulations..................76
Cascading configurations...................76
HYPERMAX OS support for mainframe
VMAX 100K, 200K, and 400K arrays with HYPERMAX OS support both mainframe-only and mixed mainframe/open systems environments. VMAX arrays provide the following mainframe support for CKD:
- Support for 64, 128, and 256 FICON single- and multi-mode ports, respectively
- Sequential Data Striping
- Multi-Path Lock Facility
- HyperSwap

Note: VMAX can participate in a z/OS Global Mirror (XRC) configuration only as a secondary.

IBM 2107 support
When VMAX arrays emulate an IBM 2107, they externally represent the array serial number as an alphanumeric number in order to be compatible with IBM command output.
Specific IBM CPU models, operating system release levels, host hardware, and HYPERMAX levels are also required. For the most up-to-date information about switch support, consult the EMC Support Matrix (ESM), available through E-Lab™...
Reading an unmapped block returns a block in which each byte is equal to zero. When more storage is required to service existing or future thin devices, data devices can be added to existing thin storage groups.
Provisioning

Thin CKD
If you are using HYPERMAX 5977 or higher, initialize and label thin devices using the ICKDSF INIT utility.

Thin device oversubscription
A thin device can be presented for host use before mapping all of the reported capacity of the device. The sum of the reported capacities of the thin devices using a given pool can exceed the available storage capacity of the pool.
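The oversubscription arithmetic described above reduces to a simple ratio. The sketch below uses made-up device and pool sizes purely for illustration.

```python
# Sketch: thin-device oversubscription as described above. A ratio above
# 1.0 means the thin devices collectively report more capacity than the
# pool can actually back. Sizes are illustrative numbers, not real configs.
def oversubscription_ratio(thin_device_gb, pool_capacity_gb):
    """Ratio of total reported thin-device capacity to real pool capacity."""
    reported = sum(thin_device_gb)
    return reported / pool_capacity_gb

# Four 500 GB thin devices backed by a 1000 GB pool -> 2.0x oversubscribed.
ratio = oversubscription_ratio([500, 500, 500, 500], 1000)
print(f"oversubscription: {ratio:.1f}x")   # oversubscription: 2.0x
```

Monitoring tools (such as the THN Monitor mentioned below for mainframe environments) exist precisely because this ratio can exceed 1.0 by design.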
Figure 9 Auto-provisioning groups (a masking view associates an initiator group of host initiators, a port group of ports, and a storage group of devices)

Mainframe-specific provisioning
In Mainframe Enablers, the Thin Pool Capacity (THN) Monitor periodically examines the consumed capacity of data pools.
FAST is entirely automated and requires no user intervention. The following image shows how FAST moves hot data to high-performance drives, and cold data to lower-cost drives.

Figure 10 FAST data movement
(10K and 7.2K), technology (SAS, flash SAS), and capacity. RAID protection options are configured at the disk group level. EMC strongly recommends that you use one or more of the RAID data protection schemes for all data devices.
Storage Resource Pools. The following image shows FAST components that are pre-configured at the factory. Once installed, thin devices are created and added to the storage group.
Figure 11 FAST components (* Not supported on storage groups containing CKD volumes)

FAST allocation by storage resource pool
FAST manages the allocation of new data within the Storage Resource Pool by automatically selecting a Storage Resource Pool based on available disk technology, capacity, and RAID type.
Workload Planner), models the ability of the array to deliver that performance, and reports:
- The expected performance range, in response time
- Whether the array can deliver the requested service level
This allows the remote site to better meet the Service Level of the storage group. SRDF and EMC FAST coordination on page 163 provides more information.
Solutions Enabler V8.1 or higher

Supported external array platforms
For details on the supported external arrays, refer to the FAST.X Simple Support Matrix on the E-Lab Interoperability Navigator page: https://elabnavigator.emc.com
CHAPTER 7
Native local replication with TimeFinder

This chapter describes local replication features. Topics include:
About TimeFinder....................92
Mainframe SnapVX and zDP................98
About TimeFinder
EMC TimeFinder delivers point-in-time copies of volumes that can be used for backups, decision support, data warehouse refreshes, or any other process that requires parallel access to production data. Previous VMAX families offered multiple TimeFinder products, each with their own characteristics and use cases.
TimeFinder/Clone
TimeFinder/Mirror
TimeFinder VP Snap
TimeFinder Snap
EMC Dataset Snap
IBM FlashCopy (Full Volume and Extent Level)

Interoperability between TimeFinder SnapVX and legacy TimeFinder and IBM FlashCopy products depends on:
- The device role in the local replication session.
Use SnapVX to provision multiple test, development environments using linked snapshots. To access a point-in-time copy, create a link from the snapshot data to a host mapped target device.
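The establish-then-link flow described above can be modeled as a small state machine. This is an illustrative sketch only; real SnapVX operations are driven through Solutions Enabler (SYMCLI) or Unisphere, not through a class like this, and all names here are invented for the example.

```python
# Conceptual model of SnapVX: snapshots are targetless at establish time,
# and one snapshot can later be linked to one or more host-mapped targets.
class SnapVXSource:
    def __init__(self, name):
        self.name = name
        self.snapshots = {}          # snapshot name -> set of linked targets

    def establish(self, snapshot_name):
        """Create a targetless point-in-time snapshot of the source."""
        self.snapshots[snapshot_name] = set()

    def link(self, snapshot_name, target):
        """Present a snapshot's point-in-time data on a host-mapped target."""
        if snapshot_name not in self.snapshots:
            raise ValueError(f"no such snapshot: {snapshot_name}")
        self.snapshots[snapshot_name].add(target)

src = SnapVXSource("ProdSG")
src.establish("nightly")             # point-in-time copy, no target needed
src.link("nightly", "DevSG")         # host access comes via a linked target
src.link("nightly", "TestSG")        # one snapshot can feed many targets
print(sorted(src.snapshots["nightly"]))   # ['DevSG', 'TestSG']
```

The design point the model captures is that the snapshot itself consumes no target device; targets are only needed when a host must actually read the data.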
Target volumes must be unmounted before issuing the relink command to ensure that the host operating system does not cache any filesystem data. If accessing through VPLEX, ensure that you follow the procedure outlined in the technical note EMC VPLEX: Leveraging Array Based and Native Copy Technologies, available on support.emc.com.
— the planning phase and the implementation phase. The planning phase is done in conjunction with your EMC representative who has access to tools that can help size the capacity needed for zDP if you are currently a VMAX3 user.
Remote replication solutions

Native remote replication with SRDF
The EMC Symmetrix Remote Data Facility (SRDF) family of products offers a range of array-based disaster recovery, parallel processing, and data migration solutions for VMAX Family systems, including:
- HYPERMAX OS for VMAX All Flash 250F, 450F, 850F, and 950F arrays
secondary site until no new writes are sent to the R1 device and all data has finished copying to the R2.

SRDF 2-site solutions
The following table describes SRDF 2-site solutions.

Table 39 SRDF 2-site solutions
Solution highlights | Site topology
SRDF/Synchronous (SRDF/S)
SRDF remote replication. For more information, see EMC SRDF/Cluster Enabler Plug-in Product Guide.
Solutions Enabler software automates restart operations for VMware environments in SRDF topologies. The EMC SRDF Adapter enables VMware Site Recovery Manager to automate storage-based ESX Server disaster restart operations in SRDF solutions, with Solutions Enabler software configured as a SYMAPI server.
mirroring among surviving sites in a multi-site disaster recovery implementation. Implemented using SRDF consistency groups (CG) with SRDF/S and SRDF/A.
See: SRDF/Star solutions on page 109.

Concurrent SRDF solutions
Concurrent SRDF is a 3-site disaster recovery solution using R11 devices that replicate to two R2 devices. The two R2 devices operate independently but concurrently using any combination of SRDF modes:
- Concurrent SRDF/S to both R2 devices if the R11 site is within synchronous distance of the two R2 sites.
Cascaded SRDF can be implemented with SRDF/Star. Cascaded SRDF/Star on page 111 describes cascaded SRDF/Star. The following image shows a cascaded SRDF topology.
Figure 19 Cascaded SRDF topology (Site A to Site B: SRDF/S, SRDF/A, or Adaptive copy; Site B to Site C: SRDF/A or Adaptive copy)

SRDF/Star solutions
SRDF/Star is a disaster recovery solution that consists of three sites: primary (production), secondary, and tertiary. The secondary site synchronously mirrors the data from the primary site, and the tertiary site asynchronously mirrors the production data.
R22 devices improve the resiliency of the SRDF/Star application, and reduce the number of steps for failover procedures. The following image shows R22 devices at Site C.
Figure 21 Concurrent SRDF/Star with R22 devices

Cascaded SRDF/Star
In cascaded SRDF/Star solutions, the synchronous secondary site is always more current than the asynchronous tertiary site. If the synchronous secondary site fails, the cascaded SRDF/Star solution can incrementally establish an SRDF/A session between the primary site and the asynchronous tertiary site.
The following image shows cascaded R22 devices in a cascaded SRDF solution.

Figure 23 R22 devices in cascaded SRDF/Star
In cascaded SRDF/Star configurations with R22 devices:
- All devices at the production site (Site A) must be configured as concurrent (R11) devices paired with R21 devices (Site B) and R22 devices (Site C).
- All devices at the synchronous site in Site B must be configured as R21 devices.
- All devices at the asynchronous site in Site C must be configured as R22 devices.
SRDF solution, enabling technology refreshes. Different operating environments offer different SRDF features.

SRDF supported features
The following table lists the SRDF features supported on each hardware platform and operating environment.
TimeFinder VP Snap, TimeFinder/Mirror), and ORS.

The GCM attribute can be set in the following ways:

NOTICE: Do not set GCM on devices that are mounted and under Local Volume Manager (LVM) control.
- Automatically on a target of an SRDF or TimeFinder relationship if the source is either a 5876 device with an odd number of cylinders, or a 5977 source that has GCM set.
- Manually using Base Controls interfaces. The EMC Solutions Enabler SRDF Family CLI User Guide provides additional details.

SRDF device pairs
An SRDF device is a logical device paired with another logical device that resides in a second array.
R21 devices are typically used in cascaded 3-site solutions where:
- Data on the R1 site is synchronously mirrored to a secondary (R21) site, and then
- Asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site.
Figure 27 R21 device in cascaded SRDF

When the R1->R21->R2 SRDF relationship is established, no host has write access to the R21 device.

Note: Diskless R21 devices are not supported on arrays running HYPERMAX OS.

R22 devices
R22 devices:
- Have two R1 devices, only one of which is active at a time.
This state is possible in recovery or parallel processing operations. Not Ready—The R2 device responds Not Ready (Intervention Required) to the host for read and write operations to that device.
SRDF view
The SRDF view is composed of the SRDF state and internal SRDF device state. These states indicate whether the device is available to send data across the SRDF links, and able to receive software commands.

R1 device states
An R1 device can have the following states for SRDF operations:
- Ready—The R1 device is ready for SRDF operations.
- If the device to be swapped is participating in an active SRDF/A session.
- In SRDF/EDP topologies, diskless R11 or R22 devices are not valid end states.
- If the device to be swapped is the target device of any TimeFinder or EMC Compatible Flash operations.
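The swap restrictions above amount to a simple eligibility check. The sketch below expresses them as a predicate; the field names are illustrative assumptions, and real eligibility is evaluated by the array, not by client code like this.

```python
# Sketch: R1/R2 personality-swap restrictions expressed as a single check.
# A device cannot swap while in an active SRDF/A session, or while it is
# the target of a TimeFinder or EMC Compatible Flash operation.
def can_swap_personality(device):
    """Return True if an R1/R2 personality swap is allowed for the device."""
    if device.get("in_active_srdfa_session"):
        return False                  # active SRDF/A session blocks a swap
    if device.get("timefinder_target"):
        return False                  # TimeFinder/Compatible Flash target
    return True

print(can_swap_personality({"in_active_srdfa_session": True}))   # False
print(can_swap_personality({}))                                  # True
```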
Synchronous mode
SRDF/S maintains a real-time mirror image of data between the R1 and R2 devices over distances of approximately 200 km or less. Host writes are written simultaneously to both arrays in real time before the application I/O completes. Acknowledgments are not sent to the host until the data is stored in cache on both arrays.
- Link Limbo and Link Domino modes
- Autolink recovery
- Hardware and software compression

SRDF/A:
- Cycle time
- Session priority
- Pacing delay and threshold
Note: SRDF/A device pacing is not supported in HYPERMAX OS.

Starting in HYPERMAX OS, all SRDF groups are dynamic.

Moving dynamic devices between SRDF groups
You can move dynamic SRDF devices between groups in SRDF/S, SRDF/A, and SRDF/A MSC solutions without incurring a full synchronization.
Software and hardware compression can be enabled on both the R1 and R2 sides, but the actual compression happens on the side that initiates the I/O (typically the R1 side).
SRDF write operations

This section describes SRDF write operations.

Write operations in synchronous mode

In synchronous mode, data must be successfully written to cache at the secondary site before a positive command completion status is returned to the host that issued the write command.
Enginuity 5876—If either array in the solution is running Enginuity 5876, SRDF/A operates in legacy mode. There are two cycles on the R1 side and two cycles on the R2 side. On the R1 side:
- One Capture
- One Transmit

On the R2 side:
- One Receive
- One Apply

Each cycle switch moves the delta set to the next cycle in the process. A new capture cycle cannot start until the transmit cycle completes its commit of data from the R1 side to the R2 side.
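The legacy-mode cycle discipline can be illustrated with a small simulation. This is an illustrative sketch only, not EMC code: the class and method names are invented, and for simplicity the model couples the R1 capture switch with the R2 apply step.

```python
class SrdfALegacy:
    """Toy model of SRDF/A legacy-mode delta-set cycling (two cycles
    per side). Illustrative only; not EMC code."""

    def __init__(self):
        self.capture = set()   # R1: writes collected in the current cycle
        self.transmit = None   # R1: delta set in transit (None = committed)
        self.receive = None    # R2: delta set received from the R1 side
        self.applied = set()   # R2: data committed to the R2 image

    def write(self, track):
        self.capture.add(track)

    def transfer(self):
        # The transmit cycle commits its delta set to the R2 receive cycle.
        if self.transmit is not None:
            self.receive = self.transmit
            self.transmit = None

    def cycle_switch(self):
        # A new capture cycle cannot start until the transmit cycle has
        # committed its data to the R2 side.
        if self.transmit is not None:
            return False
        # R2 apply: the received delta set joins the consistent R2 image.
        if self.receive is not None:
            self.applied |= self.receive
            self.receive = None
        # R1: the capture delta set becomes the new transmit delta set.
        self.transmit = self.capture
        self.capture = set()
        return True

s = SrdfALegacy()
s.write("t1"); s.write("t2")
s.cycle_switch()             # t1, t2 move to the transmit cycle
s.write("t3")
assert not s.cycle_switch()  # blocked: transmit has not committed yet
s.transfer()                 # delta set reaches the R2 receive cycle
s.cycle_switch()             # apply t1, t2 on R2; t3 moves to transmit
print(sorted(s.applied))     # ['t1', 't2']
```

The blocked cycle switch in the middle of the usage example is exactly the rule stated above: the capture cycle cannot advance while the previous delta set is still in the transmit cycle.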
2 cycles on the R2 side (receive and apply) for an SRDF/A MSC session when both of the arrays in the SRDF/A solution are running HYPERMAX OS.
Figure 33 SRDF/A MSC cycle switching – multi-cycle mode (capture, transmit, receive, and apply cycles at the primary and secondary sites; transmit queue depth = M)

SRDF cycle switches all SRDF/A sessions in the MSC group at the same time. All sessions in the MSC group have the same:
- Number of cycles outstanding on the R1 side
- Transmit queue depth (M)
DSE works in tandem with group-level write pacing to prevent cache over-utilization during spikes in I/O or network slowdowns. Resources to support offloading vary depending on the version of Enginuity running on the array.
Enginuity 5876 or earlier, the SRDF/A session runs in legacy mode. DSE is disabled by default on both arrays. EMC recommends that you enable DSE on both sides.

Transmit Idle

During short-term network interruptions, the transmit idle state indicates that SRDF/A is still tracking changes but cannot transmit data to the remote side.
SRDF/A device-level write pacing is not supported or required for asynchronous R2 devices in TimeFinder or TimeFinder SnapVX sessions if either array in the configuration is running HYPERMAX OS, including:
- R1 HYPERMAX OS - R2 HYPERMAX OS
- R1 HYPERMAX OS - R2 Enginuity 5876
- R1 Enginuity 5876 - R2 HYPERMAX OS

Enginuity 5773 to 5876: SRDF/A device-level pacing applies a write pacing delay for individual SRDF/A R1 devices whose R2 counterparts participate in TimeFinder copy sessions. SRDF/A group-level pacing avoids high SRDF/A cache utilization levels when the R2 devices servicing both the SRDF/A and TimeFinder copy requests experience slowdowns.
If the remote host reads data from the R2 device while a write I/O is in transit on the SRDF links, the host will not be reading the most current data. EMC strongly recommends that you allow the remote host to read data from the R2 devices in Read Only mode only when:
- Related applications on the production host are stopped.
SRDF recovery operations

This section describes recovery operations in 2-site SRDF configurations.

Planned failover (SRDF/S)

A planned failover moves production applications from the primary site to the secondary site to test the recovery solution, or to upgrade or perform maintenance at the primary site.
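With Solutions Enabler, a planned failover is typically driven by the symrdf failover action. A minimal sketch, assuming a placeholder device group named ProdDG; applications at the primary site are stopped before the command is issued:

```shell
# Stop applications at Site A first, then move production to the R2 side.
symrdf -g ProdDG failover
```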
(Figure: SRDF failover, before and after. Labels: production host at Site A; remote failover host at Site B; site failed; SRDF links suspended; R1 Not Ready or Read/Write; R2 Read Only.)
Failback to the primary array

After the primary host and the array containing the primary (R1) devices are again operational, an SRDF failback allows production processing to resume on the primary host.

Recovery for a large number of invalid tracks

If the R2 devices have handled production processing for a long period of time, there may be large numbers of invalid tracks owed to the R1 devices.
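A hedged sketch of the corresponding Solutions Enabler actions (ProdDG is a placeholder device group; verify options against your Solutions Enabler version). The symrdf update action can copy most owed tracks back to the R1 side while production continues on the R2 side, shortening the final failback window:

```shell
# Resynchronize invalid tracks toward R1 while R2 remains in production,
# then fail back once the remaining track count is small.
symrdf -g ProdDG update
symrdf -g ProdDG failback
```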
Note

Before you begin, verify that your specific hardware models and Enginuity or HYPERMAX OS versions are supported for migrating data between different platforms.
The interim 3-site migration topology

Final 2-site topology

After migration, the original primary array is mirrored to a new secondary array. EMC support personnel are available to assist with the planning and execution of your migration projects.

Migration using SRDF/Data Mobility
Final 2-site topology

After migration, the new primary array is mirrored to the original secondary array. EMC support personnel are available to assist with the planning and execution of your migration projects.
Figure 39 Migrating data and replacing the original primary array (R1)

Replacing R1 and R2 devices with new R1 and R2 devices

You can use the combination of concurrent SRDF and cascaded SRDF to replace both R1 and R2 devices at the same time.

Note

Before you begin, verify that your specific hardware models and Enginuity or HYPERMAX OS versions are supported for migrating data between different platforms.
Migration process

The final topology

EMC support personnel are available to assist with the planning and execution of your migration projects.

Figure 40 Migrating data and replacing the original primary (R1) and secondary (R2) arrays
R1 and R2 devices should not be presented to the cluster until they reach one of these two states and present the same WWN. All device pairs in an SRDF/Metro group are managed together for all supported operations, with the following exceptions:
- If all the SRDF device pairs are Not Ready (NR) on the link, createpair operations can add devices to the group, provided the new device pairs are created Not Ready (NR) on the link.
- If all the SRDF device pairs are Not Ready (NR) on the link, deletepair operations can delete a subset of the SRDF devices in the SRDF group.
- Use the deletepair operation to delete all or a subset of device pairs from the SRDF group. Removed devices return to the non-SRDF state.
- Use the createpair operation to add additional device pairs to the SRDF group.
- Use the removepair and movepair operations to remove or move device pairs.

If all device pairs are removed from the group, the group is no longer controlled by SRDF/Metro. The group can be re-used either as an SRDF/Metro or a non-Metro group.
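A hedged sketch of the pair-management actions named above, using placeholder identifiers (array ID 432, RDF group 10, and a pairs.txt device-pair file are examples); verify exact options against your Solutions Enabler version. The -metro flag on createpair marks the pairs as SRDF/Metro:

```shell
# Add device pairs to an SRDF/Metro group and begin synchronization,
# then remove pairs from SRDF/Metro control.
symrdf -sid 432 -rdfg 10 -f pairs.txt createpair -type R1 -metro -establish
symrdf -sid 432 -rdfg 10 -f pairs.txt removepair
```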
The Array Witness method requires two SRDF groups: one between the R1 array and the witness array, and a second between the R2 array and the witness array.

Note

A Witness group is not allowed to contain devices.
Figure 43 SRDF/Metro Array Witness and groups (the Witness array connects over SRDF links to both the R1 and R2 arrays)

Solutions Enabler checks that the Witness groups exist and are online when carrying out establish or restore operations. SRDF/Metro determines which witness array an SRDF/Metro group is using, so there is no need to specify the Witness.
Remove a vWitness from the configuration. Once removed, SRDF/Metro breaks the connection with the vWitness. You can only remove vWitnesses that are not currently servicing active SRDF/Metro sessions.
Witness failure scenarios

This section depicts various single and multiple failure behaviors for SRDF/Metro when the Witness option (Array or vWitness) is used.

Figure 45 SRDF/Metro Witness single failure scenarios (R1 and R2 sides of the device pair, the Witness Array/vWitness, and the SRDF links/IP connectivity)
The devices must be in the Suspended state in order to perform the deletepair operation. When all the devices in the SRDF/Metro group have been deleted, the group is no longer part of an SRDF/Metro configuration.
NOTICE

The deletepair operation can be used to remove a subset of device pairs from the group. The SRDF/Metro configuration terminates only when the last pair is removed.

Delete one side of an SRDF/Metro configuration

To remove devices from only one side of an SRDF/Metro configuration, use the half_deletepair operation to terminate the SRDF/Metro configuration at one side of the SRDF group.
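A minimal sketch with placeholder identifiers (array ID, group number, and pair file are examples); as noted above, the pairs must be Suspended before deletepair or half_deletepair can run:

```shell
# Suspend the device pairs, then dissolve the SRDF/Metro configuration
# on one side of the SRDF group only.
symrdf -sid 432 -rdfg 10 -f pairs.txt suspend
symrdf -sid 432 -rdfg 10 -f pairs.txt half_deletepair
```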
VDMs, file systems, file system checkpoint schedules, CIFS servers, networking, and VDM configurations into their own separate pools. This feature works for a recovery where the source is unavailable. For recovery support in
The manually initiated failover and reverse operations can be performed using EMC File Auto Recovery Manager (FARM). FARM allows you to automatically failover a selected sync-replicated VDM on a source eNAS system to a destination eNAS system.
CHAPTER 9 Blended local and remote replication This chapter describes TimeFinder integration with SRDF. SRDF and TimeFinder..................160 Blended local and remote replication...
SRDF/AR can be deployed in 2-site or 3-site solutions: In 2-site solutions, SRDF/DM is deployed with TimeFinder. In 3-site solutions, SRDF/DM is deployed with a combination of SRDF/S and TimeFinder.
The time to create the new replicated consistent image is determined by the time that it takes to replicate the deltas.

SRDF/AR 2-site solutions

The following image shows a 2-site solution where the production device (R1) on the primary array (Site A) is also a TimeFinder target device:

Figure 47 SRDF/AR 2-site solution
Allows R2 or R22 devices at the middle hop to be used as TimeFinder source devices. Device-level (TimeFinder) pacing on page 134 provides more information. Note Device-level write pacing is not required in configurations that include Enginuity 5876 and HYPERMAX OS.
R1 and R2 devices in TimeFinder operations on page 160 are met. SRDF and EMC FAST coordination SRDF coordination instructs FAST (HYPERMAX OS) and FAST VP (Enginuity 5876) to factor the R1 site statistics into the move decisions that are made at the R2 site.
Enginuity 5876 With Enginuity 5876, you can enable/disable SRDF/FAST VP coordination on a storage group (symfast associate command), even when there are no SRDF devices in the storage group.
CHAPTER 10 Data Migration This chapter describes data migration solutions. Topics include: Overview......................166 Data migration solutions for open systems environments......... 166 Data migration solutions for mainframe environments........176 Data Migration...
NDM requires a VMAX array running Enginuity 5876 with the required ePack (source array), and an array running HYPERMAX OS 5977.811.784 or higher (target array). Consult with Dell EMC for the required ePack for source arrays running Enginuity 5876. In addition, refer to the NDM support matrix available on eLab Navigator for array operating system version support, host support, and multipathing support for NDM operations.
Simple process for migration:
1. Select the storage group to migrate.
2. Create the migration session.
3. Discover paths to the host.
4. Cutover the storage group to the VMAX3 or VMAX All Flash array.
5. Monitor for synchronization to complete.
6.
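The numbered steps above map onto the Solutions Enabler symdm command. A hedged sketch (the array IDs, storage group name, and exact option spelling are assumptions to verify against your Solutions Enabler documentation):

```shell
# Create the migration session for storage group ProdSG, cut over host
# I/O to the target array, then commit once synchronization completes.
symdm create -sg ProdSG -src_sid 1234 -tgt_sid 5678
symdm cutover -sg ProdSG -src_sid 1234 -tgt_sid 5678
symdm commit -sg ProdSG -src_sid 1234 -tgt_sid 5678
```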
Note

A gkselect file, which lists gatekeeper devices, is recommended. For more information on the gkselect file, refer to the EMC Solutions Enabler Installation and Configuration Guide.

Pre-migration rules and restrictions for Non-Disruptive Migration

In addition to the general configuration requirements of the migration environment, the following conditions are evaluated by Solutions Enabler prior to starting a migration.
Multiple masking views on the storage group using the same initiator group are only allowed if port groups on the target array already exist for each masking view, and the ports in the port groups are selected.

Storage groups must be parent or standalone storage groups. A child storage group with a masking view on the child storage group is not supported.
HYPERMAX OS with minimal disruption to host applications.

NOTICE

Open Replicator cannot copy a volume that is in use by SRDF or TimeFinder.

Open Replicator operations

Open Replicator includes the following terminology:

Control
The recipient array and its devices are referred to as the control side of the copy operation.

Remote

The donor EMC arrays or third-party arrays on the SAN are referred to as the remote array/devices. The Control device is Read/Write online to the host while the copy operation is in progress.
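A hedged sketch of a hot pull with the symrcopy command (the device-pair file name is a placeholder; verify options against your Solutions Enabler version):

```shell
# Create and activate a hot pull: the control (VMAX) devices pull data
# from the remote devices listed in device_pairs.txt while remaining
# Read/Write online to the host.
symrcopy -f device_pairs.txt create -copy -pull -hot
symrcopy -f device_pairs.txt activate
```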
SRDF leg while remote mirroring for protection along the other leg. Once the migration process completes, the concurrent SRDF topology is removed, resulting in a 2-site SRDF topology.
Final 2-site topology After migration, the original primary array is mirrored to a new secondary array. EMC support personnel are available to assist with the planning and execution of your migration projects. Figure 52 Migrating data and removing the original secondary array (R2)
Final 2-site topology After migration, the new primary array is mirrored to the original secondary array. EMC support personnel are available to assist with the planning and execution of your migration projects. Figure 53 Migrating data and replacing the original primary array (R1)
Initial 2-site topology

Migration process

The final topology

EMC support personnel are available to assist with the planning and execution of your migration projects.

Figure 54 Migrating data and replacing the original primary (R1) and secondary (R2) arrays
For mainframe environments, z/OS Migrator provides non-disruptive migration from any vendor storage to VMAX arrays. z/OS Migrator can also migrate data from one VMAX array to another. With z/OS Migrator, you can:
Refer to the z/OS Migrator Product Guide for detailed product information. Volume migration using z/OS Migrator EMC z/OS Migrator is a host-based data migration facility that performs traditional volume migrations as well as host-based volume mirroring. Together, these capabilities are referred to as the volume mirror and migrator functions of z/OS Migrator.
Figure 56 z/OS Migrator dataset migration Thousands of datasets can either be selected individually or wild-carded. z/OS Migrator automatically manages all metadata during the migration process while applications continue to run.
HYPERMAX OS reports the pending error on the next I/O and then the second error. Enginuity reports error conditions to the host and to the EMC Customer Support Center. When reporting to the host, Enginuity presents a unit check status in the status byte to the channel whenever it detects an error condition such as a data check, a command reject, an overrun, an equipment check, or an environmental error.
EMC Customer Support Center is performing service/maintenance operations on the system.

REMOTE FAILED

The Service Processor cannot communicate with the EMC Customer Support Center.

Environmental errors

The following table lists the environmental errors in SIM format for HYPERMAX OS 5977 or higher.
- 247A: AC power lost to Power Zone A or B.
- 047B (MODERATE): Drop devices after RDF Adapter dropped (E47B).
- 01BA, 02BA, 03BA (ACUTE): Power supply or enclosure SPS problem (24BA).
- E47E: An SRDF link recovered from failure. The SRDF link is operational.
- 047F (REMOTE SERVICE): The Service Processor successfully called the EMC Customer Support Center (called home) to report an error (147F).
- 0488 (SERIOUS): Replication Data Pointer Meta Data Usage reached 90-99% (E488).
SIM presented against unrelated resource

An SRDF Group is lost (no links)

Event messages

The VMAX array also reports events to the host and to the service processor. These events are:
Mainframe Error Reporting The mirror-2 volume has synchronized with the source volume. The mirror-1 volume has synchronized with the target volume. Device resynchronization process has begun. On z/OS, these events are displayed as IEA480E Service Alert Error messages. They are formatted as shown below: Figure 60 z/OS IEA480E service alert error message format (mirror-2 resynchronization) *IEA480E 0D03,SCU,SERVICE ALERT,MT=3990-3,SER=, REFCODE=E461-0000-6200...
APPENDIX B Licensing This appendix provides an overview of licensing on arrays running HYPERMAX OS. Topics include: eLicensing......................188 Open systems licenses..................190 Mainframe licenses................... 198 Licensing...
For more information on eLicensing, refer to EMC Knowledgebase article 335235 on the EMC Online Support website. You obtain license files from EMC Online Support, copy them to a Solutions Enabler or a Unisphere for VMAX host, and push them out to your arrays. The following figure illustrates the process of requesting and obtaining your eLicense.
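On a Solutions Enabler host, license files are managed with the symlmf command. A minimal sketch (the license file name and array ID are placeholders, and the option names are assumptions to verify against your Solutions Enabler documentation):

```shell
# Push an eLicense file to the array, then list installed entitlements.
symlmf add -type emclm -sid 432 -file VMAX_license.xml
symlmf list -type emclm -sid 432
```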
Unisphere for VMAX, Mainframe Enablers, Transaction Processing Facility (TPF), or the IBM i platform console. In addition, you can also use the Transformation eLicensing Solution (TeS) to view all of your EMC entitlements from one location. For more information on TeS, refer to the EMC Software Licensing Central page at https://community.emc.com/
The Total Productivity Pack includes the Advanced, Base (as part of Advanced), Local Replication, and Remote Replication Suites. License suites The following table lists the license suites available in an open systems environment.
Table 49 VMAX3 license suites for open systems environment License suite Available for multi-tier or single tier Includes Allows you to With the command Base Suite Multi-tier symconfigure HYPERMAX OS Virtualize an eDisk for encapsulation Priority Controls Use VLUN to OR-DM migrate from an encapsulated
Create time windows symoptmz FAST symtw SL Provisioning Workload Planner symfast Add disk group Database Storage tiers to FAST Analyzer policies Unisphere for File Enable FAST Set the following FAST parameters:
Table 49 VMAX3 license suites for open systems environment (continued) License suite Available for multi-tier or single tier Swap Non-Visible Devices Allow Only Swap User Approval Mode Maximum Devices to Move Maximum Simultaneous
Convert non-File SRDF devices to Compatible Peer SRDF Add SRDF mirrors to devices in Adaptive Copy mode Set the dynamic-SRDF capable attribute on devices Create SAVE devices
Table 49 VMAX3 license suites for open systems environment (continued) License suite Available for multi-tier or single tier symrdf Create dynamic SRDF pairs in Asynchronous mode Set SRDF pairs into Asynchronous mode symconfigure Add SRDF mirrors
Encrypt data and protect it against unauthorized access unless valid keys are provided. This prevents data from being accessed and provides a mechanism to quickly cryptoerase data. FAST.X Perform FAST.X operations: symdisk
Table 50 Individual licenses for open systems environment (continued) License Allows you to With the command symcfg Monitor and report eDisk state and track information Manage external disks, including add, remove, drain, activate operations Advanced Suite is the prerequisite to using this license.
License pack Entitlements in license Included features file Total Efficiency Pack SYMM_VMAX_ENGINUI HYPERMAX OS SYMM_VMAX_FAST_VP OR-DM SYMM_VMAX_UNISPHE Unisphere for VMAX FAST VP SYMM_VMAX_TIMEFIN SRDF SRDF/Synchronous SYMM_VMAX_SRDF_RE PLICATION SRDF/Asynchronous SnapVX
Table 52 License suites for mainframe environment License pack Entitlements in license Included features file TimeFinder/Clone Individual license The following feature has an individual license: Data Protector for z Systems Individual license