Migrations also often involve applications that cannot be taken offline, even briefly for cutover to a new storage array. Dell EMC’s VMAX Non-Disruptive Migration, or NDM, allows customers to perform online data migrations that are simple as well as completely non-disruptive to the host and application.
List – Shows a list of migrations to or from a specified array, with the current status for each.

BENEFITS

• Allows migration from VMAX to VMAX3 or VMAX All Flash with hosts and applications completely online
• Designed for ease of use, with control operations that automate the setup and configuration of the migration environment
...
NDM performs the data migration and device ID swap without the host being aware. The path management changes appear to the host as either the addition or the removal of paths to the existing source device. To the host and application, the device being accessed never changes, and access to the device is maintained throughout the entire migration process.
BOOT FROM SAN SUPPORT

NDM supports hosts that boot directly from the VMAX. The host boot BIOS must be updated to point to the target volume so that when the host is later rebooted, it finds the volume containing the operating system. For details on boot drive configuration, please refer to the vendor-specific HBA management or BIOS guides.
HOW NDM HANDLES MASKING GROUPS AND VIEWS

When NDM sessions are created, NDM configures storage groups (SGs), initiator groups (IGs), port groups (PGs), and masking views (MVs) on the target array that exactly match the group names on the source array. Both initiator groups and port groups can exist in multiple masking views, so these groups are reused when applicable.
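As an example, once a migration has been created, the masking view that NDM built or reused on the target array can be inspected with a standard Solutions Enabler masking command; the array ID and view name below are taken from the walkthrough later in this paper:

C:\>symaccess -sid 0129 show view lcseb246_mv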
For more information on the process to enable SRDF on the array for use with NDM, please see KB article #495041 (web link below).

https://support.emc.com/kb/495041

Prior to running the create operation, the following is required:

• The host with the application being migrated must be zoned to the VMAX3 or VMAX All Flash.
Please ensure that the environment you are planning to migrate has a match in the support matrix prior to starting the migration. The E-Lab Advisor (ELA) application, available at the web link below, provides a way for Dell EMC employees and authorized partners and customers to upload host grab files to validate the environment against the NDM support matrix and generate a Migration Readiness report.
EXAMINE THE PRE-MIGRATION ENVIRONMENT

Prior to any NDM operation, the application is running in production against the source VMAX array. A VMAX All Flash, which will be the target of the migration in this example, has been attached physically, zoned to the host, and discovered by Solutions Enabler.
There are four 50 GB data volumes presented to the host from the source array (volumes 0158 to 015B). They are non-SRDF volumes.

C:\>syminq

Device                        Product                       Device
----------------------------- ----------------------------- ---------------
Name                Type      Vendor    Ser Num             Cap (KB)
----------------------------- ----------------------------- ---------------
\\.\PHYSICALDRIVE0  VRAID               0533 255723B2...
Note: Workload Planner (WLP) is a Unisphere utility that can be used to determine the ideal target ports to use for an NDM migration. If WLP is used, it must be run from a control host that has access to both arrays. It cannot be run from Embedded Unisphere for VMAX.

SET UP THE ENVIRONMENT

Once the required pre-migration tasks are completed, the environment can be set up using NDM.
C:\>symdm environment -src_sid 3184 -tgt_sid 0129 -setup

A DM 'Environment Setup' operation is in progress. Please wait...

    Analyze Configuration..........Started.
    Source SID:000195703184
    Target SID:000197800129
    Analyze Configuration..........Done.
    Setup Configuration..........Started.
    Setup Configuration..........Done.

The DM 'Environment Setup' operation successfully executed.

Once the setup is complete, the migration sessions can be created.

Note: NDM environment setup will create RDF links between the source and target arrays using one port per director for each zoned link.
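Before creating the sessions, the target array can be checked with the validate operation referenced below. A hedged sketch of the command, assuming validate takes the same arguments as the create example that follows:

C:\>symdm validate -src_sid 3184 -tgt_sid 0129 -sg lcseb246_sg -nop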
The validate operation on lcseb246_sg confirms that the devices and masking can be configured on the target array. The create operation can now be run to create the migration sessions:

C:\>symdm create -src_sid 3184 -tgt_sid 0129 -sg lcseb246_sg -nop

A DM 'Create' operation is in progress for storage group 'lcseb246_sg'.
    Created

The sessions can be viewed, in this case from the source VMAX. No user data has been copied between the source and target devices, but SCSI reservations put in place by PowerPath, the host LVM, or cluster software are copied to the target.
PERFORM A HOST RESCAN

After the create operation completes, the systems administrator must issue a host rescan to allow the host to discover the paths to the newly created devices. The host rescan is OS-specific and should also include a rescan using the host multipathing software if that must be performed separately from the host rescan, as with PowerPath.
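As an illustration only, on the Windows host used in this example the bus rescan and the PowerPath path check might look like the following; DISKPART's rescan and the powermt commands are standard tools rather than steps prescribed by this guide:

C:\>diskpart
DISKPART> rescan
DISKPART> exit

C:\>powermt display dev=all

In the path listing below, FAs 7e:00 and 8e:00 are the two source-array paths; the other four are the newly discovered target paths.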
  HW Path   I/O Paths                Interf.  Mode    State  Q-IOs  Errors
==============================================================================
  2 port2\path0\tgt6\lun4   c2t6d4   1d:61    active  alive
  2 port2\path0\tgt5\lun4   c2t5d4   1d:60    active  alive
  2 port2\path0\tgt4\lun3   c2t4d3   7e:00    active  alive
  1 port1\path0\tgt6\lun4   c1t6d4   2d:61    active  alive
  1 port1\path0\tgt5\lun4   c1t5d4   2d:60    active  alive
  1 port1\path0\tgt4\lun3   c1t4d3   8e:00    active...
Source Array        : 000195703184
Target Array        : 000197800129
Migration State     : CutoverReady
Total Capacity (GB) : 200.0
Done (%)            : N/A

Source Configuration: OK

    Storage Groups (1): OK
        Name   : lcseb246_sg
        Status : OK
        ----------------------------------------------------------------
        00158:0015B

    Masking Views (1): OK
        Masking View Name     Initiator Group       Port Group...
        Status : OK
        ----------------------------------------------------------------
        00013:00016

Target Configuration: OK

    Storage Groups (1): OK
        Name   : lcseb246_sg
        Status : OK
        ----------------------------------------------------------------
        00013:00016

    Masking Views (1): OK
        Masking View Name     Initiator Group       Port Group            Status
        --------------------- --------------------- --------------------- ---------
        lcseb246_mv           lcseb246_ig           lcseb246_pg

    Initiator Groups (1): OK
        Name   : lcseb246_ig
        Status : OK...
Device Pairs (4): OK

    Source  Target  Status  Dev Status
    ------  ------  ------  ----------
    00158   00013
    00159   00014
    0015A   00015
    0015B   00016

All of the group, view, and pair information can also be listed individually. As an example, by pairs and then by storage group:

C:\>symdm -sid 0129 -sg lcseb246_sg list -v -pairs_info -detail

Symmetrix ID : 000197800129...
The create operation automatically configures matching volumes on the target array. These volumes will be the same size and configuration, though they will most likely not have the same VMAX volume numbers. Following the create operation, the four new volumes on the target array are 0013 through 0016.
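To confirm one of the newly configured target volumes, the symdev show command used elsewhere in this paper can be pointed at the new device numbers; for example:

C:\>symdev show 0013 -sid 0129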
Device Capacity
    Cylinders          : 54614
    Tracks             : 819210
    512-byte Blocks    : 104858880
    MegaBytes          : 51201
    KiloBytes          : 52429440

Geometry Limited : No

Device External Identity
    Device WWN : 60000970000195703184533030313538

Target device IDs: The native and effective target device IDs are not the same on the target device (0013). Device 0013 has a unique native device ID, but its effective ID is the same as that of device 0158.
Device Capacity
    Cylinders          : 27307
    Tracks             : 409605
    512-byte Blocks    : 104858880
    MegaBytes          : 51201
    KiloBytes          : 52429440

Geometry Limited : No

Device External Identity
    Device WWN : 60000970000195703184533030313538

CANCEL A MIGRATION

At any point before a commit operation is run on a storage group, a migration that has not been committed may be canceled. In this example, the cancel is occurring before the cutover.
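A hedged sketch of the cancel command, assuming the cancel action follows the same symdm syntax as the create and cutover examples in this paper:

C:\>symdm -sg lcseb246_sg cancel -sid 0129 -nop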
PERFORM A CUTOVER TO THE TARGET ARRAY

Prior to this step, the migration session that was canceled in the last section was simply created again. The host rescan has been performed and the devices are CutoverReady:

C:\>symdm -sid 0129 -sg lcseb246_sg list -v -detail -pairs_info

Symmetrix ID  : 000197800129
Storage Group : lcseb246_sg...
Figure 5. Cutover

C:\>symdm -sg lcseb246_sg cutover -sid 0129 -nop

A DM 'Cutover' operation is in progress for storage group 'lcseb246_sg'. Please wait...

    Analyze Configuration..........Started.
    Source SID:000195703184
    Target SID:000197800129
    Analyze Configuration..........Done.
    Cutover............Started.
    Cutover............Done.

The DM 'Cutover' operation successfully executed for storage group 'lcseb246_sg'.
C:\>symdm -sid 0129 -sg lcseb246_sg list -v -detail -pairs_info

Symmetrix ID        : 000197800129
Storage Group       : lcseb246_sg
Source Array        : 000195703184
Target Array        : 000197800129
Migration State     : Migrating
Total Capacity (GB) : 200.0
Done (%)            : 37

Device Pairs (4): OK

    Source  Target  Status  Dev...
Figure 6. CutoverSynchronized

Following a cutover, the two paths to the source array are inactive (FAs 7e:00 and 8e:00) and the four paths to the target are active. In PowerPath, the terms used are "dead" and "alive".

Note: The PowerPath mode is "active" for all paths. This is a PowerPath term and is unrelated to the active and inactive path states in NDM terminology.
EXAMINE THE DEVICE IDENTITIES FOLLOWING A CUTOVER

The device IDs used on the source and target devices have not changed following the cutover operation. The target devices are still using the effective WWN of the source devices. The source devices still have the same native and effective IDs.

Source device:

C:\>symdev show 0158 -sid 3184

Device Physical Name...
C:\>symdev show 013 -sid 0129

Device Physical Name      : Not Visible
Device Symmetrix Name     : 00013
Device Serial ID          : N/A
Symmetrix ID              : 000197800129

Number of RAID Groups     :

Encapsulated Device       : No
Encapsulated WWN          : N/A
Encapsulated Device Flags : None
Encapsulated Array ID     : N/A
Encapsulated Device Name  : N/A...
REVERT TO THE SOURCE ARRAY

The revert operation may take some time to run. As the revert runs, the host will discover that the paths to the source array are active again. The VMAX monitors this rediscovery and waits for it to complete before proceeding.
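A hedged sketch of the CLI form of a revert, assuming the cancel action accepts the -revert option described in this paper:

C:\>symdm -sg lcseb246_sg cancel -revert -sid 0129 -nop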
COMMIT A MIGRATION

When the data copy is complete, the migration can be committed. The commit operation completes the migration by removing the migrated application resources from the source array and releasing the temporary system resources used for the migration. To commit, the state of the migration sessions must be CutoverSync or CutoverNoSync.
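The commit output below begins mid-stream in this copy of the document. Based on the other symdm examples in this paper, the command that produces it would plausibly be the following (a hedged sketch):

C:\>symdm -sg lcseb246_sg commit -sid 0129 -nop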
    Remove Data Replication........Done.

The DM 'Commit' operation successfully executed for storage group 'lcseb246_sg'.

PERFORM A HOST RESCAN

After the commit operation completes, the systems administrator can issue a host rescan to allow the host to clean up the dead paths left by the removed paths to the source array.
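On a PowerPath host, for example, the dead paths can be removed after the bus rescan; powermt check is a standard PowerPath command shown here as an illustration, not a step mandated by this guide:

C:\>powermt check
C:\>powermt display dev=all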
DIF1 Flag             : False
Gatekeeper Device     : False
AS400_GK              : False
Host Cache Registered : False
Optimized Read Miss   : N/A

Front Director Paths (4):

    -----------------------------------------------------------------------
                             POWERPATH   DIRECTOR   PORT
    --------------------     ----------  ---------  ----  ---------
    PdevName                 Type        Type  Num  VBUS  TID  SYMM  Host
    -----------------------------------------------------------------------
    \\.\PHYSICALDRIVE11                  02D:028               RW...
Attached VDEV TGT Device  : N/A
Vendor ID                 : EMC
Product ID                : SYMMETRIX
Product Revision          : 5876
Device WWN                : 60000970000195703184533030313538
Device Emulation Type     : FBA
Device Defined Label Type : N/A
Device Defined Label      : N/A
Device Sub System Id      : 0x0001
Cache Partition Name      : DEFAULT_PARTITION...
PERFORMING NDM OPERATIONS USING UNISPHERE

Prior to any NDM operation, the application is running in production against the source VMAX array. A VMAX All Flash, which will be the target of the migration in this example, has been attached physically, zoned to the host, and discovered by Unisphere.
There are four 50 GB data volumes presented to the host from the source array (volumes 0158 to 015B). They are non-SRDF volumes.
Note: The Windows disk numbers will change in the output due to changes in the test environment.
Note: Workload Planner (WLP) is a Unisphere utility that can be used to determine the ideal target ports to use for an NDM migration. If WLP is used, it must be run from a control host that has access to both arrays. It cannot be run from Embedded Unisphere for VMAX.
Prior to running the create, the target array can be validated to ensure that it has the resources required to configure the migration sessions. Click on the storage group to select it, click the double angle brackets, and click Manage. Select the remote VMAX array from the "Symmetrix" pulldown menu. Select the desired SRP from the "SRP" pulldown menu.
Click the Create Data Migration radio button and click Next.

Note: If compression is available on a VMAX All Flash target array, a "Compression" checkbox will appear on this screen.
View the summary. Click the angle bracket next to "Add to Job List" and click "Run Now". Confirm that the migration session was created.
Figure 9. Created

Note: During an NDM migration, the source of the migration will be an R2 or an R21 device (if there is existing SRDF DR replication from the source device) and the target will be an R1 device. This differs from basic SRDF operations and is required to allow DR protection during a migration using a cascaded SRDF configuration.
PERFORM A HOST RESCAN

After the create operation completes, the systems administrator must issue a host rescan to allow the host to discover the paths to the newly created devices. The host rescan is OS-specific and should also include a rescan using the host multipathing software if that must be performed separately from the host rescan, as with PowerPath.
Go to Storage > Migration again to view the migration session and a high-level view of its properties and state.

Note: During an NDM migration, the source of the migration will be an R2 or an R21 device (if there is existing SRDF DR replication from the source device) and the target will be an R1 device.
The create operation automatically configures matching volumes on the target array. These volumes will be the same size and configuration, though they will most likely not have the same VMAX volume numbers. Following the create operation, the four new volumes on the target array are 0166 through 0169.
Target Device IDs: The native and effective target device IDs are not the same on the target device (001D). Device 001D has a unique native device ID, but its effective ID is the same as device 0158. This is required at this stage of the migration because all device paths are active.
CANCEL A MIGRATION

At any point before a commit operation is run on a storage group, a migration that has not been committed may be canceled. In this example, the cancel is occurring before the cutover. This operation does not require the revert flag because processing has not moved to the target array.
Click the angle bracket next to "Add to Job List" and click Run Now.

Note: The "Revert" option will have a checkbox when a revert operation is applicable. It is not required before running a cancel operation while the SG is in a CutoverReady state.
PERFORM A RESCAN FOLLOWING A CANCEL

Following the cancel operation, the host paths to the target array are no longer available. The host systems administrator should perform a rescan to remove the dead paths to the target array.

PERFORM A CUTOVER TO THE TARGET ARRAY

Prior to this step, the migration session that was cancelled in the last step was simply created again.
Click the angle bracket next to "Add to Job List" and click "Run Now".
A cutover operation moves the target devices out of pass-through mode, initiates data synchronization from the source to the target, and makes the host paths to the source array inactive so that all I/Os are serviced by the target array.
Figure 11. Cutover

When the cutover operation completes, the data copy begins. The session is in a Migrating state and remains in that state until the data copy completes or other action is taken. Once the copy completes, the volumes are in a CutoverSync state.
Figure 12. CutoverSynchronized

Following a cutover, the two paths to the source array are inactive (FAs 7e:00 and 8e:00) and the four paths to the target are active. In PowerPath, the terms used are "dead" and "alive".

Note: The PowerPath mode is "active" for all paths. This is a PowerPath term and is unrelated to the active and inactive path states in NDM terminology.
Once the data copy is done, the migration for that storage group can be completed with a commit, or host access can be reverted back to the source array and the migration cancelled.

EXAMINE THE DEVICE IDENTITIES FOLLOWING A CUTOVER

The device IDs used on the source and target devices have not changed following the cutover operation.
Target device:

REVERT TO THE SOURCE ARRAY

Because the migration is not permanent until the commit operation is run, the migration can still be cancelled and reverted to the source array after a cutover. To revert back to the source array following a cutover, a cancel is run with the revert option. The revert option moves processing back to the source array, and the cancel removes all of the target-side entities created for the migration.
As the revert is running, the host will discover that the paths to the source array are active again. This is monitored by the VMAX, which will wait for the rediscovery before proceeding. To cancel the migration, click the double angle brackets and click Cancel Migration.
PERFORM A RESCAN FOLLOWING A CANCEL REVERT

Note: Following the cancel -revert operation, the host paths to the target array are no longer available. The paths to the source array should be returned to the original configuration to avoid data unavailability. A host HBA rescan should be undertaken to ensure application stability.
COMMIT A MIGRATION

When the data copy is complete, the migration can be committed. The commit operation completes the migration by removing the migrated application resources from the source array and releasing the temporary system resources used for the migration. Once the commit is complete, replication between the source and target arrays is terminated. The source devices are no longer visible to the host because the masking has been removed.
To commit the migration, click the angle bracket next to "Add to Job List" and click Run Now. After the commit operation succeeds, the migration is complete. Because the commit completes the migration and removes all of the source-side masking, there are no longer any active paths to the source array.
The four remaining active paths are the four paths to the target array.

COMPARING THE PRE AND POST DEVICE IDENTITIES FOLLOWING A COMMIT

Following the commit operation, each device presents the opposite device ID. The source device now presents the target device ID as its external identity, and the target presents the source device ID as its external identity.
In the case where the device being migrated has an odd cylinder count, the GCM flag will also be set. This flag is not currently user-configurable. Expanding such a device requires the LUN to be taken offline and adjusted by contacting Dell EMC support. This is to be rectified in the next HYPERMAX OS release, in which the flag will be user-adjustable as long as the LUN is not in a replication session.
Click the angle bracket next to "Add to Job List" and click Run Now.
REMOVE THE ENVIRONMENT

Running the environment remove operation takes down the replication pathway configured by the environment setup operation and removes the resources that were configured to support NDM on the source and target arrays. After a successful environment remove, the only NDM operation that can be run is a new environment setup.
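A hedged sketch of the CLI equivalent, assuming the remove action mirrors the -setup syntax shown earlier in this paper:

C:\>symdm environment -src_sid 3184 -tgt_sid 0129 -remove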
From the Migration Environments screen, select the environment you want to remove and select Remove. Once the task has completed, the environment is no longer available for migrations and will need to be set up again should further migrations be required.

ADD DISASTER RECOVERY TO THE TARGET ARRAY

Prior to a migration, many environments contain both local SnapVX sessions and SRDF remote replication sessions.
The diagram shows what a migration with existing SRDF replication looks like after a create operation has been performed. This configuration closely resembles a regular cascaded SRDF environment. The main difference is that in a created state, the host can access the R1 or the R21.
APPENDIX A: HOST MULTIPATHING SOFTWARE NOTES AND SETTINGS

This section describes best practices for using multi-pathing software in an NDM environment. Refer to the NDM Support Matrix for the latest operating system and multi-pathing software combinations.

AIX NATIVE MULTI-PATHING SOFTWARE

For native multipathing on AIX, best practice is to use the following settings for MPIO (a sample command applying them follows the list):

algorithm = round_robin (other algorithms may also be used)
check_cmd = inquiry...
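As an illustration only, these attributes might be applied with the standard AIX chdev command; hdisk1 is a hypothetical device name, the check_cmd attribute depends on the installed EMC ODM filesets, and the -P flag defers the change to the next reboot if the device is busy:

# chdev -l hdisk1 -a algorithm=round_robin -a check_cmd=inquiry
# chdev -P -l hdisk1 -a algorithm=round_robin -a check_cmd=inquiry    (if the device is in use)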
Use default MPIO settings with the following parameters enabled (a sample PowerShell command follows the list):

PathVerifyEnabled – Enable for optimal results with path discovery.
PathVerificationPeriod – Set a time in seconds for automatic path detection. Dell EMC recommends setting it to the lowest allowed value, between 10 and 30 seconds.
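On Windows Server with the native MPIO feature, these parameters can be set with the standard MPIO PowerShell cmdlets; the 30-second period below is an example value within the range stated above, not a requirement from this guide:

PS C:\> Set-MPIOSetting -NewPathVerificationState Enabled -NewPathVerificationPeriod 30
PS C:\> Get-MPIOSetting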
During NDM, after the 'symdm create' CLI command is issued and the new paths have been discovered by the host, all new native devices on the target side will have reserve_policy set to single_path. This is PowerPath's default setting for the VMAX array.
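If a different reservation policy is required, for example in a clustered environment, it can be changed with the standard AIX chdev command; hdisk2 and no_reserve are illustrative values, not recommendations from this guide:

# chdev -l hdisk2 -a reserve_policy=no_reserve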
APPENDIX C: HOST REBOOTS DURING NDM PROCESS

The following outlines the outcomes of a host reboot, either planned or unplanned, during each of the discrete steps of the NDM process.

Note: Manual verification of host paths following a reboot of any kind should be undertaken before continuing with a migration.
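As an illustration of such a verification on the Windows/PowerPath host used in this paper's examples, the migration state and path states could be checked with commands shown earlier:

C:\>symdm -sid 0129 -sg lcseb246_sg list -v -detail
C:\>powermt display dev=all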