Silicon Graphics InfiniteStorage 4000 Series Configuring and Maintaining a Storage Array

SGI InfiniteStorage 4000 Series and 5000 Series
Configuring and Maintaining a Storage Array
(ISSM 10.86)
 
 
 
 
 
 
007-5882-002
April 2013



Summary of Contents for Silicon Graphics InfiniteStorage 4000 Series

  • Page 1 SGI InfiniteStorage 4000 Series and 5000 Series Configuring and Maintaining a Storage Array (ISSM 10.86)             007-5882-002 April 2013...
  • Page 2 The information in this document supports the SGI InfiniteStorage 4000 series and 5000 series storage systems (ISSM 10.86). Refer to the table below to match your specific SGI InfiniteStorage product with the model numbers used in this document.   SGI Model #...
  • Page 3 Copyright information Copyright © 1994–2012 NetApp, Inc. All rights reserved. Printed in the U.S.A. No part of this document covered by copyright may be reproduced in any form or by any means— graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.
  • Page 4 Trademark information NetApp, the NetApp logo, Network Appliance, the Network Appliance logo, Akorri, ApplianceWatch, ASUP, AutoSupport, BalancePoint, BalancePoint Predictor, Bycast, Campaign Express, ComplianceClock, Cryptainer, CryptoShred, Data ONTAP, DataFabric, DataFort, Decru, Decru DataFort, DenseStak, Engenio, Engenio logo, E-Stack, FAServer, FastStak, FilerView, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexSuite, FlexVol, FPolicy, GetSuccessful, gFiler, Go further, faster, Imagine Virtually Anything, Lifetime Key Management, LockVault, Manage ONTAP, MetroCluster, MultiStore, NearStore, NetCache, NOW (NetApp on the Web), Onaro, OnCommand,...
  • Page 5: Table Of Contents

    Table of Contents Chapter 1 About the Command Line Interface ........1 Structure of a CLI Command .
  • Page 6 Drive Security with Full Disk Encryption ......40 Volume Groups ..........42 Disk Pools.
  • Page 7 Chapter 5 Using the Snapshot (Legacy) Premium Feature ......81 How Snapshot (Legacy) Works ........81 About Scheduling Snapshots (Legacy) .
  • Page 8 Deleting a Snapshot Consistency Group ......112 Creating a Snapshot Volume ........113 Resuming a Consistency Group Snapshot Volume .
  • Page 9 Data Replication ..........143 Link Interruptions or Secondary Volume Errors.
  • Page 10 Creating a Volume Copy ......... 161 Enabling the Volume Copy Premium Feature .
  • Page 11 Performance Tuning ..........185 Monitoring the Performance .
  • Page 12 Appendix B Example Script Files ..........247 Configuration Script Example 1.
  • Page 13: About The Command Line Interface

    About the Command Line Interface The command line interface (CLI) is a software application that provides a way for installers, developers, and engineers to configure and monitor storage arrays. Using the CLI, you can run commands from an operating system prompt, such as the DOS C: prompt, a Linux operating system path, or a Solaris operating system path.
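    As a minimal sketch of running the CLI from an operating system prompt (the IP address is a hypothetical placeholder; show storageArray lunMappings is one of the script commands listed later in this guide), an invocation might look like this:
    c:\...\smX\client>smcli 123.45.67.88 -c "show storageArray lunMappings;"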
  • Page 14: Structure Of A Cli Command

    Show the alert notification settings for storage arrays that are currently configured in the Enterprise Management Window. Direct the output to a standard command line display or to a named file. Structure of a CLI Command – The CLI commands are in the form of a command wrapper and elements embedded into the wrapper.
  • Page 15: Cli Command Wrapper Syntax

    To end an interactive mode session, type the operating system-specific command for terminating a program, such as Control-C on the UNIX operating system or the Windows operating system. Typing the termination command (Control-C) while in interactive mode turns off interactive mode and returns operation of the command prompt to an input mode that requires you to type the complete SMcli string.
  • Page 16 -w wwID)  SMcli (-n storage-system-name [-f scriptfile]  [-o outputfile] [-R (admin | monitor)] [-p password] [-e] [-S] [-quick] SMcli -a email: email-address [host-name-or-IP-address1  [host-name-or-IP-address2]]  [-n storage-system-name | -w wwID | -h host-name]  [-I information-to-include] [-q frequency] [-S] SMcli -x email: email-address [host-name-or-IP-address1 ...
  • Page 17: Command Line Terminals

    Command Line Terminals – host-name-or-IP-address: Specifies either the host name or the Internet Protocol (IP) address (xxx.xxx.xxx.xxx) of an in-band managed storage array or an out-of-band managed storage array. If you are managing a storage array by using a host through in-band...
  • Page 18 Terminal Definition -f (lowercase) Specifies a file name that contains script commands that you want to run on the specified storage array. The -f terminal is similar to the -c terminal in that both terminals are intended for running script commands. The -c terminal runs individual script commands.
  • Page 19 Terminal Definition Specifies a file name for all output text that is a result of running the script commands. Use the -o terminal with these terminals:   If you do not specify an output file, the output text goes to standard output (stdout).
  • Page 20 Terminal Definition Reduces the amount of time that is required to run a single-line operation. -quick An example of a single-line operation is the recreate snapshot volume command. This terminal reduces time by not running background processes for the duration of the command. Do not use this terminal for operations that involve more than one single-line operation.
  • Page 21: Alert Severities Commands

    -w (lowercase): Specifies the WWID of the storage array. This terminal is an alternate to the -n terminal. Use the -w terminal with the -d terminal to show the WWIDs of the known storage arrays. The file content has this format: storage-system-name world-wide-ID IP-address1 IP-address2. -X (uppercase)...
  • Page 22: Autosupport Bundle Collection Commands

    This command sends out a test alert to the Windows Event Log and all configured syslog receivers. AutoSupport Bundle Collection – AutoSupport (ASUP) is a feature that enables storage arrays to automatically collect support data into a customer support bundle and send the data to Technical Support. Technical Support can then perform remote troubleshooting and problem analysis...
  • Page 23 Places less burden on payload and transmission on the messages originating — from Event ASUP messages. Weekly:  Sent once every week at times that do not impact storage array operations. — Includes configuration and system state information. — The storage management software automatically assigns the schedule for each storage array it has discovered.
  • Page 24: Disable Autosupport At The Emw Level Smcli Version

    ASUP Log The ASUP log file has a detailed list of events encountered during delivery of the ASUP messages. The ASUP log provides information about status, history of transmission activity, and any errors encountered during delivery of the ASUP messages. The log file is available for all ASUP-enabled storage arrays. The archived log filename is ASUPMessages.n, where n is an integer from 1 to 5.
  • Page 25: Set Storage Array Autosupport Bundle Disable

    Set Storage Array AutoSupport Bundle Disable – This command turns off the AutoSupport (ASUP) bundle collection and transmission for the storage array. You can run this version of the command from the script editor or in a script file. Syntax: set storageArray autoSupport disable. Parameters: None.
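    A minimal sketch of embedding this script command in the CLI wrapper (the addresses are placeholders) might look like this:
    c:\...\smX\client>smcli 123.45.67.88 123.45.67.89 -c "set storageArray autoSupport disable;"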
  • Page 26: Naming Conventions

    Naming Conventions – Use the following guidelines when creating names: Names can have a maximum of 30 characters. You can use any combination of alphanumeric characters, hyphens, and underscores for the names of the following components: storage arrays, host groups, ...
  • Page 27: Formatting Cli Commands

    When you enter a script command that requires a name, the script engine looks for a name that starts with an alphabetic character. The Script Engine might not recognize the following names: Names that are only numbers, such as 1 or 2 ...
  • Page 28: Formatting Rules For Script Commands

    Insert one caret (^) before each special script character when used within a string literal in a script command. For example, to change the name of a storage array to FINANCE_|_PAYROLL, enter the following string: -c "set storageArray userLabel=\"FINANCE_^|_PAYROLL\";" Formatting Rules for Script Commands – Syntax unique to a specific script command is explained in the Notes section at the end of each script command description.
  • Page 29: Formatting Cli Commands In Windows Powershell

    to specify the ID of the drive tray, set the ID of the drawer to 0, and specify the ID of the slot in which a drive resides. Separate the ID values with a comma. If you enter more than one set of ID values, separate each set of values with a space. Enclose the set of values in parentheses.
  • Page 30: Usage Examples

    Enclose the script command in single quotation marks (' '). Double quotation marks that are part of a name, file path, or value must have a backslash before each double quotation mark character (\"). Following is an example of a CLI command to create a storage array name in the Windows PowerShell.
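    As a hedged sketch that follows these rules (the array label Engineering and the path to SMcli.exe are placeholder assumptions), such a command might look like this:
    PS C:\...\smX\client> .\SMcli.exe 123.45.67.88 123.45.67.89 -c 'set storageArray userLabel=\"Engineering\";'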
  • Page 31 This example shows how to delete an existing volume and create a new volume on a storage array. The existing volume name is Stocks_<_Bonds. The new volume name is Finance. The controller host names are finance1 and finance2. The storage array is protected, requiring the password TestArray.
  • Page 32: Exit Status

    If you want to know the IP address of each storage array in the configuration, add the -i terminal to the command. SMcli -d -i Exit Status This table lists the exit statuses that might be returned and the meaning of each status. Status Value Meaning The command terminated without an error.
  • Page 33 Status Value Meaning The command was not available. The device was not in the configuration file. An error occurred while updating the configuration file. An unknown host error occurred. The sender contact information file was not found. The sender contact information file could not be read. The userdata.txt file exists.
  • Page 34 Exit Status...
  • Page 35: About The Script Commands

    About the Script Commands You can use the script commands to configure and manage a storage array. The script commands are distinct from the command line interface (CLI) command wrappers. You can enter individual script commands, or you can run a file of script commands. When you enter an individual script command, you embed the script command in a CLI command wrapper.
  • Page 36: Structure Of A Script Command

    Structure of a Script Command – All script commands have the following structure: command operand-data (statement-data). command identifies the action to be performed. operand-data represents the objects associated with a storage array that you want to configure or manage. statement-data provides the information needed to perform the command.
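    For illustration (the volume name is a placeholder): in the script command set volume ["Engineering_1"] modificationPriority=high; the command is set, the operand-data is volume ["Engineering_1"], and the statement-data is modificationPriority=high.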
  • Page 37: Synopsis Of The Script Commands

    Object Type and Identifier: snapshot (legacy) – volume user label; storageArray – not applicable; tray – tray ID; volume – volume user label or volume World Wide Identifier (WWID) (set command only); volumeCopy – target volume user label and, optionally, the source volume user label; volumeGroup – user label. Valid characters are alphanumeric, a hyphen, and an underscore.
  • Page 38 Syntax Description check object  Starts an operation to report on errors in the object, which is a synchronous operation. {statement-data} clear object  Discards the contents of some attributes of an object. This operation is destructive and cannot be reversed. {statement-data} create object ...
  • Page 39: Recurring Syntax Elements

    Syntax and Description: resume object – Starts a suspended operation. The operation starts where it left off when it was suspended. revive object – Forces the object from the Failed state to the Optimal state. Use this command only as part of an error recovery procedure.
  • Page 40 Table 4 Recurring Syntax Elements Recurring Syntax Syntax Value raid-level (0 | 1 | 3 | 5 | 6) repository-raid-level (1 | 3 | 5 | 6) capacity-spec integer-literal [KB | MB | GB | TB | Bytes] segment-size-spec integer-literal boolean (TRUE | FALSE) user-label...
  • Page 41 Recurring Syntax Syntax Value drive-channel-identifier (1 | 2 | 3 | 4 | 5 | 6 | 7 | 8) (eight drive ports per tray) drive-channel-identifier-l drive-channel-identifier {drive-channel-identifier} host-channel-identifier (a1 | a2 | b1 | b2) (four host ports per tray) host-channel-identifier (a1 | a2 | a3 | a4 | b1 | b2 | b3 | b4) (eight host ports per tray)
  • Page 42 Recurring Syntax Syntax Value repositoryRAIDLevel  count-based-repository- =repository-raid-level  spec repositoryDriveCount=integer-literal  [repositoryVolumeGroupUserLabel  =user-label]  ]  [driveType=drive-type ] |  [trayLossProtect=(TRUE | FALSE) ] |  [drawerLossProtect=(TRUE | FALSE) [dataAssurance=(none | enabled) wwID string-literal string-literal string-literal | integer-literal host-type host-card-identifier (1 | 2 | 3 | 4)
  • Page 43 Recurring Syntax Syntax Value driveType=drive-type |  autoconfigure-vols-attr- driveMediaType=drive-media-type |  value-pair raidLevel=raid-level |  volumeGroupWidth=integer-literal |  volumeGroupCount=integer-literal |  |  volumesPerGroupCount=integer-literal hotSpareCount=integer-literal |  segmentSize=segment-size-spec |  cacheReadPrefetch=(TRUE | FALSE)  securityType=(none | capable |  |  enabled) dataAssurance=(none | enabled) create-volume-copy-attr-...
  • Page 44 Recurring Syntax Syntax Value enableIPv4=(TRUE | FALSE) |  ethernet-port-options enableIPv6=(TRUE | FALSE) |  IPv6LocalAddress=ipv6-address |  IPv6RoutableAddress=ipv6-address |  IPv6RouterAddress=ipv6-address |  IPv4Address=ip-address |  IPv4ConfigurationMethod=  (static | dhcp) |  IPv4GatewayIP=ip-address |  IPv4SubnetMask=ip-address |  duplexMode=(TRUE | FALSE) | ...
  • Page 45 Recurring Syntax Syntax Value snapshot snapshot (legacy)-schedule-attribut (legacy)-schedule-attribute-value-pair {snapshot e-value-list (legacy)-schedule-attribute-value-pair} (GMT+HH:MM | GMT-HH:MM)  time-zone-spec [dayLightSaving=HH:MM] startDate=MM:DD:YY  snapshot scheduleDay=(dayOfWeek | all)  (legacy)-schedule-attribut startTime=HH:MM  e-value-pair scheduleInterval=interger  endDate=(MM:DD:YY | noEndDate)  timesPerDay=interger For tray loss protection to work, each drive in a volume group must be in a separate tray.
  • Page 46: Usage Guidelines

    The driveType parameter is not required if only one type of drive is in the storage array. If you use the driveType parameter, you also must use the hotSpareCount parameter and the volumeGroupWidth parameter. If you do not use the driveType parameter, the configuration defaults to Fibre Channel drives.
  • Page 47: Adding Comments To A Script File

    NOTE While the CLI commands and the script commands are not case sensitive, user labels (such as for volumes, hosts, or host ports) are case sensitive. If you try to map to an object that is identified by a user label, you must enter the user label exactly as it is defined, or the CLI commands and the script commands will fail.
  • Page 48 Adding Comments to a Script File...
  • Page 49: Configuration Concepts

    Configuration Concepts When you configure a storage array, you organize drives into a logical structure that provides storage capacity and data protection so that one or more hosts can safely store data in the storage array. This chapter provides definitions of the physical and logical components required to organize the drives into a storage array configuration.
  • Page 50: Drives

    controller must be in slot A. A controller tray or a controller-drive tray with two controllers is called a duplex tray. A controller tray or a controller-drive tray with one controller is called a simplex tray. Controllers manage the interface by running controller firmware to transmit and receive commands between the hosts and the drives.
  • Page 51: Hot Spare Drives

    Figure 1 Drive Drawer with Drives The total number of drives in a storage array depends on the model of the controller tray or controller-drive tray and the capacity of the drives. The following table lists, by controller tray or controller-drive tray model and drive tray capacity, the maximum number of drives in a storage array.
  • Page 52: Drive Security With Full Disk Encryption

    The hot spare must be the same type of drive as the drive that failed (for example, a Serial Advanced Technology Attachment [SATA] hot spare cannot replace a Fibre Channel hot spare). You can assign drives to act as hot spares manually or have the script commands automatically assign hot spares.
  • Page 53 Table 5 Volume Group Security Properties Security Status Security Capable – yes Security Capable – no The volume group is composed of all full Not applicable. Only FDE drives Secure – yes disk encryption (FDE) drives and is in a can be in a Secure state.
  • Page 54: Volume Groups

    Volume Groups A volume group is a set of drives that are logically grouped together by the controllers in a storage array. After you create a volume group, you can create one or more volumes in the volume group. A volume group is identified by a sequence number that is defined by the controller firmware when you created the volume group.
  • Page 55: Disk Pools

    Disk Pools A disk pool is a set of drives that is logically grouped together in the storage array. The drives in each disk pool must be of the same drive type and drive media type, and they must be similar in size. As with a volume group, you can create one or more volumes in the disk pool.
  • Page 56 Faster reconstruction of data – Disk pools do not use hot spare drives for data  protection like a volume group does. Instead of hot spare drives, disk pools use spare capacity within each drive that comprises the disk pool. In hot spare drive coverage, the maximum drive IOPS limits the speed of reconstruction of data from the failed drive to the hot spare drive.
  • Page 57: Volumes

    Volumes A volume is the logical component that hosts use for data storage. Hosts that are attached to the storage array write data to the volumes and read data from the volumes. You can create a volume from either a volume group or a disk pool. Before you create a volume, the volume group or a disk pool must already exist and it must have enough free capacity to create the volume.
  • Page 58 NOTE You can create a thin volume only from a disk pool, not from a volume group. Disk pools and thin volumes are available only on the E2600 controller and the E5400 controller. Access volume – A factory-configured volume in a storage area network (SAN) ...
  • Page 59 NOTE Snapshot (Legacy) Volume and Synchronous Mirroring are premium features that you must activate before you can use them. For more information about snapshot (legacy) volumes, see “Using the Snapshot (Legacy) Premium Feature.” For more information about Synchronous Mirroring, see “Using the Synchronous Mirroring Premium Feature.”...
  • Page 60: Raid Levels

    RAID Levels The RAID level defines a storage architecture in which the storage capacity on the drives in a volume group is separated into two parts: part of the capacity stores the user data, and the remainder stores redundant or parity information about the user data.
  • Page 61: Hosts

    RAID Level Configuration High-bandwidth mode – RAID Level 3 stripes both user data and redundancy data (in the form of parity) across the drives. The equivalent of the capacity of one drive is used for the redundancy data. RAID Level 3 works well for large data transfers in applications, such as multimedia or medical imaging, that write and read large sequential chunks of data.
  • Page 62: Host Groups

    Host Groups A host group is a topological element that you can define if you want to designate a collection of hosts that will share access to the same volumes. A host group is a logical entity. Host groups are identified by names or labels that users choose. The host group name can be any combination of alphanumeric characters with a maximum length of 30 characters.
  • Page 63: Configuring A Storage Array

    Configuring a Storage Array When you configure a storage array, you organize drives into a logical structure that provides storage capacity and data protection so that one or more hosts can safely store data in the storage array. You want to maximize the data availability by making sure that the data is quickly accessible while maintaining the highest level of data protection possible.
  • Page 64: Determining What Is On Your Storage Array

    Determining What Is on Your Storage Array – Even when you create a configuration on a storage array that has never been configured, you still need to determine the hardware features and software features that are on the storage array. When you configure a storage array that has an existing configuration, you must make sure that your new configuration does not inadvertently alter the existing configuration, unless you are reconfiguring the entire storage array.
  • Page 65 In this example, the name folder is the folder in which you choose to place the profile file, and storagearrayprofile.txt is the name of the file. You can choose any folder and any file name. ATTENTION Possible loss of data – When you write information to a file, the script engine does not check to determine if the file name already exists.
  • Page 66 NVSRAM version: Not applicable  Transferred on: Not applicable  NVSRAM configured for batteries: Yes  Start cache flushing at (in percentage): 80  Stop cache flushing at (in percentage): 80  Cache block size (in KB): 4  Media scan frequency (in days): Disabled  Failover alert delay (in minutes): 5 ...
  • Page 67: Clearing The Configuration

     show storageArray dbmDatabase  show storageArray hostTopology  show storageArray iscsiNegotiationDefaults  show storageArray lunMappings  show storageArray unconfiguredIscsiInitiators  show storageArray unreadableSectors  show string  show thinVolume  show allVolumes  show volume  show volume actionProgress ...
  • Page 68: Configuring A Storage Array With Volume Groups

    volumeGroups – Removes the storage array mapping (volume configuration,  the volume group configuration, disk pools, and thin volumes), but leaves the rest of the configuration intact. If you want to create new volume groups and volumes within the storage array, you can use the clear storageArray configuration command with the volumeGroups parameter to remove existing volume groups in a pre-existing configuration.
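    A minimal sketch of clearing only the volume configuration (the addresses are placeholders) might look like this:
    c:\...\smX\client>smcli 123.45.67.88 123.45.67.89 -c "clear storageArray configuration volumeGroups;"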
  • Page 69 would like to change any of the parameter values, you can do so by entering new values for the parameters when you run the autoConfigure storageArray command. If you are satisfied with the parameter values that the show storageArray autoConfiguration command returns, run the autoConfigure storageArray command without new parameter values.
  • Page 70 If the volume is for a single user with large I/O requests (such as multimedia), performance is maximized when a single I/O request can be serviced with a single data stripe. A data stripe is the segment size multiplied by the number of drives in the volume group that are used for data storage.
  • Page 71: Using The Create Volume Command

    Example of the Auto Configuration Command c:\...\smX\client>smcli 123.45.67.88 123.45.67.89  -c “autoConfigure storageArray driveType=fibre  raidLevel=5 volumeGroupWidth=8 volumeGroupCount=3  volumesPerGroupCount=4 hotSpareCount=2  segmentSize=8 cacheReadPrefetch=TRUE;” The command in this example creates a storage array configuration by using Fibre Channel drives set to RAID Level 5. Three volume groups are created, and each volume group consists of eight drives, which are configured into four volumes.
  • Page 72 NOTE The capacity parameter, the owner parameter, the cacheReadPrefetch parameter, the segmentSize parameter, the trayLossProtect parameter, the drawerLossProtect parameter, the dssPreAllocate parameter, and the securityType parameter are optional parameters (indicated by the placement inside the square brackets). You can use one or all of the optional parameters as needed to define your configuration.
  • Page 73 73 GB x 6 drives = 438 GB Because only 20 GB is assigned to the volume, 418 GB remains available (as unconfigured capacity) for other volumes that a user can add to this volume group later. 438 GB - 20 GB volume group size = 418 GB Cache read prefetch is turned on, which causes additional data blocks to be written into the cache.
  • Page 74: Tray Loss Protection

    If you want to add a new volume to an existing volume group, use this command: create volume volumeGroup=volumeGroupNumber userLabel=volumeName [freeCapacityArea=freeCapacityIndexNumber | capacity=volumeCapacity | owner=(a | b) | cacheReadPrefetch=(TRUE | FALSE) | segmentSize=segmentSizeValue] NOTE Parameters wrapped in square brackets or curly brackets are optional. Creating Volumes in an Existing Disk Pool...
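    A hedged sketch of this command inside the CLI wrapper (the group number, volume name, and capacity are placeholders) might look like this:
    c:\...\smX\client>smcli 123.45.67.88 123.45.67.89 -c "create volume volumeGroup=2 userLabel=\"Engineering_3\" capacity=20 GB owner=a;"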
  • Page 75: Configuring A Storage Array With Disk Pools

    When the controller firmware assigns the drives, if trayLossProtect=TRUE, the storage array posts an error if the controller firmware cannot provide drives that result in the new volume group having tray loss protection. If trayLossProtect=FALSE, the storage array performs the operation even if it means that the volume group might not have tray loss protection.
  • Page 76: Using The Create Disk Pool Command

    NOTE Many of these commands require a thorough understanding of the firmware as well as an understanding of the network components that need to be mapped. Use the CLI commands and the script commands with caution. Using the Create Disk Pool Command – Use the create diskPool command to create a new disk pool in two ways: Create a new disk pool automatically by entering the type of drives that you want...
  • Page 77 Hot spares are not required or needed for disk pools. Spare capacity for reconstruction is divided among the drives within a disk pool. A small amount of each drive is reserved as reconstruction space to hold reconstructed data in the event of loss of access to a drive or a drive failure.
  • Page 78 create diskPool driveType=(fibre|sas)  userLabel="diskPoolName"  [driveCount=driveCountValue |  warningThreshold=(warningThresholdValue|default) |  criticalThreshold=(criticalThresholdValue|default) |  criticalPriority=(highest|high|medium|low|lowest) |  backgroundPriority=(highest|high|medium|low|lowest) |  degradedPriority=(highest|high|medium|low|lowest) |  securityType=(none|capable|enabled) |  driveMediaType=(hdd | ssd | allMedia | unknown) |  dataAssurance=(none|enabled)] Example of Creating Volumes with Software-Assigned Drives c:\...\smX\client>smcli 123.45.67.88 123.45.67.89 ...
  • Page 79 The priority for correcting the disk pool after it has entered a Degraded state is set  to high. If a condition occurs, such as a single drive failure, the storage management software makes the correction of the condition a high priority. The securitytype is enabled, so the storage management software uses only ...
  • Page 80: Using The Create Volume Command

    Example of Creating Volumes with User-Assigned Drives c:\...\smX\client>smcli 123.45.67.88 123.45.67.89  -c “create diskpool drives=(1,1,1 1,1,2 1,2,3 ... 2,1,10, 2,2,11)  userLabel=”Engineering_1” warningthreshold=65 criticalthreshold=75 criticalpriority=high backgroundpriority=medium degradedpriority=high securitytype=enabled drivemediatype=hdd dataassurance=enabled;” This command creates a disk pool with these features: The list of drives represents the drives found in a high capacity drive tray. ...
  • Page 81 A standard volume has a fixed capacity that you can define when you create the volume. The standard volume reports only the fixed capacity to the host. In disk pools, the volume capacity is distributed across all of the applicable drives. You do not need to identify specific drives for the volume.
  • Page 82 Dynamic cache read prefetch allows the controller to (optionally) copy additional sequential data blocks into the cache while it is reading data blocks from a drive to cache. This caching increases the chance that future requests for data can be filled from the cache.
  • Page 83 The mapping parameter defines whether you want the storage management software to map the volume to a host, or if you want to map the volume to a host at a later time. To allow the storage management software to map the volume to a host, use the default parameter.
  • Page 84: Modifying Your Configuration

    The mapping parameter defines whether you want the storage management software to map the volume to a host, or if you want to map the volume to a host later. To allow the storage management software to map the volume to a host use the default parameter.
  • Page 85: Setting The Controller Clocks

    Setting the Controller Clocks – To synchronize the clocks on the controllers with the host, use the set storageArray time command. Run this command to make sure that event time stamps that are written by the controllers to the Event Log match the event time stamps that are written to the host log files.
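    A minimal sketch of synchronizing the controller clocks with the host (the addresses are placeholders) might look like this:
    c:\...\smX\client>smcli 123.45.67.88 123.45.67.89 -c "set storageArray time;"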
  • Page 86: Setting The Storage Array Cache

    For example, if you set the defaultHostType parameter to Linux, the controller communicates with any undefined host if the undefined host is running a Linux operating system. Typically, you would need to change the host type only when you are setting up the storage array. The only time that you might need to use this parameter is when you need to change how the storage array behaves relative to the hosts that are connected to it.
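    A hedged sketch of changing the default host type (the host type name Linux is used only as an example value, and the addresses are placeholders) might look like this:
    c:\...\smX\client>smcli 123.45.67.88 123.45.67.89 -c "set storageArray defaultHostType=\"Linux\";"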
  • Page 87 set storageArray cacheBlockSize=cacheBlockSizeValue |  cacheFlushStart=cacheFlushStartSize |  cacheFlushStop=cacheFlushStopSize You can enter one, two, or all three of the parameters on the command line. The cache block size value defines the size of the data block that is used by the controller in transferring data into or out of the cache.
  • Page 88 set (allVolumes | volume [volumeName] |  volumes [volumeName1 ... volumeNameN]  volume <wwID>) |  cacheFlushModifier=cacheFlushModifierValue |  cacheWithoutBatteryEnabled=(TRUE | FALSE) |  mirrorCacheEnabled=(TRUE | FALSE) |  readCacheEnabled=(TRUE | FALSE) |  writeCacheEnabled=(TRUE | FALSE) |  cacheReadPrefetch=(TRUE | FALSE) The cacheFlushModifier parameter defines the amount of time that data stays in the cache before it is written to the drives.
  • Page 89 NOTE Do not set the value of the cacheFlushModifier parameter above 10 seconds. An exception is for testing purposes. After running any tests in which you have set the values of the cacheFlushModifier parameter above 10 seconds, return the value of the cacheFlushModifier parameter to 10 or fewer seconds. The cacheWithoutBatteryEnabled parameter turns on or turns off the ability of a host to perform write caching without backup batteries in a controller.
  • Page 90: Setting The Modification Priority

    The writeCacheEnabled parameter turns on or turns off the ability of the host to write data to the cache. Write caching enables write operations from the host to be stored in cache memory. The volume data in the cache is automatically written to the drives every 10 seconds.
  • Page 91: Assigning Global Hot Spares

    set (allVolumes | volume [volumeName] |  volumes [volumeName1 ... volumeNameN] volume <wwID> |  accessVolume)  modificationPriority=(highest | high | medium | low | lowest) This example shows how to use this command to set the modification priority for volumes named Engineering_1 and Engineering_2: c:\...\smX\client>smcli 123.45.67.88 123.45.67.89 ...
  • Page 92 save storageArray configuration file=”filename”  [(allconfig | globalSettings=(TRUE | FALSE)) |  volumeConfigAndSettings=(TRUE | FALSE) |  hostTopology=(TRUE | FALSE) | lunMappings=(TRUE | FALSE)] ATTENTION Possible loss of data – When information is written to a file, the script engine does not check to determine if the file name already exists. If you choose the name of a file that already exists, the script engine writes over the information in the file without warning.
  • Page 93: Using The Snapshot (Legacy) Premium Feature

    Using the Snapshot (Legacy) Premium Feature The Snapshot (Legacy) premium feature creates a snapshot (legacy) volume that you can use as a backup of your data. A snapshot (legacy) volume is a logical point-in-time image of a standard volume. Because it is not a physical copy, a snapshot (legacy) volume is created more quickly than a physical copy and requires less storage space on the drive.
  • Page 94 Table 14 Components of a Snapshot (Legacy) Volume Component Description Base volume A standard volume from which the snapshot (legacy) is created Snapshot (Legacy) volume A logical point-in-time image of a standard volume Snapshot (Legacy) repository A volume that contains snapshot (legacy) metadata and volume copy-on-write data for a particular snapshot (legacy) volume...
  • Page 95: About Scheduling Snapshots (Legacy)

    Table 15 Snapshot (Legacy) Volume Commands – create snapshotVolume: This command creates a snapshot (legacy) volume. recreate snapshot: This command starts a fresh copy-on-write operation by using an existing snapshot (legacy) volume. recreate snapshot collection: This command restarts multiple snapshot (legacy) volumes as one batch operation by using one or many existing snapshot (legacy) volumes.
  • Page 96 create a schedule containing a start time of 8:00 a.m. and an end time of 5:00 p.m. Select 10 snapshots (legacy) per day on Monday, Tuesday, Wednesday, Thursday, and Friday. Select a start date of today and no end date. Create an end of day backup as described in the "Scheduled backups"...
  • Page 97: Creating A Snapshot (Legacy) Volume

    Creating a Snapshot (Legacy) Volume – The create snapshotVolume command provides three methods for defining the drives for your snapshot (legacy) repository volume: Defining the drives for the snapshot (legacy) repository volume by their tray IDs and their slot IDs. Defining a volume group in which the snapshot (legacy) repository volume...
  • Page 98: Creating A Snapshot (Legacy) Volume With Software-Assigned

    The command in this example creates a new snapshot (legacy) of the base volume Mars_Spirit_4. The snapshot (legacy) repository volume consists of five drives that form a new volume group. The new volume group has RAID Level 5. This command also takes a snapshot (legacy) of the base volume, which starts the copy-on-write operation.
  • Page 99: Creating A Snapshot (Legacy) Volume By Specifying A Number Of

    c:\...\smX\client>smcli 123.45.67.88 123.45.67.89  -c “create snapshotVolume baseVolume=\”Mars_Spirit_4\”  repositoryVolumeGroup=2 freeCapacityArea=2;” The command in this example creates a new snapshot (legacy) repository volume in volume group 2. The base volume is Mars_Spirit_4. The size of the snapshot (legacy) repository volume is 4 GB. This command also takes a snapshot (legacy) of the base volume, starting the copy-on-write operation.
  • Page 100: User-Defined Parameters

    The command in this example creates a new snapshot (legacy) repository volume that consists of three drives. Three drives comprise a new volume group that has RAID Level 5. This command also takes a snapshot (legacy) of the base volume, which starts the copy-on-write operation.
  • Page 101 Parameter Description The name that you want to give to the snapshot (legacy) repositoryUserLabel repository volume. If you do not choose a name for the snapshot (legacy) repository volume, the software creates a default name by using the base volume name. For example, if the base volume name is Mars_Spirit_4 and does not have an associated snapshot (legacy) repository volume, the default snapshot (legacy) repository volume name is...
  • Page 102: Volume Names

    create snapshotVolume baseVolume="Mars_Spirit_4" repositoryRAIDLevel=5 repositoryDriveCount=5 driveType=fibre userLabel="Mars_Spirit_4_snap1" repositoryUserLabel="Mars_Spirit_4_rep1" warningThresholdPercent=75 repositoryPercentOfBase=40 repositoryFullPolicy=failSnapshot; Snapshot (Legacy) Volume Names – The snapshot (legacy) volume names and the snapshot (legacy) repository volume names can be any combination of alphanumeric characters, hyphens, and underscores. The maximum length of the volume names is 30 characters.
  • Page 103: Creating A Snapshot (Legacy) Schedule

    Creating a Snapshot (Legacy) Schedule – You can create a snapshot (legacy) schedule in two ways: when you create a snapshot (legacy) volume using the create snapshotVolume command, or when you modify a snapshot (legacy) volume using the set (snapshot) volume command. The following table lists the parameters that you can use to set a schedule for a snapshot (legacy):
  • Page 104 Scheduling Snapshots (Legacy) Use the enableSchedule parameter and the schedule parameter to schedule automatic snapshots (legacy). Using these parameters, you can schedule snapshots (legacy) daily, weekly, or monthly (by day or by date). The enableSchedule parameter turns on or turns off the ability to schedule snapshots (legacy). When you enable scheduling, you use the schedule parameter to define when you want the snapshots (legacy) to occur.
  • Page 105: Changing Snapshot (Legacy) Volume Settings

    option value that you set. For example, 1440/180 = 8. The firmware then compares the timesPerDay integer value with the calculated scheduleInterval integer value and uses the smaller value. To remove a schedule, use the delete snapshot (legacy) command with the schedule parameter.
  • Page 106: Stopping, Restarting, And Deleting A Snapshot (Legacy) Volume

    Stopping, Restarting, and Deleting a Snapshot (Legacy) Volume – When you create a snapshot (legacy) volume, copy-on-write starts running immediately. As long as a snapshot (legacy) volume is enabled, storage array performance is impacted by the copy-on-write operations to the associated snapshot (legacy) repository volume. If you no longer want copy-on-write operations to run, you can use the stop...
  • Page 107: Starting, Stopping, And Resuming A Snapshot (Legacy) Rollback

    This example shows how to use the command in a script file: recreate snapshot volumes  [“Mars_Spirit_4-2” “Mars_Spirit_4-3”]; If you do not intend to use a snapshot (legacy) volume again, you can delete the snapshot (legacy) volume by using the delete volume command. When you delete a snapshot (legacy) volume, the associated snapshot (legacy) repository volume also is deleted.
  • Page 108 Rollback operations involving the volumes in a Synchronous Mirroring  relationship have these constraints: If the base volume is acting as the secondary volume in a Synchronous — Mirroring relationship, you cannot start a rollback operation. If the base volume is acting as the primary volume in a Synchronous —...
  • Page 109 Snapshot (Legacy) Rollback Status You can see the status of a snapshot (legacy) rollback operation by running the show volume command on the snapshot (legacy) volume. The show volume command returns one of these statuses during a snapshot (legacy) rollback operation: Table 19 Snapshot (Legacy) Rollback Operation Status Status Description...
  • Page 110 Starting, Stopping, and Resuming a Snapshot (Legacy) Rollback...
  • Page 111: Using The Snapshot Images Premium Feature

    Using the Snapshot Images Premium Feature NOTE Snapshot image operations are available only on the E2600 controller and the E5400 controller. A snapshot image is a logical image of the content of an associated base volume created at a specific moment. A snapshot image can be thought of as a restore point. A host cannot directly read from or write to the snapshot image because the snapshot image is used to save only the transient data captured from the base volume.
  • Page 112: Differences Between Snapshots (Legacy) And Snapshot Image Operations

    Identifying the volume group or disk pool in which you want to place the  repository volume The capacity for the repository volume  You can delete older snapshot images in a snapshot group. When a snapshot image is deleted, its definition is removed from the system, and the space occupied by the snapshot image in the repository is released and made available for reuse within the snapshot group.
  • Page 113: Snapshot Groups

    You can create either a snapshot image that is capable of both reading operations and writing operations, or you can create a read-only snapshot volume. Snapshot Groups – A snapshot group is a collection of snapshot images of a single associated base volume.
  • Page 114: Repository Volumes

    Characteristics of Snapshot Groups Snapshot groups can be initially created with or without snapshot images.  Depending on your configuration, a single associated base volume has a  maximum limit of snapshot groups. Depending on your configuration, a snapshot group has a maximum limit of ...
  • Page 115: Snapshot Volumes

    Auto-Purge Snapshot Images: Automatically delete the oldest snapshot images in  the snapshot group to free up space that can be used to satisfy the copy-on-write operation capacity needs in the snapshot group repository. Fail Base Writes: Fail write requests to the base volume that triggered the ...
  • Page 116: Relationship Between Snapshot Images, Snapshot Groups, And Snapshot

    The snapshot is allocated from the storage pool from which the original snapshot image is allocated. All I/O write operations to the snapshot image are redirected to the snapshot volume repository that was allocated for saving data modifications. The data of the original snapshot image remains unchanged.
  • Page 117 A consistency group pools several volumes together so that you can take a snapshot of all the volumes at the same point in time. This action creates a synchronized snapshot of all the volumes and is ideal for applications that span several volumes, for example, a database application that has the logs on one volume and the database on another volume.
  • Page 118: Creating A Snapshot Group

    Creating a Snapshot Group – Before you can create any snapshot images, you must first create a snapshot group and the associated repository volume. To create a new snapshot group, use the create snapGroup command. This command creates a new snapshot group that is associated with a specific source volume.
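    A hedged sketch of this command in a script file (the group label and source volume name are placeholders, and the parameter names follow the syntax commonly documented for create snapGroup) might look like this:
    create snapGroup userLabel="Eng1_SG01" sourceVolume="Engineering_1";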
  • Page 119: Creating A Snapshot Image

    To delete the snapshot group, use this command: delete snapGroup. If you want to retain the repository members, set the deleteRepositoryMembers parameter to FALSE. Creating a Snapshot Image – To create a new snapshot image, use the create snapImage command. This command creates a new snapshot image in one or more existing snapshot groups. Before you can create a snapshot image, you must first have at least one snapshot group into which you can place the snapshot image.
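    A hedged sketch of creating a snapshot image in one existing snapshot group (the group name is a placeholder) might look like this in a script file:
    create snapImage snapGroup="Eng1_SG01";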
  • Page 120: Creating A Snapshot Image Schedule

    Creating a Snapshot Image Schedule – You can schedule the creation of regular snapshot images to enable file recovery and scheduled backups. You can create a schedule when you initially create a snapshot group or consistency group, or you can add one later to an existing snapshot group or consistency group.
  • Page 121 Parameter Description Use this parameter to determine whether system rollbackPriority resources should be allocated to the rollback operation at the expense of system performance. A value of 0 indicates that the rollback operation is prioritized over all other host I/O. A value of 4 indicates that the rollback operation should be performed with minimal impact to host I/O.
  • Page 122: Deleting A Snapshot Group

    endDate – A specific date on which you want to stop creating a snapshot image  and end the copy-on-write operation. The format for entering the date is MM:DD:YY. An example of this option is endDate=11:26:11. noEndDate – Use this option if you do not want your scheduled copy-on-write ...
  • Page 123: Creating A Snapshot Consistency Group

    To delete the snapshot image, use this command: delete snapImage Optionally you can choose to keep a number of snapshot images with these parameters: deleteCount – This parameter deletes the oldest snapshot image first and  continues to delete the oldest snapshot images until reaching the number that you enter.
  • Page 124: Deleting A Snapshot Consistency Group

    You can change the size of the snapshot repository. If you have the storage  capacity you can increase the size of the snapshot repository to avoid a repository full message. Conversely, if you find that the snapshot volume repository is larger than you need, you can reduce its size to free up space that is needed by other logical volumes.
  • Page 125: Creating A Snapshot Volume

    deleteCount – This parameter deletes the oldest snapshot image first and  continues to delete the oldest snapshot images until reaching the number that you enter. If the number that you enter is greater than the number of snapshot images, all of the snapshot images are deleted.
  • Page 126: Resuming A Consistency Group Snapshot Volume

    Both of these conditions together might cause the creation of a snapshot image to  enter in a Pending state when you try to create a snapshot volume: The base volume that contains this snapshot image is a member of an —...
  • Page 127: Changing The Size Of A Repository Volume

    Changing the Size of a Repository Volume – You can increase or decrease the size of a repository volume. Increasing the Size of a Repository Volume: Because a repository volume is composed of one or more standard volumes, you can increase the storage capacity of an existing repository for these storage objects: Snapshot group...
  • Page 128 Drive Type: A match requires that the base volume and the repository — volume reside on either a volume group or disk pool with identical drive type attributes. You cannot increase or decrease the repository capacity for a snapshot volume ...
  • Page 129: Starting, Stopping, And Resuming A Snapshot Image Rollback

    You cannot increase or decrease the repository capacity for a snapshot volume  that is read-only because it does not have an associated repository. Only snapshot volumes that are read-write require a repository. When you decrease capacity for a snapshot volume or a consistency group ...
  • Page 130 Keep these guidelines in mind before you start a rollback operation: The rollback operation does not change the content of the snapshot images that  are associated with the base volume. You cannot perform the following actions when a rollback operation is in ...
  • Page 131 more member volumes. The start snapImage rollback works with specific snapshot images. The start cgSnapImagerollback command works with specific member volumes in the consistency group. Stopping a Snapshot Image Rollback ATTENTION Possible loss of data access – Stopping a snapshot image rollback can leave the base volume and the snapshot image unusable.
  • Page 132 Table 21 Snapshot (Legacy) Rollback Operation Status Status Description None No snapshot image rollback operations are running. In Progress A snapshot image rollback operation is running. When a snapshot image rollback operation is running, the amount of the rollback operation finished is shown as a percentage and an estimate of the time remaining is also shown.
  • Page 133: Using The Asynchronous Mirroring Premium Feature

    Using the Asynchronous Mirroring Premium Feature The Asynchronous Mirroring premium feature provides for replication of data between storage arrays over a remote distance. In the event of a disaster or a catastrophic failure on one storage array, you can promote the second storage array to take over responsibility for computing services.
  • Page 134: How Asynchronous Mirroring Works

    You can use asynchronous mirroring for these functions: Disaster recovery – You can replicate data from one site to another site, which  provides an exact duplicate at the remote (secondary) site. If the primary site fails, you can use mirrored data at the remote site for failover and recovery. You can then shift storage operations to the remote site for continued operation of all of the services that are usually provided by the primary site.
  • Page 135 The controller owner of the primary volume initiates remote writes to the secondary volume to keep the data on the two volumes synchronized. The secondary volume maintains a mirror (or copy) of the data on its associated primary volume. The controller owner of the secondary volume receives remote writes from the controller owner of the primary volume but does not accept host write requests.
  • Page 136: Configurating For Asynchronous Mirroring

    Data on the secondary volume must support a site-level failover for disaster  recovery. For this reason, the data on the secondary volume is protected during the synchronization process so that writes to the secondary volume do not render the volume data unusable. Additionally, many applications require the use of more than one volume, each of which must be mirrored in order to support a site-level failover.
  • Page 137: Asynchronous Mirror Groups

    VLAN – Both local and remote systems must have the same VLAN setting in order to communicate. iSCSI listening port. Jumbo Frames. Ethernet Priority. Asynchronous Mirror Groups – Some applications, such as file systems and databases, distribute data storage across many volumes.
  • Page 138 The resynchronization interval is the amount of time between automatically sending updates of modified data from the primary storage array to the secondary storage array. The interval, expressed in minutes, represents the time between the starting points of sending updates from the primary to the secondary. A resynchronization interval of zero means that synchronization is manual.
  • Page 139: Mirror Repository Volumes

    Mirror Repository Volumes – A mirror repository volume is a special volume in the storage array that is created as a resource for the controller owner of the primary volume in a remote mirrored pair. The controller stores mirror information on this volume, including information about remote writes that are not yet complete.
  • Page 140: Creating An Asynchronous Mirrored Pair

    Creating an Asynchronous Mirrored Pair – Before you create any mirror relationships, you must create an asynchronous mirror group. The asynchronous mirror group is a logical entity that spans a local storage array and a remote storage array, that is used for mirroring, and that contains one or more mirrored pairs.
  • Page 141: Creating The Asynchronous Mirroring Group

    To activate the Asynchronous Mirroring premium feature, use this command: activate storageArray feature=asyncRemoteMirror The storage array performs the following actions when you activate the Asynchronous Mirroring premium feature: Logs out all hosts currently using the highest numbered Fibre Channel host port ...
  • Page 142 relationship. You use the Create Asynchronous Mirror Group command to specify the remote storage array that contains the volumes that will provide the secondary role in the mirror relationship. The command has this form: create asyncMirrorGroup userLabel="asyncMirrorGroupName"  (remoteStorageArrayName="storageArrayName" | remoteStorageArrayWwn="wwID") ...
  • Page 143: Creating The Asynchronous Mirroring Pair

    This example shows how to use the command in a script file: create asyncMirrorGroup userLabel="EngDevData"  remoteStorageArrayName="Eng_Backup"  interfaceType=iSCSI  remotePassword="xxxxx"  syncInterval=8 hours  warningSyncThreshold=1 hours  warningRecoveryThreshold=2 hours  warningThresholdPercent=80  autoResync=TRUE] After you create the asynchronous mirror group, you can create the asynchronous mirrored pair to start performing remote mirroring operations.
  • Page 144 Keep these guidelines in mind when creating the asynchronous mirroring pairs: Primary volumes and secondary mirror repository volumes do not need to be the  same size. Mirror repository volumes are independent of the associated primary volume and  secondary volume so that they can be created in separate volume groups with different RAID levels.
  • Page 145 When you run this command, at a minimum you must perform these actions: Identify the volume on the local storage array that you want to mirror to a  repository volume on the remote storage array. Identify the asynchronous mirror group in which you want to place the volume ...
  • Page 146: Changing Asynchronous Mirroring Settings

    Changing Asynchronous Mirroring Settings – The set asyncMirrorGroup command enables you to change the property settings for an asynchronous mirrored pair. Use this command to change these property settings: Synchronization interval – The length of time between automatically sending updates of modified data from the local storage array to the remote storage array. Synchronization warning threshold –...
  • Page 147: Suspending And Resuming The Asynchronous Mirror Group

    Suspending and Resuming the Asynchronous Mirror Group – Use the suspend asyncMirrorGroup command to stop data transfer between all of the primary volumes and all of the secondary volumes in an asynchronous mirror group without disabling the asynchronous mirroring relationships. Suspending the asynchronous mirroring relationship lets you control when the data on the primary volume and data on the secondary volume are synchronized.
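    A hedged sketch of suspending and later resuming an asynchronous mirror group in a script file (the group name amg_001 is used as a placeholder) might look like this:
    suspend asyncMirrorGroup ["amg_001"];
    resume asyncMirrorGroup ["amg_001"];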
  • Page 148: Manually Resynchronizing Volumes In An Asynchronous Mirror Group

    Manually Resynchronizing Volumes in an Asynchronous Mirror Group – Manually resynchronizing the volumes in an asynchronous mirror group immediately resynchronizes all of the mirror relationships within the asynchronous mirror group. You cannot perform this operation if one of these conditions exists: The asynchronous mirror group has failed because any dependent component of...
  • Page 149 The original primary volumes are protected from new write requests just as if  they were secondary volumes. A resynchronization process from the original primary volumes to the original  secondary volumes starts. The resynchronization operation completes after all mirror-pairs of the asynchronous mirror group are fully synchronized.
  • Page 150: Canceling A Pending Asynchronous Mirror Group Role Change

    If the role change is interrupted because of a communication failure between the storage arrays, the mirror roles can possibly end as two secondary roles. This role conflict does not compromise the data synchronization state. Canceling a Pending Asynchronous Mirror Group Role Change – You can cancel a pending role change by running this command: stop asyncMirrorGroup rolechange
  • Page 151: Removing Volumes From The Asynchronous Mirror Group

    Removing Volumes from the Asynchronous Mirror Group – When you have an asynchronous mirror group, you have three volumes that you need to manage: the primary on the local storage array, the secondary on the remote storage array, and the repository on both storage arrays...
  • Page 152: Deleting An Asynchronous Mirror Group

    remove volume ["Jan_04_Account"] asyncMirrorGroup="amg_001"  deleteRepositoryMembers=TRUE; You must run this command on the local storage array to remove the primary volume. If the volume is not successfully removed from both sides of the asynchronous mirror group, the mirror volume that was not removed becomes an orphan. Orphans are detected when communications between the controller on the local storage array and the corresponding controller on the remote storage array is restored.
  • Page 153: Using The Synchronous Mirroring Premium Feature

    Using the Synchronous Mirroring Premium Feature The Synchronous Mirroring premium feature provides for online, real-time replication of data between storage arrays over a remote distance. In the event of a disaster or a catastrophic failure on one storage array, you can promote the second storage array to take over responsibility for computing services.
  • Page 154: Mirror Repository Volumes

    Table 23 Maximum Number of Defined Mirrors per Storage Array Maximum Number of Defined Controller Model Mirrors AM1331, AM1333, AM1532, Only supported in a co-existence AM1932 storage environment E2600 CDE3992, CDE3994 CE4900 CE6998, CE7900 The primary volume is the volume that accepts host I/O activity and stores application data.
  • Page 155: Mirror Relationships

    Because of the critical nature of the data being stored, do not use RAID Level 0 as the RAID level of mirror repository volumes. The required size of each volume is 128 MB, or 256 MB total for both mirror repository volumes of a dual-controller storage array.
  • Page 156: Link Interruptions Or Secondary Volume Errors

    record on the mirror repository volume. The controller then sends an I/O completion indication back to the host system. Synchronous write mode is selected as the default value and is the recommended write mode. Asynchronous Write Mode Asynchronous write mode offers faster host I/O performance but does not guarantee that a copy operation has successfully completed before processing the next write request.
  • Page 157: Resynchronization

    primary volume during the link interruption are copied to the secondary volume. After the resynchronization starts, the mirrored pair transitions from an Unsynchronized status to a Synchronization in Progress status. The primary controller also marks the mirrored pair as unsynchronized when a volume error on the secondary side prevents the remote write from completing.
  • Page 158: Performance Considerations

    The secondary volume must be of equal or greater size than the primary volume.  The RAID level of the secondary volume does not have to be the same as the  primary volume. Use these steps to create the volume. Enable the Synchronous Mirroring premium feature.
  • Page 159: Enabling The Synchronous Mirroring Premium Feature

    Enabling the Synchronous Mirroring Premium Feature – The first step in creating a remote mirror is to make sure that the Synchronous Mirroring premium feature is enabled on both storage arrays. Because Synchronous Mirroring is a premium feature, you need a feature key file to enable the premium feature.
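    As a minimal sketch only, enabling the premium feature from a feature key file might look like the following script-file command. The file path is a placeholder, and the exact form of the enable command should be verified against the Command Line Interface and Script Commands reference.
    enable storageArray feature file="C:\keys\syncMirror.key";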
  • Page 160 Activating the Synchronous Mirroring Premium Feature with User-Assigned Drives Activating the Synchronous Mirroring premium feature by assigning the drives provides flexibility in defining your configuration by letting you choose from the available drives in your storage array. Choosing the drives for your remote mirror automatically creates a new volume group.
  • Page 161: Determining Candidates For A Remote Mirrored Pair

    c:\...\smX\client>smcli 123.45.67.88 123.45.67.89  -c “activate storageArray feature=syncMirror  repositoryVolumeGroup=2 freeCapacityArea=2;” The command in this example creates a new mirror repository volume in volume group 2 using the second free capacity area. This example shows how to use the command in a script file: activate storageArray feature=syncMirror ...
  • Page 162: Creating A Remote Mirrored Pair

    The command takes this form: c:\...\smX\client>smcli 123.45.67.88 123.45.67.89  -c “show remoteMirror candidates primary=\“volumeName\”  remoteStorageArrayName=\“storageArrayName\”;” where volumeName is the name of the volume that you want to use for the primary volume, and storageArrayName is the remote storage array that contains possible candidates for the secondary volume.
  • Page 163: Changing Synchronous Mirroring Settings

    a remote mirrored pair is a significant change to a storage array configuration. Setting the write mode to synchronous and the synchronization priority to highest means that host write requests are written to the primary volume and then immediately copied to the secondary volume. These actions help to make sure that the data on the secondary volume is as accurate a copy of the data on the primary volume as possible.
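    A hypothetical script-file sketch of changing these settings follows. The volume name Jan_04_Account is a placeholder, and the parameter names writeMode and syncPriority are assumptions that should be checked against the Command Line Interface and Script Commands reference.
    set remoteMirror localVolume ["Jan_04_Account"] writeMode=synchronous syncPriority=highest;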
  • Page 164: Suspending And Resuming A Synchronous Mirroring Relationship

    Suspending and Resuming a Synchronous Mirroring Relationship – Use the suspend remoteMirror command to stop data transfer between a primary volume and a secondary volume in a mirror relationship without disabling the mirror relationship. Suspending a mirror relationship lets you control when the data on the primary volume and the data on the secondary volume are synchronized.
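    As a sketch, and assuming that the suspend command takes the same volume and writeConsistency arguments as the resume example shown in the next section, suspending a single mirrored pair might look like this (the volume name is a placeholder):
    suspend remoteMirror volume Jan_04_Account writeConsistency=false;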
  • Page 165: Removing A Mirror Relationship

    This example shows how to use the command in a script file: resume remoteMirror volume Jan_04_Account writeConsistency=false; Removing a Mirror Relationship – Use the remove remoteMirror command to remove the link between a primary volume and a secondary volume. (Removing a mirror relationship is similar to deleting a mirror relationship.) Removing the link between a primary volume and a...
  • Page 166: Deactivating The Synchronous Mirroring Premium Feature

    disable storageArray feature=remoteMirror Deactivating the Synchronous Mirroring Premium Feature – If you no longer require the Synchronous Mirroring premium feature and you have removed all of the mirror relationships, you can deactivate the premium feature. Deactivating the premium feature re-establishes the normal use of dedicated ports on both storage arrays and deletes both mirror repository volumes.
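    For illustration, the deactivation step might be scripted as shown below; the exact form of the deactivate command is an assumption to confirm in the Command Line Interface and Script Commands reference.
    deactivate storageArray feature=remoteMirror;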
  • Page 167: Snapshot (Legacy) Volumes

    Snapshot (Legacy) Volumes – A snapshot (legacy) is a point-in-time image of a volume. Typically, it is created so that an application, such as a backup application, can access the snapshot (legacy) volume and read the data while the base volume stays online and is accessible to hosts.
  • Page 169: Using The Volume Copy Premium Feature

    Using the Volume Copy Premium Feature The Volume Copy premium feature lets you copy data from one volume (the source) to another volume (the target) in a single storage array. You can use this premium feature to perform these tasks: Back up data ...
  • Page 170: Target Volume

    A mirror repository volume  A failed volume  A missing volume  A volume currently in a modification operation  A volume that is holding a Small Computer System Interface-2 (SCSI-2)  reservation or a persistent reservation A volume that is a source volume or a target volume in another volume copy that ...
  • Page 171: Volume Copy And Persistent Reservations

    A volume that is holding a SCSI-2 reservation or a persistent reservation; A volume that is a source volume or a target volume in another volume copy that has a status of In Progress, Pending, or Failed. Volume Copy and Persistent Reservations – You cannot use volumes that hold persistent reservations for either a source volume or a target volume.
  • Page 172: Volume Copy Commands

    A volume that is reserved by the host cannot be selected as a source volume or as  a target volume. A volume with a status of Failed cannot be used as a source volume or as a target  volume.
  • Page 173: Creating A Volume Copy

    Creating a Volume Copy – Before you create a volume copy, make sure that a suitable target volume exists on the storage array, or create a new target volume specifically for the volume copy. The target volume that you use must have a capacity equal to or greater than the source volume.
  • Page 174: Creating A Volume Copy

    ATTENTION Possible loss of data access – A volume copy overwrites data on the target volume. Make sure that you no longer need the data or have backed up the data on the target volume before you start a volume copy operation. When you create a volume copy, you must define which volumes you want to use for the source volume and the target volume.
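    A hypothetical script-file sketch of creating a copy pair follows; the volume names are placeholders, and the optional parameters mirror those used with the set volumeCopy command later in this chapter. Verify the exact form against the Command Line Interface and Script Commands reference.
    create volumeCopy source="Jan_04_Account" target="Obi_1" copyPriority=medium targetReadOnlyEnabled=TRUE;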
  • Page 175: Viewing Volume Copy Properties

    To view the progress of a volume copy, use the show volume actionProgress command. This command returns information about the volume action, the percentage completed, and the time remaining until the volume copy is complete. Viewing Volume Copy Properties – Use the show volumeCopy command to view information about one or more selected source volumes or target volumes.
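    For example (the volume name Obi_1 is a placeholder, and the allVolumes and target keywords are assumptions to confirm in the command reference), you might check progress and then list copy properties like this:
    show volume ["Obi_1"] actionProgress;
    show volumeCopy allVolumes;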
  • Page 176 Copy priority has five relative settings ranging from highest to lowest. The highest priority supports the volume copy, but I/O activity might be affected. The lowest priority supports I/O activity, but the volume copy takes longer. You can change the copy priority at these times: Before the volume copy operation starts ...
  • Page 177: Recopying A Volume

    set volumeCopy target [“Obi_1”] copyPriority=highest targetReadOnlyEnabled=FALSE; Recopying a Volume – Use the recopy volumeCopy command to create a new volume copy for a previously defined copy pair that has a status of Stopped, Failed, or Completed. You can use the recopy volumeCopy command to create backups of the target volume.
  • Page 178: Stopping A Volume Copy

    copy pair, which has already created one volume copy. By using this command, you are copying the data from the source volume to the target volume with the assumption that the data on the source volume has changed since the previous copy was made. This example shows you how to use the command in a script file: recopy volumeCopy target [“Obi_1”] copyPriority=highest;...
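    As a hedged sketch of the stop operation described under "Stopping a Volume Copy" (the target and source names are placeholders, and whether the source parameter is required should be confirmed in the command reference):
    stop volumeCopy target ["Obi_1"] source ["Jan_04_Account"];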
  • Page 179: Interaction With Other Premium Features

    This example shows how to use the command in a script file: remove volumeCopy target [“Obi_1”]; Interaction with Other Premium Features – You can run the Volume Copy premium feature while running the following premium features: Storage Partitioning, Snapshot (Legacy)...
  • Page 180: Synchronous Mirroring

    You can select snapshot (legacy) volumes as the source volumes for a volume copy. This selection is a good use of this premium feature, because it performs complete backups without significant impact to the storage array I/O. Some I/O processing resources are lost to the copy operation.
  • Page 181 The secondary volume stays available to host applications as read-only while mirroring is underway. In the event of a disaster or a catastrophic failure at the primary site, you can perform a role reversal to promote the secondary volume to a primary role.
  • Page 183: Using The Ssd Cache Premium Feature

    Using the SSD Cache Premium Feature NOTE The SSD Cache premium feature is available only on the E2600 controller and the E5400 controller. The SSD Cache premium feature provides a way to improve read-only performance. SSD cache is a set of Solid-State Disk (SSD) drives that you logically group together in your storage array to implement a read cache for end-user volumes.
  • Page 184 possible candidates consisting of different counts of SSD drives. You also have the option to enable SSD cache on all eligible volumes that are currently mapped to hosts. Lastly, after you create the SSD cache, you can enable or disable it on existing volumes or as part of a new volume creation.
  • Page 185: Creating The Ssd Cache, Adding Volumes, And Removing Volumes

    Creating the SSD Cache, Adding Volumes, and Removing Volumes – Before you create the SSD cache, make sure that suitable SSD drives are available on the storage array. You can achieve the best performance when the working set of the data fits in the SSD cache so that most host reads can be serviced from the lower latency solid state disks instead of the higher latency hard drives (HDDs)...
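    For illustration only, creating an SSD cache from specific SSD drives might look like the sketch below; the cache name and drive locations (trayID,drawerID,slotID) are placeholders, and the exact command form should be verified against the Command Line Interface and Script Commands reference.
    create ssdCache userLabel="ssd_cache_1" drives=(0,1,4 0,1,5);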
  • Page 186 The performance modeling tool provides an estimate of performance using these metrics: Cache hit percentage  Average response time  NOTE Performance modeling does not survive a controller reboot. Starting and Stopping SSD Cache Performance Modeling To start a performance modeling operation, use this command: start ssdCache [ssdCacheName] performanceModeling Enclose the identifier in square brackets ([ ]).
  • Page 187 Then you can use a spreadsheet program outside of the storage management software to compare the data from the .csv file. The performance modeling tool does not support the loading of saved files. The .csv File Information The the .csv file shows the following information: SSD Cache Capacity (GB) –...
  • Page 188: Ssd Cache Management Tasks

    Cache hit percentage – The cache-hit percentage indicates the percentage of all  read commands that find data in the SSD cache for each of the cache capacities. For almost all workloads, a cache-hit percentage around 75 percent indicates that you have sufficient capacity.
  • Page 189 View information about the drives, status, and capacity of the SSD cache.  Locate the drives that physically comprise the SSD cache.  Adding drives to and removing drives from the SSD cache.  Suspend and resume SSD cache operation. ...
  • Page 190 You can increase the capacity of an existing SSD cache by using this command to add solid SSDs: set ssdCache [ssdCacheName]  addDrives=(trayID1,drawerID1,slotID1 ... trayIDn,drawerIDn,slotIDn) You can add one or more SSDs by specifying the location of the drives that you want to add.
  • Page 191 This command temporarily stops caching for all of the volumes that are using the SSD cache. While caching is stopped, host reads are serviced from the base volumes instead of from the SSD cache. After performing maintenance, you can restart the SSD cache by using this command: resume ssdCache [ssdCacheName] Renaming the SSD Cache If you want to change the name of the SSD cache, you can use this command:...
  • Page 193: Maintaining A Storage Array

    Maintaining a Storage Array Maintenance covers a broad spectrum of activity with the goal of keeping a storage array operational and available to all hosts. This chapter provides descriptions of commands you can use to perform storage array maintenance. The commands are organized into four sections: Routine maintenance ...
  • Page 194: Running A Redundancy Check

    Recovered media error – The drive could not read the requested data on its first  attempt. The result of this action is that the data is rewritten to the drive and verified. The error is reported to the Event Log. Redundancy mismatches –...
  • Page 195: Resetting A Controller

    set (allVolumes | volume [volumeName] | volumes [volumeName1 ... volumeNameN] | volume <wwID>) redundancyCheckEnabled=(TRUE | FALSE) Resetting a Controller – NOTE When you reset a controller, the controller is no longer available for I/O operations until the reset is complete. If a host is using volumes that are owned by the controller being reset, the I/O that is directed to the controller is rejected.
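    For illustration, a minimal sketch of resetting the controller in slot A follows; confirm the exact form in the Command Line Interface and Script Commands reference before use.
    reset controller [a];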
  • Page 196: Synchronizing The Controller Clocks

    To clear persistent volume reservations, use this command: clear (allVolumes | volume [volumeName] | volumes [volumeName1 ... volumeNameN]) reservations Synchronizing the Controller Clocks – To synchronize the clocks on both controllers in a storage array with the host clock, use this command: set storageArray time At times, you might need to locate a specific drive.
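    To physically identify a drive, a sketch might use the locate operation shown below; the tray and slot identifiers are placeholders, and the exact keyword should be verified in the command reference.
    start drive [1,4] locate;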
  • Page 197: Performance Tuning

    Basic Process Steps – Relocating a volume group includes these procedures: verifying the status of the storage array, locating the drives in the volume group, placing the volume group offline, removing drives from the storage array, and replacing the volume group into the new storage array. To perform these steps, you must be familiar with the following CLI commands.
  • Page 198 Table 25 Information About Storage Array Performance Type of Information Description Devices These devices are included in the file: Controllers – The controller in slot A or slot B and a list of the  volumes that are owned by the controller Volumes –...
  • Page 199: Changing The Raid Levels

    Changing the RAID Levels – When you create a volume group, you can define the RAID level for the volumes in that volume group. You can change the RAID level later to improve performance or provide more secure protection for your data. NOTE RAID Level 6 is a premium feature for the CDE3992 controller-drive tray, CDE3994 controller-drive tray, and the CE4900 controller-drive tray.
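    For example, a minimal sketch of moving a volume group to RAID Level 6 follows; the group identifier 2 is a placeholder, and the bracket syntax is an assumption to verify against the command reference.
    set volumeGroup [2] raidLevel=6;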
  • Page 200: Defragmenting A Volume Group

    The set volume command lets you change settings for these items: The cache flush modifier  The cache without batteries enabled or disabled  The mirror cache enabled or disabled  The read cache enabled or disabled  The write cache enabled or disabled ...
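    Building on the list above, the sketch below adjusts several of these settings on a hypothetical volume named "7"; the attribute names are the same ones used in the configuration script example in Appendix B.
    set volume["7"] readCacheEnabled=true;
    set volume["7"] writeCacheEnabled=true;
    set volume["7"] cacheFlushModifier=10;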
  • Page 201: Collecting All Support Data

    You are not required to perform any action to save the error data to a file.  The CLI does not have any provision to avoid over-writing an existing version of  the file that contains error data. For error processing, errors appear as two types: Terminal errors or syntax errors that you might enter ...
  • Page 202 The following table lists the type of support data that you can collect. For the commands that you can use to collect support bundle data, refer to Command Line Interface and Script Commands. Table 26 Support Data for the Storage Array Type of Data Description and File Name Storage array support data...
  • Page 203 Type of Data Description and File Name Switch-on-a-chip (SOC) error Information from the loop-switch ports that are connected statistics to Fibre Channel devices. soc-statistics.csv Cable and connections A detailed list of actions that describes the drive side cabling and connections. connection.txt Drive command aging timeout A detailed list of drive information related to current and...
  • Page 204 Type of Data Description and File Name Object bundle A detailed description of the status of the storage array and its components, which was valid at the time that the file was generated. The object bundle file is a binary file and does not contain human-readable information.
  • Page 205: Collecting Drive Data

    Type of Data – Description and File Name: Storage array profile – A list of all components and properties of a storage array. storage-array-profile.txt Controller trace buffer – DQ trace buffer of each controller. trace-buffers.7z Environmental services module (ESM) state capture – A detailed description of the current state of the ESMs in a storage array.
  • Page 206: Running Read Link Status Diagnostics

    The write test initiates a write command as it would be sent over an I/O data path to the diagnostics region on a specified drive. This diagnostics region is then read and compared to a specific data pattern. If the write fails or the data compared is not correct, the controller is considered to be in error, and it is failed and placed offline.
  • Page 207 Error counts are calculated from the current baseline. The baseline describes the error count values for each type of device in the Fibre Channel loop, either when the controller goes through its start-of-day sequence or when you reset the baseline. The baseline indicates the difference in error counts from the time the baseline was established to the time you request the read link status data.
  • Page 208 Type of Data Description Loss of synchronization The total number of LOS errors that were detected on the Fibre (LOS) Channel loop from the baseline time to the current date and time. LOS errors indicate that the receiver cannot acquire symbol lock with the incoming data stream due to a degraded input signal.
  • Page 209: Collecting Switch-On-A-Chip Error Statistics

    Type of Data – Description: Invalid cyclic redundancy check (ICRC) – The total number of ICRC errors that were detected on the Fibre Channel loop from the baseline date to the current date and time. An ICRC count indicates that a frame has been received with an invalid cyclic redundancy check value.
  • Page 210: Recovery Operations

    The loop cycle count  The operating system (OS) error count  The port connections attempted count  The port connections held off count  The port utilization  The method for collecting error statistics starts by establishing a baseline for the SOC error statistics.
  • Page 211 Taking a controller offline can seriously impact data integrity and storage array operation. If you do not use write cache mirroring, data in the cache of the controller you  place offline is lost. If you take a controller offline and you have controller failover protection through ...
  • Page 212: Changing The Controller Ownership

    Changing the Controller Ownership – You can change which controller is the owner of a volume by using the set volume command. The command takes this form: set (allVolumes | volume [volumeName] | volumes [volumeName1 ... volumeNameN] | volume <wwID>) owner=(a | b) ATTENTION Possible loss of data access –...
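    For example, a minimal sketch that moves ownership of a placeholder volume to the controller in slot B, following the form shown above:
    set volume ["Jan_04_Account"] owner=b;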
  • Page 213: Redistributing Volumes

    A volume is automatically initialized when you first create it. If the volume starts showing failures, you might be required to re-initialize the volume to correct the failure condition. Consider these restrictions when you initialize a volume: You cannot cancel the operation after it begins. ...
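    As a hedged sketch of re-initializing a volume (the volume name is a placeholder, and the exact command form should be confirmed in the command reference):
    start volume ["Jan_04_Account"] initialize;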
  • Page 214 ATTENTION Possible loss of data access – Never remove a component that has a Service Action Required indicator light on unless the Service Action Allowed indicator light is on. If a component fails and must be replaced, the Service Action Required indicator light on that canister comes on to indicate that service action is required, provided no data availability dependencies or other conditions exist that dictate the canister should not be removed.
  • Page 215 not come on. For example, if the power supply in the power-fan canister in slot A has failed, then replacement of the controller canister in slot B, the interconnect-battery canister, or the power-fan canister in slot B is not allowed, which is indicated when the Service Action Allowed indicator light stays off for those canisters.
  • Page 217: Appendix A Examples Of Information Returned By The Show Commands

    Examples of Information Returned by the Show Commands – This appendix provides examples of information that is returned by the show commands. These examples show the type of information and the information detail. This information is useful in determining the components, features, and identifiers that you might need when you configure or maintain a storage array.
  • Page 232: Show Controller Nvsram

    Show Controller NVSRAM – The show controller NVSRAM command returns a table of values in the controller NVSRAM that is similar to that shown in this example. With the NVSRAM information from the table, you can modify the contents of the NVSRAM by using the set controller command.
  • Page 235: Show Volume

    Show Volume The show volume command returns information about the volumes in a storage array. STANDARD VOLUMES------------------------------  SUMMARY  Number of standard volumes: 5  See other Volumes sub-tabs for premium feature information.  NAME STATUS CAPACITY RAID LEVEL VOLUME LUN ...
  • Page 236 High  priority: Enabled  Read cache: Enabled  Write cache: Disabled  Write cache without batteries: Enabled  Write cache with mirroring: 10.00  Flush write cache after (in seconds): Enabled  Dynamic cache read prefetch: Disabled  Enable background media scan: Disabled ...
  • Page 237 Optimal  Volume status: 10.000 GB  Capacity: Volume world-wide 60:0a:0b:80:00:29:ed:12:00:00  identifier: 16  Subsystem ID (SSID): Volume-Group-2  Associated volume group: 1  RAID level: 15  LUN: Default Group  Accessible By: Hard Disk Drive  Media type: Fibre Channel ...
  • Page 238 Yes  Tray loss protection: No  Secure: Preferred owner: Controller in slot B  Current owner: Controller in slot B  Segment 128 KB  size: Capacity reserved for future segment size changes: Yes  Maximum future segment 2,048 KB  size: Modification High ...
  • Page 239 Not Appl  size: Modification Low  priority: Enabled  Read cache: Disabled  Write cache: Disabled  Write cache without batteries: Disabled  Write cache with mirroring: 10.00  Flush write cache after (in seconds): Disabled  Dynamic cache read prefetch: Enabled ...
  • Page 240 Segment 64 KB  size: Capacity reserved for future segment size changes: No  Maximum future segment Not ap  size: Modification High  priority: Enabled  Read cache: Enabled  Write cache: Disabled  Write cache without batteries: Enabled  Write cache with mirroring: 10.00 ...
  • Page 241 slot B  Current owner: Controller in slot B  32 KB  Segment size: Capacity reserved for future segment size No  changes: Maximum future segment size: applicable  High  Modification priority: MIRROR REPOSITORY VOLUME NAME: Mirror Repository 1  Mirror repository volume status: Optimal ...
  • Page 242 8:54 AM  Unnamed  Associated base volume (standard): Associated snapshot repository volume: DAE1-1  Volume world-wide 60:0a:0b:80:00:29:ed:12:00  identifier: 100.004 MB  Capacity: Controller in slot B  Preferred owner: Controller in slot B  Current owner:  COPIES------------------------------  SUMMARY ...
  • Page 243 Volume world-wide 60:0a:0b:80:00:47:5b:8a  identifier: Disabled  Read-only: Copy pair: Unnamed and 3  Stopped  Copy status: None  Start timestamp: None  Completion timestamp: Lowest  Copy priority: Unnamed  Source volume: Volume world-wide 60:0a:0b:80:00:29:ed:12  identifier: 11  Target volume: Volume world-wide 60:0a:0b:80:00:29:ed:12 ...
  • Page 244: Show Drive Channel Stat

    60:0a:0b:80:00:47:5b:8  identifier: Enabled  Read-only:  MIRRORED PAIRS------------------------------   Number of mirrored pairs: 0 of 64 used   MISSING VOLUMES------------------------------   Number of missing volumes: 0 Show Drive Channel Stat – The show drive channel stat command returns information about the drive channels in a storage array.
  • Page 245 Max. Rate: 4 Gbps  Current Rate: 4 Gbps  Rate Control: Auto  Controller A link status: Up  Controller B link status: Up  Trunking active: No   DRIVE COUNTS   Total # of attached drives: 44  Connected to: Controller A, Port 8 ...
  • Page 246 Controller B link status: Up  Trunking active: No   DRIVE COUNTS   Total # of attached drives: 0   CUMULATIVE ERROR COUNTS  Controller A  Baseline time set: 10/30/10 1:15:59 PM  Sample period (days, hh:mm:ss): 32 days, 00:55:04 ...
  • Page 247 Baseline time set: 10/30/10 1:15:59 PM  Sample period (days, hh:mm:ss): 32 days, 00:55:04  0  Controller detected errors: 0  Drive detected errors: 0  Timeout errors: N/A  Link down errors: 13414513  Total I/O count:  Controller B ...
  • Page 248  Controller B  Baseline time set: 10/30/10 1:15:59 PM  Sample period (days, hh:mm:ss): 32 days, 00:53:22  54  Controller detected errors: 0  Drive detected errors: 0  Timeout errors: N/A  Link down errors: 13039285  Total I/O count: ...
  • Page 249 1:15:59 PM  Sample period (days, hh:mm:ss): 32 days, 00:53:22  1  Controller detected errors: 52  Drive detected errors: 0  Timeout errors: N/A  Link down errors: 182512319  Total I/O count:  DRIVE CHANNEL 6  Port: 3, 4 ...
  • Page 250 DRIVE CHANNEL 7  Port: 5, 6  Optimal  Status: Max. Rate: 4 Gbps  Current Rate: 2 Gbps  Rate Control: Auto  Controller A link status: Up  Controller B link status: Up  Trunking active: No  ...
  • Page 251 Trunking active: No   DRIVE COUNTS   Total # of attached drives: 0   CUMULATIVE ERROR COUNTS  Controller A  Baseline time set: 10/30/10 1:15:59 PM  Sample period (days, hh:mm:ss): 32 days, 00:55:04  44  Controller detected errors: 0 ...
  • Page 252: Show Drive

    Show Drive – The show drive command returns information about the drives in a storage array.
  • Page 259: Appendix B Example Script Files

    Example Script Files This appendix provides example scripts for configuring a storage array. These examples show how the script commands appear in a complete script file. Also, you can copy these scripts and modify them to create a configuration unique to your storage array.
  • Page 260 show “Setting additional attributes for volume 7”;  //Configuration settings that cannot be set during volume creation  set volume[“7”] cacheFlushModifier=10;  set volume[“7”] cacheWithoutBatteryEnabled=false;  set volume[“7”] mirrorEnabled=true;  set volume[“7”] readCacheEnabled=true;  set volume[“7”] writeCacheEnabled=true;  set volume[“7”] mediaScanEnabled=false;  set volume[“7”] redundancyCheckEnabled=false;...
  • Page 261 create volume volumeGroup=volumeGroupNumber  userLabel=volumeName  [freeCapacityArea=freeCapacityIndexNumber]  [capacity=volumeCapacity | owner=(a | b) |  cacheReadPrefetch=(TRUE | FALSE) |  segmentSize=segmentSizeValue]  [trayLossProtect=(TRUE | FALSE)] The general form of the command shows the optional parameters in a different sequence than the optional parameters in the example command. You can enter optional parameters in any sequence.
  • Page 262: Configuration Script Example 2

    Configuration Script Example 2 – This example creates a new volume by using the create volume command with user-defined drives in the storage array. Show “Create RAID3 Volume 2 on existing Volume Group 2”; //This command creates the volume group and the initial volume on that group.
  • Page 263: Appendix C Asynchronous Write Mode Mirror Utility

    Asynchronous Write Mode Mirror Utility This appendix describes the host utility to achieve periodic consistency with Asynchronous Write Mode Mirror configurations. This appendix also describes how to run the Asynchronous Write Mode utility. NOTE The Asynchronous Write Mode Mirror utility works only with the synchronous remote mirror commands.
  • Page 264: Operation Of The Asynchronous Synchronous Mirroring Utility

    Operation of the Asynchronous Synchronous Mirroring Utility – The Asynchronous Write Mode Mirror utility performs steps that generate a recoverable state for multiple mirror volumes at a secondary site. The utility runs these steps to create consistent, recoverable images of a set of volumes: On the primary storage array –...
  • Page 265: Configuration Utility

    The maximum number of volume sets that you can specify in the file is four.  The maximum number of mirrored pairs that you can specify as part of a  consistency group is eight. The optional parameter, -d, lets you specify a file to which you can send information regarding how the utility runs.
  • Page 266 mirrorSpec ::= "Mirror" "{" {mirrorAttribute} "}"  mirrorAttribute ::= primarySpec | secondarySpec |  snapshotSpec primarySpec ::= "Primary" "=" volumeSpec  secondarySpec ::= "Secondary" "=" volumeSpec  snapshotSpec ::= "Copy" "=" volumeSpec  volumeSpec ::= storageArrayName"."volumeUserLabel In this syntax, items enclosed in double quotation marks (“ ”) are terminal symbols. Items separated by a vertical bar (|) are alternative values (enter one or the other, but not both).
  • Page 267 NOTE In the Asynchronous Write Mode Mirror utility configuration file, you must specify the primary volume, the secondary volume, and the copy (snapshot (legacy)) volume. The utility does not make sure that the secondary volume is correct for the Synchronous Mirroring relationship. The utility also does not make sure that the snapshot (legacy) volume is actually a sn apshot (legacy) for the secondary volume.
  • Page 269: Appendix D Simplex-To-Duplex Conversion

    Simplex-to-Duplex Conversion Some models of controller trays and controller-drive trays are available in either a simplex configuration (one controller) or a duplex configuration (two controllers). You can convert a simplex configuration to a duplex configuration by installing new nonvolatile static random access memory (NVSRAM) and a second controller. This appendix explains how to convert a simplex configuration to a duplex configuration by using CLI commands or by using the storage management software.
  • Page 270: Downloading The Nvsram By Using The Command Line Interface

    Download the duplex NVSRAM by using the command line interface.  Download the duplex NVSRAM by using the graphical user interface (GUI) of  the storage management software. Copy the duplex NVSRAM from the installation CD in the conversion kit. ...
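    For illustration, downloading a duplex NVSRAM file with the CLI might look like the sketch below; the IP addresses and file path are placeholders, and the exact command form should be verified against the Command Line Interface and Script Commands reference.
    c:\...\smX\client>smcli 123.45.67.88 123.45.67.89 -c “download storageArray NVSRAM file=\“C:\duplex_nvsram.dlp\”;”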
  • Page 271: Copying Nvsram From The Installation Cd

    Copying NVSRAM from the Installation CD – Make a copy of your storage array profile, and save it in the event that you might need to restore the storage array. Insert the Installation CD into the CD-ROM drive. At the storage management station, start the SMclient software. In the Array Management Window, select Advanced >>...
  • Page 272: Step 3 - Installing The Second Controller

    Step 3 – Installing the Second Controller. ATTENTION Possible hardware damage – To prevent electrostatic discharge damage to the tray, use proper antistatic protection when handling tray components. NOTE For best operation, the new controller must have a part number identical to the existing controller, or the new controller must be a certified substitute.
  • Page 273: Step 5 - Connecting The Controller To A Drive Tray

    Plug the other end of the fiber-optic cable into one of the HBAs in the host (direct topology) or into a switch (switch topology). Attach a label to each end of the cable by using this scheme. A label is very important if you need to disconnect the cables later to service a controller.
  • Page 274: Step 6 - Running Diagnostics

    Step 6 – Running Diagnostics. Using the LEDs on the storage array and information provided by the storage management software, check the status of all trays in the storage array. Does any component have a Needs Attention status? Yes – Click the Recovery Guru toolbar button in the Array Management...
  • Page 276 © Copyright 2012 NetApp, Inc. All rights reserved.
