Silicon Graphics InfiniteStorage 4000 Series User Manual

Failover Drivers Guide for SANtricity ES
SGI InfiniteStorage 4000 Series and 5000 Series
Failover Drivers Guide for SANtricity ES
(ISSM 10.86)
 
 
 
 
 
 
007-5886-002
April 2013


Summary of Contents for Silicon Graphics InfiniteStorage 4000 Series

  • Page 1 SGI InfiniteStorage 4000 Series and 5000 Series Failover Drivers Guide for SANtricity ES (ISSM 10.86)             007-5886-002 April 2013...
  • Page 2 The information in this document supports the SGI InfiniteStorage 4000 series and 5000 series storage systems (ISSM 10.86). Refer to the table below to match your specific SGI InfiniteStorage product with the model numbers used in this document.   SGI Model #...
  • Page 3: Copyright Information

    Copyright information Copyright © 1994–2012 NetApp, Inc. All rights reserved. Printed in the U.S.A. No part of this document covered by copyright may be reproduced in any form or by any means— graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.
  • Page 4 Trademark information NetApp, the NetApp logo, Network Appliance, the Network Appliance logo, Akorri, ApplianceWatch, ASUP, AutoSupport, BalancePoint, BalancePoint Predictor, Bycast, Campaign Express, ComplianceClock, Cryptainer, CryptoShred, Data ONTAP, DataFabric, DataFort, Decru, Decru DataFort, DenseStak, Engenio, Engenio logo, E-Stack, FAServer, FastStak, FilerView, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexSuite, FlexVol, FPolicy, GetSuccessful, gFiler, Go further, faster, Imagine Virtually Anything, Lifetime Key Management, LockVault, Manage ONTAP, MetroCluster, MultiStore, NearStore, NetCache, NOW (NetApp on the Web), Onaro, OnCommand,...
  • Page 5: Table Of Contents

    Table of Contents: Chapter 1 Overview of Failover Drivers ... 1; Failover Driver Setup Considerations ...
  • Page 6 Reduced Failover Timing ... 15; Path Congestion Detection and Online/Offline Path States ...
  • Page 7 Chapter 4 Device Mapper Multipath for the Linux Operating System ... 49; Device Mapper Features ... 49; Known Limitations and Issues of the Device Mapper ...
  • Page 8 viii Table of Contents
  • Page 9: Overview Of Failover Drivers

    Overview of Failover Drivers Failover drivers provide redundant path management for storage devices and cables in the data path from the host bus adapter to the controller. For example, you can connect two host bus adapters in the system to the redundant controller pair in a storage array, with different buses for each controller.
  • Page 10: Supported Failover Drivers Matrix

    The I/O Shipping feature implements support for ALUA. With the I/O Shipping feature, a storage array can service I/O requests through either controller in a duplex configuration. However, I/O shipping alone does not guarantee that I/O is routed to the optimized path. With Windows, Linux, and VMware, your storage array supports an extension to ALUA to address this problem, so that volumes are accessed through the optimized path unless that path fails.
  • Page 11: I/O Coexistence

    Table 2 Matrix of Supported Failover Drivers by Operating System (OS)
    Solaris 11 and 11.1: MPxIO (TPGS/ALUA)
    VMWare 4.1 u3, 5.0 u2, and 5.1: VMWare native SATP/ALUA
    Mac OS 10.6 and 10.7: ATTO driver with TPGS/ALUA
    HPUX 11.31: TPGS/ALUA...
  • Page 12: Host Clustering Configurations

    any data path failure could result in unpredictable effects on the host system. For the greatest level of I/O protection, provide each controller in a storage array with its own connection to a separate HBA in the host system. Figure 1 Single-Host-to-Storage Array Configuration (callouts: Host System with Two Fibre Channel Host Bus Adapters; Fibre Channel Connection – ...)
  • Page 13 Both hosts have complete visibility of both controllers, all data connections, and all configured volumes in a storage array, plus failover support for the redundant controllers. However, in this configuration, you must use caution when you perform storage management tasks (especially deleting and creating volumes) to make sure that the two hosts do not send conflicting commands to the controllers in the storage arrays.
  • Page 14: Supporting Redundant Controllers

    Figure 2 Multi-Host-to-Storage Array Configuration (callouts: Two Host Systems, Each with Two Fibre Channel Host Bus Adapters; Fibre Channel Connections with Two Switches (Might Contain Different Switch Configurations); Storage Array with Two Fibre Channel Controllers). The following figure shows how failover drivers provide redundancy when the host application generates a request for I/O to controller A, but controller A fails.
  • Page 15 Figure 3 Example of Failover I/O Data Path Redundancy Host Application I/O Request Failover Driver Host Bus Adapters Controller A Failure Controller B Initial Request to the HBA Chapter 1: Overview of Failover Drivers...
  • Page 16: How A Failover Driver Responds To A Data Path Failure

    (Figure 3 callouts, continued: Initial Request to the Controller; Failed Request Returns to the Failover Driver; Failover Occurs and I/O Transfers to Another Controller; I/O Request Re-sent to Controller B.) One of the primary functions of the failover feature is to provide path management. Failover drivers monitor the data path for devices that are not working correctly or for multiple link errors.
  • Page 17: Least Queue Depth

    The multi-path driver determines which paths to a device are in an active state and can be used for load balancing. The load-balancing policy uses one of three algorithms: round robin, least queue depth, or least path weight. Multiple options for setting the load-balancing policies let you optimize I/O performance when mixed host interfaces are configured.
  • Page 18: Dividing I/O Activity Between Two Raid Controllers To Obtain The Best

    For the best performance of a redundant controller system, use the storage management software to divide I/O activity between the two RAID controllers in the storage array. You can use either the graphical user interface (GUI) or the command line interface (CLI).
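    For example, volume ownership can be assigned from the CLI. The following is a sketch only; the volume name Vol1 and the management address 192.168.128.101 are illustrative, and the exact SMcli invocation depends on your management setup:

        SMcli 192.168.128.101 -c 'set volume ["Vol1"] owner=a;'

    A matching command with owner=b moves a volume to controller B, which lets you split the I/O load between the two controllers.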
  • Page 19: Failover Drivers For The Windows Operating System

    Failover Drivers for the Windows Operating System

    The failover driver for hosts with Microsoft Windows operating systems is Microsoft Multipath I/O (MPIO) with a Device Specific Module (DSM) for SANtricity ES Storage Manager. Microsoft Multipath I/O (MPIO) provides an infrastructure to build highly available solutions for Windows operating systems (OSs).
  • Page 20: Selective Lun Transfer

    The TimeOutValue is typically reset when an HBA driver is upgraded. For information about the configurable parameters for the customized timeout feature, go to Configuration Settings for the Windows DSM and the Linux RDAC. The per-protocol timeout values feature slightly modifies the way in which the SynchTimeout parameter is evaluated.
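    After an HBA driver upgrade, it is therefore worth confirming that the disk timeout still holds the expected value. A minimal check using the standard Windows disk class key (the value 60 below is an example, not a recommendation):

        reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f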
  • Page 21: Windows Failover Cluster

    The maximum number of times that the LUN transfer is issued. This parameter setting prevents a continual ownership thrashing condition from occurring in cases where the controller tray or the controller-drive tray is attached to another host that requires the LUN be owned by the current controller. A time delay before LUN transfers are attempted.
  • Page 22: I/O Shipping Feature For Asymmetric Logical Unit Access (Alua)

    Shipping feature of the CFW or Selective LUN Transfer feature of the DSM, set the DisableLunRebalance parameter to 3. For information about this parameter, go to Configuration Settings for the Windows DSM and the Linux RDAC. The I/O Shipping feature implements support for ALUA. With earlier releases of the controller firmware (CFW), the device specific module (DSM) had to send input/output (I/O) requests for a particular volume to the controller that owned that...
  • Page 23: Reduced Failover Timing

    In the Registry Editor, expand the path to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<DSM_Driver>\Parameters. The expression <DSM_Driver> is the name of the DSM driver used in your storage array. Set the parameter DisableLunRebalance. Set the parameter ClassicModeFailover. Close the Registry Editor. Settings related to drive I/O timeout and HBA connection loss timeout are adjusted in the host operating system so that failover does not occur when a controller is restarted.
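    The same change can be scripted instead of using the Registry Editor. A sketch with reg.exe, where <DSM_Driver> is the placeholder described above and the ClassicModeFailover value is illustrative (DisableLunRebalance is set to 3, as recommended for the I/O Shipping feature):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\<DSM_Driver>\Parameters /v DisableLunRebalance /t REG_DWORD /d 3 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\<DSM_Driver>\Parameters /v ClassicModeFailover /t REG_DWORD /d 0 /f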
  • Page 24 NOTE Any condition that causes blank registration information to be returned, where previous requests returned valid registration information, can cause the drive resource to fail. If the arbitration succeeds, the resource is brought online. Otherwise, the resource remains in a failed state. One reason for an arbitration failure is the combination of brownout condition and plug-and-play (PnP) timing issues if the HBA timeout period expires.
  • Page 25: Path Congestion Detection And Online/Offline Path States

    NOTE The APTPL feature within the DSM driver is enabled using the DSM utility with the -o (feature) option by setting the SetAPTPLForPR option to 1. According to the SCSI specification, you must set this option before PR registration occurs. If you set this option after a PR registration occurs, take the disk resource offline, and then bring the disk resource back online.
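    For example, following the dsmUtil -o syntax shown later in this chapter, enabling APTPL and persisting the setting might look like this sketch:

        dsmUtil -o SetAPTPLForPR=1,SaveSettings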
  • Page 26 Table 1 Configuration Settings for the Windows DSM and the Linux RDAC Default Value Parameter Name Description (Operating System) The maximum number of paths (logical endpoints) MaxPathsPerController that are supported per controller. The total number of paths to the storage array is the MaxPathsPerController value multiplied by the number of controllers.
  • Page 27 Default Value Parameter Name Description (Operating System) This setting determines which errors to log. These ErrorLevel values are valid: 0 – Display all errors 1 – Display path failover errors, controller failover errors, re-triable errors, fatal errors, and recovered errors 2 –...
  • Page 28 Default Value Parameter Name Description (Operating System) The number of times a Unit Attention (UA) status UaRetryCount from a LUN is retried. This parameter does not apply to UA conditions due to Quiescence In Progress. The allowed values range from 0x0 to 0x64 (100) for the Windows OS, and from 0x0 to 0xFFFFFFFF For Linux RDAC.
  • Page 29: Wait Time Settings

    LoadBalancePolicy: Determines the load-balancing policy used by all volumes managed by the Windows DSM and Linux RDAC failover drivers. These values are valid: 0 – Round robin with subset. 1 – Least queue depth with subset. 2 – ...
  • Page 30 known as a wait time. If the NotReadyWaitTime value, the BusyWaitTime value, and the QuiescenceWaitTime value are greater than the ControllerIoWaitTime value, they will have no effect. For the Linux OS, the configuration settings can be found in the /etc/mpp.conf file.
  • Page 31: Configuration Settings For Path Congestion Detection And Online/Offline Path

    The following configuration settings are applied using the dsmUtil -o option parameter.
    Table 3 Configuration Settings for the Path Congestion Detection Feature (parameter name, description, default value)
    CongestionDetectionEnabled: A Boolean value that indicates whether path congestion detection is enabled.
  • Page 32: Example Configuration Settings For The Path Congestion Detection Feature

    CongestionSamplingInterval: The number of I/O requests that must be sent to a path before the nth request is used in the average response time calculation. For example, if this parameter is set to 100, every 100th request sent to a path is used in the average response time calculation.
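    Putting these parameters together, enabling path congestion detection might look like the following sketch, using the dsmUtil -o syntax from this chapter (the threshold values are illustrative, not recommendations):

        dsmUtil -o CongestionResponseTime=10000,CongestionSamplingInterval=100,SaveSettings
        dsmUtil -o CongestionDetectionEnabled=0x1,SaveSettings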
  • Page 33: Disks

    NOTE The path ID (in this example 0x77070001) is found using the dsmUtil -g command. To use the dsmUtil command to set a path to online: dsmUtil -o SetPathOnline=0x77070001. Running the DSM Failover Driver: Consider a scenario where you map storage to a Windows Server 2008 R2 (or later) parent partition.
  • Page 34: dsmUtil Utility

    # PowerShell script: Set_SCSI_Passthrough.ps1
    # Usage (hypothetical VM name): .\Set_SCSI_Passthrough.ps1 VM1
    $TargetHost=$args[0]
    $vsManagementService=gwmi MSVM_VirtualSystemManagementService -namespace "root\virtualization"
    foreach ($Child in Get-WmiObject -Namespace root\virtualization Msvm_ComputerSystem -Filter "ElementName='$TargetHost'")
    {
        $vmData=Get-WmiObject -Namespace root\virtualization -Query "Associators of {$Child} Where ResultClass=Msvm_VirtualSystemGlobalSettingData AssocClass=Msvm_ElementSettingData"
        # Allow the guest to issue the full SCSI command set to pass-through disks
        $vmData.AllowFullSCSICommandSet=$true
        $vsManagementService.ModifyVirtualSystem($Child,$vmData.PSBase.GetText(1))|out-null
    }

    dsmUtil Utility: The dsmUtil utility is a command-line driven utility that works only with the Multipath I/O (MPIO) Device Specific Module (DSM) solution.
  • Page 35 Table 4 dsmUtil Parameters Parameter Description Shows a summary of all storage arrays seen by the DSM. The summary shows the -a [target_id] target_id, the storage array WWID, and the storage array name. If target_id is specified, DSM point-in-time state information appears for the storage array. On UNIX operating systems, the virtual HBA specifies unique target IDs for each storage array.
  • Page 36: Device Manager

    -o [[feature_action_name[=value]] | [feature_variable_name=value]][, SaveSettings]: Troubleshoots a feature or changes a configuration setting. Without the SaveSettings keyword, the changes only affect the in-memory state of the variable. The SaveSettings keyword changes both the in-memory state and the persistent state. Some example commands are: dsmUtil -o ...
  • Page 37: Determining If A Path Has Failed

    If a controller supports multiple data paths and one of those paths fails, the failover driver logs the path failure in the OS system log file. In the storage management software, the storage array shows a Degraded status. If all of the paths to a controller fail, the failover driver makes entries in the OS system log that indicate a path failure and failover.
  • Page 38: Removing Santricity Es Storage Manager And The Dsm Failover Driver From The Windows Os

    If you receive this warning and want to update SANtricity ES Storage Manager, click OK. Select whether to automatically start the Event Monitor. Click Next. Start the Event Monitor for the one I/O host on which you want to receive —...
  • Page 39 \Device\Scsi – This structure contains information that is maintained by the ScsiPort driver. The objects shown in the\Device\Scsi directory in the following table show the physical volumes that are identified by the HBAs. If a specific volume is not in this list, the DSM driver cannot detect the volumes. Table 5 Object Path and Descriptions of the WinObj DSM Object Path Description...
  • Page 40: Frequently Asked Questions About Windows Failover Drivers

    The objects shown in the \Device\MPPDSM directory show the items that are reported by MPIO to the DSM driver. If a device is not in this list, MPIO has not notified the DSM driver. Frequently Asked Questions about Windows Failover Drivers: The following table lists answers to questions about Windows failover drivers.
  • Page 41 Question Answer Why does the SMdevices utility not show any volumes? If the SMdevices utility does not show any volumes, perform these steps: Make sure that all cables are seated correctly. Make sure that all gigabit interface converters (GBICs) are seated correctly.
  • Page 42 Question Answer After I install the DSM driver, my system takes a long You might still experience long start times after you install the time to start. Why? DSM driver because the Windows OS is completing its configuration for each device. For example, you install the DSM driver on a host with no storage array attached, and you restart the host.
  • Page 43 Question Answer What should I do if I receive this message? You do not need to update files. The information is dynamically created only when the storage array is found Warning: Changing the storage array initially. Use one of these two options to correct this behavior: name can cause host applications to Restart the host server.
  • Page 44 Frequently Asked Questions about Windows Failover Drivers...
  • Page 45: Failover Drivers For The Linux Operating System

    Failover Drivers for the Linux Operating System

    Redundant Dual Active Controller (RDAC) is the supported failover driver for SANtricity ES Storage Manager with Linux operating systems. Linux OS Restrictions: This version of the Linux OS RDAC failover driver does not support any Linux OS 2.4 kernels, such as the following: SUSE SLES 8 OS...
  • Page 46: Prerequisites For Installing Rdac On The Linux Os

    Three load-balancing policies are supported: round robin subset, least queue depth, and path weight. Before installing RDAC on the Linux OS, make sure that your storage array meets these conditions: Make sure that the host system on which you want to install the RDAC driver has supported HBAs.
  • Page 47: Installing Santricity Es Storage Manager And Rdac On The Linux Os

    For Emulex HBAs, INITRD_MODULES includes an lpfcdd driver or an lpfc driver in the /etc/sysconfig/kernel file. NOTE SANtricity ES Storage Manager requires that the different Linux OS kernels have separate installation packages...
  • Page 48: Installing Rdac Manually On The Linux Os

    Click Install. You will receive a warning after you click Install. The warning tells you that the RDAC driver is not automatically installed. You must manually install the RDAC driver. The RDAC source code is copied to the specified directory in the warning message.
  • Page 49: Making Sure That Rdac Is Installed Correctly On The Linux Os

    Restart the system by using the new boot menu option. Make sure that these driver stacks were loaded after restart:
    — scsi_mod
    — sd_mod
    — sg
    — mppUpper
    — The physical HBA driver module
    — ...
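    One quick way to confirm these modules are present is lsmod; a sketch:

        lsmod | grep -E 'mpp|sd_mod|sg'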
  • Page 50: Configuring Failover Drivers For The Linux Os

    Use a utility, such as devlabel, to create user-defined device names that can map devices based on a unique identifier, called a UUID. Use the udev command for persistent device names. The udev command dynamically generates device name links in the /dev/disk directory based on path, ID, or UUID.
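    For example, to inspect the persistent names that udev has generated, list the by-id links (the device names shown will differ on your system):

        ls -l /dev/disk/by-id/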
  • Page 51: Compatibility And Migration

    AllowHBAsgDevs: Determines whether to create individual SCSI generic (SG) devices for each I:T:L for the end LUN through the physical HBA. This parameter can take the following values: 0 – Do not allow creation of SG devices for each I:T:L through the physical HBA...
  • Page 52 Table 2 mppUtil Parameters Parameter Description Shows the RDAC driver’s internal information for the specified virtual -a target_name target_name (storage array name). If a target_name value is not included, the -a parameter shows information about all of the storage arrays that are currently detected by this host. Clears the WWN file entries.
  • Page 53 Parameter Description Sets the current error reporting level to error_level, which can -e error_level have one of these values: 0 – Show all errors. 1 – Show path failover, controller failover, retryable, fatal, and recovered errors. 2 – Show path failover, controller failover, retryable, and fatal errors.
  • Page 54: Frequently Asked Questions About Linux Failover Drivers

    -s ["failback" | "avt" | "busscan" | "forcerebalance"]: Manually initiates one of the RDAC driver's scan tasks. A "failback" scan causes the RDAC driver to reattempt communications with any failed controllers. An "avt" scan causes the RDAC driver to check whether AVT has been enabled or disabled for an entire storage array.
  • Page 55 Question Answer What must I do after applying a kernel After you apply the kernel update and start the new kernel, perform these steps update? to build the RDAC Initial Ram Disk image (initrd image) for the new kernel: Change the directory to the Linux RDAC source code directory. Type make uninstall, and press Enter.
  • Page 56 Question Answer What should I do if I receive this The path failover drivers that cause this warning are the RDAC drivers on both message? the Linux OS and the Windows OS. The storage array user label is used for storage array-to-virtual target ID binding Warning: Changing the in the RDAC driver.
  • Page 57: Device Mapper Multipath For The Linux Operating System

    Device Mapper Multipath for the Linux Operating System

    Device Mapper (DM) is a generic framework for block devices provided by the Linux operating system. It supports concatenation, striping, snapshots (legacy), mirroring, and multipathing. The multipath function is provided by the combination of the kernel modules and user space tools.
  • Page 58: Device Mapper Operating Systems Support

    NetApp supports Device Mapper for SLES 11 and RHEL 6.0 onwards. All future updates of these OS versions are also supported. The following sections provide specific information on each of these operating systems. I/O Shipping Feature: The I/O Shipping feature implements support for ALUA.
  • Page 59: Operating System Version 6.3

    Reboot the host. Setting Up the multipath.conf File: Use the procedures under Setting Up the multipath.conf File to update and configure the /etc/multipath.conf file. On the command line, type chkconfig multipath on. The multipathd daemon is enabled when the system starts again. To add scsi_dh_rdac to the initrd image, edit the /etc/sysconfig/kernel file to add the directive scsi_dh_rdac to the INITRD_MODULES section of the file.
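    A sketch of that /etc/sysconfig/kernel edit and the rebuild step on SLES (the existing module list varies by system):

        INITRD_MODULES="... scsi_dh_rdac"
        mkinitrd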
  • Page 60: Linux Os

    Perform one of the following options to preload scsi_dh_rdac during boot:
    — Use a text editor to add the option rdloaddriver=scsi_dh_rdac at the end of the kernel options in your boot loader configuration file.
    — Enter the following commands to create the initramfs file and insert the ...
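    For the first option, on a system booted by GRUB the kernel line might end as follows; the kernel version, root device, and other options here are illustrative only:

        kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rdloaddriver=scsi_dh_rdac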
  • Page 61: Setting Up The Multipath.conf File

    Check the log file at /var/log/messages for entries similar to scsi 3:0:2:0: rdac: LUN 0 (IOSHIP). These entries indicate that the scsi_dh_rdac driver correctly recognizes ALUA mode. The keyword IOSHIP refers to ALUA mode. These messages are displayed when the devices are discovered in the system. These messages might also be displayed in dmesg logs or boot logs.
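    A simple way to search for these entries (log locations can vary by distribution):

        grep -i rdac /var/log/messages
        dmesg | grep -i ioship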
  • Page 62: Updating The Blacklist Section

    With the default settings, UTM LUNs might be presented to the host. I/O operations, however, are not supported on UTM LUNs. To prevent I/O operations on the UTM LUNs, add the vendor and product information for each UTM LUN to the blacklist section of the /etc/multipath.conf file.
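    For example, a blacklist entry for the UTM LUN often looks like the following sketch. The vendor and product strings shown are assumptions; on many of these arrays the UTM LUN reports a product ID of Universal Xport, but verify the strings your array actually reports before copying this entry:

        blacklist {
            device {
                vendor  "LSI"
                product "Universal Xport"
            }
        }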
  • Page 63 Table 1 Attributes and Values in the multipath.conf File Attribute Parameter Value Description The path grouping policy to be applied to this specific vendor path_grouping_poli group_by_prio and product storage. The program and arguments to determine the path priority prio rdac routine.
  • Page 64: Setting Up Dm-Mp For Large I/O Blocks

    rr_min_io: The number of I/Os to route to a path before switching to the next path in the same path group, using bio-based device-mapper-multipath. This applies only to systems running kernels older than 2.6.31. A smaller rr_min_io value gives better performance for large I/O block sizes.
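    Assembled into a devices stanza, the attributes above might look like the following sketch. The vendor and product strings, and any attributes not listed in Table 1 (such as hardware_handler and path_checker), are assumptions to verify against your array and your distribution's defaults:

        devices {
            device {
                vendor                "LSI"
                product               "INF-01-00"
                path_grouping_policy  group_by_prio
                prio                  rdac
                hardware_handler      "1 rdac"
                path_checker          rdac
                failback              immediate
            }
        }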
  • Page 65: Using The Device Mapper Devices

    On the command line, enter the command echo N > /sys/block/<sd device name>/queue/max_sectors_kb to set the value of the max_sectors_kb parameter for all physical paths of the dm device in sysfs. In the command, N is an unsigned number less than the max_hw_sectors_kb value for the device;...
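    For example, to cap transfers at 128 KB on a hypothetical path device sdb (repeat for every physical path that backs the dm device):

        echo 128 > /sys/block/sdb/queue/max_sectors_kb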
  • Page 66: Troubleshooting The Device Mapper

    Table 3 Troubleshooting the Device Mapper
    Is the multipath daemon, multipathd, running? At the command prompt, enter the command: /etc/init.d/multipathd status
    Why are no devices listed when you run the multipath -ll command? At the command prompt, enter the command: cat /proc/scsi/scsi...
  • Page 67: Failover Drivers For The Solaris Operating System

    Failover Drivers for the Solaris Operating System

    MPxIO is the supported failover driver for the Solaris operating system. SANtricity ES Storage Manager no longer supports or includes RDAC for these Solaris OSs: Solaris 11, Solaris 10 OS, Solaris 9 OS, and Solaris 8 OS. NOTE MPxIO is not included on the SANtricity ES Storage...
  • Page 68 Examine the /etc/symsm/mnf file. Each currently connected storage array should be on one line. An example line is: infiniti23/24~1T01610104~ 0 1 7~1T04110240~ 7~0~3~~c6t3d0~c4t2d7~ Make sure that there are no extra lines for disconnected storage arrays. Make sure that two controllers are listed on each line. (The example shows c6t3 and c4t2.) Make sure that these controllers are the correct controllers.
  • Page 69: Installing Mpxio On The Solaris 9 Os

    MPxIO is not included in the Solaris 9 OS. To install MPxIO on the Solaris 9 OS, perform these steps: Download and install the SAN 4.4x release Software/Firmware Upgrades and Documentation from this website: http://www.sun.com/storage/san/ Install recommended patches.
  • Page 70: Enabling Mpxio On The Solaris 10 Os

    MPxIO is included in the Solaris 10 OS. Therefore, MPxIO does not need to be installed; it only needs to be enabled. NOTE MPxIO for iSCSI is enabled by default. To disable MPxIO on specific drives, add a line similar to the following line to the /kernel/drv/fp.conf Fibre Channel port driver configuration file: name="fp"...
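    On Solaris 10, MPxIO for Fibre Channel ports is commonly enabled with the stmsboot utility; a sketch (the utility prompts for the reboot it requires):

        stmsboot -e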
  • Page 71: Configuring Failover Drivers For The Solaris Os

    Use the default settings for all Solaris OS configurations. Frequently Asked Questions about Solaris Failover Drivers: Table 1 Frequently Asked Questions about Solaris Failover Drivers. Where can I find MPxIO-related files? You can find MPxIO-related files in these directories: /etc/ and /kernel/drv
  • Page 72 Question Answer How can I get a list of controllers and Use the lad -y command. The command uses LUN 0 to get the their volumes? information and is located in the /usr/lib/symsm/bin directory. It can be reached through /etc/raid/bin. This command updates the mnf file.
  • Page 73 Question Answer Why might the rdriver fail to attach The rdriver might not attach when no entry exists in the and what can I do about it? rdriver.conf file to match the device, or whether rdnexus runs out of buses. If no physical devnode exists, then the following actions must occur: The sd.conf file must specify LUNs explicitly.
  • Page 74 Frequently Asked Questions about Solaris Failover Drivers...
  • Page 75: Subsequent Versions

    Installing ALUA Support for VMware Versions ESX4.1U3, ESXi5.0U1, and Subsequent Versions

    Starting with ESXi5.0 U1 and ESX4.1 U3, VMware automatically includes the claim rules to select the VMW_SATP_ALUA plug-in to manage storage arrays that have the target port group support (TPGS) bit enabled. All arrays with the TPGS bit disabled are still managed by the VMW_SATP_LSI plug-in.
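    To confirm which SATP claims your array on an ESXi 5.x host, the standard esxcli namespaces can be used; a sketch (output will vary by host and array):

        esxcli storage nmp satp rule list | grep -i alua
        esxcli storage nmp device list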
  • Page 78 © Copyright 2012 NetApp, Inc. All rights reserved.
