
HP StoreVirtual 4000 User Manual

HP LeftHand Storage User Guide, Version 10.0 (AX696-96202, November 2012)


HP
LeftHand Storage User Guide
Abstract
This guide provides instructions for configuring individual storage systems, as well as for creating storage clusters, volumes,
snapshots, and remote copies.
HP Part Number: AX696-96202
Published: November 2012
Edition: 8


  Summary of Contents for HP StoreVirtual 4000

  • Page 2 © Copyright 2009, 2012 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
  • Page 3: Table Of Contents

    Contents
    1 Getting started (page 14)
    Creating storage with HP LeftHand Storage (page 14)
    Configuring storage systems (page 15)
    Creating a storage volume using the Management Groups, Clusters, and Volumes wizard (page 15)
    Enabling server access to volumes (page 16)
    Using the Map View (page 17)
    Using the display tools (page 17)
    Using views and layouts (page 17)
    Setting preferences (page 18)
    Setting the font size and locale (page 18)
    Setting naming conventions (page 18)
  • Page 4
    To reconfigure RAID (page 33)
    Configuring RAID for a P4800 G2 with 2 TB drives (page 33)
    Monitoring RAID status (page 34)
    Data reads and writes and RAID status (page 34)
    Data redundancy and RAID status (page 34)
    Managing disks (page 35)
    Getting there (page 35)
    Reading the disk report on the Disk Setup tab (page 35)
    Verifying disk status (page 37)
    Viewing disk status for the VSA (page 37)
    Viewing disk status for the HP P4500 G2 (page 37)
  • Page 5
    Which physical interface is preferred (page 59)
    Which physical interface is active (page 59)
    Summary of NIC states during failover (page 60)
    Example network cabling topologies with Adaptive Load Balancing (page 60)
    Creating a NIC bond (page 61)
    Creating the bond (page 62)
    Verify communication setting for new bond (page 63)
    Viewing the status of a NIC bond (page 64)
    Deleting a NIC bond (page 65)
    Disabling a network interface (page 67)
    Configuring a disabled interface (page 68)
  • Page 6
    7 Monitoring the SAN (page 82)
    Monitoring SAN status (page 82)
    Customizing the SAN Status Page (page 83)
    Using the SAN Status Page (page 84)
    Alarms and events overview (page 84)
    Working with alarms (page 86)
    Filtering the alarms list (page 86)
    Viewing and copying alarm details (page 87)
    Viewing alarms in a separate window (page 87)
    Exporting alarm data to a .csv file (page 87)
    Configuring events (page 87)
    Changing the event retention period (page 87)
  • Page 7
    Set DNS server (page 105)
    Set up email for notification (page 105)
    Create cluster and assign a VIP (page 105)
    Create a volume and finish creating management group (page 106)
    Management group map view tab (page 106)
    Logging in to a management group (page 106)
    Configuration Summary overview (page 106)
    Reading the configuration summary (page 107)
    Optimal configurations (page 107)
    Configuration warnings (page 107)
    Configuration errors (page 108)
  • Page 8
    Installing the Failover Manager for Hyper-V Server (page 122)
    Uninstalling the Failover Manager from Hyper-V Server (page 122)
    Using the Failover Manager for VMware vSphere (page 123)
    Installing the Failover Manager for VMware vSphere (page 123)
    Installing the Failover Manager for other VMware platforms (page 124)
    Configuring the IP address and host name (page 124)
    Installing the Failover Manager using the OVF files with the VI Client (page 125)
    Configure the IP address and host name (page 125)
    Finishing up with VI Client (page 125)
  • Page 9
    Network RAID-5 (Single Parity) (page 145)
    Network RAID-6 (Dual Parity) (page 146)
    Provisioning snapshots (page 147)
    Snapshots versus backups (page 147)
    The effect of snapshots on cluster space (page 147)
    Managing capacity using volume size and snapshots (page 148)
    Volume size and snapshots (page 148)
    Schedules to snapshot a volume and capacity (page 148)
    Deleting snapshots (page 148)
    Ongoing capacity management (page 148)
    Number of volumes and snapshots (page 148)
    Reviewing SAN capacity and usage (page 148)
  • Page 10
    Delete the temporary space (page 173)
    Rolling back a volume to a snapshot or clone point (page 174)
    Rolling back a volume to a snapshot or clone point (page 174)
    Continue with standard roll back (page 175)
    Create a new SmartClone volume from the snapshot (page 175)
    Roll back all associated volumes (page 176)
    Cancel the rollback operation (page 177)
    Deleting a snapshot (page 177)
    14 SmartClone volumes (page 178)
  • Page 11
    Adding a Fibre Channel server connection (page 201)
    Manually configuring a Fibre Channel initiator (page 201)
    Deleting a manually configured Fibre Channel initiator (page 202)
    Editing a Fibre Channel server connection (page 202)
    Deleting a Fibre Channel server connection (page 202)
    Completing the Fibre Channel setup (page 202)
    Clustering server connections (page 203)
    Requirements for clustering servers (page 203)
    Creating a server cluster (page 203)
    Viewing the relationship between storage systems, volumes, and servers (page 204)
    Editing a server cluster (page 204)
  • Page 12
    Pausing and restarting monitoring (page 222)
    Changing the graph (page 223)
    Hiding and showing the graph (page 223)
    Displaying or hiding a line (page 223)
    Changing the color or style of a line (page 223)
    Highlighting a line (page 223)
    Changing the scaling factor (page 224)
    Exporting data (page 224)
    Exporting statistics to a CSV file (page 224)
    Saving the graph to an image file (page 225)
    18 Registering advanced features (page 226)
    Evaluation period for using advanced features (page 226)
  • Page 13
    Repair the storage system (page 244)
    Rebuilding data (page 245)
    Reconfigure RAID (page 245)
    Returning the storage system to the cluster (page 246)
    Restarting a manager (page 246)
    Adding the repaired storage system to cluster (page 247)
    Rebuilding volume data (page 247)
    Controlling server access (page 247)
    Removing the ghost storage system (page 248)
    Returning the failed disk (page 248)
    Replacing the RAID controller (page 248)
    Verifying component failure (page 248)
    Removing the RAID controller (page 250)
  • Page 14: Getting Started

    1 Getting started
    HP LeftHand Storage enables you to create a virtualized pool of storage resources and manage a SAN. The SAN/iQ operating system is installed on the HP LeftHand Storage, and you use the HP LeftHand Centralized Management Console (CMC) to manage the storage. For a list of supported software and hardware, see the HP LeftHand 4000 Storage Compatibility Matrix at http://www.hp.com/go/LeftHandcompatibility
    Creating storage with HP LeftHand Storage...
  • Page 15: Configuring Storage Systems

    recommended as the WWNNs based on the management group may change. (See the HP SAN Design Reference Guide.) Create a Fibre Channel server in the CMC. (See “Planning Fibre Channel server connections to management groups” (page 201)) Assign LUNs to the Fibre Channel server. (See “Assigning volumes to Fibre Channel servers”...
  • Page 16: Enabling Server Access To Volumes

    Figure 1 The SAN/iQ software storage hierarchy
    1. Management group 2. Cluster 3. Volume
    To complete this wizard, you will need the following information:
    - A name for the management group
    - A storage system discovered on the network and then configured for RAID and the network settings
    - DNS domain name, suffix, and server IP address for email event notification
    - IP address or hostname and port of your email (SMTP) server for event notification...
  • Page 17: Using The Map View

    Using the Map View The Map View tab is available for viewing the relationships between management groups, servers, sites, clusters, volumes and snapshots. When you log in to a management group, there is a Map View tab for each of those elements in the management group. For example, when you want to make changes such as moving a volume to a different cluster, or deleting shared snapshots, the Map View allows you to easily identify how many snapshots and volumes are affected by such changes.
  • Page 18: Setting Preferences

    Setting preferences
    Use the Preferences window to set the following:
    - Font size in the CMC
    - Locale for the CMC. The locale determines the language displayed in the CMC.
    - Naming conventions for storage elements
    - Online upgrade options. See the HP LeftHand Storage Upgrade Instructions.
    Setting the font size and locale
    Use the Preferences window, opened from the Help menu, to set font size and locale in the CMC.
  • Page 19: Troubleshooting

    If you use the given defaults, the resulting names look like those in Table 2 (page 19). Notice that the volume name carries into all the snapshot elements, including SmartClone volumes, which are created from a snapshot. Table 2 Example of how default names work Element Default name Example...
  • Page 20 Table 3 CMC setup for remote support Task For more information, see Enable SNMP on each storage system “Enabling SNMP agents” (page 92) Set the SNMP trap recipient to IP address of the system “Adding SNMP traps” (page 93) where the remote support client is installed Open port 8959 (used for the CLI) Your network administrator Set the management group login and password for a...
  • Page 21: Working With Storage Systems

    2 Working with storage systems Storage systems displayed in the navigation window have a tree structure of configuration categories under them, as shown in Figure 3 (page 21). The configuration categories provide access to the configuration tasks for individual storage systems. You must configure each storage system individually before using it in a cluster.
  • Page 22: Storage System Tasks

    Table 4 HP platform identification (continued)
    - HP platform: HP ProLiant BL460c G7 Server Blade; documentation: BL460c G7 Server Blade Maintenance and Service Guide; link: http://www.hp.com/support/BL460cG7_Server_Blade_MSG_en
    - HP LeftHand model: HP LeftHand 4130, HP LeftHand 4330; HP platform: HP ProLiant DL360p Gen8; documentation: DL360p G8 Server Maintenance and Service Guide; link: http://www.hp.com/go/proliantgen8/docs
  • Page 23: Powering Off Or Rebooting The Storage System

    Powering off or rebooting the storage system Reboot or power off the storage system from the CMC. Set the amount of time before the process begins, to ensure that all activity to the storage system has stopped. Powering off the storage system through the CMC physically powers it off. The CMC controls the power down process so that data is protected.
  • Page 24: Rebooting The Storage System

    When powering off the storage system, be sure to power off the components in the following order:
    1. Power off the server blade enclosure or system controller from the CMC, as described in “Powering off the storage system” (page 24).
    2. Manually power off the disk enclosure.
    When you reboot the storage system, use the CMC, as described in “Rebooting the storage system”...
  • Page 25: Upgrading San/Iq On Storage Systems

    Figure 5 Confirming storage system power off Depending on the configuration of the management group and volumes, your volumes and snapshots can remain available. Upgrading SAN/iQ on storage systems The CMC enables online upgrades for storage systems, including the latest software releases and patches.
  • Page 26: Checking Status Of Dedicated Boot Devices

    Figure 6 Availability tab Checking status of dedicated boot devices Some storage systems contain either one or two dedicated boot devices. Dedicated boot devices may be compact flash cards or hard drives. If a storage system has dedicated boot devices, the Boot Devices tab appears in the Storage configuration category.
  • Page 27: Replacing A Dedicated Boot Device

    Table 5 Boot device status
    - Active: The device is synchronized and ready to be used.
    - Inactive: The device is ready to be removed from the storage system. It will not be used to boot the storage system.
    - Failed: The device encountered an I/O error and is not ready to be used.
  • Page 28: Configuring Raid And Managing Disks

    3 Configuring RAID and Managing Disks For each storage system, you can select the RAID configuration and the RAID rebuild options, and monitor the RAID status. You can also review disk information and, for some models, manage individual disks. Getting there In the navigation window, select a storage system and log in if necessary.
  • Page 29: Explaining Raid Devices In The Raid Setup Report

    Table 6 Descriptions of RAID levels (continued) RAID level Description mirrors the contents of one hard drive in the array onto another. If either hard drive fails, the other hard drive provides a backup copy of the files and normal system operations are not interrupted.
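The mirroring and parity behavior described for these RAID levels can be shown in a few lines. This sketch (an editor's illustration, not SAN/iQ code) shows how a mirrored pair duplicates a block and how XOR parity, as used by RAID 5, rebuilds a lost block:

```python
from functools import reduce

def mirror(block: bytes) -> tuple[bytes, bytes]:
    # RAID 1: write the same block to both drives; either copy survives
    # a single drive failure.
    return block, block

def xor_parity(blocks: list[bytes]) -> bytes:
    # RAID 5: the parity block is the byte-wise XOR of the data blocks.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    # XOR of the surviving blocks with parity reconstructs the lost block.
    return xor_parity(surviving + [parity])

stripe = [b"\x01\x02", b"\x0f\x00", b"\xff\x10"]
parity = xor_parity(stripe)
# Lose the middle block, then rebuild it from the rest plus parity.
assert rebuild([stripe[0], stripe[2]], parity) == stripe[1]
```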
  • Page 30: Raid Devices By Raid Type

    RAID devices by RAID type
    Each RAID type creates different sets of RAID devices. Table 7 (page 30) contains a description of the variety of RAID devices created by the different RAID types as implemented on various storage systems.
    Table 7 Information in the RAID setup report (this item: describes this)
    - Device Name...
  • Page 31: Using Network Raid In A Cluster

    NOTE: If you plan on using clusters with only a single storage system, use RAID 1 and RAID 10, RAID 5, or RAID 6 to ensure data redundancy within that storage system. Using Network RAID in a cluster A cluster is a group of storage systems across which data can be protected by using Network RAID.
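Network RAID trades usable capacity for cross-system redundancy. As a simplified illustration (an editor's sketch; actual SAN/iQ provisioning accounts for more factors), the usable capacity of a replication-based level is the raw cluster capacity divided by the number of data copies:

```python
def usable_capacity_gb(raw_per_system_gb: float, systems: int, copies: int) -> float:
    """Simplified usable capacity for a replication-based Network RAID level.

    copies=2 roughly corresponds to Network RAID-10 (two mirrored copies),
    copies=3 to Network RAID-10+1, and so on.
    """
    if copies > systems:
        raise ValueError("cannot keep more copies than there are storage systems")
    return raw_per_system_gb * systems / copies

# Four storage systems with 3600 GB of raw capacity each, two copies:
print(usable_capacity_gb(3600, 4, 2))  # 7200.0
```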
  • Page 32: Mixing Raid Configurations

    Table 8 Data availability and safety in RAID configurations (continued) Configuration Data safety and availability during Data availability if an entire individual disk failure storage system fails or if network connection to a storage system is lost Volumes configured with Network Yes.
  • Page 33: Reconfiguring Raid

    Reconfiguring RAID Reconfiguring RAID on a storage system or a VSA destroys any data stored on that storage system. For VSAs, there is no alternate RAID choice, so the only outcome for reconfiguring RAID is to wipe out all data. Changing preconfigured RAID on a new storage system RAID must be configured on individual storage systems before they are added to a management group.
  • Page 34: Monitoring Raid Status

    Monitoring RAID status RAID is critical to the operation of the storage system. If RAID has not been configured, the storage system cannot be used. Monitor the RAID status of a storage system to ensure that it remains normal. If the RAID status changes, a CMC event is generated. For more information about events and event notification, see “Alarms and events overview”...
  • Page 35: Managing Disks

    Managing disks
    Use the Disk Setup tab to monitor disk information and perform disk management tasks as listed in Table 9 (page 35).
    Table 9 Disk management tasks for storage systems (disk setup function: where available)
    - Monitor disk information: all storage systems
    - View Disk Details: storage systems running version 9.5.01 or later
    - Activate Drive ID LEDs...
  • Page 36 Table 10 Description of items on the disk report Column Description Disk Corresponds to the physical slot in the storage system. Status Status is one of the following: Active—green (on and participating in RAID) Uninitialized—yellow (is not part of an array) Inactive—yellow (is part of an array, and on, but not participating in RAID) Marginal—yellow...
  • Page 37: Verifying Disk Status

    Then return the storage system to the cluster and migrate the volumes and snapshots back to the original cluster. See “Changing the cluster—migrating a volume to a different cluster” (page 159). Perform a cluster swap, replacing the storage systems that have worn-out SSD drives with storage systems that have new SSD drives.
  • Page 38: Viewing Disk Status For The Hp P4300 G2

    Figure 12 Viewing the Disk Setup tab in a HP P4500 G2 Figure 13 Diagram of the drive bays in a HP P4500 G2 Viewing disk status for the HP P4300 G2 The disks are labeled 1 through 8 in the Disk Setup window, shown in Figure 14 (page 38), and correspond to the disk drives from top to bottom, left to right (...
  • Page 39: Viewing Disk Status For The P4800 G2

    Figure 15 Diagram of the drive bays in a HP P4300 G2 Viewing disk status for the P4800 G2 The disks are labeled 1 through 35 in the Disk Setup window( Figure 16 (page 39)), and correspond to the disk drives from top to bottom, left to right, (Figure 17 (page 39)), when you are looking at the front of the P4800 G2.
  • Page 40: Viewing Disk Status For The Hp Lefthand 4130

    Figure 18 Viewing the Disk Setup tab in a HP P4900 G2 Figure 19 Diagram of the drive bays in a HP P4900 G2 Viewing disk status for the HP LeftHand 4130 The disks are labeled 1 through 4 in the Disk Setup window (Figure 20 (page 40)), and correspond to the disk drives from top to bottom, left to right...
  • Page 41: Viewing Disk Status For The Hp Lefthand 4330

    Figure 21 Diagram of the drive bays in a HP LeftHand 4130 Viewing disk status for the HP LeftHand 4330 The disks are labeled 1 through 8 in the Disk Setup window (Figure 22 (page 41)), and correspond to the disk drives from top to bottom, left to right (Figure 23 (page 41)), when you are looking at the front of the HP LeftHand 4330.
  • Page 42: Using Repair Storage System

    NOTE: Because of the nature of the SSD drive wear management and reporting, it is likely that all the SSD drives in a system will wear out around the same time. Table 11 (page 42) lists disk replacement requirements for specific configurations and storage systems.
  • Page 43: Replacing A Disk In A Hot-Swap Storage System

    You know which disk needs to be replaced through SAN/iQ monitoring. When viewed in the Disk Setup tab, the Drive Health column shows Marginal (replace as soon as possible) or Predictive Failure (replace right away). RAID is still on, though it may be degraded and a drive is inactive. Use the instructions in “Replacing disks appendix”...
  • Page 44 Replace the disk You may remove and replace a disk from these hot-swap storage systems after checking that the Safe to Remove status indicates “Yes” for the drive to be replaced. Physically replace the disk drive in the storage system See the hardware documentation that came with your storage system for information about physically replacing disk drives in the storage system.
  • Page 45: Managing The Network

    4 Managing the network Correctly setting up the network for HP LeftHand Storage ensures data availability and reliability. IMPORTANT: The network settings must be the same for the switches, clients, and storage systems. Set up the end-to-end network before creating storage volumes. Network best practices Isolate the SAN, including CMC traffic, on a separate network.
  • Page 46: Changing Network Configurations

    Changing network configurations Changing the network configuration of a storage system may affect connectivity with the network and application servers. Consequently, we recommend that you configure network characteristics on individual storage systems before creating a management group or adding them to existing clusters.
  • Page 47: Changing Speed And Duplex Settings

    Table 12 Status of and information about network interfaces (continued)
    - Name: For example, G4-Motherboard:Port2 or Eth0.
    - Description: Describes each interface listed. For example, the bond0 is the Logical Failover Device.
    - Speed/Method: Lists the actual operating speed reported by the interface.
    - Duplex/Method: Lists duplex as reported by the interface.
    - Status: Describes the state of the interface.
  • Page 48: Changing Nic Frame Size

    To change the speed and duplex
    1. In the navigation window, select the storage system and log in.
    2. Open the tree, and select Network.
    3. Click the TCP Status tab.
    4. Select the interface to edit.
    5. Click TCP Status Tasks, and select Edit.
    6. Select the combination of speed and duplex that you want.
  • Page 49: Editing The Nic Frame Size

    configure jumbo frames on each client and each network switch may result in data unavailability or performance degradation. Jumbo frames can co-exist with 1500 byte frames on the same subnet if the following conditions are met:
    - Every device downstream of the storage system on the subnet must support jumbo frames.
    - If you are using 802.1q virtual LANs, jumbo frames and nonjumbo frames must be segregated into separate VLANs.
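A practical way to confirm an end-to-end jumbo path (an editor's suggestion, not an HP procedure) is a do-not-fragment ping whose payload is the jumbo MTU minus the 28 bytes of IPv4 and ICMP headers:

```python
def max_ping_payload(mtu: int) -> int:
    """Largest ICMP echo payload that fits in one frame at the given MTU.

    Subtracts the 20-byte IPv4 header and the 8-byte ICMP header.
    """
    return mtu - 20 - 8

# For a 9000-byte jumbo MTU the payload is 8972 bytes, so on Linux a check
# such as `ping -M do -s 8972 <storage-system-ip>` succeeds only if every
# device on the path carries jumbo frames.
print(max_ping_payload(9000))  # 8972
```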
  • Page 50: The Tcp/Ip Tab

    1. Select the interface to edit.
    2. Click TCP Status Tasks, and select Edit.
    3. Change the flow control setting on the Edit window.
    4. Click OK.
    5. Repeat these steps for all the NICs you want to change.
    On the TCP Status tab window, for bonded NICs, the NIC flow control column shows the flow control settings for the physical NICs, and the bond0 as blank.
  • Page 51: Configuring The Ip Address Manually

    1. Click TCP/IP Tasks, and select Ping.
    2. Select which network interface to ping from, if you have more than one enabled. A bonded interface has only one interface from which to ping.
    3. Enter the IP address to ping, and click Ping.
    If the server is available, the ping is returned in the Ping Results window.
  • Page 52: Bonding With 10 Gbe Interfaces

    fault tolerance, load balancing and/or bandwidth aggregation for the network interface cards in the storage system. Bonds are created by joining physical NICs into a single logical interface. This logical interface acts as the master interface, controlling and monitoring the physical slave interfaces. Bonding two interfaces for failover provides fault tolerance at the local hardware level for network communication.
  • Page 53: Supported Bonds With 10 Gbe

    Supported bonds with 10 GbE
    The HP LeftHand Storage with 10 GbE NICs installed and configured supports these bond types:
    - Adaptive Load Balancing (ALB) bond with two 10 GbE Ethernet NICs
    - Active-Passive bond with two 10 GbE Ethernet NICs
    - Active-Passive bond with a 1 GbE NIC and a 10 GbE NIC
    - Link Aggregation Dynamic Mode (802.3ad) bond with two 10 GbE Ethernet NICs
    Unsupported bonds with 10 GbE...
  • Page 54: How Active-Passive Bonding Works

    Table 16 Comparison of Active-Passive, link aggregation dynamic mode, and Adaptive Load Balancing bonding
    - Bandwidth:
      Active-Passive: use of one NIC at a time provides normal bandwidth.
      Link aggregation dynamic mode: simultaneous use of both NICs increases bandwidth.
      Adaptive load balancing: simultaneous use of both NICs increases bandwidth.
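For readers who know Linux NIC bonding, the three bond types compared above map onto standard bonding modes. The mapping below is an illustrative cross-reference by the editor, not part of the SAN/iQ product:

```python
# SAN/iQ bond type -> (Linux bonding mode number, mode name)
BOND_MODES = {
    "Active-Passive": (1, "active-backup"),
    "Link Aggregation Dynamic Mode": (4, "802.3ad"),
    "Adaptive Load Balancing": (6, "balance-alb"),
}

def needs_switch_support(bond_type: str) -> bool:
    # Only 802.3ad (LACP) requires matching link-aggregation
    # configuration on the switch.
    return BOND_MODES[bond_type][0] == 4

print(needs_switch_support("Adaptive Load Balancing"))  # False
```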
  • Page 55: Which Physical Interface Is Preferred

    Requirements for Active-Passive
    To configure Active-Passive:
    - Both NICs should be enabled.
    - NICs should be connected to separate switches.
    Which physical interface is preferred
    When the Active-Passive bond is created, if both NICs are plugged in, the SAN/iQ software interface becomes the active interface. The other interface is Passive (Ready). For example, if Motherboard:Port1 is the preferred interface, it will be active and Motherboard:Port2 will be Passive (Ready).
  • Page 56: Example Network Cabling Topologies With Active-Passive

    Example network cabling topologies with Active-Passive Two simple network cabling topologies using Active-Passive in high availability environments are shown in Figure 26 (page 56). Figure 26 Active-Passive in a two-switch topology with server failover 1. Servers 2. HP LeftHand Storage systems 3.
  • Page 57: How Link Aggregation Dynamic Mode Bonding Works

    Figure 27 Active-Passive failover in a four-switch topology 1. Servers 2. HP LeftHand Storage systems 3. Storage cluster 4. GigE trunk 5. Active path 6. Passive path Figure 27 (page 57) illustrates the Active-Passive configuration in a four-switch topology. How link aggregation dynamic mode bonding works Link Aggregation Dynamic Mode allows the storage system to use both interfaces simultaneously for data transfer.
  • Page 58: Which Physical Interface Is Active

    Which physical interface is active When the Link Aggregation Dynamic Mode bond is created, if both NICs are plugged in, both interfaces are active. If one interface fails, the other interface continues operating. For example, suppose Motherboard:Port1 and Motherboard:Port2 are bonded in a Link Aggregation Dynamic Mode bond.
  • Page 59: How Adaptive Load Balancing Works

    Figure 28 Link aggregation dynamic mode in a single-switch topology 1. Servers 2. HP LeftHand Storage systems 3. Storage cluster How Adaptive Load Balancing works Adaptive Load Balancing allows the storage system to use both interfaces simultaneously for data transfer. Both interfaces have an active status. If the interface link to one NIC goes offline, the other interface continues operating.
  • Page 60: Summary Of Nic States During Failover

    Table 23 Example Adaptive Load Balancing failover scenario and corresponding NIC status Example failover scenario NIC status 1. Adaptive Load Balancing bond0 is created. Bond0 is the master logical interface. Motherboard:Port1 and Motherboard:Port2 are both active. Motherboard:Port1 is Active. Motherboard:Port2 is Active. 2.
  • Page 61: Creating A Nic Bond

    Figure 29 Adaptive Load Balancing in a two-switch topology
    1. Servers 2. HP LeftHand Storage systems 3. Storage cluster 4. GigE trunk
    Creating a NIC bond
    Follow these guidelines when creating NIC bonds:
    Prerequisites
    - Verify that the speed, duplex, flow control, and frame size are all set properly on both interfaces that are being bonded.
  • Page 62: Creating The Bond

    Ensure that the bond has a static IP address for the logical bond interface. The default values for the IP address, subnet mask and default gateway are those of one of the physical interfaces. Verify on the Communication tab that the SAN/iQ interface is communicating with the bonded interface.
  • Page 63: Verify Communication Setting For New Bond

    11. Search for the storage system by Host Name or IP address, or by Subnet/mask.
    NOTE: Because it can take a few minutes for the storage system to set the network address, the search may fail the first time. If the search fails, wait a minute or two and select Try Again on the Network Search Failed message.
  • Page 64: Viewing The Status Of A Nic Bond

    Figure 32 Verifying interface used for SAN/iQ communication Verify that the SAN/iQ communication port is correct. Viewing the status of a NIC bond You can view the status of the interfaces on the TCP Status tab. Notice that in the Active-Passive bond, one of the NICs is the preferred NIC.
  • Page 65: Deleting A Nic Bond

    Figure 34 Viewing the status of a link aggregation dynamic mode bond 1. Neither interface is preferred NOTE: If the bonded NIC experiences rapid, sequential Ethernet failures, the CMC may display the storage system as failed (flashing red) and access to data on that storage system fails. However, as soon as the Ethernet connection is reestablished, the storage system and the CMC display the correct information.
  • Page 66 Figure 35 Searching for the unbonded storage system on the network Search for the storage system by Host Name or IP Address or Subnet/Mask. NOTE: Because it can take a few minutes for the storage system to set the network address, the search may fail the first time.
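A Subnet/Mask search matches any storage system whose address falls inside the given network. The membership test it performs can be sketched with Python's standard ipaddress module (the addresses are placeholders):

```python
import ipaddress

def in_subnet(ip: str, subnet: str, mask: str) -> bool:
    """True if ip belongs to the network defined by subnet and dotted-quad mask."""
    network = ipaddress.ip_network(f"{subnet}/{mask}", strict=False)
    return ipaddress.ip_address(ip) in network

# A storage system at 10.0.33.27 is found by a search for
# subnet 10.0.33.0 with mask 255.255.255.0:
print(in_subnet("10.0.33.27", "10.0.33.0", "255.255.255.0"))  # True
```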
  • Page 67: Disabling A Network Interface

    Figure 36 Verifying interface used for SAN/iQ communication Verify that the SAN/iQ communication port is correct. Disabling a network interface You can disable the network interfaces on the storage system. You can only disable top-level interfaces. This includes bonded interfaces and NICs that are not part of bonded interfaces.
  • Page 68: Configuring A Disabled Interface

    Configuring a disabled interface
    If one interface is still connected to the storage system but another interface is disconnected, you can reconnect to the second interface using the CMC. See “Configuring the IP address manually” (page 51). If both interfaces to the storage system are disconnected, you must attach a terminal, PC, or laptop to the storage system with a null modem cable and configure at least one interface using the Configuration Interface.
  • Page 69: Setting Up Routing

    Adding or changing domain names to the DNS suffix list Add up to six domain names to the DNS suffix list (also known as the look-up zone). The storage system searches the suffixes first and then uses the DNS server to resolve host names. You can also change or remove the suffixes used.
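The suffix list changes how an unqualified host name is resolved: each suffix is appended in list order before the bare name is tried. The candidate order can be sketched as follows (an editor's illustration, not the actual resolver code):

```python
def lookup_candidates(host: str, suffixes: list[str]) -> list[str]:
    """Fully qualified names tried, in order, for a host name."""
    if "." in host:  # already qualified; use as given
        return [host]
    return [f"{host}.{s}" for s in suffixes] + [host]

# With two suffixes configured, "fileserver" is tried three ways, in order:
print(lookup_candidates("fileserver", ["corp.example.com", "example.com"]))
# ['fileserver.corp.example.com', 'fileserver.example.com', 'fileserver']
```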
  • Page 70: Deleting Routing Information

    Deleting routing information
    You can only delete routes you have added.
    1. In the navigation window, select a storage system, and log in.
    2. Open the tree, and select Network.
    3. Click the Routing tab.
    4. On the Routing tab, select the optional route to delete.
    5. Click Routing Tasks, and select Edit Routing Information.
  • Page 71: Updating The List Of Manager Ip Addresses

    Figure 37 Selecting the SAN/iQ software network interface and updating the list of managers
    1. Select an IP address from the list of Manager IP Addresses.
    2. Click Communication Tasks, and select Select SAN/iQ Address.
    3. Select an Ethernet port for this address.
    4. Click OK.
  • Page 72: Fibre Channel

    Figure 38 Viewing the list of manager IP addresses Click Communication Tasks, and select Update Communications List. The list is updated with the current storage system in the management group and a list of IPs with every manager’s enabled network interfaces. A window opens which displays the manager IP addresses in the management group and a reminder to reconfigure the application servers that are affected by the update.
  • Page 73: Setting The Date And Time

    5 Setting the date and time The storage systems within management groups use the date and time settings to create a time stamp when data is stored. You set the time zone and the date and time in the management group, and the storage systems inherit those management group settings.
  • Page 74: Editing Ntp Servers

    NOTE: When using a Windows server as an external time source for a storage system, you must configure W32Time (the Windows Time service) to also use an external time source. The storage system does not recognize the Windows server as an NTP server if W32Time is configured to use an internal hardware clock.
  • Page 75: Editing The Date And Time

    The server you added first is the one accessed first when time needs to be established. If this NTP server is not available, the next NTP server that was added and is marked preferred is used for time serving. To change the order of access for time servers: Delete the server whose place in the list you want to change.
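The ordered-failover behavior described above can be sketched like this. The function and the probe callable are illustrative assumptions, not SAN/iQ code; a real probe would send an actual NTP request:

```python
def pick_time_server(servers, is_reachable):
    """Return the first reachable NTP server, honoring list order.

    `servers` is the ordered list shown in the CMC; `is_reachable` is a
    probe function supplied by the caller (hypothetical stand-in for a
    real NTP reachability check).
    """
    for server in servers:
        if is_reachable(server):
            return server
    return None                          # no time server reachable

# The first-added server is preferred; later entries are fallbacks.
up = {"ntp1.example.com": False, "ntp2.example.com": True}
print(pick_time_server(["ntp1.example.com", "ntp2.example.com"], up.get))
```

This is why reordering requires deleting and re-adding a server: the list position itself encodes the access priority.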
  • Page 76: Managing Authentication

    6 Managing authentication Manage authentication to the HP LeftHand Storage using administrative users and groups. Managing administrative users When you create a management group, one default administrative user is created. The default user automatically becomes a member of the Full Administrator group. Use the default user and/or create new ones to provide access to the management functions of the SAN/iQ software.
  • Page 77: Deleting An Administrative User

    In the Member Groups section, select the group from which to remove the user. Click Remove. Click OK to finish. Deleting an administrative user Log in to the management group, and select the Administration category. Select a user in the Users table. Click Administration Tasks in the tab window, and select Delete User.
  • Page 78: Adding Administrative Groups

    Adding administrative groups When you create a group, you also set the permission level for the users assigned to that group. Log in to the management group, and select the Administration category. Click Administration Tasks in the tab window, and select New Group. Enter a group name and an optional description.
  • Page 79: Using Active Directory For External Authentication

    Click OK on the confirmation window. Click OK to finish. Using Active Directory for external authentication Use Active Directory to simplify management of user authentication with HP LeftHand Storage. Configuring Active Directory allows Microsoft Windows domain users to authenticate to HP LeftHand Storage using their Windows credentials, avoiding the necessity of adding and maintaining individual users in the SAN/iQ software.
  • Page 80: Configuring External Authentication

    Best practices Create a unique group in the CMC for the Active Directory association. Use a name and description that signifies the Active Directory association. See “Adding administrative groups” (page 78). Create a separate SAN/iQ ‘administrator’ group in Active Directory. Create a unique user in Active Directory to use as the Bind user for the management group to allow for communication between storage and Active Directory.
  • Page 81: Removing The Active Directory Configuration

    Click Find External Group. Enter the user name in the Enter AD User Name box and click OK. Select the correct group from the list of Active Directory Groups that opens, and click OK. Click OK when you have finished editing the group. Log out of the management group and log back in using your UPN login (e.g., name@company.com) to verify the configuration.
  • Page 82: Monitoring The San

    7 Monitoring the SAN Monitor the SAN to: Track usage. Ensure that best practices are followed when changes are made, such as adding additional storage systems to clusters. Maintain the overall health of the SAN. Tools for monitoring the SAN include the SAN Status Page, the Configuration Summary and the Best Practice table, the Alarms and Events features, including customized notification methods, and diagnostic tests and log files available for the storage systems.
  • Page 83: Customizing The San Status Page

    The best practices displayed in this content pane are the same as those displayed on the Configuration Summary page. Configuration Summary—Monitor SAN configurations to ensure optimum capacity management, performance, availability, and ease of management. Volume Data Protection Level—Ensure that the SAN is configured for ongoing maximum data protection while scaling capacity and performing system maintenance.
  • Page 84: Using The San Status Page

    Customizing the SAN Status Page layout Customize the layout of the SAN Status Page to highlight the information most important to you. All customizations are retained when the CMC is closed and restarted. Drag and drop content panes to change their position on the page. The layout is three columns by default.
  • Page 85 require taking action and are available only from the Events window for each management group. Warning—Provides important information about a system component that may require taking action. These types of events are visible in both the Alarms window (for all management groups) and the Events window (for the management group where the alarm occurred).
  • Page 86: Working With Alarms

    NOTE: Except for the P4800 G2, alarms and events information is not available for storage systems listed under Available Systems in the CMC, because they are not currently in use on the SAN. Table 29 (page 86) defines the alarms and events columns that appear in the CMC. Table 29 Alarms and events column descriptions Column Description...
  • Page 87: Viewing And Copying Alarm Details

    Click Filter. The list of alarms changes to display only those that contain the filter text. To display all alarms, click Clear to remove the filter. Viewing and copying alarm details In the navigation window, log in to the management group. In the Alarms window, double-click an alarm.
  • Page 88: Configuring Remote Log Destinations

    Configuring remote log destinations Use remote log destinations to automatically write all events for the management group to a computer other than the storage system. For example, direct the event data to a single log server in a remote location. You must also configure the destination computer to receive the log files by configuring syslog on the destination computer.
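Because the events arrive as standard syslog messages, it helps to know how a syslog priority encodes facility and severity when filtering them on the destination log server. The sketch below uses Python's standard library as a stand-in sender; the addresses and facility choice are assumptions, not SAN/iQ specifics:

```python
import logging
import logging.handlers

def pri(facility, severity):
    """Syslog priority value (RFC 3164): PRI = facility * 8 + severity."""
    return facility * 8 + severity

# local0 (facility 16) at warning (severity 4) arrives as "<132>..." on the wire
print(pri(16, 4))

# A minimal sender equivalent to a remote log destination: each event
# becomes one UDP syslog datagram addressed to the log server.
handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", 514),  # replace with your log server's address
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
log = logging.getLogger("san-events")
log.addHandler(handler)
# log.warning("management group event text")  # would emit one datagram
```

On the receiving side, the syslog daemon can route messages to files by that facility/severity pair, which is the usual way to keep SAN events in their own log.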
  • Page 89: Saving Filter Views

    From the Filters list, select an option to filter on. Options in bold are predefined filters you cannot change. Options that are not bold are custom filters that you have saved from the filters panel, described in “Saving filter views” (page 89).
  • Page 90: Copying Events To The Clipboard

    Optional: To paste the event details into a document or email message, click Copy to copy the details to the clipboard. Click Close to finish. Copying events to the clipboard In the navigation window, log in to the management group. Select Events in the tree.
  • Page 91: Configuring Email Recipients

    In the Sender Address field, enter the email address, including the domain name, to use as the sender for notifications. The system automatically adds the host name of the storage system in the email From field, which appears in many email systems. This host name helps identify where the event occurred. Do one of the following: To save your changes and close the window, click Apply.
  • Page 92: Enabling Snmp Agents

    If using the HP System Management Homepage, view the SNMP settings there. You can also start SNMP and send test v1 traps. Enabling SNMP agents Most storage systems allow enabling and disabling SNMP agents. After installing version 9.0, SNMP will be enabled on the storage system by default. Configuring SNMP includes these tasks: “Enabling the SNMP agent”...
  • Page 93: Editing Access Control Entries

    Do one of the following: Select By Address and enter the IP address, then select an IP Netmask from the list. Select Single Host if adding only one SNMP client. After entering the information, the dialog box displays acceptable and unacceptable combinations of IP addresses and IP netmasks so you can correct issues immediately.
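The acceptable/unacceptable combinations that the dialog box flags come down to whether the IP address is actually a network address under the chosen netmask, that is, whether all host bits are zero. A sketch of that rule using Python's standard library (this is an illustrative check, not the CMC's exact validation code):

```python
import ipaddress

def acceptable(ip, netmask):
    """True if the IP address / netmask pair denotes a network address
    (all host bits zero), as expected for a "By Address" entry."""
    try:
        ipaddress.ip_network(f"{ip}/{netmask}", strict=True)
        return True
    except ValueError:                   # host bits set, or malformed input
        return False

print(acceptable("10.0.0.0", "255.255.255.0"))   # network address
print(acceptable("10.0.0.5", "255.255.255.0"))   # host bits set
```

A single host, by contrast, pairs its full address with a 255.255.255.255 mask, which is why the Single Host option needs no netmask choice.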
  • Page 94 In the navigation window, log in to the management group. In the tree, select Events SNMP. Click SNMP Tasks and select Edit SNMP Traps Settings. Enter the Trap Community String. The trap community string does not have to be the same as the community string used for access control, but it can be.
  • Page 95: Using The Snmp Mibs

    Clear the selected severities checkboxes. Click OK to confirm. Using the SNMP MIBs The LeftHand Networks MIBs provide read-only access to the storage system. The SNMP implementation in the storage system supports MIB-II compliant objects. These files, when loaded in the SNMP client, allow you to see storage system-specific information such as model number, serial number, hard disk capacity, network characteristics, RAID configuration, DNS server configuration details, and more.
  • Page 96: Troubleshooting Snmp

    NETWORK-SERVICES-MIB, NOTIFICATION-LOG-MIB, RFC1213-MIB, SNMP-TARGET-MIB, SNMP-VIEW-BASED-ACM-MIB, SNMPv2-MIB, UCD-DLMOD-MIB, UCD-SNMP-MIB
    Troubleshooting SNMP
    Table 30 SNMP troubleshooting
    Issue: SNMP queries are timing out.
    Solution: Ensure that the timeout value is long enough for your environment. In complex configurations, SNMP queries should have longer timeouts. SNMP data gathering is not instantaneous, and scales in time with the complexity of the configuration.
  • Page 97: Generating A Hardware Information Report

    A description of the test Pass / fail criteria NOTE: Available diagnostic tests depend on the storage system. For VSA, only the Disk Status Test is available. Table 31 Example list of hardware diagnostic tests and pass/fail criteria Diagnostic test Description Pass criteria Fail criteria...
  • Page 98: Saving A Hardware Information Report

    Figure 41 Viewing the hardware information for a storage system Saving a hardware information report Click Hardware Information Tasks and select Save to File to download a text file of the reported statistics. Choose the location and name for the report. Click Save.
  • Page 99 Table 32 Selected details of the hardware report (continued) This term means this Driver name Driver version DNS data Information about DNS, if a DNS server is being used, providing the IP address of the DNS servers. IP address of the DNS servers. Memory Information about RAM in the storage system, including values for total memory and free memory in GB.
  • Page 100: Using Log Files

    Using log files If HP Support requests that you send a copy of a log file, use the Log Files tab to save that log file as a text file. The Log Files tab lists two types of logs: Log files that are stored locally on the storage system (displayed on the left side of the tab). Log files that are written to a remote log server (displayed on the right side of the tab).
  • Page 101: Deleting Remote Logs

    Select a storage system in the navigation window. Open the tree below the storage system and select Diagnostics. Select the Log Files tab. Select the log in the Remote logs list. Click Log File Tasks and select Edit Remote Log Destination. Change the log type or destination and click OK.
  • Page 102: Exporting The System Summary

    Exporting the System Summary The System Summary has information about all of the storage systems on the network. Export the summary to a .csv file for use in a spreadsheet or database. Information in the summary includes: storage system information, “Working with storage systems”...
  • Page 103: Working With Management Groups

    8 Working with management groups A management group is a collection of one or more storage systems. It is the container within which you cluster storage systems and create volumes for storage. Creating a management group is the first step in creating HP LeftHand Storage. Functions of management groups Provide the highest administrative domain for the SAN.
  • Page 104: Creating A Management Group

    Table 33 Management group components (continued) Component Description method you will use. See “Setting the date and time” (page 73). DNS configuration You can configure DNS at the management group level for all storage systems in the management group. The storage system can use a DNS server to resolve host names.
  • Page 105: Add Administrative User

    NOTE: This name cannot be changed later without destroying the management group. When naming a management group, ensure that you do not use the name of an existing management group. Doing so causes the stores to be initialized and any data on those stores to be permanently deleted.
  • Page 106: Create A Volume And Finish Creating Management Group

    Add the VIP and subnet mask. Click Next. Create a volume and finish creating management group Optional: If you want to create volumes later, select Skip Volume Creation and click Finish. Enter a name, description, data protection level, size, and provisioning type for the volume. Click Finish.
  • Page 107: Reading The Configuration Summary

    Figure 42 Configuration Summary Reading the configuration summary As items are added to the management group, the Summary graph fills in and the count is displayed in the graph. The Summary graph fills in proportionally to the optimum number for that item in a management group, as described in “Configuration guidance”...
  • Page 108: Configuration Errors

    Figure 44 Warning when items in the management group are reaching optimum limits 1. Volumes and snapshots are nearing the optimum limit. One cluster is nearing the optimum limit for storage systems. Configuration errors When any item exceeds a recommended maximum, it turns red, and remains red until the number is reduced.
  • Page 109: Best Practice Summary Overview

    Table 35 iSCSI sessions guidance
    Number of sessions: Guidance:
    Up to 4,000: Green
    4,001 – 5,000: Orange
    5,001 or more: Red
    Table 36 Storage systems in the management group
    Number: Guidance:
    Up to 20: Green
    21 – 30: Orange
    More than 30: Red
    Table 37 Storage systems in the cluster
    Number: Guidance:
  • Page 110: Disk Level Data Protection

    Figure 46 Best Practice Summary for well-configured SAN Expand the management group in the summary to see the individual categories that have recommended best practices. The summary displays the status of each category and identifies any conditions that fall outside the best practice. Click on a row to see details about that item's best practice.
  • Page 111: Volume-Level Data Protection

    Volume-level data protection Use a data protection level greater than Network RAID-0 to ensure optimum data availability if a storage system fails. For information about data protection, see “Planning data protection” (page 142). Volume access Use iSCSI load balancing to ensure better performance and better utilization of cluster resources. For more information about iSCSI load balancing, see “iSCSI load balancing”...
  • Page 112: Choosing Which Storage System To Log In To

    In the navigation window, select a management group and log in by any of the following methods: Double-click the management group. Open the Management Group Tasks menu, and select Log in to Management Group. You can also open this menu by right-clicking on the management group. Click any of the Log in to view links on the Details tab.
  • Page 113: Stopping Managers

    Stopping managers Under normal circumstances, you stop a manager when you are removing a storage system from a management group. Stopping a manager when doing so will compromise fault tolerance generates an alarm. You cannot stop the last manager in a management group. The only way to stop the last manager is to delete the management group, which permanently deletes all data stored on volumes in the management group.
  • Page 114: Saving The Management Group Configuration Information

    Saving the management group configuration information From the Tasks menu, select Management Group > View Management Group Configuration. If there are multiple management groups, select the management group from the List of Management Groups and click Continue. Click Save in the Management Group Configuration window to save the configuration details in a .txt file.
  • Page 115: Restarting The Management Group

    Stop server or host access to the volumes in the list. Click Shut Down Group. Restarting the management group When you are ready to restart the management group, simply power on the storage systems for that group: Power on the storage systems that were shut down. Click Find > Find Systems in the CMC to discover the storage systems.
  • Page 116: Removing A Storage System From A Management Group

    Figure 48 Manually setting management group to normal mode Click Set To Normal. Removing a storage system from a management group When a storage system needs to be repaired or upgraded, remove it from the management group before beginning the repair or upgrade. Also remove a storage system from a management group if you are replacing it with another system.
  • Page 117 Prerequisites Log in to the management group. Remove all volumes and snapshots. Delete all clusters. In the navigation window, log in to the management group. Click Management Group Tasks on the Details tab, and select Delete Management Group. In the Delete Management Window, enter the management group name, and click OK. After the management group is deleted, the storage systems return to the Available Systems pool.
  • Page 118: Working With Managers And Quorum

    9 Working with managers and quorum When a management group is created using release 10.0, it will be created with the correct number of managers started. Older management groups upgraded to release 10.0 may require additional managers or a Failover Manager started before the upgrade to 10.0 can be completed. See Table 38 (page 119) for the optimum number of managers required for each configuration.
  • Page 119: Managers And Quorum

    For more information about managers, see “Managers overview” (page 118). Table 38 Default number of managers added when a management group is created Number of storage systems Manager configuration 1 manager 2 managers and a virtual manager, if a Failover Manager is not available.
  • Page 120: Regular Managers And Specialized Managers

    Regular managers and specialized managers Regular managers run on storage systems in a management group. The SAN/iQ software has two types of specialized managers, Failover Managers and virtual managers. You can only use one type of specialized manager in a management group. The Failover Manager is recommended for management groups with a single, two-system cluster, and in Multi-Site cluster configurations with two clusters.
  • Page 121: Using The Failover Manager

    Figure 50 Virtual manager added to a management group Using the Failover Manager Adding a Failover Manager to the management group enables the SAN to have automated failover using a manager installed on network hardware other than the storage systems in the HP LeftHand Storage.
  • Page 122: Installing The Failover Manager For Hyper-V Server

    http://www.hp.com/go/LeftHandDownloads The installer for the Failover Manager for Hyper-V Server includes a wizard that guides you through configuring the virtual machine on the network and powering on the Failover Manager. CAUTION: Do not install the Failover Manager on a volume that is served from HP LeftHand Storage, since this would defeat the purpose of the Failover Manager.
  • Page 123: Using The Failover Manager For Vmware Vsphere

    Using the Failover Manager for VMware vSphere Install the Failover Manager from the DVD, or from the DVD .iso image downloaded from the website: http://www.hp.com/go/LeftHandDownloads The installer offers two choices for installing the Failover Manager for VMware: Failover Manager for VMware vSphere—The installer for the Failover Manager for VMware vSphere includes a wizard that guides you through configuring the virtual machine on the network and powering on the Failover Manager.
  • Page 124: Installing The Failover Manager For Other Vmware Platforms

    If this is the only Failover Manager you are installing, select No, I am done and click Next. NOTE: If you want to install another Failover Manager, the wizard repeats the steps, using information you already entered, as appropriate. Finish the installation, reviewing the configuration summary, and click Deploy. When the installer is finished, the Failover Manager is ready to be used in the HP LeftHand Storage.
  • Page 125: Installing The Failover Manager Using The Ovf Files With The Vi Client

    Installing the Failover Manager using the OVF files with the VI Client Download the .OVF files from the following website: http://www.hp.com/go/LeftHandDownloads Click Agree to accept the terms of the License Agreement. Click the link for OVF files to open a window from which you can copy the files to the ESX Server.
  • Page 126: Uninstalling The Failover Manager From Vmware Vsphere

    Table 40 Troubleshooting for VMware vSphere installation
    General Installation
    Issue: You want to reinstall the Failover Manager.
    Solution: Close your CMC session. In the VI Client, power off the Failover Manager. Right-click and select Delete from Disk. Copy fresh files into the virtual machine folder from the downloaded zip file or distribution media.
  • Page 127: Requirements For Using A Virtual Manager

    You should only use a virtual manager if you cannot use a Failover Manager or if manual failover is preferred for a specific reason. See “Managers and quorum” (page 119) for detailed information about quorum, fault tolerance, and the number of managers. Because a virtual manager is available to maintain quorum in a management group when a storage system goes offline, it can also be used for maintaining quorum during maintenance procedures.
  • Page 128: Adding A Virtual Manager

    Figure 51 Two-site failure scenarios that are correctly using a virtual manager Scenario 1—Communication between the sites is lost In this scenario, the sites are both operating independently. On the appropriate site, depending upon your configuration, select one of the storage systems, and start the virtual manager on it. That site then recovers quorum and operates as the primary site.
  • Page 129: Starting A Virtual Manager To Regain Quorum

    TIP: Always use a Failover Manager for a two-system management group. Select the management group in the navigation window and log in. Click Management Group Tasks on the Details tab, and select Add virtual manager. Click OK to confirm the action. The virtual manager is added to the management group (1, Figure 52 (page 129)).
  • Page 130: Verifying Virtual Manager Status

    Figure 53 Starting a virtual manager when the storage system running a manager becomes unavailable 1. Unavailable storage system 2. Virtual manager started on storage system running a regular manager NOTE: If you attempt to start a virtual manager on a storage system that appears to be normal in the CMC, and you receive a message that the storage system is unavailable, start the virtual manager on a different storage system.
  • Page 131: Removing A Virtual Manager From A Management Group

    Removing a virtual manager from a management group Log into the management group from which you want to remove the virtual manager. Click Management Group Tasks on the Details tab, and select Delete Virtual Manager. Click OK to confirm the action. NOTE: The CMC does not allow you to delete a virtual manager if that deletion causes a loss of quorum.
  • Page 132: 10 Working With Clusters

    10 Working with clusters Clusters are groups of storage systems created in a management group. Clusters create a pool of storage from which to create volumes. The volumes seamlessly span the storage systems in the cluster. Expand the capacity of the storage pool by adding storage systems to the cluster. Prerequisites An existing management group At least one storage system in the management group that is not already in a cluster...
  • Page 133: Cluster Map View

    (Optional) Enter information for creating the first volume in the cluster, or select Skip Volume Creation. NOTE: The size listed in the Cluster Available Space box is an estimate because the actual size of the cluster once it is created can vary. Therefore, you may notice that, after creating a cluster and viewing the Details tab of the cluster, the size listed in the Total Available Space box is different.
  • Page 134: Editing Cluster Vip Addresses

    Editing iSNS servers from the Cluster Tasks menu Quiesce any applications that are accessing volumes in the cluster. Log off the active sessions in the iSCSI initiator for those volumes. Edit iSNS servers using either of the following methods: From the Cluster Tasks menu: Right-click the cluster or click Cluster Tasks.
  • Page 135: Maintaining Storage Systems In Clusters

    Maintaining storage systems in clusters Use the Edit Cluster menu to perform cluster maintenance tasks. Adding a storage system to a cluster Add a storage system to an existing cluster to expand the storage for that cluster. If the cluster contains a single storage system, adding a second storage system triggers a change for the volumes in the cluster to go from Network RAID-0 to Network RAID-10, which offers better data protection and volume availability.
  • Page 136: Reordering Storage Systems In A Cluster

    Figure 54 Swapping storage systems in the cluster Repeat the process for each storage system to be swapped. Click Swap Storage Systems when you are finished. The swap operation may take some time, depending upon the number of storage systems swapped and the amount of data being restriped.
  • Page 137: Troubleshooting A Cluster

    In the Edit Cluster window, select a storage system from the list. Click Remove Systems. The storage system moves out of the cluster, but remains in the management group. Click OK when you are finished. NOTE: Removing a storage system causes a full cluster restripe. Troubleshooting a cluster Auto Performance Protection monitors individual storage system health related to performance issues that affect the volumes in the cluster.
  • Page 138: Repairing A Storage System

    Select the affected storage system in the navigation window. The storage system icon blinks in the tree. Check the Status line on the Details tab. If status is Storage System Overloaded, wait up to 10 minutes and check the status again. The status may return to Normal and the storage system will be resyncing.
  • Page 139: Deleting A Cluster

    Prerequisites The volume must have Network RAID-10, Network RAID-10+1, Network RAID-10+2, Network RAID-5, or Network RAID-6. The storage system must display the blinking red and yellow triangle in the navigation window. A disk inactive or disk off event appears in the Events list, and the Status label in the tab window shows the failure.
  • Page 140 If there are any schedules to snapshot a volume or schedules to remote snapshot a volume for this cluster, delete them. See “Deleting schedules to snapshot a volume” (page 170). Click Cluster Tasks and select Delete Cluster. A confirmation message opens. If the message says that the cluster is in use, you must first delete the snapshots and volumes on the cluster.
  • Page 141: 1 Provisioning Storage

    11 Provisioning storage The SAN/iQ software uses volumes, including SmartClone volumes, and snapshots to provision storage to application servers and to back up data for recovery or other uses. Before you create volumes or configure schedules to snapshot a volume, plan the configuration you want for the volumes and snapshots.
  • Page 142: Full Provisioning

    Full provisioning Full provisioning reserves the same amount of space on the SAN as is presented to application servers. Full provisioning ensures that the application server will not fail a write. When a fully provisioned volume approaches capacity, you receive a warning that the disk is nearly full. Thin provisioning Thin provisioning reserves less space on the SAN than is presented to application servers.
  • Page 143: Data Protection Level

    Data protection level Seven data protection levels are available, depending upon the number of available storage systems in the cluster.
    Table 43 Setting a data protection level for a volume
    With this number of available storage systems in cluster: Select any of these data protection levels: For this number of copies:
  • Page 144: How Data Protection Levels Work

    How data protection levels work The system calculates the actual amount of storage resources needed for all data protection levels. When you choose Network RAID-10, Network RAID-10+1, or Network RAID-10+2, data is striped and mirrored across either two, three, or four adjacent storage systems in the cluster. When you choose Network RAID-5 or Network RAID-6, the layout of the data stripe, including parity, depends on both the Network RAID mode and cluster size.
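One simple way to picture "striped and mirrored across adjacent storage systems" is to place copy c of stripe s on system (s + c) mod n. This is only a conceptual model for illustration; the real SAN/iQ layout also handles restriping, rebalancing, and cluster membership changes:

```python
def placement(stripe, copies, n_systems):
    """Systems holding each copy of a stripe in a simplified adjacency
    model: copies land on neighboring systems, wrapping around the cluster.
    Illustrative only, not the actual SAN/iQ placement algorithm.
    """
    return [(stripe + c) % n_systems for c in range(copies)]

# Network RAID-10 (2 copies) on a 4-system cluster:
for s in range(4):
    print(f"stripe {s}: systems {placement(s, 2, 4)}")
```

The model shows why losing one system leaves every stripe with a surviving copy at 2-way mirroring, and why 3-way and 4-way mirroring tolerate two and three system failures respectively.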
  • Page 145: Network Raid- 1 0+2 (4-Way Mirror)

    Best applications for Network RAID-10+1 are those that require data availability even if two storage systems in a cluster become unavailable. Figure 57 (page 145) illustrates the write patterns on a cluster with four storage systems configured for Network RAID-10+1. Figure 57 Write patterns in Network RAID-10+1 (3-Way Mirror) Network RAID-10+2 (4-Way Mirror) Network RAID-10+2 data is striped and mirrored across four or more storage systems.
  • Page 146: Network Raid-6 (Dual Parity)

    Figure 59 (page 146) illustrates the write patterns on a cluster with four storage systems configured for Network RAID-5. Figure 59 Write patterns and parity in Network RAID-5 (Single Parity) 1. Parity for data blocks A, B, C 2. Parity for data blocks D, E, F 3.
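Single parity works like classic RAID 5 parity: the parity block is the XOR of the data blocks in its stripe, so any one lost block can be rebuilt by XORing the survivors. A minimal demonstration (block contents are made up for illustration):

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-sized blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe: data blocks A, B, C
parity = xor_blocks(data)            # parity block, stored on another system

# If the system holding block B fails, B is rebuilt from the survivors:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"
```

Dual parity (Network RAID-6) extends the same idea with a second, independently computed parity block, which is why it survives two simultaneous losses per stripe.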
  • Page 147: Provisioning Snapshots

    Figure 60 Write patterns and parity in Network RAID-6 (Dual Parity) 1. P1 is parity for data blocks A, B, C, D 2. P2 is parity for data blocks E, F, G, H 3. P3 is parity for data blocks I, J, K, L 4.
  • Page 148: Managing Capacity Using Volume Size And Snapshots

    Plan how you intend to use snapshots, and the schedule and retention policy for schedules to snapshot a volume. Snapshots record changes in data on the volume, so calculating the rate of changed data in the client applications is important for planning schedules to snapshot a volume. NOTE: Volume size, provisioning, and using snapshots should be planned together.
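The rate of changed data translates into snapshot space roughly as follows. This is back-of-the-envelope planning math, not the SAN/iQ accounting algorithm; the numbers are made-up examples:

```python
def snapshot_space_gb(daily_change_gb, snapshots_per_day, retained):
    """Rough space estimate for a snapshot schedule: each retained
    snapshot holds approximately the data that changed since the
    previous snapshot was taken.
    """
    change_per_snapshot = daily_change_gb / snapshots_per_day
    return change_per_snapshot * retained

# 20 GB/day change rate, 4 snapshots per day, retain 8 snapshots:
print(snapshot_space_gb(20, 4, 8))
```

Estimates like this make the trade-off in the retention policy concrete: halving the retained count or the snapshot frequency roughly halves the space the schedule consumes.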
  • Page 149: Cluster Use Summary

    Figure 61 Cluster tab view Cluster use summary The Use Summary window presents information about the storage space available in the cluster. Figure 62 Reviewing the Use Summary tab In the Use Summary window, the Storage Space section lists the space available on the storage systems in the cluster.
  • Page 150: Volume Use Summary

    Table 44 Information on the Use Summary tab (continued) Category Description snapshots are created, or as thinly provisioned volumes grow. Saved Space Thin Provisioning The space saved by thin provisioning volumes. This space is calculated by the system. SmartClone Feature Space saved by using SmartClone volumes is calculated using the amount of data in the clone point and any snapshots below the clone point.
  • Page 151: System Use Summary

    Table 45 Information on the Volume Use tab (continued) Category Description to see the space saved number decrease as data on the volume increases. Full provisioning allocates the full amount of space for the size of the volume. Reclaimable space is the amount of space that you can get back if this fully provisioned volume is changed to thinly provisioned.
  • Page 152: Measuring Disk Capacity And Volume Size

    Table 46 Information on the System Use tab
    Category: Description:
    Name: Host name of the storage system.
    Raw space: Total amount of disk capacity on the storage system. Note: Storage systems with greater capacity will only operate to the capacity of the lowest capacity storage system in the cluster.
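The note on Raw space has a direct capacity consequence: in a mixed cluster, every member contributes only as much as the smallest system. A quick sketch of the rule (capacities are example figures):

```python
def cluster_raw_capacity_tb(system_capacities_tb):
    """Usable raw capacity of a cluster: every member operates at the
    capacity of the lowest-capacity storage system in the cluster.
    """
    return len(system_capacities_tb) * min(system_capacities_tb)

# Three 12 TB systems plus one 8 TB system contribute 4 x 8 = 32 TB, not 44 TB.
print(cluster_raw_capacity_tb([12, 12, 12, 8]))
```

This is why clusters are normally built from same-capacity systems: the difference above the smallest member is simply unused.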
  • Page 153: Changing The Volume Size On The Server

    However, the file system does not inform the block device underneath (the SAN/iQ volume) that there is freed-up space. In fact, no mechanism exists to transmit that information. There is no SCSI command which says “Block 198646 can be safely forgotten.” At the block device level, there are only reads and writes.
  • Page 154: Changing Configuration Characteristics To Manage Space

    Changing configuration characteristics to manage space Options for managing space on the cluster include Changing snapshot retention—retaining fewer snapshots requires less space Changing schedules to snapshot a volume—taking snapshots less frequently requires less space Deleting volumes or moving them to a different cluster NOTE: Deleting files on a file system does not free up space on the SAN volume.
  • Page 155: 12 Using Volumes

    12 Using volumes A volume is a logical entity that is made up of storage on one or more storage systems. It can be used as raw data storage or it can be formatted with a file system and used by a host or file server. Create volumes on clusters that contain one or more storage systems.
  • Page 156: Characteristics Of Volumes

    Types of volumes Primary volumes are volumes used for data storage. Remote volumes are used as targets for Remote Copy for business continuance, backup and recovery, and data mining/migration configurations. See the HP LeftHand Storage Remote Copy User Guide for detailed information about remote volumes. A SmartClone volume is a type of volume that is created from an existing volume or snapshot.
  • Page 157: Creating A Volume

    Table 48 Characteristics for new volumes (continued) Volume characteristic Configurable for Primary or Remote Volume What it means The default value = Network RAID-10. For information about the data protection levels, see “Planning data protection” (page 142). Type Both Primary volumes are used for data storage.
  • Page 158: Viewing The Volume Map

    [Optional] Assign a server to the volume. Click OK. The SAN/iQ operating system software creates the volume. The volume is selected in the navigation window and the Volume tab view displays the Details tab. NOTE: The system automatically factors data protection levels into the settings. For example, if you create a fully provisioned 500 GB volume and the data protection level is Network RAID-10 (2-Way Mirror), the system automatically allocates 1000 GB for the volume.
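The allocation rule in the note above can be expressed as a small calculation. The replication factors below are assumptions based on the mirror levels named in this guide; check the data protection documentation for the levels configured on your cluster.

```python
# Sketch: how a data protection level translates volume size into allocated
# cluster space. Factors are illustrative (Network RAID-10 keeps two copies).
REPLICATION_FACTOR = {
    "Network RAID-0": 1,
    "Network RAID-10 (2-Way Mirror)": 2,
    "Network RAID-10+1 (3-Way Mirror)": 3,
}

def allocated_space_gb(volume_size_gb, protection_level):
    """Space a fully provisioned volume consumes on the cluster."""
    return volume_size_gb * REPLICATION_FACTOR[protection_level]

# The example from the note: a 500 GB volume at Network RAID-10.
print(allocated_space_gb(500, "Network RAID-10 (2-Way Mirror)"))  # 1000
```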
  • Page 159: To Edit A Volume

    Table 49 Requirements for changing volume characteristics (continued) Item Requirements for Changing Data protection level The cluster must have sufficient storage systems and unallocated space to support the new data protection level. For example, you just added more storage to a cluster and have more capacity.
  • Page 160: Deleting A Volume

    iSCSI sessions and volume migration iSCSI sessions are rebalanced during volume migration. While data is being migrated the volume is still accessible and fully functional. The rebalancing affects systems using the DSM for MPIO differently than systems that are not using the DSM for MPIO. Using DSM for MPIO—Administrative sessions are rebalanced to the new cluster immediately upon volume migration.
  • Page 161: To Delete The Volume

    Restrictions on deleting volumes You cannot delete a volume when the volume has a schedule that creates remote copies. You must delete the remote copy schedule first. CAUTION: Typically, you do not want to delete individual volumes that are part of a volume set. For example, you may set up Exchange to use two volumes to support a StorageGroup: one for mailbox data and one for logs.
  • Page 162: 13 Using Snapshots

    13 Using snapshots Snapshots are copies of a volume for use with backup and other applications. Types of snapshots Snapshots are one of the following types: Regular or point-in-time —Snapshot that is taken at a specific point in time. However, an application writing to that volume may not be quiesced.
  • Page 163: Planning Snapshots

    would run weekly and retain 5 copies. A third schedule would run monthly and keep 4 copies. File-level restore without tape or backup software Source volumes for data mining, test and development, and other data use. Best Practice—Use SmartClone volumes. See “SmartClone volumes”...
  • Page 164: Prerequisites For Application-Managed Snapshots

    Table 51 Snapshot characteristics (continued) Snapshot parameter What it means vCenter Server is installed. See the HP LeftHand Storage Application Aware Snapshot Manager Deployment Guide for more information about the controlling server IP address. Prerequisites for application-managed snapshots Creating an application-managed snapshot using the SAN/iQ software is the same as creating any other snapshot.
  • Page 165: Creating Snapshots

    Creating snapshots Create a snapshot to preserve a version of a volume at a specific point in time. For information about snapshot characteristics, see “Configuring snapshots” (page 163). Creating an application-managed snapshot, with or without volume sets, requires the use of the Application Aware Snapshot Manager.
  • Page 166: Editing A Snapshot

    (Optional) Edit the Snapshot Name and Description for each snapshot. NOTE: Be sure to leave the Application-Managed Snapshots check box selected. This option maintains the association of the volumes and snapshots and quiesces the application before creating the snapshots. If you clear the check box, the system creates a point-in-time snapshot of each volume listed.
  • Page 167: Requirements For Snapshot Schedules

    Table 53 Planning the scheduling for snapshots (continued) Requirement What it means If there is not sufficient room in the cluster for both snapshots, the scheduled snapshot will not be created, and the snapshot schedule will not continue until an existing snapshot is deleted or space is otherwise made available.
  • Page 168: Creating A Schedule To Snapshot A Volume

    volumes. If it is not, select a volume that is aware of all associated volumes, and create the schedule there. Updating schedule for volume sets When you first create the schedule, the system stores information about the volume set as it exists at that time.
  • Page 169: Editing Scheduled Snapshots

    Editing scheduled snapshots You can edit everything in the scheduled snapshot window except the name. If the snapshot is part of a snapshot set, you can also verify that the volumes included in the schedule are the current volumes in the volume set. For more information, see “Scheduling snapshots for volume sets”...
  • Page 170: Deleting Schedules To Snapshot A Volume

    Deleting schedules to snapshot a volume NOTE: After you delete a snapshot schedule, if you want to delete snapshots created by that schedule, you must do so manually. In the navigation window, select the volume for which you want to delete the snapshot schedule. Click the Schedules tab to bring it to the front.
  • Page 171: Making A Windows Application-Managed Snapshot Available

    Configure server access to the snapshot If you mount a Windows application-managed snapshot as a volume, use diskpart.exe to change the resulting volume's attributes, as described in “Making a Windows application-managed snapshot available” (page 171). When you have mounted the snapshot on a host, you can do the following: Recover individual files or folders and restore to an alternate location Use the data for creating backups Making a Windows application-managed snapshot available...
  • Page 172 Exit diskpart by typing exit. Reboot the server. Verify that the disk is available by launching Windows Logical Disk Manager. You may need to assign a drive letter, but the disk should be online and available for use. If the server is running Windows 2008 or later and you promoted a remote application-managed snapshot to a primary volume, start the HP LeftHand Command-Line Interface and clear the VSS volume flag by typing clearvssvolumeflags volumename=[drive_letter](where [drive_letter] is the corresponding drive letter, such...
  • Page 173: Managing Snapshot Temporary Space

    Display the volume's attributes by typing att vol. The volume will show that it is hidden, read-only, and shadow copy. Change these attributes by typing att vol clear readonly hidden shadowcopy. Exit diskpart by typing exit. Reboot the server. Verify that the disk is available by launching Windows Logical Disk Manager. You may need to assign a drive letter, but the disk should be online and available for use.
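The diskpart sequence above can be captured in a script file and run non-interactively with diskpart /s. A sketch, using the full form of the att vol shorthand shown above (the volume number 1 and file name are examples; run list volume first to identify the mounted snapshot):

```
rem clear_snapshot_attrs.txt -- run with: diskpart /s clear_snapshot_attrs.txt
rem Volume 1 is an example; use "list volume" to find the mounted snapshot.
select volume 1
attributes volume
attributes volume clear readonly hidden shadowcopy
exit
```

After the script completes, reboot the server and verify the disk in Windows Logical Disk Manager as described above.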
  • Page 174: Rolling Back A Volume To A Snapshot Or Clone Point

    In the navigation window, select the snapshot for which you want to delete the temporary space. Right-click, and select Delete Temporary Space. A warning message opens. Click OK to confirm the delete. Rolling back a volume to a snapshot or clone point Rolling back a volume to a snapshot or a clone point replaces the original volume with a read/write copy of the selected snapshot.
  • Page 175: Continue With Standard Roll Back

    Log in to the management group that contains the volume that you want to roll back. In the navigation window, select the snapshot to which you want to roll back. Review the snapshot Details tab to ensure you have selected the correct snapshot. Click Snapshot Tasks on the Details tab, and select Roll Back Volume.
  • Page 176: Roll Back All Associated Volumes

    Click New SmartClone Volume. Enter a name, and configure the additional settings. For more information about characteristics of SmartClone volumes, see “Defining SmartClone volume characteristics” (page 181). Click OK when you have finished setting up the SmartClone volume and updated the table. The new volume appears in the navigation window, with the snapshot now a designated clone point for both volumes.
  • Page 177: Cancel The Rollback Operation

    Cancel the rollback operation If you need to log off iSCSI sessions, stop application servers, or perform other actions, cancel the operation, perform the necessary tasks, and then do the rollback. Click Cancel. Perform necessary actions. Start the rollback again. Deleting a snapshot When you delete a snapshot, the data necessary to maintain volume consistency are moved up to the next snapshot or to the volume (if it is a primary volume), and the snapshot is removed from the navigation window.
  • Page 178: 14 Smartclone Volumes

    14 SmartClone volumes SmartClone volumes are space-efficient copies of existing volumes or snapshots. They appear as multiple volumes that share a common snapshot, called a clone point. They share this snapshot data on the SAN. SmartClone volumes can be used to duplicate configurations or environments for widespread use, quickly and without consuming disk space for duplicated data.
  • Page 179: Example Scenarios For Using Smartclone Volumes

    Table 55 Terms used for SmartClone features (continued) Term Definition (page 179), the snapshots Volume_1_SS_1 and Volume_1_SS_2 are shared snapshots. Map view Tab that displays the relationships between clone points and SmartClone volumes. See the map view in Figure 80 (page 191) and Figure 81 (page 192).
  • Page 180: Safely Use Production Data For Test, Development, And Data Mining

    Safely use production data for test, development, and data mining Use SmartClone volumes to safely work with your production environment in a test and development environment, before going live with new applications or upgrades to current applications. Or, clone copies of your production data for data mining and analysis. Test and development Using the SmartClone process, you can instantly clone copies of your production LUNs and mount them in another environment.
  • Page 181: Naming Convention For Smartclone Volumes

    Naming convention for SmartClone volumes A well-planned naming convention helps when you have many SmartClone volumes. Plan the naming ahead of time, since you cannot change volume or snapshot names after they have been created. You can design a custom naming convention when you create SmartClone volumes. Naming and multiple identical disks in a server Mounting multiple identical disks to servers typically requires that servers write new disk signatures to them.
  • Page 182: Naming Smartclone Volumes

    Table 56 Characteristics for new SmartClone volumes (continued) SmartClone volume characteristic What it means more information, see “Assigning iSCSI server connections access to volumes” (page 205). Permission Type of access to the volume: Read, Read/Write, None Naming SmartClone volumes Because you may create dozens or even hundreds of SmartClone volumes, you need to plan the naming convention for them.
  • Page 183: Shared Versus Individual Characteristics

    Figure 71 Rename SmartClone volume from base name 1. Rename SmartClone volume in list Shared versus individual characteristics Characteristics for SmartClone volumes are the same as for regular volumes. However, certain characteristics are shared among all the SmartClone volumes and snapshots created from a common clone point.
  • Page 184 Figure 72 Programming cluster with SmartClone volumes, clone point, and the source volume 1. Source volume 2. Clone point 3. SmartClone volumes (5) In this example, you edit the SmartClone volume, and on the Advanced tab you change the cluster to SysAdm.
  • Page 185: Clone Point

    Figure 74 SysAdm cluster now has the SmartClone volumes, clone point, and the source volume Table 57 (page 185) shows the shared and individual characteristics of SmartClone volumes. Note that if you change the cluster or the data protection level of one SmartClone volume, the cluster and data protection level of all the related volumes and snapshots will change.
  • Page 186 clone point. That is, you can delete all but one of the SmartClone volumes, and then you can delete the clone point. Figure 75 Navigation window with clone point 1. Original volume 2. Clone point 3. SmartClone volume Figure 75 (page 186), the original volume is “C#.”...
  • Page 187: Shared Snapshot

    Figure 76 Clone point appears under each SmartClone volume 1. Clone point appears multiple times. Note that it is exactly the same in each spot NOTE: Remember that a clone point only takes up space on the SAN once. Shared snapshot Shared snapshots occur when a clone point is created from a newer snapshot that has older snapshots below it in the tree.
  • Page 188 Figure 77 Navigation window with shared snapshots 1. Original volume 2. Clone point 3. Shared snapshots Figure 77 (page 188), the original volume is C#. Three snapshots were created from C#: C#_snap1 C#_snap2 C#_SCsnap Then a SmartClone volume was created from the latest snapshot, C#_SCsnap. That volume has a base name of C#_class.
  • Page 189: Creating Smartclone Volumes

    Creating SmartClone volumes You create SmartClone volumes from existing volumes or snapshots. When you create a SmartClone volume from another volume, you first take a snapshot of the original volume. When you create a SmartClone volume from a snapshot, you do not take another snapshot. To create a SmartClone volume When you create SmartClone volumes, you either set the characteristics for the entire group or set them individually.
  • Page 190: Viewing Smartclone Volumes

    Next you select the following characteristics: Base name for the SmartClone volumes Type of provisioning Server you want connected to the volumes, and Appropriate permission. In the Quantity field, select the number of SmartClone volumes you want to create. Click Update Table to populate the table with the number of SmartClone volumes you selected. If you want to modify any individual characteristic, do it in the list before you click OK to create the SmartClone volumes.
  • Page 191: Using Views

    Figure 80 Viewing SmartClone volumes and snapshots as a tree in the Map View Using views The default view is the tree layout, displayed in Figure 80 (page 191). The tree layout is the most effective view for smaller, more complex hierarchies with multiple clone points, such as clones of clones, or shared snapshots.
  • Page 192: Viewing Clone Points, Volumes, And Snapshots

    Figure 81 Viewing the organic layout of SmartClone volumes and related snapshots in the Map View Viewing clone points, volumes, and snapshots The navigation window view of SmartClone volumes, clone points, and snapshots includes highlighting that shows the relationship between related items. For example, in Figure 82 (page 193), the clone point is selected in the tree.
  • Page 193: Editing Smartclone Volumes

    Figure 82 Highlighting all related clone points in navigation window 1. Selected clone point 2. Clone point repeated under SmartClone volumes Editing SmartClone volumes Use the Edit Volume window to change the characteristics of a SmartClone volume. Table 60 Requirements for changing SmartClone volume characteristics Item Shared or Individual Requirements for Changing...
  • Page 194: To Edit The Smartclone Volumes

    Table 60 Requirements for changing SmartClone volume characteristics (continued) Item Shared or Individual Requirements for Changing Type Individual Determines whether the volume is primary or remote. Provisioning Individual Determines whether the volume is fully provisioned or thinly provisioned. To edit the SmartClone volumes In the navigation window, select the SmartClone volume for which you want to make changes.
  • Page 195: Deleting Multiple Smartclone Volumes

    Deleting multiple SmartClone volumes Delete multiple SmartClone volumes in a single operation from the Volume and Snapshots node of the cluster. First you must stop any application servers that are using the volumes, and log off any iSCSI sessions. Select the Volumes and Snapshots node to display the list of SmartClone volumes in the cluster. Figure 84 List of SmartClone volumes in cluster Use Shift+Click to select the SmartClone volumes to delete.
  • Page 196: 15 Working With Scripting

    15 Working with scripting The HP LeftHand Command-Line Interface (CLI) is built upon the SAN/iQ API. Use the CLI to develop automation and scripting and to perform storage management. Install the CLI from the HP LeftHand Management Software DVD or download the software from http://www.hp.com/go/LeftHandDownloads. Documentation: You can also download sample scripts that illustrate common uses for the CLI.
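As a sketch of what CLI automation looks like, the fragment below creates a nightly snapshot from a batch file. The command and parameter names follow the CLI's parameter=value style (the same style as the clearvssvolumeflags example earlier in this guide), but they are illustrative from memory; verify every command, parameter, address, and credential against the CLI documentation and the downloadable sample scripts before use.

```
rem Illustrative only -- check command and parameter names against the
rem HP LeftHand CLI documentation. Volume name, IP, and credentials are examples.
cliq createSnapshot volumeName=ExchLogs snapshotName=ExchLogs_nightly login=10.0.1.3 userName=admin passWord=secret
```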
  • Page 197: 16 Controlling Server Access To Volumes

    16 Controlling server access to volumes Application servers (servers), also called clients or hosts, access storage volumes on HP LeftHand Storage using either Fibre Channel or iSCSI connectivity. You set up each server that needs to connect to volumes in a management group in the CMC. We refer to this setup as a “server connection.”...
  • Page 198: Planning Server Connections To Management Groups

    Planning server connections to management groups Add each server connection that needs access to a volume to the management group containing the volume. After you add a server connection to a management group, you can assign the server connection to one or more volumes or snapshots. For more information, see “Planning iSCSI server connections to management groups”...
  • Page 199: Adding An Iscsi Server Connection

    Adding an iSCSI server connection In the navigation window, log in to the management group. Click Management Group Tasks, and select New Server. On the iSCSI tab, enter a name and optional description for the server connection. If you are taking VMware application-managed snapshots, enter the Controlling Server IP Address.
  • Page 200: Deleting An Iscsi Server Connection

    You can also delete an iSCSI server connection from the management group. For more information, “Deleting an iSCSI server connection” (page 200). CAUTION: If you change the load balancing or CHAP options, you must log off and log back on to the target in the iSCSI initiator for the changes to take effect.
  • Page 201: Planning Fibre Channel Server Connections To Management Groups

    Planning Fibre Channel server connections to management groups Add each server connection that needs access to a volume to the management group containing the volume. After you add a server connection to a management group, you can assign the server connection to one or more volumes or snapshots.
  • Page 202: Deleting A Manually Configured Fibre Channel Initiator

    In the New Server window on the iSCSI tab, clear Allow access via iSCSI. Click the Fibre Channel tab. Enter a name and optional description for the server connection. If you are taking VMware application-managed snapshots, enter the Controlling Server IP Address.
  • Page 203: Clustering Server Connections

    IMPORTANT: Do not use the LUN until you configure MPIO. Setting up MPIO Run the MPIO applet and choose the Discover Multi-paths tab. Select the LEFTHANDP4000 disk for the Device Hardware ID. Click Add. Reboot the storage system when prompted. Clustering server connections You can cluster servers to assign multiple server connections to multiple volumes in a single operation.
  • Page 204: Viewing The Relationship Between Storage Systems, Volumes, And Servers

    NOTE: The Server cluster Settings window opens automatically if inconsistencies are detected in the settings for the servers and volumes. On the Server cluster Settings window, choose the proper settings for the server cluster. Ensure that each volume listed has the same access permissions. For iSCSI, select the appropriate radio button for the load balancing setting on each server.
  • Page 205: Assigning Iscsi Server Connections Access To Volumes

    Figure 86 (page 205). You must manually change the server and volume associations to the desired configuration after deleting the server cluster. Figure 86 Servers and volumes retain connections after server cluster is deleted 1. Each volume remains connected to each server after the server cluster is deleted To delete a server cluster and remove connections: In the navigation window, select Servers and then select the server cluster to delete.
  • Page 206: Assigning Server Iscsi Connections From A Volume

    Table 65 Server connection permission levels Type of Access Allows This No access Prevents the server from accessing the volume or snapshot. Read access Restricts the server to read-only access to the data on the volume or snapshot. Read/write access Allows the server read and write permissions to the volume.
  • Page 207: Assigning Fibre Channel Servers From A Volume

    When assigning the server connections to volumes and snapshots, you set the LUN and the permissions for that volume or snapshot. Permission levels are described in Table 65 (page 206). Assigning Fibre Channel servers from a volume Assign one or more server connections to a volume or snapshot. In the navigation window, right-click the volume you want to assign server connections to.
  • Page 208: Editing Server Assignments From A Server Connection

    In the navigation window, right-click the volume whose server connection assignments you want to edit. Select Assign and Unassign Servers. Change the settings as needed. Click OK. Editing server assignments from a server connection You can edit the assignment of one or more volumes or snapshots to any server connection. In the navigation window, right-click the server connection you want to edit.
  • Page 209: 17 Monitoring Performance

    17 Monitoring performance The Performance Monitor provides performance statistics for iSCSI and storage system I/Os to help you and HP support and engineering staff understand the load that the SAN is servicing. The Performance Monitor presents real-time performance data in both tabular and graphical form as an integrated feature in the CMC.
  • Page 210: Current San Activities Example

    Generally, the Performance Monitor can help you determine: Current SAN activities Workload characterization Fault isolation Current SAN activities example This example shows that the Denver cluster is handling an average of more than 747 IOPS with an average throughput of more than 6 million bytes per second and an average queue depth of 31.76.
  • Page 211: What Can I Learn About My Volumes

    Figure 89 Example showing fault isolation What can I learn about my volumes? If you have questions such as these about your volumes, the Performance Monitor can help: Which volumes are accessed the most? What is the load being generated on a specific volume? The Performance Monitor can let you see the following: Most active volumes Activity generated by a specific server...
  • Page 212: Activity Generated By A Specific Server Example

    Figure 91 Example showing throughput of two volumes Activity generated by a specific server example This example shows the total IOPS and throughput generated by the server (ExchServer-1) on two volumes. Figure 92 Example showing activity generated by a specific server Planning for SAN improvements If you have questions such as these about planning for SAN improvements, the Performance Monitor can help:...
  • Page 213: Load Comparison Of Two Clusters Example

    Figure 93 Example showing network utilization of three storage systems Load comparison of two clusters example This example illustrates the total IOPS, throughput, and queue depth of two different clusters (Denver and Boulder), letting you compare the usage of those clusters. You can also monitor one cluster in a separate window while doing other tasks in the CMC.
  • Page 214: Accessing And Understanding The Performance Monitor Window

    Figure 95 Example comparing two volumes Accessing and understanding the Performance Monitor window The Performance Monitor is available as a tree system below each cluster. To display the Performance Monitor window: In the navigation window, log in to the management group. Select the Performance Monitor system for the cluster you want.
  • Page 215: Performance Monitor Toolbar

    For more information about the performance monitor window, see the following: “Performance Monitor toolbar” (page 215) “Performance monitor graph” (page 215) “Performance monitor table” (page 216) Performance Monitor toolbar The toolbar lets you change some settings and export data. Figure 97 Performance Monitor toolbar Button or Status Definition 1.
  • Page 216: Performance Monitor Table

    Figure 98 Performance Monitor graph The graph shows the last 100 data samples and updates the samples based on the sample interval setting. The vertical axis uses a scale of 0 to 100. Graph data is automatically adjusted to fit the scale. For example, if a statistic value was larger than 100, say 4,000.0, the system would scale it down to 40.0 using a scaling factor of 0.01.
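The scaling behavior described above (4,000.0 plotted as 40.0 with a factor of 0.01) can be sketched as choosing the power-of-ten factor that brings a value into the 0 to 100 axis range. This is an illustration of the described behavior, not the actual CMC implementation.

```python
import math

def scaling_factor(value):
    """Pick a power-of-ten factor that fits value onto the 0-100 axis.

    Sketch of the graph behavior described in the guide: a sample of
    4000.0 gets factor 0.01 and is plotted as 40.0.
    """
    if value <= 100:
        return 1.0
    # Smallest power of ten that divides the value down to at most 100.
    return 10.0 ** -math.ceil(math.log10(value / 100.0))

factor = scaling_factor(4000.0)
print(factor, 4000.0 * factor)  # factor 0.01, plotted value 40.0
```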
  • Page 217: Understanding The Performance Statistics

    Table 66 Performance Monitor table columns (continued) Column Definition Units Unit of measure for the statistic. Value Current sample value for the statistic. Minimum Lowest recorded sample value of the last 100 samples. Maximum Highest recorded sample value of the last 100 samples. Average Average of the last 100 recorded sample values.
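The table columns above (Value, Minimum, Maximum, Average over the last 100 samples) amount to rolling statistics over a fixed window. A minimal sketch of that bookkeeping (illustrative, not CMC code):

```python
from collections import deque

class RollingStat:
    """Keep the last N samples of one statistic and report the columns
    described above: current value, minimum, maximum, and average."""
    def __init__(self, window=100):
        self.samples = deque(maxlen=window)  # old samples fall off automatically

    def add(self, value):
        self.samples.append(value)

    @property
    def value(self):
        return self.samples[-1]

    @property
    def minimum(self):
        return min(self.samples)

    @property
    def maximum(self):
        return max(self.samples)

    @property
    def average(self):
        return sum(self.samples) / len(self.samples)

iops = RollingStat()
for v in (700, 800, 741):   # three sample intervals of IOPS Total
    iops.add(v)
print(iops.minimum, iops.maximum, iops.average)  # 700 800 747.0
```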
  • Page 218 Table 67 Performance Monitor statistics Statistic Definition Cluster Volume or Snapshot IOPS Reads Average read requests per second for the sample interval. IOPS Writes Average write requests per second for the sample interval. IOPS Total Average read+write requests per second for the sample interval.
  • Page 219: Monitoring And Comparing Multiple Clusters

    Table 67 Performance Monitor statistics (continued) Statistic Definition Cluster Volume or Snapshot system for the sample interval. Memory Utilization Percent of total memory used on this storage system for the sample interval. Network Utilization Percent of bidirectional network capacity used on this network interface on this storage system for the sample interval.
  • Page 220: Access Size

    Access size The size of a read or write operation. As this size increases, throughput usually increases because a disk access consists of a seek and a data transfer. With more data to transfer, the relative cost of the seek decreases. Some applications allow tuning the size of read and write buffers, but there are practical limits to this.
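The relationship between access size, IOPS, and throughput implied above is simply throughput = IOPS x access size. A quick check against the Denver cluster example earlier in this chapter (747 IOPS, roughly 6 million bytes per second), assuming a hypothetical 8 KiB access size:

```python
def throughput_bytes_per_s(iops, access_size_bytes):
    """Throughput is IOPS times access size, per the relation above."""
    return iops * access_size_bytes

# 747 IOPS at an assumed 8 KiB per request:
print(throughput_bytes_per_s(747, 8192))  # 6119424, about 6 MB/s
```

This is why larger access sizes usually raise throughput: each seek moves more data.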
  • Page 221: Viewing Statistic Details

    Click the Add Statistics button. Figure 101 Add Statistics window. From the Select Object list, select the cluster, volumes, and storage systems you want to monitor. Use the CTRL key to select multiple objects from the list. From the Select Statistics options, select the option you want. Add All—Adds all available statistics for each selected object.
  • Page 222: Removing And Clearing Statistics

    Removing and clearing statistics You can remove or clear statistics in any of the following ways: Remove one or more statistics from the table and graph Clear the sample data, but retain the statistics in the table Clear the graph display, but retain the statistics in the table Reset to the default statistics Removing a statistic You can remove one or more statistics from the table and graph.
  • Page 223: Changing The Graph

    From the Performance Monitor window, click the pause button to pause the monitoring. All data remain as they were when you paused. To restart the monitoring, click the play button. Data updates when the next sample interval elapses. The graph will have a gap in the time. Changing the graph You can change the graph and its lines in the following ways: “Hiding and showing the graph”...
  • Page 224: Changing The Scaling Factor

    Changing the scaling factor The vertical axis uses a scale of 0 to 100. Graph data is automatically adjusted to fit the scale. For example, if a statistic value was larger than 100, say 4,000.0, the system would scale it down to 40.0 using a scaling factor of 0.01.
  • Page 225: Saving The Graph To An Image File

    Click OK. The File Size field displays an estimated file size, based on the sample interval, duration, and selected statistics. 11. When the export information is set the way you want it, click OK to start the export. The export progress appears in the Performance Monitor window, based on the duration and elapsed time.
  • Page 226: 18 Registering Advanced Features

    18 Registering advanced features Advanced features expand the capabilities of the SAN/iQ software and are enabled by licensing the storage systems through the HP License Key Delivery Service website, using the license entitlement certificate that is packaged with each storage system. However, you can use the advanced features immediately by agreeing to enter an evaluation period when you begin using the SAN/iQ software for clustered storage.
  • Page 227: Backing Out Of Remote Copy Evaluation

    Identifying licensing status You can check the status of licensing on individual advanced features by the icons displayed. The violation icon appears throughout the evaluation period. Figure 102 Identifying the license status for advanced features Backing out of Remote Copy evaluation If you decide not to use Remote Copy and you have not obtained license keys by the end of the evaluation period, you must delete any remote volumes and snapshots you have configured.
  • Page 228: Turn Off Scripting Evaluation

    Read the text, and select the box to enable the use of scripts during a license evaluation period. Click OK. Turn off scripting evaluation Turn off the scripting evaluation period when you take either one of these actions: You obtain license keys for the feature you were evaluating. You complete the evaluation and decide not to license any advanced features.
  • Page 229: Registering Storage Systems In A Management Group

    Submitting storage system feature keys In the navigation window, select the storage system from the Available Systems pool for which you want to register advanced features. Select the Feature Registration tab. Select the Feature Key. Right-click, and select Copy. Use Ctrl+V to paste the feature key into a text editing program, such as Notepad. Register and generate the license key at the Webware website: https://webware.hp.com Entering license keys to storage systems...
  • Page 230 The Registration tab displays the following information: The license status of all the advanced features, including the progress of the evaluation period and which advanced features are in use and not licensed Version information about software components of the operating system Customer information (optional) Submitting storage system feature keys Submit the feature keys of all the storage systems in the management group.
  • Page 231: Saving And Editing Your Customer Information

    NOTE: Record the host name or IP address of the storage system along with the feature key. This record will make it easier to add the license key to the correct storage system when you receive the license keys. Entering license keys When you receive the license keys, add them to the storage systems in the Feature Registration window.
  • Page 232: Editing Your Customer Information File

    Make a customer information file for each management group in your SAN. Create or edit your customer profile. Save the customer profile to a computer that is not part of your SAN. Editing your customer information file Occasionally, you may want to change some of the information in your customer profile, for example, if your company moves or your contact information changes.
  • Page 233: 19 Hp Lefthand Storage Using Iscsi And Fibre Channel

    19 HP LeftHand Storage using iSCSI and Fibre Channel iSCSI and HP LeftHand Storage The SAN/iQ software uses the iSCSI protocol to let servers access volumes. For fault tolerance and improved performance, use a VIP and iSCSI load balancing when configuring server access to volumes.
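The host-side flow implied above — discover targets through the cluster's virtual IP (VIP), then log in so the cluster can load-balance the session — can be sketched as a small helper that builds the open-iscsi (`iscsiadm`) commands for a given VIP. This is an illustrative sketch only: the VIP value and IQN are hypothetical examples, and `iscsiadm` is the Linux open-iscsi initiator tool, not part of the SAN/iQ software itself.

```python
# Sketch: build host-side open-iscsi commands for connecting to a
# SAN/iQ cluster through its virtual IP (VIP). iSCSI uses TCP port
# 3260 by default; the VIP and target IQN below are hypothetical.

ISCSI_PORT = 3260

def discovery_command(vip: str) -> str:
    # Ask the cluster (via the VIP) which iSCSI targets it offers.
    return f"iscsiadm -m discovery -t sendtargets -p {vip}:{ISCSI_PORT}"

def login_command(vip: str, target_iqn: str) -> str:
    # Log in to one discovered target through the VIP, so the cluster
    # can redirect the session to the least busy storage system.
    return f"iscsiadm -m node -T {target_iqn} -p {vip}:{ISCSI_PORT} --login"

if __name__ == "__main__":
    vip = "10.0.1.100"  # hypothetical cluster VIP
    print(discovery_command(vip))
    print(login_command(vip, "iqn.2003-10.com.lefthandnetworks:mg1:5:vol1"))
```

In practice you would run the emitted commands on the application server; logging in through the VIP rather than an individual storage system's address is what enables the failover and load-balancing behavior described above.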
  • Page 234: Authentication (Chap)

    Requirements Cluster configured with a virtual IP address. See “VIPs” (page 233). A compliant iSCSI initiator that supports iSCSI Login-Redirect and has passed HP's test criteria for iSCSI failover in a load balanced configuration. To determine which iSCSI initiators are compliant, view the HP LeftHand Storage Compatibility Matrix at http://www.hp.com/go/P4000compatibility.
  • Page 235: Iscsi And Chap Terminology

    Table 71 Requirements for configuring CHAP. Columns: CHAP level; what to configure for the server in the SAN/iQ software; what to configure in the iSCSI initiator. CHAP not required: initiator node name only in the SAN/iQ software; no configuration requirements in the initiator. 1-way CHAP: CHAP name* and target secret in the SAN/iQ software; enter the target secret (12-character minimum) when logging on to available targets...
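The rules in Table 71 can be condensed into a small validation sketch: a target secret must be at least 12 characters, and 2-way CHAP additionally requires an initiator secret of the same minimum length. The function and field names below are hypothetical illustrations, not part of the SAN/iQ software; only the 12-character rule comes from the table.

```python
# Sketch: validate a CHAP configuration against the rules summarized
# in Table 71. The 12-character minimum is from the table; the
# function signature is a hypothetical illustration.

MIN_SECRET_LEN = 12

def validate_chap(level: str, target_secret: str = "", initiator_secret: str = "") -> list:
    """Return a list of configuration problems (empty list means OK)."""
    problems = []
    if level == "none":
        return problems  # no CHAP: only the initiator node name is needed
    if len(target_secret) < MIN_SECRET_LEN:
        problems.append("target secret must be at least 12 characters")
    if level == "2-way" and len(initiator_secret) < MIN_SECRET_LEN:
        problems.append("initiator secret must be at least 12 characters")
    return problems
```

For example, `validate_chap("2-way", "longenoughsecret", "short")` reports only the initiator secret as too short, mirroring the per-column requirements in the table.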
  • Page 236 Figure 107 Viewing the initiator to copy the initiator node name Figure 108 (page 236) illustrates the configuration for a single host authentication with 1-way CHAP required. Figure 108 Configuring iSCSI for a single host with CHAP Figure 109 (page 237) illustrates the configuration for a single host authentication with 2-way CHAP required.
  • Page 237: Use The Hp Lefthand Dsm For Microsoft Mpio

    Figure 109 Adding an initiator secret for 2-way CHAP CAUTION: Allowing more than one iSCSI application server to connect to a volume concurrently in read/write mode, without shared storage access technology (host clustering or a clustered file system) and cluster-aware applications, could result in data corruption.
  • Page 238: Creating Fibre Channel Connectivity

    systems is reported differently and zoning is uniquely handled, as described in “Zoning” (page 238). For all other Fibre Channel configuration standards, see the HP SAN Design Reference Guide. Creating Fibre Channel connectivity Two or more storage systems enabled for Fibre Channel must be added to a management group to use Fibre Channel connectivity.
  • Page 239: 20 Using The Configuration Interface

    20 Using the Configuration Interface The Configuration Interface is the command line interface that uses a direct connection with the storage system. You may need to access the Configuration Interface if all network connections to the storage system are disabled. Use the Configuration Interface to perform the following tasks. Add storage system administrators and change passwords Access and configure network interfaces Delete a NIC bond...
  • Page 240: Opening The Configuration Interface From The Terminal Emulation Session

    $ xterm In the xterm window, start minicom as follows: $ minicom -c on -l NSM Opening the Configuration Interface from the terminal emulation session Press Enter when the terminal emulation session is established. Enter start, and press Enter at the login prompt. When the session is connected to the storage system, the Configuration Interface window opens.
  • Page 241: Deleting A Nic Bond

    Table 74 Identifying Ethernet interfaces on the storage system (continued). The interfaces Motherboard:Port1 and Motherboard:Port2 are labeled in the Configuration Interface as Intel Gigabit Ethernet or Broadcom Gigabit Ethernet, and on the label on the back of the storage system as Eth0, Eth1, or a graphical symbol similar to the following: Once you have established a connection to the storage system using a terminal emulation program, you can configure an interface connection using the Configuration Interface.
  • Page 242: Removing A Storage System From A Management Group

    TCP speed and duplex. You can change the speed and duplex of an interface. If you change these settings, you must ensure that both sides of the NIC cable are configured in the same manner. For example, if the storage system is set for Auto/Auto, the switch must be set the same. For more information about TCP speed and duplex settings, see “Managing settings on network interfaces”...
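The requirement that both ends of the link be configured identically (for example, Auto/Auto on the storage system and on the switch) can be expressed as a tiny consistency check. The dictionary representation below is a hypothetical illustration for reasoning about the rule, not an interface of the Configuration Interface.

```python
# Sketch: compare the speed/duplex settings of a storage system NIC
# and the switch port it is cabled to. Both ends must match, as the
# Configuration Interface documentation requires. The dict keys are
# a hypothetical modeling choice.

def link_mismatches(nic: dict, switch: dict) -> list:
    """Return the fields ('speed', 'duplex') on which the two ends disagree."""
    return [k for k in ("speed", "duplex") if nic.get(k) != switch.get(k)]

if __name__ == "__main__":
    nic = {"speed": "1000", "duplex": "full"}
    switch = {"speed": "auto", "duplex": "auto"}
    print(link_mismatches(nic, switch))  # both fields disagree
```

An empty result means the two sides are configured in the same manner; any returned field names point at the setting to correct on one side before traffic problems appear.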
  • Page 243: 21 Replacing Hardware

    21 Replacing hardware This chapter describes the disk replacement procedures for cases in which you do not know which disk to replace and/or you must rebuild RAID on the entire storage system. For example, if RAID has gone off unexpectedly, you need HP Support to help determine the cause, and if it is a disk failure, to identify which disk must be replaced.
  • Page 244: Verify The Storage System Is Not Running A Manager

    Verify the storage system is not running a manager Verify that the storage system that needs the disk replacement is not running a manager. Log in to the management group. Select the storage system in the navigation window, and review the Details tab information. If the Storage System Status shows Manager Normal, and the Management Group Manager shows Normal, then a manager is running and needs to be stopped.
  • Page 245: Rebuilding Data

    NOTE: If any Network RAID-0 volumes are offline, the message shown in Figure 110 (page 245) is displayed. You must either replicate or delete these volumes before you can proceed. Figure 110 Warning if volumes are Network RAID-0 Right-click the storage system in the navigation window, and select Repair Storage System.
  • Page 246: Returning The Storage System To The Cluster

    Select the Diagnostics category, and select the Hardware Information tab. Select Click to Refresh, and scroll to the RAID section of the Hardware report (Figure 111 (page 246)) to review the RAID rebuild rate and the percent complete. Click Hardware Information Tasks and select Refresh to monitor the ongoing progress. Figure 111 Checking RAID rebuild status Returning the storage system to the cluster In the navigation window, right-click the storage system, and select Add to Existing Management...
  • Page 247: Adding The Repaired Storage System To Cluster

    Adding the repaired storage system to cluster After the initialization completes, right-click the cluster, and select Edit Cluster. The list of the storage systems in the cluster should include the ghost IP address. Add the repaired storage system to the cluster in the spot held by the ghost IP address. See Table 75 (page 247).
  • Page 248: Removing The Ghost Storage System

    Right-click the management group, and select Edit Management Group. The current Bandwidth Priority value indicates that each manager in that management group will use that much bandwidth to transfer data to the repaired storage system. Make a note of the current value so it can be restored after the data rebuild completes. Change the bandwidth value as desired, and click OK.
  • Page 249 Figure 1 12 Storage server LEDs 1. Front UID/LED switch 2. System health LED 3. NIC 1 activity LED 4. NIC 2 activity LED 5. Power LED switch Table 76 Storage server LED descriptions Description Front UID/LED switch Steady blue: Identification Flashing blue: The system is being remotely managed Off: No identification System health LED...
  • Page 250: Removing The Raid Controller

    Figure 113 Card 1 location Figure 114 Card 2 location A cache module is attached to each RAID controller and each cache module is connected to a battery. The unit is called a backup battery with cache (BBWC). BBWC 1 connects to Card 1 and BBWC 2 connects to Card 2.
  • Page 251 Remove the top cover (Figure 1 15 (page 251)): Loosen the screw on the top cover with the T- 1 0 wrench. Press the latch on the top cover. Slide the cover toward the rear of the server and then lift the top cover to remove it from the chassis.
  • Page 252 The cache module is attached to the RAID controller and must be removed before removing the RAID controller. Each cache module is connected to a battery; observe the BBWC status LED (4, Figure 1 17 (page 252)) on both batteries before removing a cache module: If the LED is flashing every two seconds, data is trapped in the cache.
  • Page 253: Installing The Raid Controller

    Figure 119 Removing the cache module Remove the RAID controller from its slot. Installing the RAID controller IMPORTANT: The replacement RAID controller contains a new cache module. You must remove the cache module from the replacement controller board, attach the existing cache module to the replacement controller board, and reconnect the cache module to the battery cable.
  • Page 254: Verifying Proper Operation

    Figure 121 Installing Card 2 Reinstall the PCI cage (Figure 122 (page 254)): Align the PCI cage assembly to the system board expansion slot, and then press it down to ensure full connection to the system board. Tighten the thumbscrews to secure the PCI cage assembly to the system board and secure the screw on the rear panel of the chassis.
  • Page 255: 22 San/Iq Tcp And Udp Port Usage

    22 SAN/iQ TCP and UDP port usage Table 77 (page 255) lists the TCP and UDP ports that enable communication with SAN/iQ. The “management applications” listed in the Description column include the HP LeftHand Centralized Management Console and the scripting interface. Table 77 TCP/UDP ports used for normal SAN operations with SAN/iQ IP Protocol Port(s)
  • Page 256 Table 77 TCP/UDP ports used for normal SAN operations with SAN/iQ (continued) IP Protocol Port(s) Name Description 13847 SAN/iQ Internal Used for Virtual Manager communication 13848 SAN/iQ Internal Used for internal data distribution and resynchronization. 13849 iSCSI iSCSI initiators connect to this port when using the HP LeftHand DSM for Microsoft MPIO.
  • Page 257 Table 77 TCP/UDP ports used for normal SAN operations with SAN/iQ (continued) IP Protocol Port(s) Name Description required for normal day-to-day operations. TCP, UDP 13838, 13845, 13841, SAN/iQ Internal Outgoing from management 13843 applications. Incoming to storage systems. Used for management and control.
  • Page 258: 23 Third-Party Licenses

    Apache Software Foundation, The Legion of the Bouncy Castle, Free Software Foundation, Inc., and OpenPegasus. Other included software is under license agreements with Hewlett-Packard Development Company, L.P.; IBM Corp.; EMC Corporation; Symantec Corporation; and The Open Group. In addition, the software described in this manual includes open source software developed by: Copyright (c) 2005-2008, Kirill Grouchnikov and contributors All rights reserved.
  • Page 259: 24 Support And Other Resources

    24 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.com/support Before contacting HP, collect the following information: Product model names and numbers Technical support registration number (if applicable) Product serial numbers Error messages Operating system type and revision level Detailed questions Subscription service...
  • Page 260: Related Information

    initiate a fast and accurate resolution, based on your product’s service level. Notifications may be sent to your authorized HP Channel Partner for on-site service, if configured and available in your country. The software is available in two variants: HP Insight Remote Support Standard: This software supports server and storage devices and is optimized for environments with 1-50 servers.
  • Page 261: 25 Documentation Feedback

    25 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
  • Page 262: Glossary

    Glossary The following glossary provides definitions of terms used in the SAN/iQ software and the HP P4000 SAN Solution. acting primary The remote volume, when it assumes the role of the primary volume in a failover scenario. volume Active-Passive A type of network bonding which, in the event of a NIC failure, causes the logical interface to use another NIC in the bond until the preferred NIC resumes operation.
  • Page 263 Device Specific Module. DSM for MPIO The HP P4000 DSM for MPIO vendor-specific DSM that interfaces with the Microsoft MPIO framework. failback After failover, the process by which you restore the primary volume and turn the acting primary back into a remote volume. failover The process by which the user transfers operation of the application server over to the remote volume.
  • Page 264 Multi-Site cluster A cluster of storage that spans multiple sites (up to three). A Multi-Site cluster must meet at least one of the following conditions: Contain storage systems that reside in two or more sites Contain storage systems that span subnets Contain multiple VIPs.
  • Page 265 RAID status Condition of RAID on the storage system: Normal - RAID is synchronized and running. No action is required. Rebuild - A new disk has been inserted in a drive bay and RAID is currently rebuilding. No action is required. Degraded - RAID is not functioning properly.
  • Page 266 shared snapshot Shared snapshots occur when a clone point is created from a newer snapshot that has older snapshots below it in the tree. All the volumes created from the clone point will display these older snapshots that they share, as well as the clone point. site A user-designated location in which storage systems are installed.
  • Page 267 volume set Two or more volumes used by an application. For example, you may set up Exchange to use two volumes to support a StorageGroup: one for mailbox data and one for logs. Those two volumes make a volume set. volume size The size of the virtual device communicated to the operating system and the applications.
  • Page 268: Index

