HP StoreAll 9730 Administrator's Manual

HP IBRIX X9720/StoreAll 9730 Storage
Administrator Guide
Abstract
This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component
replacement, and troubleshooting. It does not document StoreAll file system features or standard Linux administrative tools and
commands. For information about configuring and using StoreAll file system features, see the HP StoreAll Storage File System User Guide.
This guide is intended for system administrators and technicians who are experienced with installing and administering networks, and with performing Linux operating and administrative tasks. For the latest StoreAll guides, browse to http://www.hp.com/support/StoreAllManuals.
HP Part Number: AW549-96073
Published: July 2013
Edition: 14


Summary of Contents for HP StoreAll 9730

  • Page 1 HP IBRIX X9720/StoreAll 9730 Storage Administrator Guide Abstract This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component replacement, and troubleshooting. It does not document StoreAll file system features or standard Linux administrative tools and commands.
  • Page 2 The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
  • Page 3: Table Of Contents

    File system unmount issues....................23 File system in MIF state after StoreAll software 6.3 upgrade.............23 2 Product description...................25 System features........................25 System components.........................25 HP StoreAll software features....................25 High availability and redundancy.....................26 3 Getting started..................27 Setting up the X9720/9730 Storage..................27 Installation steps........................27 Additional configuration steps.....................27 Logging in to the system......................28...
  • Page 4 Configuring HP Insight Remote Support on StoreAll systems............36 Configuring the StoreAll cluster for Insight Remote Support............38 Configuring Insight Remote Support for HP SIM 7.1 and IRS 5.7..........41 Configuring Insight Remote Support for HP SIM 6.3 and IRS 5.6..........44 Testing the Insight Remote Support configuration..............47 Updating the Phone Home configuration................47...
  • Page 5 7 Configuring system backups...............76 Backing up the Fusion Manager configuration................76 Using NDMP backup applications....................76 Configuring NDMP parameters on the cluster................77 NDMP process management....................78 Viewing or canceling NDMP sessions................78 Starting, stopping, or restarting an NDMP Server..............78 Viewing or rescanning tape and media changer devices............79 NDMP events........................79 8 Creating host groups for StoreAll clients............80 How host groups work......................80...
  • Page 6 Finding additional information on FMT................140 Adding performance modules on 9730 systems..............140 Adding new server blades on 9720 systems................141 14 Troubleshooting..................143 Collecting information for HP Support with the IbrixCollect............143 Collecting logs........................143 Downloading the archive file.....................144 Deleting the archive file....................144 Configuring Ibrix Collect....................145 Obtaining custom logging from ibrix_collect add-on scripts............146...
  • Page 7 Troubleshooting........................165 Manually recovering bond1 as the cluster................165 iLO remote console does not respond to keystrokes...............169 The ibrix_auth command fails after a restore................169 16 Support and other resources..............170 Contacting HP........................170 Related information.......................170 Obtaining spare parts......................171 HP websites.........................171 Rack stability........................171 Product warranties........................171 Subscription service......................171...
  • Page 8 StoreAll 9730 CX 2 connections to the SAS switches..............204 StoreAll 9730 CX 3 connections to the SAS switches..............205 StoreAll 9730 CX 7 connections to the SAS switches in the expansion rack........206 C The IBRIX X9720 component and cabling diagrams........207 Base and expansion cabinets....................207 Front view of a base cabinet....................207...
  • Page 9 D Warnings and precautions..............220 Electrostatic discharge information..................220 Preventing electrostatic discharge..................220 Grounding methods.....................220 Grounding methods......................220 Equipment symbols.......................221 Weight warning........................221 Rack warnings and precautions....................221 Device warnings and precautions...................222 E Regulatory information................224 Belarus Kazakhstan Russia marking..................224 Turkey RoHS material content declaration.................224 Ukraine RoHS material content declaration................224 Warranty information......................224 Glossary....................225 Index.......................227...
  • Page 10: Upgrading The Storeall Software To The 6.3 Release

    1 Upgrading the StoreAll software to the 6.3 release This chapter describes how to upgrade to the 6.3 StoreAll software release. You can also use this procedure for any subsequent 6.3.x patches. IMPORTANT: Print the following table and check off each step as you complete it. NOTE: (Upgrades from version 6.0.x) CIFS share permissions are granted on a global basis in v6.0.X.
  • Page 11 Table 1 Prerequisites checklist for all upgrades (continued) Step Step Description completed? Set the crash kernel to 256M in the /etc/grub.conf file. The /etc/grub.conf file might contain multiple instances of the crash kernel parameter. Make sure you modify each instance that appears in the file. NOTE: Save a copy of the /etc/grub.conf file before you modify it.
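    For example, after saving a copy of /etc/grub.conf, you can locate each kernel line and confirm the setting. A rough sketch using standard Linux commands (the kernel version and device paths below are placeholders, not taken from this guide):
    cp /etc/grub.conf /etc/grub.conf.orig     (keep a backup copy before editing)
    grep crashkernel /etc/grub.conf           (show every kernel line that carries the parameter)
    Each kernel line should end up reading something like: kernel /vmlinuz-<version> ro root=<root-device> crashkernel=256M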
  • Page 12: Upgrading 9720 Chassis Firmware

    To upgrade the firmware, complete the following steps: Go to http://www.hp.com/go/StoreAll. On the HP StoreAll Storage page, select HP Support & Drivers from the Support section. On the Business Support Center, select Download Drivers and Software and then select HP 9720 Base Rack >...
  • Page 13: Preparing For The Upgrade

    This release is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox. Ensure that the /local/ibrix/ folder is empty prior to copying the contents of pkgfull.
  • Page 14: After The Upgrade

    This release is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox. Ensure that the /local/ibrix/ folder is empty prior to copying the contents of pkgfull.
  • Page 15: After The Upgrade

    Server, the Fusion Manager is installed in passive mode on that server. Upgrade Linux StoreAll clients. See “Upgrading Linux StoreAll clients” (page 18). If you received a new license from HP, install it as described in “Licensing” (page 135). After the upgrade Complete the following steps: If your cluster nodes contain any 10Gb NICs, reboot these nodes to load the new driver.
  • Page 16 /stage /alt Verify that all FSN servers have a minimum of 4 GB of free/available storage on the /local partition by using the df command. Verify that no FSN server reports any partition as 100% full (at least 5% free space), also by using the df command.
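    Both checks can be made with the standard df command, for example:
    df -h /local     (the Avail column should show at least 4 GB free)
    df -h            (no partition should show 100% in the Use% column; keep at least 5% free)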
  • Page 17: Performing The Upgrade Manually

    This release is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox. Ensure that the /local/ibrix/ folder is empty prior to copying the contents of pkgfull.
  • Page 18: Upgrading Linux Storeall Clients

    /etc/init.d/ibrix_client status IBRIX Filesystem Drivers loaded IBRIX IAD Server (pid 3208) running... The IAD service should be running, as shown in the previous sample output. If it is not, contact HP Support. Installing a minor kernel update on Linux clients The StoreAll client software is upgraded automatically when you install a compatible Linux minor kernel update.
  • Page 19: Upgrading Windows Storeall Clients

    The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp: # /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp Kernel update 2.6.9-89.35.1.ELsmp is compatible. If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system. The StoreAll client software is then automatically updated with the new kernel, and StoreAll client services start automatically.
  • Page 20: Required Steps After The Storeall Upgrade For Pre-6.3 Express Query Enabled File Systems

    If any archive API shares exist for the file system, delete them. NOTE: To list all HTTP shares, enter the following command: ibrix_httpshare -l To list only REST API (Object API) shares, enter the following command: ibrix_httpshare -l -f <FSNAME> -v 1 | grep "objectapi: true" | awk '{ print $2 }' In this instance <FSNAME>...
  • Page 21: Troubleshooting Upgrade Issues

    In this instance <FSNAME> is the file system. Troubleshooting upgrade issues If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support. Automatic upgrade Check the following: If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors.
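    A quick way to inspect that log for problems (standard Linux commands, shown only as a sketch):
    tail -n 100 /usr/local/ibrix/setup/upgrade.log      (review the most recent upgrade messages)
    grep -i error /usr/local/ibrix/setup/upgrade.log    (search the whole log for reported errors)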
  • Page 22: Manual Upgrade

    To retry the copy of configuration, use the following command: /usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s If the install of the new image succeeds, but the configuration restore fails and you need to revert the server to the previous install, run the following command and then reboot the machine. This step causes the server to boot from the old version (the alternate partition).
  • Page 23: File System Unmount Issues

    On the node now hosting the active Fusion Manager (ib51-102 in the example), unregister node ib51-101: [root@ib51-102 ~]# ibrix_fm -u ib51-101 Command succeeded! On the node hosting the active Fusion Manager, register node ib51-101 and assign the correct IP address: [root@ib51-102 ~]# ibrix_fm -R ib51-101 -I 10.10.51.101 Command succeeded! NOTE:...
  • Page 24 If you did not see the “Version mismatch, upgrade needed” message in the command’s output, see “Troubleshooting an Express Query Manual Intervention Failure (MIF)” (page 152). Perform the following steps only if you see the “Version mismatch, upgrade needed” message in the command’s output: Disable auditing by entering the following command: ibrix_fs -A -f <FSNAME>...
  • Page 25: Product Description

    2 Product description HP X9720 and 9730 Storage are scalable, network-attached storage (NAS) products. Each system combines HP StoreAll software with HP server and storage hardware to create a cluster of file serving nodes. System features The X9720 and 9730 Storage provide the following features:...
  • Page 26: High Availability And Redundancy

    Multiple environments. Operates in both the SAN and DAS environments. High availability. The high-availability software protects servers. Tuning capability. The system can be tuned for large or small-block I/O. Flexible configuration. Segments can be migrated dynamically for rebalancing and data tiering.
  • Page 27: Getting Started

    Follow these guidelines when using your system: Do not modify any parameters of the operating system or kernel, or update any part of the X9720/9730 Storage unless instructed to do so by HP; otherwise, the system could fail to operate properly.
  • Page 28: Logging In To The System

    Data tiering. Use this feature to move files to specific tiers based on file attributes. For more information about these file system features, see the HP StoreAll Storage File System User Guide. Localization support Red Hat Enterprise Linux 5 uses the UTF-8 (8-bit Unicode Transformation Format) encoding for supported locales.
  • Page 29: Using The Serial Link On The Onboard Administrator

    Double-click the first server name. Log in as normal. NOTE: By default, the first port is connected with the dongle to the front of blade 1 (that is, server 1). If server 1 is down, move the dongle to another blade. Using the serial link on the Onboard Administrator If you are connected to a terminal server, you can log in through the serial link on the Onboard Administrator.
  • Page 30 If you are using HTTP to access the Management Console, open a web browser and navigate to the following location, specifying port 80: http://<management_console_IP>:80/fusion If you are using HTTPS to access the Management Console, navigate to the following location, specifying port 443: https://<management_console_IP>:443/fusion In these URLs, <management_console_IP>...
  • Page 31 System Status The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events: Alerts. Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed.
  • Page 32: Customizing The Gui

    Statistics Historical performance graphs for the following items: Network I/O (MB/s) Disk I/O (MB/s) CPU usage (%) Memory usage (%) On each graph, the X-axis represents time and the Y-axis represents performance. Use the Statistics menu to select the servers to monitor (up to two), to change the maximum value for the Y-axis, and to show or hide resource usage distribution for CPU and memory.
  • Page 33: Adding User Accounts For Management Console Access

    The administrative commands described in this guide must be executed on the Fusion Manager host and require root privileges. The commands are located in $IBRIXHOME/bin. For complete information about the commands, see the HP StoreAll Network Storage System CLI Reference Guide.
  • Page 34: Storeall Client Interfaces

    StoreAll clients can access the Fusion Manager as follows: Linux clients. Use Linux client commands for tasks such as mounting or unmounting file systems and displaying statistics. See the HP StoreAll Storage CLI Reference Guide for details about these commands.
  • Page 35: Configuring Ports For A Firewall

    You will be prompted to enter the new password. Configuring ports for a firewall IMPORTANT: To avoid unintended consequences, HP recommends that you configure the firewall during scheduled maintenance times. When configuring a firewall, you should be aware of the following: SELinux should be disabled.
  • Page 36: Configuring Ntp Servers

    -i -N Specify a new list of NTP servers: ibrix_clusterconfig -c -N SERVER1[,...,SERVERn] Configuring HP Insight Remote Support on StoreAll systems IMPORTANT: In the StoreAll software 6.1 release, the default port for the StoreAll SNMP agent changed from 5061 to 161. This port number cannot be changed.
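    For example, to replace the NTP server list with two hypothetical servers (the host names are placeholders, not values from this guide):
    ibrix_clusterconfig -c -N ntp1.example.com,ntp2.example.com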
  • Page 37 The cmahostd daemon is part of the hp-snmp-agents service. This error message occurs because the file system exceeds <n> TB. If this occurs, HP recommends that before you perform operations such as unmounting a file system or stopping services on a file serving node (using the...
  • Page 38: Configuring The Storeall Cluster For Insight Remote Support

    <BASE>\mibs>mcompile ibrixMib.mib <BASE>\mibs>mxmib -a ibrixMib.cfg For more information about the MIB, see the "Compiling and customizing MIBs" chapter in the HP Systems Insight Manager User Guide, which is available at: http://www.hp.com/go/insightmanagement/sim/ Click Support & Documents and then click Manuals. Navigate to the user guide.
  • Page 39 To configure the Virtual Connect Manager on an X9720/9730 system, complete the following steps: From the Onboard Administrator, select OA IP > Interconnect Bays > HP VC Flex-10 > Management Console. On the HP Virtual Connect Manager, open the SNMP Configuration tab.
  • Page 40 Configuring Phone Home settings To configure Phone Home on the GUI, select Cluster Configuration in the upper Navigator and then select Phone Home in the lower Navigator. The Phone Home Setup panel shows the current configuration. Getting started...
  • Page 41: Configuring Insight Remote Support For Hp Sim 7.1 And Irs 5.7

    -c -i 99.2.4.75 -P US -r public -w private -t Admin -n SYS01.US -o Colorado Next, configure Insight Remote Support for the version of HP SIM you are using: HP SIM 7.1 and IRS 5.7. See “Configuring Insight Remote Support for HP SIM 7.1 and IRS 5.7”...
  • Page 42 HP Systems Insight Manager (SIM) uses the SNMP protocol to discover and identify StoreAll systems automatically. On HP SIM, open Options > Discovery > New. Select Discover a group of systems, and then enter the discovery name and the Fusion Manager IP address on the New Discovery dialog box.
  • Page 43 Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the Fusion Manager IP might be discovered as “Unknown.” Configuring HP Insight Remote Support on StoreAll systems...
  • Page 44: Configuring Insight Remote Support For Hp Sim 6.3 And Irs 5.6

    The following example shows discovered devices on HP SIM 7.1. File serving nodes and the OA IP are associated with the Fusion Manager IP address. In HP SIM, select Fusion Manager and open the Systems tab. Then select Associations to view the devices.
  • Page 45 Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the device will be discovered as “Unknown.” The following example shows discovered devices on HP SIM 6.3. File serving nodes are discovered as ProLiant servers. Configuring device Entitlements Configure the CMS software to enable remote support for StoreAll systems.
  • Page 46 Go to Remote Support Configuration and Services and select the Entitlement tab. Check the devices discovered. NOTE: If the system discovered on HP SIM does not appear on the Entitlement tab, click Synchronize RSE. Select Entitle Checked from the Action List.
  • Page 47: Testing The Insight Remote Support Configuration

    When Phone Home is disabled, all Phone Home information is removed from the cluster and hardware and software are no longer monitored. To disable Phone Home on the GUI, click Disable on the Phone Home Setup panel. On the CLI, run the following command: ibrix_phonehome -d Configuring HP Insight Remote Support on StoreAll systems...
  • Page 48: Troubleshooting Insight Remote Support

    Phone Home configuration” (page 47). Fusion Manager IP is discovered as “Unknown” Verify that the read community string entered in HP SIM matches the Phone Home read community string. Also run snmpwalk on the VIF IP and verify the information: # snmpwalk -v 1 -c <read community string>...
  • Page 49: Configuring Virtual Interfaces For Client Access

    Fusion Manager, a virtual interface is created for the cluster network interface. Although the cluster network interface can carry traffic between file serving nodes and clients, HP recommends that you configure one or more user network interfaces for this purpose.
  • Page 50: Creating A Bonded Vif

    To assign the IFNAME a default route for the parent cluster bond and the user VIFS assigned to FSNs for use with SMB/NFS, enter the following ibrix_nic command at the command prompt: # ibrix_nic -r -n IFNAME -h HOSTNAME-A -R <ROUTE_IP> Configure backup monitoring, as described in “Configuring backup servers”...
  • Page 51: Configuring Automated Failover

    For example: # ibrix_nic -m -h node1 -A node2/bond0:1 # ibrix_nic -m -h node2 -A node1/bond0:1 # ibrix_nic -m -h node3 -A node4/bond0:1 # ibrix_nic -m -h node4 -A node3/bond0:1 Configuring automated failover To enable automated failover for your file serving nodes, execute the following command: ibrix_server -m [-h SERVERNAME] Example configuration This example uses two nodes, ib50-81 and ib50-82.
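    A sketch based on the syntax shown above (ib50-81 is the node name used in the sample configuration; the optional -h argument limits the command to a single node):
    ibrix_server -m                (enable automated failover for the file serving nodes)
    ibrix_server -m -h ib50-81     (enable automated failover for one node only)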
  • Page 52: Configuring Vlan Tagging

    # ibrix_nic -b -H ib142-131/bond0.51,ib142-129/bond0.51:2 Create the user FM VIF: ibrix_fm -c 192.168.51.125 -d bond0.51:1 -n 255.255.255.0 -v user For more information about VLAN tagging, see the HP StoreAll Storage Network Best Practices Guide. Support for link state monitoring Do not configure link state monitoring for user network interfaces or VIFs that will be used for SMB or NFS.
  • Page 53: Configuring Failover

    5 Configuring failover This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs. Agile management consoles The agile Fusion Manager maintains the cluster configuration and provides graphical and command-line user interfaces for managing and monitoring the cluster. The agile Fusion Manager is installed on all file serving nodes when the cluster is installed.
  • Page 54: Viewing Information About Fusion Managers

    The command takes effect immediately. The failed-over Fusion Manager remains in nofmfailover mode until it is moved to passive mode using the following command: ibrix_fm -m passive NOTE: A Fusion Manager cannot be moved from nofmfailover mode to active mode. Viewing information about Fusion Managers To view mode information, use the following command: ibrix_fm -i...
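    Putting the two commands above together, a typical sequence after a Fusion Manager failover is:
    ibrix_fm -i              (view the current Fusion Manager mode)
    ibrix_fm -m passive      (return a Fusion Manager that is in nofmfailover mode to passive mode)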
  • Page 55: What Happens During A Failover

    What happens during a failover The following actions occur when a server is failed over to its backup: The Fusion Manager verifies that the backup server is powered on and accessible. The Fusion Manager migrates ownership of the server’s segments to the backup and notifies all servers and StoreAll clients about the migration.
  • Page 56 The wizard also attempts to locate the IP addresses of the iLOs on each server. If it cannot locate an IP address, you will need to enter the address on the dialog box. When you have completed the information, click Enable HA Monitoring and Auto-Failover for both servers. Use the NIC HA Setup dialog box to configure NICs that will be used for data services such as SMB or NFS.
  • Page 57 For example, you can create a user VIF that clients will use to access an SMB share serviced by server ib69s1. The user VIF is based on an active physical network on that server. To do this, click Add NIC in the section of the dialog box for ib69s1. On the Add NIC dialog box, enter a NIC name.
  • Page 58 Next, enable NIC monitoring on the VIF. Select the new user NIC and click NIC HA. On the NIC HA Config dialog box, check Enable NIC Monitoring. Configuring failover...
  • Page 59 In the Standby NIC field, select New Standby NIC to create the standby on backup server ib69s2. The standby you specify must be available and valid. To keep the organization simple, we specified bond0:1 as the Name; this matches the name assigned to the NIC on server ib69s1. When you click OK, the NIC HA configuration is complete.
  • Page 60 You can create additional user VIFs and assign standby NICs as needed. For example, you might want to add a user VIF for another share on server ib69s2 and assign a standby NIC on server ib69s1. You can also specify a physical interface such eth4 and create a standby NIC on the backup server for it.
  • Page 61 The NICs panel for the ib69s2, the backup server, shows that bond0:1 is an inactive, standby NIC and bond0:2 is an active NIC. Changing the HA configuration To change the configuration of a NIC, select the server on the Servers panel, and then select NICs from the lower Navigator.
  • Page 62: Configuring Automated Failover Manually

    Configuring automated failover manually To configure automated failover manually, complete these steps: Configure file serving nodes in backup pairs. Identify power sources for the servers in the backup pair. Configure NIC monitoring. Enable automated failover. 1. Configure server backup pairs File serving nodes are configured in backup pairs, where each server in a pair is the backup for the other.
  • Page 63: Changing The Ha Configuration Manually

    -m -h node2 -A node1/bond0:1 ibrix_nic -m -h node3 -A node4/bond0:1 ibrix_nic -m -h node4 -A node3/bond0:1 The next example sets up server s2.hp.com to monitor server s1.hp.com over user network interface eth1: ibrix_nic -m -h s2.hp.com -A s1.hp.com/eth1 4.
  • Page 64: Failing A Server Over Manually

    A failback might not succeed if the time period between the failover and the failback is too short, and the primary server has not fully recovered. HP recommends ensuring that both servers are up and running and then waiting 60 seconds before starting the failback. Use the ibrix_server -l command to verify that the primary server is up and running.
  • Page 65: Setting Up Hba Monitoring

    Enter the WWPN as decimal-delimited pairs of hexadecimal digits. The following command identifies port 20.00.12.34.56.78.9a.bc as the standby for port 42.00.12.34.56.78.9a.bc for the HBA on file serving node s1.hp.com: ibrix_hba -b -P 20.00.12.34.56.78.9a.bc:42.00.12.34.56.78.9a.bc -h s1.hp.com
  • Page 66: Checking The High Availability Configuration

    HBA failure. Use the following command: ibrix_hba -m -h HOSTNAME -p PORT For example, to turn on HBA monitoring for port 20.00.12.34.56.78.9a.bc on node s1.hp.com: ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.bc To turn off HBA monitoring for an HBA port, include the -U option:...
  • Page 67 -b argument. To view results only for file serving nodes that failed a check, include the -f argument. ibrix_haconfig -l [-h HOSTLIST] [-f] [-b] For example, to view a summary report for file serving nodes xs01.hp.com and xs02.hp.com: ibrix_haconfig -l -h xs01.hp.com,xs02.hp.com...
  • Page 68: Capturing A Core Dump From A Failed Node

    User nics configured with a standby nic PASSED HBA ports monitored Hba port 21.01.00.e0.8b.2a.0d.6d monitored FAILED Not monitored Hba port 21.00.00.e0.8b.0a.0d.6d monitored FAILED Not monitored Capturing a core dump from a failed node The crash capture feature collects a core dump from a failed node when the Fusion Manager initiates failover of the node.
  • Page 69: Setting Up Nodes For Crash Capture

    Highlight the BIOS Serial Console & EMS option in main menu, and then press the Enter key. Highlight the BIOS Serial Console Port option and then press the Enter key. Select the COM1 port, and then press the Enter key. Highlight the BIOS Serial Console Baud Rate option, and then press the Enter key.
  • Page 70: Configuring Cluster Event Notification

    6 Configuring cluster event notification Cluster events There are three categories for cluster events: Alerts. Disruptive events that can result in loss of access to file system data. Warnings. Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
  • Page 71: Configuring Email Notification Settings

    Be sure to specify valid email addresses, especially for the SMTP server. If an address is not valid, the SMTP server will reject the email. The following command configures email settings to use the mail.hp.com SMTP server and turns on notifications: ibrix_event -m on -s mail.hp.com -f FM@hp.com -r MIS@hp.com -t Cluster1 Notification...
  • Page 72: Viewing Email Notification Settings

    Viewing email notification settings The ibrix_event -L command provides comprehensive information about email settings and configured notifications. ibrix_event -L Email Notification Enabled SMTP Server mail.hp.com From FM@hp.com Reply To MIS@hp.com EVENT LEVEL TYPE DESTINATION ------------------------------------- ----- ----- ----------- asyncrep.completed ALERT EMAIL admin@hp.com...
  • Page 73: Configuring The Snmp Agent

    Configuring the SNMP agent The SNMP agent is created automatically when the Fusion Manager is installed. It is initially configured as an SNMPv2 agent and is off by default. Some SNMP parameters and the SNMP default port are the same, regardless of SNMP version. The default agent port is 161.
  • Page 74: Associating Events And Trapsinks

    -a -v VIEWNAME [-t {include|exclude}] -o OID_SUBTREE [-m MASK_BITS] The subtree is added in the named view. For example, to add the StoreAll software private MIB to the view named hp, enter: ibrix_snmpview -a -v hp -o .1.3.6.1.4.1.18997 -m .1.1.1.1.1.1.1
  • Page 75: Configuring Groups And Users

    For example, to create the group group2 to require authorization, no encryption, and read access to the hp view, enter: ibrix_snmpgroup -c -g group2 -s authNoPriv -r hp The format to create a user and add that user to a group follows:...
  • Page 76: Configuring System Backups

    7 Configuring system backups Backing up the Fusion Manager configuration The Fusion Manager configuration is automatically backed up whenever the cluster configuration changes. The backup occurs on the node hosting the active Fusion Manager. The backup file is stored at <ibrixhome>/tmp/fmbackup.zip on that node. The active Fusion Manager notifies the passive Fusion Manager when a new backup file is available.
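    Because the backup is written locally on the active Fusion Manager node, it is good practice to also copy it off the cluster. A sketch using standard tools (the destination host and path are hypothetical):
    scp <ibrixhome>/tmp/fmbackup.zip admin@backuphost:/backups/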
  • Page 77: Configuring Ndmp Parameters On The Cluster

    hard quota limit for the directory tree has been exceeded, NDMP cannot create a temporary file and the restore operation fails. Configuring NDMP parameters on the cluster Certain NDMP parameters must be configured to enable communications between the DMA and the NDMP Servers in the cluster.
  • Page 78: Ndmp Process Management

    To configure NDMP parameters from the CLI, use the following command: ibrix_ndmpconfig -c [-d IP1,IP2,IP3,...] [-m MINPORT] [-x MAXPORT] [-n LISTENPORT] [-u USERNAME] [-p PASSWORD] [-e {0=disable,1=enable}] -v [{0=10}] [-w BYTES] [-z NUMSESSIONS] NDMP process management Normally all NDMP actions are controlled from the DMA. However, if the DMA cannot resolve a problem or you suspect that the DMA may have incorrect information about the NDMP environment, take the following actions from the GUI or CLI: Cancel one or more NDMP sessions on a file serving node.
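    For example, substituting hypothetical values into the syntax above (the DMA IP addresses, port range, credentials, and session count are placeholders, not values from this guide):
    ibrix_ndmpconfig -c -d 10.10.5.11,10.10.5.12 -m 1025 -x 65535 -n 10000 -u ndmpadmin -p <password> -z 8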
  • Page 79: Viewing Or Rescanning Tape And Media Changer Devices

    Viewing or rescanning tape and media changer devices To view the tape and media changer devices currently configured for backups, select Cluster Configuration from the Navigator, and then select NDMP Backup > Tape Devices. If you add a tape or media changer device to the SAN, click Rescan Device to update the list. If you remove a device and want to delete it from the list, reboot all of the servers to which the device is attached.
  • Page 80: Creating Host Groups For Storeall Clients

    8 Creating host groups for StoreAll clients A host group is a named set of StoreAll clients. Host groups provide a convenient way to centrally manage clients. You can put different sets of clients into host groups and then perform the following operations on all members of the group: Create and delete mount points Mount file systems...
  • Page 81: Adding A Storeall Client To A Host Group

    -m -g GROUP -h MEMBER For example, to add the specified host to the finance group: ibrix_hostgroup -m -g finance -h cl01.hp.com Adding a domain rule to a host group To configure automatic host group assignments, define a domain rule for host groups. A domain rule restricts host group membership to clients on a particular cluster subnet.
  • Page 82: Viewing Host Groups

    Additional host group operations are described in the following locations: Creating or deleting a mountpoint, and mounting or unmounting a file system (see “Creating and mounting file systems” in the HP StoreAll Storage File System User Guide) Changing host tuning parameters (see “Tuning file serving nodes and StoreAll clients”...
  • Page 83: Monitoring Cluster Operations

    9 Monitoring cluster operations This chapter describes how to monitor the operational state of the cluster and how to monitor cluster health. Monitoring X9720/9730 hardware The GUI displays status, firmware versions, and device information for the servers, chassis, and system storage included in X9720 and 9730 systems. The Management Console displays a top-level status of the chassis, server, and storage hardware components.
  • Page 84 Select the server component that you want to view from the lower Navigator panel, such as NICs.
  • Page 85 The following are the top-level options provided for the server: NOTE: Information about the Hardware node can be found in “Monitoring hardware components” (page 87). HBAs. The HBAs panel displays the following information: ◦ Node WWN ◦ Port WWN ◦ Backup ◦...
  • Page 86 ◦ Route ◦ Standby Server ◦ Standby Interface Mountpoints. The Mountpoints panel displays the following information: ◦ Mountpoint ◦ Filesystem ◦ Access NFS. The NFS panel displays the following information: ◦ Host ◦ Path ◦ Options CIFS. The CIFS panel displays the following information: NOTE: CIFS in the GUI has not been rebranded to SMB yet.
  • Page 87: Monitoring Hardware Components

    Onboard Administrator modules, and interconnect modules (VC modules and SAS switches). The following Onboard Administrator view shows a chassis enclosure on a StoreAll 9730 system. To monitor these components from the GUI: Click Servers from the upper Navigator tree.
  • Page 88: Monitoring Blade Enclosures

    Monitoring blade enclosures To view summary information about the blade enclosures in the chassis: Expand the Hardware node. Select the Blade Enclosure node under the Hardware node. The following summary information is displayed for the blade enclosure: Status Type Name UUID Serial number Detailed information of the hardware components in the blade enclosure is provided by expanding...
  • Page 89 The sub-nodes under the Blade Enclosure node provide information about the hardware components within the blade enclosure:
  • Page 90 Table 2 Obtaining detailed information about a blade enclosure Panel name Information provided Status Type Name UUID Serial number Model Properties Temperature Sensor: The Temperature Sensor panel Status displays information for a bay, OA module or for the blade Type enclosure.
  • Page 91: Obtaining Server Details

    Obtaining server details The Management Console provides detailed information for each server in the chassis. To obtain summary information for a server, select the Server node under the Hardware node. The following overview information is provided for each server: Status Type Name UUID...
  • Page 92 Table 3 Obtaining detailed information about a server Panel name Information provided Status Type Name UUID Model Location ILO Module Status Type Name UUID Serial Number Model Firmware Version Properties Memory DiMM Status Type Name UUID Location Properties Status Type Name UUID Properties...
  • Page 93 Table 3 Obtaining detailed information about a server (continued) Panel name Information provided Drive: Displays information about each drive in a storage Status cluster. Type Name UUID Serial Number Model Firmware Version Location Properties Storage Controller (Displayed for a server) Status Type Name...
  • Page 94: Monitoring Storage And Storage Components

    Monitoring storage and storage components Select Vendor Storage from the Navigator tree to display status and device information for storage and storage components. The Vendor Storage panel lists the HP 9730 CX storage systems included in the system. The Summary panel shows details for a selected vendor storage, as shown in the following image:...
  • Page 95 The Management Console provides a wide-range of information in regards to vendor storage, as shown in the following image. Drill down into the following components in the lower Navigator tree to obtain additional details: Servers. The Servers panel lists the host names for the attached storage. Storage Cluster.
  • Page 96: Monitoring Storage Clusters

    Monitoring storage clusters The Management Console provides detailed information for each storage cluster. Click one of the following sub-nodes displayed under the Storage Clusters node to obtain additional information: Drive Enclosure. The Drive Enclosure panel provides detailed information about the drive enclosure.
  • Page 97 Expand the Drive Enclosure node to provide additional information about the power supply and sub enclosures. Table 4 Details provided for the drive enclosure Node Where to find detailed information Power Supply “Monitoring the power supply for a storage cluster” (page 97) Sub Enclosure “Monitoring sub enclosures”...
  • Page 98 Monitoring sub enclosures Expand the Sub Enclosure node to obtain information about the following components for each sub-enclosure: Drive. The Drive panel provides the following information about the drives in a sub-enclosure: ◦ Status ◦ Volume Name ◦ Type ◦ UUID ◦...
  • Page 99: Monitoring Pools For A Storage Cluster

    ◦ Name ◦ UUID ◦ Properties Monitoring pools for a storage cluster The Management Console lists a Pool node for each pool in the storage cluster. Select one of the Pool nodes to display information about that pool. When you select the Pool node, the following information is displayed in the Pool panel: Status Type Name...
  • Page 100: Monitoring Storage Controllers For A Storage Cluster

    UUID Properties The following image shows information for two volumes named LUN_15 and LUN_16 on the Volume panel. Monitoring storage controllers for a storage cluster The Management Console displays a Storage Controller node for each storage controller in the storage cluster. Select the Storage Controller node to view the following information for the selected storage controller: Status Type...
  • Page 101: Monitoring Storage Switches In A Storage Cluster

    UUID Properties. Provides information about the read, write and cache size properties. In the following image, the IO Cache Module panel shows an IO cache module with read/write properties enabled. Monitoring storage switches in a storage cluster The Storage Switch panel provides the following information about the storage switches: Status Type Name...
  • Page 102: Monitoring The Status Of File Serving Nodes

    In the following image, the LUNs panel displays the LUNs for a storage cluster. Monitoring the status of file serving nodes The dashboard on the GUI displays information about the operational status of file serving nodes, including CPU, I/O, and network performance information. To view this information from the CLI, use the ibrix_server -l command, as shown in the following sample output: ibrix_server -l...
  • Page 103: Monitoring Cluster Events

    Events are written to an events table in the configuration database as they are generated. To maintain the size of the file, HP recommends that you periodically remove the oldest events. See “Removing events from the events database table” (page 104).
  • Page 104: Removing Events From The Events Database Table

    The ibrix_event -l and -i commands can include options that act as filters to return records associated with a specific file system, server, alert level, and start or end time. See the HP StoreAll Network Storage System CLI Reference Guide for more information.
  • Page 105 The detailed report consists of the summary report and the following additional data: Summary of the test results Host information such as operational state, performance data, and version data Nondefault host tunings Results of the health checks By default, the Result Information field in a detailed report provides data only for health checks that received a Failed or a Warning result.
  • Page 106: Viewing Logs

    Iad and Fusion Manager PASSED Viewing logs Logs are provided for the Fusion Manager, file serving nodes, and StoreAll clients. Contact HP Support for assistance in interpreting log files. You might be asked to tar the logs and email them to HP.
  • Page 107 -n Network statistics -f NFS statistics -h The file serving nodes to be included in the report Sample output follows: ---------Summary------------ HOST Status CPU Disk(MB/s) Net(MB/s) lab12-10.hp.com 22528 ---------IO------------ HOST Read(MB/s) Read(IO/s) Read(ms/op) Write(MB/s) Write(IO/s) Write(ms/op) lab12-10.hp.com 22528 0.00 ---------Net------------...
  • Page 108: 10 Using The Statistics Tool

    10 Using the Statistics tool The Statistics tool reports historical performance data for the cluster or for an individual file serving node. You can view data for the network, the operating system, and the file systems, including the data for NFS, memory, and block devices. Statistical data is transmitted from each file serving node to the Fusion Manager, which controls processing and report generation.
  • Page 109: Upgrading The Statistics Tool From Storeall Software 6.0 109

    Upgrading the Statistics tool from StoreAll software 6.0 The statistics history is retained when you upgrade to version 6.1 or later. The Statstool software is upgraded when the StoreAll software is upgraded using the ibrix_upgrade and auto_ibrixupgrade scripts. Note the following: If statistics processes were running before the upgrade started, those processes will automatically restart after the upgrade completes successfully.
  • Page 110 The Time View lists the reports in chronological order, and the Table View lists the reports by cluster or server. Click a report to view it.
  • Page 111: Generating Reports

    Generating reports To generate a new report, click Request New Report on the StoreAll Management Console Historical Reports GUI. To generate a report, enter the necessary specifications and click Submit. The completed report appears in the list of reports on the statistics home page. When generating reports, be aware of the following: A report can be generated only from statistics that have been gathered.
  • Page 112: Maintaining The Statistics Tool

    Maintaining the Statistics tool Space requirements The Statistics tool requires about 4 MB per hour for a two-node cluster. To manage space, take the following steps: Maintain sufficient space (4 GB to 8 GB) for data collection in the /usr/local/statstool/histstats directory.
  • Page 113: Checking The Status Of Statistics Tool Processes

    The following actions occur after a successful failover: If Statstool processes were running before the failover, they are restarted. If the processes were not running, they are not restarted. The Statstool passive management console is installed on the StoreAll Fusion Manager in maintenance mode.
  • Page 114: Troubleshooting The Statistics Tool

    “Controlling Statistics tool processes” (page 113). Installation issues. Check the /tmp/stats-install.log and try to fix the condition, or send the /tmp/stats-install.log to HP Support. Missing reports for file serving nodes. If reports are missing on the Stats tool web page, check the following: ◦...
  • Page 115: 1 Maintaining The System

    11 Maintaining the system Shutting down the system To shut down the system completely, first shut down the StoreAll software, and then power off the hardware. Shutting down the StoreAll software Use the following procedure to shut down the StoreAll software. Unless noted otherwise, run the commands from the node hosting the active Fusion Manager.
  • Page 116: Powering Off The System Hardware

    Unmount all file systems on the cluster nodes: ibrix_umount -f <fs_name> To unmount file systems from the GUI, select Filesystems > unmount. Verify that all file systems are unmounted: ibrix_fs -l If a file system fails to unmount on a particular node, continue with this procedure. The file system will be forcibly unmounted during the node shutdown.
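    For example, for a hypothetical file system named ifs1:
    ibrix_umount -f ifs1     (unmount the file system on all cluster nodes)
    ibrix_fs -l              (confirm that no file systems remain mounted)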
  • Page 117: Starting Up The System

    Starting up the system To start an X9720 system, first power on the hardware components, and then start the StoreAll Software. Powering on the system hardware To power on the system hardware, complete the following steps: Power on the 9100cx disk capacity block(s). Power on the 9100c controllers.
  • Page 118: Performing A Rolling Reboot

    /etc/init.d/ibrix_client [start | stop | restart | status] Tuning file serving nodes and StoreAll clients Typically, HP Support sets the tuning parameters on the file serving nodes during the cluster installation and changes should be needed only for special situations.
  • Page 119 You can locally override host tunings that have been set on StoreAll Linux clients by executing the ibrix_lwhost command. Tuning file serving nodes on the GUI The Modify Server(s) Wizard can be used to tune one or more servers in the cluster. To open the wizard, select Servers from the Navigator and then click Tuning Options from the Summary panel.
  • Page 120 The Module Tunings dialog box adjusts various advanced parameters that affect server operations. On the Servers dialog box, select the servers to which the tunings should be applied.
  • Page 121 To tune host parameters on nodes or hostgroups: ibrix_host_tune -S {-h HOSTLIST|-g GROUPLIST} -o OPTIONLIST Contact HP Support to obtain the values for OPTIONLIST. List the options as option=value pairs, separated by commas. To set host tunings on all clients, include the -g clients option.
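    A sketch of the syntax with placeholder values (the actual option names and values for OPTIONLIST must come from HP Support):
    ibrix_host_tune -S -h node1,node2 -o option1=value1,option2=value2
    ibrix_host_tune -S -g clients -o option1=value1     (apply the tunings to all StoreAll clients)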
  • Page 122: Managing Segments

    To list host tuning parameters that have been changed from their defaults: ibrix_lwhost --list See the ibrix_lwhost command description in the HP StoreAll Storage CLI Reference Guide for other available options. Windows clients. Click the Tune Host tab on the Windows StoreAll client GUI. Tunable parameters include the NIC to prefer (the default is the cluster interface), the communications protocol (UDP or TCP), and the number of server threads to use.
  • Page 123: Migrating Segments

    Migrating segments Segment migration transfers segment ownership but it does not move segments from their physical locations in the storage system. Segment ownership is recorded on the physical segment itself, and the ownership data is part of the metadata that the Fusion Manager distributes to file serving nodes and StoreAll clients so that they can locate segments.
  • Page 124 The new owner of the segment must be able to see the same storage as the original owner. The Change Segment Owner dialog box lists the servers that can see the segment you selected. Select one of these servers to be the new owner. The Summary dialog box shows the segment migration you specified.
  • Page 125: Evacuating Segments And Removing Storage From The Cluster

    The following command migrates ownership of segments ilv2 and ilv3 in file system ifs1 to server2: ibrix_fs -m -f ifs1 -s ilv2,ilv3 -h server2 Migrate ownership of all segments owned by specific servers: ibrix_fs -m -f FSNAME -H HOSTNAME1,HOSTNAME2 [-M] [-F] [-N] For example, to migrate ownership of all segments in file system ifs1 from server1 to server2: ibrix_fs -m -f ifs1 -H server1,server2 Evacuating segments and removing storage from the cluster...
  • Page 126 On the Evacuate Advanced dialog box, locate the segment to be evacuated and click Source. Then locate the segments that will receive the data from the segment and click Destination. If the file system is tiered, be sure to select destination segments on the same tier as the source segment.
  • Page 127 Guide. Troubleshooting segment evacuation If segment evacuation fails, HP recommends that you run phase 1 of the ibrix_fsck command in corrective mode on the segment that failed the evacuation. For more information, see “Checking and repairing file systems” in the HP StoreAll Storage File System User Guide.
  • Page 128: Removing A Node From A Cluster

    3015A4021.C34A994C, poid 3015A4021.C34A994C, primary 4083040FF.7793558E poid 4083040FF.7793558E Use the inum2name utility to translate the primary inode ID into the file name. Removing a node from a cluster In the following procedure, the cluster contains four nodes: FSN1, FSN2, FSN3, and FSN4. FSN4 is the node being removed.
  • Page 129: Maintaining Networks

    In general, it is better to assign a user network for protocol (NFS/SMB/HTTP/FTP) traffic because the cluster network cannot host the virtual interfaces (VIFs) required for failover. HP recommends that you use a Gigabit Ethernet port (or faster) for user networks.
  • Page 130 For a highly available cluster, HP recommends that you put protocol traffic on a user network and then set up automated failover for it (see “Configuring High Availability on the cluster”...
  • Page 131: Setting Network Interface Options In The Configuration Database

    Execute this command once for each destination host that the file serving node or StoreAll client should contact using the specified network interface (IFNAME). For example, to prefer network interface eth3 for traffic from file serving node s1.hp.com to file serving node s2.hp.com: ibrix_server -n -h s1.hp.com -A s2.hp.com/eth3...
  • Page 132: Unpreferring Network Interfaces

    -n -g HOSTGROUP -A DESTHOST/IFNAME The destination host (DESTHOST) cannot be a hostgroup. For example, to prefer network interface eth3 for traffic from all StoreAll clients (the clients hostgroup) to file serving node s2.hp.com: ibrix_hostgroup -n -g clients -A s2.hp.com/eth3...
  • Page 133: Changing The Cluster Interface

    See “Changing the cluster interface” (page 133). To delete a network interface, use the following command: ibrix_nic -d -n IFNAME -h HOSTLIST The following command deletes interface eth3 from file serving nodes s1.hp.com and s2.hp.com:
  • Page 134: Viewing Network Interface Information

    -d -n eth3 -h s1.hp.com,s2.hp.com Viewing network interface information Executing the ibrix_nic command with no arguments lists all interfaces on all file serving nodes. Include the -h option to list interfaces on specific hosts. ibrix_nic -l -h HOSTLIST The following table describes the fields in the output.
  • Page 135: 12 Licensing

    Fax the Password Request Form that came with your License Entitlement Certificate. See the certificate for fax numbers in your area. Call or email the HP Password Center. See the certificate for telephone numbers in your area or email addresses.
  • Page 136: 13 Upgrading Firmware

    Severity — How severe an upgrade is required Reboot required on flash Device information Parent device ID Components for firmware upgrades The HP StoreAll system includes several components with upgradable firmware. The following lists the components that can be upgraded: Server ◦ ILO2 (9720 systems) ◦...
  • Page 137: Steps For Upgrading The Firmware

    ◦ 6Gb_SAS_BL_SW ◦ 3Gb_SAS_BL_SW (9720 systems) Enter the following command to show which components could be flagged for flash upgrade. hpsp_fmt -lc The following is an example of the server components that are displayed: Steps for upgrading the firmware To upgrade the firmware for components:
  • Page 138 Run the /opt/hp/platform/bin/hpsp_fmt -fr command to verify that the firmware on this node and subsequent nodes in this cluster is correct and up-to-date. This command should be performed before placing the cluster back into service. The following figure shows an example of the firmware recommendation output and corrective...
  • Page 139 Perform the flash operation by entering the following command and then go to step 5: hpsp_fmt -flash -c <components-name> --force If the components require a reboot on flash, failover the FSN for continuous operation as described in the following steps: NOTE: Although the following steps are based on a two-node cluster, all steps can be used in a multiple node clusters.
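    Putting the FMT commands together, a typical sequence on a node looks like the following (the component name is an example taken from the list earlier in this chapter; substitute the component flagged on your system):
    hpsp_fmt -lc                                (list the components that can be flagged for a flash upgrade)
    hpsp_fmt -fr                                (review the firmware recommendation report)
    hpsp_fmt -flash -c 6Gb_SAS_BL_SW --force    (flash the selected component)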
  • Page 140: Finding Additional Information On Fmt

    FMT, enter the hpsp_fmt command on the file system node console. Adding performance modules on 9730 systems See the HP StoreAll 9730 Storage Performance Module Installation Instructions for details about installing the module on a StoreAll 9730 cluster. See the...
  • Page 141: Adding New Server Blades On 9720 Systems

    StoreAll software on the blades in the module. These documents are located on the StoreAll manuals page: http://www.hp.com/support/StoreAllManuals Adding new server blades on 9720 systems NOTE: This requires the use of the Quick Restore DVD. See “Recovering the X9720/9730...
  • Page 142 “Recovering the X9720/9730 Storage” (page 154) for more information. Set up fail over. For more information, see the HP StoreAll Storage File System User Guide. Enable high availability (automated failover) by running the following command on server 1: # ibrix_server -m...
  • Page 143: 14 Troubleshooting

    Ibrix Collect is a log collection utility that allows you to collect relevant information for diagnosis by HP Support when system issues occur. The collection can be triggered manually using the GUI or CLI, or automatically during a system crash. Ibrix Collect gathers the following information:...
  • Page 144: Downloading The Archive File

    The average size of the archive file depends on the size of the logs present on individual nodes in the cluster. You may later be asked to email this final .tgz file to HP Support. Deleting the archive file You can delete a specific data collection or all collections simultaneously in the GUI and the CLI.
  • Page 145: Configuring Ibrix Collect

    Under Email Settings, enable or disable sending cluster configuration by email by checking or unchecking the appropriate box. Fill in the remaining required fields for the cluster configuration and click Okay.
  • Page 146: Obtaining Custom Logging From Ibrix_Collect Add-On Scripts

    To set up email settings to send cluster configurations using the CLI, use the following command: ibrix_collect -C -m <Yes\No> [-s <SMTP_server>] [-f <From>] [-t <To>] NOTE: More than one email ID can be specified for -t option, separated by a semicolon. The “From”...
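    For example, with a hypothetical SMTP server and addresses (all values are placeholders):
    ibrix_collect -C -m Yes -s smtp.example.com -f storeall@example.com -t "admin1@example.com;admin2@example.com"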
  • Page 147: Running An Add-On Script

    Viewing the output from an add-on script To view an output from an add-on script: Go to the active Fusion Manager node in the /local/ibrixcollect/archive directory by entering the following command: [root@host2 /]#cd /local/ibrixcollect/archive/
  • Page 148 The output of the add-on scripts is available under the tar file of the individual node. To view the contents of the directory, enter the following command: [root@host2 /]#ls -l The following is an example of the output displayed: total 3520 -rw-r--r-- 1 root root 2021895 Dec 20 12:41 addOnCollection.tgz Extract the tar file, containing the output of the add-on script.
  • Page 149: Viewing Data Collection Information

    The file system and IAD/FS output fields should show matching version numbers unless you have installed special releases or patches. If the output fields show mismatched version numbers and you do not know of any reason for the mismatch, contact HP Support. A mismatch might affect the operation of your cluster.
  • Page 150: Troubleshooting Specific Issues

    SELINUX= parameter to either permissive or disabled. SELinux will be stopped at the next boot. For StoreAll clients, the client might not be registered with the Fusion Manager. For information on registering clients, see the HP StoreAll Storage Installation Guide. Failover Cannot fail back from failover caused by storage subsystem failure When a storage subsystem fails and automated failover is turned on, the Fusion Manager will initiate its failover protocol.
  • Page 151: Windows Storeall Clients

    To maintain access to a file system, file serving nodes must have current information about the file system. HP recommends that you execute ibrix_health on a regular basis to monitor the health of this information. If the information becomes outdated on a file serving node, execute ibrix_dbck -o to resynchronize the server’s information with the configuration database.
  • Page 152: Troubleshooting An Express Query Manual Intervention Failure (Mif)

    Troubleshooting an Express Query Manual Intervention Failure (MIF) An Express Query Manual Intervention Failure (MIF) is a critical error that occurs during Express Query execution. These are failures that Express Query cannot recover from automatically. After a MIF occurs, the affected file system is logically removed from Express Query, and manual intervention is required to recover it.
  • Page 153 Wait for the resynchronizer to complete by entering the following command: ibrix_archiving -l Repeat this command until it displays the OK status for the file system. If none of the above steps resolves the issue, contact HP. Troubleshooting an Express Query Manual Intervention Failure (MIF) 153...
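    For example, you can poll the resynchronizer status every 30 seconds with the standard watch utility and press Ctrl+C once the file system reports OK:
    # watch -n 30 ibrix_archiving -l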
  • Page 154: 15 Recovering The X9720/9730 Storage

    Obtaining the latest StoreAll software release StoreAll OS version 6.3 is available only through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox. Use a DVD: Burn the ISO image to a DVD.
  • Page 155: Restoring An X9720 Node With Storeall 6.1 Or Later

    Recovering an X9720 or 9730 file serving node NOTE: If you are recovering blade1 on a StoreAll 9730 system, the Quick Restore procedure goes through the steps needed to form a cluster. It requires that you validate the chassis components;...
  • Page 156 Replacing a node requires less time.) IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system.
  • Page 157 NOTE: If a management console is not located, the following screen appears. Select Enter FM IP and go to step 5. The Verify Hostname dialog box displays a hostname generated by the management console. Enter the correct hostname for this server. The Verify Configuration dialog box shows the configuration for this node.
  • Page 158 On the Server Networking Configuration dialog box, configure this server for bond0, the cluster network. Note the following: The hostname can include alphanumeric characters and the hyphen (-) special character. Do not use an underscore (_) in the hostname. The IP address is the address of the server on bond0. The default gateway provides a route between networks.
  • Page 159 This step applies only to 9730 systems. If you are restoring a blade on an X9720 system, go to step 8. The 9730 blade being restored needs OA/VC information from the chassis. It can obtain this information directly from blade 1, or you can enter the OA/VC credentials manually. The wizard now checks and verifies the following: OA and VC firmware VC configuration...
  • Page 160 Storage configuration Networking on the blade On the Join a Cluster – Step 2 dialog box, enter the requested information. NOTE: On the dialog box, Register IP is the Fusion Manager (management console) IP, not the IP you are registering for this blade. The Network Configuration dialog box lists the interfaces configured on the system.
  • Page 161 NOTE: If you are recovering an X9720 node with StoreAll OS 6.1 or later, you might be unable to change the cluster network for bond1. See “Manually recovering bond1 as the cluster” (page 165) for more information. The Configuration Summary dialog box lists the configuration you specified. Select Commit to apply the configuration.
  • Page 162: Completing The Restore

    When the configuration is complete, a message reporting the location of the log files appears: Logs are available at /usr/local/ibrix/autocfg/logs. The StoreAll 9730 configuration logs are available at /var/log/hp/platform/install/X9730_install.log. Completing the restore Ensure that you have root access to the node.
  • Page 163 For example: ibrix_nic -m -h titan16 -A titan15/eth2 Configure Insight Remote Support on the node. See “Configuring HP Insight Remote Support on StoreAll systems” (page 36). Run ibrix_health -l from the node hosting the active Fusion Manager to verify that no errors are being reported.
  • Page 164 Push the original share information from the management console database to the restored node. On the node hosting the active management console, first create a temporary SMB share: ibrix_cifs -a -f FSNAME -s SHARENAME -p SHAREPATH NOTE: You cannot create an SMB share with a name containing an exclamation point (!) or a number sign (#) or both.
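    For example (the file system name, share name, and share path are placeholders; avoid ! and # in the share name):
    # ibrix_cifs -a -f fs1 -s tmpshare1 -p /fs1/tmpshare1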
  • Page 165: Troubleshooting

    Troubleshooting Manually recovering bond1 as the cluster If you are unable to use the installation wizard to recover bond1 as the cluster, perform the following procedure: Create bond0 and bond1: Create the ifcfg-bond0 file in the /etc/sysconfig/network-scripts directory with the following parameters: BOOTPROTO=none BROADCAST=10.30.255.255 DEVICE=bond0...
  • Page 166 Determine if the MAC address is present in the ifcfg files. If not, obtain the MAC address and append it to each eth port: Execute the ip ad command to obtain the MAC address of all eth ports. Add the MAC address to the ifcfg-ethx file of each slave eth port. The following is an example of the MAC address: HWADDR=68:B5:99:B3:11:88 Ensure that the ONBOOT parameter is set to “no”...
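    The file listings above are truncated in this excerpt. As an illustrative sketch only (the IP address, interface name, and MAC address are placeholders, and the BONDING_OPTS line is an assumption to verify against a working node), a recovered ifcfg-bond0 might look like the following; the bond uses mode 1 (active/backup):
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=10.30.3.16
    NETMASK=255.255.0.0
    BROADCAST=10.30.255.255
    BONDING_OPTS="mode=1 miimon=100"
    To capture the MAC address of a slave port and append it to its ifcfg file:
    # ip ad
    # echo "HWADDR=68:B5:99:B3:11:88" >> /etc/sysconfig/network-scripts/ifcfg-eth2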
  • Page 167 Register the server to the Fusion Manager configuration: (Not a replacement server) Run the register_server command: [root@X9720 ~]# /usr/local/ibrix/bin/register_server -p 172.16.3.65 -c bond1 -n r150b16 -u bond0 NOTE: The command will fail if the server is a replacement server because the server is already registered, as shown in the following example: iadconf.xml does not exist...creating new config.
  • Page 168 Run the following commands from the active Fusion Manager (r150b15 in this example) to view the existing servers and then unregister the passive Fusion Manager (r150b16 in this example): To view the registered management consoles: [root@r150b15 ibrix]#ibrix_fm -l The command provides the following output: NAME IP ADDRESS ------- ---------- r150b15 172.16.3.15...
  • Page 169: Ilo Remote Console Does Not Respond To Keystrokes

    -h hostnameX Iad error on host hostnameX failed command (<HIDDEN_COMMAND>) status (1) output: (Joining to AD Domain:IBRQA1.HP.COM With Computer DNS Name: hostsnameX.ibrqa1.hp.com ) Verify that the content of the /etc/resolv.conf file is not empty. If the content is empty, copy the contents of the /etc/resolv.conf file on another server to the empty resolv.conf...
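    If you need to repopulate an empty resolv.conf, one simple approach (the host name is a placeholder) is to copy the file from a healthy node:
    # scp r150b15:/etc/resolv.conf /etc/resolv.conf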
  • Page 170: 16 Support And Other Resources

    HP ProLiant BL460c Server Blade User Guide To access these manuals, go to the Manuals page (http://www.hp.com/support/manuals) and click bladesystem > BladeSystem Server Blades, and then select HP ProLiant BL460c G7 Server Series or HP ProLiant BL460c G6 Server Series.
  • Page 171: Obtaining Spare Parts

    Extend only one rack component at a time. Racks can become unstable if more than one component is extended. Product warranties For information about HP product warranties, see the warranty information website: http://www.hp.com/go/storagewarranty Subscription service HP recommends that you register your product at the Subscriber's Choice for Business website: http://www.hp.com/go/e-updates Obtaining spare parts...
  • Page 172 After registering, you will receive email notification of product enhancements, new driver versions, firmware updates, and other product resources. 172 Support and other resources...
  • Page 173: 17 Documentation Feedback

    17 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL...
  • Page 174: A Cascading Upgrades

    A Cascading Upgrades If you are running a StoreAll version earlier than 5.6, do incremental upgrades as described in the following table. If you are running StoreAll 5.6, upgrade to 6.1 before upgrading to 6.3. If you are upgrading from Upgrade to Where to find additional information StoreAll version 5.4...
  • Page 175: Upgrading 9720 Chassis Firmware

    To upgrade the firmware, complete the following steps: Go to http://www.hp.com/go/StoreAll. On the HP StoreAll Storage page, select HP Support & Drivers from the Support section. On the Business Support Center, select Download Drivers and Software and then select HP 9720 Base Rack >...
  • Page 176: Performing The Upgrade

    The online upgrade is supported only from the StoreAll 6.x to 6.1 release. Complete the following steps: Obtain the latest HP StoreAll 6.1 ISO image from the StoreAll software dropbox. Contact HP Support to register for the release and obtain access to the dropbox.
  • Page 177 Ensure that all nodes are up and running. To determine the status of your cluster nodes, check the dashboard on the GUI or use the ibrix_health command. Verify that ssh shared keys have been set up. To do this, run the following command on the node hosting the active instance of the agile Fusion Manager: ssh <server_name>...
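    As a minimal check (node names are placeholders; the exact command shown in the manual is truncated here), each of the following should print the remote host name without prompting for a password:
    # ssh node2 hostname
    # ssh node3 hostname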
  • Page 178: Performing The Upgrade

    This upgrade method is supported only for upgrades from StoreAll software 5.6.x to the 6.1 release. Complete the following steps: Obtain the latest HP StoreAll 6.1 ISO image from the StoreAll software dropbox. Contact HP Support to register for the release and obtain access to the dropbox.
  • Page 179: Upgrading Linux Storeall Clients

    /etc/init.d/ibrix_client status IBRIX Filesystem Drivers loaded IBRIX IAD Server (pid 3208) running... The IAD service should be running, as shown in the previous sample output. If it is not, contact HP Support. Installing a minor kernel update on Linux clients The StoreAll client software is upgraded automatically when you install a compatible Linux minor kernel update.
  • Page 180: Upgrading Windows Storeall Clients

    /usr/local/ibrix/bin/verify_client_update <kernel_update_version> The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp: # /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp Kernel update 2.6.9-89.35.1.ELsmp is compatible. If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system.
  • Page 181: Upgrading Pre-6.1.1 File Systems For Data Retention Features

    Progress and status reports The utility writes log files to the directory /usr/local/ibrix/log/upgrade60 on each node containing segments from the file system being upgraded. Each node contains the log files for its segments. Log files are named <host>_<segment>_<date>_upgrade.log. For example, the following log file is for segment ilv2 on host ib4-2: ib4-2_ilv2_2012-03-27_11:01_upgrade.log Restarting the utility...
  • Page 182: Troubleshooting Upgrade Issues

    Enter the ibrix_fs command to set the file system’s data retention and autocommit period to the desired values. See the HP StoreAll Storage CLI Reference Guide for additional information about the ibrix_fs command. Troubleshooting upgrade issues If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support.
  • Page 183: Node Is Not Registered With The Cluster Network

    Node is not registered with the cluster network Nodes hosting the agile Fusion Manager must be registered with the cluster network. If the ibrix_fm command reports that the IP address for a node is on the user network, you will need to reassign the IP address to the cluster network.
  • Page 184: Moving The Fusion Manager Vif To Bond1

    Unmount the file systems and continue with the upgrade procedure. Moving the Fusion Manager VIF to bond1 When the 9720 system is installed, the cluster network is moved to bond1. The 6.1 release requires that the Fusion Manager VIF (Agile_Cluster_VIF) also be moved to bond1 to enable access to ports 1234 and 9009.
  • Page 185: Upgrading The Storeall Software To The 5.6 Release

    GUI or use the ibrix_health command. To upgrade all nodes in the cluster automatically, complete the following steps: Check the dashboard on the management console GUI to verify that all nodes are up. Obtain the latest release image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).
  • Page 186: Manual Upgrades

    The manual upgrade process requires external storage that will be used to save the cluster configuration. Each server must be able to access this media directly, not through a network, as the network configuration is part of the saved configuration. HP recommends that you use a USB stick or DVD.
  • Page 187: Restoring The Node Configuration

    For an agile configuration, on all nodes hosting the passive management console, return the management console to passive mode: <ibrixhome>/bin/ibrix_fm -m passive If you received a new license from HP, install it as described in “Licensing” (page 135). Upgrading the StoreAll software to the 5.6 release 187...
  • Page 188: Troubleshooting Upgrade Issues

    Troubleshooting upgrade issues If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support. Automatic upgrade Check the following: If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the StoreAll software before you execute the upgrade script.
  • Page 189: Automatic Upgrades

    IMPORTANT: Do not start new remote replication jobs while a cluster upgrade is in progress. If replication jobs were running before the upgrade started, the jobs will continue to run without problems after the upgrade completes. If you are upgrading from a StoreAll 5.x release, ensure that the NFS exports option subtree_check is the default export option for every NFS export.
  • Page 190: Manual Upgrades

    Manual upgrades Upgrade paths There are two manual upgrade paths: a standard upgrade and an agile upgrade. The standard upgrade is used on clusters having a dedicated Management Server machine or blade running the management console software. The agile upgrade is used on clusters having an agile management console configuration, where the management console software is installed in an active/passive configuration on two cluster nodes.
  • Page 191 2323332 0 (unused) lsmod|grep ipfs ipfs1 102592 0 (unused) If either grep command returns empty, contact HP Support. From the management console, verify that the new version of StoreAll software FS/IAS is installed on the file serving node: <ibrixhome>/bin/ibrix_version -l -S If the upgrade was successful, fail back the file serving node: <ibrixhome>/bin/ibrix_server -f -U -h HOSTNAME
  • Page 192: Standard Offline Upgrade

    The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support. Propagate a new segment map for the cluster: <ibrixhome>/bin/ibrix_dbck -I -f FSNAME Verify the health of the cluster: <ibrixhome>/bin/ibrix_health -l...
  • Page 193 2323332 0 (unused) lsmod|grep ipfs ipfs1 102592 0 (unused) If either grep command returns empty, contact HP Support. From the management console, verify that the new version of StoreAll software FS/IAS has been installed on the file serving nodes: <ibrixhome>/bin/ibrix_version -l -S...
  • Page 194: Agile Upgrade For Clusters With An Agile Management Console Configuration

    If there is a version mismatch, run the /ibrix/ibrixupgrade -f script again on the affected node, and then recheck the versions. The installation is successful when all version indicators match. If you followed all instructions and the version indicators do not match, contact HP Support. Verify the health of the cluster: <ibrixhome>/bin/ibrix_health -l The output should show Passed / on.
  • Page 195 Wait approximately 60 seconds for the failover to complete, and then run the following command on the node that was the target for the failover: <ibrixhome>/bin/ibrix_fm -i The command should report that the agile management console is now Active on this node. From the node on which you failed over the active management console in step 4, change the status of the management console from maintenance to passive: <ibrixhome>/bin/ibrix_fm -m passive...
  • Page 196 16. From the node on which you failed back the active management console in step 8, change the status of the management console from maintenance to passive: <ibrixhome>/bin/ibrix_fm -m passive 17. If the node with the passive management console is also a file serving node, manually fail over the node from the active management console: <ibrixhome>/bin/ibrix_server -f -p -h HOSTNAME Wait a few minutes for the node to reboot, and then run the following command to verify that...
  • Page 197 2323332 0 (unused) lsmod|grep ipfs ipfs1 102592 0 (unused) If either grep command returns empty, contact HP Support. From the management console, verify that the new version of StoreAll software FS/IAS has been installed on the file serving node: <ibrixhome>/bin/ibrix_version -l -S...
  • Page 198: Agile Offline Upgrade

    Propagate a new segment map for the cluster: <ibrixhome>/bin/ibrix_dbck -I -f FSNAME Verify the health of the cluster: <ibrixhome>/bin/ibrix_health -l The output should specify Passed / on. Agile offline upgrade This upgrade procedure is appropriate for major upgrades. Perform the agile offline upgrade in the following order: File serving node hosting the active management console File serving node hosting the passive management console...
  • Page 199 /etc/init.d/ibrix_server status The output should be similar to the following example. If the IAD service is not running on your system, contact HP Support. IBRIX Filesystem Drivers loaded ibrcud is running.. pid 23325 IBRIX IAD Server (pid 23368) running...
  • Page 200: Troubleshooting Upgrade Issues

    102592 0 (unused) If either grep command returns empty, contact HP Support. From the active management console node, verify that the new version of StoreAll software FS/IAS is installed on the file serving nodes: <ibrixhome>/bin/ibrix_version -l -S Completing the upgrade Remount the StoreAll file systems: <ibrixhome>/bin/ibrix_mount -f <fsname>...
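    For example (the file system name and mount point are placeholders, and the -m MOUNTPOINT option is an assumption to confirm in the HP StoreAll Storage CLI Reference Guide):
    # <ibrixhome>/bin/ibrix_mount -f fs1 -m /fs1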
  • Page 201: B Storeall 9730 Component And Cabling Diagrams

    SAS switches. The StoreAll 9730 CXs are numbered starting from the bottom (for example, the StoreAll 9730 CX 1 is located at the bottom of the rack; StoreAll 9730 CX 2 is located directly above StoreAll 9730 CX 1).
  • Page 202: Back View Of The Expansion Rack

    Back view of the expansion rack 1. 9730 CX 8 2. 9730 CX 7 StoreAll 9730 CX I/O modules and SAS port connectors 1. Secondary I/O module (Drawer 2) 2. SAS port 2 connector 3. SAS port 1 connector 4. Primary I/O module (Drawer 2) 5.
  • Page 203: Storeall 9730 Cx 1 Connections To The Sas Switches

    1 through 8, starting from the left.) For example, the 9730 CX 2 connects to port 2 on each SAS switch. The StoreAll 9730 CX 7 connects to port 7 on each SAS switch. StoreAll 9730 CX 1 connections to the SAS switches 203...
  • Page 204: Storeall 9730 Cx 2 Connections To The Sas Switches

    StoreAll 9730 CX 2 connections to the SAS switches On Drawer 1: SAS port 1 connector on the primary I/O module (Drawer 1) to port 2 on the Bay 5 SAS switch SAS port 1 connector on the secondary I/O module (Drawer 1) to port 2 on the Bay 6 SAS...
  • Page 205: Storeall 9730 Cx 3 Connections To The Sas Switches

    SAS port 1 connector on the primary I/O module (Drawer 2) to port 3 on the Bay 7 SAS switch SAS port 1 connector on the secondary I/O module (Drawer 2) to port 3 on the Bay 8 SAS switch StoreAll 9730 CX 3 connections to the SAS switches 205...
  • Page 206: Storeall 9730 Cx 7 Connections To The Sas Switches In The Expansion Rack

    StoreAll 9730 CX 7 connections to the SAS switches in the expansion rack On Drawer 1: SAS port 1 connector on the primary I/O module (Drawer 1) to port 7 on the Bay 5 SAS switch SAS port 1 connector on the secondary I/O module (Drawer 1) to port 7 on the Bay 6 SAS...
  • Page 207: C The Ibrix X9720 Component And Cabling Diagrams

    C The IBRIX X9720 component and cabling diagrams Base and expansion cabinets A minimum IBRIX X9720 Storage base cabinet has from 3 to 16 performance blocks (that is, server blades) and from 1 to 4 capacity blocks. An expansion cabinet can support up to four more capacity blocks, bringing the system to eight capacity blocks.
  • Page 208: Back View Of A Base Cabinet With One Capacity Block

    Back view of a base cabinet with one capacity block 1. Management switch 2 2. Management switch 1 3. X9700c 1 4. TFT monitor and keyboard 5. c-Class Blade enclosure 6. X9700cx 1 208 The IBRIX X9720 component and cabling diagrams...
  • Page 209: Front View Of A Full Base Cabinet

    Front view of a full base cabinet 1 X9700c 4 6 X9700cx 3 2 X9700c 3 7 TFT monitor and keyboard 3 X9700c 2 8 c-Class Blade Enclosure 4 X9700c 1 9 X9700cx 2 5 X9700cx 4 10 X9700cx 1 Base and expansion cabinets 209...
  • Page 210: Back View Of A Full Base Cabinet

    Back view of a full base cabinet 1 Management switch 2 7 X9700cx 4 2 Management switch 1 8 X9700cx 3 3 X9700c 4 9 TFT monitor and keyboard 4 X9700c 3 10 c-Class Blade Enclosure 5 X9700c 2 11 X9700cx 2 6 X9700c 1 12 X9700cx 1 The IBRIX X9720 component and cabling diagrams
  • Page 211: Front View Of An Expansion Cabinet

    Front view of an expansion cabinet The optional X9700 expansion cabinet can contain from one to four capacity blocks. The following diagram shows a front view of an expansion cabinet with four capacity blocks. 1. X9700c 8 5. X9700cx 8 2.
  • Page 212: Back View Of An Expansion Cabinet With Four Capacity Blocks

    Back view of an expansion cabinet with four capacity blocks 1. X9700c 8 5. X9700cx 8 2. X9700c 7 6. X9700cx 7 3. X9700c 6 7. X9700cx 6 4. X9700c 5 8. X9700cx 5 Performance blocks (c-Class Blade enclosure) A performance block is a special server blade for the X9720. Server blades are numbered according to their bay number in the blade enclosure.
  • Page 213: Rear View Of A C-Class Blade Enclosure

    Rear view of a c-Class Blade enclosure 1. Interconnect bay 1 (Virtual Connect Flex-10 10Gb Ethernet Module) 6. Interconnect bay 6 (reserved for future use) 2. Interconnect bay 2 (Virtual Connect Flex-10 10Gb Ethernet Module) 7. Interconnect bay 7 (reserved for future use) 3.
  • Page 214: Capacity Blocks

    Ethernet module cabling—Base cabinet” (page 217). If you connect several ports to the same switch in your site network, all ports must use the same media type. In addition, HP recommends you use 10 links. The X9720 Storage uses mode 1 (active/backup) for network bonds. No other bonding mode is supported.
  • Page 215: X9700C (Array Controller With 12 Disk Drives)

    This component is also known as the HP 600 Modular Disk System. For an explanation of the LEDs and buttons on this component, see the HP 600 Modular Disk System User Guide at http://www.hp.com/support/manuals. Under Storage click Disk Storage Systems, then under Disk Enclosures click HP 600 Modular Disk System.
  • Page 216: Front View Of An X9700Cx

    Front view of an X9700cx 1. Drawer 1 2. Drawer 2 Rear view of an X9700cx 1. Power supply 5. In SAS port 2. Primary I/O module drawer 2 6. Secondary I/O module drawer 1 3. Primary I/O module drawer 1 7.
  • Page 217: Virtual Connect Flex- 1 0 Ethernet Module Cabling-Base Cabinet

    X9700c X9700cx primary I/O module (drawer 2) X9700cx secondary I/O module (drawer 2) X9700cx primary I/O module (drawer 1) X9700cx secondary I/O module (drawer 1) Virtual Connect Flex-10 Ethernet module cabling—Base cabinet Site network Onboard Administrator Available uplink port Management switch 2 Bay 5 (reserved for future use) Management switch 1
  • Page 218: Sas Switch Cabling-Base Cabinet

    SAS switch cabling—Base cabinet NOTE: Callouts 1 through 3 indicate additional X9700c components. X9700c 4 X9700c 3 X9700c 2 X9700c 1 SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Ports 2 through 4 are reserved for additional capacity blocks.
  • Page 219 X9700c 8 SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Used by base cabinet. X9700c 7 SAS switch ports 5 through 8 (in interconnect bay 3 of the c-Class Blade Enclosure). X9700c 6 SAS switch ports 1 through 4 (in interconnect bay 4 of the c-Class Blade Enclosure).
  • Page 220: D Warnings And Precautions

    Use conductive field service tools. Use a portable field service kit with a folding static-dissipating work mat. If you do not have any of the suggested equipment for proper grounding, have an HP-authorized reseller install the part. NOTE: For more information on static electricity or assistance with product installation, contact your HP-authorized reseller.
  • Page 221: Equipment Symbols

    Equipment symbols If the following symbols are located on equipment, hazardous conditions could exist. WARNING! Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. Enclosed area contains no operator serviceable parts. To reduce the risk of injury from electrical shock hazards, do not open this enclosure.
  • Page 222: Device Warnings And Precautions

    WARNING! To reduce the risk of personal injury or damage to the equipment: Observe local occupational safety requirements and guidelines for heavy equipment handling. Obtain adequate assistance to lift and stabilize the product during installation or removal. Extend the leveling jacks to the floor. Rest the full weight of the rack on the leveling jacks.
  • Page 223 WARNING! To reduce the risk of personal injury or damage to the equipment, the installation of non-hot-pluggable components should be performed only by individuals who are qualified in servicing computer equipment, knowledgeable about the procedures and precautions, and trained to deal with products capable of producing hazardous energy levels. WARNING! To reduce the risk of personal injury or damage to the equipment, observe local occupational health and safety requirements and guidelines for manually handling material.
  • Page 224: E Regulatory Information

    The equipment complies with the requirements of the Technical Regulation on the restriction of the use of certain hazardous substances in electrical and electronic equipment, approved by Resolution No. 1057 of the Cabinet of Ministers of Ukraine of 3 December 2008. Warranty information HP ProLiant and X86 Servers and Options http://www.hp.com/support/ProLiantServers-Warranties HP Enterprise Servers http://www.hp.com/support/EnterpriseServers-Warranties HP Storage Products http://www.hp.com/support/Storage-Warranties...
  • Page 225: Glossary

    DNS Domain Name System. FTP File Transfer Protocol. GSI Global service indicator. HA High availability. HBA Host bus adapter. HCA Host channel adapter. HDD Hard disk drive. IAD HP 9000 Software Administrative Daemon. iLO Integrated Lights-Out. IML Initial microcode load. IOPS I/Os per second. IPMI Intelligent Platform Management Interface. JBOD Just a bunch of disks.
  • Page 226 TCP/IP Transmission Control Protocol/Internet Protocol. UDP User Datagram Protocol. UID Unit identification. USM SNMP User Security Model. VACM SNMP View Access Control Model. VC HP Virtual Connect. VIF Virtual interface. WINS Windows Internet Name Service. WWN World Wide Name. A unique identifier assigned to a Fibre Channel device. WWNN World wide node name.
  • Page 227: Index

    118 change network, run health check, defined, start or stop processes, 118 contacting HP, statistics, core dump, troubleshooting, tune, 118 view process status, 118 document file system
  • Page 228 Details panel, link state monitoring, Navigator, Linux StoreAll clients, upgrade, 18, open, loading rack, warning, view events, localization, log files, collect for HP Support, hardware logging in, power on, 117 shut down, 116 hazardous conditions manpages, symbols on equipment,
  • Page 229 115 routing table entries start, 117 upgrade, 10, add, StoreAll software 5.5 upgrade, delete, StoreAll software 5.6 upgrade, Subscriber's Choice, HP, segments subtree_check, evacuate from cluster, symbols migrate, on equipment, server blades system recovery, booting, system startup after power failure,
  • Page 230 HP Enterprise servers, HP Networking products, HP ProLiant and X86 Servers and Options, HP Storage products, websites HP Subscriber's Choice for Business, spare parts, weight, warning, Windows StoreAll clients, upgrade, 19, 230 Index...
