HP IBRIX X9720 Administrator's Manual

HP IBRIX X9720/StoreAll 9730 Storage
Administrator Guide
Abstract
This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component
replacement, and troubleshooting for the HP X9720/9730 Storage. It does not document StoreAll file system features or
standard Linux administrative tools and commands. For information about configuring and using StoreAll software file system
features, see the HP StoreAll OS User Guide.
This guide is intended for system administrators and technicians who are experienced with installing and administering networks,
and with performing Linux operating and administrative tasks. For the latest StoreAll guides, browse to
http://www.hp.com/support/StoreAllManuals.
HP Part Number: AW549-96078
Published: January 2014
Edition: 14

Summary of Contents for HP IBRIX X9720

  • Page 1 This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component replacement, and troubleshooting for the HP X9720/9730 Storage. It does not document StoreAll file system features or standard Linux administrative tools and commands. For information about configuring and using StoreAll software file system...
  • Page 2 The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
  • Page 3 Revision history table (Edition, Date, Software Version, Description): ...upgrades, and Upgrading firmware. Moved the “Generating reports” and “Obtaining performance statistics” chapters to the HP StoreAll Storage User Guide.
  • Page 4: Table Of Contents

    System components........................9 HP StoreAll software features......................9 High availability and redundancy.....................10 2 Getting started..................11 Setting up the system.......................11 Installation steps done by an HP service specialist..............11 Additional configuration steps.....................11 Management interfaces......................12 Using the StoreAll Management Console................13 Customizing the GUI......................16 Adding user accounts for Management Console access............16 Using the CLI........................17...
  • Page 5 Viewing information about Fusion Managers.................40 Configuring High Availability on the cluster................40 What happens during a failover..................41 Configuring automated failover with the HA Wizard...............41 Changing the HA configuration..................44 Managing power sources....................44 Adding a NIC........................45 Configuring HA on a NIC....................46 Server NICs........................47 Servers..........................47 Configuring automated failover manually................48 Changing the HA configuration manually.................49...
  • Page 6 Other host group operations....................71 Add preferred NIC.........................71 Modify host group properties....................71 Mount a file system to a host group...................71 Host group mountpoints tab.....................72 Host group preferred NICs.......................72 8 Monitoring cluster operations..............73 Monitoring hardware......................73 Monitoring servers......................73 Monitoring hardware components..................76 Monitoring blade enclosures...................77 Obtaining server details....................80 Monitoring storage and storage components.................83 Monitoring storage clusters.....................85...
  • Page 7 Viewing license terms......................123 Retrieving a license key......................123 Using AutoPass to retrieve and install permanent license keys............123 11 Troubleshooting..................124 Collecting information for HP Support with the IbrixCollect............124 Viewing the status of data collection...................124 Collecting logs........................124 Downloading the data collection (archive file)..............126 Deleting logs........................127...
  • Page 8 StoreAll 9730 CX 3 connections to the SAS switches..............164 StoreAll 9730 CX 7 connections to the SAS switches in the expansion rack........165 B The IBRIX X9720 component and cabling diagrams........166 Base and expansion cabinets....................166 Front view of a base cabinet....................166 Back view of a base cabinet with one capacity block............167...
  • Page 9: Product Description

    1 Product description The HP IBRIX X9720 and 9730 are scalable, network-attached storage (NAS) products. The system combines HP StoreAll software with HP server and storage hardware to create a cluster of file serving nodes. IMPORTANT: Keep regular backups of the cluster configuration.
  • Page 10: High Availability And Redundancy

    High availability and redundancy The segmented architecture is the basis for fault resilience: loss of access to one or more segments does not render the entire file system inaccessible. Individual segments can be taken offline temporarily for maintenance operations and then returned to the file system. To ensure continuous data access, StoreAll software provides manual and automated failover protection at various points: Server.
  • Page 11: Getting Started

    Do not modify any parameters of the operating system or kernel, or update any part of the storage unless instructed to do so by HP; otherwise, the system could fail to operate properly. File serving nodes are tuned for file serving operations. With the exception of supported backup programs, do not run other applications directly on the nodes.
  • Page 12: Management Interfaces

    Data tiering. Use this feature to move files to specific tiers based on file attributes. For more information about these file system features, see the HP StoreAll OS User Guide. Localization support Red Hat Enterprise Linux 5 uses the UTF-8 (8-bit Unicode Transformation Format) encoding for supported locales.
  • Page 13: Using The Storeall Management Console

    Using the StoreAll Management Console The StoreAll Management Console is a browser-based interface to the Fusion Manager. See the release notes for the supported browsers and other software required to view charts on the dashboard. You can open multiple Management Console windows as necessary. If you are using HTTP to access the Management Console, open a web browser and navigate to the following location, specifying port 80: http://<management_console_IP>:80/fusion...
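    If your site uses HTTPS rather than HTTP, the URL presumably takes the same form over the secure port; this variant is an assumption to confirm for your installation:
    https://<management_console_IP>/fusion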
  • Page 14 System Status The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events: Alerts. Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed.
  • Page 15 Statistics Historical performance graphs for the following items: Network I/O (MB/s) Disk I/O (MB/s) CPU usage (%) Memory usage (%) On each graph, the X-axis represents time and the Y-axis represents performance. Use the Statistics menu to select the servers to monitor (up to two), to change the maximum value for the Y-axis, and to show or hide resource usage distribution for CPU and memory.
  • Page 16: Customizing The Gui

    Customizing the display You can customize the tables in the GUI to change the sort order of table columns, or to specify which columns in the table to display. Mouse over any column label. If the label field changes color and a pointer displays on the field's right edge, the field can be customized.
  • Page 17: Using The Cli

    StoreAll clients can access the Fusion Manager as follows: Linux clients. Use Linux client commands for tasks such as mounting or unmounting file systems and displaying statistics. See the HP StoreAll OS CLI Reference Guide for details about these commands.
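    As a hedged illustration of the Linux client commands mentioned above, mounting and unmounting a file system from a StoreAll client might look like the following; ifs1 and /mnt/ifs1 are hypothetical names, and the exact options should be confirmed in the HP StoreAll OS CLI Reference Guide:
    # ibrix_lwmount -f ifs1 -m /mnt/ifs1
    # ibrix_lwumount -f ifs1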
  • Page 18: Configuring Ports For A Firewall

    You will be prompted to enter the new password. Configuring ports for a firewall IMPORTANT: To avoid unintended consequences, HP recommends that you configure the firewall during scheduled maintenance times. When configuring a firewall, you should be aware of the following: SELinux should be disabled.
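    As a minimal sketch only, permitting the Management Console HTTP port described earlier might look like the following on a standard iptables setup; the full list of ports StoreAll requires is covered in this firewall section, so adapt the rule set to your release before applying:
    # iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    # service iptables save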
  • Page 19: Configuring Ntp Servers

    Configuring HP Insight Remote Support on StoreAll systems HP Insight Remote Support (IRS) provides comprehensive remote monitoring, notifications/advisories, dispatch, and proactive service support for HP StoreAll systems. IMPORTANT: HP IRS is mandatory for sending critical events to HP Support. Configuring NTP servers...
  • Page 20: Overview

    Windows system, which is referred to as the Central Management Server (CMS). You must install HP Insight Remote Support (HP IRS) on the CMS. If you want to manage your StoreAll devices, you can install HP Systems Insight Manager (HP SIM) on the CMS. However, be aware that HP SIM is not required for sending events to HP Support.
  • Page 21: Installing And Configuring Insight Remote Support

    You may configure the attached storage separately for HP Support. See the storage documentation for more information. Configuring the StoreAll cluster for Insight Remote Support The following list is an overview of the steps to perform to configure the StoreAll cluster for HP Insight Remote Support:...
  • Page 22 Click Enable to configure the settings on the Phone Home Settings dialog box. When entering information in this dialog box, consider the following: You must enter the IP address of the CMS on which HP IRS is installed. All other fields are optional.
  • Page 23: Compiling The Mib

    <BASE>\mibs>mcompile ibrixMib.mib <BASE>\mibs>mxmib -a ibrixMib.cfg For more information about the MIB, see the "Compiling and customizing MIBs" chapter in the HP Systems Insight Manager User Guide, which is available at: http://www.hp.com/go/insightmanagement/sim/ Click Support & Documents and then click Manuals. Navigate to the user guide.
  • Page 24: Configuring Entitlements

    The Customer Entered Serial Number and Customer Entered Product Number fields are required entries. This information is used by HP Support for warranty checks. These numbers are located on the information tag attached to the front panel of the hardware. Or, you can reuse the values that display in the Serial Number and Product Number fields (which are automatically discovered by the StoreAll OS software).
  • Page 25: Configuring Server Entitlements

    To use the CLI to configure entitlements, see the ibrix_phonehome command in the HP StoreAll OS CLI Reference Guide for more information. Configuring server entitlements Entitlements must be entered for the applicable devices in your configuration (servers, storage, chassis). This information includes the hardware-related information (product name, serial number, and product number) and the IP address or host name of the device.
  • Page 26: Configuring Chassis Entitlements

    OS CLI Reference Guide for more information. Discovering devices With StoreAll OS 6.5 or later and HP IRS 7.0.8 or later, you can discover devices using HP IRS. See “Discovering devices using HP IRS” (page 26). If you want to manage devices using HP SIM, you must also discover devices using HP SIM.
  • Page 27 StoreAll OS software (these fields are called Override Serial Number and Override Product Number in HP IRS). If the discovered device displays a green check mark in the Warranty & Contract column, then the device is enabled for HP Support.
  • Page 28: Discovering Devices Using Hp Sim

    IMPORTANT: If you are running StoreAll OS 6.5 or later and IRS 7.0.8, device discovery through HP SIM is only required if you want to manage devices through HP SIM. Otherwise, you can skip this procedure. HP Systems Insight Manager (SIM) uses the SNMP protocol to discover and identify StoreAll systems automatically.
  • Page 29 HP StoreAll 9300 Gateway Storage Node
    HP StoreAll 8800 Storage Node
    HP StoreAll 8200 Gateway Storage Node
    HP StoreAll 9720 Storage Node (only for ProLiant G7–based 9720)
    Table 4 Device names and branding for StoreAll OS 6.3 or earlier
    Device...
  • Page 30 When running a StoreAll OS version earlier than 6.5, all devices discovered through HP SIM 7.3 will have a System Subtype of StoreAll. The following figures show examples of discovered devices in HP SIM 7.3 when running StoreAll OS 6.5 or later.
  • Page 31: Managing The Phone Home Configuration

    If Phone Home was configured prior to upgrading to StoreAll OS 6.5, there are four scenarios to consider: ◦ If Phone Home is not reconfigured after the upgrade, HP SIM 7.3 will display the old StoreAll branding names (see Table 4 (page 29)).
  • Page 32: Rescanning The Phone Home Configuration

    Only first IP address of MSA disk array is discovered When enabling Phone Home for MSA disk arrays, Phone Home only discovers the first registered IP address. You must manually discover the second IP address of the MSA on HP SIM, using the HP SIM discovery option.
  • Page 33 Fusion Manager IP is discovered as “Unknown” in HP SIM Verify that the read community string entered in HP SIM matches the Phone Home read community string. Also, run snmpwalk on the VIF IP, and verify the information: # snmpwalk -v 1 -c <read community string>...
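    A complete snmpwalk invocation would look like the following sketch, where public stands in for your read community string, 192.168.1.100 is a hypothetical VIF IP, and the system subtree is an assumed starting point for the check:
    # snmpwalk -v 1 -c public 192.168.1.100 system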
  • Page 34 The Customer Entered Serial Number and Customer Entered Product Number are displayed when you run the ibrix_phonehome -l command. Details for the Standby OA device must be entered manually In HP SIM, you must manually update the CMS IP address and Custom Delivery ID details for the Standby OA device.
  • Page 35: Configuring Virtual Interfaces For Client Access

    Fusion Manager, a virtual interface is created for the cluster network interface. Although the cluster network interface can carry traffic between file serving nodes and clients, HP recommends that you configure one or more user network interfaces for this purpose.
  • Page 36: Creating A Bonded Vif

    To assign the IFNAME a default route for the parent cluster bond and the user VIFS assigned to FSNs for use with SMB/NFS, enter the following ibrix_nic command at the command prompt: # ibrix_nic -r -n IFNAME -h HOSTNAME-A -R <ROUTE_IP> Configure backup monitoring, as described in “Configuring backup servers”...
  • Page 37: Configuring Automated Failover

    For example: # ibrix_nic -m -h node1 -A node2/bond0:1 # ibrix_nic -m -h node2 -A node1/bond0:1 # ibrix_nic -m -h node3 -A node4/bond0:1 # ibrix_nic -m -h node4 -A node3/bond0:1 Configuring automated failover To enable automated failover for your file serving nodes, execute the following command: ibrix_server -m [-h SERVERNAME] Example configuration This example uses two nodes, ib50-81 and ib50-82.
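    Building on the example above, enabling automated failover for all file serving nodes and then confirming their state might look like the following sketch (ibrix_server -l is the listing command used elsewhere in this guide):
    # ibrix_server -m
    # ibrix_server -l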
  • Page 38: Configuring Vlan Tagging

    # ibrix_nic -b -H ib142-131/bond0.51,ib142-129/bond0.51:2 Create the user FM VIF: ibrix_fm -c 192.168.51.125 -d bond0.51:1 -n 255.255.255.0 -v user For more information about VLAN tagging, see the HP StoreAll Storage Network Best Practices Guide. Support for link state monitoring Do not configure link state monitoring for user network interfaces or VIFs that will be used for SMB or NFS.
  • Page 39: Configuring Failover

    4 Configuring failover This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs. Agile management consoles The agile Fusion Manager maintains the cluster configuration and provides graphical and command-line user interfaces for managing and monitoring the cluster. The agile Fusion Manager is installed on all file serving nodes when the cluster is installed.
  • Page 40: Viewing Information About Fusion Managers

    The failover will take approximately one minute. To see which node is now the active Fusion Manager, enter the following command: ibrix_fm -i The failed-over Fusion Manager remains in nofmfailover mode until it is moved to passive mode using the following command: ibrix_fm -m passive NOTE: A Fusion Manager cannot be moved from nofmfailover mode to active mode.
  • Page 41: What Happens During A Failover

    What happens during a failover The following actions occur when a server is failed over to its backup: The Fusion Manager verifies that the backup server is powered on and accessible. The Fusion Manager migrates ownership of the server’s segments to the backup and notifies all servers and StoreAll clients about the migration.
  • Page 42 The wizard also attempts to locate the IP addresses of the iLOs on each server. If it cannot locate an IP address, you will need to enter the address when prompted. When you have completed the information, click Enable HA Monitoring and Auto-Failover for both servers. Click Next to continue.
  • Page 43 Next, enable NIC monitoring on the VIF. Select the new user NIC and click NIC HA. The NIC HA Config dialog box appears. See “Configuring HA on a NIC” (page 46) for more information. After completing the NIC HA Config dialog box, the NIC HA Setup window appears again.
  • Page 44: Changing The Ha Configuration

    Changing the HA configuration To change the configuration of a NIC, select the server on the Servers panel, and then select NICs from the lower Navigator. Click Modify on the NICs panel. The General tab on the Modify NIC Properties dialog box allows you change the IP address and other NIC properties. The NIC HA tab allows you to enable or disable HA monitoring and failover on the NIC and to change or remove the standby NIC.
  • Page 45: Adding A Nic

    Adding a NIC On the Add NIC dialog box: Complete the following fields as needed: NOTE: Name and IP Address are required fields. Name: To configure an active physical network, enter the name of the existing physical interface (such as eth4 or bond1). To configure a virtual interface (VIF), enter a name based on the existing physical network, such as bond0:1.
  • Page 46: Configuring Ha On A Nic

    Configuring HA on a NIC On the NIC HA Config dialog box: Select Enable NIC Monitoring. Select the NIC to be the standby NIC to the backup server (the server listed in the Standby Server box). The standby NIC you select must be valid and available. If you need to create a standby NIC, select New Standby NIC in this box, which opens the Add NIC dialog box.
  • Page 47: Server Nics

    Server NICs This panel displays information about the NICs on the selected server and allows you to add, modify, migrate, or remove an interface. The options are: Add: Add a new interface to the selected server. When you click Add, the Add NIC dialog box opens, and you can specify the name of the new interface.
  • Page 48: Configuring Automated Failover Manually

    Field    Description
    Backup   The name of the standby server, if assigned.
    HA       Whether high availability features are on or off.
    State    A File Serving Node can be in the following states:
             Registered: Configured but not operational.
             Up: Operational.
             Up-Alert: Server has encountered a condition that has been logged. Check the events log on the Events tab.
  • Page 49: Changing The Ha Configuration Manually

    -m -h node2 -A node1/bond0:1 ibrix_nic -m -h node3 -A node4/bond0:1 ibrix_nic -m -h node4 -A node3/bond0:1 The next example sets up server s2.hp.com to monitor server s1.hp.com over user network interface eth1: ibrix_nic -m -h s2.hp.com -A s1.hp.com/eth1 4.
  • Page 50: Failing A Server Over Manually

    The STATE field indicates the status of the failover. If the field persistently shows Down-InFailover or Up-InFailover, the failover did not complete; contact HP Support for assistance. For information about the values that can appear in the STATE field, see “What happens during a...
  • Page 51: Setting Up Hba Monitoring

    A failback might not succeed if the time period between the failover and the failback is too short, and the primary server has not fully recovered. HP recommends ensuring that both servers are up and running and then waiting 60 seconds before starting the failback. Use the ibrix_server -l command to verify that the primary server is up and running.
  • Page 52 HBA failure. Use the following command: ibrix_hba -m -h HOSTNAME -p PORT For example, to turn on HBA monitoring for port 20.00.12.34.56.78.9a.bc on node s1.hp.com: ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.bc To turn off HBA monitoring for an HBA port, include the -U option:...
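    To review the resulting monitoring state, a listing along the following lines would presumably show the Monitoring field described on the next page; the -l option is an assumption modeled on the other ibrix_* listing commands and should be confirmed in the HP StoreAll OS CLI Reference Guide:
    # ibrix_hba -l -h s1.hp.com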
  • Page 53: Servers Modify Hba Properties

    Field            Description
    Port State       Operational state of the port.
    Backup Port WWN  WWPN of the standby port for this port (standby-paired HBAs only).
    Monitoring       Whether HBA monitoring is enabled for this port.
    Servers modify HBA properties page: This dialog allows you to enable or disable the monitoring feature for HBA High Availability.
  • Page 54: Capturing A Core Dump From A Failed Node

    For example, to view a summary report for file serving nodes xs01.hp.com and xs02.hp.com: ibrix_haconfig -l -h xs01.hp.com,xs02.hp.com
    Host         HA Configuration  Power Sources  Backup Servers  Auto Failover  Nics Monitored  Standby Nics  HBAs Monitored
    xs01.hp.com  FAILED            PASSED         PASSED          PASSED         FAILED          PASSED        FAILED
    xs02.hp.com...
  • Page 55: Prerequisites For Setting Up The Crash Capture

    After the core dump is created, the failed node reboots and its state changes to Up, FailedOver. Prerequisites for setting up the crash capture The following parameters must be configured in the ROM-based setup utility (RBSU) before a crash can be captured automatically on a file serving node in a failed condition. Start RBSU.
  • Page 56 Tune Fusion Manager to set the DUMPING status timeout by entering the following command: ibrix_fm_tune -S -o dumpingStatusTimeout=240 This command is required to delay the failover until the crash kernel is loaded; otherwise, Fusion Manager will bring down the failed node.
  • Page 57: Configuring Cluster Event Notification

    5 Configuring cluster event notification Cluster events There are three types of cluster events:
    Table 5 Event types
    Icon  Type    Description
          Alerts  Disruptive events that can result in loss of access to file system data (for example, a segment is unavailable or a server is unreachable).
  • Page 58: Viewing Email Notification Of Cluster Events

    When viewing events on the Events window, the following information is displayed: Level: Indicates the event type by icon (see Table 5 (page 57)). This column is sortable. Time: Indicates the time the event originated on the management server. Event: Displays the details of the event, including suggested actions. You can be notified of cluster events by email or SNMP traps.
  • Page 59: Dissociating Events And Email Addresses

    Enter the new email address in the Update events for addresses box. Select the applicable events. Click OK when finished. To manage events using the CLI, use the ibrix_event command. See the HP StoreAll OS CLI Reference Guide for more information. Dissociating events and email addresses...
  • Page 60: Configuring The Snmp Agent

    Steps for setting up SNMP include: Agent configuration (all SNMP versions) Trapsink configuration (all SNMP versions) Associating event notifications with trapsinks (all SNMP versions) View definition (V3 only) Group and user configuration (V3 only) The StoreAll software implements an SNMP agent that supports the private StoreAll software MIB. The agent can be polled and can send SNMP traps to configured trapsinks.
  • Page 61: Associating Events And Trapsinks

    StoreAll software supports multiple trapsinks; you can define any number of trapsinks of any SNMP version, but you can define only one trapsink per host, regardless of the version. At a minimum, trapsink configuration requires a destination host and SNMP version. All other parameters are optional and many assume the default value if no value is specified.
  • Page 62: Configuring Groups And Users

    The subtree is added in the named view. For example, to add the StoreAll software private MIB to the view named hp, enter: ibrix_snmpview -a -v hp -o .1.3.6.1.4.1.18997 -m .1.1.1.1.1.1.1 Configuring groups and users A group defines the access control policy on managed objects for one or more users. All users must belong to a group.
  • Page 63: Snmp Events Panel

    with these trapsinks. A single event can generate notifications to multiple trapsinks. Also, different sets of events can generate notifications to different trapsinks. The following information is available about the SNMP Agent:
    System Description  Name of the SNMP Agent. This field is prefilled and cannot be changed.
    SNMP Version        Version of SNMP.
  • Page 64: Configuring System Backups

    6 Configuring system backups Backing up the Fusion Manager configuration The Fusion Manager configuration is automatically backed up whenever the cluster configuration changes. The backup occurs on the node hosting the active Fusion Manager. The backup file is stored at <ibrixhome>/tmp/fmbackup.zip on that node. The active Fusion Manager notifies the passive Fusion Manager when a new backup file is available.
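    A manual backup can presumably be forced from the CLI as well; the -B option shown here is an assumption to verify against the HP StoreAll OS CLI Reference Guide before relying on it:
    # ibrix_fm -B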
  • Page 65: Configuring Ndmp Parameters On The Cluster

    Log Level                The level of log verbosity. This value should be set to 0, the default. The level should be increased only under the direction of HP Support.
    TCP Window Size (Bytes)  The window size allowed for data transfers. This value should be changed only for performance reasons.
  • Page 66: Managing Ndmp Processes

    Click Synchronize on the NDMP Configuration Summary window to copy the configuration to all nodes. To configure NDMP using the CLI, see the ibrix_ndmpconfig command in the HP StoreAll OS CLI Reference Guide. Managing NDMP processes Normally all NDMP actions are controlled from the DMA. However, if the DMA cannot resolve a...
  • Page 67: Starting, Stopping, Or Restarting An Ndmp Server

    To view or rescan devices using the CLI, see the ibrix_tape command in the HP StoreAll OS CLI Reference Guide.
  • Page 68: Ndmp Events

    NDMP events An NDMP Server can generate three types of events: INFO, WARN, and ALERT. These events are displayed on the GUI and can be viewed with the ibrix_event command. INFO events. Identifies when major NDMP operations start and finish, and also report progress. For example: 7012:Level 3 backup of /mnt/ibfs7 finished at Sat Nov 7 21:20:58 PST 2011...
  • Page 69: Creating Host Groups For Storeall Clients

    Parent: If the host group is a child group, the name of the host group's parent in the group hierarchy is displayed. Domain: If defined, the subnet IP address used to assign HP StoreAll clients to this group is displayed. A domain rule restricts host group membership to HP StoreAll clients on the cluster subnet identified by this address.
  • Page 70: Adding A Storeall Client To A Host Group

    For example, suppose that you want all clients to be able to mount file system ifs1 and to implement a set of host tunings denoted as Tuning 1, but you want to override these global settings for certain host groups. To do this, mount ifs1 on the clients host group, ifs2 on host group A, ifs3 on host group C, and ifs4 on host group D, in any order.
  • Page 71: Viewing Host Groups

    Admin and Server threads, select a protocol (TCP or UDP) and define a domain. Under the Advanced tab, you can specify module tune options. For more information, see the HP StoreAll OS User Guide and the HP StoreAll OS CLI Reference Guide.
  • Page 72: Host Group Mountpoints Tab

    The dialog contains the following fields:
    Field       Description
    Mountpoint  The path that will be used as the mountpoint on each client.
    Filesystem  The file system to be mounted.
    atime       Update the inode access time when the inode is accessed.
    nodiratime  Do not update the directory inode access time when the directory is accessed.
  • Page 73: Monitoring Cluster Operations

    8 Monitoring cluster operations This chapter describes how to monitor the operational state of the cluster and how to monitor cluster health. Monitoring hardware The GUI displays status, firmware versions, and device information for the servers, chassis, and system storage included in X9720 and 9730 systems. The Management Console displays a top-level status of the chassis, server, and storage hardware components.
  • Page 74 Select the server component that you want to view from the lower Navigator panel, such as NICs.
  • Page 75 The following are the top-level options provided for the server: NOTE: Information about the Hardware node can be found in “Monitoring hardware components” (page 76). HBAs. The HBAs panel displays the following information: ◦ Node WWN ◦ Port WWN ◦ Backup ◦...
  • Page 76: Monitoring Hardware Components

    ◦ Route ◦ Standby Server ◦ Standby Interface Mountpoints. The Mountpoints panel displays the following information: ◦ Mountpoint ◦ Filesystem ◦ Access NFS. The NFS panel displays the following information: ◦ Host ◦ Path ◦ Options CIFS. The CIFS panel displays the following information: NOTE: CIFS in the GUI has not been rebranded to SMB yet.
  • Page 77: Monitoring Blade Enclosures

    and SAS switches). The following Onboard Administrator view shows a chassis enclosure on a StoreAll 9730 system. To monitor these components from the GUI: Click Servers from the upper Navigator tree. Click Hardware from the lower Navigator tree for information about the chassis that contains the server selected on the Servers panel, as shown in the following image.
  • Page 78 Detailed information of the hardware components in the blade enclosure is provided by expanding the Blade Enclosure node and clicking one of the sub-nodes. When you select one of the sub-nodes under the Blade Enclosure node, additional information is provided. For example, when you select the Fan node, additional information about the Fan for the blade enclosure is provided in the Fan panel.
  • Page 79 Table 7 Obtaining detailed information about a blade enclosure
    Panel name                                        Information provided
                                                      Status, Type, Name, UUID, Serial number, Model, Properties
    Temperature Sensor: The Temperature Sensor panel  Status, Type, ...
    displays information for a bay, OA module, or
    for the blade enclosure.
  • Page 80: Obtaining Server Details

    Obtaining server details The Management Console provides detailed information for each server in the chassis. To obtain summary information for a server, select the Server node under the Hardware node. The following overview information is provided for each server: Status Type Name UUID...
  • Page 81 Table 8 Obtaining detailed information about a server
    Panel name   Information provided
                 Status, Type, Name, UUID, Model, Location
    ILO Module   Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Properties
    Memory DiMM  Status, Type, Name, UUID, Location, Properties
                 Status, Type, Name, UUID, Properties
  • Page 82 Table 8 Obtaining detailed information about a server (continued)
    Panel name                                       Information provided
    Drive: Displays information about each drive in  Status, Type, Name, UUID, Serial Number, Model, Firmware Version, Location, Properties
    a storage cluster.
    Storage Controller (Displayed for a server)      Status, Type, Name...
  • Page 83: Monitoring Storage And Storage Components

    Monitoring storage and storage components Select Vendor Storage from the Navigator tree to display status and device information for storage and storage components. The Vendor Storage panel lists the HP 9730 CX storage systems included in the system. The Summary panel shows details for a selected vendor storage.
  • Page 84 The Management Console provides a wide range of information about vendor storage, as shown in the following image. Drill down into the following components in the lower Navigator tree to obtain additional details: Servers. The Servers panel lists the host names for the attached storage. Storage Cluster.
  • Page 85: Monitoring Storage Clusters

    Monitoring storage clusters The Management Console provides detailed information for each storage cluster. Click one of the following sub-nodes displayed under the Storage Clusters node to obtain additional information: Drive Enclosure. The Drive Enclosure panel provides detailed information about the drive enclosure.
  • Page 86 Expand the Drive Enclosure node to provide additional information about the power supply and sub enclosures.
    Table 9 Details provided for the drive enclosure
    Node           Where to find detailed information
    Power Supply   “Monitoring the power supply for a storage cluster” (page 86)
    Sub Enclosure  “Monitoring sub enclosures”...
  • Page 87 Monitoring sub enclosures Expand the Sub Enclosure node to obtain information about the following components for each sub-enclosure: Drive. The Drive panel provides the following information about the drives in a sub-enclosure: ◦ Status ◦ Volume Name ◦ Type ◦ UUID ◦...
  • Page 88: Monitoring Pools For A Storage Cluster

    ◦ Serial Number ◦ Model ◦ Firmware Version Temperature Sensor. The Temperature Sensor panel provides the following information about the temperature sensors in the sub-enclosure: ◦ Status ◦ Type ◦ Name ◦ UUID ◦ Properties Monitoring pools for a storage cluster The Management Console lists a Pool node for each pool in the storage cluster.
  • Page 89: Monitoring Storage Controllers For A Storage Cluster

    UUID Properties To obtain details on the volumes in the pool, expand the Pool node and then select the Volume node. The following information is displayed for the volume in the pool: Status Type Name UUID Properties The following image shows information for two volumes named LUN_15 and LUN_16 on the Volume panel.
  • Page 90: Monitoring Storage Switches In A Storage Cluster

    Monitoring the IO Cache Modules for a storage controller The IO Cache Module panel displays the following information about the IO cache module for a storage controller: Status Type UUID Properties. Provides information about the read, write and cache size properties. In the following image, the IO Cache Module panel shows an IO cache module with read/write properties enabled.
  • Page 91: Managing Luns In A Storage Cluster

    LUNs. The LUNs panel provides information about the LUNs in a storage cluster. For HP StoreAll 9730/X9720 systems, the Vendor Storage panel lists the HP 600 Modular Disk Systems (MDS600) included in the system. The Summary panel shows details for the selected MDS600.
  • Page 92: Luns

    7 in bay 6. See the administrator guide pertaining to your version of HP StoreAll storage for additional details. Options Add: Registers and names the vendor storage in the management console database. Available vendor storage types are LeftHand, MSA, 3PAR, and EqualLogic.
  • Page 93: Monitoring Cluster Events

    Events are written to an events table in the configuration database as they are generated. To maintain the size of the events table, HP recommends that you periodically remove the oldest events. See “Removing events from the events database table” (page 94).
  • Page 94: Removing Events From The Events Database Table

    LEVEL      INFO
    TEXT       Ibrix kernel file system is up on ix24-03.ad.hp.com
    FILESYSTEM
    HOST       ix24-03.ad.hp.com
    USER NAME
    OPERATION
    SEGMENT NUMBER
    PV NUMBER
    RELATED EVENT

    Event:
    =======
    EVENT ID   1980
    TIMESTAMP  Feb 14 15:08:14
    LEVEL      ALERT
    TEXT       category:CHASSIS, name: 9730_ch1, overallStatus:DEGRADED, component:OAmodule, uuid:09USE038187WOAModule2, status:MISSING, Message: The Onboard Administrator module is missing or has failed., Diagnostic message: Reseat the Onboard...
  • Page 95: Health Check Reports

    Health check reports The summary report provides an overall health check result for all tested file serving nodes and StoreAll clients, followed by individual results. If you include the -b option, the standby servers for all tested file serving nodes are included when the overall result is determined. The results will be one of the following: Passed.
  • Page 96 ---------
    0, 0, 4, 3   1.09, 0.96, 0.72   3692   1024
    Memory Information
    ==================
    Mem Total  Mem Free  Buffers(KB)  Cached(KB)  Swap Total(KB)  Swap Free(KB)
    ---------  --------  -----------  ----------  --------------  -------------
    49158496   25784280  1073956      19820680    16776948        16776948
    Version/OS Information
    ======================
    Fs Version  IAD Version  OS Version  Kernel Version...
  • Page 97 r211-s15 IP address matches on Iad and Fusion Manager  PASSED
    r211-s15 network protocol matches on Iad and Fusion Manager  PASSED
    r211-s15 engine connection state on Iad is up  PASSED
    r211-s16 engine uuid matches on Iad and Fusion Manager  PASSED
    r211-s16 IP address matches on Iad and Fusion Manager  PASSED
    r211-s16 network protocol matches on Iad and Fusion Manager  PASSED...
  • Page 98 Remote Hosts
    ============
    Host    Type    Network     Protocol  Connection State
    ------  ------  ----------  --------  ----------------
    r38-s1  Server  10.103.8.1  true      S_SET S_READY S_SENDHB
    r38-s2  Server  10.103.8.2  true      S_NEW

    Check Results
    =============
    Check : r38-s2 can ping remote segment server hosts
    ===================================================
    Check Description  Result  Result Information
    -----------------------------
  • Page 99: Viewing Logs

    Logs are provided for the Fusion Manager, nodes, and StoreAll clients. Contact HP Support for assistance in interpreting log files. You might be asked to tar the logs and email them to HP. Viewing operating statistics for file serving nodes Periodically, the file serving nodes report the following statistics to the Fusion Manager: Summary.
  • Page 100: Maintaining The System

    9 Maintaining the system Shutting down the system To shut down the system completely, first shut down the StoreAll software, and then power off the system hardware. Shutting down the StoreAll software Use the following procedure to shut down the StoreAll software. Unless noted otherwise, run the commands from the node hosting the active Fusion Manager.
  • Page 101: Powering Off The Hardware

    Unmount all file systems on the cluster nodes: ibrix_umount -f <fs_name> To unmount file systems from the GUI, select Filesystems > unmount. Verify that all file systems are unmounted: ibrix_fs -l If a file system fails to unmount on a particular node, continue with this procedure. The file system will be forcibly unmounted during the node shutdown.
  • Page 102: Powering Nodes On Or Off

    Power on the node hosting the active Fusion Manager. Power on the file serving nodes (*root segment = segment 1; power on owner first, if possible). Monitor the nodes on the GUI and wait for them all to report UP in the output from the following command: ibrix_server -l Mount file systems and verify their content.
  • Page 103: Starting And Stopping Processes

    HOSTNAME is the name of the node that you just rebooted. Starting and stopping processes You can start, stop, and restart processes and can display status for the processes that perform internal StoreAll software functions. The following commands also control the operation of PostgreSQL on the machine.
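    As a sketch of the service-script form these start, stop, and status operations typically take, assuming the standard init script names on the nodes (confirm the names on your release):
    # /etc/init.d/ibrix_server status
    # /etc/init.d/ibrix_fusionmanager restart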
  • Page 104: Tuning Nodes And Storeall Clients

    Verify that the iadconf.xml has been created and that the cluster name is correct. Tuning nodes and StoreAll clients Typically, HP Support sets the tuning parameters on the file serving nodes during the cluster installation and changes should be needed only for special situations.
  • Page 105 The General Tunings dialog box specifies the communications protocol (TCP or UDP) and the number of admin and server threads. The IAD Tunings dialog box configures the StoreAll administrative daemon. The Module Tunings dialog box adjusts various advanced parameters that affect server operations.
  • Page 106 On the Servers dialog box, select the servers to which the tunings should be applied. To tune nodes using the CLI, see the ibrix_host_tune command in the HP StoreAll OS CLI Reference Guide.
  • Page 107: Managing Segments

    To list host tuning parameters that have been changed from their defaults: ibrix_lwhost --list See the ibrix_lwhost command description in the HP StoreAll OS CLI Reference Guide for other available options. Windows clients. Click the Tune Host tab on the Windows StoreAll client GUI. Tunable parameters include the NIC to prefer (the default is the cluster interface), the communications protocol (UDP or TCP), and the number of server threads to use.
  • Page 108 The Change Ownership dialog box reports the status of the servers in the cluster and lists the segments owned by each server. In the Segment Properties section of the dialog box, select the segment whose ownership you are transferring, and click Change Owner.
  • Page 109: Evacuating Segments And Removing Storage From The Cluster

    The Summary dialog box shows the segment migration you specified. Click Back to make any changes, or click Finish to complete the operation. To migrate ownership of segments from the CLI, see the ibrix_fs command in the HP StoreAll OS CLI Reference Guide.
  • Page 110 On the Evacuate Advanced dialog box, locate the segment to be evacuated and click Source. Then locate the segments that will receive the data from the segment and click Destination. If the file system is tiered, be sure to select destination segments on the same tier as the source segment.
  • Page 111 Guide. Troubleshooting segment evacuation If segment evacuation fails, HP recommends that you run phase 1 of the ibrix_fsck command in corrective mode on the segment that failed the evacuation. For more information, see "Checking and repairing file systems" in the HP StoreAll OS User Guide.
  • Page 112: Removing A Node From A Cluster

    3015A4021.C34A994C, poid 3015A4021.C34A994C, primary 4083040FF.7793558E poid 4083040FF.7793558E Use the inum2name utility to translate the primary inode ID into the file name. Removing a node from a cluster In the following procedure, the cluster contains four nodes: FSN1, FSN2, FSN3, and FSN4. FSN4 is the node being removed.
  • Page 113: Maintaining Networks

    In general, it is better to assign a user network for protocol (NFS/SMB/HTTP/FTP) traffic because the cluster network cannot host the virtual interfaces (VIFs) required for failover. HP recommends that you use a Gigabit Ethernet port (or faster) for user networks.
  • Page 114 For a highly available cluster, HP recommends that you put protocol traffic on a user network and then set up automated failover for it (see “Configuring High Availability on the cluster”...
  • Page 115: Setting Network Interface Options In The Configuration Database

    Execute this command once for each destination host that the file serving node or StoreAll client should contact using the specified network interface (IFNAME). For example, to prefer network interface eth3 for traffic from file serving node s1.hp.com to file serving node s2.hp.com: ibrix_server -n -h s1.hp.com -A s2.hp.com/eth3...
  • Page 116: Unpreferring Network Interfaces

    -n -g HOSTGROUP -A DESTHOST/IFNAME The destination host (DESTHOST) cannot be a hostgroup. For example, to prefer network interface eth3 for traffic from all StoreAll clients (the clients hostgroup) to file serving node s2.hp.com: ibrix_hostgroup -n -g clients -A s2.hp.com/eth3...
  • Page 117: Changing The Ip Address For The Cluster Interface On A Dedicated Management Console

    The following command adds a route for virtual interface eth2:232 on file serving node s2.hp.com, sending all traffic through gateway gw.hp.com: ibrix_nic -r -n eth2:232 -h s2.hp.com -A -R gw.hp.com Deleting a routing table entry If you delete a routing table entry, it is not replaced with a default entry. A new replacement route must be added manually.
  • Page 118: Deleting A Network Interface

    “Changing the cluster interface” (page 117). To delete a network interface, use the following command: ibrix_nic -d -n IFNAME -h HOSTLIST The following command deletes interface eth3 from file serving nodes s1.hp.com and s2.hp.com: ibrix_nic -d -n eth3 -h s1.hp.com,s2.hp.com Viewing network interface information Executing the ibrix_nic command with no arguments lists all interfaces on all file serving nodes.
  • Page 119 3. In the OA GUI, navigate to the SAS switch by selecting Enclosure Information > Interconnects > HP 6Gb SAS BL Switch. There will be four switches listed (each identified as HP 6Gb SAS BL Switch in the OA user interface), in bays 5-8.
  • Page 120 Click the plus sign (+) next to the bay number for the switch you want to reset and select Management Console. The Virtual SAS Manager user interface displays. Select the Maintain tab.
  • Page 121 Select the HP 6G SAS Blade Switch for the bay number you are resetting. The list of available tasks displays in the main window. Select Reset Hardware. The Reset Hardware window appears. In the Scope of Reset field, you can choose to either reset the local switch or both switches in the pair (for example, the switches in bays 5 and 6 are a pair and the switches in bays 7 and 8 are a pair).
  • Page 122 11. Repeat these steps for the remaining three switches.
  • Page 123: 10 Licensing

    Fax the Password Request Form that came with your License Entitlement Certificate. See the certificate for fax numbers in your area. Call or email the HP Password Center. See the certificate for telephone numbers in your area or email addresses.
  • Page 124: 1 Troubleshooting

    Data Collection is a log collection utility that allows you to collect relevant information for diagnosis by HP Support when system issues occur. The collection can be triggered manually using the GUI or CLI (using the ibrix_collect command), or automatically during a system crash. Data Collection...
  • Page 125 /local/ibrixcollect/archive directory on the new active Fusion Manager server. Select the applicable file on the Data Collection panel and click Download. This will copy the .tgz file to the new active Fusion Manager server.
  • Page 126: Downloading The Data Collection (Archive File)

    <timestamp>_PROCESSED. HP Support may request that you send this information to assist in resolving the system crash. HP recommends that you maintain your crash dumps in the /var/crash directory. Ibrix Collect processes the core dumps present in the /var/crash directory (linked to /local/ platform/crash) only.
  • Page 127: Deleting Logs

    The average size of the archive file depends on the size of the logs present on individual nodes in the cluster. You may later be asked to email this final .tgz file to HP Support. Deleting logs You can delete a specific data collection or all data collection sets. Deletion removes the tar files on each node from the system.
  • Page 128: Obtaining Custom Logging From Ibrix_Collect Add-On Scripts

    Enter the number of previously collected data sets (archive files) to be retained in each node of the cluster. Under Email Settings, you can choose to have a zip file containing specific system and HP StoreAll command outputs about the cluster configuration sent via email. To enable this setting, select Enable sending cluster configuration by email.
  • Page 129: Creating An Add-On Script

    The following example shows several add-on scripts stored in the ibrix_collect_add_on_scripts directory:
    [root@host2 /]# ls -l /usr/local/ibrix/ibrixcollect/ibrix_collect_add_on_scripts/
    total 8
    -rwxr-xr-x 1 root root 93 Dec  7 13:39 60_addOn.sh
    -rwxrwxrwx 1 root root 48 Dec 20 09:22 63_AddOnTest.sh
  • Page 130: Running An Add-On Script

    Write an add-on shell script that contains a custom command/log that needs to be collected in the final StoreAll collection. Only StoreAll and operating system commands are supported in the scripts. These scripts should have appropriate permission to be executed. IMPORTANT: Make sure the scripts that you are creating do not collect information or logs that are already collected as part of the ibrix_collect command.
  • Page 131: Viewing Data Collection Information

    To view data collection history from the CLI, use the following command: ibrix_collect -l To view data collection details such as date (of creation), size, description, state and initiator, use the following command: ibrix_collect -v -n <Name>
  • Page 132: Adding/Deleting Commands Or Logs In The Xml File

    The file system and IAD/FS output fields should show matching version numbers unless you have installed special releases or patches. If the output fields show mismatched version numbers and you do not know of any reason for the mismatch, contact HP Support. A mismatch might affect the operation of your cluster.
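    Assuming the version listing command available in StoreAll releases, comparing the file system and IAD/FS versions across all nodes might look like:
    # ibrix_version -l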
  • Page 133: Failover

    Failover Cannot fail back from failover caused by storage subsystem failure When a storage subsystem fails and automated failover is turned on, the Fusion Manager will initiate its failover protocol. It updates the configuration database to record that segment ownership has transferred from primary servers to their standbys and then attempts to migrate the segments to the standbys.
  • Page 134: Synchronizing Information On File Serving Nodes And The Configuration Database

    To maintain access to a file system, file serving nodes must have current information about the file system. HP recommends that you execute ibrix_health on a regular basis to monitor the health of this information. If the information becomes outdated on a file serving node, execute ibrix_dbck -o to resynchronize the server's information with the configuration database.
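    A periodic check along these lines is a reasonable sketch; the host list is hypothetical, and the -f option for scoping ibrix_dbck to a file system is an assumption to confirm in the CLI reference:
    # ibrix_health -l -h s1.hp.com,s2.hp.com
    # ibrix_dbck -o -f ifs1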
  • Page 135 If a full file system restore is possible, perform a full file system restore as described in ”Backing up and restoring file systems with Express Query data” in the HP StoreAll OS User Guide. If a full file system is not possible or desirable, then you can rebuild the database using the...
  • Page 136 -C <FSNAME> command. Then, schedule the Online Metadata Synchronizer to run to restore the path names affected by the lost rename operations. See "Online Metadata Synchronizer" in the HP StoreAll OS User Guide for more information.
  • Page 137: 12 Recovering A File Serving Node

    Obtaining the latest StoreAll software release StoreAll OS version 6.5.x is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox. IMPORTANT: The bootable device must have at least 4 GB of free space and contain no other content.
  • Page 138: Creating A Bootable Usb Flash Drive On Windows

    Download the HP StoreAll QR image to the Windows computer. Mount the HP StoreAll QR image by using a tool for creating a virtual DVD drive, such as Virtual CloneDrive. If you are using Virtual CloneDrive version 5.4.5.0, you would right-click the HP StoreAll QR ISO image, and click Mount (Virtual CloneDrive) to mount the HP StoreAll QR ISO image, as shown in the following figure.
  • Page 139 Create an image file by using a tool such as ImgBurn. If you are using ImgBurn version 2.5.7.0, follow these steps: Launch ImgBurn, and click the Create image file from disc option, as shown in the following figure. Click Source. The ISO file is mounted through Virtual CloneDrive.
  • Page 140 Create the image file by clicking File→Read. Connect a USB flash drive to the Windows computer. Use a software product to copy the bootable image file to a USB flash drive. The following steps are from Win32 Disk Imager version 0.7. Win32 Disk Imager can be obtained from various freeware sites on the Internet.
  • Page 141: Preparing For The Recovery

    Click option 3 for the USB flash drive, and proceed with the quick restore process. (HP IBRIX X9720 cluster nodes only) If default OA and VC credentials are not being used, you must manually initialize the credential manager immediately after exiting from the QR Restore process.
  • Page 142: Recovering An X9720 Or 9730 File Serving Node

    Open a command prompt and run the following commands to configure custom passwords: /opt/hp/platform/bin/hpsp_credmgmt --init-cred --master-passwd=hpdefault --hw-username=<Custom username> --hw-password=<Custom password> /opt/hp/platform/bin/hpsp_credmgmt --update-cred --cred-selector=chassis:chassis/oa --cred-type=upwpair --cred-username=<Custom username> --cred-password=<Custom password> /opt/hp/platform/bin/hpsp_credmgmt --update-cred --cred-selector=chassis:chassis/vc --cred-type=upwpair --cred-username=<Custom username> --cred-password=<Custom password> /opt/hp/platform/bin/hpsp_credmgmt --update-cred --cred-selector=chassis:global/ilo --cred-type=upwpair --cred-username=<Custom username>...
  • Page 143 Replacing a node requires less time.) IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system.
  • Page 144 NOTE: If a management console is not located, the following screen appears. Select Enter FM IP and go to step 5. The Verify Hostname dialog box displays a hostname generated by the management console. Enter the correct hostname for this server. The Verify Configuration dialog box shows the configuration for this node.
  • Page 145 On the Server Networking Configuration dialog box, configure this server for bond0, the cluster network. Note the following: The hostname can include alphanumeric characters and the hyphen (-) special character. Do not use an underscore (_) in the hostname. The IP address is the address of the server on bond0. The default gateway provides a route between networks.
  • Page 146 This step applies only to 9730 systems. If you are restoring a blade on an IBRIX X9720 system, go to step 8. The 9730 blade being restored needs OA/VC information from the chassis. It can obtain this information directly from blade 1, or you can enter the OA/VC credentials manually.
  • Page 147 Storage configuration Networking on the blade On the Join a Cluster — Step 2 dialog box, enter the requested information. NOTE: On the dialog box, Register IP is the Fusion Manager (management console) IP, not the IP you are registering for this blade. The Network Configuration dialog box lists the interfaces configured on the system.
  • Page 148 NOTE: If you are recovering an X9720 node with StoreAll OS 6.1 or later, you might be unable to change the cluster network for bond1. See “(9720 systems) Manually recovering bond1 as the cluster” (page 151) for more information. The Configuration Summary dialog box lists the configuration you specified. Select Commit to apply the configuration.
  • Page 149: Completing The Restore

    When the configuration is complete, a message reporting the location of the log files appears: Logs are available at /usr/local/ibrix/autocfg/logs. The StoreAll 9730 configuration logs are available at /var/log/hp/platform/ install/X9730_install.log. Completing the restore Procedure 2 Ensure that you have root access to the node.
  • Page 150 For example: ibrix_nic -m -h titan16 -A titan15/eth2 Configure Insight Remote Support on the node. See “Configuring HP Insight Remote Support on StoreAll systems” (page 19). Run ibrix_health -l from the node hosting the active Fusion Manager to verify that no errors are being reported.
  • Page 151: Troubleshooting

    Run the following command to verify that the original share information is on the restored node: ibrix_cifs -i -h SERVERNAME Restore HTTP services. Complete the following steps: Take the appropriate actions: If Active Directory authentication is used, join the restored node to the AD domain manually.
  • Page 152 Create bond0 and bond1: Create the ifcfg-bond0 file in the /etc/sysconfig/network-scripts directory with the following parameters: BOOTPROTO=none BROADCAST=10.30.255.255 DEVICE=bond0 IPADDR=10.30.3.16 NETMASK=255.255.0.0 ONBOOT=yes SLAVE=no USERCTL=no BONDING_OPTS="miimon=100 mode=1 updelay=100" MTU=1500 Create the ifcfg-bond1 file in the /etc/sysconfig/network-scripts directory with the following parameters: NOTE: When creating the ifcfg-bond1 file, enter the cluster network IP address and mask for bond1.
  • Page 153 Determine if the MAC address is present in the ifcfg files. If not, obtain the MAC address and append it to each eth port:
    Execute the ip ad command to obtain the MAC address of all eth ports.
    Add the MAC address to the ifcfg-ethx file of each slave eth port. The following is an example of the MAC address:
    HWADDR=68:B5:99:B3:11:88
    Ensure that the ONBOOT parameter is set to "no"...
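    For reference, a completed slave-port file typically ends up looking like the following sketch (the device name and MAC address are examples; the MASTER and SLAVE lines are standard Linux bonding settings shown here as assumptions, not quoted from this guide):
      # /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE=eth0
      HWADDR=68:B5:99:B3:11:88
      MASTER=bond0
      SLAVE=yes
      ONBOOT=no
      BOOTPROTO=none
      USERCTL=no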
  • Page 154 Register the server to the Fusion Manager configuration:
    (Not a replacement server) Run the register_server command:
    [root@X9720 ~]# /usr/local/ibrix/bin/register_server -p 172.16.3.65 -c bond1 -n r150b16 -u bond0
    NOTE: The command will fail if the server is a replacement server because the server is already registered, as shown in the following example:
    iadconf.xml does not exist...creating new config.
  • Page 155 Run the following commands from the active Fusion Manager (r150b15 in this example) to view the existing servers and then unregister the passive Fusion Manager (r150b16 in this example):
    To view the registered management consoles:
    [root@r150b15 ibrix]# ibrix_fm -l
    The command provides the following output:
    NAME    IP ADDRESS
    ------- ----------
    r150b15 172.16.3.15
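    The unregister command itself is truncated in this excerpt. Based on the ibrix_fm syntax shown above, unregistering the passive console typically takes its name as the argument; a sketch (the -u flag is an assumption here, so confirm the exact option in the StoreAll CLI reference):
      [root@r150b15 ibrix]# ibrix_fm -u r150b16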
  • Page 156: iLO Remote Console Does Not Respond To Keystrokes

    -h hostnameX
    Iad error on host hostnameX failed command (<HIDDEN_COMMAND>) status (1) output: (Joining to AD Domain: IBRQA1.HP.COM With Computer DNS Name: hostnameX.ibrqa1.hp.com)
    Verify that the content of the /etc/resolv.conf file is not empty. If the content is empty, copy the contents of the /etc/resolv.conf file on another server to the empty resolv.conf...
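    A quick way to repopulate an empty resolv.conf from a healthy node; a minimal sketch (r150b15 is a placeholder for any working server in the cluster):
      # Copy the resolver configuration from a known-good node, then verify it
      scp r150b15:/etc/resolv.conf /etc/resolv.conf
      cat /etc/resolv.conf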
  • Page 157: 13 Support And Other Resources

    HP ProLiant DL380 G6 Server Maintenance and Service Guide To find these documents, go to the Manuals page (http://www.hp.com/support/manuals) and select servers > ProLiant ml/dl and tc series servers > HP ProLiant DL380 G7 Server series or HP ProLiant DL380 G6 Server series.
  • Page 158: Obtaining Spare Parts

    Online help for HP Storage Management Utility (SMU) and Command Line Interface (CLI)
    To find these documents, go to the Manuals page (http://www.hp.com/support/manuals) and select storage > Disk Storage Systems > MSA Disk Arrays > HP 2000sa G2 Modular Smart Array or HP P2000 G3 MSA Array Systems.
  • Page 159: 14 Documentation Feedback

    14 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL...
  • Page 160: A StoreAll 9730 Component And Cabling Diagrams

    A StoreAll 9730 component and cabling diagrams Back view of the main rack Two StoreAll 9730 CXs are located below the SAS switches; the remaining StoreAll 9730 CXs are located above the SAS switches. The StoreAll 9730 CXs are numbered starting from the bottom (for example, the StoreAll 9730 CX 1 is located at the bottom of the rack;...
  • Page 161: Back View Of The Expansion Rack

    Back view of the expansion rack
    1. 9730 CX 8
    2. 9730 CX 7
    StoreAll 9730 CX I/O modules and SAS port connectors
    1. Secondary I/O module (Drawer 2)
    2. SAS port 2 connector
    3. SAS port 1 connector
    4. Primary I/O module (Drawer 2)
    5.
  • Page 162: StoreAll 9730 CX 1 Connections To The SAS Switches

    StoreAll 9730 CX 1 connections to the SAS switches
    The connections to the SAS switches are:
    SAS port 1 connector on the primary I/O module (Drawer 1) to port 1 on the Bay 5 SAS switch
    SAS port 1 connector on the secondary I/O module (Drawer 1) to port 1 on the Bay 6 SAS switch
    SAS port 1 connector on the primary I/O module (Drawer 2) to port 1 on the Bay 7 SAS switch...
  • Page 163: StoreAll 9730 CX 2 Connections To The SAS Switches

    StoreAll 9730 CX 2 connections to the SAS switches
    On Drawer 1:
    SAS port 1 connector on the primary I/O module (Drawer 1) to port 2 on the Bay 5 SAS switch
    SAS port 1 connector on the secondary I/O module (Drawer 1) to port 2 on the Bay 6 SAS switch
    On Drawer 2:
    SAS port 1 connector on the primary I/O module (Drawer 2) to port 2 on the Bay 7 SAS...
  • Page 164: StoreAll 9730 CX 3 Connections To The SAS Switches

    StoreAll 9730 CX 3 connections to the SAS switches
    On Drawer 1:
    SAS port 1 connector on the primary I/O module (Drawer 1) to port 3 on the Bay 5 SAS switch
    SAS port 1 connector on the secondary I/O module (Drawer 1) to port 3 on the Bay 6 SAS switch
    On Drawer 2:
    SAS port 1 connector on the primary I/O module (Drawer 2) to port 3 on the Bay 7 SAS...
  • Page 165: StoreAll 9730 CX 7 Connections To The SAS Switches In The Expansion Rack

    StoreAll 9730 CX 7 connections to the SAS switches in the expansion rack
    On Drawer 1:
    SAS port 1 connector on the primary I/O module (Drawer 1) to port 7 on the Bay 5 SAS switch
    SAS port 1 connector on the secondary I/O module (Drawer 1) to port 7 on the Bay 6 SAS switch
    On Drawer 2:
    SAS port 1 connector on the primary I/O module (Drawer 2) to port 7 on the Bay 7 SAS...
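    Taken together, the preceding listings follow one pattern, summarized below (inferred from the excerpts above; verify against the full cabling tables). For StoreAll 9730 CX N, each connection lands on switch port N:
      CX N, Drawer 1, primary I/O,   SAS port 1 -> Bay 5 switch, port N
      CX N, Drawer 1, secondary I/O, SAS port 1 -> Bay 6 switch, port N
      CX N, Drawer 2, primary I/O,   SAS port 1 -> Bay 7 switch, port N
      CX N, Drawer 2, secondary I/O, SAS port 1 -> Bay 8 switch, port N (assumed by symmetry; Bay 8 does not appear in the truncated excerpts)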
  • Page 166: B The IBRIX X9720 Component And Cabling Diagrams

    B The IBRIX X9720 component and cabling diagrams
    Base and expansion cabinets
    An IBRIX X9720 Storage base cabinet has from 3 to 16 performance blocks (that is, server blades) and from 1 to 4 capacity blocks. An expansion cabinet can support up to four more capacity blocks, bringing the system total to eight capacity blocks.
  • Page 167: Back View Of A Base Cabinet With One Capacity Block

    Back view of a base cabinet with one capacity block
    1. Management switch 2
    2. Management switch 1
    3. X9700c 1
    4. TFT monitor and keyboard
    5. c-Class Blade enclosure
    6. X9700cx 1
  • Page 168: Front View Of A Full Base Cabinet

    1 X9700c 4
    2 X9700c 3
    3 X9700c 2
    4 X9700c 1
    5 X9700cx 4
    6 X9700cx 3
    7 TFT monitor and keyboard
    8 c-Class Blade Enclosure
    9 X9700cx 2
    10 X9700cx 1
  • Page 169: Back View Of A Full Base Cabinet

    Back view of a full base cabinet
    1 Management switch 2
    2 Management switch 1
    3 X9700c 4
    4 X9700c 3
    5 X9700c 2
    6 X9700c 1
    7 X9700cx 4
    8 X9700cx 3
    9 TFT monitor and keyboard
    10 c-Class Blade Enclosure
    11 X9700cx 2
    12 X9700cx 1
  • Page 170: Front View Of An Expansion Cabinet

    1. X9700c 8
    2. X9700c 7
    3. X9700c 6
    4. X9700c 5
    5. X9700cx 8
    6. X9700cx 7
    7. X9700cx 6
    8. X9700cx 5
  • Page 171: Back View Of An Expansion Cabinet With Four Capacity Blocks

    Server blades must be contiguous; empty blade bays are not allowed between server blades. Only IBRIX X9720 Storage server blades can be inserted in a blade enclosure. The server blades are configured as file serving nodes. One node hosts the active Fusion Manager and the other nodes host passive Fusion Managers.
  • Page 172: Rear View Of A C-Class Blade Enclosure

    Flex-10 networks
    The server blades in the IBRIX X9720 Storage have two built-in Flex-10 10Gb NICs. The Flex-10 technology comprises the Flex-10 NICs and the Flex-10 Virtual Connect modules in interconnect bays 1 and 2 of the performance chassis.
  • Page 173: Capacity Blocks

    Ethernet module cabling—Base cabinet” (page 176). If you connect several ports to the same switch in your site network, all ports must use the same media type. In addition, HP recommends you use 10Gb links. The X9720 Storage uses mode 1 (active/backup) for network bonds. No other bonding mode is supported.
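    To confirm that a running node is actually using mode 1, you can inspect the kernel's bonding status file (a standard Linux interface, not specific to this product):
      # Expect "Bonding Mode: fault-tolerance (active-backup)"
      cat /proc/net/bonding/bond0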
  • Page 174: X9700c (Array Controller With 12 Disk Drives)

    This component is also known as the HP 600 Modular Disk System. For an explanation of the LEDs and buttons on this component, see the HP 600 Modular Disk System User Guide at http://www.hp.com/support/manuals. Under Storage click Disk Storage Systems, then under Disk Enclosures click HP 600 Modular Disk System.
  • Page 175: Front View Of An X9700cx

    4. Out SAS port
    8. Fan
    Cabling diagrams
    Capacity block cabling—Base and expansion cabinets
    A capacity block comprises the X9700c and X9700cx.
    CAUTION: Correct cabling of the capacity block is critical for proper IBRIX X9720 Storage operation.
  • Page 176: Virtual Connect Flex-10 Ethernet Module Cabling—Base Cabinet

    Bay 2 (Virtual Connect Flex-10 10Gb Ethernet Module for connection to site network)
    Bay 3 (SAS switch)
    Bay 4 (SAS switch)
    Bay 8 (reserved for optional components)
    11. Onboard Administrator 1
    Onboard Administrator 2
  • Page 177: SAS Switch Cabling—Base Cabinet

    SAS switch cabling—Base cabinet
    NOTE: Callouts 1 through 3 indicate additional X9700c components.
    X9700c 4
    X9700c 3
    X9700c 2
    X9700c 1
    SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Ports 2 through 4 are reserved for additional capacity blocks.
  • Page 178 SAS switch ports 1 through 4 (in interconnect bay 4 of the c-Class Blade Enclosure). X9700c 5 SAS switch ports 5 through 8 (in interconnect bay 4 of the c-Class Blade Enclosure). Used by base cabinet. 178 The IBRIX X9720 component and cabling diagrams...
  • Page 179: C Warnings And Precautions

    Use conductive field service tools. Use a portable field service kit with a folding static-dissipating work mat. If you do not have any of the suggested equipment for proper grounding, have an HP-authorized reseller install the part. NOTE: For more information on static electricity or assistance with product installation, contact your HP-authorized reseller.
  • Page 180: Equipment Symbols

    Equipment symbols If the following symbols are located on equipment, hazardous conditions could exist. WARNING! Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. Enclosed area contains no operator serviceable parts. To reduce the risk of injury from electrical shock hazards, do not open this enclosure.
  • Page 181: Device Warnings And Precautions

    WARNING! Verify that the AC power supply branch circuit that provides power to the rack is not overloaded. Overloading AC power to the rack power supply circuit increases the risk of personal injury, fire, or damage to the equipment. The total rack load should not exceed 80 percent of the branch circuit rating.
  • Page 182 CAUTION: Protect the installed solution from power fluctuations and temporary interruptions with a regulating Uninterruptible Power Supply (UPS). This device protects the hardware from damage caused by power surges and voltage spikes, and keeps the system in operation during a power failure.
  • Page 183: D Regulatory Information

    Hewlett-Packard Company, 3000 Hanover Street, Palo Alto, California 94304, U.S.
    Local Representative information (Russian):
    HP Russia: ZAO "Hewlett-Packard A.O.", 125171, Russia, Moscow, Leningradskoye shosse 16A, bldg. 3, tel/fax: +7 (495) 797 35 00, +7 (495) 287 89 05
    HP Belarus: IOOO "Hewlett-Packard Bel", 220030, Belarus, Minsk, ul.
  • Page 184 HP Enterprise Servers http://www.hp.com/support/EnterpriseServers-Warranties HP Storage Products http://www.hp.com/support/Storage-Warranties HP Networking Products http://www.hp.com/support/Networking-Warranties 184 Regulatory information...
  • Page 185: Glossary

    DNS Domain Name System.
    FTP File Transfer Protocol.
    GSI Global service indicator.
    HA High availability.
    HBA Host bus adapter.
    HCA Host channel adapter.
    HDD Hard disk drive.
    IAD HP 9000 Software Administrative Daemon.
    iLO Integrated Lights-Out.
    IML Initial microcode load.
    IOPS I/Os per second.
    IPMI Intelligent Platform Management Interface.
    JBOD Just a bunch of disks.
  • Page 186 TCP/IP Transmission Control Protocol/Internet Protocol. User Datagram Protocol. Unit identification. VACM SNMP View Access Control Model. HP Virtual Connect. Virtual interface. WINS Windows Internet Name Service. World Wide Name. A unique identifier assigned to a Fibre Channel device. WWNN World wide node name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
  • Page 187: Index

    Index topics (pages 187 through 189): network configuration and statistics; Fusion Manager; file system segment migration; firewall configuration; Flex-10 networks; host groups and preferred user network interfaces; NTP servers; passwords; HP Insight Remote Support (Phone Home); open ports; power sources; Ibrix Collect and add-on scripts; Quick Restore DVD; rack stability; SNMP event notification and MIB; spare parts; storage removal from the cluster; and HP support websites.
