Summary of Contents for HP StoreAll 8800
This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component replacement, and troubleshooting for the HP 8800/9320 Storage. It does not document StoreAll file system features or standard Linux administrative tools and commands. For information about configuring and using StoreAll software file system features,...
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
System components ........ 8
HP StoreAll software features ........ 8
High availability and redundancy ........ 9
2 Getting started ........ 10
Setting up the system ........ 10
Installation steps done by an HP service specialist ........ 10
Additional configuration steps ........ 10
Management interfaces ........ 11
Using the StoreAll Management Console ........ 12
Customizing the GUI ........ 15
Adding user accounts for Management Console access ........ 15
Using the CLI ........ 16
Configuring High Availability on the cluster ........ 40
What happens during a failover ........ 41
Configuring automated failover with the HA Wizard ........ 41
Changing the HA configuration ........ 43
Managing power sources ........ 44
Adding a NIC ........ 44
Configuring HA on a NIC ........ 45
Server NICs ........ 46
Servers ........ 46
Configuring automated failover manually ........ 47
Changing the HA configuration manually ........ 48
Failing a server over manually ........ 49
Failing back a server ........ 49
Other host group operations ........ 71
Add preferred NIC ........ 71
Modify host group properties ........ 71
Mount a file system to a host group ........ 71
Host group mountpoints tab ........ 72
Host group preferred NICs ........ 72
8 Monitoring cluster operations ........ 73
Monitoring hardware ........ 73
Monitoring servers ........ 73
Monitoring hardware components ........ 77
Obtaining server details ........ 77
Monitoring storage and storage components ........ 81
Managing LUNs in a storage cluster ........ 82
Vendor Storage ........ 82
Viewing network interface information ........ 110
Adding Storage to an HP StoreAll 8800 ........ 110
Recovering from a changed storage cluster UUID ........ 114
10 Licensing ........ 116
Viewing license terms ........ 116
Retrieving a license key ........ 116
Using AutoPass to retrieve and install permanent license keys ........ 116
11 Troubleshooting ........ 117
Collecting information for HP Support with IbrixCollect ........ 117
Viewing the status of data collection ........ 117
Front view of 9300c array controller or 9300cx 3.5" 12-drive enclosure ........ 148
Rear view of 9300c array controller ........ 149
Rear view of 9300cx 3.5" 12-drive enclosure ........ 149
Front view of file serving node ........ 150
Rear view of file serving node ........ 150
Cabling diagrams ........ 153
Cluster network cabling diagram ........ 153
SATA option cabling ........ 154
SAS option cabling ........ 155
Drive enclosure cabling ........ 156
For 9320 system components, see “System component and cabling diagrams for 9320 systems” (page 148). For a complete list of system components, see the HP StoreAll Storage System QuickSpecs, which are available at: http://www.hp.com/go/StoreAll HP StoreAll software features HP StoreAll software is a scale-out, network-attached storage solution including a parallel file system for clusters, an integrated volume manager, high-availability features such as automatic failover of multiple components, and a centralized management interface.
Dual redundant paths to all storage components
Gigabytes-per-second throughput
High availability and redundancy
The segmented architecture is the basis for fault resilience: loss of access to one or more segments does not render the entire file system inaccessible. Individual segments can be taken offline temporarily for maintenance operations and then returned to the file system.
Do not modify any parameters of the operating system or kernel, or update any part of the storage unless instructed to do so by HP; otherwise, the system could fail to operate properly. File serving nodes are tuned for file serving operations. With the exception of supported backup programs, do not run other applications directly on the nodes.
Data tiering. Use this feature to move files to specific tiers based on file attributes. For more information about these file system features, see the HP StoreAll OS User Guide. Localization support Red Hat Enterprise Linux 5 uses the UTF-8 (8-bit Unicode Transformation Format) encoding for supported locales.
Using the StoreAll Management Console The StoreAll Management Console is a browser-based interface to the Fusion Manager. See the release notes for the supported browsers and other software required to view charts on the dashboard. You can open multiple Management Console windows as necessary. If you are using HTTP to access the Management Console, open a web browser and navigate to the following location, specifying port 80: http://<management_console_IP>:80/fusion...
System Status The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events: Alerts. Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed.
Statistics Historical performance graphs for the following items: Network I/O (MB/s) Disk I/O (MB/s) CPU usage (%) Memory usage (%) On each graph, the X-axis represents time and the Y-axis represents performance. Use the Statistics menu to select the servers to monitor (up to two), to change the maximum value for the Y-axis, and to show or hide resource usage distribution for CPU and memory.
Customizing the display You can customize the tables in the GUI to change the sort order of table columns, or to specify which columns in the table to display. Mouse over any column label. If the label field changes color and a pointer displays on the field's right edge, the field can be customized.
StoreAll clients can access the Fusion Manager as follows: Linux clients. Use Linux client commands for tasks such as mounting or unmounting file systems and displaying statistics. See the HP StoreAll OS CLI Reference Guide for details about these commands.
You will be prompted to enter the new password. Configuring ports for a firewall IMPORTANT: To avoid unintended consequences, HP recommends that you configure the firewall during scheduled maintenance times. When configuring a firewall, you should be aware of the following: SELinux should be disabled.
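As an illustration only, the Management Console HTTP port (port 80, per the URL format shown in this guide) could be allowed through a Linux firewall with a rule like the following. This is a hedged sketch: the complete list of ports StoreAll requires is not reproduced in this excerpt and must come from HP documentation, so the command is echoed as a dry run rather than executed.

```shell
# Dry-run sketch: allow the Management Console HTTP port (80) through iptables.
# Only port 80 is taken from this guide; all other required StoreAll ports are
# deliberately omitted -- consult the StoreAll firewall port list before use.
echo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```

Remove the echo to apply the rule for real (root privileges required).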
Configuring HP Insight Remote Support on StoreAll systems HP Insight Remote Support (IRS) provides comprehensive remote monitoring, notifications/advisories, dispatch, and proactive service support for HP StoreAll systems. IMPORTANT: HP IRS is mandatory for sending critical events to HP Support. Getting started...
Windows system, which is referred to as the Central Management Server (CMS). You must install HP Insight Remote Support (HP IRS) on the CMS. If you want to manage your StoreAll devices, you can install HP Systems Insight Manager (HP SIM) on the CMS. However, be aware that HP SIM is not required for sending events to HP Support.
You may configure the attached storage separately for HP Support. See the storage documentation for more information. Configuring the StoreAll cluster for Insight Remote Support The following list is an overview of the steps to perform to configure the StoreAll cluster for HP Insight Remote Support:...
Click Enable to configure the settings on the Phone Home Settings dialog box. When entering information in this dialog box, consider the following: You must enter the IP address of the CMS on which HP IRS is installed. All other fields are optional.
<BASE>\mibs>mcompile ibrixMib.mib <BASE>\mibs>mxmib -a ibrixMib.cfg For more information about the MIB, see the "Compiling and customizing MIBs" chapter in the HP Systems Insight Manager User Guide, which is available at: http://www.hp.com/go/insightmanagement/sim/ Click Support & Documents and then click Manuals. Navigate to the user guide.
The Customer Entered Serial Number and Customer Entered Product Number fields are required entries. This information is used by HP Support for warranty checks. These numbers are located on the information tag attached to the front panel of the hardware. Or, you can reuse the values that display in the Serial Number and Product Number fields (which are automatically discovered by the StoreAll OS software).
To use the CLI to configure entitlements, see the ibrix_phonehome command in the HP StoreAll OS CLI Reference Guide for more information. Configuring server entitlements Entitlements must be entered for the applicable devices in your configuration (servers, storage, chassis). This information includes the hardware-related information (product name, serial number, and product number) and the IP address or host name of the device.
OS CLI Reference Guide for more information. Discovering devices With StoreAll OS 6.5 or later and HP IRS 7.0.8 or later, you can discover devices using HP IRS. See “Discovering devices using HP IRS” (page 25). If you want to manage devices using HP SIM, you must also discover devices using HP SIM.
StoreAll OS software (these fields are called Override Serial Number and Override Product Number in HP IRS). If the discovered device displays a green check mark in the Warranty & Contract column, then the device is enabled for HP Support.
IMPORTANT: If you are running StoreAll OS 6.5 or later and IRS 7.0.8, device discovery through HP SIM is only required if you want to manage devices through HP SIM. Otherwise, you can skip this procedure. HP Systems Insight Manager (SIM) uses the SNMP protocol to discover and identify StoreAll systems automatically.
HP StoreAll 9300 Gateway Storage Node HP StoreAll 8800 Storage Node HP StoreAll 8200 Gateway Storage Node HP StoreAll 9720 Storage Node (only for ProLiant G7–based 9720) Table 4 Device names and branding for StoreAll OS 6.3 or earlier Device...
When running a StoreAll OS version earlier than 6.5, all devices discovered through HP SIM 7.3 will have a System Subtype of StoreAll. The following figures show examples of discovered devices in HP SIM 7.3 when running StoreAll OS 6.5 or later.
If Phone Home was configured prior to upgrading to StoreAll OS 6.5, there are four scenarios to consider: ◦ If Phone Home is not reconfigured after the upgrade, HP SIM 7.3 will display the old StoreAll branding names (see Table 4 (page 28)).
Only first IP address of MSA disk array is discovered When enabling Phone Home for MSA disk arrays, Phone Home only discovers the first registered IP address. You must manually discover the second IP address of the MSA on HP SIM, using the HP SIM discovery option.
Fusion Manager IP is discovered as “Unknown” in HP SIM Verify that the read community string entered in HP SIM matches the Phone Home read community string. Also, run snmpwalk on the VIF IP, and verify the information: # snmpwalk -v 1 -c <read community string>...
Install the credentials for a single HP StoreAll capacity block by running the following four commands, once for each HP StoreAll capacity block that the hpsp_couplet_info command lists in Step 1. Replace <MSA_SAS_BASE_ADDRESS> with one of the UUIDs as provided by the output of the hpsp_couplet_info command.
The Customer Entered Serial Number and Customer Entered Product Number are displayed when you run the ibrix_phonehome -l command. Details for the Standby OA device must be entered manually In HP SIM, you must manually update the CMS IP address and Custom Delivery ID details for the Standby OA device.
Fusion Manager, a virtual interface is created for the cluster network interface. Although the cluster network interface can carry traffic between file serving nodes and clients, HP recommends that you configure one or more user network interfaces for this purpose.
To assign the IFNAME a default route for the parent cluster bond and the user VIFs assigned to FSNs for use with SMB/NFS, enter the following ibrix_nic command at the command prompt: # ibrix_nic -r -n IFNAME -h HOSTNAME -A -R <ROUTE_IP> Configure backup monitoring, as described in “Configuring backup servers”...
For example: # ibrix_nic -m -h node1 -A node2/bond0:1 # ibrix_nic -m -h node2 -A node1/bond0:1 # ibrix_nic -m -h node3 -A node4/bond0:1 # ibrix_nic -m -h node4 -A node3/bond0:1 Configuring automated failover To enable automated failover for your file serving nodes, execute the following command: ibrix_server -m [-h SERVERNAME] Example configuration This example uses two nodes, ib50-81 and ib50-82.
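The symmetric monitoring pairs shown in the example above can be generated with a small shell loop. This is a sketch that only prints the commands: the node names and the bond0:1 VIF are taken from the example, and the echo would be removed to run the commands on a real cluster.

```shell
# Print the ibrix_nic backup-monitoring commands for each standby pair (dry run).
pairs="node1:node2 node3:node4"   # pairings from the example above
vif="bond0:1"
for p in $pairs; do
  a=${p%%:*}; b=${p##*:}
  echo ibrix_nic -m -h "$a" -A "$b/$vif"
  echo ibrix_nic -m -h "$b" -A "$a/$vif"
done
```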
# ibrix_nic -b -H ib142-131/bond0.51,ib142-129/bond0.51:2 Create the user FM VIF: ibrix_fm -c 192.168.51.125 -d bond0.51:1 -n 255.255.255.0 -v user For more information about VLAN tagging, see the HP StoreAll Storage Network Best Practices Guide. Configuring virtual interfaces for client access...
4 Configuring failover This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs. Agile management consoles The agile Fusion Manager maintains the cluster configuration and provides graphical and command-line user interfaces for managing and monitoring the cluster. The agile Fusion Manager is installed on all file serving nodes when the cluster is installed.
The failover will take approximately one minute. To see which node is now the active Fusion Manager, enter the following command: ibrix_fm -i The failed-over Fusion Manager remains in nofmfailover mode until it is moved to passive mode using the following command: ibrix_fm -m passive NOTE: A Fusion Manager cannot be moved from nofmfailover mode to active mode.
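Since the failover takes about a minute, the check above is often repeated until the new active Fusion Manager appears. A dry-run sketch of such a poll loop; the commands are echoed rather than executed, and the attempt count and interval are assumptions for illustration:

```shell
# Dry-run sketch: poll for the active Fusion Manager after a failover.
for attempt in 1 2 3; do
  echo "attempt $attempt: ibrix_fm -i"
  # sleep 20   # failover takes about a minute; poll a few times in practice
done
```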
What happens during a failover The following actions occur when a server is failed over to its backup: The Fusion Manager verifies that the backup server is powered on and accessible. The Fusion Manager migrates ownership of the server’s segments to the backup and notifies all servers and StoreAll clients about the migration.
Click Next to continue. The NIC HA Setup window appears. Use the NIC HA Setup window to configure NICs that will be used for data services such as SMB or NFS. You can also designate NIC HA pairs on the server and its backup and enable monitoring of these NICs.
Next, enable NIC monitoring on the VIF. Select the new user NIC and click NIC HA. The NIC HA Config dialog box appears. See “Configuring HA on a NIC” (page 45) for more information. After completing the NIC HA Config dialog box, the NIC HA Setup window appears again.
Managing power sources To view the power source for a server, select the server on the Servers panel, and then select Power from the lower Navigator. The Power Source panel shows the power source configured on the server when HA was configured. You can add or remove power sources on the server, and can power the server on or off, or reset the server.
Configuring HA on a NIC On the NIC HA Config dialog box: Select Enable NIC Monitoring. Select the NIC to be the standby NIC to the backup server (the server listed in the Standby Server box). The standby NIC you select must be valid and available. If you need to create a standby NIC, select New Standby NIC in this box, which opens the Add NIC dialog box.
Server NICs This panel displays information about the NICs on the selected server and allows you to add, modify, migrate, or remove an interface. The options are: Add: Add a new interface to the selected server. When you click Add, the Add NIC dialog box opens, and you can specify the name of the new interface.
Field Description
Backup The name of the standby server, if assigned.
HA Whether high availability features are on or off.
State A File Serving Node can be in the following states:
Registered: Configured but not operational.
Up: Operational.
Up-Alert: Server has encountered a condition that has been logged. Check the events log on the Events tab.
ibrix_nic -m -h node2 -A node1/bond0:1 ibrix_nic -m -h node3 -A node4/bond0:1 ibrix_nic -m -h node4 -A node3/bond0:1 The next example sets up server s2.hp.com to monitor server s1.hp.com over user network interface eth1: ibrix_nic -m -h s2.hp.com -A s1.hp.com/eth1 4.
The STATE field indicates the status of the failover. If the field persistently shows Down-InFailover or Up-InFailover, the failover did not complete; contact HP Support for assistance. For information about the values that can appear in the STATE field, see “What happens during a...
A failback might not succeed if the time period between the failover and the failback is too short, and the primary server has not fully recovered. HP recommends ensuring that both servers are up and running and then waiting 60 seconds before starting the failback. Use the ibrix_server -l command to verify that the primary server is up and running.
HBA failure. Use the following command: ibrix_hba -m -h HOSTNAME -p PORT For example, to turn on HBA monitoring for port 20.00.12.34.56.78.9a.bc on node s1.hp.com: ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.bc To turn off HBA monitoring for an HBA port, include the -U option:...
Field Description
Port WWN This HBA’s WWPN.
Port State Operational state of the port.
Backup Port WWN WWPN of the standby port for this port (standby-paired HBAs only).
Monitoring Whether HBA monitoring is enabled for this port.
Modify HBA properties page
This dialog allows you to enable or disable the monitoring feature for HBA High Availability.
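Enabling monitoring for several HBA ports can be scripted with the ibrix_hba command shown earlier. A dry-run sketch that prints the commands; the host name follows the earlier example, and the second WWPN is a made-up value for illustration:

```shell
# Dry-run sketch: enable HBA monitoring for a list of ports on one node.
host="s1.hp.com"
for port in 20.00.12.34.56.78.9a.bc 20.00.12.34.56.78.9a.bd; do
  echo ibrix_hba -m -h "$host" -p "$port"   # add -U to turn monitoring off instead
done
```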
ibrix_haconfig -l [-h HOSTLIST] [-f] [-b] For example, to view a summary report for file serving nodes xs01.hp.com and xs02.hp.com: ibrix_haconfig -l -h xs01.hp.com,xs02.hp.com Host HA Configuration Power Sources Backup Servers Auto Failover Nics Monitored Standby Nics HBAs Monitored xs01.hp.com...
After the core dump is created, the failed node reboots and its state changes to Up, FailedOver. Prerequisites for setting up the crash capture The following parameters must be configured in the ROM-based setup utility (RBSU) before a crash can be captured automatically on a file serving node in a failed condition. Start RBSU.
Tune Fusion Manager to set the DUMPING status timeout by entering the following command: ibrix_fm_tune -S -o dumpingStatusTimeout=240 This command is required to delay the failover until the crash kernel is loaded; otherwise, Fusion Manager will bring down the failed node. Capturing a core dump from a failed node...
5 Configuring cluster event notification Cluster events There are three types of cluster events: Table 5 Event types Icon Type Description Alerts Disruptive events that can result in loss of access to file system data (for example, a segment is unavailable or a server is unreachable).
When viewing events on the Events window, the following information is displayed: Level: Indicates the event type by icon (see Table 5 (page 56)). This column is sortable. Time: Indicates the time the event originated on the management server. Event: Displays the details of the event, including suggested actions. You can be notified of cluster events by email or SNMP traps.
Enter the new email address in the Update events for addresses box. Select the applicable events. Click OK when finished. To manage events using the CLI, use the ibrix_event command. See the HP StoreAll OS CLI Reference Guide for more information. Dissociating events and email addresses...
decryption. Both authentication and privacy, and their passwords, are optional and will use default settings where security is less of a concern. With users validated, the VACM determines which managed objects these users are allowed to access. The VACM includes an access scheme to control user access to managed objects; context matching to define which objects can be accessed;...
Click OK when finished. Configuring trapsink settings A trapsink is the host destination where agents send traps, which are asynchronous notifications sent by the agent to the management station. A trapsink is specified either by name or IP address. StoreAll software supports multiple trapsinks; you can define any number of trapsinks of any SNMP version, but you can define only one trapsink per host, regardless of the version.
The subtree is added in the named view. For example, to add the StoreAll software private MIB to the view named hp, enter: ibrix_snmpview -a -v hp -o .1.3.6.1.4.1.18997 -m .1.1.1.1.1.1.1 Configuring groups and users A group defines the access control policy on managed objects for one or more users. All users must belong to a group.
Viewing SNMP notifications View the current configuration for sending out event notifications via SNMP traps. This includes information about trapsinks (servers that receive SNMP traps), as well as the association of events with these trapsinks. A single event can generate notifications to multiple trapsinks. Also, different sets of events can generate notifications to different trapsinks.
See the MSA array documentation for additional information. For HP P2000 G3 MSA systems, see the HP P2000 G3 MSA System SMU Reference Guide. For P2000 G2 MSA systems, see the HP 2000 G2 Modular Smart Array Reference Guide. To locate these documents, go to http://www.hp.com/support/manuals.
6 Configuring system backups Backing up the Fusion Manager configuration The Fusion Manager configuration is automatically backed up whenever the cluster configuration changes. The backup occurs on the node hosting the active Fusion Manager. The backup file is stored at <ibrixhome>/tmp/fmbackup.zip on that node. The active Fusion Manager notifies the passive Fusion Manager when a new backup file is available.
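Because the backup file lives on the node hosting the active Fusion Manager, it can be prudent to keep a copy off the cluster as well. A dry-run sketch; the destination host and path are assumptions, and <ibrixhome> is the placeholder used in this guide:

```shell
# Dry-run sketch: archive the Fusion Manager backup to a host outside the cluster.
# "backup-host:/safe/location/" is a hypothetical destination.
echo scp "<ibrixhome>/tmp/fmbackup.zip" "backup-host:/safe/location/"
```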
Log Level The level of log verbosity. This value should be set to 0, the default. The level should be increased only under the direction of HP Support. TCP Window Size (Bytes) The window size allowed for data transfers. This value should be changed only for performance reasons.
Click Synchronize on the NDMP Configuration Summary window to copy the configuration to all nodes. To configure NDMP using the CLI, see the ibrix_ndmpconfig command in the HP StoreAll OS CLI Reference Guide. Managing NDMP processes Normally all NDMP actions are controlled from the DMA. However, if the DMA cannot resolve a...
NDMP events An NDMP Server can generate three types of events: INFO, WARN, and ALERT. These events are displayed on the GUI and can be viewed with the ibrix_event command. INFO events. Identifies when major NDMP operations start and finish, and also report progress. For example: 7012:Level 3 backup of /mnt/ibfs7 finished at Sat Nov 7 21:20:58 PST 2011...
Parent: If the host group is a child group, the name of the host group's parent in the group hierarchy is displayed. Domain: If defined, the subnet IP address used to assign HP StoreAll clients to this group is displayed. A domain rule restricts host group membership to HP StoreAll clients on the cluster subnet identified by this address.
For example, suppose that you want all clients to be able to mount file system ifs1 and to implement a set of host tunings denoted as Tuning 1, but you want to override these global settings for certain host groups. To do this, mount ifs1 on the clients host group, ifs2 on host group A, ifs3 on host group C, and ifs4 on host group D, in any order.
Admin and Server threads, select a protocol (TCP or UDP) and define a domain. Under the Advanced tab, you can specify module tune options. For more information, see the HP StoreAll OS User Guide and the HP StoreAll OS CLI Reference Guide.
The dialog contains the following fields: Field Description Mountpoint The path that will be used as the mountpoint on each client. Filesystem The files system to be mounted. atime Update the inode access time when the inode is accessed. nodiratime Do not update the directory inode access time when the directory is accessed.
8 Monitoring cluster operations This chapter describes how to monitor the operational state of the cluster and how to monitor cluster health. Monitoring hardware The GUI displays status, firmware versions, and device information for the servers, virtual chassis, and system storage included in 8800/9320 systems. Monitoring servers To view information about the server and chassis included in your system:
Select the server component that you want to view from the lower Navigator panel, such as NICs.
The following are the top-level options provided for the server: NOTE: Information about the Hardware node can be found in “Monitoring hardware components” (page 77). HBAs. The HBAs panel displays the following information: ◦ Node WWN ◦ Port WWN ◦ Backup
◦ Monitoring ◦ State NICs. The NICs panel shows all NICs on the server, including offline NICs. The NICs panel displays the following information: ◦ Name ◦ Type ◦ State ◦ Route ◦ Standby Server ◦ Standby Interface Mountpoints. The Mountpoints panel displays the following information: ◦...
Events. The Events panel displays the following information: ◦ Level ◦ Time ◦ Event Hardware. The Hardware panel displays the following information: ◦ The name of the hardware component. ◦ The information gathered in regards to that hardware component. “Monitoring hardware components” (page 77) for detailed information about the Hardware panel.
Message Diagnostic message. This column appears dynamically, depending on the situation. Obtain detailed information for hardware components in the server by clicking the nodes under the Server node.
Table 7 Obtaining detailed information about a server Panel name Information provided Status Type Name UUID Model Location ILO Module Status Type Name UUID Serial Number Model Firmware Version Properties Memory DiMM Status Type Name UUID Location Properties Status Type Name UUID Properties...
Table 7 Obtaining detailed information about a server (continued) Panel name Information provided Drive: Displays information about each drive in a storage Status cluster. Type Name UUID Serial Number Model Firmware Version Location Properties Storage Controller (Displayed for a server) Status Type Name...
Table 7 Obtaining detailed information about a server (continued) Panel name Information provided Temperature Sensor: Displays information for each Status temperature sensor. Type Name UUID Locations Properties Monitoring storage and storage components Select Vendor Storage from the Navigator tree to display status and device information for storage and storage components.
The Management Console provides a wide-range of information in regards to vendor storage. Drill down into the following components in the lower Navigator tree to obtain additional details: Servers. The Servers panel lists the host names for the attached storage. LUNs.
LUNs. The LUNs panel provides information about the LUNs in a storage cluster. For HP StoreAll 9730/X9720 systems, the Vendor Storage panel lists the HP 600 Modular Disk Systems (MDS600) included in the system. The Summary panel shows details for the selected MDS600.
Events are written to an events table in the configuration database as they are generated. To maintain the size of the file, HP recommends that you periodically remove the oldest events. See “Removing events from the events database table” (page 85).
INFO Nic eth0[99.224.24.03] on host ix24-03.ad.hp.com up 1981 Feb 14 15:08:15 INFO Ibrix kernel file system is up on ix24-03.ad.hp.com The ibrix_event -i command displays events in long format, including the complete event description. $ ibrix_event -i -n 2 Event:...
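Saved output in the short format shown above can be summarized with standard tools. A sketch that counts INFO-level events in a captured log; the file path is an example, and its two lines reproduce the sample above:

```shell
# Count INFO-level lines in saved `ibrix_event -q` output (sample data below).
cat > /tmp/events.out <<'EOF'
INFO Nic eth0[99.224.24.03] on host ix24-03.ad.hp.com up
INFO Ibrix kernel file system is up on ix24-03.ad.hp.com
EOF
grep -c '^INFO' /tmp/events.out
```

The same pattern applies to ALERT or WARN lines by changing the grep expression.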
Monitoring cluster health To monitor the functional health of file serving nodes and StoreAll clients, execute the ibrix_health command. This command checks host performance in several functional areas and provides either a summary or a detailed report of the results. Health checks The ibrix_health command runs these health checks on file serving nodes: Pings remote file serving nodes that share a network with the test hosts.
The following is an example of the output from the ibrix_health -i command: [root@r211-s16 ~]# ibrix_health -i -h r211-s15 Overall Health Checker Results - PASSED ======================================= Host Summary Results ==================== Host Result Type State Network Last Update -------- ------ ------ ----- ---------- -----------...
Result Information ----------------------------------------------------- ------ ------------------ User nic r211-s15/bond0:3 pingable from host r211-s16 PASSED Check : Physical volumes are readable ===================================== Check Description Result Result Information --------------------------------------------------------------- ------ ------------------ Physical volume PtJtdz-TXe3-v6Kf-XvQB-uGHz-9nJe-BTFteM readable PASSED /dev/mpath/mpath2 Physical volume gzxyTW-oiRT-THMc-zE2i-q7x0-3Kbj-YuyWLn readable PASSED /dev/mpath/mpath1 Physical volume knuTJf-09Vp-ErNm-dfjn-nwsk-lBOq-AXONvE readable PASSED /dev/mpath/mpath3...
The -f option displays results only for hosts that failed the check. The -s option includes information about the file system and its segments. The -v option includes details about checks that received a Passed or Warning result. The following example shows a detailed health report for file serving node r38-s2: [root@r38-s2 ~]# ibrix_health -i -h r38-s2 Overall Health Checker Results - PASSED =======================================...
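When the output of ibrix_health -i is captured to a file, the overall verdict can be pulled out with awk. A sketch using a saved sample; the file path is an example, and the sample line reproduces the report header shown above:

```shell
# Extract the overall result from captured `ibrix_health -i` output.
printf 'Overall Health Checker Results - PASSED\n' > /tmp/health.out
awk -F'- ' '/Overall Health Checker Results/ {print $2}' /tmp/health.out
```

This prints PASSED for the sample; a FAILED verdict could then be used to trigger an alert from a cron job.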
Logs are provided for the Fusion Manager, nodes, and StoreAll clients. Contact HP Support for assistance in interpreting log files. You might be asked to tar the logs and email them to HP. Viewing operating statistics for file serving nodes Periodically, the file serving nodes report the following statistics to the Fusion Manager: Summary.
-f NFS statistics -h The file serving nodes to be included in the report Sample output follows: ---------Summary------------ HOST Status CPU Disk(MB/s) Net(MB/s) lab12-10.hp.com 22528 ---------IO------------ HOST Read(MB/s) Read(IO/s) Read(ms/op) Write(MB/s) Write(IO/s) Write(ms/op) lab12-10.hp.com 22528 0.00 ---------Net------------ HOST In(MB/s) In(IO/s) Out(MB/s) Out(IO/s) lab12-10.hp.com...
9 Maintaining the system Shutting down the system To shut down the system completely, first shut down the StoreAll software, and then power off the system hardware. Shutting down the StoreAll software Use the following procedure to shut down the StoreAll software. Unless noted otherwise, run the commands from the node hosting the active Fusion Manager.
Unmount all file systems on the cluster nodes: ibrix_umount -f <fs_name> To unmount file systems from the GUI, select Filesystems > unmount. Verify that all file systems are unmounted: ibrix_fs -l If a file system fails to unmount on a particular node, continue with this procedure. The file system will be forcibly unmounted during the node shutdown.
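The unmount and verify steps above can be sketched as a dry run. The file system name is an example, and the commands are echoed rather than executed:

```shell
# Dry-run sketch of the shutdown unmount sequence (fs name is an example).
fs="ifs1"
echo ibrix_umount -f "$fs"   # unmount the file system on all cluster nodes
echo ibrix_fs -l             # verify that no file systems remain mounted
```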
1. Power on the node hosting the active Fusion Manager.
2. Power on the file serving nodes (*root segment = segment 1; power on the owner first, if possible).
3. Monitor the nodes on the GUI and wait for them all to report UP in the output from the following command:
   ibrix_server -l
4. Mount file systems and verify their content.
HOSTNAME is the name of the node that you just rebooted.

Starting and stopping processes

You can start, stop, and restart processes, and display status for the processes that perform internal StoreAll software functions. The following commands also control the operation of PostgreSQL on the machine.
Verify that the iadconf.xml file has been created and that the cluster name is correct.

Tuning nodes and StoreAll clients

Typically, HP Support sets the tuning parameters on the file serving nodes during the cluster installation; changes should be needed only for special situations.
The General Tunings dialog box specifies the communications protocol (TCP or UDP) and the number of admin and server threads. The IAD Tunings dialog box configures the StoreAll administrative daemon. The Module Tunings dialog box adjusts various advanced parameters that affect server operations.
On the Servers dialog box, select the servers to which the tunings should be applied. To tune nodes using the CLI, see the ibrix_host_tune command in the HP StoreAll OS CLI Reference Guide.
To list host tuning parameters that have been changed from their defaults:
ibrix_lwhost --list

See the ibrix_lwhost command description in the HP StoreAll OS CLI Reference Guide for other available options.

Windows clients: Click the Tune Host tab on the Windows StoreAll client GUI. Tunable parameters include the NIC to prefer (the default is the cluster interface), the communications protocol (UDP or TCP), and the number of server threads to use.
The Change Ownership dialog box reports the status of the servers in the cluster and lists the segments owned by each server. In the Segment Properties section of the dialog box, select the segment whose ownership you are transferring, and click Change Owner.
The Summary dialog box shows the segment migration you specified. Click Back to make any changes, or click Finish to complete the operation. To migrate ownership of segments from the CLI, see the ibrix_fs command in the HP StoreAll OS CLI Reference Guide.
On the Evacuate Advanced dialog box, locate the segment to be evacuated and click Source. Then locate the segments that will receive the data from the segment and click Destination. If the file system is tiered, be sure to select destination segments on the same tier as the source segment.
Guide.

Troubleshooting segment evacuation

If segment evacuation fails, HP recommends that you run phase 1 of the ibrix_fsck command in corrective mode on the segment that failed the evacuation. For more information, see "Checking and repairing file systems" in the HP StoreAll OS User Guide.
3015A4021.C34A994C, poid 3015A4021.C34A994C, primary 4083040FF.7793558E poid 4083040FF.7793558E

Use the inum2name utility to translate the primary inode ID into the file name.

Removing a node from a cluster

In the following procedure, the cluster contains four nodes: FSN1, FSN2, FSN3, and FSN4. FSN4 is the node being removed.
In general, it is better to assign a user network for protocol (NFS/SMB/HTTP/FTP) traffic because the cluster network cannot host the virtual interfaces (VIFs) required for failover. HP recommends that you use a Gigabit Ethernet port (or faster) for user networks.
For a highly available cluster, HP recommends that you put protocol traffic on a user network and then set up automated failover for it (see “Configuring High Availability on the cluster”...
Execute this command once for each destination host that the file serving node or StoreAll client should contact using the specified network interface (IFNAME). For example, to prefer network interface eth3 for traffic from file serving node s1.hp.com to file serving node s2.hp.com: ibrix_server -n -h s1.hp.com -A s2.hp.com/eth3...
ibrix_hostgroup -n -g HOSTGROUP -A DESTHOST/IFNAME

The destination host (DESTHOST) cannot be a hostgroup. For example, to prefer network interface eth3 for traffic from all StoreAll clients (the clients hostgroup) to file serving node s2.hp.com:
ibrix_hostgroup -n -g clients -A s2.hp.com/eth3
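Because the per-server form must be run once per destination host, applying the same preference to several destinations is a natural candidate for a loop. A minimal sketch follows; host and interface names are examples, and with DRYRUN=1 (the default) the ibrix_server commands are only printed:

```shell
#!/bin/sh
# Sketch: prefer one interface for traffic to several destination hosts.
DRYRUN=${DRYRUN:-1}
prefer_iface() {  # usage: prefer_iface SRC_HOST IFNAME DEST_HOST...
  src=$1 iface=$2
  shift 2
  for dest in "$@"; do
    if [ "$DRYRUN" = 1 ]; then
      echo "ibrix_server -n -h $src -A $dest/$iface"
    else
      ibrix_server -n -h "$src" -A "$dest/$iface"
    fi
  done
}
prefer_iface s1.hp.com eth3 s2.hp.com s3.hp.com
```

Run with DRYRUN=0 on a node with the StoreAll CLI to apply the preferences.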
The following command adds a route for virtual interface eth2:232 on file serving node s2.hp.com, sending all traffic through gateway gw.hp.com:
ibrix_nic -r -n eth2:232 -h s2.hp.com -A -R gw.hp.com

Deleting a routing table entry

If you delete a routing table entry, it is not replaced with a default entry. A new replacement route must be added manually.
“Changing the cluster interface” (page 109).

To delete a network interface, use the following command:
ibrix_nic -d -n IFNAME -h HOSTLIST

The following command deletes interface eth3 from file serving nodes s1.hp.com and s2.hp.com:
ibrix_nic -d -n eth3 -h s1.hp.com,s2.hp.com

Viewing network interface information

Executing the ibrix_nic command with no arguments lists all interfaces on all file serving nodes.
### Sending request to Fusion Manager to register vendor storage vs_069def80-0000-1000-802a-533457303249
### Please run the following commands to verify Vendor Storage registration succeeded for host r206s17
### ibrix_vs -l
### ibrix_vs -l -s
Auto registration of chassis and vs completed
10. On the server running the active Fusion Manager, execute the following command:
    ibrix_pv
11. Use the Extend Filesystem Wizard in the GUI to complete the addition of the new storage. Navigate to the wizard by selecting Filesystems in the upper Navigator and then selecting Summary >...
### ibrix_vs -l
### ibrix_vs -l -s
Auto registration of chassis and vs completed

Verify that the new vendor storage is now registered with the new storage cluster UUID by executing the following command:
ibrix_vs -l -s

In the following output example, the vendor storage is now identified by the cluster ID noted in the command output of step 1.
Fax the Password Request Form that came with your License Entitlement Certificate. See the certificate for fax numbers in your area.
Call or email the HP Password Center. See the certificate for telephone numbers or email addresses for your area.
Data Collection is a log collection utility that allows you to collect relevant information for diagnosis by HP Support when system issues occur. The collection can be triggered manually using the GUI or CLI (using the ibrix_collect command), or automatically during a system crash.
The data is stored locally on each node in a compressed archive file <nodename>_<filename>_<timestamp>.tgz under /local/ibrixcollect. Enter the name of the .tgz file that contains the collected data. The default location for this .tgz file is /local/ibrixcollect/archive on the active Fusion Manager node.
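The naming pattern above is mechanical, so the expected archive path for a given node and collection name can be sketched in a few lines of shell. The collection name and timestamp format here are examples matching the sample file names elsewhere in this chapter:

```shell
#!/bin/sh
# Sketch: build the per-node archive path <nodename>_<filename>_<timestamp>.tgz
# under /local/ibrixcollect; names and timestamp format are illustrative.
archive_name() {  # usage: archive_name NODENAME COLLECTION [TIMESTAMP]
  ts=${3:-$(date +%Y-%m-%d-%H-%M-%S)}
  echo "/local/ibrixcollect/${1}_${2}_${ts}.tgz"
}
archive_name host2 mycollect
```

This can be handy when scripting checks for whether a given collection has already been gathered on each node.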
<timestamp>_PROCESSED. HP Support may request that you send this information to assist in resolving the system crash.

HP recommends that you maintain your crash dumps in the /var/crash directory. Ibrix Collect processes only the core dumps present in the /var/crash directory (linked to /local/platform/crash).
The average size of the archive file depends on the size of the logs present on the individual nodes in the cluster. You may later be asked to email this final .tgz file to HP Support.

Deleting logs

You can delete a specific data collection or all data collection sets. Deletion removes the tar files on each node from the system.
Enter the number of previously collected data sets (archive files) to be retained in each node of the cluster. Under Email Settings, you can choose to have a zip file containing specific system and HP StoreAll command outputs about the cluster configuration sent via email. To enable this setting, select Enable sending cluster configuration by email.
Table 8 ibrix_collect add-on scripts

Step  Description                                Where to find more information
1     Create an add-on script.                   “Creating an add-on script” (page 122)
2     Run the add-on script.                     “Running an add-on script” (page 123)
3     View the output from the add-on script.    “Viewing the output from an add-on script” (page 123)

Creating an add-on script

To create an add-on script:
[root@host2 archive]# tar -xvf addOnCollection.tgz

In this instance, addOnCollection.tgz is the tar file containing the output of the add-on script. The tar command displays the following:
./host2_addOnCollection_2012-12-20-12-38-36.tgz
Individual node files are provided in tar format as <hostname>_<collection-name>_<time-date stamp>.tgz.

Extract the <hostname>_<collection-name>_<time-date stamp>.tgz tar file by entering the following command:
[root@host2 archive]# tar -xvf host2_addOnCollection_2012-12-20-12-38-36.tgz

In this instance, host2_addOnCollection_2012-12-20-12-38-36.tgz is the individual node file (<hostname>_<collection-name>_<time-date stamp>.tgz). A directory with the host name is extracted. The output of the add-on script is found in /<hostname>/logs/add_on_script/local/ibrixcollect/ibrix_collect_additional_data.

Find the directory containing the host name by entering the ls -l command, as shown in...
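The two-stage extraction can be tried safely without a real cluster by recreating the nested layout with dummy data. All file names below are illustrative, mirroring the examples above:

```shell
#!/bin/sh
# Recreate the nested collection layout with dummy data, then extract it in
# two stages as described above. All names are illustrative.
set -e
work=$(mktemp -d)
cd "$work"
inner=host2_addOnCollection_2012-12-20-12-38-36.tgz
outdir=host2/logs/add_on_script/local/ibrixcollect/ibrix_collect_additional_data
mkdir -p "$outdir"
echo "add-on output" > "$outdir/result.txt"
tar -czf "$inner" host2                 # per-node tar, as ibrix_collect names it
tar -czf addOnCollection.tgz "$inner"   # outer collection tar
rm -rf host2 "$inner"
tar -xf addOnCollection.tgz             # stage 1: extract the collection tar
tar -xf "$inner"                        # stage 2: extract the per-node tar
cat "$outdir/result.txt"
```

After both stages, the add-on script output is back under the per-host directory tree.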
The file system and IAD/FS output fields should show matching version numbers unless you have installed special releases or patches. If the output fields show mismatched version numbers and you do not know of any reason for the mismatch, contact HP Support. A mismatch might affect the operation of your cluster.
Failover

Cannot fail back from failover caused by storage subsystem failure

When a storage subsystem fails and automated failover is turned on, the Fusion Manager will initiate its failover protocol. It updates the configuration database to record that segment ownership has transferred from primary servers to their standbys and then attempts to migrate the segments to the standbys.
To maintain access to a file system, file serving nodes must have current information about the file system. HP recommends that you execute ibrix_health on a regular basis to monitor the health of this information. If the information becomes outdated on a file serving node, execute ibrix_dbck -o to resynchronize the server's information with the configuration database.
If a full file system restore is possible, perform a full file system restore as described in "Backing up and restoring file systems with Express Query data" in the HP StoreAll OS User Guide. If a full file system restore is not possible or desirable, then you can rebuild the database using the...
-C <FSNAME> command. Then, schedule the Online Metadata Synchronizer to run to restore the path names affected by the lost rename operations. See "Online Metadata Synchronizer" in the HP StoreAll OS User Guide for more information.
Obtaining the latest StoreAll software release

StoreAll OS version 6.5.x is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox.

IMPORTANT: The bootable device must have at least 4 GB of free space and contain no other content.
Download the HP StoreAll QR image to the Windows computer. Mount the HP StoreAll QR image by using a tool for creating a virtual DVD drive, such as Virtual CloneDrive. In Virtual CloneDrive version 5.4.5.0, for example, right-click the HP StoreAll QR ISO image and click Mount (Virtual CloneDrive), as shown in the following figure.
Launch ImgBurn, and click the Create image file from disc option, as shown in the following figure.
Click Source. The ISO file is mounted through Virtual CloneDrive.
Click Destination, and choose the location to save the image file. While saving the file, select IMG Files (*.img) as the saved format.
Create the image file by clicking File→Read.
Connect a USB flash drive to the Windows computer.
Use a software product to copy the bootable image file to the USB flash drive. The following steps are from Win32 Disk Imager version 0.7. Win32 Disk Imager can be obtained from various freeware sites on the Internet.
Use the hpsp_credmgmt command to set the custom credentials for each of the MSA Credential URIs. Using the example output from step 3, these commands would be:
/opt/hp/platform/bin/hpsp_credmgmt --init-cred --master-passwd=hpdefault --hw-username=<Custom username> --hw-password=<Custom password>
/opt/hp/platform/bin/hpsp_credmgmt --init-cred --master-passwd=hpdefault --hw-username=<Custom username>...
/opt/hp/platform/bin/hpsp_credmgmt --update-cred --cred-selector=couplet:couplet/array/500c0ff194705000/mgmtport_b/rw --cred-type=upwpair --cred-username=<Custom username> --cred-password=<Custom password>
/opt/hp/platform/bin/hpsp_credmgmt --update-cred --cred-selector=couplet:couplet/array/500c0ff194705000/mgmtport_a/ro --cred-type=upwpair --cred-username=<Custom username> --cred-password=<Custom password>
/opt/hp/platform/bin/hpsp_credmgmt --update-cred --cred-selector=couplet:couplet/array/500c0ff194705000/mgmtport_a/rw --cred-type=upwpair --cred-username=<Custom username> --cred-password=<Custom password>

Exit the command prompt and log into the node again. When the installation wizard appears, continue with the standard restore process.
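The --update-cred invocations above follow a regular pattern: read-only and read-write credentials for each management port of the array WWN. A sketch that generates the four commands from a WWN follows; the WWN matches the example above, the username and password are placeholders, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: print one --update-cred command per management-port credential URI.
gen_cred_cmds() {  # usage: gen_cred_cmds WWN USERNAME PASSWORD
  for port in mgmtport_a mgmtport_b; do
    for mode in ro rw; do
      echo "/opt/hp/platform/bin/hpsp_credmgmt --update-cred" \
           "--cred-selector=couplet:couplet/array/$1/$port/$mode" \
           "--cred-type=upwpair --cred-username=$2 --cred-password=$3"
    done
  done
}
gen_cred_cmds 500c0ff194705000 admin secret
```

Reviewing the printed commands before running them avoids typos in the long credential selectors.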
On the Installation — Networking Menu, select Single Network (data, cluster, & mgmt traffic).
On the User Info dialog box, select Ok to confirm settings.
Enter the information for the node being restored on the Network Configuration dialog box and click OK.
Confirm that the information displayed in the Configuration Summary dialog box is correct and click Commit.
The wizard scans the network for existing clusters. On the Installation — Network Setup Complete dialog box, select Join this StoreAll server to an existing cluster and click OK. If you would like to, you can now reconfigure the bond using an advanced wizard. See “Reconfiguring the bond”...
To reconfigure the bond, press F2.
On the Advanced Configuration dialog box, select the interface you would like to configure.
Select the desired bond mode and click OK.
Provide the configuration parameters for the bond and click OK.
On the Advanced Configuration dialog box, select the interface to configure and click Continue.
Review the Configuration Summary and click Commit if all settings are correct.
Once the server is successfully configured, join the newly configured server to the cluster. Select Join this StoreAll server to an existing cluster and click OK.
The wizard scans the network for existing clusters. On the Join Cluster dialog box, select the management console (Fusion Manager) for your cluster, and then click OK. If your cluster does not appear in the list of choices, click Cancel so that you can provide the IP address of the Fusion Manager with which this node is to be registered.
For example:
ibrix_nic -m -h titan16 -A titan15/eth2

Configure Insight Remote Support on the node. See “Configuring HP Insight Remote Support on StoreAll systems” (page 18).

Run ibrix_health -l from the StoreAll management console to verify that no errors are being reported.
Restoring services

When you perform a Quick Restore of a file serving node, the NFS, SMB, FTP, and HTTP export information is not automatically restored to the node. After operations are failed back to the node, I/O from client systems to the node fails for the NFS, SMB, FTP, and HTTP shares. To avoid this situation, manually restore the NFS, SMB, FTP, and HTTP exports on the node before failing it back.
-h hostnameX
Iad error on host hostnameX failed command (<HIDDEN_COMMAND>) status (1) output: (Joining to AD Domain: IBRQA1.HP.COM With Computer DNS Name: hostnameX.ibrqa1.hp.com)

Verify that the content of the /etc/resolv.conf file is not empty. If the content is empty, copy the contents of the /etc/resolv.conf file on another server to the empty resolv.conf...
HP ProLiant DL380 G6 Server Maintenance and Service Guide To find these documents, go to the Manuals page (http://www.hp.com/support/manuals) and select servers > ProLiant ml/dl and tc series servers > HP ProLiant DL380 G7 Server series or HP ProLiant DL380 G6 Server series.
Online help for HP Storage Management Utility (SMU) and Command Line Interface (CLI)

To find these documents, go to the Manuals page (http://www.hp.com/support/manuals) and select storage > Disk Storage Systems > MSA Disk Arrays > HP 2000sa G2 Modular Smart Array or HP P2000 G3 MSA Array Systems.
14 Documentation feedback

HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL...
A System component and cabling diagrams for 9320 systems

System component diagrams

Front view of 9300c array controller or 9300cx 3.5" 12-drive enclosure

Item  Description
1-12  Disk drive bay numbers
      Enclosure ID LED
      Disk drive Online/Activity LED
      Disk drive Fault/UID LED
      Unit Identification (UID) LED
      Fault ID LED
      Heartbeat ID LED
Rear view of 9300c array controller

Item  Description
      Power supplies
      Power switches
      Host ports
      CLI port
      Network port
      Service port (used by service personnel only)
      Expansion port (connects to drive enclosure)

Rear view of 9300cx 3.5" 12-drive enclosure

Item  Description
      Power supplies
      Power switches
      SAS In port (connects to the controller enclosure)
Front view of file serving node

Item  Description
      Quick-release levers (2)
      HP Systems Insight Manager display
      Hard drive bays
      SATA optical drive bay
      Video connector
      USB connectors (2)

Rear view of file serving node

Item  Description
      PCI slot 5
      PCI slot 6
Item  Description
      Mouse connector
      Keyboard connector
      Serial connector
      iLO 2 connector
      NIC 3 connector
      NIC 4 connector
Server PCIe card PCI slot HP SC08Ge 3Gb SAS Host Bus Adapter NC364T Quad 1Gb NIC empty SATA 1Gb empty empty empty HP SC08Ge 3Gb SAS Host Bus Adapter empty empty SATA 10Gb NC522SFP dual 10Gb NIC empty empty HP SC08Ge 3Gb SAS Host Bus Adapter...
SAS option cabling

Line  Description
      SAS I/O path - Array 1: Controller A
      SAS I/O path - Array 1: Controller B
      SAS I/O path - Array 2: Controller A
      SAS I/O path - Array 2: Controller B
Drive enclosure cabling

Item  Description
      SAS controller in 9300c controller enclosure
      I/O modules in four 9300cx drive enclosures
Use conductive field service tools. Use a portable field service kit with a folding static-dissipating work mat. If you do not have any of the suggested equipment for proper grounding, have an HP-authorized reseller install the part. NOTE: For more information on static electricity or assistance with product installation, contact your HP-authorized reseller.
Equipment symbols

If the following symbols are located on equipment, hazardous conditions could exist.

WARNING! Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. The enclosed area contains no operator-serviceable parts. To reduce the risk of injury from electrical shock hazards, do not open this enclosure.
WARNING! Verify that the AC power supply branch circuit that provides power to the rack is not overloaded. Overloading AC power to the rack power supply circuit increases the risk of personal injury, fire, or damage to the equipment. The total rack load should not exceed 80 percent of the branch circuit rating.
CAUTION: Protect the installed solution from power fluctuations and temporary interruptions with a regulating Uninterruptible Power Supply (UPS). This device protects the hardware from damage caused by power surges and voltage spikes, and keeps the system in operation during a power failure.
Hewlett-Packard Company, 3000 Hanover Street, Palo Alto, California 94304, U.S.

Local representative information (Russian):
HP Russia: ZAO "Hewlett-Packard A.O.", 16A Leningradskoye Shosse, bldg. 3, Moscow, 125171, Russia, tel/fax: +7 (495) 797 35 00, +7 (495) 287 89 05
HP Belarus: IOOO "Hewlett-Packard Bel", 220030, Belarus, Minsk, ul.
HP Enterprise Servers: http://www.hp.com/support/EnterpriseServers-Warranties
HP Storage Products: http://www.hp.com/support/Storage-Warranties
HP Networking Products: http://www.hp.com/support/Networking-Warranties
DNS     Domain Name System.
FTP     File Transfer Protocol.
GSI     Global service indicator.
HA      High availability.
HBA     Host bus adapter.
HCA     Host channel adapter.
HDD     Hard disk drive.
IAD     HP 9000 Software Administrative Daemon.
iLO     Integrated Lights-Out.
IML     Initial microcode load.
IOPS    I/Os per second.
IPMI    Intelligent Platform Management Interface.
JBOD    Just a bunch of disks.
TCP/IP  Transmission Control Protocol/Internet Protocol.
UDP     User Datagram Protocol.
UID     Unit identification.
VACM    SNMP View Access Control Model.
VC      HP Virtual Connect.
VIF     Virtual interface.
WINS    Windows Internet Name Service.
WWN     World Wide Name. A unique identifier assigned to a Fibre Channel device.
WWNN    World wide node name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.