HP SureStore E Disk Array FC60 Advanced User's Guide
This manual was downloaded from http://www.hp.com/support/fc60/
Edition E1200
Printed in U.S.A.
Format Conventions
WARNING denotes a hazard that can cause personal injury.
Caution denotes a hazard that can cause hardware or software damage.
Note denotes significant concepts or operating instructions.
this font denotes text to be typed verbatim (all commands, path names, file names, and directory names) and text displayed on the screen.
Printing History
Manual Revision History, December 2000. Changes:
• Added Figure 87 to clarify operation of the write cache flush thresholds.
• Added note regarding the impact of LUN binding on performance.
• Added information on managing the Universal Transport Mechanism (UTM).
• Added information on major event logging available with firmware HP08.
• Added Allocating Space for Disk Array Logs section describing use of...
About This Book
This guide is intended for use by system administrators and others involved in operating and managing the HP SureStore E Disk Array FC60. It is organized into the following chapters and sections.
Chapter 1, Product Description. Describes the features, controls, and operation of the disk array.
Related Documents and Information The following items contain information related to the installation and use of the HP SureStore E Disk Array and its management software. • HP SureStore E Disk Array FC60 Advanced User’s Guide - this is the expanded version of the book you are reading.
Product Description The HP SureStore E Disk Array FC60 (Disk Array FC60) is a disk storage system that features high data availability, high performance, and storage scalability. To provide high availability, the Disk Array FC60 uses redundant, hot swappable modules, which can be replaced without disrupting disk array operation should they fail.
Figure 1 HP SureStore E Disk Array FC60 (Controller with Six Disk Enclosures), showing the Array Controller FC60 and the SureStore E Disk System SC10 enclosures
Operating System Support
The Disk Array FC60 is currently supported on the following operating systems:
• HP-UX 11.0, 11.11, and 10.20
• Windows NT 4.0
• Windows 2000
Note: Some disk array features are specific to each operating system. These features are clearly identified throughout this book.
• RAID levels 0, 1, 0/1, 3, and 5 (RAID level 3 supported on Windows NT and Windows 2000 only) • EMS hardware monitoring (HP-UX only) High Availability High availability is a general term that describes hardware and software systems that are designed to minimize system downtime —...
This allows failed modules to be quickly identified and replaced. EMS Hardware Event Monitoring (HP-UX Only) The Disk Array FC60 is fully supported by Hewlett-Packard's EMS Hardware Monitors, which allow you to monitor all aspects of product operation and be alerted immediately if any failure or other unusual event occurs.
Disk Enclosure Components The SureStore E Disk System SC10, or disk enclosure, is a high availability Ultra2 SCSI storage product. It provides an LVD SCSI connection to the controller enclosure and ten slots on a single-ended backplane for high-speed, high-capacity LVD SCSI disks. Six disk enclosures fully populated with 9.1-Gbyte disks provide 0.54 Tbytes of storage in a 2-meter System/E rack.
Figure 2 Disk Enclosure Components, Exploded View (front door not shown), showing the BCC modules, power supply modules, fan modules, disk modules, and chassis (and backplane)
Operation Features The disk enclosure is designed to be installed in a standard 19-inch rack and occupies 3.5 EIA units (high). Disk drives mount in the front of the enclosure. Also located in the front of the enclosure are a power switch and status LEDs. A lockable front door shields RFI and restricts access to the disk drives and power button (Figure 3 on page 26).
Figure 3 Disk Enclosure Front and Back View. Legend: A system LEDs, B power button, C disk module, D disk module LEDs, E door lock, F ESD plug, G mounting ear, H power supply, I BCCs, J fans, K component LEDs
Power Switch
The power switch (B in Figure 3)...
Disk Enclosure SC10 Modules The disk enclosure hot-swappable modules include the following: • Disks and fillers • Fans • Power supplies Disks and Fillers Hot-swappable disk modules make it easy to add or replace disks. Fillers are required in all unused slots to maintain proper airflow within the enclosure.
Figure 4 Disk Module. Legend: A bezel handle, B cam latch, C carrier frame, D standoffs, E circuit board, F insertion guide, G capacity label
Disks fit snugly in their slots. The cam latch (B in Figure 4) is used to seat and unseat the connectors on the backplane.
BCCs
Two Backplane Controller Cards (BCCs) control the disks on one or two buses according to the setting of the Full Bus switch. When the Full Bus switch is set to on, BCC A, in the top slot, accesses the disks in all ten slots. When the Full Bus switch is off, BCC A accesses disks in the even-numbered slots and BCC B accesses disks in the odd-numbered slots.
Each BCC provides two LVD SCSI ports (B in Figure 5) for connection to the controller enclosure. The EEPROM on each BCC stores configuration information and user-defined data, including the manufacturer serial number, World Wide Name, and product number. The following are additional features of the BCC: •...
Fans
Redundant, hot-swappable fans provide cooling for all enclosure components. Each fan has two internal high-speed blowers (A in Figure 6), an LED (B), a pull tab (C), and two locking screws (D).
Figure 6 Fan Module. Legend: A internal blowers, B LED, C pull tab, D locking screws
Internal circuitry senses blower motion and triggers a fault when the speed of either blower...
Power Supplies
Redundant, hot-swappable 450-watt power supplies convert wide-ranging AC voltage from the external mains to stable DC output and deliver it to the backplane. Each power supply has two internal blowers, an AC receptacle (A in Figure 7), a cam handle (B) with locking screw, and an LED (C).
Power supplies share the load reciprocally; that is, each supply automatically increases its output to compensate for reduced output from the other. If one power supply fails, the other delivers the entire load. Internal circuitry triggers a fault when a power supply fan or other power supply part fails. If a power supply failure occurs, the amber fault LED will go on.
Array Controller Enclosure Components The array controller enclosure, like the disk enclosure, consists of several modules that can be easily replaced, plus several additional internal assemblies. See Figure 8. Together, these removable modules and internal assemblies make up the field replaceable units (FRUs).
Figure 8 Controller Enclosure Exploded View (front cover not shown), showing the power supply fan module, power supply modules, controller chassis, controller fan, controller module A, and controller module B
During operation, controller enclosure status is indicated by five LEDs on the front left of the controller enclosure.
Figure 10 Controller Enclosure Rear View
Front Cover
The controller enclosure has a removable front cover which contains slots for viewing the main operating LEDs. The cover also contains grills that aid air circulation. The controller modules, controller fan, and battery backup unit are located behind this cover. The cover must be removed to gain access to these modules and to observe the controller status and BBU LEDs.
Controller Modules The controller enclosure contains one or two controller modules. See Figure 11. These modules provide the main data and status processing for the Disk Array FC60. The controller modules slide into two controller slots (A and B) and plug directly into the backplane.
Each controller module has ten LEDs. See Figure 12. One LED identifies the controller module’s power status. A second LED indicates when a fault is detected. The remaining eight LEDs provide detailed fault condition status. The most significant LED, the heartbeat, flashes approximately every two seconds beginning 15 seconds after power-on.
Controller Memory Modules
Each controller module contains SIMM and DIMM memory modules. Two 16-Mbyte SIMMs (32 Mbytes total) store the controller program and other data required for operation. The standard controller module includes 256 Mbytes of cache DIMM, which is upgradeable to 512 Mbytes.
Power Supply Modules Two separate power supplies provide electrical power to the internal components by converting incoming AC voltage to DC voltage. Both power supplies are housed in removable power supply modules that slide into two slots in the back of the controller and plug directly into the power interface board.
Each power supply is equipped with a power switch to disconnect power to the supply. Turning off both switches turns off power to the controller. This should not be performed unless I/O activity to the disk array has been stopped, and the write cache has been flushed as indicated by the Fast Write Cache LED being off.
Figure 15 Power Supply Fan Module
Battery Backup Unit The controller enclosure contains one removable battery backup unit (BBU) that houses two rechargeable internal batteries (A and B) and a battery charger board. The BBU plugs into the front of the controller enclosure where it provides backup power to the controller’s cache memory during a power outage.
The BBU contains four LEDs that identify the condition of the battery. Internally, the BBU consists of two batteries or banks, identified as bank “A” and bank “B.” During normal operation both of the Full Charge LEDs (Full Charge-A and Full Charge-B) are on and the two amber Fault LEDs are off.
Disk Array High Availability Features High availability systems are designed to provide uninterrupted operation should a hardware failure occur. Disk arrays contribute to high availability by ensuring that user data remains accessible even when a disk or other component within the Disk Array FC60 fails.
The disk array uses hardware mirroring, in which the disk array automatically synchronizes the two disk images without user or operating system involvement. This is unlike software mirroring, in which the host operating system software (for example, LVM) synchronizes the disk images. Disk mirroring is used by RAID 1 and RAID 0/1 LUNs.
Figure 17 Calculating Data Parity (callouts: if a data bit is now written as 1, the parity bit will also be changed to a 1 so the total still equals 0)
Data Striping
Data striping, which is used on RAID 0, 0/1, 3 and 5 LUNs, is the performance-enhancing technique of reading and writing data to uniformly sized segments on all disks in a LUN simultaneously.
using a 5-disk RAID 5 LUN, a stripe segment size of 32 blocks (16 Kbytes) would ensure that an entire I/O would fit on a single stripe (16 Kbytes on each of the four data disks). The total stripe size is the number of disks in a LUN multiplied by the stripe segment size. For example, if the stripe segment size is 32 blocks and the LUN comprises five disks, the stripe size is 32 x 5, or 160 blocks (81,920 bytes).
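As a quick check of this arithmetic, the calculation can be scripted. The following shell sketch is illustrative only (the variable names are not part of any disk array tool) and assumes the 512-byte blocks used in the example above:

    # Stripe-size arithmetic for the example above (512-byte blocks assumed)
    SEGMENT_BLOCKS=32                                # stripe segment size, in blocks
    DISKS=5                                          # disks in the LUN
    STRIPE_BLOCKS=`expr $SEGMENT_BLOCKS \* $DISKS`   # 32 x 5 = 160 blocks
    STRIPE_BYTES=`expr $STRIPE_BLOCKS \* 512`        # 160 x 512 = 81920 bytes
    echo "stripe size: $STRIPE_BLOCKS blocks ($STRIPE_BYTES bytes)"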
fails. RAID-0 provides enhanced performance through simultaneous I/Os to multiple disk modules. Software mirroring the RAID-0 group provides high availability. Figure 18 illustrates the distribution of user data in a four-disk RAID 0 LUN. The stripe segment size is 8 blocks, and the stripe size is 32 blocks (8 blocks times 4 disks). The disk block addresses in the stripe proceed sequentially from the first disk to the second, third, and fourth, then back to the first, and so on.
individual disks. For highest data availability, each disk in the mirrored pair must be located in a different enclosure. When a data disk or disk mirror in a RAID 1 LUN fails, the disk array automatically uses the remaining disk for data access. Until the failed disk is replaced (or a rebuild on a global hot spare is completed), the LUN operates in degraded mode.
pair. For highest data availability, each disk in the mirrored pair must be located in a different enclosure. When a disk fails, the disk array automatically uses the remaining disk of the mirrored pair for data access. A RAID 0/1 LUN can survive the failure of multiple disks, as long as one disk in each mirrored pair remains accessible.
more disks. For highest availability, the disks in a RAID 3 LUN must be in different enclosures. If a disk fails or becomes inaccessible, the disk array can dynamically reconstruct all user data from the data and parity information on the remaining disks. When a failed disk is replaced, the disk array automatically rebuilds the contents of the failed disk on the new disk.
RAID 3 works well for single-task applications using large block I/Os. It is not a good choice for transaction processing systems because the dedicated parity drive is a performance bottleneck. Whenever data is written to a data disk, a write must also be performed to the parity drive.
Figure 22 RAID 5
With its individual access characteristics, RAID 5 provides high read throughput for small block-size requests (2 Kbytes to 8 Kbytes) by allowing simultaneous read operations from each disk in the LUN. During a write I/O, the disk array must perform four individual operations, which affects the write performance of a RAID 5 LUN.
RAID Level Comparisons
To help you decide which RAID level to select for a LUN, the following tables compare the characteristics of the supported RAID levels. Where appropriate, the relative strengths and weaknesses of each RAID level are noted.
Note: RAID 3 is supported on Windows NT and Windows 2000 only.
Table 1 RAID Level Comparison: Data Redundancy Characteristics...
Table 2 RAID Level Comparison: Storage Efficiency Characteristics
RAID 0 – 100%. All disk space is used for data storage.
RAID 1 and 0/1 – 50%. All data is duplicated, requiring twice the disk storage for a given amount of data capacity.
RAID 3 and 5 – One disk's worth of capacity from each LUN is required to store parity data.
Table 4 RAID Level Comparison: General Performance Characteristics
RAID 0 – Simultaneous access to multiple disks increases I/O performance.
RAID 1 – In general, the greater the number of mirrored pairs, the greater the increase in performance. ...
Table 5 RAID Level Comparison: Application and I/O Pattern Performance Characteristics
RAID 0 – RAID 0 is a good choice in the following situations:
– Data protection is not critical. RAID 0 provides no data redundancy for protection against disk failure. ...
Global Hot Spare Disks
A global hot spare disk is reserved for use as a replacement disk if a data disk fails. Global hot spares provide hardware redundancy for the disks in the array. To achieve the highest level of availability, it is recommended that one global hot spare disk be created for each channel.
Settings that give a higher priority to the rebuild process will cause the rebuild to complete sooner, but at the expense of I/O performance. Lower rebuild priority settings favor host I/Os, which will maintain I/O performance but delay the completion of the rebuild. The rebuild priority settings selected reflect the importance of performance versus data availability.
Data and parity from the remaining disks are used to rebuild the contents of disk 3 on the hot spare disk. The information on the hot spare is copied to the replaced disk, and the hot spare is again available to protect against another disk failure.
Primary and Alternate I/O Paths There are two I/O paths to each LUN on the disk array - one through controller A and one through controller B. Logical Volume Manager (LVM) is used to establish the primary path and the alternate path to a LUN. The primary path becomes the path for all host I/Os to that LUN.
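As a sketch of how primary and alternate paths are established with LVM on HP-UX (the device file names and the minor number below are examples only), the first device file added to a volume group becomes the primary path, and a second link to the same LUN becomes the alternate:

    # /dev/dsk/c5t0d0 and /dev/dsk/c7t0d0 are assumed to be two paths to one LUN
    pvcreate /dev/rdsk/c5t0d0              # initialize the LUN as an LVM physical volume
    mkdir /dev/vg01
    mknod /dev/vg01/group c 64 0x010000    # volume group device file (minor number is an example)
    vgcreate /dev/vg01 /dev/dsk/c5t0d0     # first link becomes the primary path
    vgextend /dev/vg01 /dev/dsk/c7t0d0     # second link becomes the alternate path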
Capacity Management Features
The disk array uses a number of features to manage its disk capacity efficiently. The use of LUNs allows you to divide the total disk capacity into smaller, more flexible partitions. Caching improves disk array performance by using controller RAM to temporarily store data during I/Os.
• Hot spare group – All disks assigned the role of global hot spare become members of this group. Up to six disks (one for each channel) can be assigned as global hot spares. • Unassigned group – Any disk that is neither part of a LUN nor a global hot spare is considered unassigned and becomes a member of this group.
controller with 256 Mbytes of cache will use half of the memory to mirror the other controller, leaving only 128 Mbytes for its own cache. The write cache contents cannot be flushed when both controllers are removed from the disk array simultaneously. In this case the write cache image is lost and data integrity on the disk array is compromised.
Overview
This chapter provides information to assist you in configuring the Disk Array FC60 to meet your specific storage needs. Factors to be considered when configuring the disk array include high availability requirements, performance, storage capacity, and future expandability. This chapter discusses configuration features of the Disk Array FC60 as they relate to these requirements.
Array Design Considerations The Disk Array FC60 provides the versatility to meet varying application storage needs. To meet a specific application need, the array should be configured to optimize the features most important for the application. Array features include: • High availability •...
enclosures can be added incrementally (up to six) as storage requirements grow. Multiple SCSI channels also increase data throughput. This increased data throughput occurs as a result of the controller’s ability to transfer data simultaneously over multiple data paths (channels). The more channels used, the faster the data throughput. Disk Enclosure Bus Configuration The disk enclosure can connect to either one or two SCSI channels, depending on its bus configuration.
the array for high availability, there must be no single points of failure. This means that the configuration must have at least these minimum characteristics: • Two controllers connected to separate Fibre Channel loops (using separate Fibre Channel host I/O adaptors) •...
of the buses must be configured with at least four disk modules (eight disk modules per disk enclosure). This configuration also offers full sequential performance and is more economical to implement. To scale up sequential transfer performance from the host, configure additional disk arrays.
Storage Capacity
For configurations where maximum storage capacity at minimum cost is a requirement, consider configuring the disk array in RAID 5 (using the maximum number of data drives per parity drive) and only supplying one or two hot spare drives per disk array. Also, purchase the lowest cost/Mbyte drive available (typically the largest capacity drives available at the time of purchase).
another, two or one disk enclosures, respectively, can be added by using split-bus mode. However, if you are adding four, five, or six enclosures, the enclosure configuration will need to be switched from split-bus to full-bus (refer to "Disk Enclosure Bus Configuration"...
This section presents recommended configurations for disk arrays using one to six disk enclosures. Configurations are provided for achieving high availability/high performance, and maximum capacity. The configuration recommended by Hewlett-Packard is the high availability/high performance configuration, which is used for factory-assembled disk arrays (A5277AZ).
• Global hot spares - although none of the configurations use global hot spares, their use is recommended to achieve maximum protection against disk failure. For more information, see "Global Hot Spare Disks" on page • Split bus operation - With three or fewer disk enclosures, increased performance can be achieved by operating the disk enclosures in split bus mode, which increases the number of SCSI busses available for data transfer.
• Data Availability – Not recommended for maximum high availability. – Handles a single disk failure, single BCC failure, a single channel failure, or a single controller failure – Expansion requires powering down the disk array, removing terminators and/or cables from the enclosures, and cabling additional disk enclosures. •...
Two Disk Enclosure Configurations High Availability/ High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Two disk enclosures with ten 73 GByte disk modules (20 disks total) – Disk enclosures configured for split-bus mode (two SCSI channels per enclosure) •...
Figure 25 Two Disk Enclosure High Availability/High Performance Configuration
Maximum Capacity
Note: This configuration is not recommended for environments where high availability is critical. To achieve high availability, each disk in a LUN should be in a different disk enclosure. This configuration does not achieve that level of protection.
•...
Figure 26 Two Disk Enclosure Maximum Capacity Configuration
Three Disk Enclosure Configurations High Availability/ High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Three disk enclosures with ten 73 GByte disks each (30 disks total) – Disk enclosures configured for split-bus mode (two SCSI channels per enclosure) •...
Figure 27 Three Disk Enclosure High Availability/High Performance Configuration
Maximum Capacity • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Three disk enclosures with ten 73 GByte disks each (30 disks total) – Disk enclosures configured for split-bus mode (two SCSI channels per enclosure) •...
Figure 28 Three Disk Enclosure Maximum Capacity Configuration
Four Disk Enclosure Configurations High Availability/High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Four disk enclosures with ten 73 GByte disks each (40 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) •...
Figure 29 Four Disk Enclosure High Availability/High Performance Configuration
Maximum Capacity • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Four disk enclosures with ten 73 GByte disks each (40 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) •...
Figure 30 Four Disk Enclosure Maximum Capacity Configuration
Five Disk Enclosure Configurations High Availability/High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Five disk enclosures with ten 73 GByte disks each (50 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) •...
Figure 31 Five Disk Enclosure High Availability/High Performance Configuration
Maximum Capacity • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Five disk enclosures with ten 73 GByte disks each (50 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) •...
Figure 32 Five Disk Enclosure Maximum Capacity Configuration
Six Disk Enclosure Configurations High Availability/High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Six disk enclosures with ten 73 GByte disks each (60 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) •...
Figure 33 Six Disk Enclosure High Availability/High Performance Configuration
Maximum Capacity • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Six disk enclosures with ten 73 GByte disks each (60 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) •...
Figure 34 Six Disk Enclosure Maximum Capacity Configuration
Total Disk Array Capacity The total capacity provided by the disk array depends on the number and capacity of disks installed in the array, and the RAID levels used. RAID levels are selected to optimize performance or capacity. Table 6 lists the total capacities available when using fully loaded disk enclosures configured for optimum performance.
For high availability, one disk per SCSI channel is used as a global hot spare.
Table 6 Capacities for Optimized Performance Configurations
Columns: number of disk enclosures | RAID level | no. of LUNs | disks per LUN | total capacity with 9.1 GB, 18.2 GB, 36.4 GB, and 73 GB disks...
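The table entries can be approximated with arithmetic like the following sketch. The disk count, LUN shape, and hot-spare count below are example assumptions, not values taken from Table 6:

    # Example: six full enclosures (60 disks) of 18.2-Gbyte disks, RAID 5,
    # one hot spare per SCSI channel (6 spares), and 9-disk LUNs (8 data + 1 parity)
    awk 'BEGIN {
        disks = 60; spares = 6; size = 18.2
        luns = (disks - spares) / 9                   # 6 LUNs of 9 disks each
        printf "usable capacity: %.1f Gbytes\n", luns * 8 * size
    }'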
Topologies for HP-UX The topology of a network or a Fibre Channel Arbitrated Loop (Fibre Channel-AL) is the physical layout of the interconnected devices; that is, a map of the connections between the physical devices. The topology of a Fibre Channel-AL is extremely flexible because of the variety and number of devices, or nodes, that can be connected to the Fibre Channel- AL.
100 m, and 500 m. Fibre optic cables in lengths of 2 m, 16 m, 50 m, and 100 m can be ordered from Hewlett-Packard (see chapter 8 for part numbers). Fibre optic cables longer than 100 m must be custom-fabricated for each implementation.
For high availability the hosts and disk arrays can be connected in any of the following ways, with each connection of adapter and controller module creating a separate 2-node Fibre Channel-AL: • One disk array with two controller modules with each controller module connected to a separate adapter in a single host •...
Figure 35 Basic Topology, High Availability Version: Host with Two Fibre Channel I/O Adapters
Figure 36 shows the high availability version of the basic topology implemented on a ...-Class or T-Class host with four Fibre Channel I/O adapters. Two of the Fibre Channel adapters are connected to one dual-controller module disk array while the other two Fibre Channel adapters are connected to a second dual-controller module disk array.
Figure 36 Basic Topology, High Availability Version: Host with Four Fibre Channel I/O Adapters The non-high availability version of this topology connects a host or server to one or more single-controller module disk arrays. This version provides no hardware redundancy and does not protect against single points of controller module, Fibre Channel cable, host Fibre Channel I/O adapter, or internal Ultra2 SCSI bus failure.
controller modules in four disk arrays. Each connection between adapter and controller module creates a separate Fibre Channel-AL.
Figure 37 Basic Topology, Non-High Availability Version: Host with Four Fibre Channel I/O Adapters
Table 8 Basic Topology Error Recovery
Columns: failing component | continue after failure | what happens and how to recover
Disk module – Applications continue to run on all supported RAID levels (RAID-1, 0/1, and 5). The system administrator or service provider hot-swaps the failed disk module.
Table 8 Basic Topology Error Recovery (cont'd)
Fibre Channel cable – No on path to failed cable; yes if array has dual controller modules – I/O operations fail along the path through the failed cable. If the host has two Fibre Channel adapters connected to a dual-controller module disk array, the array can be accessed through...
Because of this, it is recommended that the total cable length of the Fibre Channel-AL be as short as possible. Fibre optic cables in lengths of 2 m, 16 m, 50 m, and 100 m can be ordered from Hewlett-Packard (see the Reference
chapter for part numbers). Fibre optic cables longer than 100 m must be custom-fabricated for each implementation. Like the basic topology, both high availability versions (two controller modules per disk array) and non-high availability versions (one controller module per disk array) of this topology can be implemented.
Figure 38 illustrates the single-system distance topology with one host with two Fibre Channel I/O adapters and three dual-controller module disk arrays. In this example two of the HP Fibre Channel-AL Hub's ten ports are unused.
Figure 38 Single-System Distance Topology
Table 9 Single-System Distance Topology Error Recovery
Columns: failing component | continue after failure | what happens and how to recover
Disk module – Applications continue to run on all supported RAID levels (RAID-1, 0/1, and 5). The system administrator or service provider hot-swaps the failed disk module.
Table 9 Single-System Distance Topology Error Recovery (cont'd)
Fibre Channel I/O adapter – No on path to failed adapter; yes if array has dual controller modules – I/O operations fail along the path through the failed adapter. If the host has two Fibre Channel adapters connected to a dual-...
High Availability Topology The high availability topology increases the availability of the single system distance topology by protecting against single points of HP Fibre Channel-AL Hub failure with the use of redundant HP Fibre Channel-AL Hubs connected to each other. Adding a second HP Fibre Channel-AL Hub also increases the number of hosts and disk arrays that can be connected to a single Fibre Channel-AL.
Fibre Channel-AL be as short as possible. Fibre optic cables in lengths of 2 m, 16 m, 50 m, and 100 m can be ordered from Hewlett-Packard (see chapter 8 for part numbers). Fibre optic cables longer than 100 m must be custom-fabricated for each implementation.
Figure 39 High Availability Topology
Table 10 High Availability Topology Error Recovery
Columns: failing component | continue after failure | what happens and how to recover
Disk module – Applications continue to run on all supported RAID levels (RAID-1, 0/1, and 5). The system administrator or service provider hot-swaps the failed disk module.
Table 10 High Availability Topology Error Recovery (cont'd)
Fibre Channel cable – No on path to failed cable; yes if array has dual controller modules – I/O operations fail along the path through the failed cable. If the host has two Fibre Channel adapters connected to a dual-controller module disk array, the array can be accessed through the...
High Availability, Distance, and Capacity Topology The high availability, distance, and capacity topology expands on the high availability topology by using cascaded HP Fibre Channel-AL Hubs to increase the distance of each Fibre Channel-AL and the number of devices that can be interconnected on the Fibre Channel-AL.
Fibre Channel-AL be as short as possible. Fibre optic cables in lengths of 2 m, 16 m, 50 m, and 100 m can be ordered from Hewlett-Packard (see the Reference chapter for part numbers). Fibre optic cables longer than 100 m must be custom-fabricated for each implementation.
Figure 40 High Availability, Distance, and Capacity Topology
Table 11 High Availability, Distance, and Capacity Topology Error Recovery
Columns: failing component | continue after failure | what happens and how to recover
Disk module – Applications continue to run on all supported RAID levels (RAID-1, 0/1, and 5). The system administrator or service provider hot-swaps the failed disk module.
Table 11 High Availability, Distance, and Capacity Topology Error Recovery (cont'd)
Fibre Channel cable – No on path to failed cable; yes if array has dual controller modules – I/O operations fail along the path through the failed cable. If the...
Campus Topology The campus topology uses the same hardware components as the high availability, distance, and capacity topology. The components for each instance of this topology include: • Up to four host systems or servers • Two Fibre Channel I/O adapters per host •...
Figure 41 Campus Topology
Table 12 Campus Topology Error Recovery
Columns: failing component | continue after failure | what happens and how to recover
Disk module – Applications continue to run on all supported RAID levels (RAID-1, 0/1, and 5). The system administrator or service provider hot-swaps the failed disk module.
Table 12 Campus Topology Error Recovery (cont'd)
Fibre Channel cable – No on path to failed cable; yes if array has dual controller modules – I/O operations fail along the path through the failed cable. If the host has two Fibre Channel adapters connected to a dual-controller module disk array, the array can be accessed through the operational path if...
Performance Topology with Switches
Previous topologies use Fibre Channel hubs for interconnecting the arrays with the hosts. In these topologies there is basically one loop with all devices connected to it. The Disk Array FC60 can also be connected to switches. Connecting the array using switches provides increased performance.
Figure 43 Four Hosts Connected to Cascaded Switches
Topologies for Windows NT and Windows 2000 The topology of a network or a Fibre Channel Arbitrated Loop (Fibre Channel-AL) is the physical layout of the interconnected devices; that is, a map of the connections between the physical devices. The topology of a Fibre Channel-AL is extremely flexible because of the variety and number of devices, or nodes, that can be connected to the Fibre Channel- AL.
Unsupported Windows Topology
Because this topology provides four paths from the host to each disk array, it is not supported. Any topology that provides more than two paths from a host to the disk array is not supported. (The accompanying figure labels the four paths, Path 1 through Path 4.)
Non-High Availability Topologies Figure 45 through Figure 47 illustrate non-high availability topologies. These topologies do not achieve the highest level of data availability because they have a hardware component that represents a single point of failure. That is, if the critical component fails, access to the data on the disk array will be interrupted.
Figure 45 Four Host/Single Hub/Single Disk Array Non-HA Topology
Figure 46 Four Host/Cascaded Hubs/Dual Disk Array Non-HA Topology
Figure 47 Four Host/Single Switch/Dual Disk Array Non-HA Topology
High Availability Topologies Figure 48 through Figure 51 illustrate high availability topologies. These topologies achieve the highest level of availability because they have fully redundant hardware data paths to each disk array. There are no single points of failure in these topologies. These topologies are more complex and expensive to implement than non-high availability topologies.
Figure 48 Direct Connect Single Host/Single Disk Array HA Topology
Figure 49 Dual Host/Dual Hub/Four Disk Array HA Topology
Figure 50 Four Host/Dual Hub/Dual Disk Array HA Topology
Figure 51 Four Host/Dual Cascaded-Hubs/Four Disk Array HA Topology
INSTALLATION
Host System Requirements..........145
Site Requirements...
Overview
This chapter explains how to install the Disk Array FC60 enclosures into a cabinet and how to configure and connect the controller enclosure to the disk enclosures. It also covers the Fibre Channel cable connection to the host. Finally, this chapter provides power-up instructions and initial software installation requirements for operation of the disk array.
Host System Requirements HP-UX The Disk Array FC60 is supported on the following host configurations: • Supported host platforms: Table 13 lists the supported host platforms. The table also identifies which platforms support the disk array as a boot device when running HP-UX 11.x •...
Table 13 Supported Host Platform Information (cont'd)
Columns: supported host | boot support on HP-UX 11.x? | Fibre Channel I/O adapter
C-class – A5158A on HP-UX 11.x; A3740A on HP-UX 10.20
A4xx-A5xx class – A5158A
Fibre Channel I/O Adapters
The host must have the correct adapter installed. The supported host adapters are listed in Table 13.
Site Requirements Environmental Requirements The area around the array must be cooled sufficiently so it does not overheat. Chapter 8, Reference and Regulatory, contains environmental specifications for the Disk Array FC60. Refer to that section for the required environmental specifications. Electrical Requirements The site must be able to provide sufficient power to meet the needs of the devices in the cabinet(s).
Table 14 Total Operating and In-Rush Currents
Columns: operating current @ 110 V | operating current @ 220 V | in-rush current | power cords
Controller w/ 6 disk enclosures – 41.3 A | 20.4 A | 124 A
Controller w/ 5 disk enclosures – 34.8 A | 17.2 A | 104 A
Controller w/ 4 disk enclosures – 28.3 A | 14.0 A
Controller w/ 3 disk enclosures...
Table 16 Disk Enclosure Electrical Requirements
Voltage range: 100-127 V / 220-240 V
Frequency: 50-60 Hz
Current, typical: 2.9-3.2 A
Current, maximum operating: 2.6-3.2 A @ 100-120 V; 5.3-6.7 A @ 200-240 V
–...
Power Distribution Units (PDU/PDRU) PDUs provide a sufficient number of receptacles for a large number of devices installed in a cabinet. A PDU connects to the site power source and distributes power to its ten receptacles. The disk array power cords are connected to the PDUs and each PDU is connected to a separate UPS.
The following tables show recommended PDU/PDRU combinations for one or more components in a rack. Data assumes 220V AC nominal power and redundant PDU/PDRUs. For nonredundant configurations, divide the number of recommended PDU/PDRUs by 2.
Table 18 Recommended PDU/PDRUs for HP Legacy Racks
Columns: number of components | 1.1-meter (21 U) rack...
Refer to the documentation supplied with the PDU/PDRU for installation instructions. Recommended UPS Models The following Hewlett-Packard Power Trust models are recommended for use with the HP SureStore E Disk Array FC60. Each UPS supplies up to 15 minutes of standby power.
Figure 52 PDU Placement in 1.6-Meter Rack (two positions, each holding a 16-Amp PDU or 30-Amp PDRU)
Figure 53 PDRU Placement in 2.0-Meter Rack (two positions, each holding a 16-Amp PDU or 30-Amp PDRU)
Note: The HP SureStore E Disk Array FC60 has been tested for proper operation in supported Hewlett-Packard cabinets. If the disk array is installed in an untested rack configuration, care must be taken to ensure that all necessary environmental requirements are met. This includes power, airflow, temperature, and humidity.
Table 21 EIA Spacing for Racks and Array Enclosures
Legacy cabinets (1 EIA unit = 1.75 in.):
1.1-meter cabinet – 21 EIA units total available
1.6-meter cabinet – 32 EIA units total available
2.0-meter cabinet – 41 EIA units total available
Controller Enclosure FC60 – 5 EIA units used (includes 1/2 rail space below and remaining 1/2 EIA unit above enclosure)
installation to utilize 1/2 EIA units available from the disk system SC10’s 3.5 EIA unit height. Figure 55 shows rack locations for installation of six disk enclosures and one controller enclosure (positioned on top) for legacy racks. When disk enclosures are installed in legacy racks, an unusable 1/2-EIA space is left at the bottom of the enclosures.
Figure 54 Enclosure EIA Positions for System/E Racks
Figure 55 Enclosure EIA Positions for Legacy Cabinets
Installing the Disk Enclosures
Disk enclosures should be installed in the rack starting at the bottom and proceeding upward. When all disk enclosures are installed, the controller enclosure is installed at the top, directly above the topmost disk enclosure. Installation instructions for the disk enclosure SC10 are provided below;...
Figure 56 Disk Enclosure Contents
Step 3: Install Mounting Rails Select the rail kit for the appropriate rack and follow the instructions included with the rail kit to install the rails in the rack. The following rail kits are available for use with the disk enclosure: •...
Figure 57 Mounting the Disk Enclosure (Rack System/E shown). Legend: A front mounting ears, B chassis, C rail, D rail clamp
CAUTION: To protect the door, do not lift or move the disk enclosure with the door open.
Unlock and open the disk enclosure door, using a thin flat-blade screwdriver to turn the lock (Figure 58).
Figure 58 Door Lock
Ensure that one hole in each mounting ear (A in Figure 57) aligns with the sheet metal nuts previously installed on the rack front columns.
If using an HP rack, fasten the back of the disk enclosure to the rails using the rear hold-down clamps from the rail kit. a. If you are installing the disk enclosure in an HP legacy rack, set the clamp (A in Figure 59) on top of the rail (B) so that the tabs point up and the screw holes are on the slotted side of the rail.
Step 5: Install Disks and Fillers
CAUTION: Touching exposed areas on the disk can cause electrical discharge and disable the disk. Be sure you are grounded and be careful not to touch exposed circuits. Disks are fragile and ESD sensitive. Dropping one end of the disk two inches is enough to cause permanent damage.
Open the disk enclosure door. Put on the ESD strap (provided with the accessories) and insert the end into the ESD plug-in (D in Figure 60) near the upper left corner of the disk enclosure.
CAUTION: Disks are fragile. Handle them carefully.
Remove the bagged disk from the disk pack.
Open the cam latch (C in Figure 60) by pulling the tab toward you. Align the disk insertion guide (F) with a slot guide (G) and insert the disk into the slot. Typically, install disk modules on the left side of the enclosure and fillers on the right.
Note: Installing disks left to right allows you to insert the disk completely without releasing your grip on the handle.
Note: What if LUN 0 is on disks in the enclosure? If any of the disks in the enclosure are part of LUN 0, you will not be able to unbind the LUN before moving the disks. Instead, you must replace LUN 0 and exclude any of the disks in the enclosures from LUN 0.
Installing the Controller This procedure describes how to install the Disk Array FC60 controller enclosure into an HP legacy rack or an HP System/E Rack. Step 1: Gather Required Tools • Torx T25 screwdriver • Torx T15 screwdriver • Small flat-blade screwdriver Step 2: Unpack the Product Lift off the overcarton and verify the contents (see Table 23...
Table 23 Controller Package Contents
• Controller chassis with pre-installed modules
• Filler panel, 1/2 EIA unit, 2 ea.
• Rail kit (A5251A) for System/E racks
• Rail kit (A5250A) for legacy racks, 1 ea.
• SCSI cables, 2 ea. (length depends on option ordered): 2 meter (5064-2492) or 5 meter (5064-2470)
Step 3: Install Mounting Rails Select the rail kit for the appropriate rack and follow the instructions included with the rail kit to install the rails in the rack. The following rail kits are available for use with the controller enclosure: •...
Figure 63 Mounting the Controller Enclosure
If installing in an HP rack, secure the back of the enclosure to the rails using the two rail clamps from the rail kit. In legacy HP racks: a. Align screw holes and insert the clamp tab into the slot in the upper surface of the rail.
Configuration Switches
This section describes the configuration switches on the controller enclosure and the disk enclosures. Configuration switch settings must be set to support the configuration (full-bus or split-bus) being installed (as planned in chapter 2, Topology and Array Planning). Controller enclosure and disk enclosure configuration switches include: –...
Note: One BCC is inverted with respect to the other. Thus, the settings on one BCC appear inverted and in reverse order from the other. (The figure shows the Tray ID and Configuration DIP switch locations on both BCCs.)
Table 24 Disk Enclosure Switches
Switch 1 – Full-/Split-Bus Mode: 1 = full-bus mode, 0 = split-bus mode
Switch 2 – Stand-Alone/Array Mode: always set to Off (Array Mode)
Switch 3 – Bus Reset on Power Fail: must be set to 0
Switch 4 – High/Low Bus Addressing: set to 0 (Low addressing)
Switch 5 – Not Used: not used;...
a low range of IDs (0, 1, 2, 3, and 4) and a high range of IDs (8, 9, 10, 11, and 12). (BCCs are also provided addresses as shown in Table 25). Note that the SCSI IDs do not correspond to the physical slot number.
controller module A (Fibre Channel connector J3) and Host ID BD2 SW2 selects the address for controller module B (Fibre Channel connector J4). Each Fibre Channel Host ID DIP switch contains a bank of seven switches that select the address using a binary value, 000 0000 (0) through 111 1110 (126). To set an address, set the switches in the up position for "1"...
Figure 65 Fibre Channel Connectors and Fibre Channel Host (Loop) ID Switches (switch setting 0101010 = 42)
Note: Occasionally two or more ports in an arbitrated loop will arbitrate simultaneously. Priorities are decided according to the loop IDs. The higher the loop ID, the higher the priority.
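To work out the switch positions for a given loop ID, convert the ID to a seven-bit binary value. One way to do this from a host shell (illustrative only; this is not part of the array tools):

    # Convert loop ID 42 to its 7-bit switch pattern
    ID=42
    BITS=`echo "obase=2; $ID" | bc`        # 101010
    printf "switch pattern: %07d\n" $BITS  # 0101010, matching the figure above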
Attaching Power Cords Each enclosure (controller and disk enclosures) contains dual power supplies that must be connected to the power source (PDU). When connecting power cords for high availability installations, connect one enclosure power cord to one source (PDU) and the other power cord to an alternate source (PDU).
letters among all disk enclosures. “Cascading” refers to overload faults that occur on a backup PDU as a result of power surges after the primary PDU fails. – Serviceability. Choose PDU locations that prevent power cords from interfering with the removal and replacement of serviceable components. Also leave a 6-inch service loop to allow for the rotation of PDRUs.
Figure 66 Wiring Scheme for 1.6-Meter Rack (two 30A PDRUs, each with its own AC input)
Figure 67 Wiring Scheme for 2.0-Meter Rack (four 30A PDRUs, each with its own AC input)
Attaching SCSI Cables and Configuring the Disk Enclosure Switches NOTE! It is critical that all SCSI cables be tightened securely. Use the following steps to ensure the cable connectors are seated properly. 1. Connect the cable to the enclosure connector and tighten the mounting screws finger tight.
Full-Bus Cabling and Switch Configuration Cabling for a full bus configuration requires connecting one SCSI cable from the controller to the disk enclosure and setting the configuration switches. Figure 69 illustrates full-bus cabling connections for a six disk enclosure array. It is possible to configure any number of disk enclosures, from one to six, using this method.
Switch settings for each disk enclosure: segment 1 set to "1", all other segments set to "0", and the Tray ID set to a unique value for each enclosure.
Figure 69 Full-Bus Cabling (SCSI terminator required at the indicated connector)
Split-Bus Switch and Cabling Configurations
Split-bus cabling requires two SCSI cables from each disk enclosure to the controller enclosure. Split-bus cabling is typically used for installations with 3 or fewer disk enclosures. Cabling for a split-bus configuration is shown in Figure 71.
Note: For consistency and ease of management, it is recommended that you observe...
Switch settings for each disk enclosure: all segments set to "0", and the Tray ID set to a unique value for each enclosure.
Figure 71 Split-Bus Cabling (SCSI terminators required on both BCCs)
Bus Addressing Examples Each disk module within the disk array is identified by its channel number and SCSI ID. These values will differ depending on which type of bus configuration is used for the disk enclosures. See "How are disk modules in the array identified?" on page 244 for more information.
Figure 73 Full-Bus Addressing Example (the callout identifies a disk on channel 4 and its SCSI ID)
For operation on HP-UX, the host must contain an HP Fibre Channel Mass Storage/9000 adapter. For HP-UX supported adapters, installation, and configuration information for the HP Fibre Channel adapters, refer to the Hewlett-Packard Fibre Channel Mass Storage Adapter Service and User Manual (J3636-90002) supplied with the adapter, or you can download this document from http://www.docs.hp.com/.
Figure 74 MIA, RFI Gasket, and Fibre Channel Installation
Connect the Fibre Channel connectors to the controller MIAs. a. Remove the optical protectors from the ends of the MIAs and the Fibre Channel cables (Figure 74). b. Insert the Fibre Channel connectors into the MIAs. The fibre optic cable connector is keyed to install only one way.
Applying Power to the Disk Array Once the hardware installation is complete, the disk array can be powered up. It is important that the proper sequence be followed when powering up the components of the disk array. To ensure proper operation, power should be applied to the disk enclosures first and then to the controller enclosure, or all components can be powered up simultaneously.
Figure 75 Disk Enclosure Power Switch and System LEDs
Check the LEDs on the front of the disk enclosures (see Figure 77). The System Power LED (B in Figure 75) should be on and the Enclosure Fault LED (C) should be off. It is normal for the Enclosure Fault LED (amber) to go on momentarily when the enclosure is first powered on.
Figure 76 Controller Enclosure Power Switches
Check the controller enclosure LEDs (see Figure 78). The Power LED should be on and the remaining LEDs should be off. If any fault LED is on, an error has been detected. Refer to "Troubleshooting"...
Table 27 Normal LED Status for the Disk Enclosure
Front enclosure: System Fault – off; System Power – on (green); Disk Activity – flashing (green) when disk is being accessed; Disk Fault LED – off
Power Supply: Power Supply – on (green)
BCC Module: Term. ...
Figure 77 Disk Enclosure LEDs. Legend: A system fault LED, B system power LED, C disk activity LED, D disk fault LED, E power on LED, F term. pwr. LED, G full bus LED, H BCC fault LED, I bus active LED, J LVD LED, K fan fault LED
Table 28 Normal LED Status for the Controller Enclosure
Controller enclosure: Power On – on (green); Power Fault – off; Fan Fault – off; Controller Fault – off; Fast Write Cache – on (green) while data is in cache
Controller: Controller Power – on (green); Controller Fault – off; Heartbeat – blink (green); Status – ...
Figure 78 Controller Enclosure LEDs. Legend: A power on LED, B power fault LED, C fan fault LED, D controller fault LED, E fast write cache LED, F controller power LED, G controller fault LED, H heartbeat LED, I status LEDs, J fault B LED, K full charge B LED, L fault A LED, M full charge A LED, N power 1 LED...
Powering Down the Array
When powering down the disk array, the controller enclosure should be powered down before the disk enclosures. To power down the disk array:
1. Stop all I/Os from the host to the disk array.
2. Wait for the Fast Write Cache LED to go off, indicating that all data in cache has been written to the disks.
Verifying Disk Array Connection On Windows NT and Windows 2000 The HP Storage Manager 60 software is used to verify that the disk array is visible to the Windows host. See the HP Storage Manager 60 User’s Guide for instructions on installing and using the HP Storage Manager 60 software.
Class H/W Path Driver State H/W Type Description ============================================================================ target 8/8.8.0.6.0.0 CLAIMED DEVICE disk 8/8.8.0.6.0.0.0 sdisk CLAIMED DEVICE HP A5277A /dev/dsk/c0t0d0 /dev/rdsk/c0t0d0 disk 8/8.8.0.6.0.0.1 sdisk CLAIMED DEVICE HP A5277A /dev/dsk/c0t0d1 /dev/rdsk/c0t0d1 disk 8/8.8.0.6.0.1.0 sdisk CLAIMED DEVICE HP A5277A /dev/dsk/c0t1d0 /dev/rdsk/c0t1d0 disk 8/8.8.0.6.0.2.0 sdisk...
Interpreting Hardware Paths Each component on the disk array is identified by a unique hardware path. The interpretation of the hardware path differs depending on the type of addressing used to access the component. Two types of addressing are used with the Disk Array FC60: •...
The port value will always be 255 when using PDA. The loop address, Fibre Channel Host ID of the disk array controller module (two address possible, one for controller module A and one for module B), is encoded in the Bus and Target segments of the hardware path. For example, in the hardware path shown in Figure 80, the Bus value of 1 and Target value...
VSA is an enhancement that increases the number of LUNs that can be addressed on a fibre channel disk array to 16382. This compares with the 8 LUN limit imposed by PDA. The HP SureStore E Disk Array FC60 requires that 32 LUNs (0 - 31) be addressable. To implement VSA, the fibre channel driver creates four virtual SCSI busses, each capable of supporting up to eight LUNs.
The following information is returned:
SCSI describe of /dev/rdsk/c9t1d0:
vendor: hp
product id: ...
type: direct access
size: 10272 kbytes
bytes per sector: 512
If the LUN exists, the size will reflect the capacity of the LUN. If the size returned is 0 kbytes, there is no LUN 0 created for that logical SCSI bus.
A quick way to determine the LUN number is to multiply the value of the next-to-last segment by 8, and add the result to the last segment value. Using the above example, the LUN number is computed as follows: (3 x 8) + (5) = 29
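The same computation can be done in the shell (the target and device values are taken from the example above):

    # LUN number from a VSA device file cXtYdZ: (Y * 8) + Z
    T=3; D=5
    expr $T \* 8 + $D    # prints 29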
Installing the Disk Array FC60 Software (HP-UX Only) Once the disk array hardware is installed and operating, the disk array management software, diagnostic tools, and system patches must be installed. The software required for the Disk Array FC60 is distributed on the HP-UX Support Plus CD-ROM (9909 or later). System Requirements The following host system requirements must be met to install and use the Array Manager 60 utilities successfully.
Verifying the Operating System The Disk Array FC60 is supported on the following operating systems versions: • HP-UX 11.0, 11.11, and 10.20 Before installing the Disk Array FC60 system software, verify that you are running the required operating system version. To identify the operating system version, type: uname -a A response similar to the following should be displayed indicating the version of HP-UX the host is running:...
swlist
Execute the following command to create the required device files (this is not required if the system was re-booted):
insf -e
Run the Array Manager 60 amdsp command to re-scan the system for the array:
amdsp -R
Downgrading the Disk Array Firmware for HP-UX 11.11 Hosts
Controller firmware HP08 is not supported on HP-UX 11.11.
Configuring the Disk Array
HP-UX
After installing the disk array software, the following steps must be performed to configure the disk array for operation. These steps should be performed by the service-trained installer with assistance from the customer where appropriate.
Step 1.
Step 3. Reformat Disk Array Media

CAUTION: This step will destroy all data on the disk array and remove any LUN structure that has been created. If there is data on the disk array, make sure it is backed up before performing this step.

If a LUN structure has been created on the disk array either at the factory (A5277AZ) or by a reseller, do not perform steps 3, 4, and 5.
Step 5. Replace LUN 0

LUN 0 was created solely to allow the host to communicate with the disk array when it is first powered on. It should be replaced with a usable LUN. If it is not replaced, a substantial amount of storage will be wasted. To replace LUN 0, type:

    amcfg -R <cntrlr>:0 -d <channel:ID>,<channel:ID>..
Synchronize the controller date and time with the settings on the host to ensure valid time stamps. This ensures that any information created by the disk array, such as log entries, reflects the proper time of occurrence. To set the controller date and time, type:

    ammgr -t <ArrayID>

Step 8.
• For more information, see "Adding a Global Hot Spare" on page 296
• To use SAM, see "Adding a Global Hot Spare" on page 273

Step 10. Install Special Device Files

After binding LUNs, you must install special device files on the LUNs. This makes the LUNs usable by the operating system.
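A minimal sketch of this step using the standard HP-UX utilities referenced elsewhere in this chapter (the LUNs should then appear under /dev/dsk and /dev/rdsk):

    insf -e            # create device files for the new LUNs
    ioscan -fnC disk   # verify that the LUNs are visible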
Set up storage partitions if this premium feature is enabled. Set the disk array controller clocks.
Using the Disk Array FC60 as a Boot Device (HP-UX Only)

The Disk Array FC60 is supported for use as a boot device on the following HP 9000 servers running HP-UX 11.0:

– K-Class
– D-Class
– N-Class
– L-Class

Note: Not all levels of server PDC (processor dependent code) support Fibre Channel boot.
Solving Common Installation Problems Problem. When performing an ioscan, the host sees the disk array controllers but none of the disks or LUNs in the disk array. Solution. This is typically caused by powering on the disk array controller enclosure before powering on the disk enclosure(s).
Adding Disk Enclosures to Increase Capacity Scalability is an important part of the design of the HP SureStore E Disk Array FC60. The capacity of the disk array can be increased in a variety of ways to meet growing storage needs.
• Consider Adding More Than One Disk Enclosure - Because the process of adding disk enclosures involves backing up data and powering off the disk array, you should consider adding more than one enclosure to meet your future capacity needs. This will avoid having to redo the procedure each time you add another disk enclosure.
Identify the expanded disk array layout by performing the following tasks: a. Create a detailed diagram of the expanded HP FC60 array layout. Include all Fibre Channel and SCSI cabling connections. This diagram will serve as your configuration guide as you add the new enclosures. The Capacity Expansion Map should assist you in identifying where disks will be moved in the new configuration.
CAUTION: Do not proceed to the next step if any LUN is not in an optimal state and you intend to move any of the disks which comprise the LUN. Contact HP Support if the LUNs cannot be made OPTIMAL before moving the disk drives.

If you intend to move any global hot spares, remove them from the hot spare group as follows: a.
Configure the necessary disk enclosures for full-bus operation. See "Configuration Switches" on page 176. Set the disk enclosure DIP switches on both BCC A and BCC B to the following settings for full-bus operation:

    sw1 = 1  (This switch is set to 0 for split-bus mode.)
    sw2 = 0
    sw3 = 0
    sw4 = 0...
Set the disk Enclosure (Tray) ID switches. See "Disk Enclosure (Tray) ID Switch" on page 176. a. Set the Enclosure ID switches on both BCC A and BCC B cards to identify the disk enclosure. The Enclosure ID switch setting must be the same for both BCC A and BCC B.
Step 5. Completing the Expansion

CAUTION: The disk array components must be powered up in the specified sequence: disk enclosures first, followed by the controller enclosure. Failure to follow the proper sequence may result in the host not recognizing LUNs on the disk array.
Care must be taken not to cross the cables, as this may cause problems with applications that depend on a specific path. Rescan the disk array from the host using the ioscan -fnC disk command. The disk array and the paths to each LUN should be displayed. This completes the process of expanding the disk array.
Capacity Expansion Example

An example of expanding a Disk Array FC60 is shown in Figure 83. In this example, three new disk enclosures are added to a disk array with three fully loaded enclosures. The disk array is configured with five 6-disk LUNs. The original enclosures were operating in split-bus mode, and have been reconfigured to full-bus mode.
Disks are moved to the slot that corresponds to their original channel:ID. High availability is maintained by having no more than one disk per LUN or volume group on each channel.

Figure 83  Capacity Expansion Example
Tools for Managing the Disk Array FC60

Note: On Windows NT and Windows 2000, the disk array is managed using the HP Storage Manager 60 software. See the HP Storage Manager 60 User’s Guide for information on managing the disk array on Windows NT and Windows 2000.

There are three tools available for managing the Disk Array FC60 on HP-UX: the HP-UX System Administration Manager (SAM), Array Manager 60, and Support Tools Manager (STM).
Table 30  Management Tools and Tasks

Task                                     Array Manager 60
Checking disk array status               Yes (amdsp)
Managing LUNs                            Yes (amcfg)
Managing global hot spares               Yes (ammgr)
Assigning an alias to the disk array     Yes (ammgr)
Managing cache memory                    Yes (ammgr)
Managing the rebuild process             Yes (amutil)
Synchronizing controller date and time...
Installing the Array Manager 60 Software

The Array Manager 60 software must be installed on the host to which the disk array is connected. The software should have been installed with the disk array hardware. However, if you decide to move the disk array to a different host, you will need to install the software on the new host.
AM60Srvr Daemon

The AM60Srvr daemon is the server portion of the Array Manager 60 software. It monitors the operation and performance of all disk arrays, and services requests from clients used to manage the disk arrays. Tasks initiated from SAM or Array Manager 60 are serviced by the AM60Srvr daemon.
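To confirm that the daemon is running on the host, a generic process check is sufficient; the daemon's startup script location is not described in this excerpt, so this is only a sketch:

    ps -ef | grep AM60Srvr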
Managing Disk Array Capacity

During installation, a LUN structure is created on the disk array. This structure may meet your initial storage requirements, but at some point additional capacity may be required. This involves adding disks and binding new LUNs. Careful LUN planning will ensure that you achieve the desired levels of data protection and performance from your disk array.
Selecting Disks for a LUN

When binding a LUN, you must select the disks that will be used. The capacity of the LUN is determined by the number and capacity of the disks you select, and the RAID level. When selecting disks for a LUN, consider the following: •...
Selecting disks in the incorrect order of 1:2, 2:2, 1:3, and 2:3 results in mirrored pairs of 1:2/1:3 and 2:2/2:3. This puts the primary disk and mirror disk of each pair in the same enclosure, making the LUN vulnerable to an enclosure failure.

(Figure: incorrect and correct disk selection order)
internal management of enclosure components.) If the disk enclosure is configured for split-bus operation, both the even-numbered slots and the odd-numbered slots are assigned IDs of 0 - 4.

• When viewing status information for the disk array, you may also see the disk enclosure number and slot number displayed.
Figure 86  Disk Module Addressing Parameters (figure; shows slot numbers and SCSI IDs for full-bus and split-bus configurations, with the enclosure ID set to 3 and the enclosure connected to channel 2)

The disk module shown in the figure uses the following address parameters: Channel 2; SCSI ID 10 (full bus) or 2 (split bus); Enclosure 3; Slot 5.
Assigning LUN Ownership

When a LUN is bound, you must identify which disk array controller (A or B) owns the LUN. The controller that is assigned ownership serves as the primary I/O path to the LUN. The other controller serves as the secondary or alternate path to the LUN. If there is a failure in the primary I/O path and alternate links are configured, ownership of the LUN automatically switches to the alternate path, maintaining access to all data on the LUN.
To change the RAID level used by a LUN, you must unbind the LUN and rebind it using the new RAID level.

With the exception of RAID 0, all RAID levels supported by the disk array provide protection against disk failure. However, there are differences in performance and storage efficiency between RAID levels.
• If you choose to limit the number of global hot spares, make sure you are able to respond quickly to replace a failed disk. If an operator is always available to replace a disk, you may not need the added protection offered by multiple global hot spares.

Setting Stripe Segment Size

Another factor you may have to consider is the stripe segment size you use for a LUN.
Evaluating Performance Impact

Several disk array configuration settings have a direct impact on the I/O performance of the array. When selecting a setting, you should understand how it may affect performance. Table 31 identifies the settings that impact disk array performance and what the impact is. The LUN binding process impacts disk array performance.
Table 31  Performance Impact of Configuration Settings (cont’d)

Setting: Cache flush threshold (default 80%)
Function: Sets the level at which the disk array controller begins flushing write cache content to disk media. The setting is specified as the percentage of total available cache that can contain write data before flushing begins.
Table 31  Performance Impact of Configuration Settings (cont’d)

Setting: Cache flush limit (default 100%)
Function: Determines how much data will remain in write cache when flushing stops. It is expressed as a percentage of the cache flush threshold. For optimum performance this value is set to 100% by default.
Figure 87  Cache Flush Threshold Example (figure; with the cache flush threshold set to 80% and the cache flush limit set to 100%, data flushing starts when write data exceeds the flush threshold and stops when write data falls below it)
Adding Capacity to the Disk Array

As your system storage requirements grow, you may need to increase the capacity of your disk array. Disk array capacity can be increased in any of the following ways:

• You can add new disk modules to the disk array if there are empty slots in the disk enclosures.
•...
Bind a LUN with the new disks using the management tool of your choice:

– To use SAM, see "Binding a LUN" on page 267
– To use Array Manager 60, see "Binding a LUN" on page 289
– To use STM, see "Binding a LUN"...
Adding Additional Disk Enclosures

Adding additional disk enclosures is another way to increase the capacity of the disk array. Up to six disk enclosures can be added to a disk array. Because it requires shutting down and possibly reconfiguring the disk array, adding new disk enclosures should be done by a service-trained representative.
Bind a LUN with the new disks using the management tool of your choice:

– To use SAM, see "Binding a LUN" on page 267
– To use Array Manager 60, see "Binding a LUN" on page 289
– To use STM, see "Binding a LUN"...
Upgrading Controller Cache to 512 Mbytes

Controller cache can be upgraded from the standard 256 Mbytes of cache to 512 Mbytes. This provides improved I/O performance for write operations. See Table 58 on page 416 for cache upgrade kit part numbers. The cache upgrade kit must be installed by service-trained personnel only.
Table 32  Controller Cache Upgrade Kit Selection

Initial controller configuration                Cache Upgrade Kit(s)
Dual controllers, each with two 128 MB DIMMs    Two A5279A kits
Dual controllers, each with one 256 MB DIMM     One A5279A kit
Single controller with two 128 MB DIMMs         One A5279A kit
Single controller with one 256 MB DIMM          One A5279A kit (only one of the...
Managing the Disk Array Using SAM

Most of the tasks involved in everyday management of the disk array can be performed using SAM. Using SAM you can:

• Check disk array status
• Bind and unbind LUNs
• Add and remove global hot spares

Note: Does it make any difference which controller I select when managing the disk array?
Checking Disk Array Status

All aspects of disk array operation are continually monitored and the current status is stored for viewing. You can selectively view the status of any portion of the disk array configuration. To view disk array status: On the main SAM screen, double-click the Disks and File Systems icon.
Select a controller for the appropriate disk array from the Disk Devices list. Select the Actions menu, and the View More Information... menu option. The Main Status screen is displayed showing the overall status of the disk array. Click the appropriate button to view the detailed status for the corresponding disk array component.
(Screen shot: the Main Status screen displays general disk array status, with buttons for the detailed status of each indicated component.)
Interpreting Status Indicators

A common set of colored status indicators is used to convey the current operating status of each disk array component. The colors are interpreted as follows:

Green – The component is functioning normally. On a disk, it also indicates that the disk is part of a LUN....
Select the Actions menu, the Disk Array Maintenance menu option, then Modify Array Alias. The Modify Array Alias screen is displayed. Enter the name in the Alias field. An alias can contain up to 16 of the following characters: upper case letters, numbers, pound sign (#), period (.), and underscore (_).
Click the Disk Module Status button. The Disk Status window is displayed. (Screen callouts: all disks within the same group are marked; the status of the selected disk is shown; an option flashes the LEDs.)
Select the disk you want to identify. A check mark will appear on the selected disk and all the other disks in the same disk group. For example, if the selected disk is part of a LUN, all disks within the LUN will be checked. If the disk is a global hot spare, all global hot spares will be checked.
To bind a LUN: On the main SAM screen, double-click the Disks and File Systems icon. On the Disks and File Systems screen, double-click the Disk Devices icon. The Disk Devices list is displayed. There is an entry for each disk array controller. Select a controller for the appropriate disk array from the Disk Devices list.
(Screen shot: select unassigned disks for a new LUN; the order of the selected disks is displayed.)
Click the LUN # button and select the desired number for the LUN. You can also enter the LUN number directly in the field. Click the RAID Level button and select the desired RAID level for the LUN. Select the disks to include in the LUN. All unassigned disks are identified with a status of white.
Unbinding a LUN

Unbinding a LUN makes its capacity available for the creation of a new LUN. All disks in the LUN are returned to the Unassigned disk group when the LUN is unbound.

CAUTION: All data on a LUN is lost when it is unbound. Make sure you back up any important data on the LUN before unbinding it.
Note: Can I replace any LUN on the disk array? Yes. In addition, the replace command is the only way you can alter the configuration of LUN 0. LUN 0 is unique in that it must exist on the disk array to permit communication with the host.
Adding a Global Hot Spare

Global hot spares provide an additional level of protection for the data on your disk array. A global hot spare automatically replaces a failed disk, restoring redundancy and protecting against a second disk failure. For maximum protection against disk failures, it is recommended that you add a global hot spare for each channel.
(Screen shot: unassigned disks selected as hot spares.)
Select the disk to be used as a global hot spare. Only unassigned disks, identified by a white status indicator, are available for selection as hot spares. Click OK to add the global hot spare and exit the screen, or click Apply if you want to add more global hot spares.
Managing the Disk Array Using Array Manager 60

The Array Manager 60 command line utilities allow you to configure, control, and monitor all aspects of disk array operation. Array Manager 60 is intended for performing the more advanced tasks involved in managing the disk array. The Array Manager 60 utilities and the tasks they are used to perform are summarized in Table 33 and Table 34.
Table 33  Array Manager 60 Task Summary (cont’d)

Task                                    Command
Disk Array Configuration:
Assigning an Alias to the Disk Array    ammgr -D <ArrayAlias> <ArrayID>
Setting Cache Page Size                 ammgr -p {4 | 16} <ArrayID>
Setting the Cache Flush Threshold       ammgr -T <cntrlrID>:<percent> <ArrayID>
Setting the Cache Flush Limit           ammgr -L <cntrlrID>:<percent>...
Table 34  Array Manager 60 Command Summary

Command   Tasks
amcfg     Binding a LUN
          Unbinding a LUN
          Changing LUN Ownership
          Replacing a LUN
          Calculating LUN Capacity
ammgr     Adding a Global Hot Spare
          Removing a Global Hot Spare
          Assigning an Alias to the Disk Array
          Setting Cache Page Size
          Setting the Cache Flush Threshold
          Setting the Cache Flush Limit...
Command Syntax Conventions

The following symbols are used in the command descriptions and examples in this chapter.

Table 35  Syntax Conventions

Symbol   Meaning
< >      Indicates a variable that must be entered by the user.
|        Only one of the listed parameters can be used (exclusive OR).
[ ]      Values enclosed in these brackets are optional.
Selecting a Disk Array and Its Components

When using Array Manager 60, you must select the disk array you will be managing. In addition, many commands also require you to identify the controller, disk, or LUN within the disk array that will be impacted by the command. The command parameters used to select a disk array and its internal components are listed and described in Table 36.
Preparing to Manage the Disk Array

Before you begin using Array Manager 60 to manage your disk array for the first time, you may want to perform the following procedure. It will locate all the disk arrays on the host and allow you to assign an alias to each one.
Checking Disk Array Status

An important part of managing the disk array involves monitoring its status to ensure it is working properly. Changes in disk array status may indicate a possible hardware failure, so it is important to check disk array status regularly. All aspects of disk array operation are continually monitored and the current status is stored for viewing.
Table 37  Command Options for Displaying Disk Array Status (cont’d)

Option            Status Information Displayed
                  All status. This option displays all the information returned by the preceding options.
-p <devicefile>   Hardware path information. Displays hardware path information for the controller corresponding to the specified device file.
                  Rebuild status....
Figure 88  Disk Array Sample Status Output (amdsp)

    Vendor ID         = HP
    Product ID        = A5277A
    Array ID          = 000A00A0B80673A6
    Array alias       = Array1
    -----------------------------------------
    Array State       = READY
    Server name       = speedy
    Array type        =
    Mfg. Product Code = 348-0040789

    --- Disk space usage --------------------
    Total physical    = 271.4 GB...
    Vendor ID        = HP
    Product ID       = A5277A
    Array ID         = 000A00A0B80673A6
    Array alias      = Array1
    -----------------------------------------
    SCSI Channel:ID  = 1:0
    Enclosure
    Slot
    Disk State       = OPTIMAL
    Disk Group and Type = 060E86000238C6360F LUN

(Disk information – the Disk State of each disk assigned to a LUN should be OPTIMAL.)
    Information for Controller A - 000A00A0B80673A6:
    Controller Status  = GOOD
    Controller Mode    = ACTIVE
    Vendor ID          = HP
    Product ID         = A5277A
    Serial Number      = 1T00310110
    Firmware Revision  = 04000304
    Boot Revision      = 04000200

(Controller information – make sure the following conditions are met: both controllers should be ACTIVE...)
    Vendor ID     = HP
    Product ID    = A5277A
    Array ID      = 000A00A0B80673A6
    Array alias   = Array1
    -----------------------------------------
    Overall State of Array = READY

    Array configuration:
    Cache Block Size       = 4 KB / 4 KB
    Cache Flush Threshold  = 80 % / 80 %
    Cache Flush Limit      = 100 % / 100 %
    Cache Size...

(Cache settings)
Listing Disk Array IDs

You may find it useful to list the disk arrays recognized by the host. The list will include both the disk array ID (or S/N) and the alias name if one has been assigned. This is a quick way to determine the ID of each disk array on your system.
Managing LUNs

Using Array Manager 60 you can perform the following tasks:

• Binding and unbinding LUNs
• Calculating LUN capacity
• Changing LUN ownership
• Replacing a LUN

Binding a LUN

Binding LUNs is one of the most common tasks you will perform in managing the disk array.
• RAIDlevel – RAID level used for the LUN. Valid RAID levels are 0, 1, and 5. (RAID 0 support requires firmware version HP08 or later.) A RAID 0/1 LUN is created by selecting RAID 1 with more than two disks.
•...
Table 38  Command Options for Binding a LUN (cont’d)

Option   Description                                  Default
-force   Allows you to bind a LUN using two or more   If not specified, you cannot bind a LUN
         disks in the same enclosure or on the same   using multiple disks in the same
         channel.                                     enclosure or on the same channel.
Identifying Disks

Binding a LUN requires the use of unassigned disks. If you are not sure which disks are unassigned, you can determine which disks are available. To identify unassigned disks, type:

    amdsp -d <ArrayID>

The status of all disks in the array will be returned. The information includes the disk group of which each disk is a member.
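For example, on an array aliased Array1 (the alias is illustrative), the following lists every disk and its disk group, from which the unassigned disks can be read off:

    amdsp -d Array1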
Unbinding a LUN

Unbinding a LUN makes its capacity available for the creation of a new LUN. All disks assigned to the LUN are returned to the Unassigned disk group when the LUN is unbound.

CAUTION: All data on a LUN is lost when it is unbound. Make sure you back up any important data on the LUN before unbinding it.
Note: Does the primary path selected using LVM impact LUN ownership? Yes. The primary path established using LVM defines the owning controller for the LUN. This may override the controller ownership defined when the LUN was bound. For example, if controller A was identified as the owning controller when the LUN was bound, and LVM subsequently established the primary path to the LUN through controller B, controller B becomes the owning controller.
To replace a LUN, type:

    amcfg -R <cntrlrID>:<LUN> -d <channel:ID>,<channel:ID>.. -r <RAIDlevel> <options> <ArrayID>

• The parameters and options available when replacing a LUN are the same as those used when binding a LUN. See "Binding a LUN" on page 289.
Adding a Global Hot Spare

A global hot spare is added using an unassigned disk. If there are no unassigned disks available, you cannot add a global hot spare unless you install a new disk or unbind an existing LUN. To add a global hot spare, type:

    ammgr -h channel:ID <ArrayID>...
Managing Disk Array Configuration

Assigning an Alias to the Disk Array

If you have many disk arrays to manage, you may find it useful to assign an alias name to each disk array to help you identify them. A short, meaningful alias should be easier to remember than the disk array ID when using the Array Manager 60 commands.
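Using the ammgr -D syntax from Table 33 (the alias, and the array ID taken from the sample output in Figure 88, are illustrative):

    ammgr -D Array1 000A00A0B80673A6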
Managing the Universal Transport Mechanism (UTM)

On firmware HP08 and later, the Universal Transport Mechanism (UTM) serves as the SCSI communication path between the host and the disk array. In earlier versions of firmware, this communication was done using LUN 0. The UTM is configured as a separate LUN, which is used only for communication and not for storing data.
Note: After executing the above command, the disk array controllers must be manually reset or power cycled before the new setting will be invoked.

When the power on completes, execute the following commands:

    ioscan
    insf -e
    amdsp -R

Disabling the UTM

Although it is possible to disable the UTM, it is not recommended that you do so.
See Table 31 on page 250 for details on what performance impact altering these settings may have.

Setting Cache Page Size

Data caching improves disk array performance by storing data temporarily in cache memory. The cache page size can be set to either 4 Kbytes or 16 Kbytes.
Setting the Cache Flush Limit

Sets the amount of unwritten data to remain in cache after a flush is completed on the given controller. The cache flush limit sets the level at which the disk array stops flushing cache contents to the disks. This value is expressed as a percentage of the current cache flush threshold.
Disabling Disk Module Write Cache Enable (WCE)

Note: To ensure optimum protection against data loss, it is recommended that Write Cache Enable be disabled on all disks in the array.

Disabling disk WCE will impact disk array performance. However, it reduces the potential for data loss during a power loss.
Enabling Disk Write Cache Enable (WCE)

CAUTION: WCE should only be enabled in environments that provide uninterruptible power to the disk array. A loss of power to the disk array may result in data loss with WCE enabled.

If maximum I/O performance is critical, disk WCE can be enabled on all the disks in the array.
Performing Disk Array Maintenance

At some point during operation of the disk array, you may need to perform maintenance tasks that involve error recovery and problem isolation. This section describes the tasks involved in maintaining the disk array.

Locating Disk Modules

Array Manager 60 provides the means of identifying a disk module by flashing its amber Fault LED.
Managing the Rebuild Process

If a disk fails, the disk array automatically begins the rebuild process the first time an I/O is performed to the LUN, provided that a global hot spare is available. If no global hot spare is available, the rebuild will not occur until the failed disk has been replaced. While a rebuild is in process, you can check its progress and change the rate at which the rebuild occurs.
• amount identifies the number of blocks to rebuild at a time. This value can range from 1 to … and specifies the number of 512-byte blocks processed during each rebuild command. The higher the setting, the more blocks that will be processed at a time, reducing I/O performance.
Note: What should I do if parity errors are detected? If errors are detected during a parity scan, it is recommended that you contact your Hewlett-Packard service representative immediately. The occurrence of parity errors may indicate a potential problem with the disk array hardware.

Displaying Parity Scan Status

If a parity scan is in progress, you can monitor its progress.
previous firmware releases are logged in the major event log. Earlier versions of firmware (prior to HP08) use the standard log file format, also called Asynchronous Event Notification (AEN).

Note: On firmware HP08 and later, major event logging is enabled by default. If major event logging has been disabled by disabling the UTM, only standard log entries will be available.
Viewing Disk Array Logs

To display the disk array controller log files, type:

    amlog [-s <StartTime>] [-e <EndTime>] [-t <Recordtype[,Recordtype]..>] [-c] [-d <LogDir>] [-a <ArrayID>]

• StartTime identifies the starting date and time. Log entries with an earlier date and time will not be displayed.
The actual ArrayID must be used here. An alias cannot be used because alias names are not recorded in the log file.

Command Example

The following example displays the major event log entries for disk array rack_1. The log entries displayed are limited to only critical entries, and entries made after 0900 on 15 May 2000.
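The original command line for this example falls outside this excerpt. A plausible reconstruction using only the flags listed above — the time format and the record-type keyword are assumptions, not documented syntax — might be:

    amlog -s 05/15/2000:09:00 -t critical -a rack_1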
    FRU State = Failed
    Decoded SCSI Sense: Non-media Component Failure
    Reporting LUN

For information on interpreting SCSI sense codes, see "SCSI Sense Codes" on page 327.

Flushing Disk Array Log Contents

Array Manager 60 automatically retrieves the contents of the disk array controller log at regular intervals, typically every 15 minutes.
To purge the oldest log file in the host directory, type:

    amutil -p

Note: Always use the amutil -p command to purge the controller logs. This command maintains the catalog pointers used to access the log files. Using a system command such as rm to remove the log files will cause log catalog errors.
Note: The patches are not currently included on the HP-UX Support Plus CD-ROM. They must be downloaded from the indicated web sites.

Upgrading Disk Firmware

The firmware on each disk can be upgraded individually. Because different disks require different firmware files, it may be necessary to...
Managing the Disk Array Using STM

STM is an online diagnostic tool, but it can be used to perform some of the common tasks involved in managing the disk array. The tasks described here are available to all users and do not require the purchase of a license.
Unbinding a LUN

The STM Expert Tool can be used to unbind a LUN. See "Using the STM Expert Tool" on page 355 for more information on running and using this tool.

STM Tool     Action
xstm, mstm   Select: Tools > Expert Tool > Run
             Select: Utilities >...
Locating Disk Modules

The STM Expert Tool can be used to locate disk modules. The LEDs on the disk array components are flashed to aid in identification. See "Using the STM Expert Tool" on page 355 for more information on running and using this tool.
Status Conditions and Sense Code Information

The following tables may be useful in interpreting the various types of disk array status information that is returned by the management tools. Where appropriate, any required action is identified.

LUN Status Conditions

The LUN status condition terminology used by Array Manager 60 (AM60) may differ from that used by STM. Both terms are identified in the table.
Table 40  LUN Status Conditions (cont’d)

Status                               Definition/Action
AM60: DEGRADED--REPLACED DISK        A rebuild is in progress on the LUN.
      BEING REBUILT                  No action is required.
STM:  DEGRADED - 2

AM60: DEAD--MORE DISK FAILURES       Multiple, simultaneous disk failures have occurred on the
      THAN REDUNDANT                 LUN, causing data to be inaccessible....
Disk Status Conditions

The disk status condition terminology used by Array Manager 60 (AM60) may differ from that used by STM. Both terms are identified in the table.

Table 41  Disk Status Conditions

Status                Definition/Action
AM60: OPTIMAL         The disk is operating normally.
STM:  OPTIMAL (OPT)   No action is required.
Table 41  Disk Status Conditions (cont’d)

Status                   Definition/Action
AM60: READ FAILED        The disk array could not read from the disk.
STM:  FLT - 19           Replace the failed disk.

AM60: WRONG BLOCK SIZE   The disk uses an incompatible block size (not 512 bytes).
STM:  OFF - 22           Replace with a supported disk.
Component Status Conditions Component status conditions are organized into the categories listed in Table 42. The interpretation and action associated with a status depends on the component. See Table 51 on page 379 for more information on Disk System SC10 component status. The component status condition terminology used by Array Manager 60 (AM60) may differ from that used by STM.
FRU Codes The FRU codes indicate which disk array component is responsible for the log entry. Log entries that do not involve disk modules typically require you to interpret the FRU Code and the FRU Code Qualifier values to determine which component is identified. To simplify reporting events, components within the disk array have been placed in FRU groups.
Table 43  FRU Code Groups

FRU Code Value   Group Description
0x08             Disk Enclosure Group – comprises attached disk enclosures. This group includes the power supplies, environmental monitor, and other components in the disk enclosure. See "Disk Enclosure Group FRU Code Qualifier" on page 326 for information on identifying component and status.
Controller Enclosure Group FRU Code Qualifier

When the Controller Enclosure group is identified (FRU Code = 0x06), the FRU Code Qualifier is interpreted as follows:

FRU Code Qualifier fields: a Status & Component ID byte, containing a Component Status field and a Component ID field.

Component Status:

Value   Status
        Optimal...
Value         Component
              Unspecified Device
              Power Supply
              Cooling Element
              Temperature Sensors
              Audible Alarm
              Environmental Services Electronics
              Controller Electronics
              Nonvolatile Cache
              Uninterruptible Power Supply
0x0C - 0x13   Reserved
0x14          SCSI Target Port
0x15          SCSI Initiator Port
Disk Enclosure Group FRU Code Qualifier

When the Disk Enclosure group is identified (FRU Code = 0x08), the FRU Code Qualifier is interpreted as follows:

FRU Code Qualifier fields: a Status & Component ID byte (see the Controller Enclosure group for values) and a Disk Enclosure ID byte.

Field: Reserved...
SCSI Sense Codes

Table 44 lists the SCSI sense codes that may be returned as part of the log entry. This information may be helpful in interpreting log entries. Only the Additional Sense Code and Additional Sense Code Qualifier fields are required to identify each condition.

Table 44  SCSI Sense Codes

Additional...
Table 44  SCSI Sense Codes (cont’d)

If the accompanying sense key = 4, the error is interpreted as follows:

Unrecovered Write Error – Data could not be written to media due to an unrecoverable RAM, battery, or drive error.
Table 44  SCSI Sense Codes (cont’d)

Unrecovered Read Error – An unrecovered read operation to a drive occurred and the controller has no redundancy to recover the error (RAID 0, degraded RAID 1, degraded mode RAID 3, or degraded RAID 5).

Miscorrected Data Error - Due to Failed Drive Read – A media error has occurred on a read operation during a reconfiguration operation,...
Table 44  SCSI Sense Codes (cont’d)

Logical Block Address Out of Range – The controller received a command that requested an operation at a logical block address beyond the capacity of the logical unit.
Table 44  SCSI Sense Codes (cont’d)

Device Internal Reset – The controller has reset itself due to an internal error condition.

Default Configuration has been Created – The controller has completed the process of creating a default logical unit.
Table 44  SCSI Sense Codes (cont’d)

Commands Cleared by Another Initiator – The controller received a Clear Queue message from another initiator. This error is to notify the current initiator that the controller cleared the current initiator's commands if it had any outstanding.
Table 44 SCSI Sense Codes (cont’d) Additional Additional Sense Sense Code Code Qualifier Interpretation Drive No Longer Usable. The controller has set a drive to a state that prohibits use of the drive. The value of N in the ASCQ indicates the reason why the drive cannot be used.
The controller has detected a drive with Mode Select parameters that are not recommended or which could not be changed. Currently this indicates the QErr bit is set incorrectly on the drive specified in the FRU field of the Request Sense data.
Table 44  SCSI Sense Codes (cont’d)

Write Back Cache Battery Has Been Discharged – The controller’s battery management has indicated that the cache battery has been discharged.

Write Back Cache Battery Charge Has Completed – The controller’s battery management has indicated that the cache battery is operational.
Table 44  SCSI Sense Codes (cont’d)

Diagnostic Failure on Component NN (0x80 - 0xFF) – The controller has detected the failure of an internal controller component. This failure may have been detected during operation as well as during an on-board diagnostic routine.
Table 44 SCSI Sense Codes (cont’d) Additional Additional Sense Sense Code Code Qualifier Interpretation Internal Target Failure The controller has detected a hardware or software condition that does not allow the requested command to be completed. If the accompanying sense key is 0x04: Indicates a hardware failure.
Table 44  SCSI Sense Codes (cont’d)

Drive Reported Reservation Conflict – A drive returned a status of reservation conflict.

Data Phase Error – The controller encountered an error while transferring data to/from the initiator or to/from one of the drives.

Overlapped Commands Attempted – The controller received a tagged command while it had an untagged command pending from the same initiator, or it...
Table 44  SCSI Sense Codes (cont’d)

Drive IO Request Aborted – I/O issued to a failed or missing drive due to a recently failed or removed drive. This error can occur as a result of I/Os in progress at the time of a failed or removed drive.
Table 44  SCSI Sense Codes (cont’d)

Quiescence Is In Progress or Has Been Achieved

Quiescence Could Not Be Achieved Within the Quiescence Timeout Period

Quiescence Is Not Allowed

A Parity/Data Mismatch was Detected – The controller detected inconsistent parity/data during a parity verification.
Table 44  SCSI Sense Codes (cont’d)

Command Lock Violation – The controller received a Write Buffer Download Microcode, Send Diagnostic, or Mode Select command, but only one such command is allowed at a time and there was another such command active.
Table 44  SCSI Sense Codes (cont’d)

Controller Removal/Replacement Detected or Alternate Controller Released from Reset – The controller detected the activation of the signal/signals used to indicate that the alternate controller has been removed or replaced.
Table 44 SCSI Sense Codes (cont’d) Additional Additional Sense Sense Code Code Qualifier Interpretation Recovered processor memory failure The controller has detected and corrected a recoverable error in processor memory. Recovered data buffer memory error The controller has detected and corrected a recoverable error in the data buffer memory.
Table 44  SCSI Sense Codes (cont’d)

20/21  Fibre Channel Destination Channel Error
       ASCQ = 20: Indicates redundant path is not available to devices
       ASCQ = 21: Indicates destination drive channels are connected to each other
       Sense byte 26 will contain the Tray ID
       Sense byte 27 will contain the Channel ID
Overview

STM (Support Tools Manager) is the primary diagnostic tool available for the Disk Array FC60. For diagnosing problems, STM provides the capability to gather and display detailed status information about the disk array. STM can also be used to perform common management tasks.
Support Tools Manager

The Support Tools Manager (STM) host-based utility is the primary online diagnostic tool available for the HP SureStore E Disk Array FC60, and is included with the HP-UX instant ignition and support media. STM provides the capability for testing, configuring, and evaluating the operational condition of the disk array.
xstm — the X Windows Interface

xstm is the X Windows screen-based STM interface. Because it is the easiest to use, xstm is the recommended interface for systems that support graphical displays. The main xstm window displays a map representing system resources. The STM system map represents each Disk Array FC60 as two icons labeled “A5277A Array”.
mstm — the Menu-based Interface mstm is the menu-based STM interface. It serves as an alternate interface for systems that do not support graphical displays. The main mstm window displays a list of system resources. The Disk Array FC60 is identified as product type “A5277A Array”.
Figure 90  mstm Interface Main Window
STM Tools

The STM tools available for use with the HP SureStore E Disk Array FC60 are listed in Table 45.

Table 45  Available Support Tools

Tool type     Description
Information   Provides detailed configuration and status information for all disk array components.
Expert        Provides the capability to perform common disk array management tasks.
Using the STM Information Tool

The STM Information Tool gathers status and configuration information about the selected disk array and stores this information in three logs: the information log, the activity log, and the failure log.

Running Information Tool in X Windows

At the system prompt: –...
Running Information Tool in Menu Mode

At the system prompt:
– Type mstm
– Select Ok

To select the desired disk array:
– Scroll down using the arrow key, select the A5277A Array
– Press <Enter>.

To run the Information Tool and display the Information log:
–...
Interpreting the Information Tool Information Log

The Information Log contains status and configuration information for all disk array components. The log is separated into the following sections:

• Controller Enclosure – information for the components in the disk array controller enclosure
•...
Using the STM Expert Tool The Expert Tool provides the capability to manage the HP SureStore E Disk Array FC60. Before using the Expert Tool for the first time you are encouraged to read through the Expert Tool help topics. The Step-by-Step instructions in particular provide useful tips on using the Expert Tool.
Running Expert Tool in Menu Mode

At the system prompt:
– Type mstm
– Select Ok

To select the disk array:
– Scroll down using the arrow key, select A5277A ARRAY.
– Press <Enter>

To run the Expert tool:
– Select Menubar on or use arrow keys to get to Menubar
–...
Table 46  Expert Tool Menus and Descriptions

Menu        Option           Description
Logs        View Event Log   Displays selected event log entries
Tests       Parity Scan      Perform a parity scan on a LUN.
Utilities   Bind LUN         Bind selected disk modules into a LUN with a specified RAID level.
Introduction The modular design of the Disk Array FC60 simplifies the isolation and replacement of failed hardware components. Most disk array components are hot-swappable Field Replaceable Units (FRUs), which can be replaced while the disk array is operating. Some of the FRUs are customer replaceable. Other array components can be replaced in the field, but only by a trained service representative.
About Field Replaceable Units (FRUs) The Disk Array FC60 consists of a Controller Enclosure and one or more SureStore E Disk System SC10 enclosures. Table 47 identifies the disk array FRUs and whether they are customer replaceable. See "Removal and Replacement" on page 383 for more information.
HP-UX Troubleshooting Tools There are several tools available for troubleshooting the disk array on an HP-UX host. This includes monitoring the operation of the disk array and gathering information that will help identify and solve the problem. • Array Manager 60 - primarily used to manage the disk array, Array Manager 60 can also be used to check the status of the disk array and to retrieve log information.
EMS Monitor Event Severity Levels

Each event detected and reported by the EMS monitor is assigned a severity level, which indicates the impact the event may have on disk array operation. The following severity levels are used for all events:

Critical – An event that causes host system downtime, or other loss of service.
• Probable Cause/Recommended Action – The cause of the event and suggested steps toward a solution. This information should be the first step in troubleshooting.

    Notification Time: Thu Aug 6 15:18:03 1998
    yourserver sent Event Monitor notification information:
    /peripherals/events/mass_storage/LVD/enclosure/10_12.8.0.255.0.10.0 is !=1.
Disk Array Installation/Troubleshooting Checklist The following checklist is intended to help isolate and solve problems that may occur when installing the disk array. • Check Fibre Optic and SCSI Cables and SCSI Terminators: – No damaged fibre optic cables – No damaged or loose screws on connectors –...
Power-Up Troubleshooting

When the disk array is powered up, each component performs an internal self-test to ensure it is operating properly. Visual indications of power-up are:

• The green Power LED on the controller enclosure is on
• The green Power LED on each disk enclosure is on
•...
Note: If no LEDs are ON and the fans are not running, it indicates that no AC power is being supplied to the disk array power supply modules. Check the input AC power to the disk array. See "Applying Power to the Disk Array" on page 198 for information on powering up the disk array.
Controller Enclosure LEDs

Figure 92 shows the locations of the status LEDs for the controller enclosure. Table 48 summarizes the operating LED states for all components within the controller enclosure.

Figure 92 legend: A – Power On LED; B – Power Fault LED; C – Fan Fault LED; D – Controller Fault LED; E – Fast Write Cache LED; F – Controller Power LED...
Table 48  Normal LED Status for Controller Enclosure

Module                 LED                Normal State
Controller Enclosure   Power On           On (green)
                       Power Fault        Off
                       Fan Fault          Off
                       Controller Fault   Off
                       Fast Write Cache   On (green) while data is in cache
Controller             Controller Power   On (green)
                       Controller Fault   Off
                       Heartbeat          Blink (green)
                       Status...
Master Troubleshooting Table

Table 49 contains troubleshooting information for the controller enclosure and modules.

Table 49  Master Troubleshooting – Controller

Symptom: Controller LED (front cover) is on and the...
Possible Cause: A controller is missing or unplugged.
Procedure: Check the power LEDs on both controller modules....
Table 49  Master Troubleshooting – Controller (cont’d)

Symptom: Controller enclosure Fan Fault LED(s) (front cover) are on.
Possible Cause: A controller enclosure fan failure caused one or both controller(s) to overheat.
Procedure: 1. Stop all activity to the controller module and turn off the power. 2....
Table 49  Master Troubleshooting – Controller (cont’d)

Symptom: Software errors occur when attempting to access controller or disks.
Possible Cause: A software function or configuration problem.
Procedure: Check the appropriate software and documentation to make sure the system is set up correctly or that the proper command was executed.
Table 49  Master Troubleshooting – Controller (cont’d)

Controller Fan Module
Symptom: Fan Fault LED is on.
Possible Cause: One or both of the fans in the controller fan module has failed.
Procedure: Replace the controller fan module.

Symptom: The power supply fan...
Procedure: 1....
Table 49  Master Troubleshooting – Controller (cont’d)

Symptom: “Battery Low” error issued by software.
Possible Cause: Power turned OFF for an extended period and drained battery power.
Procedure: Turn ON the power and allow the controller module to run 7 hours to recharge the batteries.
Table 49  Master Troubleshooting – Controller (cont’d)

Symptom: Power Supply LED (front cover) is on.
Possible Cause: A power supply module is missing or not plugged in properly.
Procedure: Insert and lock the power supply module into place. If the Fault LED is still on, go to...
SureStore E Disk System SC10 Troubleshooting This section contains information on identifying and isolating problems with the Disk System SC10 disk enclosure. Disk Enclosure LEDs Figure 93 shows the locations of the disk enclosure status LEDs. Table 50 summarizes the operating LED states for all components within the disk enclosure.
Figure 93 legend: A – System fault LED; B – System power LED; C – Disk activity LED; D – Disk fault LED; E – Power On LED; F – Term. Pwr. LED; G – Full Bus LED; H – BCC Fault LED; I – Bus Active LED; J – LVD LED; K – Fan Fault LED

Figure 93  Disk Enclosure LEDs

Table 50  Disk Enclosure LED Functions
Table 50  Disk Enclosure LED Functions (cont’d)

LED          State      Indication
BCC Fault    Amber      Self-test / fault
             Off        Normal operation
             Flashing   Peer BCC DIP switch settings do not match
LVD          Green      Bus operating in LVD mode
             Off        Bus operating in single-ended mode
Term. Pwr.   Green      Termination power is available from the host.
Note: It is normal for the amber Fault LED on a component to go on briefly when the component initially starts up. However, if the Fault LED remains on for more than a few seconds, a fault has been detected.

Interpreting Component Status Values (HP-UX Only)

Common status terms have specific indications for various disk enclosure components.
Isolating Causes Table 52 lists the probable causes and solutions for problems you may detect on the disk enclosure. When more than one problem applies to your situation, investigate the first description that applies. The table lists the most basic problems first and excludes them from subsequent problem descriptions.
Table 52  Disk Enclosure Troubleshooting Table (cont’d)

Problem Description: Power Supply LED is amber
HW Event Category: Critical
LED State: Amber
Status: Critical
Probable Cause/Solution:
– An incompatible or defective component caused a temporary fault.
– Power supply hardware is faulty. Unplug the power cord and wait for the LED to turn off....
Table 52  Disk Enclosure Troubleshooting Table (cont’d)

Problem Description: Peer BCC status, temperature, and voltage are not available
HW Event Category: Major Warning
LED State: none
Status: Non-critical
Probable Cause/Solution: Firmware on BCC A and BCC B are different versions (both BCCs), or the internal bus is faulty.
Overview This chapter describes removal and replacement procedures for the disk array hot- swappable modules that are customer replaceable. Hot-swappable modules can be replaced without impacting host interaction with the disk array. Procedures for replacing the following modules are included in this chapter: •...
The HP SureStore E Disk Array FC60 is fully covered by a warranty from Hewlett-Packard. Additional support services may have also been purchased for the disk array. During the warranty period, or if the product is covered by a service contract, it is recommended that you contact your service representative for all service and support issues.
Disk Enclosure Modules This section describes the procedures for replacing the hot swappable modules in the disk enclosure. Disk Module or Filler Module Hot Swappable Component! This procedure describes how to add or replace disk modules and disk slot filler modules. When adding or replacing disk filler modules use the same procedure, ignoring any steps or information that applies to disk modules only.
Note: When a disk module is replaced, the new disk inherits the group properties of the original disk. For example, if you replace a disk that was part of LUN 1, the replacement will also become part of LUN 1. If the disk is a replacement for a global hot spare or an unassigned disk, the replacement will become a global hot spare or an unassigned disk.
Figure 94  Disk Module Removal (legend: A – ESD plug-in; B – cam latch; C – handle)

Installing a Disk Module or Filler Module

CAUTION: Touching the disk circuit board can cause a high-energy discharge and damage the disk. Disk modules are fragile and should be handled carefully.
Note: If the disk module you are installing has been removed from another Disk Array FC60, you should ensure that the module has a status of Unassigned. This is done by unbinding the LUN the disk module was a part of in the original disk array.
Close the cam latch to seat the module firmly into the backplane. An audible click indicates the latch is closed properly. Check the LEDs (D in Figure 96) above the disk module for the following behavior: – Both LEDs should turn on briefly. –...
Figure 96  Disk Module Replacement (legend: A – handle; B – cam latch; C – capacity label; D – LEDs)
Disk Enclosure Fan Module Hot Swappable Component! A failed fan module should be replaced as soon as possible. There are two fan modules in the enclosure. If a fan fails, the remaining fan module will maintain proper cooling. However, if the remaining fan module fails before the defective fan is replaced, the disk enclosure must be shut down to prevent heat damage.
Figure 97  Disk Enclosure Fan Module Removal and Replacement (legend: A – locking screw; B – pull tab)

Installing the Fan Module

Slide the replacement fan module into the empty slot (C in Figure 97). Tighten the locking screws (A). Check the fan module LED for the following behavior: –...
Disk Enclosure Power Supply Module Hot Swappable Component! A failed power supply module should be replaced as soon as possible. When one power supply fails, the remaining power supply will maintain the proper operating voltage for the disk enclosure. However, if the remaining power supply fails before the first power supply is replaced, all power will be lost to the disk enclosure.
Figure 98  Disk Enclosure Power Supply Module Removal and Replacement (legend: A – cam handle; B – locking screw; C – power supplies; D – power supply slot)

Installing the Power Supply Module

With the handle down, slide the replacement power supply into the empty slot (D in Figure 98).
Controller Enclosure Modules This section provides removal and replacement procedures for the controller enclosure modules, plus the controller enclosure front cover. Most controller modules are hot swappable, however certain restrictions need to be observed for some modules, as identified in these descriptions. The controller modules, the controller fan module, and the BBU are accessed from the front of the controller enclosure.
Front Cover Removal/Replacement

Hot Swappable Component!

To gain access to the front of the controller module, the controller fan module, or the battery backup unit (BBU), the front cover must be removed.

Removing the Front Cover

Pull the bottom of the cover out about one inch to release the pins (see Figure 99). Slide the cover down one inch and pull it away from the controller enclosure.
Installing the Front Cover

Slide the top edge of the cover up under the lip of the chassis. Push the cover up as far as it will go, then push the bottom in until the pins snap into the mounting holes.

Controller Fan Module

Hot Swappable Component!

Do not operate the controller enclosure without adequate ventilation and...
Figure 100  Controller Fan Module Removal and Replacement (To remove: loosen the captive screw, pull firmly on the handle, and remove the CRU. To install: push the controller fan CRU firmly into the slot and tighten the captive screw.)

Installing the Controller Fan Module

Slide the new module into the slot and tighten the screw. The captive screw is spring-loaded and will not tighten unless it is inserted all the way into the chassis.
Battery Backup Unit (BBU) Removal/Replacement

Hot Swappable Component!

Note: If the Fast Write Cache LED is on when the BBU is removed from the enclosure (or if the BBU fails), write caching will be disabled and the write cache data will be written to disk.
Installing the BBU Unpack the new BBU. Save the shipping material for transporting the used BBU to the disposal facility. Fill in the following information on the “Battery Support Information” label on the front of the battery. See Figure 102. a.
Dispose of the old BBU.

Note: Dispose of the used BBU according to local and federal regulations, which may include hazardous material handling procedures.

Power Supply Fan Module Removal/Replacement

Hot Swappable Component!

CAUTION: Do not operate the enclosure without adequate ventilation and cooling to the power supplies.
Figure 103  Power Supply Fan Module Removal and Replacement

Installing the Power Supply Fan Module

Slide the power supply fan module into the enclosure. The latch will snap down when the module is seated properly. If the latch remains up, lift up on the ring/latch and push in on the module until it snaps into place.
Power Supply Module Removal/Replacement

Hot Swappable Component!

A power supply should be replaced as quickly as possible to avoid the possibility of the remaining supply failing and shutting down the disk array.

Removing the Power Supply Module

Turn off the power switch and unplug the power cord from the failed power supply module.
Figure 105 Power Supply Module Removal and Replacement
Installing the Power Supply Module
Slide the supply into the slot until it is fully seated and the latch snaps into place. Plug in the power cord and turn on the power (see Figure 104).
SCSI Cables
Replacing SCSI cables requires that the disk enclosure be shut down. Shutting down the enclosure degrades the performance of the array during the replacement. When the replacement is complete and the disk enclosure is powered up, the array performs a rebuild, because I/O has occurred to the array while the disk enclosure was powered off.
Once the disk enclosure is powered up, check the status of the disk modules using one of the software management tools. Initially, the disk modules’ status will be either “write failed” or “no_response.” Eventually, all the disk modules should return to “replaced” status.
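On HP-UX, this check can be scripted around the Array Manager 60 amdsp status command described elsewhere in this guide. The following is a minimal sketch, not a verified procedure: the ArrayID is a placeholder, and the loop assumes the status strings above appear verbatim in the amdsp output, which may vary with firmware revision.

    ARRAY_ID=myarray    # placeholder: use your disk array S/N or alias
    # Poll until no disk module reports "write failed" or "no_response".
    while amdsp "$ARRAY_ID" | grep -E 'write failed|no_response' > /dev/null
    do
        sleep 30        # allow time for modules to return to "replaced" status
    done
    echo "All disk modules have returned to replaced status."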
Models and Options
The HP SureStore E Disk Array FC60 consists of two products: the A5277A/AZ controller enclosure and the A5294A/AZ SureStore E Disk System SC10, or disk enclosure. Each of these products has its own options, as indicated below.
A5277A/AZ Controller Enclosure Models and Options
Table 54 A5277A/AZ Product Options
Controller Options (must order one option):
• Single controller module with 256-Mbyte cache, one Media Interface Adapter, and one controller slot filler module. Configured with HP-UX firmware.
• Two controller modules with 256-Mbyte cache and two Media Interface Adapters. Configured with HP-UX firmware.
A5294A/AZ Disk Enclosure SC10 Models and Options
Order the following product and options as required, entering them as sub-items to the A5277A and A5277AZ products above.
• The A5294A disk enclosure is a field-racked SureStore E Disk System SC10 integrated by a service-trained engineer.
Table 55 A5294A Custom Cabling Option
Description: Delete one 2-meter cable included in the A5294A product and add one 5-meter VHDCI SCSI cable for connecting the A5277A to an A5294A in a different rack.
Table 56 A5294A/AZ Storage Capacity Options
Note: All disk enclosures ordered with a single A5277A/AZ must have identical Storage Capacity Options.
Disk Array FC60 Upgrade and Add-On Products
Order the following parts to expand or reconfigure your original purchase:
Table 58 Upgrade Products
Order No. | Description
A5276A | 9.1-Gbyte disk drive module, 10K rpm, Ultra2 LVD
A5282A | 18.2-Gbyte disk drive module, 10K rpm, Ultra2 LVD
A5633A | 18.2-Gbyte disk drive module, 15K rpm, Ultra2 LVD
A5595A | ...
PDU/PDRU Products
Hewlett-Packard offers the following PDUs and PDRUs, with US and international power options, to meet electrical requirements:
Table 59 PDU/PDRU Products (Supported on Original Racks)
Order No. | Description
E7676A | 19-inch, 100-240 V, 16 A, 1 C20 inlet, 10 C20 outlets
E7671A | 19-inch, 100-240 V, 16 A, 1 C20 inlet, 2 C19 & ...
Replaceable Parts
A5277A/AZ Controller Enclosure Replaceable Parts
Table 60 Controller Enclosure Replaceable Parts
Part # | Exchange Part Number | Field Replaceable Unit
A5278-60001 | A5278-69001 | HP-UX Controller Module (5V model) w/32-MB SIMM (no cache DIMMs). This part has been replaced by the A5278-60006.
Table 60 Controller Enclosure Replaceable Parts (cont’d)
Part # | Field Replaceable Unit
A5277-60004 | Power Supply Modules
A5277-60002 | Power Supply Fan Module
A5277-60001 | Front Door Assembly
5021-1121 | Terminator, SCSI, 68-pin, LVD
5064-2464 | Media Interface Adapter (MIA)
A5294A/AZ Disk Enclosure Replaceable Parts
Table 61 Disk Enclosure Replaceable Parts
Replacement Part ...
AC Power:
AC Voltage and Frequency:
• 120 VAC (100-127 VAC), 50 to 60 Hz, single phase
• 230 VAC (220-240 VAC), 50 to 60 Hz, single phase
• Auto-ranging
Current:
Voltage | Typical Operating Current | Maximum Operating Current | In-Rush Current
Note: The HP SureStore E Disk Array FC60 has been tested for proper operation in supported Hewlett-Packard cabinets. If the disk array is installed in an untested rack configuration, care must be taken to ensure that all necessary environmental requirements are met, including power, airflow, temperature, and humidity.
Non-operating Environmental (shipping and storage):
• Temperature: -40 °C to 70 °C (-40 °F to 158 °F)
• Maximum gradient: 20 °C per hour (36 °F per hour)
• Relative humidity: 10% to 90% RH @ 28 °C (wet bulb)
A5294A/AZ Disk Enclosure Specifications
Dimensions:
Height: 5.91 in. (15.0 cm)
Width: 18.9 in. (48.0 cm)
Depth: 27.2 in. (69.1 cm)
Weight:
Component | Weight of Each (lbs) | Quantity | Subtotal (lbs)
Disk Drive (HH) | 3.3 | |
Power Supply | 10.6 | |
Midplane-Mezzanine | | |
Door | | |
Chassis | | |
Total, Approx. | | |
AC Power:
AC Voltage and Frequency:
• 100-127 VAC, 50 to 60 Hz, single phase
• 220-240 VAC, 50 to 60 Hz, single phase
Current:
Voltage | Typical Current | Maximum Current
100-127 VAC | 4.8 A | 6.5 A
220-240 VAC | 2.4 A | ...
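As a rough worked check of these ratings (illustrative arithmetic, not a value taken from the table): at a nominal 120 VAC input and the 4.8 A typical current, one disk enclosure draws about 120 x 4.8 = 576 VA, a figure worth keeping in mind when totaling the load on a PDU or PDRU circuit.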
Note: The HP SureStore E Disk Array FC60 has been tested for proper operation in supported Hewlett-Packard cabinets. If the disk array is installed in an untested rack configuration, care must be taken to ensure that all necessary environmental requirements are met, including power, airflow, temperature, and humidity.
Note: For continuous, trouble-free operation, the disk enclosure should NOT be operated at its maximum environmental limits for extended periods of time. Operating within the recommended operating range, a less stressful operating environment, ensures maximum reliability. The environmental limits in a nonoperating state (shipping and storage) are wider:
Warranty and License Information
Hewlett-Packard Hardware Limited Warranty
HP warrants to you, the end-user Customer, that HP SureStore E Disk Array FC60 hardware components and supplies will be free from defects in material and workmanship under normal use for three years after the date of purchase. If HP or an Authorized Reseller receives notice of such defects during the warranty period, HP or the Authorized Reseller will, at its option, either repair or replace products that prove to be defective.
Software Product Limited Warranty
The HP Software Product Limited Warranty will apply to all Software that is provided to you by HP as part of the HP SureStore E Disk Array FC60 for the NINETY (90) day period specified below. This HP Software Product Limited Warranty will supersede any non-HP software warranty terms that may be found in any documentation or other materials contained in the computer product packaging with respect to covered Software.
This warranty extends only to the original owner in the original country of purchase and is not transferable. Consumables, such as batteries, have no warranty. The above warranties will not apply to products from which serial numbers have been removed or to defects resulting from misuse (including operation of HP SureStore E Disk Array FC60 without covers and incorrect input voltage), unauthorized modification, operation or storage outside the environmental specifications for the product, in-transit damage, improper maintenance, or defects resulting from use of third-party software,...
ADDITION TO THE MANDATORY STATUTORY RIGHTS APPLICABLE TO THE SALE OF THIS PRODUCT TO YOU.
Hewlett-Packard Software License Terms
The disk array hardware described herein uses Licensed Internal Code ("LIC"). The LIC, including any updates or replacements, any LIC utility software, and Supplier's software are collectively referred to as "Software".
Software or disable any licensing or control features of the Software. If the Software is licensed for "concurrent use", you may not allow more than the maximum number of authorized users to Use the Software concurrently. You may not allow the Software to be used by any other party or for the benefit of any other party.
Regulatory Compliance
Safety Certifications:
• UL listed
• cUL certified
• TUV certified with GS mark
• GOST certified
• CE Mark
EMC Compliance:
• US FCC, Class A
• CSA, Class A
• VCCI, Class A
• BCIQ, Class A
FCC Statements (USA Only)
The Federal Communications Commission (in 47 CFR 15.105) has specified that the following notice be brought to the attention of the users of this product. This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules.
VCCI Statement (Japan)
This equipment is Class A information technology equipment under the rules of the Voluntary Control Council for Interference by Information Technology Equipment (VCCI). When used in a residential area, it may cause radio interference, in which case the user may be required to take corrective actions.
Harmonics Conformance (Japan)
Class A Warning Statement (Taiwan)
ATI Class A Specification (France Only)
DECLARATION OF INSTALLATION AND OPERATION of information technology equipment (ATI), classified Class A according to the levels of radio-frequency disturbance emitted, as defined in European standard EN 55022 concerning Electromagnetic Compatibility. Dear Customer, In accordance with ...
Noise Emission (For Germany Only)
• LpA: 45.0 dB (seeking)
• Measured at the fictitious workplace per DIN 45635 Part 19.
• The data are the results of type tests on equipment configurations with the highest noise emissions: 12 disk drives.
• All other configurations have lower noise levels.
1.) This product was tested with a Hewlett-Packard UNIX server host computer system.
Boise, Idaho, U.S.A., 03/22/99
Dan T. Michauld / QA Manager
European Contact: Your local Hewlett-Packard Sales and Service office or Hewlett-Packard GmbH, Department HQ-TRE, Herrenberger Straße 130, D-71034 Böblingen (FAX +49-7031-14-3143)
GLOSSARY
adapter
A printed circuit assembly that transmits user data (I/Os) between the host system’s internal bus and the external Fibre Channel link, and vice versa. Also called an I/O adapter, FC adapter, or host bus adapter (HBA).
ArrayID
The value used to identify a disk array when using Array Manager 60. The ArrayID can be either the disk array S/N or an alias assigned to the disk array.
bind
The process of configuring unassigned disks into a LUN disk group. Disks can be bound into one of the following LUN disk groups: RAID 5, RAID 1 (single mirrored pair), or RAID 0/1 (multiple mirrored pairs).
bootware
Controller firmware comprising the bring-up or boot code; the kernel or executive under which the firmware executes; the firmware that runs hardware diagnostics, initializes the hardware, and uploads other controller firmware/software from Flash memory; and the XMODEM download functionality.
Class of Service
The types of services provided by the Fibre Channel topology and used by the communicating port.
controller
A removable unit that contains an array controller.
dacstore
A region on each disk used to store configuration information. During the Start Of Day process, this information is used to configure controller NVSRAM and to establish other operating parameters, such as the current LUN configuration.
disk array controller
A printed-circuit board with memory modules that manages the overall operation of the disk array. The disk array controllers manage all aspects of disk array operation, including I/O transfers, data recovery in the event of a failure, and management of disk array capacity.
EPROM
Erasable Programmable Read-Only Memory.
fabric
A Fibre Channel term that describes a crosspoint-switched network, which is one of three existing Fibre Channel topologies. A fabric consists of one or more fabric elements, which are switches responsible for frame routing. A fabric can interconnect a maximum of 244 devices.
25 MB/s (quarter speed), or 12.5 MB/s (eighth speed) over distances of up to 100 m over copper media, or up to 10 km over optical links. The disk array operates at full speed.
Fibre Channel Arbitrated Loop (FC-AL)
One of three existing Fibre Channel topologies, in which two to 126 ports are interconnected serially in a single loop circuit.
FRU (Field Replaceable Unit)
A disk array hardware component that can be removed and replaced by a customer or Hewlett-Packard service representative.
global hot spare
A disk that is powered up and electrically connected to a disk array but not used until a disk failure occurs.
host
A processor that runs an operating system using a disk array for data storage and retrieval.
hot swappable
Hot swappable components can be removed and replaced while the disk array is online, without disrupting system operation. Disk array controller modules, disk modules, power supply modules, and fan modules are all hot swappable components.
created on the same disk array. A numeric value is assigned to a LUN at the time it is created.
LVD-SCSI
Low voltage differential implementation of SCSI. Also referred to as Ultra2 SCSI.
LVM (Logical Volume Manager)
The default disk configuration strategy on HP-UX. In LVM, one or more physical disk modules are configured into volume groups, which are then divided into logical volumes.
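To make this layering concrete, here is a minimal HP-UX sketch that brings one disk array LUN into LVM; the device files, volume group name, and size are hypothetical placeholders, so substitute the device paths reported for your LUNs.

    # Sketch only: device files, names, and sizes below are placeholders.
    pvcreate /dev/rdsk/c5t0d0             # initialize the LUN as an LVM physical volume
    mkdir /dev/vg01
    mknod /dev/vg01/group c 64 0x010000   # create the volume group control file
    vgcreate /dev/vg01 /dev/dsk/c5t0d0    # build the volume group on the physical volume
    lvcreate -L 1024 /dev/vg01            # create a 1024-MB logical volume in the group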
NVSRAM
The disk array controller stores operating configuration information in this non-volatile SRAM (referred to as NVSRAM). The contents of NVSRAM can only be accessed or changed using special diagnostic tools.
path
See primary disk array path or primary path.
parity
A data protection technique that provides data redundancy by creating extra data based on the original data.
PROM (Programmable Read-Only Memory)
SP-resident boot code that loads the SP microcode from one of the disk array’s database drives when the disk array is powered up or when an SP is enabled.
RAID
An acronym for “Redundant Array of Independent Disks.” RAID was developed to provide data redundancy using independent disk drives.
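As a worked illustration of the capacity cost of that redundancy (an example, not a configuration from this guide): binding five 9.1-Gbyte disks into a RAID 5 LUN yields (5 - 1) x 9.1 = 36.4 Gbytes of usable capacity, because the equivalent of one disk is consumed by the parity data.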
parity information, depending on the RAID level of the LUN. Until a rebuild is complete, the disk array is operating in degraded mode and is vulnerable to a second disk failure.
reconstruction
See rebuild.
resident controller
The last controller to complete the Start-Of-Day process in a given slot. At the completion of the SOD process, the identification of the controller is stored.
SIMM (Single In-line Memory Module)
A memory module that provides the local storage (cache) for an SP. An SP must have at least two 4-MB memory modules to support the storage system cache.
Start Of Day (SOD)
The initialization process used by the disk array controllers to configure themselves and establish various operating parameters.
drivers on the bus, and also impedance matching to prevent signal reflections at the ends of the cable. The SCSI bus requires termination at both ends. One end of the SCSI bus is terminated by the adapter’s internal termination; the other end should have a terminator placed on the 68-pin high-density SCSI connector of the last SCSI peripheral.