HP Integrity Superdome 16-socket Specification page 28

QuickSpecs
Configuration
High Availability
[Table: High Availability configuration guidelines, contrasting Traditional and Multi-System High Availability configurations. Please also refer to the Multi-System High Availability section following this table.]

HP Integrity Superdome Servers: 16-socket, 32-socket, and 64-socket
Each cell should have at least 4 GB of memory (8 DIMMs) when using 512-MB DIMMs, or at least 8 GB of
memory when using 1-GB DIMMs.
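The per-cell minimum above is simple arithmetic (8 DIMMs times the DIMM size); a small sketch, with the helper names being illustrative rather than anything from the QuickSpecs:

```python
# Minimum memory per cell: 8 DIMMs, so 4 GB with 512-MB DIMMs or 8 GB with 1-GB DIMMs.
DIMMS_PER_CELL_MIN = 8

def min_cell_memory_gb(dimm_size_mb: int) -> float:
    """Return the minimum memory per cell, in GB, for a given DIMM size."""
    return DIMMS_PER_CELL_MIN * dimm_size_mb / 1024

def cell_meets_minimum(dimm_count: int) -> bool:
    """A cell meets the minimum configuration if it holds at least 8 DIMMs."""
    return dimm_count >= DIMMS_PER_CELL_MIN

print(min_cell_memory_gb(512))   # 4.0 GB with 512-MB DIMMs
print(min_cell_memory_gb(1024))  # 8.0 GB with 1-GB DIMMs
```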
I/O chassis ownership must be localized as much as possible. One approach is to assign I/O chassis to
partitions in sequential order, starting inside the single cabinet and then moving out to the I/O expansion
cabinet 'owned' by that cabinet.
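The "inside first" ordering amounts to sorting chassis so that main-cabinet slots come before slots in the owned expansion cabinet; the data model below is a sketch of that rule, not an HP tool:

```python
# Illustrative model: each I/O chassis is described by its cabinet, kind, and slot.
# Main system cabinets are filled before the I/O expansion cabinet they own.
MAIN, EXPANSION = 0, 1  # sort key: main-cabinet chassis first

def assignment_order(chassis):
    """Order chassis 'inside first': within each cabinet, main-cabinet
    chassis in slot order, then that cabinet's expansion-cabinet chassis."""
    return sorted(chassis, key=lambda c: (c["cabinet_id"], c["kind"], c["slot"]))

chassis = [
    {"cabinet_id": 0, "kind": EXPANSION, "slot": 0},
    {"cabinet_id": 0, "kind": MAIN, "slot": 1},
    {"cabinet_id": 0, "kind": MAIN, "slot": 0},
]
print([(c["kind"], c["slot"]) for c in assignment_order(chassis)])
# [(0, 0), (0, 1), (1, 0)] -> both main-cabinet slots first, then expansion
```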
I/O expansion cabinets can be used only when the main system cabinet holds the maximum number of I/O card
cages; the main cabinet must first be filled with I/O card cages before an I/O expansion cabinet is used.
Single cabinets connected to form a dual cabinet (using flex cables) should use a single I/O expansion cabinet if
possible.
Spread connections across enough I/O chassis to make the I/O chassis themselves redundant. In other
words, if an I/O chassis fails, the remaining chassis have enough connections to keep the system up and
running, or in the worst case, allow a reboot with the connections to peripherals and networking
intact.
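That redundancy rule can be checked mechanically: for every critical resource, at least one path must survive the loss of any single I/O chassis. A minimal sketch, with the resource and chassis names being hypothetical:

```python
def survives_single_chassis_failure(connections):
    """connections: dict mapping resource name -> set of I/O chassis that
    host a path to that resource. Returns True only if, for every single
    chassis that could fail, each resource still has a surviving path."""
    all_chassis = set().union(*connections.values())
    for failed in all_chassis:
        for resource, hosts in connections.items():
            if hosts <= {failed}:  # every path was on the failed chassis
                return False
    return True

# Hypothetical partition: boot disk and LAN each cabled to two chassis.
ok = {"boot_disk": {"io0", "io1"}, "lan": {"io0", "io1"}}
bad = {"boot_disk": {"io0"}, "lan": {"io0", "io1"}}
print(survives_single_chassis_failure(ok))   # True
print(survives_single_chassis_failure(bad))  # False
```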
All SCSI cards are configured in the factory as unterminated, and any auto-termination is defeated. If
auto-termination cannot be defeated in hardware, the card is not used at first release. A terminated cable is
used for the connection to the first external device. In the factory and for shipment, no cables are connected
to the SCSI cards; in place of the terminated cable, a terminator is placed on the cable port to provide
termination until the cable is attached. This is needed to allow HP-UX to boot. The customer does not need to
order terminators for these factory-integrated SCSI cards (the terminators are added in the factory by use of
constraint-net logic), and will probably discard them once cables are attached.
Partitions whose I/O chassis are contained within a single cabinet have higher availability than those partitions
that have their I/O chassis spread across cabinets.
A partition's core I/O chassis should go in a system cabinet, not an I/O expansion cabinet.
A partition should be connected to at least two I/O chassis containing Core I/O cards. This implies that all
partitions should be at least 2 cells in size. The lowest-numbered cell/I/O chassis combination in the partition
is the 'root' cell; the second-lowest is the 'backup root' cell.
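The root/backup-root rule above amounts to picking the two lowest-numbered cells that have a core I/O chassis; a sketch, with the cell numbering being illustrative:

```python
def pick_root_cells(core_io_cells):
    """Given the cell numbers in a partition that have a core I/O chassis,
    return (root, backup_root): the lowest and second-lowest numbers."""
    ordered = sorted(core_io_cells)
    if len(ordered) < 2:
        raise ValueError("partition needs at least two cells with core I/O")
    return ordered[0], ordered[1]

print(pick_root_cells([6, 2, 4]))  # (2, 4): cell 2 is root, cell 4 is backup root
```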
A partition should consist of at least two cells.
Not more than one partition should span a cabinet or a crossbar link. When crossbar links are shared, a
partition is more exposed to a crossbar failure, which may bring down all the cells connected to that link.
Multi-initiator support is required for Serviceguard.
To configure a cluster with no SPOF, the membership must extend beyond a single cabinet. The cluster must be
configured such that the failure of a single cabinet does not result in the failure of a majority of the nodes in the
cluster. The cluster lock device must be powered independently of the cabinets containing the cluster nodes.
An alternative cluster lock solution is the Quorum Service, which resides outside the Serviceguard cluster and
provides arbitration services.
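In the Serviceguard cluster ASCII configuration file, a Quorum Server is named with the QS_* parameters instead of a lock disk; the fragment below is a hedged sketch using standard Serviceguard parameter names, with all host names, addresses, and interval values being illustrative:

```
# Fragment of a Serviceguard cluster ASCII configuration file (illustrative).
CLUSTER_NAME            sdome_cluster
QS_HOST                 qs-server.example.com   # Quorum Server host outside the cluster
QS_POLLING_INTERVAL     300000000               # microseconds (illustrative value)

NODE_NAME               sdome1
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        10.0.1.1
  NETWORK_INTERFACE     lan1
    HEARTBEAT_IP        10.0.2.1                # second heartbeat subnet for redundancy
```

The two HEARTBEAT_IP entries on separate interfaces reflect the redundant-heartbeat requirement discussed later in this section.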
A cluster lock is required if the cluster is wholly contained within two single cabinets (i.e., two Superdome/16-
socket or 32-socket systems or two Superdome/PA-8800 32-socket or 64-socket systems) or two dual cabinets (i.e.
two Superdome/64-socket systems or two Superdome/PA-8800 128-socket systems). This requirement is due to a
possible 50% cluster failure.
Serviceguard supports a cluster lock only in clusters of up to four nodes. Thus a two-cabinet configuration is
limited to four nodes (i.e., two nodes in one dual-cabinet Superdome/64-socket or Superdome/PA-8800
128-socket system and two nodes in another). The Quorum Service can support up to 50 clusters or 100 nodes,
and can act as arbitrator for both HP-UX and Linux clusters.
Two-cabinet configurations must divide nodes evenly between the cabinets (e.g., 3 and 1 is not a legal 4-node
configuration).
Cluster lock must be powered independently of either cabinet.
Root volume mirrors must be on separate power circuits.
Redundant heartbeat paths are required; they can be provided either by multiple heartbeat subnets or by
standby interface cards.
Redundant heartbeat paths should be configured in separate I/O chassis when possible.
DA - 11717
North America — Version 15 — January 3, 2005
