
HP MC/ServiceGuard for Linux Configuration Guide for HP Netservers

Cluster Topology and Geography
The current maximum number of nodes supported is 8. A Quorum Server node is required to provide
quorum service to the cluster; it provides arbitration services when a cluster partition is
discovered. The node running the Quorum Server cannot be a member of any cluster to which it
provides quorum services.
The recommended configuration implements three data centers with the third data center housing the
Quorum Server.
The minimum supported configuration is two data centers, with one of the data centers housing the
Quorum Server. In this configuration, if the data center housing the Quorum Server is down, the nodes
in the second data center will not be able to form the cluster, because there is no quorum. At a
minimum, the Quorum Server should be in a separate room with its own power circuit.
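As an illustration, each node points to the Quorum Server from its cluster configuration file. The excerpt below is a sketch only: the parameter names follow the ServiceGuard cluster configuration file format, while the cluster name, host name, and polling interval are hypothetical values, not taken from this guide.

    # Excerpt from a ServiceGuard cluster configuration file (hypothetical values)
    CLUSTER_NAME            metro_cluster
    # Host running the Quorum Server; it must not be a member of this cluster and
    # should sit in a third data center, or at least in a separate room on its own
    # power circuit
    QS_HOST                 qs-host.example.com
    # How often the cluster polls the Quorum Server, in microseconds
    QS_POLLING_INTERVAL     300000000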
The maximum distance between the data centers is currently limited either by the maximum distance
supported for the networking type or by the CA link being used, whichever is shorter, but in no case
more than 100 kilometers.
A DWDM (Dense Wavelength Division Multiplexing) device can be used for the network and data
replication links to extend the distance between data centers to up to 100 kilometers.
Cluster Networking Links
The supported network interfaces for the cluster heartbeat are 10Base-T and 100Base-T.
There must be less than 200 milliseconds of latency in the cluster heartbeat network between the data
centers.
No routing is allowed for the cluster heartbeat network between the data centers.
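One simple way to sanity-check the latency requirement is a sustained ping between nodes in the two data centers before configuring the cluster; the host name below is hypothetical.

    # Round-trip times reported by ping should stay well under 200 milliseconds
    ping -c 100 node2-dc2.example.com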
There must be at least two alternately routed cluster heartbeat links between the data centers to prevent
the "backhoe problem". The "backhoe problem" can occur when all cables are routed through a single
trench and a tractor on a construction job severs all of them, disabling all communication between the
data centers.
One of the heartbeat links must be a dedicated link. The other heartbeat link can be shared with the
application network link.
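The sketch below shows how two alternately routed heartbeat links might appear in a node entry of the cluster configuration file. The parameter names follow the ServiceGuard cluster configuration file format; the interface names, IP addresses, and timing values are hypothetical, not taken from this guide.

    # Node entry with one dedicated heartbeat LAN and one heartbeat LAN shared
    # with the application network (hypothetical values)
    NODE_NAME               node1
      NETWORK_INTERFACE     eth1
        HEARTBEAT_IP        192.168.10.1      # dedicated heartbeat link
      NETWORK_INTERFACE     eth2
        HEARTBEAT_IP        10.10.10.1        # shared with the application network
    # Heartbeat timing, in microseconds
    HEARTBEAT_INTERVAL      1000000
    NODE_TIMEOUT            2000000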
Data Replication Continuous Access (CA) Links
There must be at least two alternately routed CA links between the two primary data centers.
The supported distance for CA varies depending on the link type. For more details on CA
connectivity options, see the "Continuous Access XP Extension and Performance" white paper at
http://xpslpgrms.corp.hp.com/whitepapers/xp512_whitepapers/CAET112701b.doc
Note that the maximum distance between the two primary data centers is currently limited either by the
maximum distance supported for the cluster networking type or by the CA link being used, whichever
is shorter.
DWDM Links for Both Networking (Cluster Heartbeat, Application Network) and CA
The maximum distance supported between the data centers for a DWDM configuration is 100
kilometers.
Both the networking (cluster heartbeat and application network) and CA links can go through the same
DWDM box; a separate DWDM box is not required.
The fiber optic links between the DWDM boxes must be "dark fiber" links (non-switched circuits).
They must be alternately routed between the two primary data centers.
For the highest availability, it is recommended to use two separate DWDM boxes in each data
center for the links between the data centers. However, since most DWDM boxes are typically
designed to be fault tolerant, it is acceptable to use only one DWDM box in each data center for
these links. If a single DWDM box is used, it must be configured with at least one active fiber link
and one redundant standby fiber link. When using ESCON for CA, note that the ESCON timeout is
shorter than the DWDM link failover time; therefore, a minimum of two active fiber links must be
configured on the DWDM box.
