Chapter 2. Before Configuring a Red Hat Cluster
Note
IPv6 is not supported for Cluster Suite in Red Hat Enterprise Linux 5.
2.9. Considerations for Using Conga
When using Conga to configure and manage your Red Hat Cluster, make sure that each computer
running luci (the Conga user interface server) is on the same network that the cluster uses
for cluster communication. Otherwise, luci cannot configure the nodes to communicate on the correct
network. If the computer running luci is on another network (for example, a public network rather
than the private network on which the cluster communicates), contact an authorized Red Hat support
representative to make sure that the appropriate host name is configured for each cluster node.
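As a quick sanity check, you can confirm on each node that its cluster host name resolves to an address on the private cluster network rather than a public one. The host names and addresses below are hypothetical examples, not values from this manual.

```
# Example /etc/hosts excerpt mapping node host names to the private
# cluster network (10.0.0.0/24 here is illustrative only):
10.0.0.1   node1.example.com   node1
10.0.0.2   node2.example.com   node2

# Verify resolution on each node; the command should print the
# private address, not a public one:
#   getent hosts node1.example.com
```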
2.10. General Configuration Considerations
You can configure a Red Hat Cluster in a variety of ways to suit your needs. Take into account the
following considerations when you plan, configure, and implement your Red Hat Cluster.
No-single-point-of-failure hardware configuration
Clusters can include a dual-controller RAID array, multiple bonded network channels, multiple
paths between cluster members and storage, and redundant uninterruptible power supply (UPS)
systems to ensure that no single failure results in application downtime or loss of data.
Alternatively, a low-cost cluster can be set up to provide less availability than a no-single-point-of-
failure cluster. For example, you can set up a cluster with a single-controller RAID array and only a
single Ethernet channel.
Certain low-cost alternatives, such as host RAID controllers, software RAID without cluster
support, and multi-initiator parallel SCSI configurations, are not compatible with or appropriate for
use as shared cluster storage.
Data integrity assurance
To ensure data integrity, only one node can run a cluster service and access cluster-service data
at a time. The use of power switches in the cluster hardware configuration enables a node to
power-cycle another node before restarting that node's HA services during a failover process. This
prevents two nodes from simultaneously accessing the same data and corrupting it. The use of
fence devices (hardware or software solutions that remotely power off, shut down, or reboot
cluster nodes) is strongly recommended to guarantee data integrity under all failure conditions.
Watchdog timers provide an alternative way to ensure correct operation of HA service failover.
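Fence devices are declared in the cluster configuration file, /etc/cluster/cluster.conf, and then referenced from each node's fencing method. The fragment below is an illustrative sketch using an APC power switch; the cluster name, node names, ports, address, and credentials are all hypothetical.

```xml
<!-- Illustrative cluster.conf fragment; names, addresses, and
     credentials are examples only. -->
<cluster name="example_cluster" config_version="1">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="1">
          <!-- Power-cycle this node via outlet 1 of the switch below -->
          <device name="apc_switch" port="1"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- Shared fence device: an APC network power switch -->
    <fencedevice agent="fence_apc" name="apc_switch"
                 ipaddr="10.0.0.50" login="apc" passwd="apc"/>
  </fencedevices>
</cluster>
```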
Ethernet channel bonding
Cluster quorum and node health are determined by the communication of messages among cluster
nodes via Ethernet. In addition, cluster nodes use Ethernet for a variety of other critical cluster
functions (for example, fencing). With Ethernet channel bonding, multiple Ethernet interfaces are
configured to behave as one, reducing the risk of a single point of failure in the typical switched
Ethernet connection among cluster nodes and other cluster hardware.
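A minimal channel-bonding setup on Red Hat Enterprise Linux 5 is sketched below. The interface names, bonding mode, and addresses are illustrative; mode=1 (active-backup) is a common choice for cluster traffic because it fails over rather than load-balances.

```shell
# /etc/modprobe.conf -- load the bonding driver for bond0
# (mode=1 is active-backup; miimon=100 polls link state every 100 ms)
alias bond0 bonding
options bonding mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
#   DEVICE=bond0
#   IPADDR=10.0.0.1
#   NETMASK=255.255.255.0
#   ONBOOT=yes
#   BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- a slave interface
# (create a matching ifcfg-eth1 for the second interface)
#   DEVICE=eth0
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
#   BOOTPROTO=none
```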