Chapter 2. Hardware Installation and Operating System Configuration
Table 2-10. Installing the Basic Cluster Hardware
Most systems come with at least one serial port. If a system has graphics display capability, it is possible to use the serial console port for a power switch connection. To expand your serial port capacity, use multi-port serial PCI cards. For multiple-node clusters, use a network power switch.
Also, ensure that local system disks are not on the same SCSI bus as the shared disks.
For example, use two-channel SCSI adapters, such as the Adaptec 39160-series cards, and
put the internal devices on one channel and the shared disks on the other channel. Using
multiple SCSI cards is also possible.
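One way to confirm that internal and shared disks really are on separate channels is to inspect the kernel's SCSI device list. This is a diagnostic sketch, not part of the original procedure; it assumes the `lsscsi` utility is installed, and host/channel numbering varies by system.

```shell
# List SCSI devices as [host:channel:target:lun] tuples.
# On a two-channel adapter such as the Adaptec 39160, internal
# devices and shared disks should report different channel numbers.
lsscsi

# Systems without lsscsi can inspect the kernel's view directly:
cat /proc/scsi/scsi
```

If both the local system disk and a shared disk appear with the same host and channel numbers, they share a bus and should be recabled or moved to a second adapter.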
Refer to the system documentation supplied by the vendor for detailed installation information. Refer to Appendix A Supplementary Hardware Information for hardware-specific information about using host bus adapters in a cluster.
2.3.2. Shared Storage Considerations
In a cluster, shared disks can be used to store cluster service data. Because this storage
must be available to all nodes running the cluster service configured to use the storage, it
cannot be located on disks that depend on the availability of any one node.
Consider the following factors when setting up shared disk storage in a cluster:
• It is recommended to use a clustered file system such as Red Hat GFS to configure Red Hat Cluster Manager storage resources, as it offers shared storage that is suited for high-availability cluster services. For more information about installing and configuring Red Hat GFS, refer to the Red Hat GFS Administrator's Guide.
• Whether you are using Red Hat GFS, local, or remote (for example, NFS) storage, it is strongly recommended that you connect any storage systems or enclosures to redundant UPS systems for a highly-available source of power. Refer to Section 2.5.3 Configuring UPS Systems for more information.
• The use of software RAID or Logical Volume Management (LVM) for shared storage is not supported, because these products do not coordinate access to shared storage from multiple hosts. Software RAID or LVM may be used on non-shared storage on cluster nodes (for example, boot and system partitions, and other file systems that are not associated with any cluster services).
An exception to this rule is CLVM, the daemon and library that support clustering of LVM2. CLVM allows administrators to configure shared storage for use as a resource in cluster services when used in conjunction with the CMAN cluster manager and the Distributed Lock Manager (DLM), which prevents simultaneous node access to data and possible corruption. CLVM also works with GULM as its cluster manager and lock manager.
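The CLVM exception above can be sketched as the following command sequence. This is an illustrative outline only: it assumes the cluster infrastructure (CMAN and DLM, or GULM) is already running on all nodes, and the device name `/dev/sdb` and the volume names are placeholders, not values from this manual.

```shell
# Switch LVM2 to cluster-wide locking in /etc/lvm/lvm.conf
# (sets locking_type to the clustered locking library).
lvmconf --enable-cluster

# Start the clustered LVM daemon; repeat on every cluster node.
service clvmd start

# Initialize the shared device and create a clustered volume group.
# The -c y flag marks the VG as clustered so CLVM coordinates
# access to it across nodes.
pvcreate /dev/sdb                  # /dev/sdb: assumed shared disk
vgcreate -c y shared_vg /dev/sdb   # "shared_vg": illustrative name
lvcreate -L 10G -n shared_lv shared_vg
```

The key design point is the clustered flag on the volume group: without it, LVM2 uses host-local locking and offers none of the multi-node coordination described above.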