Introduction
1. Document Conventions
1.1. Typographic Conventions
1.2. Pull-quote Conventions
1.3. Notes and Warnings
2. Feedback
1. Red Hat Cluster Configuration and Management Overview
1.1. Configuration Basics
1.1.1. Setting Up Hardware
1.1.2. Installing Red Hat Cluster software
Cluster Administration
4. Managing Red Hat Cluster With Conga
4.1. Starting, Stopping, and Deleting Clusters
4.2. Managing Cluster Nodes
4.3. Managing High-Availability Services
4.4. Diagnosing and Correcting Problems in a Cluster
5. Configuring Red Hat Cluster With system-config-cluster
5.1. Configuration Tasks
Introduction This document provides information about installing, configuring and managing Red Hat Cluster components. Red Hat Cluster components are part of Red Hat Cluster Suite and allow you to connect a group of computers (called nodes or members) to work together as a cluster. This document does not include information about installing, configuring, and managing Linux Virtual Server (LVS) software.
Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at http://www.redhat.com/docs/. 1. Document Conventions This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.
Typographic Conventions The first sentence highlights the particular keycap to press. The second highlights two sets of three keycaps, each set pressed simultaneously. If source code is discussed, class names, methods, functions, variable names, and returned values mentioned within a paragraph will be presented as above, in Mono-spaced Bold. For example: File-related classes include filesystem for file systems, file for files, and dir for directories.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example: When the Apache HTTP Server accepts requests, it dispatches child processes or threads to handle them. This group of child processes or threads is known as a server-pool.
2. Feedback If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla/) against the component Documentation-cluster. Be sure to mention the manual identifier: Cluster_Administration(EN)-5 (2009-08-18T16:22) By mentioning this manual's identifier, we know exactly which version of the guide you have.
Chapter 1. Red Hat Cluster Configuration and Management Overview Red Hat Cluster allows you to connect a group of computers (called nodes or members) to work together as a cluster. You can use Red Hat Cluster to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS file system or setting up service failover).
Figure 1.1. Red Hat Cluster Hardware Overview 1.1.2. Installing Red Hat Cluster software To install Red Hat Cluster software, you must have entitlements for the software. If you are using the Conga configuration GUI, you can let it install the cluster software. If you are using other tools to configure the cluster, secure and install the software as you would with Red Hat Enterprise Linux software.
Figure 1.2. Cluster Configuration Structure The following cluster configuration tools are available with Red Hat Cluster: • Conga — This is a comprehensive user interface for installing, configuring, and managing Red Hat clusters, computers, and storage attached to clusters and computers. •...
1.2. Conga Conga is an integrated set of software components that provides centralized configuration and management of Red Hat clusters and storage. Conga provides the following major features: • One Web interface for managing cluster and storage •...
The following figures show sample displays of the three major luci tabs: homebase, cluster, and storage. For more information about Conga, refer to Chapter 3, Configuring Red Hat Cluster With Conga, Chapter 4, Managing Red Hat Cluster With Conga, and the online help available with the luci server. Figure 1.3. luci homebase Tab
Figure 1.5. luci storage Tab 1.3. system-config-cluster Cluster Administration GUI This section provides an overview of the cluster administration graphical user interface (GUI) available with Red Hat Cluster Suite — system-config-cluster. It is for use with the cluster infrastructure and the high-availability service management components.
1.3.1. Cluster Configuration Tool You can access the Cluster Configuration Tool (Figure 1.6, “Cluster Configuration Tool”) through the Cluster Configuration tab in the Cluster Administration GUI. Figure 1.6. Cluster Configuration Tool The Cluster Configuration Tool represents cluster configuration components in the configuration file (/etc/cluster/cluster.conf) with a hierarchical graphical display in the left panel.
• Fence Devices — Displays fence devices. Fence devices are represented as subordinate elements under Fence Devices. Using configuration buttons at the bottom of the right frame (below Properties), you can add fence devices, delete fence devices, and edit fence-device properties. Fence devices must be defined before you can configure fencing (with the Manage Fencing For This Node button) for each node.
Figure 1.7. Cluster Status Tool The nodes and services displayed in the Cluster Status Tool are determined by the cluster configuration file (/etc/cluster/cluster.conf). You can use the Cluster Status Tool to enable, disable, restart, or relocate a high-availability service. 1.4. Command Line Administration Tools
Command Line Tool — ccs_tool (Cluster Configuration System Tool)
Used With — Cluster Infrastructure
Purpose — ccs_tool is a program for making online updates to the cluster configuration file. It provides the capability to create and modify cluster infrastructure components (for example, creating a cluster, adding and removing a node).
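As a sketch of how these command-line tools fit together, the session below checks cluster status and relocates a service. The hostnames and service name are hypothetical, and these commands only produce meaningful output on a node that is already running the cluster software.

```shell
# Show quorum and membership state for this node
cman_tool status

# List all cluster nodes and their join state
cman_tool nodes

# Display the status of nodes and high-availability services
clustat

# Relocate the HA service "example_apache" to node-02
# (service and node names are illustrative)
clusvcadm -r example_apache -m node-02
```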
2.1. Compatible Hardware Before configuring Red Hat Cluster software, make sure that your cluster uses appropriate hardware (for example, supported fence devices, storage devices, and Fibre Channel switches). Refer to the hardware configuration guidelines at http://www.redhat.com/cluster_suite/hardware/ for the most current hardware compatibility information.
If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. Note For the most current information about integrated fence devices supported by Red Hat Cluster Suite, refer to http://www.redhat.com/cluster_suite/hardware/.
Configuring ACPI For Use with Integrated Fence Devices If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now).
Chapter 2. Before Configuring a Red Hat Cluster 2.3.1. Disabling ACPI Soft-Off with chkconfig Management You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon (acpid) from chkconfig management or by turning off acpid. Note This is the preferred method of disabling ACPI Soft-Off.
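A minimal sketch of the chkconfig approach described above, run as root on each cluster node that uses an integrated fence device:

```shell
# Remove acpid from chkconfig management so it does not start at boot
chkconfig --del acpid

# Alternatively, turn acpid off in all runlevels instead:
#   chkconfig --level 2345 acpid off

# Stop the currently running acpid daemon
service acpid stop
```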
Note The equivalents to ACPI Function, Soft-Off by PWR-BTTN, and Instant-Off may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off via the power button without delay. 4.
You can disable ACPI completely by appending acpi=off to the kernel boot command line in the grub.conf file. Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled.
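A hedged illustration of what the grub.conf change might look like; the kernel version, root device, and title below are placeholders, not values from this document:

```
title Red Hat Enterprise Linux Server (2.6.18-128.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 acpi=off
        initrd /initrd-2.6.18-128.el5.img
```

Only acpi=off is added; the rest of the kernel line is left exactly as it was.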
Considerations for Configuring HA Services 2.4. Considerations for Configuring HA Services You can create a cluster to suit your needs for high availability by configuring HA (high-availability) services. The key component for HA service management in a Red Hat cluster, rgmanager, implements cold failover for off-the-shelf applications.
Figure 2.1. Web Server Cluster Service Example Clients access the HA service through the IP address 10.10.10.201, enabling interaction with the web server application, httpd-content. The httpd-content application uses the gfs-content-webserver file system.
In the cluster configuration file, each resource tree is an XML representation that specifies each resource, its attributes, and its relationship among other resources in the resource tree (parent, child, and sibling relationships). Note Because an HA service consists of resources organized into a hierarchical tree, a service is sometimes referred to as a resource tree or resource group.
In Red Hat Enterprise Linux releases prior to Red Hat Enterprise Linux 5, if RAID storage in a cluster presents multiple LUNs, it is necessary to enable access to those LUNs by configuring max_luns (or max_scsi_luns for 2.4 kernels) in the /etc/modprobe.conf file of each node.
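For illustration, a max_luns entry in /etc/modprobe.conf might look like the fragment below; the value 255 is an assumption, chosen only to show the syntax, and should be set to cover the LUNs your storage actually presents:

```
# /etc/modprobe.conf -- allow the SCSI layer to scan up to 255 LUNs
options scsi_mod max_luns=255
```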
Fencing To ensure reliable fencing when using qdiskd, use power fencing. While other types of fencing (such as watchdog timers and software-based solutions to reboot a node internally) can be reliable for clusters not configured with qdiskd, they are not reliable for a cluster configured with qdiskd. Maximum nodes A cluster configured with qdiskd supports a maximum of 16 nodes.
Note IPv6 is not supported for Cluster Suite in Red Hat Enterprise Linux 5. 2.9. Considerations for Using Conga When using Conga to configure and manage your Red Hat Cluster, make sure that each computer running luci (the Conga user interface server) is running on the same network that the cluster is using for cluster communication.
Chapter 3. Configuring Red Hat Cluster With Conga This chapter describes how to configure Red Hat Cluster software using Conga, and consists of the following sections: Section 3.1, “Configuration Tasks” • Section 3.2, “Starting luci and ricci” • Section 3.3, “Creating A Cluster” •...
# yum install ricci 2. At each node to be administered by Conga, start ricci. For example: # service ricci start Starting ricci: 3. Select a computer to host luci and install the luci software on that computer. For example: # yum install luci Note Typically, a computer in a server cage or a data center hosts luci;...
Please, point your web browser to https://nano-01:8084 to access luci 6. At a Web browser, place the URL of the luci server into the URL address box and click Go (or the equivalent). The URL syntax for the luci server is https://luci_server_hostname:8084. The first time you access luci, two SSL certificate dialog boxes are displayed.
• The Cluster Name text box displays the cluster name; it does not accept a cluster name change. The only way to change the name of a Red Hat cluster is to create a new cluster configuration with the new name.
Note The cluster ID is a unique identifier that cman generates for each cluster. To view the cluster ID, run the cman_tool status command on a cluster node. If you do specify a multicast address, you should use the 239.192.x.x series that cman uses. Otherwise, using a multicast address outside that range may cause unpredictable results.
Parameter — Description
TKO — The number of cycles a node must miss to be declared dead.
Minimum Score — The minimum score for a node to be considered "alive". If omitted or set to 0, the default function, floor((n+1)/2), is used, where n is the sum of the heuristics scores.
Configuring Fence Devices • Egenera SAN Controller • GNBD • IBM Blade Center • McData SAN Switch • QLogic SANbox2 • SCSI Fencing (*See Note) • Virtual Machine Fencing • Vixel SAN Switch • WTI Power Switch Note Use of SCSI persistent reservations as a fence method is supported with the following limitations: •...
The starting point of each procedure is at the cluster-specific page that you navigate to from Choose a cluster to administer displayed on the cluster tab. 3.5.1. Creating a Shared Fence Device To create a shared fence device, follow these steps: 1.
Figure 3.1. Fence Device Configuration 3. At the Add a Sharable Fence Device page, click the drop-down box under Fencing Type and select the type of fence device to configure. 4. Specify the information in the Fencing Type dialog box according to the type of fence device. Refer to Appendix B, Fence Device Parameters for more information about fence device parameters.
1. At the detailed menu for the cluster (below the clusters menu), click Shared Fence Devices. Clicking Shared Fence Devices causes the display of the fence devices for a cluster and causes the display of menu items for fence device configuration: Add a Fence Device and Configure a Fence Device.
3. At the bottom of the page, under Main Fencing Method, click Add a fence device to this level. 4. Select a fence device and provide parameters for the fence device (for example, port number). Note You can choose from an existing fence device or create a new fence device.
7. Click the link for an added node at either the list in the center of the page or in the list in the detailed menu under the clusters menu. Clicking the link for the added node causes a page to be displayed for that link showing how that node is configured.
3. On that page, at the Choose a task drop-down box, choose Delete this node and click Go. When the node is deleted, a page is displayed that lists the nodes in the cluster. Check the list to make sure that the node has been deleted.
run the cluster service, you must set up only the members in the restricted failover domain that you associate with the cluster service. Note To configure a preferred member, you can create an unrestricted failover domain comprising only one cluster member.
7. Configure members for this failover domain. Under Failover domain membership, click the Member checkbox for each node that is to be a member of the failover domain. If Prioritized is checked, set the priority in the Priority text box for each member of the failover domain. 8.
8. Modifying failover domain membership — Under Failover domain membership, click the Member checkbox for each node that is to be a member of the failover domain. A checked box for a node means that the node is a member of the failover domain. If Prioritized is checked, you can adjust the priority in the Priority text box for each member of the failover domain.
Note Use a descriptive name that clearly distinguishes the service from other services in the cluster. 4. Add a resource to the service; click Add a resource to this service. Clicking Add a resource to this service causes the display of two drop-down boxes: Add a new local resource and Use an existing global resource.
valid_lft forever preferred_lft forever 3.10. Configuring Cluster Storage To configure storage for a cluster, click the storage tab. Clicking that tab causes the display of the Welcome to Storage Configuration Interface page. The storage tab allows you to monitor and configure storage on remote systems.
High Availability Logical Volume Management agents (HA-LVM). If you are not able to use either the clvmd daemon or HA-LVM for operational reasons or because you do not have the correct entitlements, you must not use single-instance LVM on the shared disk, as this may result in data corruption. If you have any concerns, please contact your Red Hat service representative.
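Cluster-aware LVM locking for clvmd is typically enabled in /etc/lvm/lvm.conf. The fragment below is a sketch of that setting, not text quoted from this guide:

```
# /etc/lvm/lvm.conf -- use built-in clustered locking (type 3)
# so that clvmd coordinates LVM metadata changes across the cluster
locking_type = 3
```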
Chapter 4. Managing Red Hat Cluster With Conga This chapter describes various administrative tasks for managing a Red Hat Cluster and consists of the following sections: Section 4.1, “Starting, Stopping, and Deleting Clusters” • Section 4.2, “Managing Cluster Nodes” • Section 4.3, “Managing High-Availability Services”...
• For Restart this cluster and Stop this cluster/Start this cluster — Displays a page with the list of nodes for the cluster. • For Delete this cluster — Displays the Choose a cluster to administer page in the cluster tab, showing a list of clusters.
4. Clicking Go causes a progress page to be displayed. When the action is complete, a page is displayed showing the list of nodes for the cluster. 4.3. Managing High-Availability Services You can perform the following management functions for high-availability services through the luci server component of Conga: •...
• Restart this service and Stop this service — These selections are available when the service is running. Select either function and click Go to make the change take effect. Clicking Go causes a progress page to be displayed. When the change is complete, another page is displayed showing a list of services for the cluster.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster This chapter describes how to configure Red Hat Cluster software using system-config-cluster, and consists of the following sections: Section 5.1, “Configuration Tasks” • Section 5.2, “Starting the Cluster Configuration Tool” • Section 5.3, “Configuring Cluster Properties” •...
Refer to Section 5.8, “Adding a Cluster Service to the Cluster”. 8. Propagating the configuration file to the other nodes in the cluster. Refer to Section 5.9, “Propagating The Configuration File: New Cluster”. 9. Starting the cluster software. Refer to Section 5.10, “Starting the Cluster Software”.
cluster service operation. To manage the cluster system further, choose the Cluster Configuration tab.) 3. Clicking Create New Configuration causes the New Configuration dialog box to be displayed (refer to Figure 5.2, “Creating A New Configuration”). The New Configuration dialog box provides a text box for cluster name and the following checkboxes: Custom Configure Multicast and Use a Quorum Disk.
Use a Quorum Disk If you need to use a quorum disk, click the Use a Quorum disk checkbox and enter quorum disk parameters. The following quorum-disk parameters are available in the dialog box if you enable Use a Quorum disk: Interval, TKO, Votes, Minimum Score, Device, Label, and Quorum Disk Heuristic.
Figure 5.2. Creating A New Configuration 4. When you have completed entering the cluster name and other parameters in the New Configuration dialog box, click OK. Clicking OK starts the Cluster Configuration Tool, displaying a graphical representation of the configuration (Figure 5.3, “The Cluster Configuration Tool”).
Figure 5.3. The Cluster Configuration Tool
Parameter — Description
Use a Quorum Disk — Enables quorum disk. Enables quorum-disk parameters in the New Configuration dialog box.
Interval — The frequency of read/write cycles, in seconds.
TKO — The number of cycles a node must miss in order to be declared dead.
Parameter — Description
Label — Specifies the quorum disk label created by the mkqdisk utility. If this field contains an entry, the label overrides the Device field. If this field is used, the quorum daemon reads /proc/partitions and checks for qdisk signatures on every block device found, comparing the label against the specified label.
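The Label parameter above corresponds to the label written by the mkqdisk utility. A hedged example follows; the device and label are hypothetical, and the commands require root access to shared storage:

```shell
# Initialize a shared block device as a quorum disk labeled "myqdisk"
mkqdisk -c /dev/sdb1 -l myqdisk

# List quorum disks visible to this node and verify the label
mkqdisk -L
```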
Note For more information about Post-Join Delay and Post-Fail Delay, refer to the fenced(8) man page. 6. Save cluster configuration changes by selecting File => Save. 5.4. Configuring Fence Devices Configuring fence devices for the cluster consists of selecting one or more fence devices and specifying fence-device-dependent parameters (for example, name, IP address, login, and password).
5.5. Adding and Deleting Members The procedure to add a member to a cluster varies depending on whether the cluster is a newly-configured cluster or a cluster that is already configured and running. To add a member to a new cluster, refer to Section 5.5.1, “Adding a Member to a Cluster”.
a. Click the node that you added in the previous step. b. At the bottom of the right frame (below Properties), click Manage Fencing For This Node. Clicking Manage Fencing For This Node causes the Fence Configuration dialog box to be displayed.
Section 5.5.1, “Adding a Member to a Cluster”. 2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster. 3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node.
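The scp step above might look like the following; the node name is a placeholder, not a host from this document:

```shell
# Copy the updated configuration from an existing member to the new node
# (run from an existing cluster node; "newnode" is illustrative)
scp /etc/cluster/cluster.conf root@newnode:/etc/cluster/
```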
a. service cman start b. service clvmd start, if CLVM has been used to create clustered volumes c. service gfs start, if you are using Red Hat GFS d. service rgmanager start 5.
e. Propagate the updated configuration by clicking the Send to Cluster button. (Propagating the updated configuration automatically saves the configuration.) 4. Stop the cluster software on the remaining running nodes by running the following commands at each node in this order: a.
Note Failover domains are not required for operation. By default, failover domains are unrestricted and unordered. In a cluster with several members, using a restricted failover domain can minimize the work to set up the cluster to run a cluster service (such as httpd), which requires you to set up the configuration identically on all members that run the cluster service.
Figure 5.7. Failover Domain Configuration: Configuring a Failover Domain 4. Click the Available Cluster Nodes drop-down box and select the members for this failover domain. 5. To restrict failover to members in this failover domain, click (check) the Restrict Failover To This Domains Members checkbox.
Figure 5.8. Failover Domain Configuration: Adjusting Priority b. For each node that requires a priority adjustment, click the node listed in the Member Node/Priority columns and adjust priority by clicking one of the Adjust Priority arrows. Priority is indicated by the position in the Member Node column and the value in the Priority column.
2. At the bottom of the right frame (labeled Properties), click the Delete Failover Domain button. Clicking the Delete Failover Domain button causes a warning dialog box to be displayed asking if you want to remove the failover domain. Confirm that the failover domain identified in the warning dialog box is the one you want to delete and click Yes.
1. On the Resources property of the Cluster Configuration Tool, click the Create a Resource button. Clicking the Create a Resource button causes the Resource Configuration dialog box to be displayed. 2. At the Resource Configuration dialog box, under Select a Resource Type, click the drop-down box. Refer to Appendix C, HA Resource Parameters for more information about resource parameters.
Figure 5.9. Adding a Cluster Service 4. If you want to restrict the members on which this cluster service is able to run, choose a failover domain from the Failover Domain drop-down box. (Refer to Section 5.6, “Configuring a Failover Domain”...
Note Circumstances that require enabling Run Exclusive are rare. Enabling Run Exclusive can render a service offline if the node it is running on fails and no other nodes are empty. 7. Select a recovery policy to specify how the resource manager should recover from a service failure.
inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
2: eth0: &lt;BROADCAST,MULTICAST,UP&gt; mtu 1356 qdisc pfifo_fast qlen 1000
    link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff
    inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0
    inet6 fe80::205:5dff:fe9a:d891/64 scope link
    inet 10.11.4.240/22 scope global secondary eth0
    valid_lft forever preferred_lft forever
5.9. Propagating The Configuration File: New Cluster
4. service rgmanager start 5. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected.
Chapter 6. Managing Red Hat Cluster With system-config-cluster This chapter describes various administrative tasks for managing a Red Hat Cluster and consists of the following sections: Section 6.1, “Starting and Stopping the Cluster Software” • Section 6.2, “Managing High-Availability Services” •...
Figure 6.1. Cluster Status Tool You can use the Cluster Status Tool to enable, disable, restart, or relocate a high-availability service. The Cluster Status Tool displays the current cluster status in the Services area and automatically updates the status every 10 seconds.
Members Status — Description
Member — The node is part of the cluster. Note: A node can be a member of a cluster; however, the node may be inactive and incapable of running services. For example, if rgmanager is not running on the node, but all other cluster software components are running in the node, the node appears as a Member in the Cluster Status Tool.
To edit the cluster configuration file, click the Cluster Configuration tab in the cluster configuration GUI. Clicking the Cluster Configuration tab displays a graphical representation of the cluster configuration. Change the configuration file according to the following steps: 1.
4. Increment the configuration version beyond the current working version number as follows: a. Click Cluster => Edit Cluster Properties. b. At the Cluster Properties dialog box, change the Config Version value and click OK. 5. Click File => Save As. 6.
# chkconfig --level 2345 rgmanager on
# chkconfig --level 2345 gfs on
# chkconfig --level 2345 clvmd on
# chkconfig --level 2345 cman on
You can then reboot the member for the changes to take effect or run the following commands in the order shown to restart cluster software: 1.
Appendix A. Example of Setting Up Apache HTTP Server This appendix provides an example of setting up a highly available Apache HTTP Server on a Red Hat Cluster. The example describes how to set up a service to fail over an Apache HTTP Server. Variables in the example apply to this example only;...
1. On one cluster node, use the interactive parted utility to create a partition to use for the document root directory. Note that it is possible to create multiple document root directories on different disk partitions.
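One way this partitioning step might look from the command line; the device, partition size, and mount point below are illustrative assumptions, not values from this example:

```shell
# Create a 1 GB primary ext3 partition on the shared disk
# ("/dev/sdc" is a placeholder for your shared storage device)
parted /dev/sdc mkpart primary ext3 0 1024

# Make an ext3 file system on the new partition
mkfs -t ext3 /dev/sdc1

# Mount it temporarily to populate the document root
mount /dev/sdc1 /var/www/html
```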
This IP address then must be configured as a cluster resource for the service using the Cluster Configuration Tool. • If the script directory resides in a non-standard location, specify the directory that contains the CGI programs.
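The httpd.conf edits described above might look like the fragment below. The floating IP address and document root are illustrative, reusing the example values from earlier in this guide:

```
# Excerpt from /etc/httpd/conf/httpd.conf (values are illustrative)
# Bind the server to the floating cluster IP address:
Listen 10.10.10.201:80

# Point the document root at the shared file system:
DocumentRoot "/var/www/html"
```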
• Click Create a Resource. • In the Resource Configuration dialog, select File System from the drop-down menu. • Enter the Name for the resource (for example, httpd-content). • Choose ext3 from the File System Type drop-down menu. •...
Appendix B. Fence Device Parameters This appendix provides tables with parameter descriptions of fence devices. Note Certain fence devices have an optional Password Script parameter. The Password Script parameter allows specifying that a fence-device password is supplied from a script rather than from the Password parameter. Using the Password Script parameter supersedes the Password parameter, allowing passwords to not be visible in the cluster configuration file (/etc/cluster/cluster.conf).
Field — Description
Password Script (optional) — The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Domain — Domain of the Bull PAP system to power cycle.
Table B.3. Bull PAP (Platform Administration Processor)
Field — Description
Name
Field — Description
Server — The hostname of the server to fence the client from, in either IP address or hostname form. For multiple hostnames, separate each hostname with a space.
IP address — The cluster name of the node to be fenced. Refer to the fence_gnbd(8) man page for more information.
Field — Description
Login — The login name of a user capable of issuing power on/off commands to the given IPMI port.
Password — The password used to authenticate the connection to the IPMI port.
Password Script (optional) — The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Field — Description
Device Name — The device name of the device the switch is connected to on the controlling host (for example, /dev/ttys2).
Port — The switch outlet number.
Table B.15. RPS-10 Power Switch (two-node clusters only)
Field — Description
Name — A name for the SCSI fence device.
Node name — Name of the node to be fenced.
Field — Description
Password — The password used to authenticate the connection to the device.
Password Script (optional) — The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Port — The switch outlet number.
Appendix C. HA Resource Parameters This appendix provides descriptions of HA resource parameters. You can configure the parameters with luci, system-config-cluster, or by editing /etc/cluster/cluster.conf. Table C.1, “HA Resource Summary” lists the resources, their corresponding resource agents, and references to other tables containing parameter descriptions.
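For orientation, a minimal sketch of how a service and its resources might appear inside the rm section of /etc/cluster/cluster.conf. The resource names, device, and IP address are illustrative (reusing example values from earlier chapters), and real configurations carry additional attributes:

```xml
<rm>
  <resources>
    <ip address="10.10.10.201" monitor_link="1"/>
    <fs name="httpd-content" device="/dev/sdc1" fstype="ext3"
        mountpoint="/var/www/html"/>
  </resources>
  <service autostart="1" name="example_apache" recovery="relocate">
    <ip ref="10.10.10.201"/>
    <fs ref="httpd-content"/>
  </service>
</rm>
```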
Field — Description
Shutdown Wait (seconds) — Specifies the number of seconds to wait for correct end of service shutdown.
Table C.2. Apache Server
Field — Description
Name — Specifies a name for the file system resource.
File System Type — If not specified, mount tries to determine the file system type.
Field / Description
File system ID: When creating a new GFS resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you commit the parameter during configuration. If you need to assign a file system ID explicitly, specify it in this field.
Field / Description
Target: This is the server from which you are mounting. It can be specified using a hostname, a wildcard (IP address or hostname based), or a netgroup defining a host or hosts to export to.
Option: Defines a list of options for this client —...
Field / Description
URL List: The default value is ldap:///.
slapd Options: Other command line options for slapd.
Shutdown Wait (seconds): Specifies the number of seconds to wait for correct end of service shutdown.

Table C.11. Open LDAP

Field / Description
Instance: Instance name.
Field / Description
Database type: Specifies one of the following database types: Oracle, DB6, or ADA.
Oracle TNS listener name: Specifies Oracle TNS listener name.
ABAP stack is not installed, ...: If you do not have an ABAP stack installed in the SAP database, enable this parameter.
Field / Description
Name: Specifies a name for the custom user script. The script resource allows a standard LSB-compliant init script to be used to start a clustered service.
File (with path): Enter the path where this custom script is located (for example, /etc/init.d/userscript).
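The Script resource simply wraps an init script. As a minimal sketch (the service name userscript and the messages are invented for illustration; a real script would manage an actual daemon), the start/stop/status dispatch an LSB-style init script provides looks like this:

```shell
# Minimal sketch of the dispatch logic of an LSB-style init script used as a
# Script resource. The name "userscript" and the messages are hypothetical.
userscript() {
    case "$1" in
        start)  echo "starting userscript" ;;
        stop)   echo "stopping userscript" ;;
        status) echo "userscript is running" ;;
        *)      echo "Usage: userscript {start|stop|status}"; return 2 ;;
    esac
}

# rgmanager invokes the script with one of these arguments:
userscript start
```

rgmanager calls the script with start, stop, and status, so the script must exit successfully for each supported action.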
Field / Description
SYBASE_OCS directory name: The directory name under sybase_home where OCS products are installed. For example, ASE-15_0.
Sybase user: The user who can run ASE server.
Deep probe timeout: The maximum seconds to wait for the response of ASE server before determining that the server had no response while running deep probe.
Field / Description
Failover Domain: Defines lists of cluster members to try in the event that a virtual machine fails.
Recovery policy: Recovery policy provides the following options:
• Disable — Disables the virtual machine if it fails.
• Relocate — Tries to restart the virtual machine in another node; that is, it does not try to restart in the current node.
Appendix D. HA Resource Behavior

This appendix describes common behavior of HA resources. It is meant to provide ancillary information that may be helpful in configuring HA services. You can configure the parameters with Luci, with system-config-cluster, or by editing /etc/cluster/cluster.conf. For descriptions of the parameters, refer to Appendix C, HA Resource Parameters.
D.1. Parent, Child, and Sibling Relationships Among Resources

A cluster service is an integrated entity that runs under the control of rgmanager. All resources in a service run on the same node. From the perspective of rgmanager, a cluster service is one entity that can be started, stopped, or relocated.
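In /etc/cluster/cluster.conf, these relationships are expressed by XML nesting. As a hypothetical sketch (resource names are invented, and attribute details are elided with "..." as in the examples elsewhere in this appendix), a service with parent, child, and sibling resources might look like this:

```xml
<!-- Hypothetical sketch: fs:myfs and ip:10.1.1.2 are siblings (both are
     children of service foo); script:myscript is a child of fs:myfs, so it
     starts after the file system starts and stops before it stops. -->
<service name="foo">
    <fs name="myfs" ...>
        <script name="myscript" .../>
    </fs>
    <ip address="10.1.1.2" .../>
</service>
```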
Note

The only resource to implement defined child resource type ordering is the Service resource. For more information about typed child resource start and stop ordering, refer to Section D.2.1, “Typed Child Resource Start and Stop Ordering”.
Ordering within a resource type is preserved as it exists in the cluster configuration file, /etc/cluster/cluster.conf. For example, consider the starting order and stopping order of the typed child resources in Example D.3, “Ordering Within a Resource Type”.
4. lvm:2 — This is an LVM resource. All LVM resources are stopped last. lvm:2 (<lvm name="2" .../>) is stopped before lvm:1; resources within a group of a resource type are stopped in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf.

5.
5. script:1 — This is a Script resource. If there were other Script resources in Service foo, they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf.

6. nontypedresource:foo — This is a non-typed resource. Because it is a non-typed resource, it is started after the typed resources start.
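The ordering described above can be pictured against a hypothetical Service foo layout (resource names invented; attribute details elided with "..." as in the <lvm name="2" .../> notation used in this appendix):

```xml
<!-- Hypothetical sketch of Service foo: typed children (lvm, fs, ip, script)
     start in type order and stop in reverse type order; non-typed children
     start after all typed children have started. -->
<service name="foo">
    <lvm name="1" .../>
    <lvm name="2" .../>
    <fs name="1" .../>
    <ip address="10.1.1.1" .../>
    <script name="1" .../>
    <nontypedresource name="foo"/>
</service>
```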
D.3. Inheritance, the <resources> Block, and Reusing Resources

Some resources benefit by inheriting values from a parent resource; that is commonly the case in an NFS service (refer to Example D.5, “NFS Service Set Up for Resource Reuse and Inheritance”).
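As a hedged sketch of the reuse pattern (names invented, attributes elided), a resource defined once in the <resources> block can be referenced from a service with a ref attribute, inheriting values such as the export path from its parents:

```xml
<!-- Hypothetical sketch: the nfsclient is defined once in the <resources>
     block and reused by reference inside the service's resource tree. -->
<resources>
    <nfsclient name="bob" target="bob.example.com" options="rw"/>
</resources>
<service name="foo">
    <fs name="myfs" ...>
        <nfsexport name="exports">
            <nfsclient ref="bob"/>
        </nfsexport>
    </fs>
</service>
```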
resource in multiple places can result in mounting one file system on two nodes, therefore causing problems.

D.4. Failure Recovery and Independent Subtrees

In most enterprise environments, the normal course of action for failure recovery of a service is to restart the entire service if any component in the service fails (refer to Example D.6, “Service ...”).
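rgmanager supports marking part of a resource tree as an independent subtree with the __independent_subtree attribute. As a sketch (script names invented), failure of a resource inside such a subtree restarts only that subtree rather than the whole service:

```xml
<!-- Hypothetical sketch: if script_two fails, only the script_one subtree is
     restarted; if script_three fails, the entire service is restarted. -->
<service name="foo">
    <script name="script_one" __independent_subtree="1" ...>
        <script name="script_two" .../>
    </script>
    <script name="script_three" .../>
</service>
```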
Debugging and Testing Services and Resource Ordering

Action / Syntax
Display the resource rules that rg_test understands:
    rg_test rules
Test a configuration (and /usr/share/cluster) for errors or redundant resource agents:
    rg_test test /etc/cluster/cluster.conf
Display the start and stop order of a service:
    Display start order:
    rg_test noop /etc/cluster/cluster.conf start service servicename...
Appendix E. Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5

This appendix provides a procedure for upgrading a Red Hat cluster from RHEL 4 to RHEL 5. The procedure also includes changes required for Red Hat GFS and CLVM. For more information about Red Hat GFS, refer to Global File System: Configuration and Administration.
# chkconfig --level 2345 gfs off
# chkconfig --level 2345 clvmd off
# chkconfig --level 2345 fenced off
# chkconfig --level 2345 cman off
# chkconfig --level 2345 ccsd off

4.
7. Run lvmconf --enable-cluster.

8. Enable cluster software to start upon reboot. At each node run /sbin/chkconfig as follows:

# chkconfig --level 2345 rgmanager on
# chkconfig --level 2345 gfs on
# chkconfig --level 2345 clvmd on
# chkconfig --level 2345 cman on

9.
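The per-service chkconfig invocations in step 8 can be generated in a loop. This is a dry-run convenience sketch only (it prints the commands rather than executing them; on a real node you would run chkconfig directly as root):

```shell
# Dry-run sketch: print the chkconfig command from step 8 for each cluster
# service, in the order given above.
cluster_services="rgmanager gfs clvmd cman"
for svc in $cluster_services; do
    echo "chkconfig --level 2345 $svc on"
done
```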
Appendix F. Revision History

Revision 5.4-1    Tue Aug 18 2009    Paul Kennedy email@example.com

Resolves: #516128
Adds notes about not supporting IPv6.

Resolves: #482936
Corrects Section 5.7 title and intro text.

Resolves: #488751
Corrects iptables rules. Removed examples.

Resolves: #502053
Corrects iptables rules for rgmanager.
Index

ACPI
    configuring, 12
Apache HTTP Server
    httpd.conf, 76
    setting up service, 75
Apache HTTP Server, setting up, 75
    httpd.conf, 76
backing up, 72
restoring, 72
cluster resource relationships, 96
cluster resource types, 19
cluster service
    displaying status, 9, 70
cluster service managers
    configuration, 38, 64, 67
cluster services, 38, 64
    (see also adding to the cluster configuration)
configuring ACPI, 12
upgrading, RHEL 4 to RHEL 5, 105
introduction, v
    other Red Hat Enterprise Linux documents, v
IP ports
    enabling, 11
iptables
    configuring, 11
max_luns
    configuring, 19
multicast addresses
    considerations for using with network switches and multicast addresses, 21
parameters, fence device, 79
parameters, HA resources, 85
power controller connection, configuring, 79...