Number Of Lock_Gulm Servers; Starting Lock_Gulm Servers; Fencing And Lock_Gulm; Shutting Down A Lock_Gulm Server - Red Hat GFS 6.0 Administrator's Manual

For optimal performance, lock_gulmd servers should be run on dedicated nodes; however, they can also be run on nodes using GFS. All nodes, including those only running lock_gulmd, must be listed in the nodes.ccs configuration file (nodes.ccs:nodes).

8.2.2. Number of LOCK_GULM Servers

You can use just one lock_gulmd server; however, if it fails, the entire cluster that depends on it must be reset. For that reason, you can run multiple instances of the lock_gulmd server daemon on multiple nodes for redundancy. The redundant servers allow the cluster to continue running if the master lock_gulmd server fails.

Over half of the lock_gulmd servers on the nodes listed in the cluster.ccs file (cluster.ccs:cluster/lock_gulm/servers) must be operating to process locking requests from GFS nodes. That quorum requirement is necessary to prevent split groups of servers from forming independent clusters, which would lead to file system corruption.

For example, if there are three lock_gulmd servers listed in the cluster.ccs configuration file, two of those three lock_gulmd servers (a quorum) must be running for the cluster to operate.

A lock_gulmd server can rejoin existing servers if it fails and is restarted.

When running redundant lock_gulmd servers, the minimum number of nodes required is three; the maximum number of nodes is five.
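As an illustration of the quorum rule above, a cluster.ccs file listing three server nodes might contain a stanza like the following. This is a sketch only; the cluster name, node names, and exact file layout are assumptions, so check them against your own cluster.ccs:

```
cluster {
    name = "alpha"
    lock_gulm {
        servers = ["n01", "n02", "n03"]
    }
}
```

With three servers listed, any two of them (over half) must be running for the cluster to continue processing locking requests.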

8.2.3. Starting LOCK_GULM Servers

If no lock_gulmd servers are running in the cluster, take caution before restarting them: you must verify that no GFS nodes are hung from a previous instance of the cluster. If there are hung GFS nodes, reset them before starting lock_gulmd servers. Resetting the hung GFS nodes before starting lock_gulmd servers prevents file system corruption. Also, be sure that all nodes running lock_gulmd can communicate over the network; that is, there is no network partition.

The lock_gulmd server is started with no command line options.
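The startup checks above might look like the following session sketch. The node names are placeholders, and the ping check is only one informal way to confirm that the server nodes can reach each other:

```
# 1. Reset any GFS nodes left hung by a previous cluster instance.
# 2. Confirm there is no network partition among the server nodes:
ping -c 1 n01
ping -c 1 n02
ping -c 1 n03
# 3. Start the server daemon on each server node; it takes no
#    command line options:
lock_gulmd
```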

8.2.4. Fencing and LOCK_GULM

Cluster state is managed in the lock_gulmd server. When GFS nodes or server nodes fail, the lock_gulmd server initiates a fence operation for each failed node and waits for the fence to complete before proceeding with recovery.

The master lock_gulmd server fences failed nodes by calling the fence_node command with the name of the failed node. That command looks up fencing configuration in CCS to carry out the fence operation.

When using RLM, you must use a fencing method that shuts down and reboots a node; you cannot use any method that does not reboot the node.
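The fence call the master makes can also be issued by hand, for example when testing fencing configuration. In this sketch, n02 is a placeholder for the name of a failed node:

```
fence_node n02
```

fence_node reads the fencing method and parameters configured for n02 from CCS and carries out the fence operation.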

8.2.5. Shutting Down a LOCK_GULM Server

Before shutting down a node running a LOCK_GULM server, lock_gulmd should be terminated using the gulm_tool command. If lock_gulmd is not properly stopped, the LOCK_GULM server may be fenced by the remaining LOCK_GULM servers.
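A clean shutdown might look like the following. This is a sketch: the shutdown subcommand and the node name are assumptions, so verify the exact syntax against gulm_tool(8) on your system:

```
gulm_tool shutdown n01
```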
Chapter 8. Using Clustering and Locking Systems
