Red Hat GFS 6.0 Administrator's Manual

Summary of Contents for Red Hat GFS 6.0

  • Page 1 Red Hat GFS 6.0 Administrator’s Guide...
  • Page 2 All other trademarks referenced herein are the property of their respective owners. The GPG fingerprint of the security@redhat.com key is: CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E...
  • Page 3: Table Of Contents

    Table of Contents — Introduction (i); 1. Audience (i); 2. Document Conventions (i); 3. More to Come (iii); 3.1. Send in Your Feedback (iv); 4. Activate Your Subscription (iv); 4.1. Provide a Red Hat Login (iv); 4.2. ...
  • Page 4 5. Using the Pool Volume Manager (21); 5.1. Overview of GFS Pool Volume Manager (21); 5.2. Synopsis of Pool Management Commands (21); 5.2.1. pool_tool (22); 5.2.2. pool_assemble (23); 5.2.3. pool_info (23); 5.2.4. pool_mp (24); 5.3. ...
  • Page 5 7. Using the Cluster Configuration System (75); 7.1. Creating a CCS Archive (75); 7.1.1. Usage (75); 7.1.2. Example (76); 7.1.3. Comments (76); 7.2. Starting CCS in the Cluster (76); 7.2.1. Usage (76); 7.2.2. Example (77); 7.2.3. Comments (77); 7.3. ...
  • Page 6 9.7. Direct I/O (103); 9.7.1. O_DIRECT (103); 9.7.2. GFS File Attribute (103); 9.7.3. GFS Directory Attribute (104); 9.8. Data Journaling (104); 9.8.1. Usage (105); 9.8.2. Examples (105); 9.9. Configuring atime Updates (105); 9.9.1. Mount with ...
  • Page 7 B. Upgrading GFS (135); C. Basic GFS Examples (137); C.1. LOCK_GULM, RLM Embedded (137); C.1.1. Key Characteristics (137); C.1.2. Kernel Modules Loaded (138); C.1.3. Setup Process (139); C.2. LOCK_GULM, RLM External (142); C.2.1. Key Characteristics (142); C.2.2. ...
  • Page 9: Introduction

    GFS configurations. HTML and PDF versions of all the official Red Hat Enterprise Linux manuals and release notes are available online at http://www.redhat.com/docs/. 1. Audience This book is intended primarily for Linux system administrators who are familiar with the following activities: Linux system administration procedures, including kernel configuration...
  • Page 10 Introduction [key] A key on the keyboard is shown in this style. For example: To use [Tab] completion, type in a character and then press the [Tab] key. Your terminal displays the list of files in the directory that start with that letter. [key]-[combination] A combination of keystrokes is represented in this way.
  • Page 11 Introduction user input Text that the user has to type, either on the command line, or into a text box on a GUI screen, is displayed in this style. In the following example, text is displayed in this style: To boot your system into the text based installation program, you must type in the text com- mand at the prompt.
  • Page 12: More To Come

    If you can not complete registration during the Setup Agent (which requires network access), you can alternatively complete the Red Hat registration process online at http://www.redhat.com/register/. 4.1. Provide a Red Hat Login...
  • Page 13: Provide Your Subscription Number

    You can provide your subscription number when prompted during the Setup Agent or by visiting http://www.redhat.com/register/. 4.3. Connect Your System The Red Hat Network Registration Client helps you connect your system so that you can begin to get updates and perform systems management.
  • Page 14 Introduction Topic Reference Comment Blueprints for High Applications and High Provides a summary of best Availability: Designing Availability practices in high availability. Resilient Distributed Systems by E. Marcus and H. Stern. Wiley, 2000. Table 1. Recommended References Table...
  • Page 15: Red Hat Gfs Overview

    Chapter 1. Red Hat GFS Overview Red Hat GFS is a cluster file system that provides data sharing among Linux-based computers. GFS provides a single, consistent view of the file system name space across all nodes in a cluster. It allows applications to install and run without much knowledge of the underlying storage infrastructure.
  • Page 16: Performance, Scalability, And Economy

    Chapter 1. Red Hat GFS Overview Initial-configuration druid via Red Hat Cluster Suite — When GFS is installed with Red Hat Cluster • Suite, a configuration druid is available with Cluster Suite for initial configuration of GFS. For more information about the druid, refer to the Cluster Suite documentation. New and Changed Features with Red Hat GFS 6.0 for Red Hat Enterprise Linux 3 Update 5 Enhanced performance and changes to the...
  • Page 17: Performance, Scalability, Moderate Price

    Chapter 1. Red Hat GFS Overview Applications FC or iSCSI Fabric Shared Files Figure 1-1. GFS with a SAN 1.2.2. Performance, Scalability, Moderate Price Multiple Linux client applications on a LAN can share the same SAN-based data as shown in Figure 1-2.
  • Page 18: Economy And Performance

    Chapter 1. Red Hat GFS Overview Clients Applications GNBD servers Fabric Shared Files Figure 1-2. GFS and GNBD with a SAN 1.2.3. Economy and Performance Figure 1-3 shows how Linux client applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices.
  • Page 19: Gfs Functions

    Chapter 1. Red Hat GFS Overview Clients Applications GNBD servers Shared Files Disk Disk Disk Disk Disk Disk Figure 1-3. GFS and GNBD with Directly Connected Storage 1.3. GFS Functions GFS is a native file system that interfaces directly with the VFS layer of the Linux kernel file-system interface.
  • Page 20: Cluster Management, Fencing, And Recovery

    Chapter 1. Red Hat GFS Overview 1.3.2. Lock Management A lock management mechanism is a key component of any cluster file system. The Red Hat GFS lock-management mechanism provides the following lock managers: Single Lock Manager (SLM) — A simple centralized lock manager that can be configured to run •...
  • Page 21: Gfs Software Subsystems

    Chapter 1. Red Hat GFS Overview 1.4. GFS Software Subsystems GFS consists of the following subsystems: Cluster Configuration System (CCS) • Fence • Pool • LOCK_GULM • LOCK_NOLOCK • Table 1-1 summarizes the GFS Software subsystems and their components. Software Subsystem Components Description Cluster Configuration...
  • Page 22 Chapter 1. Red Hat GFS Overview Software Subsystem Components Description Fence agent for manual interaction. fence_manual WARNING: Manual fencing should not be used in a production environment. Manual fencing depends on human intervention whenever a node needs recovery. Cluster operation is halted during the intervention. User interface for agent.
  • Page 23: Before Configuring Gfs

    Chapter 1. Red Hat GFS Overview Software Subsystem Components Description Command that repairs an unmounted GFS gfs_fsck file system. GNBD Kernel module that implements the GNBD gnbd.o device driver on clients. Kernel module that implements the GNBD gnbd_serv.o server. It allows a node to export local storage over the network.
  • Page 24 Chapter 1. Red Hat GFS Overview GNBD server nodes If you are using GNBD, determine how many GNBD server nodes are needed. Note the hostname and IP address of each GNBD server node for use in configuration files later. Fencing method Determine the fencing method for each GFS node.
  • Page 25: System Requirements

    Chapter 2. System Requirements This chapter describes the system requirements for Red Hat GFS 6.0 and consists of the following sections: Section 2.1 Platform Requirements • Section 2.2 TCP/IP Network • Section 2.3 Fibre Channel Storage Network • Section 2.4 Fibre Channel Storage Devices •...
  • Page 26: Fibre Channel Storage Devices

    Chapter 2. System Requirements Requirement Description HBA (Host Bus Adapter) One HBA minimum per GFS node Connection method Fibre Channel switch Note: If an FC switch is used for I/O fencing nodes, you may want to consider using Brocade, McData, or Vixel FC switches, for which GFS fencing agents exist.
  • Page 27: Installing Gfs

    Chapter 3. Installing GFS This chapter describes how to install GFS and includes the following sections: Section 3.1 Prerequisite Tasks • Section 3.2 Installation Tasks • Section 3.2.1 Installing GFS RPMs • Section 3.2.2 Loading the GFS Kernel Modules • Note For information about installing and using GFS with Red Hat Cluster Suite, refer to Appendix A Using Red Hat GFS with Red Hat Cluster Suite.
  • Page 28: Specifying A Persistent Major Number

    Chapter 3. Installing GFS 3.1.1.2. Clock Synchronization Software Make sure that each GFS node is running clock synchronization software. The system clocks in GFS nodes need to be within a few minutes of each other to prevent unnecessary inode time-stamp updates. If the node clocks are not synchronized, the inode time stamps will be updated unnecessarily, severely impacting cluster performance.
  • Page 29: Loading The Gfs Kernel Modules

    Chapter 3. Installing GFS 3.2.1. Installing GFS RPMs — Installing GFS RPMs consists of acquiring and installing two GFS RPMs: the GFS tools RPM (for example, GFS-6.0.2.20-1.i686.rpm) and the GFS kernel-modules RPM (for example, GFS-modules-smp-6.0.2.20-1.i686.rpm). Note: You must install the GFS tools RPM before installing the GFS kernel-modules RPM. To install GFS RPMs, follow these steps: 1.
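    A minimal install sequence, assuming the example RPM file names above and the standard rpm tool (install the tools RPM first):
        rpm -Uvh GFS-6.0.2.20-1.i686.rpm
        rpm -Uvh GFS-modules-smp-6.0.2.20-1.i686.rpm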
  • Page 30 Chapter 3. Installing GFS — Note: The GFS kernel modules must be loaded into a GFS node each time the node is started. It is recommended that you use the init.d scripts included with GFS to automate loading the GFS kernel modules.
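    As a sketch, the modules can also be loaded by hand; this assumes the module names listed in Appendix C (pool, lock_gulm, gfs) and a standard modprobe setup:
        depmod -a
        modprobe pool
        modprobe lock_gulm
        modprobe gfs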
  • Page 31: Initial Configuration

    Chapter 4. Initial Configuration This chapter describes procedures for initial configuration of GFS and contains the following sections: Section 4.1 Prerequisite Tasks • Section 4.2 Initial Configuration Tasks • Section 4.2.1 Setting Up Logical Devices • Section 4.2.2 Setting Up and Starting the Cluster Configuration System •...
  • Page 32: Setting Up Logical Devices

    Chapter 4. Initial Configuration Note GFS kernel modules must be loaded prior to performing initial configuration tasks. Refer to Section 3.2.2 Loading the GFS Kernel Modules. Note For examples of GFS configurations, refer to Appendix C Basic GFS Examples. The following sections describe the initial configuration tasks. 4.2.1.
  • Page 33: Starting Clustering And Locking Systems

    Chapter 4. Initial Configuration 4.2.2. Setting Up and Starting the Cluster Configuration System To set up and start the Cluster Configuration System, follow these steps: 1. Create CCS configuration files and place them into a temporary directory. Refer to Chapter 6 Creating the Cluster Configuration System Files. 2.
  • Page 34 Chapter 4. Initial Configuration — gfs_mkfs -p lock_gulm -t ClusterName:FSName -j NumberJournals BlockDevice. 2. At each node, mount the GFS file systems. Refer to Section 9.2 Mounting a File System. Command usage: mount -t gfs BlockDevice MountPoint; mount -t gfs -o acl BlockDevice MountPoint. The -o acl mount option allows manipulating file ACLs.
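    For example, using the cluster, pool, and mount-point names from Appendix C, a three-journal file system could be created and mounted as follows:
        gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01
        mount -t gfs /dev/pool/pool_gfs01 /gfs01
        mount -t gfs -o acl /dev/pool/pool_gfs01 /gfs01      # alternative: mount with ACL support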
  • Page 35: Using The Pool Volume Manager

    Chapter 5. Using the Pool Volume Manager This chapter describes the GFS volume manager — named Pool — and its commands. The chapter consists of the following sections: Section 5.1 Overview of GFS Pool Volume Manager • Section 5.2 Synopsis of Pool Management Commands •...
  • Page 36: Pool_Tool

    Chapter 5. Using the Pool Volume Manager 5.2. Synopsis of Pool Management Commands Four commands are available to manage pools: • pool_tool • pool_assemble • pool_info • pool_mp The following sections briefly describe the commands and provide references to other sections in this chapter, where more detailed information about the commands and their use is described.
  • Page 37: Pool_Info

    Chapter 5. Using the Pool Volume Manager — Flag/Option: Be quiet (do not display output from the command); Display command version information, then exit; Verbose operation. Table 5-2. pool_tool Command Options. 5.2.2. pool_assemble — The pool_assemble command activates and deactivates pools on a system (refer to Table 5-3 and Table 5-4).
  • Page 38: Pool_Mp

    Chapter 5. Using the Pool Volume Manager Flag Function Section/Page Reference Section 5.12 Displaying Pool Volume Information Display information Section 5.13 Using Pool Volume Statistics Display statistics Display an active Section 5.7 Displaying Pool Configuration Information configuration Table 5-5. Command Functions pool_info Flag Option...
  • Page 39: Scanning Block Devices

    Chapter 5. Using the Pool Volume Manager — Flag/Option: Verbose operation. Table 5-8. pool_mp Command Options. 5.3. Scanning Block Devices — Scanning block devices provides information about the availability and characteristics of the devices. That information is important for creating a pool configuration file. You can scan block devices by issuing the pool_tool command with the scan option.
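    A short sketch of a device scan, assuming -s is the scan option referred to above; the output lists each block device and its current label or contents:
        pool_tool -s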
  • Page 40: Creating A Configuration File For A New Volume

    Chapter 5. Using the Pool Volume Manager
        /dev/sdd8  <- unknown ->
        /dev/hda   <- partition information ->
        /dev/hda1  <- EXT2/3 filesystem ->
        /dev/hda2  <- swap device ->
        /dev/hda3  <- EXT2/3 filesystem ->
    5.4. Creating a Configuration File for a New Volume — A pool configuration file is used as input to the pool_tool command when creating or growing a pool.
  • Page 41: Examples

    Chapter 5. Using the Pool Volume Manager File Line and Variable Description Keyword subpool The details of each subpool: id stripe devices [type] id is the subpool identifier. Number the subpools in order beginning with 0. stripe is the stripe size in sectors (512 bytes per sector) for each device.
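    A minimal sketch of a pool configuration file for a single-device, single-subpool volume, using the keywords described above (the pool name, the subpools count line, and the device path are illustrative assumptions; see Table 5-9 for the full keyword list):
        poolname pool_gfs01
        subpools 1
        subpool 0 0 1
        pooldevice 0 0 /dev/sda2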
  • Page 42: Usage

    Chapter 5. Using the Pool Volume Manager 5.5. Creating a Pool Volume — Once a configuration file has been created and edited (refer to Section 5.4 Creating a Configuration File for a New Volume), a pool volume can be created using the pool_tool command. Because the pool_tool command writes labels to the beginning of the devices or partitions, the new pool volume's devices or partitions must be accessible to the system.
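    For example, assuming the configuration file above is saved as pool_gfs01.cf, the pool label can be written with:
        pool_tool -c pool_gfs01.cf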
  • Page 43: Usage

    Chapter 5. Using the Pool Volume Manager — The pool_assemble command only activates pools from devices visible to the node (those listed in /proc/partitions/). Disk labels created by the pool_tool command are scanned to determine which pools exist and, as a result, should be activated. The pool_assemble command also creates device nodes in the ...
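    For example, a node's visible pools can be activated after boot and deactivated before shutdown (as also shown in Appendix C):
        pool_assemble -a      # activate all pools visible to this node
        pool_assemble -r      # deactivate (release) the pools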
  • Page 44: Displaying Pool Configuration Information

    Chapter 5. Using the Pool Volume Manager — Note: You can use the GFS init.d scripts included with GFS to automate activating and deactivating pools. For more information about GFS init.d scripts, refer to Chapter 12 Using GFS init.d Scripts. 5.7. Displaying Pool Configuration Information — Using the pool_tool command with the print option displays pool configuration information in ...
  • Page 45: Example Procedure

    Chapter 5. Using the Pool Volume Manager — ConfigFile: Specifies the file describing the extended pool. Note: The pool_tool command supersedes the pool_grow command as of GFS 5.2. Although the pool_grow command is still available, it is not supported in GFS 5.2 and later. 5.8.2.
  • Page 46: Usage

    Chapter 5. Using the Pool Volume Manager 5.9. Erasing a Pool Volume — A deactivated pool can be erased by using the -e option of the pool_tool command. Using pool_tool -e erases the disk labels written when the pool was created. 5.9.1. Usage: pool_tool -e [PoolName]. PoolName specifies the pool to erase.
  • Page 47: Example

    Chapter 5. Using the Pool Volume Manager — Note: You must deactivate a pool before renaming it. You can deactivate a pool with the pool_assemble -r PoolName command or by using the pool init.d script. For more information about GFS init.d scripts, refer to Chapter 12 Using GFS init.d Scripts.
  • Page 48: Comments

    Chapter 5. Using the Pool Volume Manager 5.11.3. Comments Before changing a pool volume minor number, deactivate the pool. For this command to take effect throughout the cluster, you must reload the pools on each node in the cluster by issuing command followed by a pool_assemble PoolName...
  • Page 49: Using Pool Volume Statistics

    Chapter 5. Using the Pool Volume Manager pool_info -v This example displays complete information about pool0: pool_info -v pool0 5.13. Using Pool Volume Statistics command can be used to display pool read-write information and to clear statistics pool_info from pools. Using the command with the option displays the number of reads and writes for the...
  • Page 50: Usage

    Chapter 5. Using the Pool Volume Manager 5.14. Adjusting Pool Volume Multipathing command adjusts multipathing for running pools. Using the command with pool_mp pool_mp option, you can change the type of multipathing. Using the command with the pool_mp option, you can reintegrate failed paths. 5.14.1.
  • Page 51: Creating The Cluster Configuration System Files

    Chapter 6. Creating the Cluster Configuration System Files — The GFS Cluster Configuration System (CCS) requires the following files: cluster.ccs — the cluster file; it contains the name of the cluster and the names of the nodes where LOCK_GULM servers are run; ...
  • Page 52: Dual Power And Multipath Fc Fencing Considerations

    Chapter 6. Creating the Cluster Configuration System Files 6.2. CCS File Creation Tasks — To create the CCS files, perform the following steps: 1. Create the cluster.ccs file. 2. Create the fence.ccs file. 3. Create the nodes.ccs file. Note: The contents of CCS files are case sensitive. 6.3.
  • Page 53: File

    Chapter 6. Creating the Cluster Configuration System Files — Warning: Do not specify the GNBD fencing agent (fence_gnbd) as a fencing device for the GNBD server nodes. If you specify fence_gnbd as a fence device for a GFS node using GNBD multipath, ...
  • Page 54 Chapter 6. Creating the Cluster Configuration System Files 4. (Optional) For the heartbeat rate ( ), specify Seconds. Refer to heartbeat_rate = Example 6-1. The Seconds parameter in combination with the Number parameter spec- allowed_misses ify the amount of time for node failure detection as follows: Seconds x (Number+1) = Time (in seconds) 5.
  • Page 55 Chapter 6. Creating the Cluster Configuration System Files
        cluster {
          name = "alpha"
          lock_gulm {
            servers = ["n01", "n02", "n03"]
            heartbeat_rate = 20
            allowed_misses = 3
          }
        }
    Example 6-1. cluster.ccs. 6.6. Creating the fence.ccs File — You can configure each node in a GFS cluster for a variety of fencing devices. To configure fencing for a node, you need to perform the following tasks: Create the fence.ccs file — ...
  • Page 56 Chapter 6. Creating the Cluster Configuration System Files To create the file, follow these steps: fence.ccs 1. Create a file named . Use a file format according to the fencing method as follows. fence.ccs Refer to Table 6-2 for syntax description. APC MasterSwitch —...
  • Page 57 Chapter 6. Creating the Cluster Configuration System Files Note Do not use to fence GNBD server nodes. fence_gnbd For descriptions of those parameters refer to Table 6-2. Refer to Example 6-7 for a file that specifies a GNBD fencing device for a configuration that does not fence.ccs employ GNBD multipath.
  • Page 58 Chapter 6. Creating the Cluster Configuration System Files fence_devices{ DeviceName { agent = "fence_wti" ipaddr = " IPAddress" passwd = " LoginPassword" DeviceName { Figure 6-3. File Structure: fence_devices fence_wti fence_devices{ DeviceName { agent = "fence_brocade" ipaddr = "IPAddress" login = "LoginName" passwd = "LoginPassword"...
  • Page 59 Chapter 6. Creating the Cluster Configuration System Files fence_devices{ DeviceName { agent = "fence_vixel" ipaddr = "IPAddress" passwd = "LoginPassword" DeviceName { Figure 6-6. File Structure: fence_devices fence_vixel fence_devices{ DeviceName { agent = "fence_gnbd" server = "ServerName" server = "ServerName" DeviceName { Figure 6-7.
  • Page 60 Chapter 6. Creating the Cluster Configuration System Files fence_devices{ DeviceName { agent = "fence_rib" hostname = "HostName" login = "LoginName" passwd = "LoginPassword" DeviceName { Figure 6-9. File Structure: fence_devices fence_rib fence_devices{ DeviceName { agent = "fence_xcat" rpower = "RpowerBinaryPath" DeviceName { Figure 6-10.
  • Page 61: File

    Chapter 6. Creating the Cluster Configuration System Files Warning Manual fencing should not be used in a production environment. Manual fencing depends on human intervention whenever a node needs recovery. Cluster operation is halted during the intervention. Parameter Description For Egenera BladeFrame fencing device: The name of an Egenera CserverName control blade, the Egenera control blade with which the fence agent communicates via ssh.
  • Page 62 Chapter 6. Creating the Cluster Configuration System Files fence_devices { apc1 { agent = "fence_apc" ipaddr = "10.0.3.3" login = "apc" passwd = "apc" apc2 { agent = "fence_apc" ipaddr = "10.0.3.4" login = "apc" passwd = "apc" Example 6-2. APC MasterSwitch Fencing Devices Named apc1 apc2 fence_devices {...
  • Page 63 Chapter 6. Creating the Cluster Configuration System Files fence_devices { mdfc1 { agent = "fence_mcdata" ipaddr = "10.0.3.3" login = "admin" passwd = "password" mdfc2 { agent = "fence_mcdata" ipaddr = "10.0.3.4" login = "admin" passwd = "password" Example 6-5. McData FC-Switch Fencing Devices Named mdfc1 mdfc2 fence_devices {...
  • Page 64 Chapter 6. Creating the Cluster Configuration System Files fence_devices { gnbdmp { agent = "fence_gnbd" server = "nodea" server = "nodeb" <-- Additional entry option = "multipath" <-- Number of retries set to 5 retrys = "5" <-- Wait time between retries set to 3 wait_time = "3"...
  • Page 65 Chapter 6. Creating the Cluster Configuration System Files fence_devices { admin { agent = "fence_manual" Example 6-12. Manual Fencing Device Named admin Warning Manual fencing should not be used in a production environment. Manual fencing depends on human intervention whenever a node needs recovery. Cluster operation is halted during the intervention. 6.7.
  • Page 66 Chapter 6. Creating the Cluster Configuration System Files To create the file, follow these steps: nodes.ccs 1. Create a file named nodes.ccs a. If you are configuring a node for one fencing method (not cascaded), specify only one fencing method per node in the file.
  • Page 67 Chapter 6. Creating the Cluster Configuration System Files Note Make sure that you specify Nodename as the Linux hostname and that the primary IP address of the node is associated with the hostname. Specifying NodeName other than the Linux host- name (for example the interface name) can cause unpredictable results —...
  • Page 68 Chapter 6. Creating the Cluster Configuration System Files 4. Save the file. nodes.ccs nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for APC MethodName MasterSwitch fencing method, for DeviceName...
  • Page 69 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName { NodeName { File format for node ip_interfaces { identification (same format for all nodes) IFNAME = "IPAddress " fence { MethodName { File format for APC DeviceName { Fencing device MasterSwitch for pwr supply 1...
  • Page 70 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName WTI NPS fencing method DeviceName port = PortNumber NodeName Figure 6-15.
  • Page 71 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for Brocade, McData, MethodName or Vixel FC-Switch fencing method DeviceName port = PortNumber DeviceName...
  • Page 72 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName GNBD fencing method DeviceName ipaddr = " IPAddress " NodeName Figure 6-17.
  • Page 73 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName HP RILOE fencing method DeviceName localport = PortNumber NodeName Figure 6-18.
  • Page 74 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName xCAT fencing method DeviceName nodename = “NodelistName”...
  • Page 75 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName Egenera BladeFrame fencing method DeviceName lpan = “LPANName” pserver = “PserverName”...
  • Page 76 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName manual fencing method DeviceName ipaddr = " IPAddress " NodeName Figure 6-21.
  • Page 77 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName { NodeName { Node that will use ip_interfaces { cascaded fencing methods IFName = "IPAddress" fence { MethodName { DeviceName { Fencing method 1 #Device-specific parameter(s) Cascades to next if fencing fails MethodName { Fencing method 2...
  • Page 78 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName { ip_interfaces { <-- Must be an IP address; not a name IFNAME="IPAddress" <-- Optional parameter usedev usedev = "NamedDevice" fence { NodeName { Figure 6-23. File Structure: Optional Parameter usedev Parameter...
  • Page 79 Chapter 6. Creating the Cluster Configuration System Files Parameter Description Used with . NamedDevice indicates that the IP address is NamedDevice usedev , and not by the IP address specified by the optional parameter usedev pulled from . The and NamedDevice libresolv usedev parameters are available with Red Hat GFS 6.0 for Red Hat Enterprise...
  • Page 80 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { power { apc1 { port = 6 switch = 2 n02 { Example 6-13. Node Defined for APC Fencing, Single Power Supply nodes { n01 { ip_interfaces {...
  • Page 81 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { power { wti1 { port = 1 n02 { Example 6-15. Node Defined for WTI NPS Fencing nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1"...
  • Page 82 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { san { mdfc1 { port = 3 mdfc2 { <--- Additional fencing device, for additional path to FC storage port = 4 n02 { Example 6-17.
  • Page 83 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { server { gnbd { ipaddr = "10.0.1.1" n02 { Example 6-19. Node Defined for GNBD Fencing nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1"...
  • Page 84 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { blade-center { xcat { nodename = "blade-01" n02 { ip_interfaces { hsi0 = "10.0.0.2" fence { blade-center { xcat { nodename = "blade-02" n03 { Example 6-21.
  • Page 85 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { blade-center { egenera { lpan = "opsgroup" pserver = "ops-1 n02 { ip_interfaces { hsi0 = "10.0.0.2" fence { blade-center { egenera { lpan = "opsgroup"...
  • Page 86 Chapter 6. Creating the Cluster Configuration System Files Warning Manual fencing should not be used in a production environment. Manual fencing depends on human intervention whenever a node needs recovery. Cluster operation is halted during the intervention. nodes { n01 { ip_interfaces { eth0 = "10.0.1.21"...
  • Page 87 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { power { <------------- APC MasterSwitch fencing device apc1 { port = 6 switch = 2 n02 { Example 6-25. GNBD Server Node Defined for APC Fencing, Single Power Supply nodes { n01 { ip_interfaces {...
  • Page 89: Using The Cluster Configuration System

    Chapter 7. Using the Cluster Configuration System This chapter describes how to use the cluster configuration system (CCS) and consists of the following sections: Section 7.1 Creating a CCS Archive • Section 7.2 Starting CCS in the Cluster • Section 7.3 Using Other CCS Administrative Options •...
  • Page 90: Example

    Chapter 7. Using the Cluster Configuration System — CCADevice: Specifies the name of the CCA device. 7.1.2. Example — In this example, the name of the cluster is alpha, and the name of the pool is /dev/pool/alpha_cca. The CCS configuration files in the /root/alpha/ directory are used to create the archive.
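    Using the names from this example, the archive would be created with a command of the form shown in Appendix C:
        ccs_tool create /root/alpha/ /dev/pool/alpha_cca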
  • Page 91: Example

    Chapter 7. Using the Cluster Configuration System — Note: You can use the ccsd init.d script included with GFS to automate starting and stopping ccsd. For more information about GFS init.d scripts, refer to Chapter 12 Using GFS init.d Scripts. 7.2.2.
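    For example, ccsd is started on each node with the CCA device specified, as shown in Appendix C:
        ccsd -d /dev/pool/alpha_cca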
  • Page 92: Listing Files In A Ccs Archive

    Chapter 7. Using the Cluster Configuration System 7.3.2. Listing Files in a CCS Archive The CCS configuration files contained within a CCS archive can be listed by using the list ccs_tool command. 7.3.2.1. Usage ccs_tool list CCADevice CCADevice Specifies the name of the CCA device. 7.3.2.2.
  • Page 93: Example Procedure

    Chapter 7. Using the Cluster Configuration System 7.4. Changing CCS Configuration Files Based on the LOCK_GULM locking protocol, the following list defines what can or cannot be changed in a CCS archive while a cluster is running. There are no restrictions to making changes to configuration files when the cluster is offline.
  • Page 94 Chapter 7. Using the Cluster Configuration System Steps related to CCS in the setup procedure must be modified to use a CCS server in place of a CCA device. Note provides information to any computer that can connect to it. Therefore, should ccs_servd ccs_servd...
  • Page 95 Chapter 7. Using the Cluster Configuration System 7.5.1.2.1. Usage ccs_servd ccs_servd -p Path Path Specifies an alternative location of CCA files. 7.5.1.2.2. Examples This example shows starting the CCS server normally; that is, using the default location for CCA files. ccs_servd This example shows starting the CCS server using a user-defined location for CCA files.
  • Page 96: Local Cca Files

    Chapter 7. Using the Cluster Configuration System 7.5.2. Local CCA Files An alternative to both a CCA device and a CCS server is to replicate CCA files on all cluster nodes. Note Care must be taken to keep all the copies identical. A CCA file is created using the same steps as for a CCS server.
  • Page 97 Chapter 7. Using the Cluster Configuration System Note When you update a CCS archive, update the shared-device archive first, then update the local archives. Be sure to keep the archives synchronized .
  • Page 99: Using Clustering And Locking Systems

    Chapter 8. Using Clustering and Locking Systems This chapter describes how to use the clustering and locking systems available with GFS, and consists of the following sections: Section 8.1 Locking System Overview • Section 8.2 LOCK_GULM • Section 8.3 LOCK_NOLOCK •...
  • Page 100: Number Of Lock_Gulm Servers

    Chapter 8. Using Clustering and Locking Systems — For optimal performance, lock_gulmd servers should be run on dedicated nodes; however, they can also be run on nodes using GFS. All nodes, including those only running lock_gulmd, must be listed in the nodes.ccs configuration file (nodes.ccs:nodes).
  • Page 101: Lock_Nolock

    Chapter 8. Using Clustering and Locking Systems Caution Shutting down one of multiple redundant LOCK_GULM servers may result in suspension of cluster operation if the remaining number of servers is half or less of the total number of servers listed in the cluster.ccs file cluster.ccs:lock_gulm/servers 8.2.5.1.
  • Page 103: Managing Gfs

    Chapter 9. Managing GFS This chapter describes the tasks and commands for managing GFS and consists of the following sections: Section 9.1 Making a File System • Section 9.2 Mounting a File System • Section 9.3 Unmounting a File System •...
  • Page 104: Examples

    Chapter 9. Managing GFS Warning Make sure that you are very familiar with using the LockProtoName and LockTableName parame- ters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption. LockProtoName Specifies the name of the locking protocol (typically ) to use.
  • Page 105: Mounting A File System

    Chapter 9. Managing GFS Flag Parameter Description Sets the file system block size to BlockSize. BlockSize Default block size is 4096 bytes. Enables debugging output. Help. Displays available options, then exits. Specifies the size of the journal in megabytes. Default MegaBytes journal size is 128 megabytes.
  • Page 106: Usage

    Chapter 9. Managing GFS — (Chapter 4 Initial Configuration). After those requirements have been met, you can mount the GFS file system as you would any Linux file system. To manipulate file ACLs, you must mount the file system with the -o acl mount option.
  • Page 107: Unmounting A File System

    Chapter 9. Managing GFS Option Description Allows manipulating file ACLs. If a file system is mounted without the mount option, users are allowed to view ACLs (with ), but are not getfacl allowed to set them (with setfacl LOCK_GULM file systems use this information to set hostdata=nodename the local node name, overriding the usual selection of node name from...
  • Page 108: Usage

    Chapter 9. Managing GFS 9.3.1. Usage umount MountPoint MountPoint Specifies the directory where the GFS file system should be mounted. 9.4. GFS Quota Management File system quotas are used to limit the amount of file-system space a user or group can use. A user or group does not have a quota limit until one is set.
  • Page 109: Displaying Quota Limits And Usage

    Chapter 9. Managing GFS Group A group ID to limit or warn. It can be either a group name from the group file or the GID number. Size Specifies the new value to limit or warn. By default, the value is in units of megabytes. The additional flags change the units to kilobytes, sectors, and file-system blocks, respectively.
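    A sketch of setting and displaying quotas, assuming the gfs_quota actions summarized above (the user, group, sizes, and mount point are illustrative):
        gfs_quota limit -u tom -l 1024 -f /gfs      # limit user tom to 1024 MB
        gfs_quota warn -g devel -l 512 -f /gfs      # warn group devel at 512 MB
        gfs_quota list -f /gfs                      # display limits and usage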
  • Page 110 Chapter 9. Managing GFS MountPoint Specifies the GFS file system to which the actions apply. 9.4.2.2. Command Output GFS quota information from the command is displayed as follows: gfs_quota user User: limit:LimitSize warn:WarnSize value:Value group Group: limit:LimitSize warn:WarnSize value:Value The LimitSize, WarnSize, and Value numbers (values) are in units of megabytes by default. Adding the , or flags to the command line change the units to kilobytes, sectors, or file...
  • Page 111: Synchronizing Quotas

    Chapter 9. Managing GFS 9.4.3. Synchronizing Quotas GFS stores all quota information in its own internal file on disk. A GFS node does not update this quota file for every file-system write; rather, it updates the quota file once every 60 seconds. This is necessary to avoid contention among nodes writing to the quota file, which would cause a slowdown in performance.
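    A sketch of forcing an immediate quota synchronization, assuming the sync action takes the mount point with -f:
        gfs_quota sync -f /gfs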
  • Page 112: Disabling/Enabling Quota Enforcement

    Chapter 9. Managing GFS 9.4.4. Disabling/Enabling Quota Enforcement — Enforcement of quotas can be disabled for a file system without clearing the limits set for all users and groups. Enforcement can also be enabled. Disabling and enabling of quota enforcement is done by changing a tunable parameter, quota_enforce, with the gfs_tool command.
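    A sketch of toggling enforcement with the settune action, assuming the tunable name quota_enforce (the mount point is illustrative):
        gfs_tool settune /gfs quota_enforce 0      # disable enforcement
        gfs_tool settune /gfs quota_enforce 1      # re-enable enforcement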
  • Page 113: Growing A File System

    Chapter 9. Managing GFS — MountPoint: Specifies the GFS file system to which the actions apply. quota_account {0|1}: 0 = disabled, 1 = enabled. 9.4.5.2. Comments — To enable quota accounting on a file system, the quota_account parameter must be set back to 1. Afterward, the GFS quota file must be initialized to account for all current disk usage for users and groups on the file system.
  • Page 114: Usage

    Chapter 9. Managing GFS 9.5.1. Usage gfs_grow MountPoint MountPoint Specifies the GFS file system to which the actions apply. 9.5.2. Comments Before running the command: gfs_grow Back up important data on the file system. • Display the pool volume that is used by the file system to be expanded by running a •...
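    For example, after the underlying pool has been expanded, the file system can be test-grown and then grown in place (the test flag is described on the next page; the mount point is illustrative):
        gfs_grow -T /gfs1      # calculate only; write no data
        gfs_grow /gfs1         # expand the file system to fill the device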
  • Page 115: Adding Journals To A File System

    Chapter 9. Managing GFS Option Description Help. Display a short usage message, then exist. Quiet. Turn down the verbosity level. Test. Do all calculations, but do not write any data to the disk and do not expand the file system. Display command version information, then exit.
  • Page 116: Examples

    Chapter 9. Managing GFS 9.6.3. Examples In this example, one journal is added to the file system on the directory. /gfs1/ gfs_jadd -j1 /gfs1 In this example, two journals are added to the file system on the directory. /gfs1/ gfs_jadd -j2 /gfs1 In this example, the current state of the file system on the directory can be checked for the /gfs1/...
  • Page 117: Direct I/O

    Chapter 9. Managing GFS 9.7. Direct I/O Direct I/O is a feature of the file system whereby file reads and writes go directly from the applications to the storage device, bypassing the operating system read and write caches. Direct I/O is used by only a few applications that manage their own caches, such as databases.
  • Page 118: Gfs Directory Attribute

    Chapter 9. Managing GFS 9.7.2.2. Example — In this example, the gfs_tool command sets the directio flag on the file named datafile in directory /gfs1/: gfs_tool setflag directio /gfs1/datafile. 9.7.3. GFS Directory Attribute — The gfs_tool command can be used to assign a direct I/O attribute flag, inherit_directio, to a ...
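    For example, the directory attribute can be set so that files subsequently created in a directory use direct I/O (the directory path is illustrative):
        gfs_tool setflag inherit_directio /gfs1/data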
  • Page 119: Usage

    Chapter 9. Managing GFS — ...tory (and all its subdirectories). Existing files with zero length can also have data journaling turned on or off. Using the gfs_tool command, data journaling is enabled on a directory (and all its subdirectories) or on a zero-length file by setting the inherit_jdata or jdata attribute flags on the directory or ...
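    A sketch of enabling data journaling on a directory and on an existing zero-length file, using the attribute flags named above (paths illustrative):
        gfs_tool setflag inherit_jdata /gfs1/logs
        gfs_tool setflag jdata /gfs1/logs/empty_file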
  • Page 120: Mount With

    Chapter 9. Managing GFS — Note: For more information about ctime, mtime, and atime updates, refer to the stat(2) man page. If atime updates are enabled, as they are by default on GFS and other Linux file systems, then every time a file is read, its inode needs to be updated. Because few applications use the information provided by atime, those updates can require a significant ...
  • Page 121: Suspending Activity On A File System

    Chapter 9. Managing GFS — By using the gettune action flag of the gfs_tool command, all current tunable parameters, including atime_quantum (default is 3600 seconds), are displayed. The gfs_tool settune command is used to change the atime_quantum parameter value. It must be set on each node and each time the file system is mounted. (The setting is not persistent across unmounts.) 9.9.2.1.
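    For example, the current tunables can be listed and atime_quantum raised to one day; the change must be repeated on each node after every mount:
        gfs_tool gettune /gfs
        gfs_tool settune /gfs atime_quantum 86400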
  • Page 122: Examples

    Chapter 9. Managing GFS 9.10.1. Usage Freeze Activity gfs_tool MountPoint freeze Unfreeze Activity gfs_tool MountPoint unfreeze MountPoint Specifies the file system to freeze or unfreeze. 9.10.2. Examples This example freezes file system /gfs gfs_tool freeze /gfs This example unfreezes file system /gfs gfs_tool unfreeze /gfs 9.11.
  • Page 123: Examples

    Chapter 9. Managing GFS File Specifies the file from which to get information. command provides additional action flags (options) not listed in this section. For more gfs_tool information about other action flags, refer to the man page. gfs_tool gfs_tool 9.11.2. Examples This example reports extended file system usage about file system /gfs gfs_tool df /gfs...
  • Page 124: Example

    Chapter 9. Managing GFS — This flag causes all questions to be answered with yes. With the flag specified, gfs_fsck does not prompt you for an answer before making changes. BlockDevice: Specifies the block device where the GFS file system resides. 9.12.2. Example — In this example, the GFS file system residing on the block device is repaired.
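    A sketch of a repair run, assuming the -y (answer yes) flag described above and a pool device name like those in Appendix C; the file system must be unmounted on all nodes first:
        gfs_fsck -y /dev/pool/pool_gfs01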
  • Page 125: Example

    Chapter 9. Managing GFS LinkName Specifies a name that will be seen and used by applications and will be followed to get to one of the multiple real files or directories. When LinkName is followed, the destination depends on the type of variable and the node or user doing the following. Variable Description This variable resolves to a real file or directory named with the...
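    A sketch of creating a context-dependent link with the @hostname variable described above (directory names illustrative); the example on the next page then exercises the link from three nodes:
        mkdir /gfs/n01 /gfs/n02 /gfs/n03
        ln -s @hostname /gfs/log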
  • Page 126: Shutting Down A Gfs Cluster

    Chapter 9. Managing GFS n01# touch /gfs/log/fileA n02# touch /gfs/log/fileB n03# touch /gfs/log/fileC n01# ls /gfs/log/ fileA n02# ls /gfs/log/ fileB n03# ls /gfs/log/ fileC 9.14. Shutting Down a GFS Cluster To cleanly shut down a GFS cluster, perform the following steps: 1.
  • Page 127 Chapter 9. Managing GFS Note The GFS kernel modules must be loaded prior to performing these steps. Refer to Section 3.2.2 Loading the GFS Kernel Modules for more information. 1. At each node, activate pools. Refer to Section 5.6 Activating/Deactivating a Pool Volume for more information.
  • Page 129: Using The Fencing System

    Chapter 10. Using the Fencing System Fencing (or I/O fencing) is the mechanism that disables an errant GFS node’s access to a file system, preventing the node from causing data corruption. This chapter explains the necessity of fencing, summarizes how the fencing system works, and describes each form of fencing that can be used in a GFS cluster.
  • Page 130: Apc Masterswitch

    Chapter 10. Using the Fencing System Fending Method Fencing Agent APC Network Power Switch fence_apc WTI Network Power Switch fence_wti Brocade FC Switch fence_brocade McData FC Switch fence_mcdata Vixel FC Switch fence_vixel HP RILOE fence_rib GNBD fence_gnbd xCAT fence_xcat Manual fence_manual Table 10-1.
  • Page 131: Wti Network Power Switch

    Chapter 10. Using the Fencing System 10.2.2. WTI Network Power Switch — WTI network power switches (NPSs) are used to power cycle nodes that need to be fenced. The fencing agent, fence_wti, logs into the device and reboots the specific port for the offline node. The fence_wti fencing agent does not support nodes with dual power supplies plugged into a WTI NPS.
  • Page 132: Hp Riloe Card

    Chapter 10. Using the Fencing System — Caution: Red Hat GFS does not support the following Vixel firmware: Vixel 7xxx series firmware versions 4.0 or later, Vixel 9xxx series firmware versions 6.0 or later. 10.2.5. HP RILOE Card — A GFS node that has an HP RILOE (Remote Insight Lights-Out Edition) card can be fenced with the fence_rib fencing agent.
  • Page 133 Chapter 10. Using the Fencing System — Upon seeing this message (by monitoring /var/log/messages or equivalent), an administrator must manually reset the node specified in the message. After the node is reset, the administrator must run the fence_ack_manual command to indicate to the system that the failed node has been reset.
  • Page 135: Using Gnbd

    Chapter 11. Using GNBD GNBD (Global Network Block Device) provides block-level storage access over an Ethernet LAN. GNBD components run as a client in a GFS node and as a server in a GNBD server node. A GNBD server node exports block-level storage from its local storage (either directly attached storage or SAN storage) to a GFS node.
  • Page 136 Chapter 11. Using GNBD Note A server should not import the GNBDs to use them as a client would. If a server exports the devices uncached, they may also be used by ccsd 11.1.1.1. Usage gnbd_export -d pathname -e gnbdname [ pathname Specifies a storage device to export.
  • Page 137: Importing A Gnbd On A Client

    Chapter 11. Using GNBD 11.1.1.2. Examples This example is for a GNBD server configured with GNBD multipath. It exports device /dev/sdc2 as GNBD . Cache is disabled by default. gamma gnbd_export -d /dev/sdc2 -e gamma This example is for a GNBD server not configured with GNBD multipath. It exports device as GNBD with cache enabled.
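    A sketch of importing the exported GNBDs on a GFS client, assuming gnbd_import takes the server name with -i (the server name is illustrative):
        gnbd_import -i nodea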
  • Page 138: Lock Server Startup

    Chapter 11. Using GNBD 11.2.1. Linux Page Caching — For GNBD multipath, do not specify Linux page caching (the caching option of the gnbd_export command). All GNBDs that are part of the pool must run with caching disabled. Data corruption occurs if the GNBDs are run with caching enabled.
  • Page 139: Fencing Gnbd Server Nodes

    Chapter 11. Using GNBD Node Deployment CCS File Location GFS dedicated GNBD, local, or FC-attached storage GFS with lock server Local or FC-attached storage only GNBD server dedicated Local or FC-attached storage only GNBD server with lock server Local or FC-attached storage only Lock server dedicated Local or FC-attached storage only Table 11-1.
  • Page 140 Chapter 11. Using GNBD 1. A GNBD server node must have local access to all storage devices needed to mount a GFS file system. The GNBD server node must not import ( command) other GNBD gnbd_import devices to run the file system. 2.
  • Page 141: Using Gfs Init.d Scripts

    Chapter 12. Using GFS Scripts init.d This chapter describes GFS scripts and consists of the following sections: init.d Section 12.1 GFS Scripts Overview • init.d Section 12.2 GFS Scripts Use • init.d 12.1. GFS Scripts Overview init.d The GFS scripts start GFS services during node startup and stop GFS services during node init.d shutdown.
  • Page 142 Chapter 12. Using GFS init.d Scripts 12.2. GFS init.d Scripts Use — The following example procedure demonstrates using the GFS init.d scripts to start GFS: 1. Install GFS on each node. 2. Load the pool module. Note: If you need to specify a persistent major number, edit /etc/modules.conf before loading.
  • Page 143 Chapter 12. Using GFS init.d Scripts — 12. Modify /etc/fstab to include GFS file systems. For example, here is part of an /etc/fstab file that includes the GFS file system trin1.gfs: /dev/pool/trin1.gfs /gfs gfs defaults 0 0. If you do not want a GFS file system to automatically mount on startup, add noauto to the options in the ...
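    A sketch of starting the GFS services by hand with the init.d scripts, assuming the standard script names shipped with GFS (pool, ccsd, lock_gulmd, gfs):
        service pool start
        service ccsd start
        service lock_gulmd start
        service gfs start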
  • Page 145: Using Red Hat Gfs With Red Hat Cluster Suite

    Configuration Tool creates /etc/cluster.xml To run the GFS Setup Druid, enter the following at the command line: # redhat-config-gfscluster gulm-bridge This is a fence method available for Red Hat Cluster nodes, if and only if the Red Hat GFS RPM is installed on the node that the Cluster Configuration Tool runs on.
  • Page 146: Changes To Red Hat Cluster

    Appendix A. Using Red Hat GFS with Red Hat Cluster Suite Also, the Cluster Configuration Tool wraps several command line calls into the Red Hat Cluster Manager, such as starting and stopping services. A.2. Changes to Red Hat Cluster The following changes to Red Hat Cluster enable running it with Red Hat GFS in RHEL3-U3 The Cluster Configuration Tool has been changed.
  • Page 147: Upgrading Red Hat Gfs 5.2.1 To Red Hat Gfs 6.0

    Adding Red Hat GFS to an existing Red Hat Cluster Manager deployment requires running the Red Hat GFS druid application, GFS Setup Druid (also known as redhat-config-gfscluster). As with the scenario in Section A.3.1 New Installations of Red Hat GFS and Red Hat Cluster Manager, while Red Hat GFS is scalable up to 300 nodes, a Red Hat Cluster Manager limits the total number of nodes in a cluster to 16.
  • Page 149: Upgrading Gfs

    Appendix B. Upgrading GFS This appendix contains instructions for upgrading GFS 5.2.1 to GFS 6.0 software. Note If you are using GFS with Red Hat Cluster, the order in which you upgrade GFS compared other Cluster installation configuration tasks vary. For information about installing and using GFS with Red Hat Cluster Suite, refer to Appendix A Using Red Hat GFS with Red Hat Cluster Suite.
  • Page 150 Appendix B. Upgrading GFS — Note: Although GFS 6.0 requires no license.ccs file, you can safely leave the license file in the CCA. You can use the ccs_tool extract command to extract the Cluster Configuration System (CCS) files for modification. 5. (Optional) Activate pools on all nodes. Command usage: pool_assemble -a. Reference: Section 5.6 Activating/Deactivating a Pool Volume
  • Page 151: Basic Gfs Examples

    Appendix C. Basic GFS Examples This appendix contains examples of setting up and using GFS in the following basic scenarios: Section C.1 LOCK_GULM, RLM Embedded • Section C.2 LOCK_GULM, RLM External • Section C.3 LOCK_GULM, SLM Embedded • Section C.4 LOCK_GULM, SLM External •...
  • Page 152: Kernel Modules Loaded

    Appendix C. Basic GFS Examples Host Name IP Address Login Name Password 10.0.1.10 Table C-1. APC MasterSwitch Information Host Name IP Address APC Port Number 10.0.1.1 10.0.1.2 10.0.1.3 Table C-2. GFS and Lock Server Node Information Major Minor #Blocks Name 8388608 8001 sda1...
  • Page 153: Setup Process

    Appendix C. Basic GFS Examples C.1.3. Setup Process The setup process for this example consists of the following steps: 1. Create pool configurations for the two file systems. Create pool configuration files for each file system’s pool: for the first file system, pool_gfs01 for the second file system.
  • Page 154 Appendix C. Basic GFS Examples b. Create the file. This file contains the name of the cluster and the name of cluster.ccs the nodes where the LOCK_GULM server is run. The file should look like the following: cluster { name = "alpha" lock_gulm { servers = ["n01", "n02", "n03"] c.
  • Page 155 Appendix C. Basic GFS Examples fence_devices { apc { agent = "fence_apc" ipaddr = "10.0.1.10" login = "apc" passwd = "apc" 6. Create the CCS Archive on the CCA Device. Note This step only needs to be done once and from a single node. It should not be performed every time the cluster is restarted.
  • Page 156: Lock_Gulm, Rlm External

    Appendix C. Basic GFS Examples Blocksize: 4096 Filesystem Size:1963416 Journals: 3 Resource Groups:30 Locking Protocol:lock_gulm Lock Table: alpha:gfs02 Syncing... All Done 10. Mount the GFS file systems on all the nodes. Mount points are used on each node: /gfs01 /gfs02 n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02 n02# mount -t gfs /dev/pool/pool_gfs01 /gfs01...
  • Page 157 Appendix C. Basic GFS Examples Host Name IP Address Login Name Password 10.0.1.10 Table C-4. APC MasterSwitch Information Host Name IP Address APC Port Number 10.0.1.1 10.0.1.2 10.0.1.3 Table C-5. GFS Node Information Host Name IP Address APC Port Number lck01 10.0.1.4 lck02...
  • Page 158: Kernel Modules Loaded

    Appendix C. Basic GFS Examples C.2.2. Kernel Modules Loaded Each node must have the following kernel modules loaded: • gfs.o • lock_harness.o • lock_gulm.o • pool.o C.2.3. Setup Process The setup process for this example consists of the following steps: 1.
  • Page 159 Appendix C. Basic GFS Examples alpha_cca assembled pool_gfs01 assembled pool_gfs02 assembled n03# pool_assemble -a <-- Activate pools alpha_cca assembled pool_gfs01 assembled pool_gfs02 assembled lck01# pool_assemble -a <-- Activate pools alpha_cca assembled pool_gfs01 assembled pool_gfs02 assembled lck02# pool_assemble -a <-- Activate pools alpha_cca assembled pool_gfs01 assembled pool_gfs02 assembled...
  • Page 160 Appendix C. Basic GFS Examples n03 { ip_interfaces { eth0 = "10.0.1.3" fence { power { apc { port = 3 lck01 { ip_interfaces { eth0 = "10.0.1.4" fence { power { apc { port = 4 lck02 { ip_interfaces { eth0 = "10.0.1.5"...
  • Page 161 Appendix C. Basic GFS Examples Note If your cluster is running Red Hat GFS 6.0 for Red Hat Enterprise Linux 3 Update 5 and later, you can use the optional parameter to explicitly specify an IP address usedev rather than relying on an IP address from .
  • Page 162: Lock_Gulm, Slm Embedded

    Appendix C. Basic GFS Examples 9. Create the GFS file systems. Create the first file system on and the second on . The names of the pool_gfs01 pool_gfs02 two file systems are , respectively, as shown in the example: gfs01 gfs02 n01# gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01 Device: /dev/pool/pool_gfs01...
  • Page 163: Key Characteristics

    Appendix C. Basic GFS Examples C.3.1. Key Characteristics This example configuration has the following key characteristics: Fencing device — An APC MasterSwitch (single-switch configuration). Refer to Table C-8 for • switch information. Number of GFS nodes — 3. Refer to Table C-9 for node information. •...
  • Page 164: Kernel Modules Loaded

    Appendix C. Basic GFS Examples driver is loaded and that it loaded without errors. The small partition ( ) is used to store the cluster configuration information. The two re- /dev/sda1 maining partitions ( ) are used for the GFS file systems. /dev/sda2 sdb1 You can display the storage device information at each node in your GFS cluster by running the follow-...
  • Page 165 Appendix C. Basic GFS Examples Note This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible. Activate the pools using the command for each node as follows: pool_assemble -a n01# pool_assemble -a <-- Activate pools alpha_cca assembled...
  • Page 166 Appendix C. Basic GFS Examples n03 { ip_interfaces { eth0 = "10.0.1.3" fence { power { apc { port = 3 Note If your cluster is running Red Hat GFS 6.0 for Red Hat Enterprise Linux 3 Update 5 and later, you can use the optional parameter to explicitly specify an IP address usedev rather than relying on an IP address from...
  • Page 167 Appendix C. Basic GFS Examples note This step must be performed each time the cluster is rebooted. The CCA device must be specified when starting ccsd n01# ccsd -d /dev/pool/alpha_cca n02# ccsd -d /dev/pool/alpha_cca n03# ccsd -d /dev/pool/alpha_cca 8. Start the LOCK_GULM server on each node. For example: n01# lock_gulmd 9.
  • Page 168: Lock_Gulm, Slm External

    Appendix C. Basic GFS Examples C.4. LOCK_GULM, SLM External This example sets up a cluster with three nodes and two GFS file systems. It requires three nodes for the GFS cluster and an additional (external) node to run the LOCK_GULM server. This section provides the following information about the example: Section C.4.1 Key Characteristics •...
  • Page 169: Kernel Modules Loaded

    Appendix C. Basic GFS Examples Major Minor #Blocks Name 8388608 8001 sda1 8377897 sda2 8388608 8388608 sdb1 Table C-14. Storage Device Information Notes For shared storage devices to be visible to the nodes, it may be necessary to load an appropriate device driver.
  • Page 170 Appendix C. Basic GFS Examples subpool 0 0 1 pooldevice 0 0 /dev/sdb1 2. Create a pool configuration for the CCS data. Create a pool configuration file for the pool that will be used for CCS data. The pool does not need to be very large.
  • Page 171 Appendix C. Basic GFS Examples c. Create the file. This file contains the name of each node, its IP address, and nodes.ccs node-specific I/O fencing parameters. The file should look like the following: nodes { n01 { ip_interfaces { eth0 = "10.0.1.1" fence { power { apc {...
  • Page 172 Appendix C. Basic GFS Examples Note If your cluster is running Red Hat GFS 6.0 for Red Hat Enterprise Linux 3 Update 5 and later, you can use the optional parameter to explicitly specify an IP address usedev rather than relying on an IP address from .
  • Page 173: Lock_Gulm, Slm External, And Gnbd

    Appendix C. Basic GFS Examples n01# gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01 Device: /dev/pool/pool_gfs01 Blocksize: 4096 Filesystem Size:1963216 Journals: 3 Resource Groups:30 Locking Protocol:lock_gulm Lock Table: alpha:gfs01 Syncing... All Done n01# gfs_mkfs -p lock_gulm -t alpha:gfs02 -j 3 /dev/pool/pool_gfs02 Device: /dev/pool/pool_gfs02 Blocksize: 4096 Filesystem Size:1963416...
  • Page 174 Appendix C. Basic GFS Examples Number of lock server nodes — 1. The lock server is run on one of the GFS nodes (embedded). • Refer to Table C-17 for node information. Number of GNBD server nodes — 1. Refer to Table C-18 for node information. •...
  • Page 175: Kernel Modules Loaded

    Appendix C. Basic GFS Examples Major Minor #Blocks Name 8388608 8001 sda1 8377897 sda2 8388608 8388608 sdb1 Table C-19. Storage Device Information Notes The storage must only be visible on the GNBD server node. The GNBD server node will ensure that the storage is visible to the GFS cluster nodes via the GNBD protocol.
  • Page 176 Appendix C. Basic GFS Examples gnbdsrv# gnbd_export -e cca -d /dev/sda1 -c gnbdsrv# gnbd_export -e gfs01 -d /dev/sda2 -c gnbdsrv# gnbd_export -e gfs02 -d /dev/sdb1 -c Caution The GNBD server should not attempt to use the cached devices it exports — either directly or by importing them.
  • Page 177 Appendix C. Basic GFS Examples Note This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible. Activate the pools using the command for each node as follows: pool_assemble -a n01# pool_assemble -a <-- Activate pools alpha_cca assembled...
  • Page 178 Appendix C. Basic GFS Examples power { apc { port = 2 n03 { ip_interfaces { eth0 = "10.0.1.3" fence { power { apc { port = 3 lcksrv { ip_interfaces { eth0 = "10.0.1.4" fence { power { apc { port = 4 gnbdsrv { ip_interfaces {...
  • Page 179 Appendix C. Basic GFS Examples fence_devices { apc { agent = "fence_apc" ipaddr = "10.0.1.10" login = "apc" passwd = "apc" 8. Create the CCS Archive on the CCA Device. Note This step only needs to be done once and from a single node. It should not be performed every time the cluster is restarted.
  • Page 180: Lock_Nolock

    Appendix C. Basic GFS Examples Blocksize: 4096 Filesystem Size:1963416 Journals: 3 Resource Groups:30 Locking Protocol:lock_gulm Lock Table: alpha:gfs02 Syncing... All Done 12. Mount the GFS file systems on all the nodes. Mount points are used on each node: /gfs01 /gfs02 n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02 n02# mount -t gfs /dev/pool/pool_gfs01 /gfs01...
  • Page 181: Kernel Modules Loaded

    Appendix C. Basic GFS Examples Major Minor #Blocks Name 8388608 8001 sda1 8388608 8388608 sdb1 Table C-21. Storage Device Information Notes For storage to be visible to the node, it may be necessary to load an appropriate device driver. If the storage is not visible on the node, confirm that the device driver is loaded and that it loaded without errors.
  • Page 182 Appendix C. Basic GFS Examples n01# pool_tool -c pool_gfs01.cf pool_gfs02.cf Pool label written successfully from pool_gfs01.cf Pool label written successfully from pool_gfs02.cf 3. Activate the pools. Note This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible.
  • Page 183 Appendix C. Basic GFS Examples All Done n01# gfs_mkfs -p lock_gulm -t alpha:gfs02 -j 1 /dev/pool/pool_gfs02 Device: /dev/pool/pool_gfs02 Blocksize: 4096 Filesystem Size:1963416 Journals: 1 Resource Groups:30 Locking Protocol:lock_nolock Lock Table: Syncing... All Done 7. Mount the GFS file systems on the nodes. Mount points are used on the node: /gfs01...
  • Page 185: Index

    Index using, 75 using clustering and locking systems, 85 fencing and LOCK_GULM, 86 locking system overview, 85 LOCK_GULM, 85 activating your subscription, iv LOCK_NOLOCK, 87 adding journals to a file system, 101 number of LOCK_GULM servers, 86 administrative options, 77 selection of LOCK_GULM servers, 85 comparing CCS configuration files to a CCS shutting down a LOCK_GULM server, 86...
  • Page 186 growing, 99 making, 89 examples mounting, 91 basic GFS examples, 137 quota management, 94 LOCK_GULM, RLM embedded, 137 disabling/enabling quota accounting, 98 key characteristics, 137 disabling/enabling quota enforcement, 98 setup process, 139 displaying quota limits, 95 LOCK_GULM, RLM external, 142 setting quotas, 94 key characteristics, 142 synchronizing quotas, 97...
  • Page 187 gfs_mkfs command options table, 90 GNBD lock management, 6 driver and command usage, 121 lock server node information (examples) table, 143, exporting from a server, 121 154, 160 importing on a client, 123 locking system using, 121 LOCK_GULM, 85 using GFS on a GNBD server node, 125 fencing, 86 using GNBD multipath, 123 LOCK_GULM severs...
  • Page 188 path names, context-dependent (CDPNs), 110 quota management, 94 platform disabling/enabling quota accounting, 98 system requirements, 11 disabling/enabling quota enforcement, 98 platform requirements table, 11 displaying quota limits, 95 pool configuration setting quotas, 94 displaying information, 30 synchronizing quotas, 97 pool configuration file keyword and variable descrip- tions table, 26 pool management commands, 22...
  • Page 189 volume, new tables checking for block devices before creating pool APC MasterSwitch information (examples), 137, configuration file, 25 142, 149, 154, 160 creating a configuration file, 26 CCS file location for GNBD multipath cluster, 125 CDPN variable values, 111 cluster.ccs variables, 40 fence.css variables, 47 fencing methods and agents, 116 fibre channel network requirements, 12...
  • Page 191: Colophon

    Colophon The manuals are written in DocBook SGML v4.1 format. The HTML and PDF formats are produced using custom DSSSL stylesheets and custom jade wrapper scripts. The DocBook SGML files are written in Emacs with the help of PSGML mode. Garrett LeSage created the admonition graphics (note, tip, important, caution, and warning).
  • Page 192 Nadine Richter — German translations Audrey Simons — French translations Francesco Valente — Italian translations Sarah Wang — Simplified Chinese translations Ben Hung-Pin Wu — Traditional Chinese translations...
