Red Hat GFS 5.2.1
Administrator's Guide


Summary of Contents for Red Hat GFS 5.2.1

  • Page 1 Red Hat GFS 5.2.1 Administrator’s Guide...
  • Page 2 All other trademarks and copyrights referred to are the property of their respective owners. The GPG fingerprint of the security@redhat.com key is: CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E...
  • Page 3: Table Of Contents

    Table of Contents Introduction............................i 1. Audience ..........................i 2. Document Conventions......................i 3. More to Come ........................iii 3.1. Send in Your Feedback ..................iv 4. Sign Up for Support ......................iv 5. Recommended References....................iv 1. GFS Overview ..........................1 1.1.
  • Page 4 5. Using the Pool Volume Manager .... 23 5.1. Overview of GFS Pool Volume Manager .... 23 5.2. Synopsis of Pool Management Commands .... 24 5.2.1. pool_tool .... 24 5.2.2. pool_assemble .... 25 5.2.3. pool_info .... 25 5.2.4. pool_mp .... 26 5.3.
  • Page 5 7. Using the Cluster Configuration System..................71 7.1. Creating a CCS Archive....................71 7.1.1. Usage......................... 71 7.1.2. Example ......................71 7.1.3. Comments ......................72 7.2. Starting CCS in the Cluster....................72 7.2.1. Usage......................... 72 7.2.2. Example ......................72 7.2.3. Comments ......................72 7.3.
  • Page 6 9.7. Direct I/O .... 96 9.7.1. O_DIRECT .... 96 9.7.2. GFS File Attribute .... 97 9.7.3. GFS Directory Attribute .... 97 9.8. Data Journaling .... 98 9.8.1. Usage .... 98 9.8.2. Examples .... 99 9.9. Configuring atime Updates .... 99 9.9.1. Mount with ...
  • Page 7 A. Upgrading GFS .......................... 123 A.1. Overview of Differences between GFS 5.1.x and GFS 5.2.x ........123 A.1.1. Configuration Information ................123 A.1.2. GFS License....................124 A.1.3. GFS LockProto ....................124 A.1.4. GFS LockTable ....................124 A.1.5. GFS Mount Options ..................124 A.1.6.
  • Page 9: Introduction

    GFS configurations. HTML and PDF versions of all the official Red Hat Enterprise Linux manuals and release notes are available online at http://www.redhat.com/docs/. 1. Audience This book is intended primarily for Linux system administrators who are familiar with the following activities: Linux system administration procedures, including kernel configuration...
  • Page 10 Introduction [key] A key on the keyboard is shown in this style. For example: To use [Tab] completion, type in a character and then press the [Tab] key. Your terminal displays the list of files in the directory that start with that letter. [key]-[combination] A combination of keystrokes is represented in this way.
  • Page 11 Introduction user input Text that the user has to type, either on the command line, or into a text box on a GUI screen, is displayed in this style. In the following example, text is displayed in this style: To boot your system into the text-based installation program, you must type in the text command at the prompt.
  • Page 12: More To Come

    If you spot a typo in the Red Hat GFS Administrator’s Guide, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla (http://www.redhat.com/bugzilla) against the component rh-gfsg. Be sure to mention the manual’s identifier: Red Hat GFS Administrator’s Guide(EN)-5.2.1-Print-RHI (2004-01-15T12:12-0400)
  • Page 13 Introduction (Topic / Reference / Comment): Shared Data Clustering and File Systems: Shared Data Clusters by Dilip M. Ranade, Wiley, 2002; provides detailed technical information on cluster file system and cluster volume manager design. Storage Area Networks (SANs): Designing Storage Area Networks: A Practical Reference for Implementing...; provides a concise summary of Fibre Channel and IP SAN...
  • Page 14 Introduction...
  • Page 15: Gfs Overview

    Chapter 1. GFS Overview GFS is a cluster file system that provides data sharing among Linux-based computers. GFS provides a single, consistent view of the file system name space across all nodes in a cluster. It allows applications to install and run without much knowledge of the underlying storage infrastructure. GFS is fully compliant with the IEEE POSIX interface, allowing applications to perform file operations as if they were running on a local file system.
  • Page 16: Superior Performance And Scalability

    Chapter 1. GFS Overview You can configure GNBD servers for GNBD multipath. GNBD multipath allows you to configure multiple GNBD server nodes with redundant paths between the GNBD server nodes and storage devices. The GNBD servers, in turn, present multiple storage paths to GFS nodes via redundant GNBDs. With GNBD multipath, if a GNBD server node becomes unavailable, another GNBD server node can provide GFS nodes with access to storage devices.
  • Page 17: Performance, Scalability, Moderate Price

    Chapter 1. GFS Overview 1.2.2. Performance, Scalability, Moderate Price Multiple Linux client applications on a LAN can share the same SAN-based data as shown in Figure 1-2. SAN block storage is presented to network clients as block storage devices by GNBD servers. From the perspective of a client application, storage is accessed as if it were directly attached to the server in which the application is running.
  • Page 18: Economy And Performance

    Chapter 1. GFS Overview 1.2.3. Economy and Performance Figure 1-3 shows how Linux client applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices. Client data files and file systems can be shared with GFS on each client.
  • Page 19: Lock Management

    Chapter 1. GFS Overview 1.3.1. Cluster Volume Management Cluster volume management provides simplified management of volumes and the ability to dynamically extend file system capacity without interrupting file-system access. With cluster volume management, you can aggregate multiple physical volumes into a single, logical device across all nodes in a cluster.
  • Page 20: Gfs Software Subsystems

    Chapter 1. GFS Overview For information about cluster configuration management, refer to Chapter 6 Creating the Cluster Configuration System Files and Chapter 7 Using the Cluster Configuration System. 1.4. GFS Software Subsystems GFS consists of the following subsystems: • Cluster Configuration System (CCS) ...
  • Page 21 Chapter 1. GFS Overview (Software Subsystem / Components / Description): fence_ack_manual: user interface for the fence_manual agent. Pool: pool.o, kernel module implementing the pool block-device driver; pool_assemble, command that activates and deactivates pool volumes; pool_tool, command that configures pool volumes from individual storage devices; pool_info, command that reports information about system pools.
  • Page 22: Before Configuring Gfs

    Chapter 1. GFS Overview (Software Subsystem / Components / Description): License: license_tool, command that is used to check a license file. GNBD: gnbd.o, kernel module that implements the GNBD block-device driver on clients; gnbd_serv.o, kernel module that implements the GNBD server. It allows a node to export local storage over the network.
  • Page 23 Chapter 1. GFS Overview GNBD server nodes If you are using GNBD, determine how many nodes are needed. The hostname and IP address of all GNBD server nodes are used. Fencing method Determine the fencing method for each GFS node. If you are using GNBD multipath, determine the fencing method for each GNBD server node (node that exports GNBDs to GFS nodes).
  • Page 24 Chapter 1. GFS Overview...
  • Page 25: System Requirements

    Chapter 2. System Requirements This chapter describes the system requirements for Red Hat GFS 5.2.1 and consists of the following sections: Section 2.1 Platform Requirements • Section 2.2 TCP/IP Network • Section 2.3 Fibre Channel Storage Network • Section 2.4 Fibre Channel Storage Devices •...
  • Page 26: Fibre Channel Storage Devices

    Chapter 2. System Requirements Requirement Description HBA (Host Bus Adapter) One HBA minimum per GFS node Connection method Fibre Channel switch Note: If an FC switch is used for I/O fencing nodes, you may want to consider using Brocade and Vixel FC switches, for which GFS fencing agents exist.
  • Page 27: Installing System Software

    Chapter 3. Installing System Software Installing system software consists of installing a Linux kernel and the corresponding GFS software into each node of your GFS cluster. This chapter explains how to install system software and includes the following sections: Section 3.1 Prerequisite Tasks •...
  • Page 28: Installation Tasks

    Chapter 3. Installing System Software Note One example of time synchronization software is the Network Time Protocol (NTP) software. You can find more information about NTP at http://www.ntp.org. 3.1.3. Stunnel Utility The Stunnel utility needs to be installed only on nodes that use the HP RILOE PCI card for I/O fencing.
  • Page 29: Installing A Gfs Rpm

    Chapter 3. Installing System Software (Step / Action / Comment): Action: Acquire the appropriate binary GFS kernel; copy or download the appropriate RPM file for each GFS node. Comment: For example, kernel-gfs-smp-2.4.21-9.0.1.EL.i686.rpm. Make sure that the file is appropriate for the hardware and libraries of the computer on which the kernel will be installed.
  • Page 30: Loading The Gfs Kernel Modules

    Chapter 3. Installing System Software 3.2.2. Installing A GFS RPM Installing a GFS RPM consists of acquiring the appropriate GFS software and installing it. Note Only one instance of a GFS RPM can be installed on a node. If a GFS RPM is present on a node, that RPM must be removed before installing another GFS RPM.
  • Page 31 Chapter 3. Installing System Software Note The GFS kernel modules need to be loaded every time a GFS node is started. It is recommended that you use a startup script to automate loading the GFS kernel modules. Note The procedures in this section are for a GFS configuration that uses LOCK_GULM. If you are using LOCK_NOLOCK, refer to Appendix B Basic GFS Examples for information about which GFS kernel modules you should load.
  • Page 32 Chapter 3. Installing System Software Step Command Description Loads the module. insmod lock_gulm lock_gulm.o Loads the module. insmod gfs gfs.o Verifies that all GFS kernel modules are loaded. This lsmod shows a listing of currently loaded modules. It should display all the modules loaded in the previous steps and other system kernel modules.
  • Page 33: Initial Configuration

    Chapter 4. Initial Configuration This chapter describes procedures for initial configuration of GFS and contains the following sections: Section 4.1 Prerequisite Tasks • Section 4.2 Initial Configuration Tasks • Section 4.2.1 Setting Up Logical Devices • Section 4.2.2 Setting Up and Starting the Cluster Configuration System •...
  • Page 34: Setting Up And Starting The Cluster Configuration System

    Chapter 4. Initial Configuration 4.2.1. Setting Up Logical Devices To set up logical devices (pools) follow these steps: 1. Create file system pools. a. Create pool configuration files. Refer to Section 5.4 Creating a Configuration File for a New Volume. b.
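    As a hedged sketch of these pool-creation steps, assuming a configuration file named pool0.cfg (an illustrative name) has already been written as described in Section 5.4:
      pool_tool -c pool0.cfg    # create the pool defined by pool0.cfg
      pool_assemble -a          # activate all pools visible to this node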
  • Page 35: Starting Clustering And Locking Systems

    Chapter 4. Initial Configuration 4.2.3. Starting Clustering and Locking Systems To start clustering and locking systems, follow these steps: 1. Check the cluster.ccs file to identify/verify which nodes are designated as lock server nodes. 2. Start the LOCK_GULM servers. At each lock-server node, start lock_gulmd, as sketched below.
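    A minimal sketch of these steps, assuming the CCS daemon was started against a CCA device named /dev/pool/alpha_cca in the previous section (device name illustrative):
      ccsd -d /dev/pool/alpha_cca    # CCS daemon, started in Section 4.2.2
      lock_gulmd                     # run on each designated lock server node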
  • Page 36 Chapter 4. Initial Configuration...
  • Page 37: Using The Pool Volume Manager

    Chapter 5. Using the Pool Volume Manager This chapter describes the GFS volume manager — named Pool — and its commands. The chapter consists of the following sections: Section 5.1 Overview of GFS Pool Volume Manager • Section 5.2 Synopsis of Pool Management Commands •...
  • Page 38: Synopsis Of Pool Management Commands

    Chapter 5. Using the Pool Volume Manager 5.2. Synopsis of Pool Management Commands Four commands are available to manage pools: • pool_tool • pool_assemble • pool_info • pool_mp The following sections briefly describe the commands and provide references to other sections in this chapter, where more detailed information about the commands and their use is described.
  • Page 39: Pool_Info

    Chapter 5. Using the Pool Volume Manager (Flag / Option): Display command version information, then exit. Verbose operation. Table 5-2. pool_tool Command Options 5.2.2. pool_assemble The pool_assemble command activates and deactivates pools on a system (refer to Table 5-3 and Table 5-4). One or more pool names can be specified on the command line, indicating the pools to be activated or deactivated.
  • Page 40: Pool_Mp

    Chapter 5. Using the Pool Volume Manager (Flag / Function / Section Reference): Display statistics: Section 5.13 Using Pool Volume Statistics. Display an active configuration: Section 5.7 Displaying Pool Configuration Information. Table 5-5. pool_info Command Functions (Flag / Option): Enable debugging output. Show capacity in human readable form. Help.
  • Page 41: Scanning Block Devices

    Chapter 5. Using the Pool Volume Manager 5.3. Scanning Block Devices Scanning block devices provides information about the availability and characteristics of the devices. That information is important for creating a pool configuration file. You can scan block devices by issuing the pool_tool command with its scan option.
  • Page 42 Chapter 5. Using the Pool Volume Manager 5.4. Creating a Configuration File for a New Volume A pool configuration file is used as input to the pool_tool command when creating or growing a pool volume. The configuration file defines the name and layout of a single pool volume. Refer to Figure 5-1 for the pool configuration file format.
  • Page 43: Examples

    Chapter 5. Using the Pool Volume Manager (File Line Keyword and Variables / Description): pooldevice subpool id device: adds a storage device to a subpool. subpool specifies the subpool identifier to which the device is to be added; id is the device identifier. Number the devices in order beginning with 0.
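    Putting the keywords together, a minimal single-subpool configuration file might look like the following sketch (the pool name, subpool label, and device are illustrative; the same layout appears in the examples later in this chapter):
      poolname pool0
      subpools 1
      subpool 0 0 1 gfs_data
      pooldevice 0 0 /dev/sda1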
  • Page 44: Usage

    Chapter 5. Using the Pool Volume Manager 5.5.1. Usage pool_tool -c ConfigFile ConfigFile Specifies the file that defines the pool. 5.5.2. Example In this example, the pool0.cfg file describes the new pool, pool0, created by the command: pool_tool -c pool0.cfg 5.5.3.
  • Page 45: Examples

    Chapter 5. Using the Pool Volume Manager PoolName Specifies the pool to deactivate. More than one name can be listed. If no pool names are specified, all pools visible to the system are deactivated. 5.6.2. Examples This example activates all pools on a node: pool_assemble -a This example deactivates all pools on a node: pool_assemble -r...
  • Page 46: Growing A Pool Volume

    Chapter 5. Using the Pool Volume Manager 5.7.2. Example In this example, the pool_tool -p pool0 command displays the configuration for pool0: # pool_tool -p pool0 poolname pool0 #minor dynamically assigned subpools 1 subpool 0 0 1 gfs_data pooldevice 0 0 /dev/sda1 5.8.
  • Page 47: Erasing A Pool Volume

    Chapter 5. Using the Pool Volume Manager 2. Edit the new file, pool0-new.cfg, by adding one or more subpools that contain the devices or partitions, as indicated in this example: poolname pool0 <--- Change subpools 2 subpool 0 128 4 gfs_data <--- Add subpool 1 0 1 gfs_data pooldevice 0 0 /dev/sdb1
  • Page 48: Usage

    Chapter 5. Using the Pool Volume Manager 5.10. Renaming a Pool Volume The pool_tool command can be used to change the name of a pool. 5.10.1. Usage pool_tool -r NewPoolName CurrentPoolName NewPoolName Specifies the new name of the pool. CurrentPoolName Specifies the pool name to be changed. Note In releases before GFS 5.2, the -r flag had a different usage.
  • Page 49: Example

    Chapter 5. Using the Pool Volume Manager PoolName Specifies the name of the pool to be changed. The minor number must have a value between 0 and 64. Specifying a minor number of 0 dynamically selects an actual minor number between 65 and 127 at activation time.
  • Page 50: Examples

    Chapter 5. Using the Pool Volume Manager Complete Display pool_info -v [PoolName] PoolName Specifies the pool name(s) for which to display information. If no pool names are specified, all active pools are displayed. 5.12.2. Examples This example displays basic information about all activated pools: pool_info -i This example displays complete information about all activated pools: pool_info -v...
  • Page 51: Adjusting Pool Volume Multipathing

    Chapter 5. Using the Pool Volume Manager 5.13.2. Examples This example displays statistics for all activated pools: pool_info -s This example displays statistics for pool0: pool_info -s pool0 This example clears statistics for pool0: pool_info -c pool0 5.14. Adjusting Pool Volume Multipathing The pool_mp command adjusts multipathing for running pools.
  • Page 52 Chapter 5. Using the Pool Volume Manager 5.14.2. Examples This example adjusts the multipathing for all pools to none. pool_mp -m none This example adjusts the multipathing for pool0 to failover. pool_mp -m failover pool0 This example adjusts the multipathing for pool0 to round-robin with a stripe size of 512 KB. pool_mp -m 512 pool0 This example restores failed paths for all active pools.
  • Page 53: Creating The Cluster Configuration System Files

    Chapter 6. Creating the Cluster Configuration System Files The GFS Cluster Configuration System (CCS) requires the following files: • license.ccs — The license file accompanies the GFS software and is required to use GFS. • cluster.ccs — The cluster file contains the name of the cluster and the names of the nodes where...
  • Page 54: Dual Power And Multipath Fc Fencing Considerations

    Chapter 6. Creating the Cluster Configuration System Files 6.2. CCS File Creation Tasks To create the CCS files, perform the following steps: 1. Add the license.ccs file. 2. Create the cluster.ccs file. 3. Create the fence.ccs file. 4. Create the nodes.ccs file.
  • Page 55: Adding The License.ccs

    Chapter 6. Creating the Cluster Configuration System Files To fence GNBD server nodes, consider the following actions when creating fence.ccs and nodes.ccs files: • fence.ccs — Define fencing devices as follows: For fencing GFS nodes using GNBD multipath, a GNBD fencing device must include an...
  • Page 56: File

    Chapter 6. Creating the Cluster Configuration System Files license { name = "example" license_version = 1 license_number = "0" license_type = "Enterprise" node_count = 4 gfs { directio = "TRUE" cdpn = "TRUE" locking { type = "Redundant" fencing { multiple_agents = "TRUE"...
  • Page 57: File

    Chapter 6. Creating the Cluster Configuration System Files Note Two commonly used optional cluster.ccs parameters are included in this procedure. For a description of other optional parameters, refer to the lock_gulmd(5) man page. cluster { name = "ClusterName" lock_gulm { servers = ["NodeName",..., "NodeName"] <-- Optional heartbeat_rate = Seconds
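    A filled-in cluster.ccs for a three-node lock-server configuration could look like the sketch below (cluster and node names are illustrative; the same form appears in the examples appendix):
      cluster {
          name = "alpha"
          lock_gulm {
              servers = ["n01", "n02", "n03"]
          }
      }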
  • Page 58 Chapter 6. Creating the Cluster Configuration System Files Creating the file consists of defining each fencing device you are going to use. You can fence.ccs define the following types of fencing devices in the file: fence.ccs APC MasterSwitch • WTI NPS (Network Power Switch) •...
  • Page 59 Chapter 6. Creating the Cluster Configuration System Files d. For each Vixel FC-switch fencing device, specify the following parameters: DeviceName, the fencing agent ( ) as , IPAddress, and agent = fence_vixel LoginPassword. Refer to Example 6-6 for a file that specifies a Vixel fence.ccs FC-switch fencing device.
  • Page 60 Chapter 6. Creating the Cluster Configuration System Files fence_devices{ DeviceName { agent = "fence_wti" ipaddr = " IPAddress" passwd = " LoginPassword" DeviceName { Figure 6-3. File Structure: fence_devices fence_wti fence_devices{ DeviceName { agent = "fence_brocade" ipaddr = "IPAddress" login = "LoginName" passwd = "LoginPassword"...
  • Page 61 Chapter 6. Creating the Cluster Configuration System Files fence_devices{ DeviceName { agent = "fence_gnbd" server = "ServerName" server = "ServerName" DeviceName { Figure 6-6. File Structure: without GNBD Multipath fence_devices fence_gnbd fence_devices{ DeviceName { agent = "fence_gnbd" server = "ServerName" server = "ServerName"...
  • Page 62 Chapter 6. Creating the Cluster Configuration System Files fence_devices{ DeviceName { agent = "fence_rib" hostname = "HostName" login = "LoginName" passwd = "LoginPassword" DeviceName { Figure 6-9. File Structure: fence_devices fence_rib fence_devices{ DeviceName { agent = "fence_manual" DeviceName { Figure 6-10. File Structure: fence_devices fence_manual Parameter...
  • Page 63 Chapter 6. Creating the Cluster Configuration System Files Parameter Description The password for logging in to a power switch, an FC switch, or LoginPassword a RILOE card. multipath Selects GNBD multipath style fencing. CAUTION: When multipath style fencing is used, if the process of a GNBD server node cannot be kgnbd_portd contacted, it is fenced as well, using its specified fencing method.
  • Page 64 Chapter 6. Creating the Cluster Configuration System Files fence_devices { wti1 { agent = "fence_wti" ipaddr = "10.0.3.3" passwd = "password" wti2 { agent = "fence_wti" ipaddr = "10.0.3.4" passwd = "password" Example 6-4. WTI NPS Fencing Devices Named wti1 wti2 fence_devices { silkworm1 {...
  • Page 65 Chapter 6. Creating the Cluster Configuration System Files fence_devices { gnbd { agent = "fence_gnbd" server = "nodea" server = "nodeb" This example shows a fencing device named with two servers: gnbd nodea nodeb Example 6-7. GNBD Fencing Device Named , without GNBD Multipath gnbd fence_devices {...
  • Page 66: File

    Chapter 6. Creating the Cluster Configuration System Files fence_devices { riloe1 { agent = "fence_rib" ipaddr = "10.0.4.1" login = "admin" passwd = "password" riloe2 { agent = "fence_rib" ipaddr = "10.0.4.2" login = "admin" passwd = "password" Example 6-10. Two HP-RILOE-Card Fencing Device Named riloe1 riloe2 In this example, two RILOE fencing devices are defined for two nodes.
  • Page 67 Chapter 6. Creating the Cluster Configuration System Files Refer to Chapter 10 Using the Fencing System for basic fencing details, descriptions of how fencing is used, and descriptions of available fencing methods. To create the file, follow these steps: nodes.ccs 1.
  • Page 68 Chapter 6. Creating the Cluster Configuration System Files specifies APC MasterSwitch fencing for a single power supply. Refer to Example 6-13 for a file that specifies APC MasterSwitch fencing for dual power supplies. nodes.ccs b. If using WTI NPS fencing, specify MethodName, DeviceName, and PortNumber. Refer to Example 6-14 for a file that specifies WTI NPS fencing.
  • Page 69 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for APC MethodName MasterSwitch fencing method, for DeviceName node with single power supply only port = PortNumber...
  • Page 70 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName { NodeName { File format for node ip_interfaces { identification (same format for all nodes) IFNAME = "IPAddress " fence { MethodName { File format for APC DeviceName { Fencing device MasterSwitch for pwr supply 1...
  • Page 71 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName WTI NPS fencing method DeviceName port = PortNumber NodeName Figure 6-13.
  • Page 72 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for Brocade or Vixel MethodName FC-Switch fencing method DeviceName port = PortNumber DeviceName...
  • Page 73 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName GNBD fencing method DeviceName ipaddr = " IPAddress " NodeName Figure 6-15.
  • Page 74 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName HP RILOE fencing method DeviceName localport = PortNumber NodeName Figure 6-16.
  • Page 75 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName NodeName File format for node identification (same ip_interfaces format for all nodes) IFNAME = " IPAddress " fence { File format for MethodName manual fencing method DeviceName ipaddr = " IPAddress " NodeName Figure 6-17.
  • Page 76 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName { NodeName { Node that will use ip_interfaces { cascaded fencing methods IFName = "IPAddress" fence { MethodName { DeviceName { Fencing method 1 #Device-specific parameter(s) Cascades to next if fencing fails MethodName { Fencing method 2...
  • Page 77 Chapter 6. Creating the Cluster Configuration System Files nodes { NodeName { NodeName { File format for node identification ip_interfaces { (same format for IFName = "IPAddress" all nodes) fence { Can use any MethodName { fencing agent File format for DeviceName { except fence_gnbd fencing GNBD...
  • Page 78 Chapter 6. Creating the Cluster Configuration System Files Parameter Description A name describing the fencing method performed by the listed MethodName devices. For example, a MethodName of could be used power to describe a fencing method using an APC MasterSwitch. Or, a MethodName of could be used to describe a Cascade1...
  • Page 79 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { power { apc1 { <----------- Fencing device for power supply 1 port = 6 switch = 1 option = "off" <-- Power down power supply 1 apc2 { <----------- Fencing device for power supply 2 port = 7 switch = 2...
  • Page 80 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { san { silkworm1 { port = 3 silkworm2 { <--- Additional fencing device, for additional path to FC storage port = 4 n02 { Example 6-15.
  • Page 81 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { server { gnbd { ipaddr = "10.0.1.1" n02 { Example 6-17. Node Defined for GNBD Fencing nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1"...
  • Page 82 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { human { admin { ipaddr = "10.0.0.1" n02 { Example 6-19. Nodes Defined for Manual Fencing nodes { n01 { ip_interfaces { eth0 = "10.0.1.21"...
  • Page 83 Chapter 6. Creating the Cluster Configuration System Files nodes { n01 { ip_interfaces { hsi0 = "10.0.0.1" fence { power { <------------- APC MasterSwitch fencing device apc1 { port = 6 switch = 2 notify_gnbd { <------- Fence Notify GNBD fencing device, ipaddr = "10.0.0.1"...
  • Page 84 Chapter 6. Creating the Cluster Configuration System Files...
  • Page 85: Using The Cluster Configuration System

    Chapter 7. Using the Cluster Configuration System This chapter describes how to use the cluster configuration system (CCS) and consists of the following sections: Section 7.1 Creating a CCS Archive • Section 7.2 Starting CCS in the Cluster • Section 7.3 Using Other CCS Administrative Options •...
  • Page 86: Example

    Chapter 7. Using the Cluster Configuration System 7.1.2. Example In this example, the name of the cluster is alpha, and the name of the pool is /dev/pool/alpha_cca. The CCS configuration files in directory /root/alpha/ are used to create a CCS archive on the CCA device /dev/pool/alpha_cca: ccs_tool create /root/alpha/ /dev/pool/alpha_cca 7.1.3.
  • Page 87: Using Other Ccs Administrative Options

    Chapter 7. Using the Cluster Configuration System 7.2.3. Comments The CCS daemon (ccsd) uses the Linux raw-device interface to update and read a CCA device directly, bypassing operating system caches. Caching effects could otherwise create inconsistent views of the CCA device between cluster nodes.
  • Page 88: Comparing Ccs Configuration Files To A Ccs Archive

    Chapter 7. Using the Cluster Configuration System 7.3.2.2. Example This example causes the CCS files contained on the CCA device, /dev/pool/alpha_cca, to be listed: ccs_tool list /dev/pool/alpha_cca 7.3.3. Comparing CCS Configuration Files to a CCS Archive The ccs_tool diff command can be used to compare a directory of CCS configuration files with the configuration files in a CCS archive, as sketched below.
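    Assuming the diff action takes its arguments in the same order as ccs_tool create and extract (this ordering is an assumption, not confirmed by the excerpt), a comparison might be run as:
      ccs_tool diff /root/alpha/ /dev/pool/alpha_cca    # compare local CCS files with the archive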
  • Page 89: Example Procedure

    Chapter 7. Using the Cluster Configuration System 7.4.1. Example Procedure This example procedure shows how to change configuration files in a CCS archive. 1. Extract configuration files from the CCA device into temporary directory /root/alpha-new/ ccs_tool extract /dev/pool/alpha_cca /root/alpha-new/ 2. Make changes to the configuration files in /root/alpha-new/ 3.
  • Page 90 Chapter 7. Using the Cluster Configuration System 7.5.1.1.1. Usage ccs_tool create Directory CCAFile Directory The relative path to the directory containing the CCS files for the cluster. CCAFile Specifies the CCA file to create. 7.5.1.1.2. Example In this example, the name of the cluster is alpha and the name of the CCA file is .
  • Page 91: Local Cca Files

    Chapter 7. Using the Cluster Configuration System 7.5.1.3. Starting the CCS Daemon When using a CCS server, ccsd must connect to it over the network, and requires two parameters on the ccsd command line: the IP address (and optional port number) of the node running ccs_servd, and the name of the cluster.
  • Page 92: Combining Ccs Methods

    Chapter 7. Using the Cluster Configuration System 7.5.2.2. Usage ccsd -f File File Specifies the local copy of the CCA file. 7.5.2.3. Example This example starts ccsd on a node using a local copy of a CCA file: ccsd -f /etc/sistina/ccs-build/alpha.cca 7.6.
  • Page 93: Using Clustering And Locking Systems

    Chapter 8. Using Clustering and Locking Systems This chapter describes how to use the clustering and locking systems available with GFS, and consists of the following sections: Section 8.1 Locking System Overview • Section 8.2 LOCK_GULM • Section 8.3 LOCK_NOLOCK •...
  • Page 94: Starting Lock_Gulm Servers

    Chapter 8. Using Clustering and Locking Systems Over half of the lock_gulmd servers on the nodes listed in the cluster.ccs file (cluster.ccs:cluster/lock_gulm/servers) must be operating to process locking requests from GFS nodes. That quorum requirement is necessary to prevent split groups of servers from forming independent clusters —...
  • Page 95: Lock_Nolock

    Chapter 8. Using Clustering and Locking Systems 8.2.5.1. Usage gulm_tool shutdown IPAddress IPAddress Specifies the IP address or hostname of the node running the instance of lock_gulmd to be terminated (see the example below). 8.3. LOCK_NOLOCK The LOCK_NOLOCK system allows GFS to be used as a local file system on a single node. The kernel module for a GFS/LOCK_NOLOCK node is...
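    For example, to stop the lock_gulmd instance running on a node named n01 (hostname illustrative):
      gulm_tool shutdown n01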
  • Page 96 Chapter 8. Using Clustering and Locking Systems...
  • Page 97: Managing Gfs

    Chapter 9. Managing GFS This chapter describes the tasks and commands for managing GFS and consists of the following sections: Section 9.1 Making a File System • Section 9.2 Mounting a File System • Section 9.3 Unmounting a File System •...
  • Page 98: Usage

    Chapter 9. Managing GFS 9.1.1. Usage gfs_mkfs -p LockProtoName -t LockTableName -j Number BlockDevice Warning Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption.
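    A hedged example matching the configurations in the examples appendix, creating a three-journal LOCK_GULM file system on a pool device (cluster, lock table, and pool names are illustrative):
      gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01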
  • Page 99: Mounting A File System

    Chapter 9. Managing GFS Flag Parameter Description Sets the file system block size to BlockSize. BlockSize Default block size is 4096 bytes. Enables debugging output. Help. Displays available options, then exits. Specifies the size of the journal in megabytes. Default MegaBytes journal size is 128 megabytes.
  • Page 100: Usage

    Chapter 9. Managing GFS 9.2.1. Usage mount -t gfs BlockDevice MountPoint BlockDevice Specifies the block device where the GFS file system resides. MountPoint Specifies the directory where the GFS file system should be mounted. 9.2.2. Example In this example, the GFS file system on the block device is mounted on the directory.
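    For instance, with the pool device and mount point used elsewhere in this chapter (names illustrative):
      mount -t gfs /dev/pool/pool0 /gfs1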
  • Page 101: Unmounting A File System

    Chapter 9. Managing GFS (Option / Description): ignore_local_fs Forces GFS to treat the file system as a multihost file system. By default, using LOCK_NOLOCK automatically turns on the localcaching flags. Caution: This option should not be used when GFS file systems are shared.
  • Page 102: Setting Quotas

    Chapter 9. Managing GFS 9.4.1. Setting Quotas Two quota settings are available for each user ID (UID) or group ID (GID): a hard limit and a warn limit. A hard limit is the amount of space that can be used. The file system will not let the user or group use more than that amount of disk space.
  • Page 103 Chapter 9. Managing GFS 9.4.2. Displaying Quota Limits and Usage Quota limits and current usage can be displayed for a specific user or group using the gfs_quota command. The entire contents of the quota file can also be displayed using the gfs_quota command, in which case all IDs with a non-zero hard limit, warn limit, or value are listed.
  • Page 104: Synchronizing Quotas

    Chapter 9. Managing GFS LimitSize The hard limit set for the user or group. This value is zero if no limit has been set. Value The actual amount of disk space used by the user or group. 9.4.2.3. Comments When displaying quota information, the command does not resolve UIDs and GIDs into gfs_quota names if the...
  • Page 105: Disabling/Enabling Quota Enforcement

    Chapter 9. Managing GFS MountPoint Specifies the GFS file system to which the actions apply. Tuning the Time Between Synchronizations gfs_tool settune MountPoint quota_quantum Seconds MountPoint Specifies the GFS file system to which the actions apply. Seconds Specifies the new time period between regular quota-file synchronizations by GFS. Smaller values may increase contention and slow down performance.
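    As a sketch, shortening the synchronization period to 60 seconds on the /gfs file system (mount point and value are illustrative):
      gfs_tool settune /gfs quota_quantum 60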
  • Page 106: Disabling/Enabling Quota Accounting

    Chapter 9. Managing GFS 9.4.4.2. Comments A value of 0 disables enforcement. Enforcement can be enabled by running the command with a value of 1 (instead of 0) as the final command line parameter. Even when GFS is not enforcing quotas, it still keeps track of the file system usage for all users and groups so that quota-usage information does not require rebuilding after re-enabling quotas.
  • Page 107: Growing A File System

    Chapter 9. Managing GFS 9.4.5.3. Examples This example disables quota accounting on file system on a single node. /gfs gfs_tool settune /gfs quota_account 0 This example enables quota accounting on file system on a single node and initializes the quota /gfs file.
  • Page 108: Complete Usage

    Chapter 9. Managing GFS 9.5.3. Examples In this example, the file system on the /gfs1/ directory is expanded: gfs_grow /gfs1 In this example, the state of the mounted file system is checked: gfs_grow -Tv /gfs1 9.5.4. Complete Usage gfs_grow [Options] {MountPoint | Device} [MountPoint | Device] MountPoint Specifies the directory where the GFS file system is mounted.
  • Page 109: Usage

    Chapter 9. Managing GFS 9.6.1. Usage gfs_jadd -j Number MountPoint Number Specifies the number of new journals to be added. MountPoint Specifies the directory where the GFS file system is mounted. 9.6.2. Comments Before running the gfs_jadd command: • Back up important data on the file system.
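    For example, to add one journal to the file system mounted at /gfs1 (mount point illustrative):
      gfs_jadd -j 1 /gfs1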
  • Page 110: Direct I/O

    Chapter 9. Managing GFS Device Specifies the device node of the file system. Table 9-4 describes the GFS-specific options that can be used when adding journals to a GFS file system. Flag Parameter Description Help. Displays short usage message, then exits. Specifies the size of the new journals in MBytes.
  • Page 111: Gfs File Attribute

    Chapter 9. Managing GFS 9.7.1. O_DIRECT If an application uses the O_DIRECT flag on an open() system call, direct I/O is used for the opened file. To cause the O_DIRECT flag to be defined with recent glibc libraries, define _GNU_SOURCE at the beginning of a source file before any includes, or define it on the cc line when compiling.
  • Page 112: Data Journaling

    Chapter 9. Managing GFS 9.7.3.1. Usage Setting the inherit_directio flag: gfs_tool setflag Directory inherit_directio Clearing the inherit_directio flag: gfs_tool clearflag Directory inherit_directio Directory Specifies the directory where the inherit_directio flag is set. 9.7.3.2. Example In this example, the command sets the inherit_directio flag on the directory named /gfs1/data/
  • Page 113: Examples

    Chapter 9. Managing GFS Directory Specifies the directory where the flag is set or cleared. File Specifies the zero-length file where the flag is set or cleared. 9.8.2. Examples This example shows setting the inherit_jdata flag on a directory. All files created in the directory or any of its subdirectories will have the flag assigned automatically.
  • Page 114: Quantum

    Chapter 9. Managing GFS MountPoint Specifies the directory where the GFS file system should be mounted. 9.9.1.2. Example In this example, the GFS file system resides on the block device pool0 and is mounted on directory /gfs1/ with atime updates turned off: mount -t gfs /dev/pool/pool0 /gfs1 -o noatime 9.9.2.
  • Page 115: Atime 9.10. Suspending Activity On A File System

    Chapter 9. Managing GFS 9.9.2.2. Examples In this example, all GFS tunable parameters for the file system on the mount point /gfs1 are displayed: gfs_tool gettune /gfs1 In this example, the atime update period is set to once a day (86,400 seconds) for the GFS file system on mount point /gfs1
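    Assuming the tunable is named atime_quantum (an assumption based on this section's title; the excerpt does not show the full command), that once-a-day setting would look like:
      gfs_tool settune /gfs1 atime_quantum 86400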
  • Page 116: Displaying Extended Gfs Information And Statistics

    Chapter 9. Managing GFS 9.11. Displaying Extended GFS Information and Statistics A variety of details can be gathered about GFS using the gfs_tool command. Typical usage of the gfs_tool command is described here. 9.11.1. Usage Displaying Statistics gfs_tool counters MountPoint The counters action flag displays statistics about a file system.
  • Page 117: Example

    Chapter 9. Managing GFS 9.12.1. Usage gfs_fsck BlockDevice A flag is available that causes all questions to be answered automatically; with it specified, gfs_fsck does not prompt you for an answer before making changes. BlockDevice Specifies the block device where the GFS file system resides (see the example below). 9.12.2.
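    A minimal invocation against a pool device (device name illustrative):
      gfs_fsck /dev/pool/pool0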
  • Page 118: Example

    Chapter 9. Managing GFS Variable Specifies a special reserved name from a list of values (refer to Table 9-5) to represent one of multiple existing files or directories. This string is not the name of an actual file or directory itself.
  • Page 119: Shutting Down A Gfs Cluster

    Chapter 9. Managing GFS n01# ls /gfs/log/ fileA n02# ls /gfs/log/ fileB n03# ls /gfs/log/ fileC 9.14. Shutting Down a GFS Cluster To cleanly shut down a GFS cluster, perform the following steps: 1. Unmount all GFS file systems on all nodes. Refer to Section 9.3 Unmounting a File System for more information.
  • Page 120 Chapter 9. Managing GFS 3. Start the LOCK_GULM servers. At each lock server node, start . Refer to Section lock_gulmd 8.2.3 Starting LOCK_GULM Servers for more information. Command usage: lock_gulmd 4. At each node, mount the GFS file systems. Refer to Section 9.2 Mounting a File System for more information.
  • Page 121: Using The Fencing System

    Chapter 10. Using the Fencing System Fencing (or I/O fencing) is the mechanism that disables an errant GFS node’s access to a file system, preventing the node from causing data corruption. This chapter explains the necessity of fencing, summarizes how the fencing system works, and describes each form of fencing that can be used in a GFS cluster.
  • Page 122: Apc Masterswitch

    Chapter 10. Using the Fencing System (Fencing Method / Fencing Agent): APC Network Power Switch: fence_apc. WTI Network Power Switch: fence_wti. Brocade FC Switch: fence_brocade. Vixel FC Switch: fence_vixel. HP RILOE: fence_rib. GNBD: fence_gnbd. Fence Notify GNBD: fence_notify_gnbd. Manual: fence_manual. Table 10-1. Fencing Methods and Agents When a GFS cluster is operating, the fencing system executes those fencing agents.
  • Page 123: Brocade Fc Switch

    Chapter 10. Using the Fencing System Note Lengthy Telnet connections to the WTI NPS should be avoided during the cluster operation. A fencing operation trying to use the WTI NPS will be blocked until it can log in. 10.2.3. Brocade FC Switch A node connected to a Brocade FC (Fibre Channel) switch can be fenced by disabling the switch port that the node is connected to.
  • Page 124: Hp Riloe Card

    Chapter 10. Using the Fencing System 10.2.5. HP RILOE Card A GFS node that has an HP RILOE (Remote Insight Lights-Out Edition) card can be fenced with the fence_rib fencing agent. Refer to Section 6.7 Creating the fence.ccs File and Section 6.8 Creating the nodes.ccs File for information on how to configure this type of fencing.
  • Page 125 Chapter 10. Using the Fencing System The manual fencing agent, , writes a message into the system log of the node on which fence_manual the fencing agent is running. The message indicates the cluster node that requires fencing. Upon seeing this message (by monitoring or equivalent), an administrator must manually /var/log/messages...
  • Page 126 Chapter 10. Using the Fencing System...
  • Page 127: Using Gnbd

    Chapter 11. Using GNBD This chapter describes how to use a GNBD (Global Network Block Device) and consists of the fol- lowing sections: Section 11.1 Considerations for Using GNBD Multipath • Section 11.2 GNBD Driver and Command Usage • 11.1. Considerations for Using GNBD Multipath GNBD multipath allows you to configure multiple GNBD server nodes (nodes that export GNBDs to GFS nodes) with redundant paths between the GNBD server nodes and storage devices.
  • Page 128: Fencing Gnbd Server Nodes

    Chapter 11. Using GNBD Note If FC-attached storage can be shared among nodes, the CCS files can be stored on that shared storage. Note A node with CCS files stored on local storage or FC-attached storage can serve the CCS files to other nodes in a GFS cluster via ccs_servd.
  • Page 129: Exporting A Gnbd From A Server

    Chapter 11. Using GNBD The GNBD driver is implemented through the following client and server kernel modules. • gnbd.o — Implements the GNBD device driver on GNBD clients (nodes using GNBD devices). • gnbd_serv.o — Implements the GNBD server. It allows a node to export local storage over the network.
  • Page 130: Importing A Gnbd On A Client

    Chapter 11. Using GNBD Caution For GNBD multipath, you must not specify the caching option. All GNBDs that are part of the pool must run with caching disabled. Pool, the GFS volume manager, does not check for caching being disabled; therefore, data corruption will occur if the GNBD devices are run with caching enabled.
  • Page 131 Chapter 11. Using GNBD 11.2.2.2. Example This example imports all GNBDs from the server named nodeA gnbd_import -i nodeA...
  • Page 132 Chapter 11. Using GNBD...
  • Page 133: Software License

    Chapter 12. Software License 12.1. Overview GFS features are enabled by a license file that is provided with the GFS product. Table 12-1 lists the features (by function) controlled by the license file according to license type. Function Feature License License License License...
  • Page 134: Upgrading And Replacing A License

    Chapter 12. Software License The license file should be placed into the same directory as other CCS files. When the cluster archive is written (with ccs_tool), the license file is included in the CCS archive. Once the CCS daemon has been started, the GFS software modules and programs can access the license file and enable the corresponding features.
  • Page 135: License Faq

    Chapter 12. Software License 6. Write the configuration files to the CCA device and verify that it was written correctly. 7. At each lock node, start the LOCK_GULM server. Note The license file gets reloaded automatically when the LOCK_GULM servers are restarted. 8.
  • Page 136 Chapter 12. Software License 12.5. Solving License Problems The first step toward solving a license problem is determining whether the license file has been prop- erly included in a CCS archive. When creating an archive with the command, it is possible ccs_tool to give it the verbose option, .
  • Page 137: Upgrading Gfs

    Appendix A. Upgrading GFS This appendix contains instructions for upgrading GFS 5.1.x to GFS 5.2.x software and consists of the following sections: Section A.1 Overview of Differences between GFS 5.1.x and GFS 5.2.x • Section A.2 Upgrade Procedure • A.1. Overview of Differences between GFS 5.1.x and GFS 5.2.x If you are upgrading from GFS 5.1.x to GFS 5.2.x, you must be familiar with the differences between those major releases.
  • Page 138: Gfs License

    Appendix A. Upgrading GFS (Difference / GFS 5.1.x / GFS 5.2.x): Configuration File Device: CIDEV / CCA Device. Configuration Tool: gfs_conf / ccs_tool. Configuration Files: gfs.cf / license.ccs, cluster.ccs, nodes.ccs, fence.ccs. Table A-1. Configuration Differences A.1.2. GFS License In GFS 5.1.x, the license file had to be installed using gfs_tool when the file system was first mounted after creation.
  • Page 139: Gfs Mount Options

    Appendix A. Upgrading GFS A.1.5. GFS Mount Options In GFS 5.1.x, there were required GFS-specific mount options (hostdata). In GFS 5.2.x, there are no required mount options. Table A-4 summarizes the mount-option differences between GFS 5.1.x and GFS 5.2.x. Difference GFS 5.1.x GFS 5.2.x LOCK_GULM Mount
  • Page 140: Upgrade Procedure

    Appendix A. Upgrading GFS A.2. Upgrade Procedure This procedure is for upgrading from LOCK_DMEP or LOCK_GULM GFS 5.1.x file systems to GFS 5.2.x LOCK_GULM file systems. It includes command usage, references to other sections of this book, and examples as necessary. To upgrade the software follow these steps: 1.
  • Page 141 Appendix A. Upgrading GFS Note command uses the node names shown in the GFS 5.1.x CIDEV as the node gfs_conf names in the file. GFS 5.2.x requires those names to match the hostname of each nodes.ccs node. After the , and files are in place, the file cluster.ccs...
  • Page 142 Appendix A. Upgrading GFS 8. Create CCS archive on CCA device (former CIDEV). The CCS archive is created from the directory of new CCS files as described in Step 2. Command usage: ccs_tool create Directory Device Reference: Section 7.1 Creating a CCS Archive Example: ccs_tool create /root/alpha/ /dev/pool/alpha_cca 9.
  • Page 143 Appendix A. Upgrading GFS Reference: Section 7.2 Starting CCS in the Cluster Example: ccsd -d /dev/pool/alpha_cca 12. Start LOCK_GULM server. server started server nodes listed lock_gulmd cluster.ccs:cluster/lock_gulm/servers Command usage: lock_gulmd Reference: Section 8.2.3 Starting LOCK_GULM Servers Example: lock_gulmd 13. Mount GFS file systems on all GFS nodes. Command usage: mount -t gfs BlockDevice MountPoint Reference: Section 9.2 Mounting a File System...
  • Page 144 Appendix A. Upgrading GFS...
  • Page 145: Basic Gfs Examples

    Appendix B. Basic GFS Examples This appendix contains examples of setting up and using GFS in the following basic scenarios: Section B.1 LOCK_GULM, RLM Embedded • Section B.2 LOCK_GULM, RLM External • Section B.3 LOCK_GULM, SLM Embedded • Section B.4 LOCK_GULM, SLM External •...
  • Page 146: Prerequisites

    Appendix B. Basic GFS Examples Host Name IP Address Login Name Password 10.0.1.10 Table B-1. APC MasterSwitch Information Host Name IP Address APC Port Number 10.0.1.1 10.0.1.2 10.0.1.3 Table B-2. GFS and Lock Server Node Information Major Minor #Blocks Name 8388608 8001 sda1...
  • Page 147 Appendix B. Basic GFS Examples B.1.2.1. Kernel Installed An appropriate kernel must be installed and running on each node. Refer to Section 3.2.1 Installing a Linux Kernel for more information about installing a Linux kernel. B.1.2.2. GFS RPM Installed The GFS RPMs must be installed, and all GFS commands, daemons and fencing agents must be accessible via root’s search path.
  • Page 148: Setup Process

    Appendix B. Basic GFS Examples B.1.3. Setup Process The setup process for this example consists of the following steps: 1. Create pool configurations for the two file systems. Create pool configuration files for each file system’s pool: for the first file system, pool_gfs01 for the second file system.
  • Page 149 Appendix B. Basic GFS Examples 5. Create CCS files. a. Create a directory called on node as follows: /root/alpha n01# mkdir /root/alpha n01# cd /root/alpha b. Create the file. This file contains the name of the cluster and the name of cluster.ccs the nodes where the LOCK_GULM server is run.
  • Page 150 Appendix B. Basic GFS Examples d. Create the file. This file contains information required for the fencing fence.ccs method(s) used by the GFS cluster. The file should look like the following: fence_devices { apc { agent = "fence_apc" ipaddr = "10.0.1.10" login = "apc"...
  • Page 151: Lock_Gulm, Rlm External

    Appendix B. Basic GFS Examples 9. Create the GFS file systems. Create the first file system on and the second on . The names of the pool_gfs01 pool_gfs02 two file systems are , respectively, as shown in the example: gfs01 gfs02 n01# gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01 Device: /dev/pool/pool_gfs01...
  • Page 152 Appendix B. Basic GFS Examples B.2.1. Key Characteristics This example configuration has the following key characteristics: Fencing device — An APC MasterSwitch (single-switch configuration). Refer to Table B-4 for • switch information. Number of GFS nodes — 3. Refer to Table B-5 for node information. •...
  • Page 153: Prerequisites

    Appendix B. Basic GFS Examples Major Minor #Blocks Name 8388608 8001 sda1 8377897 sda2 8388608 8388608 sdb1 Table B-7. Storage Device Information Notes For shared storage devices to be visible to the nodes, it may be necessary to load an appropriate device driver.
  • Page 154: Setup Process

    Appendix B. Basic GFS Examples B.2.2.3. Kernel Modules Loaded Each node must have the following kernel modules loaded: • ccs.o • gfs.o • lock_harness.o • lock_gulm.o • pool.o Note The GFS kernel modules must be loaded every time a node is rebooted. If they are not loaded, GFS will not function.
  • Page 155 Appendix B. Basic GFS Examples poolname pool_gfs02 subpools 1 subpool 0 0 1 pooldevice 0 0 /dev/sdb1 2. Create a pool configuration for the CCS data. Create a pool configuration file for the pool that will be used for CCS data. The pool does not need to be very large.
  • Page 156 Appendix B. Basic GFS Examples 5. Create CCS files. a. Create a directory called on node as follows: /root/alpha n01# mkdir /root/alpha n01# cd /root/alpha b. Create the file. This file contains the name of the cluster and the name of cluster.ccs the nodes where the LOCK_GULM server is run.
  • Page 157 Appendix B. Basic GFS Examples lck02 { ip_interfaces { eth0 = "10.0.1.5" fence { power { apc { port = 5 lck03 { ip_interfaces { eth0 = "10.0.1.6" fence { power { apc { port = 6 d. Create the file.
  • Page 158 Appendix B. Basic GFS Examples 7. Start the CCS daemon (ccsd) on all the nodes. Note This step must be performed each time the cluster is rebooted. The CCA device must be specified when starting ccsd. n01# ccsd -d /dev/pool/alpha_cca n02# ccsd -d /dev/pool/alpha_cca n03# ccsd -d /dev/pool/alpha_cca lck01# ccsd -d /dev/pool/alpha_cca...
  • Page 159: Lock_Gulm, Slm Embedded

    Appendix B. Basic GFS Examples 10. Mount the GFS file systems on all the nodes. Mount points are used on each node: /gfs01 /gfs02 n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02 n02# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n02# mount -t gfs /dev/pool/pool_gfs02 /gfs02 n03# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n03# mount -t gfs /dev/pool/pool_gfs02 /gfs02...
  • Page 160: Prerequisites

    Appendix B. Basic GFS Examples Host Name IP Address APC Port Number 10.0.1.1 10.0.1.2 10.0.1.3 Table B-9. GFS and Lock Server Node Information Major Minor #Blocks Name 8388608 8001 sda1 8377897 sda2 8388608 8388608 sdb1 Table B-10. Storage Device Information Notes For shared storage devices to be visible to the nodes, it may be necessary to load an appropriate device driver.
  • Page 161: Setup Process

    Appendix B. Basic GFS Examples B.3.2.2. GFS RPM Installed The GFS RPMs must be installed, and all GFS commands, daemons and fencing agents must be accessible via root’s search path. B.3.2.3. Kernel Modules Loaded Each node must have the following kernel modules loaded: •...
  • Page 162 Appendix B. Basic GFS Examples subpool 0 0 1 pooldevice 0 0 /dev/sda2 poolname pool_gfs02 subpools 1 subpool 0 0 1 pooldevice 0 0 /dev/sdb1 2. Create a pool configuration for the CCS data. Create a pool configuration file for the pool that will be used for CCS data. The pool does not need to be very large.
  • Page 163 Appendix B. Basic GFS Examples c. Create the file. This file contains the name of each node, its IP address, and nodes.ccs node-specific I/O fencing parameters. The file should look like the following: nodes { n01 { ip_interfaces { eth0 = "10.0.1.1" fence { power { apc {...
  • Page 164 Appendix B. Basic GFS Examples 6. Create the CCS Archive on the CCA Device. Note This step only needs to be done once and from a single node. It should not be performed every time the cluster is restarted. Use the command to create the archive from the CCS configuration files: ccs_tool n01# ccs_tool create /root/alpha /dev/pool/alpha_cca...
  • Page 165: Lock_Gulm, Slm External

    Appendix B. Basic GFS Examples Syncing... All Done 10. Mount the GFS file systems on all the nodes. Mount points are used on each node: /gfs01 /gfs02 n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02 n02# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n02# mount -t gfs /dev/pool/pool_gfs02 /gfs02 n03# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n03# mount -t gfs /dev/pool/pool_gfs02 /gfs02...
  • Page 166: Prerequisites

    Appendix B. Basic GFS Examples Host Name IP Address APC Port Number 10.0.1.1 10.0.1.2 10.0.1.3 Table B-12. GFS Node Information Host Name IP Address APC Port Number lcksrv 10.0.1.4 Table B-13. Lock Server Node Information Major Minor #Blocks Name 8388608 8001 sda1 8377897...
  • Page 167 Appendix B. Basic GFS Examples B.4.2.1. Kernel Installed An appropriate kernel must be installed and running on each node. Refer to Section 3.2.1 Installing a Linux Kernel for more information about installing a Linux kernel. B.4.2.2. GFS RPM Installed The GFS RPMs must be installed, and all GFS commands, daemons and fencing agents must be accessible via root’s search path.
  • Page 168: Setup Process

    Appendix B. Basic GFS Examples B.4.3. Setup Process The setup process for this example consists of the following steps: 1. Create pool configurations for the two file systems. Create pool configuration files for each file system’s pool: for the first file system, pool_gfs01 for the second file system.
  • Page 169 Appendix B. Basic GFS Examples 5. Create CCS files. a. Create a directory called on node as follows: /root/alpha n01# mkdir /root/alpha n01# cd /root/alpha b. Create the file. This file contains the name of the cluster and the name of cluster.ccs the nodes where the LOCK_GULM server is run.
  • Page 170 Appendix B. Basic GFS Examples d. Create the file. This file contains information required for the fencing fence.ccs method(s) used by the GFS cluster. The file should look like the following: fence_devices { apc { agent = "fence_apc" ipaddr = "10.0.1.10" login = "apc"...
  • Page 171: Lock_Gulm, Slm External, And Gnbd

    Appendix B. Basic GFS Examples Note The lock server node, , was specified in the file earlier. lcksrv cluster.ccs 9. Create the GFS file systems. Create the first file system on and the second on . The names of the pool_gfs01 pool_gfs02 two file systems are...
  • Page 172 Appendix B. Basic GFS Examples B.5.1. Key Characteristics This example configuration has the following key characteristics: Fencing device — An APC MasterSwitch (single-switch configuration). Refer to Table B-15 for • switch information. Number of GFS nodes — 3. Refer to Table B-16 for node information. •...
  • Page 173: Prerequisites

    Appendix B. Basic GFS Examples Major Minor #Blocks Name 8388608 8001 sda1 8377897 sda2 8388608 8388608 sdb1 Table B-19. Storage Device Information Notes The storage must only be visible on the GNBD server node. The GNBD server node will ensure that the storage is visible to the GFS cluster nodes via the GNBD protocol.
  • Page 174: Setup Process

    Appendix B. Basic GFS Examples B.5.2.3. Kernel Modules Loaded Each node must have the following kernel modules loaded: • ccs.o • gfs.o • lock_harness.o • lock_gulm.o • pool.o Note The GFS kernel modules must be loaded every time a node is rebooted. If they are not loaded, GFS will not function.
  • Page 175 Appendix B. Basic GFS Examples Caution The GNBD server should not attempt to use the devices it exports — either directly or by importing them. Doing so can cause cache coherency problems. 2. Import GNBD devices on all GFS nodes and the lock server node. to import the GNBD devices from the GNBD server ( gnbd_import gnbdsrv...
  • Page 176 Appendix B. Basic GFS Examples 6. Activate the pools on all nodes. Note This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible. Activate the pools using the command for each node as follows: pool_assemble -a n01# pool_assemble -a...
  • Page 177 Appendix B. Basic GFS Examples fence { power { apc { port = 2 n03 { ip_interfaces { eth0 = "10.0.1.3" fence { power { apc { port = 3 lcksrv { ip_interfaces { eth0 = "10.0.1.4" fence { power { apc { port = 4 gnbdsrv {...
  • Page 178 Appendix B. Basic GFS Examples 8. Create the CCS Archive on the CCA Device. Note This step only needs to be done once and from a single node. It should not be performed every time the cluster is restarted. Use the command to create the archive from the CCS configuration files: ccs_tool n01# ccs_tool create /root/alpha /dev/pool/alpha_cca...
  • Page 179: Lock_Nolock

    Appendix B. Basic GFS Examples Lock Table: alpha:gfs02 Syncing... All Done 12. Mount the GFS file systems on all the nodes. Mount points are used on each node: /gfs01 /gfs02 n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02 n02# mount -t gfs /dev/pool/pool_gfs01 /gfs01 n02# mount -t gfs /dev/pool/pool_gfs02 /gfs02 n03# mount -t gfs /dev/pool/pool_gfs01 /gfs01...
  • Page 180: Prerequisites

    Appendix B. Basic GFS Examples Major Minor #Blocks Name 8388608 8001 sda1 8388608 8388608 sdb1 Table B-21. Storage Device Information Notes For storage to be visible to the node, it may be necessary to load an appropriate device driver. If the storage is not visible on the node, confirm that the device driver is loaded and that it loaded without errors.
  • Page 181: Setup Process

    Appendix B. Basic GFS Examples • lock_harness.o • lock_gulm.o • pool.o Note The GFS kernel modules must be loaded every time a node is rebooted. If they are not loaded, GFS will not function. You can confirm that the modules have been loaded by running the command.
  • Page 182 Appendix B. Basic GFS Examples 3. Activate the pools. Note This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible. Activate the pools using the command as follows: pool_assemble -a n01# pool_assemble -a pool_gfs01 assembled...
  • Page 183 Appendix B. Basic GFS Examples All Done n01# gfs_mkfs -p lock_gulm -t alpha:gfs02 -j 1 /dev/pool/pool_gfs02 Device: /dev/pool/pool_gfs02 Blocksize: 4096 Filesystem Size:1963416 Journals: 1 Resource Groups:30 Locking Protocol:lock_nolock Lock Table: Syncing... All Done 7. Mount the GFS file systems on the nodes. Mount points are used on the node: /gfs01...
  • Page 184 Appendix B. Basic GFS Examples...
  • Page 185: Index

    Index fencing and LOCK_GULM, 80 locking system overview, 79 LOCK_GULM, 79 LOCK_NOLOCK, 81 number of LOCK_GULM servers, 79 adding journals to a file system, 94 selection of LOCK_GULM servers, 79 administrative options, 73 shutting down a LOCK_GULM server, 80 comparing CCS configuration files to a CCS starting LOCK_GULM servers, 80 archive, 74 extracting files from a CCS archive, 73...
  • Page 186 tuning atime quantum, 100 context-dependent path names (CDPNs), 103 examples data journaling, 98 basic GFS examples, 131 direct I/O, 96 LOCK_GULM, RLM embedded, 131 directory attribute, 97 key characteristics, 131 file attribute, 97 prerequisites, 132 O_DIRECT, 97 setup process, 134 growing, 93 LOCK_GULM, RLM external, 137 making, 83...
  • Page 187 installation tasks, 16 GFS node information (examples) table, 138, 152, license installation differences (upgrade) table, 124 158, 165 GFS RPM installation license.ccs, 41 installation tasks, 16 Linux kernel installation GFS RPM installation table, 16 installation tasks, 14 GFS software subsystem components table, 6 GFS software subsystems, 6 lock management, 5 GFS-specific options for adding journals table, 96...
  • Page 188 pool_tool command functions table, 24 pool_tool command options table, 24 overview, 1 preface configuration, before, 8 (See introduction) economy, 1 prerequisite tasks features, new and changed, 1 configuration, initial, 19 GFS functions, 4 installing system software, 13 cluster configuration management, 5 clock synchronization software, 13 cluster management, fencing, recovery, 5 Net::Telnet Perl module, 13...
  • Page 189 nodes.ccs, 52 pool_mp command functions, 26 pool_mp command options, 26 prerequisite tasks, 39 pool_tool command functions, 24 system requirements, 11 pool_tool command options, 24 console access, 12 recommended references, iv fibre channel storage devices, 12 software features controlled by the license file, 119 fibre channel storage network, 11 storage device information (examples), 132, 138, I/O fencing, 12...
  • Page 191: Colophon

    Colophon The manuals are written in DocBook SGML v4.1 format. The HTML and PDF formats are produced using custom DSSSL stylesheets and custom jade wrapper scripts. The DocBook SGML files are written in Emacs with the help of PSGML mode. Garrett LeSage created the admonition graphics (note, tip, important, caution, and warning).

Table of Contents