Red Hat ENTERPRISE LINUX 4 - GLOBAL FILE SYSTEM Manual

Red Hat Enterprise Linux 4
Global File System
Red Hat Global File System

Summary of Contents for Red Hat ENTERPRISE LINUX 4 - GLOBAL FILE SYSTEM

  • Page 1 Red Hat Enterprise Linux 4 Global File System Red Hat Global File System...
  • Page 2 Global File System Red Hat Enterprise Linux 4 Global File System Red Hat Global File System Edition 1.1 Copyright © 2009 Red Hat, Inc. The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA").
  • Page 3: Table Of Contents

    Introduction: 1. Audience (v); 2. Related Documentation (v); 3. Document Conventions (vi); 3.1. Typographic Conventions (vi); 3.2. Pull-quote Conventions (vii); 3.3. Notes and Warnings (viii); 4. Feedback (viii); 5. Recommended References (ix); 1.
  • Page 4 Global File System (continued): 4.10. Suspending Activity on a File System (31); 4.11. Displaying Extended GFS Information and Statistics (32); 4.12. Repairing a File System (33); 4.13. Context-Dependent Path Names (34); A. Upgrading GFS; B. Revision History; Index...
  • Page 5: Introduction

    Hat Cluster. HTML and PDF versions of all the official Red Hat Enterprise Linux manuals and release notes are available online at http://www.redhat.com/docs/. 1. Audience This book is intended primarily for Linux system administrators who are familiar with the following activities: •...
  • Page 6: Document Conventions

    Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML and PDF versions online at the following location: http://www.redhat.com/docs 3. Document Conventions This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.
  • Page 7: Pull-Quote Conventions

    Pull-quote Conventions File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions. Proportional Bold This denotes words or phrases encountered on a system, including application names; dialog box text; labeled buttons;...
  • Page 8: Notes And Warnings

    Warnings should not be ignored. Ignoring warnings will most likely cause data loss. 4. Feedback If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla/) against the component rh-cs. viii...
  • Page 9: Recommended References

    Recommended References Be sure to mention the manual's identifier: rh-gfsg(EN)-4.8 (2010-03-17T16:33) By mentioning this manual's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
  • Page 11: Gfs Overview

    Chapter 1. GFS Overview Red Hat GFS is a cluster file system that is available with Red Hat Cluster Suite. Red Hat GFS nodes are configured and managed with Red Hat Cluster Suite configuration and management tools. Red Hat GFS provides data sharing among GFS nodes in a Red Hat cluster. GFS provides a single, consistent view of the file-system name space across the GFS nodes in a Red Hat cluster.
  • Page 12: Performance, Scalability, Moderate Price

    Chapter 1. GFS Overview Figure 1.1. GFS with a SAN 1.1.2. Performance, Scalability, Moderate Price Multiple Linux client applications on a LAN can share the same SAN-based data as shown in Figure 1.2, “GFS and GNBD with a SAN”. SAN block storage is presented to network clients as block storage devices by GNBD servers.
  • Page 13: Economy And Performance

    Economy and Performance Figure 1.2. GFS and GNBD with a SAN 1.1.3. Economy and Performance Figure 1.3, “GFS and GNBD with Directly Connected Storage” shows how Linux client applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices.
  • Page 14: Gfs Functions

    Chapter 1. GFS Overview 1.2. GFS Functions GFS is a native file system that interfaces directly with the VFS layer of the Linux kernel file-system interface. GFS is a cluster file system that employs distributed metadata and multiple journals for optimal operation in a cluster.
  • Page 15: Before Setting Up Gfs

    Before Setting Up GFS. GFS Software Subsystem Components (component: description):
    gfs_fsck: command that repairs an unmounted GFS file system.
    gfs_grow: command that grows a mounted GFS file system.
    gfs_jadd: command that adds journals to a mounted GFS file system.
    gfs_mkfs: command that creates a GFS file system on a storage device.
  • Page 16 Chapter 1. GFS Overview Journals Determine the number of journals for your GFS file systems. One journal is required for each node that mounts a GFS file system. Make sure to account for additional journals needed for future expansion. GFS nodes Determine which nodes in the Red Hat Cluster Suite will mount the GFS file systems.
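The journal rule above (one journal per node that mounts the file system, plus spares for future expansion) comes down to simple arithmetic; the node counts in this sketch are hypothetical:

```shell
# One journal per node that will mount the file system, plus spares
# for nodes you expect to add later (counts here are examples only).
CURRENT_NODES=3
FUTURE_NODES=2
JOURNALS=$((CURRENT_NODES + FUTURE_NODES))
echo "Pass -j $JOURNALS to gfs_mkfs when creating the file system"
```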
  • Page 17: Platform Requirements

    Chapter 2. System Requirements This chapter describes the system requirements for Red Hat GFS with Red Hat Enterprise Linux 4 and consists of the following sections: Section 2.1, “Platform Requirements” • Section 2.2, “Red Hat Cluster Suite” • Section 2.3, “Fencing” •...
  • Page 18: Fibre Channel Storage Devices

    Chapter 2. System Requirements. Fibre Channel requirements (requirement: description):
    HBA (Host Bus Adapter): one HBA minimum per GFS node.
    Connection method: Fibre Channel switch. Note: If an FC switch is used for fencing, you may want to consider using Brocade, McData, or Vixel FC switches, for which Red Hat Cluster Suite fencing agents exist.
  • Page 19: Installing Gfs

    Installing GFS 2.8. Installing GFS Installing GFS consists of installing Red Hat GFS RPMs on nodes in a Red Hat cluster. Before installing the RPMs, make sure of the following: • The cluster nodes meet the system requirements described in this chapter. Section 1.4, “Before •...
  • Page 21: Getting Started

    Chapter 3. Getting Started This chapter describes procedures for initial setup of GFS and contains the following sections: Section 3.1, “Prerequisite Tasks” • Section 3.2, “Initial Setup Tasks” • 3.1. Prerequisite Tasks Before setting up Red Hat GFS, make sure that you have noted the key characteristics of the GFS nodes (refer to Section 1.4, “Before Setting Up GFS”) and have loaded the GFS modules into each node...
  • Page 22 Chapter 3. Getting Started gfs_mkfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice 3. At each node, mount the GFS file systems. For more information about mounting a GFS file system, refer to Section 4.2, “Mounting a File System”. Command usage: mount -t gfs BlockDevice MountPoint mount -t gfs -o acl BlockDevice MountPoint The -o acl mount option allows manipulating file ACLs.
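As a concrete instance of the steps above: the cluster name alpha, file-system name gfs1, and device /dev/vg01/lvol0 are hypothetical, and because gfs_mkfs and mount -t gfs require a live Red Hat cluster, this sketch only prints the command lines:

```shell
# Hypothetical cluster/file-system names and device; substitute your own.
CLUSTER=alpha
FSNAME=gfs1
DEV=/dev/vg01/lvol0
MNT=/gfs1

# Printed rather than run: these commands need a live GFS cluster node.
echo "gfs_mkfs -p lock_dlm -t $CLUSTER:$FSNAME -j 3 $DEV"
echo "mount -t gfs $DEV $MNT"          # plain mount
echo "mount -t gfs -o acl $DEV $MNT"   # mount with ACL manipulation enabled
```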
  • Page 23: Managing Gfs

    Chapter 4. Managing GFS This chapter describes the tasks and commands for managing GFS and consists of the following sections: Section 4.1, “Making a File System” • Section 4.2, “Mounting a File System” • Section 4.3, “Unmounting a File System” •...
  • Page 24 Chapter 4. Managing GFS LockProtoName Specifies the name of the locking protocol (for example, lock_dlm) to use. LockTableName This parameter has two parts separated by a colon (no spaces) as follows: ClusterName:FSName • ClusterName, the name of the Red Hat cluster for which the GFS file system is being created. •...
  • Page 25: Mounting A File System

    Mounting a File System. gfs_mkfs command options (flag parameter: description):
    -j Number: specifies the number of journals to be created by the gfs_mkfs command. One journal is required for each node that mounts the file system. Note: More journals than are needed can be specified at creation time to allow for future expansion.
  • Page 26 Chapter 4. Managing GFS mount -t gfs BlockDevice MountPoint Mounting With ACL Manipulation mount -t gfs -o acl BlockDevice MountPoint -o acl GFS-specific option to allow manipulating file ACLs. BlockDevice Specifies the block device where the GFS file system resides. MountPoint Specifies the directory where the GFS file system should be mounted.
  • Page 27: Unmounting A File System

    Unmounting a File System. Mount options (option: description):
    hostdata=HostIDInfo: provides host identity information (for the computer on which the file system is being mounted) to the lock module. The format and behavior of HostIDInfo depend on the lock module used. For lock_gulm, it overrides the uname -n network node name used as the default value by lock_gulm.
  • Page 28: Gfs Quota Management

    Chapter 4. Managing GFS Note The umount command is a Linux system command. Information about this command can be found in the Linux umount command man pages. Usage umount MountPoint MountPoint Specifies the directory where the GFS file system is currently mounted. 4.4.
  • Page 29: Displaying Quota Limits And Usage

    Displaying Quota Limits and Usage Setting Quotas, Warn Limit gfs_quota warn -u User -l Size -f MountPoint gfs_quota warn -g Group -l Size -f MountPoint User A user ID to limit or warn. It can be either a user name from the password file or the UID number. Group A group ID to limit or warn.
  • Page 30 Chapter 4. Managing GFS gfs_quota list -f MountPoint User A user ID to display information about a specific user. It can be either a user name from the password file or the UID number. Group A group ID to display information about a specific group. It can be either a group name from the group file or the GID number.
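The quota commands above might be combined as follows; the user name, group name, and limit sizes are hypothetical, and the commands are printed rather than run because gfs_quota needs a mounted GFS file system:

```shell
MNT=/gfs   # hypothetical mount point

# Printed rather than run: gfs_quota requires a mounted GFS file system.
echo "gfs_quota limit -u bert -l 1024 -f $MNT"  # hard limit for user bert
echo "gfs_quota warn -g users -l 512 -f $MNT"   # warn limit for group users
echo "gfs_quota list -f $MNT"                   # all limits in effect
echo "gfs_quota get -u bert -f $MNT"            # limits and usage for one ID
```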
  • Page 31: Synchronizing Quotas

    Synchronizing Quotas gfs_quota list -f /gfs This example displays quota information in sectors for group users on file system /gfs. gfs_quota get -g users -f /gfs -s 4.4.3. Synchronizing Quotas GFS stores all quota information in its own internal file on disk. A GFS node does not update this quota file for every file-system write;...
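Since GFS defers quota-file writes, a quota sync can be forced on a node; this sketch assumes the gfs_quota sync subcommand and a hypothetical mount point, and only prints the command:

```shell
MNT=/gfs   # hypothetical mount point
# Printed rather than run: pushes this node's pending quota changes
# out to the on-disk quota file immediately.
echo "gfs_quota sync -f $MNT"
```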
  • Page 32: Disabling/Enabling Quota Enforcement

    Chapter 4. Managing GFS This example changes the default time period between regular quota-file updates to one hour (3600 seconds) for file system /gfs on a single node. gfs_tool settune /gfs quota_quantum 3600 4.4.4. Disabling/Enabling Quota Enforcement Enforcement of quotas can be disabled for a file system without clearing the limits set for all users and groups.
  • Page 33: Growing A File System

    Growing a File System tunable parameter to 0. This must be done on each node and after each mount. (The 0 setting is not persistent across unmounts.) Quota accounting can be enabled by setting the quota_account tunable parameter to 1. Usage gfs_tool settune MountPoint quota_account {0|1} MountPoint...
  • Page 34 Chapter 4. Managing GFS The gfs_grow command must be run on a mounted file system, but only needs to be run on one node in a cluster. All the other nodes sense that the expansion has occurred and automatically start using the new space.
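Growing a file system is typically a two-step sequence: grow the underlying logical volume, then run gfs_grow once on one node. The device, size, and mount point here are hypothetical, and the commands are printed rather than run:

```shell
DEV=/dev/vg01/lvol0   # hypothetical logical volume backing the file system
MNT=/gfs1             # hypothetical mount point

# Printed rather than run: these need CLVM and a mounted GFS file system.
echo "lvextend -L +10G $DEV"   # grow the underlying volume first
echo "gfs_grow -Tv $MNT"       # -T test run with -v verbose: preview only
echo "gfs_grow $MNT"           # run on ONE node; others sense the new space
```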
  • Page 35: Adding Journals To A File System

    Adding Journals to a File System Device Specifies the device node of the file system. Table 4.3, “GFS-specific Options Available While Expanding A File System” describes the GFS-specific options that can be used while expanding a GFS file system. Option Description Help.
  • Page 36 Chapter 4. Managing GFS After running the gfs_jadd command, run a gfs_jadd command with the -T and -v flags enabled to check that the new journals have been added to the file system. Examples In this example, one journal is added to the file system on the /gfs1 directory. gfs_jadd -j1 /gfs1 In this example, two journals are added to the file system on the /gfs1 directory.
  • Page 37: Direct I/O

    Direct I/O. gfs_jadd flags (flag parameter: description), continued: would have done if it were run without this flag. Using the -v flag with the -T flag turns up the verbosity level to display more information. -q: Quiet; turns down the verbosity level. -V: Displays command version information. -v: Turns up the verbosity of messages.
  • Page 38: Gfs Directory Attribute

    Chapter 4. Managing GFS gfs_tool clearflag directio File File Specifies the file where the directio flag is assigned. Example In this example, the command sets the directio flag on the file named datafile in directory /gfs1. gfs_tool setflag directio /gfs1/datafile 4.7.3.
  • Page 39: Configuring Atime Updates

    Usage Data journaling can result in a reduced fsync() time, especially for small files, because the file data is written to the journal in addition to the metadata. An fsync() returns as soon as the data is written to the journal, which can be substantially faster than the time it takes to write the file data to the main file system.
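A sketch of enabling data journaling: it assumes the jdata and inherit_jdata flags of gfs_tool setflag (following the setflag interface shown above for directio), uses hypothetical paths, and prints the commands rather than running them:

```shell
DIR=/gfs1/logdir      # hypothetical directory on a GFS file system
FILE=/gfs1/emptyfile  # hypothetical zero-length file

# Printed rather than run: gfs_tool needs a GFS node.
# inherit_jdata on a directory journals data for files created in it;
# jdata can be set directly on a zero-length file.
echo "gfs_tool setflag inherit_jdata $DIR"
echo "gfs_tool setflag jdata $FILE"
```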
  • Page 40: Mount With Noatime

    Chapter 4. Managing GFS • ctime — The last time the inode status was changed • mtime — The last time the file (or directory) data was modified • atime — The last time the file (or directory) data was accessed If atime updates are enabled, as they are by default on GFS and other Linux file systems, then every time a file is read, its inode needs to be updated.
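Disabling atime updates at mount time, as described above, might look like this; the device and mount point are hypothetical, and the command is printed rather than run:

```shell
DEV=/dev/vg01/lvol0   # hypothetical device
MNT=/gfs1             # hypothetical mount point
# Printed rather than run: mount with atime updates disabled
# to reduce inode write traffic on read-heavy workloads.
echo "mount -t gfs -o noatime $DEV $MNT"
```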
  • Page 41: Suspending Activity On A File System

    Suspending Activity on a File System By using the gettune flag of the gfs_tool command, all current tunable parameters including atime_quantum (default is 3600 seconds) are displayed. The gfs_tool settune command is used to change the atime_quantum parameter value. It must be set on each node and each time the file system is mounted.
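The gettune/settune sequence described above, with a hypothetical mount point and an illustrative once-per-day value (86400 seconds); the commands are printed rather than run, and remember that settune must be repeated on each node after each mount:

```shell
MNT=/gfs   # hypothetical mount point
# Printed rather than run: requires a mounted GFS file system.
echo "gfs_tool gettune $MNT"                      # list current tunables
echo "gfs_tool settune $MNT atime_quantum 86400"  # update atime once a day
```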
  • Page 42: Displaying Extended Gfs Information And Statistics

    Chapter 4. Managing GFS gfs_tool freeze MountPoint End Suspension gfs_tool unfreeze MountPoint MountPoint Specifies the file system. Examples This example suspends writes to file system /gfs. gfs_tool freeze /gfs This example ends suspension of writes to file system /gfs. gfs_tool unfreeze /gfs 4.11.
  • Page 43: Repairing A File System

    Examples The stat flag displays extended status information about a file. MountPoint Specifies the file system to which the action applies. File Specifies the file from which to get information. The gfs_tool command provides additional action flags (options) not listed in this section. For more information about other gfs_tool flags, refer to the gfs_tool man page.
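A sketch of gfs_tool information flags for this section: counters and stat appear in the text above, df is an assumption from the gfs_tool man page, and the mount point and file name are hypothetical; the commands are only printed:

```shell
MNT=/gfs   # hypothetical mount point
# Printed rather than run: gfs_tool requires a mounted GFS file system.
echo "gfs_tool counters $MNT"       # continuously updated statistics
echo "gfs_tool df $MNT"             # extended space-usage information
echo "gfs_tool stat $MNT/datafile"  # extended status of a single file
```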
  • Page 44: Context-Dependent Path Names

    Chapter 4. Managing GFS Usage gfs_fsck -y BlockDevice The -y flag causes all questions to be answered with yes. With the -y flag specified, the gfs_fsck command does not prompt you for an answer before making changes. BlockDevice Specifies the block device where the GFS file system resides. Example In this example, the GFS file system residing on block device /dev/vg01/lvol0 is repaired.
  • Page 45: Cdpn Variable Values

    Example actual file or directory itself. (The real files or directories must be created in a separate step using names that correlate with the type of variable used.) LinkName Specifies a name that will be seen and used by applications and will be followed to get to one of the multiple real files or directories.
  • Page 46 Chapter 4. Managing GFS n01# ls /gfs/log/ fileA n02# ls /gfs/log/ fileB n03# ls /gfs/log/ fileC...
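The per-node listings above suggest a CDPN setup along these lines; the directory names n01, n02, and n03 are inferred from the example prompts, and the commands are printed rather than run since they need a shared GFS mount:

```shell
LINK=/gfs/log   # the CDPN link name seen by applications
# Printed rather than run: create one real directory per node
# (named after each node's hostname), then a single @hostname link.
echo "mkdir /gfs/n01 /gfs/n02 /gfs/n03"
echo "ln -s @hostname $LINK"
# Each node follows /gfs/log to the directory matching its hostname,
# which is why n01, n02, and n03 list fileA, fileB, and fileC above.
```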
  • Page 47: A. Upgrading Gfs

    Appendix A. Upgrading GFS To upgrade a node to Red Hat GFS 6.1 from earlier versions of Red Hat GFS, you must convert the GFS cluster configuration archive (CCA) to a Red Hat Cluster Suite cluster configuration system (CCS) configuration file (/etc/cluster/cluster.conf) and convert GFS pool volumes to LVM2 volumes.
  • Page 48 Appendix A. Upgrading GFS 3. At all GFS 6.1 nodes, create a cluster configuration file directory (/etc/cluster) and upgrade the CCA (in this example, located in /dev/pool/cca) to the new Red Hat Cluster Suite CCS configuration file format by running the ccs_tool upgrade command as shown in the following example: # mkdir /etc/cluster # ccs_tool upgrade /dev/pool/cca >...
  • Page 49 If static minor numbers were used on pool volumes and the GFS 6.1 nodes are using LVM2 for other purposes (root file system) there may be problems activating the pool volumes under GFS 6.1. That is because of static minor conflicts. Refer to the following Bugzilla report for more information: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=146035...
  • Page 51: B. Revision History

    Appendix B. Revision History
    Revision 1.1 (Wed Mar 17 2010, Paul Kennedy, pkennedy@redhat.com): Resolves #570798. Clarifies the number of nodes supported.
    Revision 1.0 (Wed Apr 01 2009)
  • Page 53: Index

    Index (two flattened columns, reconstructed): adding journals to a file system, 25; atime, configuring updates, 29 (mounting with noatime, 30; tuning atime quantum, 30); audience, v; file system: making, 13; mounting, 15; repairing, 33; quota management, 18 (setting quotas, 18; displaying quota limits, 19; synchronizing quotas, 21; disabling/enabling quota accounting, 22; disabling/enabling quota enforcement, 22)...
  • Page 54 Index network power switches tables system requirements, 8 CDPN variable values, 35 fibre channel network requirements, 8 fibre channel storage device requirements, 8 GFS software subsystem components, 4 overview, 1 GFS-specific options for adding journals, 26 configuration, before, 5 GFS-specific options for expanding file economy, 1 systems, 25 GFS functions, 4...
