
Red Hat Enterprise Linux 5 - Logical Volume Manager Administration Manual



Red Hat Enterprise Linux 5
Logical Volume
Manager Administration
LVM Administrator's Guide




  Summary of Contents for Red Hat Enterprise Linux 5 - Logical Volume Manager Administration

  • Page 1 Red Hat Enterprise Linux 5 Logical Volume Manager Administration LVM Administrator's Guide...
  • Page 2 Logical Volume Manager Administration Red Hat Enterprise Linux 5 Logical Volume Manager Administration LVM Administrator's Guide Edition 1 Copyright © 2009 Red Hat, Inc. This material may only be distributed subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version of the OPL is presently available at http://www.opencontent.org/openpub/).
  • Page 3: Table Of Contents

    Introduction: 1. About This Guide (vii); 2. Audience (vii); 3. Software Versions (vii); 4. Related Documentation (vii); 5. Feedback (viii); 6. Document Conventions (viii); 6.1. Typographic Conventions (viii); 6.2. Pull-quote Conventions (x); 6.3. ...
  • Page 4 Logical Volume Manager Administration: 4.3.10. Splitting a Volume Group (28); 4.3.11. Combining Volume Groups (29); 4.3.12. Backing Up Volume Group Metadata (29); 4.3.13. Renaming a Volume Group (29); 4.3.14. Moving a Volume Group to Another System (30); 4.3.15. ...
  • Page 5: 6.2. Displaying Information on Failed Devices (67); 6.3. Recovering from LVM Mirror Failure (68); 6.4. Recovering Physical Volume Metadata (72); 6.5. Replacing a Missing Physical Volume (73); 6.6. Removing Lost Physical Volumes from a Volume Group (74); 6.7. ...
  • Page 7: Introduction

    Introduction 1. About This Guide This book describes the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. The content of this document is specific to the LVM2 release. 2. Audience This book is intended to be used by system administrators managing systems running the Linux operating system.
  • Page 8: Feedback

    5. Feedback If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla/) against the component rh-cs. Be sure to mention the manual's identifier:...
  • Page 9 Typographic Conventions The above includes a file name, a shell command and a key cap, all presented in Mono-spaced Bold and all distinguishable thanks to context. Key-combinations can be distinguished from key caps by the hyphen connecting each part of a key-combination.
  • Page 10: Pull-Quote Conventions

    Introduction The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home. To see the version of a currently installed package, use the rpm -q package command.
  • Page 11: Notes And Warnings

    Notes and Warnings 6.3. Notes and Warnings Finally, we use three visual styles to draw attention to information that might otherwise be overlooked. Note A Note is a tip, shortcut, or alternative approach to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.
  • Page 13: The Lvm Logical Volume Manager

    Chapter 1. The LVM Logical Volume Manager This chapter provides a high-level overview of the components of the Logical Volume Manager (LVM). 1.1. Logical Volumes Volume management creates a layer of abstraction over physical storage, allowing you to create logical storage volumes. This provides much greater flexibility in a number of ways than using physical storage directly.
  • Page 14: Lvm Architecture Overview

    Chapter 1. The LVM Logical Volume Manager 1.2. LVM Architecture Overview For the RHEL 4 release of the Linux operating system, the original LVM1 logical volume manager was replaced by LVM2, which has a more generic kernel framework than LVM1. LVM2 provides the following improvements over LVM1: •...
  • Page 15: The Clustered Logical Volume Manager (Clvm)

    The Clustered Logical Volume Manager (CLVM) 1.3. The Clustered Logical Volume Manager (CLVM) The Clustered Logical Volume Manager (CLVM) is a set of clustering extensions to LVM. These extensions allow a cluster of computers to manage shared storage (for example, on a SAN) using LVM.
  • Page 16 Chapter 1. The LVM Logical Volume Manager Warning When you create volume groups with CLVM on shared storage, you must ensure that all nodes in the cluster have access to the physical volumes that constitute the volume group. Asymmetric cluster configurations in which some nodes have access to the storage and others do not are not supported.
  • Page 17: Document Overview

    Document Overview For information about the lvm.conf file itself, see Appendix B, The LVM Configuration Files. 1.4. Document Overview The remainder of this document includes the following chapters: • Chapter 2, LVM Components describes the components that make up an LVM logical volume. • Chapter 3, LVM Administration Overview ...
  • Page 19: Lvm Components

    Chapter 2. LVM Components This chapter describes the components of an LVM logical volume. 2.1. Physical Volumes The underlying physical storage unit of an LVM logical volume is a block device such as a partition or whole disk. To use the device for an LVM logical volume, the device must be initialized as a physical volume (PV).
  • Page 20: Multiple Partitions On A Disk

    Chapter 2. LVM Components Figure 2.1. Physical Volume layout 2.1.2. Multiple Partitions on a Disk LVM allows you to create physical volumes out of disk partitions. It is generally recommended that you create a single partition that covers the whole disk to label as an LVM physical volume for the following reasons: •...
  • Page 21: Lvm Logical Volumes

    LVM Logical Volumes A logical volume is allocated into logical extents of the same size as the physical extents. The extent size is thus the same for all logical volumes in the volume group. The volume group maps the logical extents to physical extents.
  • Page 22: Striped Logical Volumes

    Chapter 2. LVM Components of 4MB. This volume group includes 2 physical volumes named PV1 and PV2. The physical volumes are divided into 4MB units, since that is the extent size. In this example, PV1 is 100 extents in size (400MB) and PV2 is 200 extents in size (800MB).
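As a quick sanity check of the extent arithmetic above, the sizes divide out as follows (a shell sketch; the sizes and extent counts are taken from the example itself):

```shell
# Each physical volume is divided into 4MB extents, as in the example.
extent_size_mb=4
pv1_mb=400   # PV1 is 400MB
pv2_mb=800   # PV2 is 800MB

pv1_extents=$((pv1_mb / extent_size_mb))
pv2_extents=$((pv2_mb / extent_size_mb))

echo "PV1: ${pv1_extents} extents, PV2: ${pv2_extents} extents"
# prints "PV1: 100 extents, PV2: 200 extents"
```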
  • Page 23 Striped Logical Volumes striped logical volume. For large sequential reads and writes, this can improve the efficiency of the data I/O. Striping enhances performance by writing data to a predetermined number of physical volumes in round-round fashion. With striping, I/O can be done in parallel. In some situations, this can result in near-linear performance gain for each additional physical volume in the stripe.
  • Page 24: Mirrored Logical Volumes

    Chapter 2. LVM Components 2.3.3. Mirrored Logical Volumes A mirror maintains identical copies of data on different devices. When data is written to one device, it is written to a second device as well, mirroring the data. This provides protection for device failures. When one leg of a mirror fails, the logical volume becomes a linear volume and can still be accessed.
  • Page 25 Snapshot Volumes Note LVM snapshots are not supported across the nodes in a cluster. You cannot create a snapshot volume in a clustered volume group. Because a snapshot copies only the data areas that change after the snapshot is created, the snapshot feature requires a minimal amount of storage.
  • Page 27: Lvm Administration Overview

    Chapter 3. LVM Administration Overview This chapter provides an overview of the administrative procedures you use to configure LVM logical volumes. This chapter is intended to provide a general understanding of the steps involved. For specific step-by-step examples of common LVM configuration procedures, see Chapter 5, LVM Configuration Examples.
  • Page 28: Logical Volume Creation Overview

    Chapter 3. LVM Administration Overview For information on how to install Red Hat Cluster Suite and set up the cluster infrastructure, see Configuring and Managing a Red Hat Cluster. For an example of creating a mirrored logical volume in a cluster, see Section 5.5, “Creating a Mirrored LVM Logical Volume in a Cluster”.
  • Page 29: Logging

    Logging How long the metadata archives stored in the /etc/lvm/archive directory are kept, and how many archive files are kept, is determined by parameters you can set in the lvm.conf file. A daily system backup should include the contents of the /etc/lvm directory in the backup. Note that a metadata backup does not back up the user and system data contained in the logical volumes.
  • Page 31: Lvm Administration With Cli Commands

    Chapter 4. LVM Administration with CLI Commands This chapter summarizes the individual administrative tasks you can perform with the LVM Command Line Interface (CLI) commands to create and maintain logical volumes. Note If you are creating or modifying an LVM volume for a clustered environment, you must ensure that you are running the clvmd daemon; see Section 3.1.
  • Page 32: Physical Volume Administration

    Chapter 4. LVM Administration with CLI Commands Found volume group "new_vg" Creating new_vg-lvol0 Loading new_vg-lvol0 table Resuming new_vg-lvol0 (253:2) Clearing start of logical volume "lvol0" Creating volume group backup "/etc/lvm/backup/new_vg" (seqno 5). Logical volume "lvol0" created You could also have used the -vv, -vvv or the -vvvv argument to display increasingly more details about the command execution.
  • Page 33: Creating Physical Volumes

    Creating Physical Volumes 4.2.1. Creating Physical Volumes The following subsections describe the commands used for creating physical volumes. 4.2.1.1. Setting the Partition Type If you are using a whole disk device for your physical volume, the disk must have no partition table. For DOS disk partitions, the partition id should be set to 0x8e using the fdisk or cfdisk command or an equivalent.
  • Page 34: Displaying Physical Volumes

    Chapter 4. LVM Administration with CLI Commands /dev/ram5 [16.00 MB] /dev/ram6 [16.00 MB] /dev/ram7 [16.00 MB] /dev/ram8 [16.00 MB] /dev/ram9 [16.00 MB] /dev/ram10 [16.00 MB] /dev/ram11 [16.00 MB] /dev/ram12 [16.00 MB] /dev/ram13 [16.00 MB] /dev/ram14 [16.00 MB] /dev/ram15 [16.00 MB] /dev/sdb [17.15 GB] /dev/sdb1...
  • Page 35: Preventing Allocation On A Physical Volume

    Preventing Allocation on a Physical Volume The following command shows all physical devices found: # pvscan PV /dev/sdb2 VG vg0 lvm2 [964.00 MB / 0 free] PV /dev/sdc1 VG vg0 lvm2 [964.00 MB / 428.00 MB free] PV /dev/sdc2 lvm2 [964.84 MB] Total: 3 [2.83 GB] / in use: 2 [1.88 GB] / in no VG: 1 [964.84 MB] You can define a filter in the lvm.conf file so that this command will avoid scanning specific physical volumes; see Section 4.6, ...
  • Page 36: Creating Volume Groups

    Chapter 4. LVM Administration with CLI Commands 4.3.1. Creating Volume Groups To create a volume group from one or more physical volumes, use the vgcreate command. The vgcreate command creates a new volume group by name and adds at least one physical volume to it. The following command creates a volume group named vg1 that contains physical volumes /dev/sdd1 and /dev/sde1.
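The command referred to above is presumably of the standard vgcreate form. The sketch below only composes the command string rather than running it, since vgcreate needs root privileges and real initialized physical volumes:

```shell
# Compose (but do not execute) the vgcreate invocation described in the text.
vg_name="vg1"
pv_devices="/dev/sdd1 /dev/sde1"
vgcreate_cmd="vgcreate ${vg_name} ${pv_devices}"
echo "$vgcreate_cmd"
# prints "vgcreate vg1 /dev/sdd1 /dev/sde1"
```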
  • Page 37: Creating Volume Groups In A Cluster

    Creating Volume Groups in a Cluster The maximum device size with LVM is 8 Exabytes on 64-bit CPUs. 4.3.2. Creating Volume Groups in a Cluster You create volume groups in a cluster environment with the vgcreate command, just as you create them on a single node.
  • Page 38: Scanning Disks For Volume Groups To Build The Cache File

    Chapter 4. LVM Administration with CLI Commands The vgscan command, which scans all the disks for volume groups and rebuilds the LVM cache file, also displays the volume groups. For information on the vgscan command, see Section 4.3.5, “Scanning Disks for Volume Groups to Build the Cache File”.
  • Page 39: Removing Physical Volumes From A Volume Group

    Removing Physical Volumes from a Volume Group You can define a filter in the lvm.conf file to restrict the scan to avoid specific devices. For information on using filters to control which devices are scanned, see Section 4.6, “Controlling LVM Device Scans with Filters”.
  • Page 40: Changing The Parameters Of A Volume Group

    Chapter 4. LVM Administration with CLI Commands 4.3.7. Changing the Parameters of a Volume Group The vgchange command is used to deactivate and activate volume groups, as described in Section 4.3.8, “Activating and Deactivating Volume Groups”. You can also use this command to change several volume group parameters for an existing volume group.
  • Page 41: Combining Volume Groups

    Combining Volume Groups Logical volumes cannot be split between volume groups. Each existing logical volume must be entirely on the physical volumes forming either the old or the new volume group. If necessary, however, you can use the pvmove command to force the split. The following example splits off the new volume group smallvg from the original volume group bigvg.
  • Page 42: Moving A Volume Group To Another System

    Chapter 4. LVM Administration with CLI Commands 4.3.14. Moving a Volume Group to Another System You can move an entire LVM volume group to another system. It is recommended that you use the vgexport and vgimport commands when you do this. The vgexport command makes an inactive volume group inaccessible to the system, which allows you to detach its physical volumes.
  • Page 43: Creating Logical Volumes

    Creating Logical Volumes 4.4.1. Creating Logical Volumes To create a logical volume, use the lvcreate command. You can create linear volumes, striped volumes, and mirrored volumes, as described in the following subsections. If you do not specify a name for the logical volume, the default name lvol# is used where # is the internal number of the logical volume.
  • Page 44 Chapter 4. LVM Administration with CLI Commands You can use the -l argument of the lvcreate command to create a logical volume that uses the entire volume group. Another way to create a logical volume that uses the entire volume group is to use the vgdisplay command to find the "Total PE"...
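The "Total PE" pattern can be sketched as follows (the volume group name and extent count here are hypothetical; the command is composed as a string rather than run, since it needs a live volume group):

```shell
# Suppose vgdisplay reported: Total PE  8780  for volume group testvg.
total_pe=8780

# Allocate every extent to one logical volume.
lvcreate_cmd="lvcreate -l ${total_pe} -n testlv testvg"
echo "$lvcreate_cmd"
# prints "lvcreate -l 8780 -n testlv testvg"
```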
  • Page 45 Creating Logical Volumes striped. The number of stripes cannot be greater than the number of physical volumes in the volume group (unless the --alloc anywhere argument is used). If the underlying physical devices that make up a striped logical volume are different sizes, the maximum size of the striped volume is determined by the smallest underlying device.
  • Page 46 Chapter 4. LVM Administration with CLI Commands of 2, using that number as the -R argument to the lvcreate command. For example, if your mirror size is 1.5TB, you could specify -R 2. If your mirror size is 3TB, you could specify -R 4.
  • Page 47 Creating Logical Volumes lvcreate -L 500M -m1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1 The following command creates a mirrored logical volume with a single mirror. The volume is 500 megabytes in size, it is named mirrorlv, and it is carved out of volume group vg0. The first leg of the mirror is on extents 0 through 499 of device /dev/sda1, the second leg of the mirror is on extents 0 through 499 of device /dev/sdb1, and the mirror log starts on extent 0 of device /dev/sdc1.
  • Page 48: Persistent Device Numbers

    Chapter 4. LVM Administration with CLI Commands 4.4.2. Persistent Device Numbers Major and minor device numbers are allocated dynamically at module load. Some applications work best if the block device is always activated with the same device (major and minor) number. You can specify these with the lvcreate and the lvchange commands by using the following arguments: --persistent y --major major --minor minor Use a large minor number to be sure that it hasn't already been allocated to another device
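A sketch of those persistent-number arguments in context (the major/minor pair, volume name, and size are hypothetical; the command is only composed here, since lvcreate needs a real volume group):

```shell
# Hypothetical device numbers; pick a minor well above those already in use.
major=238
minor=137
lv_cmd="lvcreate --persistent y --major ${major} --minor ${minor} -L 100M -n lvpersist vg0"
echo "$lv_cmd"
```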
  • Page 49: Removing Logical Volumes

    Removing Logical Volumes lvrename vg02 lvold lvnew For more information on activating logical volumes on individual nodes in a cluster, see Section 4.8, “Activating Logical Volumes on Individual Nodes in a Cluster”. 4.4.6. Removing Logical Volumes To remove an inactive logical volume, use the lvremove command. If the logical volume is currently mounted, unmount the volume before removing it.
  • Page 50: Growing Logical Volumes

    Chapter 4. LVM Administration with CLI Commands ACTIVE '/dev/vg0/gfslv' [1.46 GB] inherit 4.4.8. Growing Logical Volumes To increase the size of a logical volume, use the lvextend command. After extending the logical volume, you will need to increase the size of the associated file system to match.
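The grow-then-resize order matters: extend the logical volume first, then the file system on top of it. A composed sketch using the GFS example volume from this chapter (the commands are printed, not run, since both steps need a live device):

```shell
lv_path="/dev/vg0/gfslv"
extend_cmd="lvextend -L +1G ${lv_path}"   # grow the LV by 1GB
grow_cmd="gfs_grow ${lv_path}"            # then grow the GFS file system to match
printf '%s\n%s\n' "$extend_cmd" "$grow_cmd"
```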
  • Page 51 Extending a Striped Volume the volume group will not enable you to extend the stripe. Instead, you must add at least two physical volumes to the volume group. For example, consider a volume group vg that consists of two underlying physical volumes, as displayed with the following vgs command.
  • Page 52: Shrinking Logical Volumes

    Chapter 4. LVM Administration with CLI Commands To extend the striped logical volume, add another physical volume and then extend the logical volume. In this example, having added two physical volumes to the volume group we can extend the logical volume to the full size of the volume group.
  • Page 53: Creating Snapshot Volumes

    Creating Snapshot Volumes 4.5. Creating Snapshot Volumes Use the -s argument of the lvcreate command to create a snapshot volume. A snapshot volume is writeable. Note LVM snapshots are not supported across the nodes in a cluster. You cannot create a snapshot volume in a clustered volume group.
  • Page 54: Controlling Lvm Device Scans With Filters

    Chapter 4. LVM Administration with CLI Commands Attr LSize Origin Snap% Move Log Copy% lvol0 new_vg owi-a- 52.00M newvgsnap1 new_vg swi-a- 8.00M lvol0 0.20 Note Because the snapshot increases in size as the origin volume changes, it is important to monitor the percentage of the snapshot volume regularly with the lvs command to be sure it does not fill.
  • Page 55: Online Data Relocation

    Online Data Relocation The following filter adds just partition 8 on the first IDE drive and removes all other block devices: filter = [ "a|^/dev/hda8$|", "r/.*/" ] For more information on the lvm.conf file, see Appendix B, The LVM Configuration Files, and the lvm.conf(5) man page.
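Because the a|...| accept pattern is an ordinary regular expression, it can be exercised locally with grep. This sketch tests only the regex itself, not LVM's filter machinery:

```shell
# The accept regex from the filter above: match exactly /dev/hda8.
accept='^/dev/hda8$'

device_status() {
    # Accepted if the device name matches the accept regex, rejected otherwise.
    if printf '%s\n' "$1" | grep -Eq "$accept"; then
        echo accepted
    else
        echo rejected
    fi
}

hda8_status=$(device_status /dev/hda8)
hda7_status=$(device_status /dev/hda7)
echo "/dev/hda8: ${hda8_status}, /dev/hda7: ${hda7_status}"
# prints "/dev/hda8: accepted, /dev/hda7: rejected"
```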
  • Page 56: Customized Reporting For Lvm

    Chapter 4. LVM Administration with CLI Commands To activate logical volumes exclusively on one node, use the lvchange -aey command. Alternatively, you can use the lvchange -aly command to activate logical volumes only on the local node but not exclusively. You can later activate them on additional nodes concurrently. You can also activate logical volumes on individual nodes by using LVM tags, which are described in Appendix C, LVM Object Tags.
  • Page 57 Format Control The following example displays the UUID of the physical volume in addition to the default fields. # pvs -o +pv_uuid Attr PSize PFree PV UUID /dev/sdb1 new_vg lvm2 a- 17.14G 17.14G onFF2w-1fLC-ughJ-D9eB- M7iv-6XqA-dqGeXY /dev/sdc1 new_vg lvm2 a- 17.14G 17.09G Joqlch-yWSj-kuEn-IdwM-01S9- X08M-mcpsVe /dev/sdd1 new_vg lvm2 a-...
  • Page 58: Object Selection

    Chapter 4. LVM Administration with CLI Commands To keep the fields aligned when using the separator argument, use the separator argument in conjunction with the --aligned argument. # pvs --separator = --aligned =Fmt =Attr=PSize =PFree /dev/sdb1 =new_vg=lvm2=a- =17.14G=17.14G /dev/sdc1 =new_vg=lvm2=a- =17.14G=17.09G /dev/sdd1 =new_vg=lvm2=a- =17.14G=17.14G...
  • Page 59 Object Selection Argument Header Description DevSize The size of the underlying device on which the physical dev_size volume was created 1st PE Offset to the start of the first physical extent in the pe_start underlying device Attr Status of the physical volume: (a)llocatable or e(x)ported. pv_attr The metadata format of the physical volume (lvm2 or pv_fmt...
  • Page 60 Chapter 4. LVM Administration with CLI Commands You can use the --segments argument of the pvs command to display information about each physical volume segment. A segment is a group of extents. A segment view can be useful if you want to see whether your logical volume is fragmented.
  • Page 61 Object Selection /dev/sdd1 new_vg lvm2 a- 17.14G 17.14G The vgs Command Table 4.2, “vgs Display Fields” lists the display arguments of the vgs command, along with the field name as it appears in the header display and a description of the field. Argument Header Description...
  • Page 62 Chapter 4. LVM Administration with CLI Commands # vgs -v Finding all volume groups Finding volume group "new_vg" Attr #PV #LV #SN VSize VFree VG UUID new_vg wz--n- 4.00M 1 51.42G 51.36G jxQJ0a-ZKk0-OpMO-0118-nlwO- wwqd-fD5D32 The lvs Command Table 4.3, “lvs Display Fields” lists the display arguments of the lvs command, along with the field name as it appears in the header display and a description of the field.
  • Page 63 Object Selection Argument Header Description LV Tags LVM tags attached to the logical volume lv_tags LV UUID The UUID of the logical volume. lv_uuid Device on which the mirror log resides mirror_log Modules Corresponding kernel device-mapper target necessary to modules use this logical volume Move Source physical volume of a temporary logical volume...
  • Page 64: Sorting Lvm Reports

    Chapter 4. LVM Administration with CLI Commands You can use the --segments argument of the lvs command to display information with default columns that emphasize the segment information. When you use the segments argument, the seg prefix is optional. The lvs --segments command displays the following fields by default: lv_name, vg_name, lv_attr, stripes, segtype, seg_size.
  • Page 65: Specifying Units

    Specifying Units # pvs -o pv_name,pv_size,pv_free PSize PFree /dev/sdb1 17.14G 17.14G /dev/sdc1 17.14G 17.09G /dev/sdd1 17.14G 17.14G The following example shows the same output, sorted by the free space field. # pvs -o pv_name,pv_size,pv_free -O pv_free PSize PFree /dev/sdc1 17.14G 17.09G /dev/sdd1 17.14G 17.14G /dev/sdb1...
  • Page 66 Chapter 4. LVM Administration with CLI Commands /dev/sdc1 new_vg lvm2 a- 17552.00M 17500.00M /dev/sdd1 new_vg lvm2 a- 17552.00M 17552.00M By default, units are displayed in powers of 2 (multiples of 1024). You can specify that units be displayed in multiples of 1000 by capitalizing the unit specification (B, K, M, G, T, H). The following command displays the output as a multiple of 1024, the default behavior.
  • Page 67: Lvm Configuration Examples

    Chapter 5. LVM Configuration Examples This chapter provides some basic LVM configuration examples. 5.1. Creating an LVM Logical Volume on Three Disks This example creates an LVM logical volume called new_logical_volume that consists of the disks at /dev/sda1, /dev/sdb1, and /dev/sdc1. 5.1.1.
  • Page 68: Creating The File System

    Chapter 5. LVM Configuration Examples 5.1.4. Creating the File System The following command creates a GFS file system on the logical volume. [root@tng3-1 ~]# gfs_mkfs -plock_nolock -j 1 /dev/new_vol_group/ new_logical_volume This will destroy any data on /dev/new_vol_group/new_logical_volume. Are you sure you want to proceed? [y/n] y Device: /dev/new_vol_group/new_logical_volume Blocksize:...
  • Page 69: Creating The Volume Group

    Creating the Volume Group Physical volume "/dev/sdc1" successfully created 5.2.2. Creating the Volume Group The following command creates the volume group volgroup01. [root@tng3-1 ~]# vgcreate volgroup01 /dev/sda1 /dev/sdb1 /dev/sdc1 Volume group "volgroup01" successfully created You can use the vgs command to display the attributes of the new volume group. [root@tng3-1 ~]# vgs #PV #LV #SN Attr VSize...
  • Page 70: Splitting A Volume Group

    Chapter 5. LVM Configuration Examples All Done The following commands mount the logical volume and report the file system disk space usage. [root@tng3-1 ~]# mount /dev/volgroup01/striped_logical_volume /mnt [root@tng3-1 ~]# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/VolGroup00-LogVol00 13902624 1656776 11528232 13% / /dev/hda1...
  • Page 71: Splitting The Volume Group

    Splitting the Volume Group /dev/sdc1: Moved: 76.6% /dev/sdc1: Moved: 92.2% /dev/sdc1: Moved: 100.0% After moving the data, you can see that all of the space on /dev/sdc1 is free. [root@tng3-1 ~]# pvscan PV /dev/sda1 VG myvg lvm2 [17.15 GB / 0 free] PV /dev/sdb1 VG myvg...
  • Page 72: Activating And Mounting The Original Logical Volume

    Chapter 5. LVM Configuration Examples [root@tng3-1 ~]# gfs_mkfs -plock_nolock -j 1 /dev/yourvg/yourlv This will destroy any data on /dev/yourvg/yourlv. Are you sure you want to proceed? [y/n] y Device: /dev/yourvg/yourlv Blocksize: 4096 Filesystem Size: 1277816 Journals: Resource Groups: Locking Protocol: lock_nolock Lock Table: Syncing...
  • Page 73: Moving Extents To A New Disk

    Moving Extents to a New Disk /dev/sdc1 myvg lvm2 a- 17.15G 12.15G 5.00G /dev/sdd1 myvg lvm2 a- 17.15G 2.15G 15.00G We want to move the extents off of /dev/sdb1 so that we can remove it from the volume group. If there are enough free extents on the other physical volumes in the volume group, you can execute the pvmove command on the device you want to remove with no other options and the extents will be distributed to the other devices.
  • Page 74 Chapter 5. LVM Configuration Examples /dev/sdc1 myvg lvm2 a- 17.15G 15.15G 2.00G We want to move the extents of /dev/sdb1 to a new device, /dev/sdd1. 5.4.2.1. Creating the New Physical Volume Create a new physical volume from /dev/sdd1. [root@tng3-1 ~]# pvcreate /dev/sdd1 Physical volume "/dev/sdd1"...
  • Page 75: Creating A Mirrored Lvm Logical Volume In A Cluster

    Creating a Mirrored LVM Logical Volume in a Cluster [root@tng3-1 ~]# vgreduce myvg /dev/sdb1 Removed "/dev/sdb1" from volume group "myvg" You can now reallocate the disk to another volume group or remove the disk from the system. 5.5. Creating a Mirrored LVM Logical Volume in a Cluster Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume on a single node.
  • Page 76 Chapter 5. LVM Configuration Examples 4. Start the cmirror service. [root@doc-07 ~]# service cmirror start Loading clustered mirror log: 5. Create the mirror. The first step is creating the physical volumes. The following commands create three physical volumes. Two of the physical volumes will be used for the legs of the mirror, and the third physical volume will contain the mirror log.
  • Page 77 Creating a Mirrored LVM Logical Volume in a Cluster [root@doc-07 log]# lvs vg001/mirrorlv Attr LSize Origin Snap% Move Log Copy% Convert mirrorlv vg001 mwi-a- 3.91G vg001_mlog 47.00 [root@doc-07 log]# lvs vg001/mirrorlv Attr LSize Origin Snap% Move Log Copy% Convert mirrorlv vg001 mwi-a- 3.91G vg001_mlog 91.00...
  • Page 78 Chapter 5. LVM Configuration Examples /dev/xvdc1:0-0 When you create the mirrored volume, you create the clustered_log dlm space, which will contain the dlm locks for all mirrors. [root@doc-07 log]# cman_tool services Service Name GID LID State Code Fence Domain: "default" 2 run [1 2 3] DLM Lock Space:...
  • Page 79: Lvm Troubleshooting

    Chapter 6. LVM Troubleshooting This chapter provides instructions for troubleshooting a variety of LVM issues. 6.1. Troubleshooting Diagnostics If a command is not working as expected, you can gather diagnostics in the following ways: • Use the -v, -vv, -vvv, or -vvvv argument of any command for increasingly verbose levels of output.
  • Page 80: Recovering From Lvm Mirror Failure

    Chapter 6. LVM Troubleshooting [root@link-07 tmp]# lvs -a -o +devices Volume group "vg" not found Using the -P argument shows the logical volumes that have failed. [root@link-07 tmp]# lvs -P -a -o +devices Partial mode. Incomplete volume groups will be activated read-only. Attr LSize Origin Snap%...
  • Page 81 Recovering from LVM Mirror Failure The following command creates the physical volumes which will be used for the mirror. [root@link-08 ~]# pvcreate /dev/sd[abcdefgh][12] Physical volume "/dev/sda1" successfully created Physical volume "/dev/sda2" successfully created Physical volume "/dev/sdb1" successfully created Physical volume "/dev/sdb2" successfully created Physical volume "/dev/sdc1"...
  • Page 82 Chapter 6. LVM Troubleshooting groupfs mwi-a- 752.00M groupfs_mlog 100.00 groupfs_mimage_0(0),groupfs_mimage_1(0) [groupfs_mimage_0] vg iwi-ao 752.00M /dev/sda1(0) [groupfs_mimage_1] vg iwi-ao 752.00M /dev/sdb1(0) [groupfs_mlog] lwi-ao 4.00M /dev/sdc1(0) In this example, the primary leg of the mirror /dev/sda1 fails. Any write activity to the mirrored volume causes LVM to detect the failed mirror.
  • Page 83 Recovering from LVM Mirror Failure PV /dev/sdg1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sda1...
  • Page 84: Recovering Physical Volume Metadata

    Chapter 6. LVM Troubleshooting [groupfs_mimage_1] vg iwi-ao 752.00M /dev/sda1(0) [groupfs_mlog] lwi-ao 4.00M /dev/sdc1(0) 6.4. Recovering Physical Volume Metadata If the volume group metadata area of a physical volume is accidentally overwritten or otherwise destroyed, you will get an error message indicating that the metadata area is incorrect, or that the system was unable to find a physical volume with a particular UUID.
  • Page 85: Replacing A Missing Physical Volume

    Replacing a Missing Physical Volume (which could happen, for example, if the original pvcreate command had used the command line arguments that control metadata placement, or if the physical volume was originally created using a different version of the software that used different defaults). The pvcreate command overwrites only the LVM metadata areas and does not affect the existing data areas.
  • Page 86: Removing Lost Physical Volumes From A Volume Group

    Chapter 6. LVM Troubleshooting wish to substitute another physical volume of the same size, you can use the pvcreate command with the --restorefile and --uuid arguments to initialize a new device with the same UUID as the missing physical volume. You can then use the vgcfgrestore command to restore the volume group's metadata.
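The recovery sequence described above can be sketched as composed commands (the UUID, archive file name, device, and volume group name are all hypothetical placeholders; nothing is executed here, since the real commands operate on block devices as root):

```shell
# Initialize the replacement device with the missing PV's UUID, using the
# metadata archive, then restore the volume group metadata.
missing_uuid="FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk"   # hypothetical UUID
archive_file="/etc/lvm/archive/myvg_00050.vg"           # hypothetical archive
pvcreate_cmd="pvcreate --uuid ${missing_uuid} --restorefile ${archive_file} /dev/sdh1"
restore_cmd="vgcfgrestore myvg"
printf '%s\n%s\n' "$pvcreate_cmd" "$restore_cmd"
```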
  • Page 87 Insufficient Free Extents for a Logical Volume # lvcreate -l8780 -n testlv testvg This uses all the free extents in the volume group. # vgs -o +vg_free_count,vg_extent_count #PV #LV #SN Attr VSize VFree Free #Ext testvg 0 wz--n- 34.30G 0 8780 Alternately, you can extend the logical volume to use a percentage of the remaining free space in the volume group by using the -l argument of the lvcreate command.
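The vgs figures above are internally consistent: 8780 extents at the default 4MB extent size come to roughly 34.30G. A quick check:

```shell
free_extents=8780
extent_mb=4
total_mb=$((free_extents * extent_mb))
echo "${total_mb} MB"   # 35120 MB, i.e. about 34.30 GiB as vgs reports
```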
  • Page 89: Lvm Administration With The Lvm Gui

    Chapter 7. LVM Administration with the LVM GUI In addition to the Command Line Interface (CLI), LVM provides a Graphical User Interface (GUI) which you can use to configure LVM logical volumes. You can bring up this utility by typing system-config-lvm.
  • Page 91: The Device Mapper

    Appendix A. The Device Mapper The Device Mapper is a kernel driver that provides a framework for volume management. It provides a generic way of creating mapped devices, which may be used as logical volumes. It does not specifically know about volume groups or metadata formats. The Device Mapper provides the foundation for a number of higher-level technologies.
  • Page 92: The Linear Mapping Target

    Appendix A. The Device Mapper

    The following subsections describe the format of the following mappings:

    • linear
    • striped
    • mirror
    • snapshot and snapshot-origin
    • error
    • zero
    • multipath
    • crypt

    A.1.1. The linear Mapping Target

    A linear mapping target maps a continuous range of blocks onto another block device. The format of a linear target is as follows:

    start length linear device offset

    start...
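For a table line such as `0 2097152 linear /dev/sdb 384` (an illustrative example, not from the manual), a linear target maps virtual sector v to physical sector offset + (v - start). A minimal sketch of that arithmetic:

```shell
start=0; length=2097152; offset=384   # illustrative linear table values
v=1000                                # a virtual sector inside the mapping
phys=$(( offset + (v - start) ))      # where the I/O lands on the backing device
echo "virtual sector $v -> physical sector $phys"
```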
  • Page 93: The Striped Mapping Target

    A.1.2. The striped Mapping Target

    The striped mapping target supports striping across physical devices. It takes as arguments the number of stripes and the striping chunk size followed by a list of pairs of device name and sector. The format of a striped target is as follows:

    start length striped #stripes chunk_size device1 offset1 ...
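Given those parameters, addressing round-robins between the stripe devices one chunk at a time. A sketch of the selection arithmetic, with illustrative values (2 stripes, 256-sector chunks):

```shell
stripes=2; chunk=256       # illustrative #stripes and chunk_size (in sectors)
sector=1000                # virtual sector to locate
chunk_index=$(( sector / chunk ))
stripe=$(( chunk_index % stripes ))          # which stripe device holds this chunk
within=$(( (chunk_index / stripes) * chunk + sector % chunk ))  # sector on that device, before its offset
echo "sector $sector -> stripe $stripe, device sector $within"
```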
  • Page 94: The Mirror Mapping Target

    Appendix A. The Device Mapper

    major:minor numbers of second device
    starting offset of the mapping on the second device
    major:minor numbers of third device
    9789824
    starting offset of the mapping on the third device

    The following example shows a striped target for 2 stripes with 256 KiB chunks, with the device parameters specified by the device names in the file system rather than by the major and minor numbers.
  • Page 95 The mirror Mapping Target clustered_disk The mirror is clustered and the mirror log is kept on disk. This log type takes 3 - 5 arguments: logdevice regionsize UUID [[no]sync] [block_on_error] LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror or mirrors.
  • Page 96: The Snapshot And Snapshot-Origin Mapping Targets

    Appendix A. The Device Mapper

    253:2
        major:minor numbers of log device
    1024
        region size the mirror log uses to keep track of what is in sync
    UUID
        UUID of mirror log device to maintain log information throughout a cluster
    block_on_error
        mirror should respond to errors
    number of legs in mirror
    253:3 0 253:4 0 253:5 0...
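Given the region size of 1024 sectors shown above, the number of regions the log must track is the mirror length divided by the region size, rounded up. A quick sketch with an illustrative mirror size:

```shell
mirror_sectors=1572864   # illustrative mirror length in 512-byte sectors (768 MiB)
region_size=1024         # region size in sectors, as in the example above
regions=$(( (mirror_sectors + region_size - 1) / region_size ))  # ceiling division
echo "$regions regions tracked by the mirror log"
```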
  • Page 97 The snapshot and snapshot-origin Mapping Targets brw------- 1 root root 254, 12 29 ago 18:15 /dev/mapper/volumeGroup-snap- brw------- 1 root root 254, 13 29 ago 18:15 /dev/mapper/volumeGroup-snap brw------- 1 root root 254, 10 29 ago 18:14 /dev/mapper/volumeGroup-base The format for the snapshot-origin target is as follows: start length snapshot-origin origin start starting block in virtual device...
  • Page 98: The Error Mapping Target

    Appendix A. The Device Mapper The following example shows a snapshot target with an origin device of 254:11 and a COW device of 254:12. This snapshot device is persistent across reboots and the chunk size for the data stored on the COW device is 16 sectors.
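Chunk sizes in these tables are expressed in 512-byte sectors, so the 16-sector chunk size in the example above works out to 8 KiB copied to the COW device per modified chunk:

```shell
chunk_sectors=16                           # chunk size from the snapshot table
chunk_kib=$(( chunk_sectors * 512 / 1024 ))  # 512-byte sectors -> KiB
echo "chunk size: $chunk_kib KiB"
```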
  • Page 99 The multipath Mapping Target start starting block in virtual device length length of this segment #features The number of multipath features, followed by those features. If this parameter is zero, then there is no feature parameter and the next device mapping parameter is #handlerargs. Currently there is one supported multipath feature, queue_if_no_path.
  • Page 100 Appendix A. The Device Mapper #selectorargs The number of path selector arguments which follow this argument in the multipath mapping. Currently, the value of this argument is always 0. #paths The number of paths in this path group. #pathargs The number of path arguments specified for each path in this group. Currently this number is always 1, the ioreqs argument.
  • Page 101: The Crypt Mapping Target

    0 71014400 multipath 0 0 1 1 round-robin 0 4 1 66:112 1000 \
    67:176 1000 68:240 1000 65:48 1000

    For further information about multipathing, see the Using Device Mapper Multipath document.

    A.1.8. The crypt Mapping Target

    The crypt target encrypts the data passing through the specified device.
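The positional fields in a multipath table like the one above can be picked apart mechanically; a sketch using that example line, with field meanings taken from the parameter descriptions in this section:

```shell
table="0 71014400 multipath 0 0 1 1 round-robin 0 4 1 66:112 1000 67:176 1000 68:240 1000 65:48 1000"
set -- $table
# $1=start $2=length $3=target $4=#features $5=#handlerargs
# $6=#pathgroups $7=pathgroup $8=selector $9=#selectorargs ${10}=#paths ${11}=#pathargs
echo "start=$1 length=$2 target=$3 pathgroups=$6 selector=$8 paths=${10}"
```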
  • Page 102: The Dmsetup Command

    Appendix A. The Device Mapper A.2. The dmsetup Command The dmsetup command is a command line wrapper for communication with the Device Mapper. For general system information about LVM devices, you may find the info, ls, status, and deps options of the dmsetup command to be useful, as described in the following subsections. For information about additional options and capabilities of the dmsetup command, see the dmsetup(8) man page.
  • Page 103: The Dmsetup Ls Command

    The dmsetup ls Command

    UUID
        UUID of the device.

    The following example shows partial output for the dmsetup info command.

    [root@ask-07 ~]# dmsetup info
    Name:              testgfsvg-testgfslv1
    State:             ACTIVE
    Read Ahead:
    Tables present:    LIVE
    Open count:
    Event number:
    Major, minor:      253, 2
    Number of targets: 2
    UUID: LVM-K528WUGQgPadNXYcFrrf9LnPlUMswgkCkpgPIgYzSvigM7SfeWCypddNSWtNzc2N

    Name:
  • Page 104: The Dmsetup Status Command

    Appendix A. The Device Mapper

    lock_stress-grant--03.1718 (253, 52)
    lock_stress-grant--02.1716 (253, 40)
    lock_stress-grant--03.1713 (253, 47)
    lock_stress-grant--02.1709 (253, 23)
    lock_stress-grant--01.1707 (253, 8)
    lock_stress-grant--01.1724 (253, 14)
    lock_stress-grant--03.1711 (253, 27)

    A.2.3. The dmsetup status Command

    The dmsetup status device command provides status information for each target in a specified device.
  • Page 105: The Lvm Configuration Files

    Appendix B. The LVM Configuration Files LVM supports multiple configuration files. At system startup, the lvm.conf configuration file is loaded from the directory specified by the environment variable LVM_SYSTEM_DIR, which is set to /etc/lvm by default. The lvm.conf file can specify additional configuration files to load. Settings in later files override settings from earlier ones.
  • Page 106 Appendix B. The LVM Configuration Files # It contains the default settings that would be used if there was no # /etc/lvm/lvm.conf file. # Refer to 'man lvm.conf' for further information including the file layout. # To put this file in a different directory and override /etc/lvm set # the environment variable LVM_SYSTEM_DIR before running the tools.
  • Page 107 Sample lvm.conf File # the cache file gets regenerated (see below). # If it doesn't do what you expect, check the output of 'vgscan -vvvv'. # By default we accept every block device: filter = [ "a/.*/" ] # Exclude the cdrom drive # filter = [ "r|/dev/cdrom|"...
  • Page 108 Appendix B. The LVM Configuration Files md_chunk_alignment = 1 # If, while scanning the system for PVs, LVM2 encounters a device- mapper # device that has its I/O suspended, it waits for it to become accessible. # Set this to 1 to skip such devices. This should only be needed # in recovery situations.
  • Page 109 Sample lvm.conf File prefix = " " # To make the messages look similar to the original LVM tools use: indent = 0 command_names = 1 prefix = " -- " # Set this if you want log messages during activation. # Don't use this in low memory situations (can deadlock).
  • Page 110 Appendix B. The LVM Configuration Files # Miscellaneous global LVM2 settings global { library_dir = "/usr/lib64" # The file creation mask for any files and directories created. # Interpreted as octal if the first digit is zero. umask = 077 # Allow other users to read the files #umask = 022 # Enabling test mode means that no changes to the on disk metadata...
  • Page 111 Sample lvm.conf File locking_type = 1 # If using external locking (type 2) and initialisation fails, # with this set to 1 an attempt will be made to use the built-in # clustered locking. # If you are using a customised locking_library you should set this to fallback_to_clustered_locking = 1 # If an attempt to initialise type 2 or type 3 locking failed, perhaps # because cluster components such as clvmd are not running, with this...
  • Page 112 Appendix B. The LVM Configuration Files # Nice value used while devices suspended process_priority = -18 # If volume_list is defined, each LV is only activated if there is a # match against the list. "vgname" and "vgname/lvname" are matched exactly. "@tag"...
  • Page 113 Sample lvm.conf File similarly to: # "allocate_anywhere" - Operates like "allocate", but it does not require that the new space being allocated be on a device is not part of the mirror. For a log device failure, this could mean that the log is allocated on the same device as a mirror device.
  • Page 114 Appendix B. The LVM Configuration Files # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ] # Event daemon dmeventd { # mirror_library is the library used when monitoring a mirror device. # "libdevmapper-event-lvm2mirror.so" attempts to recover from # failures. It removes failed devices from a volume group and # reconfigures a mirror as necessary.
  • Page 115: Lvm Object Tags

    Appendix C. LVM Object Tags An LVM tag is a word that can be used to group LVM2 objects of the same type together. Tags can be attached to objects such as physical volumes, volume groups, and logical volumes. Tags can be attached to hosts in a cluster configuration.
  • Page 116: Controlling Activation With Tags

    Appendix C. LVM Object Tags C.3. Controlling Activation with Tags You can specify in the configuration file that only certain logical volumes should be activated on that host. For example, the following entry acts as a filter for activation requests (such as vgchange -ay) and only activates vg1/lvol0 and any logical volumes or volume groups with the database tag in the metadata on that host.
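The matching rules above ("vgname" and "vgname/lvname" matched exactly, "@tag" matched against the object's tags) can be sketched as a small shell check; the list entry, names, and tags here are illustrative:

```shell
volume_list="vg1/lvol0 @database"   # as in the example activation entry above
match() {  # usage: match VGNAME/LVNAME "TAGS..."; prints yes or no
  lv=$1; tags=$2
  for entry in $volume_list; do
    case "$entry" in
      "$lv" | "${lv%%/*}") echo yes; return ;;   # exact lv or vg name match
      @*) for t in $tags; do
            [ "@$t" = "$entry" ] && { echo yes; return; }
          done ;;
    esac
  done
  echo no
}
match vg1/lvol0 ""          # -> yes (exact match)
match vg2/data  "database"  # -> yes (tag match)
match vg3/home  ""          # -> no
```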
  • Page 117: Lvm Volume Group Metadata

    Appendix D. LVM Volume Group Metadata The configuration details of a volume group are referred to as the metadata. By default, an identical copy of the metadata is maintained in every metadata area in every physical volume within the volume group.
  • Page 118: Sample Metadata

    Appendix D. LVM Volume Group Metadata

    • Name and unique id
    • A version number which is incremented whenever the metadata gets updated
    • Any properties: Read/Write? Resizeable?
    • Any administrative limit on the number of physical volumes and logical volumes it may contain
    • ...
  • Page 119 Sample Metadata device = "/dev/sda" # Hint only status = ["ALLOCATABLE"] dev_size = 35964301 # 17.1491 Gigabytes pe_start = 384 pe_count = 4390 # 17.1484 Gigabytes pv1 { id = "ZHEZJW-MR64-D3QM-Rv7V-Hxsa-zU24-wztY19" device = "/dev/sdb" # Hint only status = ["ALLOCATABLE"] dev_size = 35964301 # 17.1491 Gigabytes pe_start = 384...
  • Page 120 Appendix D. LVM Volume Group Metadata stripes = [ "pv0", 0 segment2 { start_extent = 1280 extent_count = 1280 # 5 Gigabytes type = "striped" stripe_count = 1 # linear stripes = [ "pv1", 0...
  • Page 121: Revision History

    Appendix E. Revision History

    Revision 5.4-1    Tue Aug 18 2009    Steven Levine    slevine@redhat.com

    Replaces Cluster Logical Volume Manager, resolving bugs against that document.

    Resolves: #510273
    Clarifies lack of snapshot support in clustered volume groups.

    Resolves: #515742
    Documents necessity for increasing mirror region size from default value for mirrors that are larger than 1.5TB.
  • Page 123: Index

    Index minor, 36 persistent, 36 device path names, 19 device scan filters, 42 device size, maximum, 25 activating logical volumes device special file directory, 24 individual nodes, 43 display activating volume groups, 28 sorting output, 52 individual nodes, 28 displaying local node only, 28 logical volumes, 37, 50 administrative procedures, 15...
  • Page 124 Index displaying, 37, 44, 50 converting to linear, 35 exclusive access, 43 creation, 33 extending, 38 definition, 12 growing, 38 failure recovery, 68 linear, 31 reconfiguration, 35 local access, 43 lvs display arguments, 50 mirrored, 33 online data relocation, 43 reducing, 40 removing, 37 renaming, 36...
  • Page 125 logical volume, 36 displaying, 25, 44, 49 physical volume, 23 extending, 25 growing, 25 merging, 29 moving between systems, 30 scanning reducing, 27 block devices, 21 removing, 28 scanning devices, filters, 42 renaming, 29 snapshot logical volume shrinking, 27 creation, 41 splitting, 28 snapshot volume example procedure, 58...