Novell LINUX ENTERPRISE SERVER 10 SP3 - STORAGE ADMINISTRATION GUIDE 2-23-2010 Administration Manual




AUTHORIZED DOCUMENTATION
Storage Administration Guide
Novell® SUSE® Linux Enterprise Server 10 SP3
February 23, 2010
www.novell.com
SLES 10 SP3: Storage Administration Guide



Summary of Contents for Novell LINUX ENTERPRISE SERVER 10 SP3 - STORAGE ADMINISTRATION GUIDE 2-23-2010

  • Page 1 AUTHORIZED DOCUMENTATION Storage Administration Guide Novell ® SUSE Linux Enterprise Server ® 10 SP3 February 23, 2010 www.novell.com SLES 10 SP3: Storage Administration Guide...
  • Page 2: Legal Notices

    Further, Novell, Inc. reserves the right to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of such changes.
  • Page 3 Novell Trademarks For Novell trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/tmlist.html). Third-Party Materials All third-party trademarks and copyrights are the property of their respective owners. Some content in this document is copied, distributed, and/or modified from the following document under the terms specified in the document’s license.
  • Page 5: Table Of Contents

    Contents: About This Guide; 1 Overview of EVMS; Benefits of EVMS (page 11); Plug-In Layers ...
  • Page 6 Creating Disk Segments (or Partitions) (page 38); Configuring Mount Options for Devices ...
  • Page 7 6.1.6 Interoperability Issues (page 80); 6.1.7 Guidelines for Component Devices ...
  • Page 8 9 Installing and Managing DRBD Services; Understanding DRBD (page 119); Installing DRBD Services ...
  • Page 9: About This Guide

    Documentation Updates For the most recent version of the SUSE Linux Enterprise Server 10 Storage Administration Guide for EVMS, visit the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10). Additional Documentation For information about managing storage with the Linux Volume Manager (LVM), see the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/...
  • Page 10: Documentation Conventions

    Documentation Conventions In Novell documentation, a greater-than symbol (>) is used to separate actions within a step and items in a cross-reference path. A trademark symbol (®, TM, etc.) denotes a Novell trademark. An asterisk (*) denotes a third-party trademark.
  • Page 11: Overview Of Evms

    Overview of EVMS The Enterprise Volume Management System (EVMS) 2.5.5 for Linux* is an extensible storage management tool that integrates all aspects of volume management, such as disk partitioning, the Logical Volume Manager (LVM), the Multiple-Disk (MD) manager for software RAIDs, the Device Mapper (DM) for multipath I/O configuration, and file system operations.
  • Page 12: Supported File Systems

    The Novell® Storage Services (NSS) file system is also supported when used with the Novell Open Enterprise Server 2 for SUSE Linux Enterprise Server 10 SP1 (or later versions of OES 2 and SLES 10). File System Primer (http://wiki.novell.com/index.php/File_System_Primer)
  • Page 13 Table 1-2, EVMS Terms: Sector — The lowest level that can be addressed on a block device. Disk — A physical disk or a logical device. Segment — An ordered set of physically contiguous sectors on a single device. It is similar to traditional disk partitions.
  • Page 14: Location Of Device Nodes For Evms Storage Objects

    1.5 Location of Device Nodes for EVMS Storage Objects EVMS creates a unified namespace for the logical volumes on your system in the /dev/evms directory. It detects the storage objects actually present on a system, and creates an appropriate device node for each one, such as those shown in the following table (Table 1-3, Device Node Locations): Storage Object...
  • Page 15: Using Evms To Manage Devices

    Using EVMS to Manage Devices This section describes how to configure EVMS as the volume manager of your devices. Section 2.1, “Configuring the System Device at Install to Use EVMS,” on page 15 Section 2.2, “Configuring an Existing System Device to Use EVMS,” on page 22 Section 2.3, “Configuring LVM Devices to Use EVMS,”...
  • Page 16 Linux Enterprise Server 10, see “Large File System Support” in the SUSE Linux Enterprise Server 10 Installation and Administration Guide. (http:// www.novell.com/documentation/sles10). Data Loss Considerations for the System Device This install requires that you delete the default partitioning settings created by the install, and create new partitions to use EVMS instead.
  • Page 17: During The Server Install

    SUSE Linux Enterprise 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10/sles_admin/data/bookinfo_book_sles_admin.html). 2 When the installation reaches the Installation Settings screen, delete the proposed LVM-based partitioning solution. This deletes the proposed partitions and the partition table on the system device so that the device can be marked to use EVMS as the volume manager instead of LVM.
  • Page 18 4c Select Primary Partition, then click OK. 4d Select Do Not Format, then select Linux LVM (0x8E) from the list of file system IDs. 4e In Size (End Value field), set the cylinder End value to 5 GB or larger, depending on the combined partition size you need to contain your system and swap volumes.
  • Page 19 6 Create the swap volume in the container: lvm/system 6a Select , then click Add. lvm/system 6b In the Create Logical Volume dialog box, select Format, then select Swap from the File System drop-down menu. 6c Specify as the volume name. swap 6d Specify 1 GB (recommended) for the swap volume.
  • Page 20: After The Server Install

    IMPORTANT: After the install is complete, make sure to perform the mandatory post-install configuration of the related system settings to ensure that the system device functions properly under EVMS. Otherwise, the system fails to boot properly. For information, see “After the Server Install” on page 20. 2.1.3 After the Server Install After the SUSE Linux Enterprise Server 10 install is complete, you must perform the following tasks to ensure that the system device functions properly under EVMS:...
  • Page 21 /dev/sda1 /boot reiser defaults 1 1 3 In the Device Name column, modify the location of the partition from /boot /dev /dev/ so it can be managed by EVMS. Modify only the device name by adding to the evms /evms path: /dev/evms/sda1 /boot reiser defaults 1 1 4 Save the file.
  • Page 22: Configuring An Existing System Device To Use Evms

    NOTE: Effective in SUSE Linux Enterprise 10, the /dev directory is on tmpfs, and the device nodes are automatically re-created on boot. It is no longer necessary to modify the /etc/init.d/boot.evms script to delete the device nodes on system restart, as was required for previous versions of SUSE Linux.
  • Page 23: Disable The Boot.lvm And Boot.md Services

    2.2.1 Disable the boot.lvm and boot.md Services You need to disable boot.lvm (handles devices for the Linux Volume Manager) and boot.md (handles multiple devices in software RAIDs) so they do not run at boot time. In the future, you want boot.evms to run at boot time instead. 1 In YaST, click System >...
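    A minimal sketch of the equivalent commands at a terminal console (as root), assuming the standard SLES 10 service names used in this guide:
        chkconfig boot.lvm off    # do not activate LVM volume groups at boot
        chkconfig boot.md off     # do not assemble MD software RAIDs at boot
        chkconfig boot.evms on    # let boot.evms activate the devices instead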
  • Page 24: Edit The Boot Loader File

    IMPORTANT: When working in the /etc/fstab file, do not leave any stray characters or spaces in the file. This is a configuration file, and it is highly sensitive to such mistakes. 1 Open the /etc/fstab file in a text editor. 2 Locate the line that contains the partition.
  • Page 25: Force The Ram Disk To Recognize The Root Partition

    /dev/evms/sda2 (replace sda2 with the actual device on your machine). 3c Edit any device paths in the Other Kernel Parameters field. 3d Click OK to save the changes and return to the Boot Loader page. 4 Modify the failsafe image so that the failsafe root file system is mounted as /dev/evms/ instead of /dev/...
  • Page 26: Restart The Server

    For the latest version of mkinitrd, see Recommended Updates for mkinitrd (http://support.novell.com/techcenter/psdb/24c7dfbc3e0c183970b70c1c0b3a6d7d.html) at the Novell Technical Support Center. 1 At a terminal console prompt, enter the EVMS Ncurses command as the root user or equivalent:...
  • Page 27: Configuring Lvm Devices To Use Evms

    2.3 Configuring LVM Devices to Use EVMS Use the following post-installation procedure to configure data devices (not system devices) to be managed by EVMS. If you need to configure an existing system device for EVMS, see Section 2.2, “Configuring an Existing System Device to Use EVMS,” on page 22. 1 In a terminal console, run the EVMS GUI by entering the following as the root user or equivalent:...
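    The GUI command referenced elsewhere in this guide (see page 29) is evmsgui; a sketch of the invocation at a terminal console, as root:
        evmsgui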
  • Page 28: Using The Elilo Loader Files (Ia-64)

    chkconfig boot.evms on
    This ensures that EVMS and iSCSI start in the proper order each time your servers restart. 2.5 Using the ELILO Loader Files (IA-64) On a SUSE Linux Enterprise Server boot device EFI System Partition, the full paths to the loader and configuration files are:
    /boot/efi/SuSE/elilo.efi
    /boot/efi/SuSE/elilo.conf
  • Page 29 Command: evms — Starts the EVMS command-line interpreter (CLI) interface. For information about evms command options, see “EVMS Command Line Interpreter” (http://evms.sourceforge.net/user_guide/#COMMANDLINE) in the EVMS User Guide at the EVMS project on SourceForge.net. To stop evmsgui from running automatically on restart: 1 Close evmsgui...
  • Page 31: Using Uuids To Mount Devices

    Using UUIDs to Mount Devices This section describes the optional use of UUIDs instead of device names to identify file system devices in the boot loader file and the /etc/fstab file. Section 3.1, “Naming Devices with udev,” on page 31 Section 3.2, “Understanding UUIDs,”...
  • Page 32: Using Uuids To Assemble Or Activate File System Devices

    3.2.1 Using UUIDs to Assemble or Activate File System Devices The UUID is always unique to the partition and does not depend on the order in which it appears or where it is mounted. With certain SAN devices attached to the server, the system partitions are renamed and moved to be the last device.
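    A quick way to see the UUIDs that udev assigns, assuming the /dev/disk/by-uuid symlinks described in this chapter (the device name below is only an example):
        ls -l /dev/disk/by-uuid/
        blkid /dev/sda2    # print the UUID and file system type of one partition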
  • Page 33: Using Uuids In The Boot Loader And /Etc/Fstab File (Ia64)

    kernel /boot/vmlinuz root=/dev/disk/by-uuid/e014e482-1c2d-4d09-84ec-61b3aefde77a
    IMPORTANT: Make a copy of the original boot entry, then modify the copy. If you make a mistake, you can boot the server without the SAN connected, and fix the error. If you use the Boot Loader option in YaST, there is a defect where it adds some duplicate lines to the boot loader file when you change a value.
  • Page 34: Additional Information

    3.5 Additional Information For more information about using udev(8) for managing devices, see “Dynamic Kernel Device Management with udev” (http://www.novell.com/documentation/sles10/sles_admin/data/cha_udev.html) in the SUSE® Linux Enterprise Server 10 Installation and Administration Guide. For more information about udev(8) commands, see the man page.
  • Page 35: Managing Evms Devices

    Managing EVMS Devices This section describes how to initialize a disk for EVMS management by adding a segment management container to manage the partitions that you later add to the disk. Section 4.1, “Understanding Disk Segmentation,” on page 35 Section 4.2, “Initializing Disks,” on page 36 Section 4.3, “Removing the Segment Manager from a Device,”...
  • Page 36: Disk Segments

    Segment Manager — Description: A partitioning scheme for Mac-OS partitions. 4.1.2 Disk Segments After you initialize the disk by adding a segment manager, you see metadata and free space segments on the disk. You can then create one or multiple data segments in a disk segment. Table 4-2, Disk Segment Types: Segment Type...
  • Page 37: Guidelines

    4.2.2 Guidelines Consider the following guidelines when initializing a disk: EVMS might allow you to create segments without first adding a segment manager for the disk, but it is best to explicitly add a segment manager to avoid problems later. IMPORTANT: You must add a Cluster segment manager if you plan to use the devices for volumes that you want to share as cluster resources.
  • Page 38: Removing The Segment Manager From A Device

    3b From the list, select one of the following types of segment manager, then click Next. DOS Segment Manager (the most common choice) GPT Segment Manager (for IA-64 platforms) Cluster Segment Manager (available only if it is a viable option for the selected disk) 3c Select the device from the list of Plugin Acceptable Objects, then click Next.
  • Page 39: Configuring Mount Options For Devices

    Primary Partition: Click Yes for a primary partition, or click No for a logical partition. Required settings are denoted in the page by an asterisk (*). All required fields must be completed to make the Create button active. 5 Click Create to create the segment. 6 Verify that the new segment appears in the Segment list.
  • Page 40 Fstab Option Description Data journaling mode For journaling file systems, select the preferred journaling mode: Ordered: Writes data to the file system, then enters the metadata in the journal. This is the default. Journal: Writes data twice; once to the journal, then to the file system. Writeback: Writes data to the file system and writes metadata in the journal, but the writes are performed in any order.
  • Page 41: What's Next

    4.6 What’s Next If multiple paths exist between your host bus adapters (HBAs) and the storage devices, configure multipathing for the devices before creating software RAIDs or file system volumes on the devices. For information, see Chapter 5, “Managing Multipath I/O for Devices,” on page 43. If you want to configure software RAIDs, do it before you create file systems on the devices.
  • Page 43: Managing Multipath I/O For Devices

    Managing Multipath I/O for Devices This section describes how to manage failover and path load balancing for multiple paths between the servers and block storage devices. Section 5.1, “Understanding Multipathing,” on page 43 Section 5.2, “Planning for Multipathing,” on page 44 Section 5.3, “Multipath Management Tools,”...
  • Page 44: Planning For Multipathing

    Typical connection problems involve faulty adapters, cables, or controllers. When you configure multipath I/O for a device, the multipath driver monitors the active connection between devices. When the multipath driver detects I/O errors for an active path, it fails over the traffic to the device’s designated secondary path.
  • Page 45: Using Multipathed Devices Directly Or In Evms

    Disk Management Tasks Perform the following disk management tasks before you attempt to configure multipathing for a physical or logical device that has multiple paths: Use third-party tools to carve physical disks into smaller logical disks. Use third-party tools to partition physical or logical disks. If you change the partitioning in the running system, the Device Mapper Multipath (DM-MP) module does not automatically detect and reflect these changes.
  • Page 46: Using Lvm2 On Multipath Devices

    5.2.3 Using LVM2 on Multipath Devices By default, LVM2 does not recognize multipathed devices. To make LVM2 recognize the multipathed devices as possible physical volumes, you must modify /etc/lvm/lvm.conf. It is important to modify it in a way that it does not scan and use the physical paths, but only accesses the multipath I/O storage through the multipath I/O layer.
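    A hedged sketch of the kind of filter change this involves; the exact regular expressions depend on your device naming, and /dev/disk/by-id is an assumption based on the multipath device names used later in this guide:
        # /etc/lvm/lvm.conf (excerpt)
        filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]    # accept multipath by-id names, reject the raw physical paths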
  • Page 47: Supported Architectures For Multipath I/O

    SUSE Linux Enterprise Server 9 In SUSE Linux Enterprise Server 9, it is not possible to partition multipath I/O devices themselves. If the underlying physical device is already partitioned, the multipath I/O device reflects those partitions and the layer provides /dev/disk/by-id/<name>p1 ... devices so you can access...
  • Page 48 NetApp* SGI* TP9100 SGI TP9300 SGI TP9400 SGI TP9500 STK OPENstorage DS280 Sun* StorEdge 3510 Sun T4 In general, most other storage arrays should work. When storage arrays are automatically detected, the default settings for multipathing apply. If you want non-default settings, you must manually create and configure the file.
  • Page 49: Multipath Management Tools

    DM-MP is the preferred solution for multipathing on SUSE Linux Enterprise Server 10. It is the only multipathing option shipped with the product that is completely supported by Novell® and SUSE. DM-MP features automatic configuration of the multipathing subsystem for a large variety of setups.
  • Page 50 The user-space component of DM-MP takes care of automatic path discovery and grouping, as well as automated path retesting, so that a previously failed path is automatically reinstated when it becomes healthy again. This minimizes the need for administrator attention in a production environment.
  • Page 51: Multipath I/O Management Tools

    For a list of files included in this package, see the multipath-tools Package Description (http://www.novell.com/products/linuxpackages/suselinux/multipath-tools.html). 1 Ensure that the multipath-tools package is installed by entering the following at a terminal...
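    One way to verify the package, assuming the RPM package name multipath-tools:
        rpm -q multipath-tools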
  • Page 52: Using Mdadm For Multipathed Devices

    5.3.3 Using mdadm for Multipathed Devices In SUSE Linux Enterprise Server 10, Udev is the default device handler, and devices are automatically known to the system by the Worldwide ID instead of by the device node name. This resolves problems in previous releases where mdadm.conf and lvm.conf did not properly...
  • Page 53: Configuring The System For Multipathing

    Display potential multipath devices, but do not create any devices and do not update device maps (dry run): multipath -d
    Configure multipath devices and display multipath map information: multipath -v2 <device> or multipath -v3
    The -v2 option in multipath -v2 -d shows only local disks. Use the -v3 option to show the full path list.
  • Page 54: Preparing San Devices For Multipathing

    SCSI device scanning behavior, such as to indicate that LUNs are not numbered consecutively. For information, see Options for SCSI Device Scanning (http://support.novell.com/techcenter/sdb/en/2005/06/drahn_scsi_scanning.html) in the Novell Support Knowledgebase. 5.4.2 Partitioning Multipathed Devices Partitioning devices that have multiple paths is not recommended, but it is supported.
  • Page 55: Configuring The Server For Multipathing

    If you configure partitions for a device, DM-MP automatically recognizes the partitions and indicates them by appending p1 to pn to the device’s ID, such as /dev/disk/by-id/26353900f02796769p1. To partition multipathed devices, you must disable the DM-MP service, partition the normal device node, then reboot to allow the DM-MP service to see the new partitions.
  • Page 56: Creating And Configuring The /Etc/Multipath.conf File

    5.4.5 Creating and Configuring the /etc/multipath.conf File The /etc/multipath.conf file does not exist unless you create it. The /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic file contains a sample /etc/multipath.conf file that you can use as a guide for multipath settings. See /usr/share/doc/packages/multipath-tools/multipath.conf.annotated for a template with extensive comments for each of the attributes and their options.
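    For example, you might start from the synthetic sample shipped with the package (paths as given above):
        cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf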
  • Page 57 \_ round-robin 0 \_ 1:0:0:2 sdag 66:0 [ready ] \_ 0:0:0:2 sdc 8:32 [ready ] Paths are grouped into priority groups. Only one priority group is ever in active use. To model an active/active configuration, all paths end up in the same group. To model active/passive configuration, the paths that should not be active in parallel are placed in several distinct priority groups.
  • Page 58 multipath {
        wwid 26353900f02796769
        alias sdd4l0
    }
    6 Save your changes, then close the file. Blacklisting Non-Multipathed Devices in /etc/multipath.conf The /etc/multipath.conf file should contain a blacklist section where all non-multipathed devices should be listed. For example, local IDE hard drives and floppy drives are not normally multipathed.
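    A minimal sketch of such a blacklist section, assuming local IDE (hd*) and floppy (fd*) device nodes are the ones to exclude:
        blacklist {
            devnode "^hd[a-z]"
            devnode "^fd[a-z]"
        }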
  • Page 59: Enabling And Starting Multipath I/O Services

    defaults {
        multipath_tool "/sbin/multipath -v0"
        udev_dir /dev
        polling_interval 10
        default_selector "round-robin 0"
        default_path_grouping_policy failover
        default_getuid "/sbin/scsi_id -g -u -s /block/%n"
        default_prio_callout "/bin/true"
        default_features "0"
        rr_min_io
        failback immediate
    }
    NOTE: In the default_getuid command line, use the /sbin/scsi_id path as shown in the above example instead of the sample path of /lib/udev/scsi_id that is found in the sample file...
  • Page 60: Configuring Path Failover Policies And Priorities

    5.6 Configuring Path Failover Policies and Priorities In a Linux host, when there are multiple paths to a storage controller, each path appears as a separate block device, which results in multiple block devices for a single LUN. The Device Mapper Multipath service detects multiple paths with the same LUN ID, and creates a new multipath device with that ID.
  • Page 61 “Configuring for Single Path Failover” on page 64 “Grouping I/O Paths for Round-Robin Load Balancing” on page 64 Understanding Priority Groups and Attributes A priority group is a collection of paths that go to the same physical LUN. By default, I/O is distributed in a round-robin fashion across all paths in the group.
  • Page 62 Multipath Attributes (Attribute — Description — Values): path_grouping_policy — Specifies the path grouping policy for a multipath device hosted by a given controller. failover: One path is assigned per priority group so that only one path at a time is used. multibus (default): All valid paths are in one priority group.
  • Page 63 prio_callout — Specifies the program and arguments to use to determine the layout of the multipath map. If no prio_callout attribute is used, all paths are equal; this is the default. /bin/true: Use this value when group_by_priority is not being used.
  • Page 64 rr_min_io — Specifies the number of I/O transactions to route to a path before switching to the next path in the same path group, as determined by the specified algorithm in the setting. n (>0): Specify an integer value greater than 0. 1000: Default.
  • Page 65: Using A Script To Set Path Priorities

    5.6.3 Using a Script to Set Path Priorities You can create a script that interacts with DM-MP to provide priorities for paths to the LUN when set as a resource for the prio_callout setting. First, set up a text file that lists information about each device and the priority values you want to assign to each path.
  • Page 66 “Options” on page 66 “Return Values” on page 66 Syntax mpath_prio_alua [-d directory] [-h] [-v] [-V] device [device...] Prerequisite SCSI devices Options -d directory Specifying the Linux directory path where the listed device node names can be found. The default directory is .
  • Page 67: Reporting Target Path Groups

    Priority Value — Description: The device is in the standby group. All other groups. Values are widely spaced because of the way the multipath command handles them. It multiplies the number of paths in a group with the priority value for the group, then selects the group with the highest result.
  • Page 68: Configuring Multipath I/O For An Existing Software Raid

    To enable multipathing on the existing root device: 1 Install Linux with only a single path active, preferably one where the by-id symlinks are listed in the partitioner. 2 Mount the devices by using the /dev/disk/by-id path used during the install. 3 After installation, add dm-multipath...
  • Page 69: Scanning For New Devices Without Rebooting

    3 Stop the boot.md service by entering
    /etc/init.d/boot.md stop
    4 Start the boot.multipath and multipathd services by entering the following commands:
    /etc/init.d/boot.multipath start
    /etc/init.d/multipathd start
    5 After the multipathing services are started, verify that the software RAID’s component devices are listed in the directory.
  • Page 70 Syntax rescan-scsi-bus.sh [options] [host [host ...]] You can specify hosts on the command line (deprecated), or use the option --hosts=LIST (recommended). Options For most storage subsystems, the script can be run successfully without options. However, some special cases might need to use one or more of the following parameters for the rescan-scsi- script: bus.sh...
  • Page 71: Scanning For New Partitioned Devices Without Rebooting

    Procedure Use the following procedure to scan the devices and make them available to multipathing without rebooting the system. 1 On the storage subsystem, use the vendor’s tools to allocate the device and update its access control settings to allow the Linux system access to the new storage. Refer to the vendor’s documentation for details.
  • Page 72: Viewing Multipath I/O Status

    5 Use a text editor to add a new alias definition for the device in the /etc/multipath.conf file, such as oradata3. 6 Create a partition table for the device by entering
    fdisk /dev/dm-8
    7 Trigger udev by entering
    echo 'add' > /sys/block/dm-8/uevent
    This generates the device-mapper devices for the partitions on dm-8. 8 Create a file system and label for the new partition by entering...
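    A sketch of such an alias entry, following the multipath { } pattern shown on page 58; the WWID is a placeholder, and in a complete file these entries sit inside a multipaths { } section:
        multipaths {
            multipath {
                wwid  <wwid-of-the-new-device>
                alias oradata3
            }
        }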
  • Page 73: Managing I/O In Error Situations

    For each device, it shows the device’s ID, size, features, and hardware handlers. Paths to the device are automatically grouped into priority groups on device discovery. Only one priority group is active at a time. For an active/active configuration, all paths are in the same group. For an active/passive configuration, the passive paths are placed in separate priority groups.
  • Page 74: Resolving Stalled I/O

    0 queue_if_no_path
    5.15 Additional Information For more information about configuring and using multipath I/O on SUSE Linux Enterprise Server, see the following additional resources in the Novell Support Knowledgebase: How to Setup/Use Multipathing on SLES (http://support.novell.com/techcenter/sdb/en/2005/04/sles_multipathing.html) Troubleshooting SLES Multipathing (MPIO) Problems (Technical Information Document 3231766) (http://www.novell.com/support/...
  • Page 75: What's Next

    Static Load Balancing in Device-Mapper Multipathing (DM-MP) (Technical Information Document 3858277) (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=3858277&sliceId=SAL_Public&dialogID=57872426&stateId=0%200%2057878058) Troubleshooting SCSI (LUN) Scanning Issues (Technical Information Document 3955167) (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=3955167&sliceId=SAL_Public&dialogID=57868704&stateId=0%200%2057878206) 5.16 What’s Next If you want to use software RAIDs, create and configure them before you create file systems on the devices.
  • Page 77: Managing Software Raids With Evms

    Managing Software RAIDs with EVMS This section describes how to create and manage software RAIDs with the Enterprise Volume Management System (EVMS). EVMS supports only RAIDs 0, 1, 4, and 5 at this time. For RAID 6 and 10 solutions, see Chapter 7, “Managing Software RAIDs 6 and 10 with mdadm,”...
  • Page 78: Overview Of Raid Levels

    Feature — Linux Software RAID vs. Hardware RAID: RAID processing — in the host server’s processor vs. on the RAID controller of the disk array. RAID levels — 0, 1, 4, 5, and 10, plus the mdadm raid10, vs. varies by vendor. Component devices — disks from the same or different disk arrays vs. the same disk array. 6.1.2 Overview of RAID Levels...
  • Page 79: Comparison Of Raid Performance

    RAID Level — Description: Stripes data and distributes parity in a round-robin fashion across all disks. If disks are different sizes, the ... Performance and Fault Tolerance: Improves disk I/O performance for reads and writes. Write performance is considerably less than for RAID 0, because parity must be calculated and written. Write performance is faster than RAID 4.
  • Page 80: Configuration Options For Raids

    Raid Level Number of Disk Failures Tolerated Data Redundancy Distributed parity to reconstruct data and parity on the failed disk. 6.1.5 Configuration Options for RAIDs In EVMS management tools, the following RAID configuration options are provided: Configuration Options in EVMS Table 6-5 Option Description...
  • Page 81: Guidelines For Component Devices

    Support” in the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10). In general, each storage object included in the RAID should be from a different physical disk to maximize I/O performance and to achieve disk fault tolerance where supported by the RAID level you use.
  • Page 82: Raid 5 Algorithms For Distributing Stripes And Parity

    6.1.8 RAID 5 Algorithms for Distributing Stripes and Parity RAID 5 uses an algorithm to determine the layout of stripes and parity. The following table describes the algorithms. RAID 5 Algorithms Table 6-7 Algorithm EVMS Type Description Left Asymmetric Stripes are written in a round-robin fashion from the first to last member segment.
  • Page 83: Multi-Disk Plug-In For Evms

    For information about the layout of stripes and parity with each of these algorithms, see Linux RAID-5 Algorithms (http://www.accs.com/p_and_p/RAID/LinuxRAID.html). 6.1.9 Multi-Disk Plug-In for EVMS The Multi-Disk (MD) plug-in supports creating software RAIDs 0 (striping), 1 (mirror), 4 (striping with dedicated parity), and 5 (striping with distributed parity). The MD plug-in to EVMS allows you to manage all of these MD features as “regions”...
  • Page 84 4 If segments have not been created on the disks, create a segment on each disk that you plan to use in the RAID. For x86 platforms, this step is optional if you treat the entire disk as one segment. For IA-64 platforms, this step is necessary to make the RAID 4/5 option available in the Regions Manager.
  • Page 85 5d Specify values for Configuration Options by changing the following default settings as desired. For RAIDs 1, 4, or 5, optionally specify a device to use as the spare disk for the RAID. The default is none. For RAIDs 0, 4, or 5, specify the chunk (stripe) size in KB. The default is 32 KB. For RAIDs 4/5, specify RAID 4 or RAID 5 (default).
  • Page 86 5e Click Create to create the RAID device under the directory. /dev/evms/md The device is given a name such as , so its EVMS mount location is /dev/evms/md/ 6 Specify a human-readable label for the device. 6a Select Action > Create > EVMS Volume or Compatible Volume. 6b Select the device that you created in Step 6c Specify a name for the device.
  • Page 87: Expanding A Raid

    8b Select the RAID device you created in Step 5, such as /dev/evms/md/md0. 8c Specify the location where you want to mount the device, such as /home. 8d Click Mount. 9 Enable boot.evms to activate EVMS automatically at reboot. 9a In YaST, select System > System Services (Run Level). 9b Select Expert Mode.
  • Page 88: Adding Segments To A Raid 4 Or 5

    6.3.2 Adding Segments to a RAID 4 or 5 If the RAID region is clean and operating normally, the kernel driver adds the new object as a regular spare, and it acts as a hot standby for future failures. If the RAID region is currently degraded, the kernel driver immediately activates the new spare object and begins synchronizing the data and parity information.
  • Page 89: Adding A Spare Disk When You Create The Raid

    6.4.2 Adding a Spare Disk When You Create the RAID When you create a RAID 1, 4, or 5 in EVMS, specify the Spare Disk in the Configuration Options dialog box. You can browse to select the available device, segment, or region that you want to be the RAID’s spare disk.
  • Page 90: Identifying The Failed Drive

    synchronization of the replacement disk, write and read performance are both degraded. A RAID 5 can survive a single disk failure at a time. A RAID 4 can survive a single disk failure at a time if the disk is not the parity disk. Disks can fail for many reasons such as the following: Disk crash Disk pulled from the system...
  • Page 91: Replacing A Failed Device With A Spare

    Persistence : Superblock is persistent Update Time : Tue Aug 15 18:31:09 2006 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 UUID : 8a9f3d46:3ec09d23:86e1ffbc:ee2d0dd8 Events : 0.174164 Number Major Minor RaidDevice State...
  • Page 92: Removing The Failed Disk

    To assign a spare device to the RAID: 1 Prepare the disk as needed to match the other members of the RAID. 2 In EVMS, select Actions > Add > Spare Disk to a Region (the addspare plug-in for the EVMS GUI).
  • Page 93: Monitoring Status With /Proc/Mdstat

    6.6.2 Monitoring Status with /proc/mdstat A summary of RAID and status information (active/not active) is available in the /proc/mdstat file. 1 Open a terminal console, then log in as the root user or equivalent. 2 View the /proc/mdstat file by entering the following at the console prompt: cat /proc/mdstat 3 Evaluate the information.
  • Page 94 Replace mdx with the RAID device number. Example 1: A Disk Fails In the following example, only four of the five devices in the RAID are active ( Raid Devices : 5 ). When it was created, the component devices in the device were numbered 0 Total Devices : 4 to 5 and are ordered according to their alphabetic appearance in the list where they were chosen, such as...
  • Page 95: Monitoring A Remirror Or Reconstruction

    mdadm -D /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Apr 16 11:37:05 2006 Raid Level : raid5 Array Size : 35535360 (33.89 GiB 36.39 GB) Device Size : 8883840 (8.47 GiB 9.10 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Apr 17 05:50:44 2006...
  • Page 96 The following table identifies RAID events and indicates which events trigger e-mail alerts. All events cause the program to run. The program is run with two or three arguments: the event name, the array device (such as ), and possibly a second device. For Fail, Fail Spare, and Spare /dev/md1 Active, the second device is the relevant component device.
  • Page 97: Deleting A Software Raid And Its Data

    To configure an e-mail alert: 1 At a terminal console, log in as the root user. 2 Edit the /etc/mdadm/mdadm.conf file to add your e-mail address for receiving alerts. For example, specify the MAILADDR value (using your own e-mail address, of course):
    DEVICE partitions
    ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1c661ae4:818165c3:3f7a4661:af475fda...
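    A sketch of where the MAILADDR line goes in that file; the address is a placeholder:
        DEVICE partitions
        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1c661ae4:818165c3:3f7a4661:af475fda
        MAILADDR yourname@example.com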
  • Page 98 umount <raid-device> 4 Stop the RAID device and its component devices by entering mdadm --stop <raid-device> mdadm --stop <member-devices> For more information about using , please see the man page. mdadm mdadm(8) 5 Delete all data on the disk by literally overwriting the entire device with zeroes. Enter mdadm --misc --zero-superblock <member-devices>...
  • Page 99: Managing Software Raids 6 And 10 With Mdadm

    Managing Software RAIDs 6 and 10 with mdadm This section describes how to create software RAID 6 and 10 devices, using the Multiple Devices Administration (mdadm(8)) tool. You can also use mdadm to create RAIDs 0, 1, 4, and 5. The mdadm...
  • Page 100: Creating A Raid 6

    7.1.2 Creating a RAID 6 The procedure in this section creates a RAID 6 device, /dev/md0, with four devices: /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1. Make sure to modify the procedure to use your actual device nodes. 1 Open a terminal console, then log in as the root user or equivalent.
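    The kind of single mdadm command such a procedure builds up to, using the device names above (a sketch, not the guide's literal step; chunk size and other options are left at their defaults):
        mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1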
  • Page 101: Creating Nested Raid 10 (1+0) With Mdadm

    The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus 0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O capability. RAID Levels Supported in EVMS Table 7-2 RAID Level Description Performance and Fault Tolerance 10 (1+0)
  • Page 102: Creating Nested Raid 10 (0+1) With Mdadm

    Table 7-3, Scenario for Creating a RAID 10 (1+0) by Nesting: Raw devices /dev/sdb1 and /dev/sdc1 form the RAID 1 (mirror) /dev/md0; raw devices /dev/sdd1 and /dev/sde1 form the RAID 1 (mirror) /dev/md1; /dev/md0 and /dev/md1 together form the RAID 1+0 (striped mirrors) /dev/md2. 1 Open a terminal console, then log in as the root user or equivalent. 2 Create 2 software RAID 1 devices, using two different devices for each RAID 1 device.
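    A sketch of the three mdadm commands implied by Table 7-3 (the 64 KB chunk size for the stripe is an assumption):
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
        mdadm --create /dev/md2 --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1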
  • Page 103: Creating A Complex Raid 10 With Mdadm

    The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices. Scenario for Creating a RAID 10 (0+1) by Nesting Table 7-4 Raw Devices RAID 0 (stripe) RAID 0+1 (mirrored stripes) /dev/sdb1...
  • Page 104 “Number of Replicas in the mdadm RAID10” on page 104 “Number of Devices in the mdadm RAID10” on page 104 “Near Layout” on page 105 “Far Layout” on page 105 Comparison of RAID10 Option and Nested RAID 10 (1+0) The complex RAID 10 is similar in purpose to a nested RAID 10 (1+0), but differs in the following ways: Complex vs.
  • Page 105 Near Layout With the near layout, copies of a block of data are striped near each other on different component devices. That is, multiple copies of one data block are at similar offsets in different devices. Near is the default layout for RAID10. For example, if you use an odd number of component devices and two copies of data, some copies are perhaps one chunk further into the device.
  • Page 106: Creating A Raid10 With Mdadm

    7.3.2 Creating a RAID10 with mdadm The RAID10 level option for mdadm creates a RAID 10 device without nesting. For information about the RAID10 level, see Section 7.3, “Creating a Complex RAID 10 with mdadm,” on page 103. The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.
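    A sketch of the non-nested form; the device names are hypothetical because the scenario table is truncated here, and --layout=n2 (near layout, two replicas) matches the defaults described on pages 104-105:
        mdadm --create /dev/md3 --level=10 --layout=n2 --raid-devices=4 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1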
  • Page 107 Allowable number of missing slots when creating a degraded array: RAID 1 — all but one device; RAID 4 — one slot; RAID 5 — one slot; RAID 6 — one or two slots. To create a degraded array in which some devices are missing, simply give the word missing in place of a device name.
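    For example, a degraded RAID 1 with one of its two slots left empty (a sketch):
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing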
  • Page 109: Resizing Software Raid Arrays With Mdadm

    Resizing Software RAID Arrays with mdadm This section describes how to increase or reduce the size of a software RAID 1, 4, 5, or 6 device with the Multiple Device Administration ( ) tool. mdadm(8) WARNING: Before starting any of the tasks described in this chapter, make sure that you have a valid backup of all of the data.
  • Page 110: Overview Of Tasks

    8.1.2 Overview of Tasks Resizing the RAID involves the following tasks. The order in which these tasks are performed depends on whether you are increasing or decreasing its size. Table 8-2, Tasks Involved in Resizing a RAID (columns: Tasks; Description; Order If Increasing; Order If Decreasing)...
  • Page 111: Increasing The Size Of The Raid Array

    Table 8-3, Scenario for Increasing the Size of Component Partitions: RAID device /dev/md0 with component partitions /dev/sda1, /dev/sdb1, and /dev/sdc1. To increase the size of the component partitions for the RAID: 1 Open a terminal console, then log in as the root user or equivalent. 2 Make sure that the RAID array is consistent and synchronized by entering cat /proc/mdstat
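    The per-partition cycle that this procedure continues with is typically the following mdadm sequence (a sketch using the devices in Table 8-3; repeat for each component partition and wait for resynchronization in between):
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # enlarge /dev/sda1 with your partitioner, keeping the same starting sector, then
        mdadm /dev/md0 --add /dev/sda1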
  • Page 112: Increasing The Size Of The File System

    The procedure in this section uses the device name /dev/md0 for the RAID device. Make sure to modify the name to use the name of your own device. 1 Open a terminal console, then log in as the root user or equivalent. 2 Check the size of the array and the device size known to the array by entering
    mdadm -D /dev/md0 | grep -e "Array Size"...
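    The grow step that typically follows is (a sketch; --size=max expands the array to the largest size the component devices allow):
        mdadm --grow /dev/md0 --size=max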
  • Page 113 To extend the file system to a specific size, enter resize2fs /dev/md0 size The size parameter specifies the requested new size of the file system. If no units are specified, the unit of the size parameter is the block size of the file system. Optionally, the size parameter may be suffixed by one of the following the unit designators: s for 512 byte sectors;...
  • Page 114: Decreasing The Size Of A Software Raid

    ReiserFS As with Ext2 and Ext3, a ReiserFS file system can be increased in size while mounted or unmounted. The resize is done on the block device of your RAID array. 1 Open a terminal console, then log in as the root user or equivalent.
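    A sketch of the resize command; with no size argument, resize_reiserfs grows the file system to fill the underlying device:
        resize_reiserfs /dev/md0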
  • Page 115 ® In SUSE Linux Enterprise Server SP1, only Ext2, Ext3, and ReiserFS provide utilities for shrinking the size of the file system. Use the appropriate procedure below for decreasing the size of your file system. The procedures in this section use the device name for the RAID device.
  • Page 116: Decreasing The Size Of Component Partitions

    mount -t reiserfs /dev/md0 /mnt/point
    5 Check the effect of the resize on the mounted file system by entering
    df -h
    The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system.
  • Page 117: Decreasing The Size Of The Raid Array

    Replace the disk on which the partition resides with a different device. This option is possible only if no other file systems on the original disk are accessed by the system. When the replacement device is added back into the RAID, it takes much longer to synchronize the data.
  • Page 119: Installing And Managing Drbd Services

    Installing and Managing DRBD Services This section describes how to install, configure, and manage a device-level software RAID 1 across a network using DRBD* (Distributed Replicated Block Device) for Linux. Section 9.1, “Understanding DRBD,” on page 119 Section 9.2, “Installing DRBD Services,” on page 119 Section 9.3, “Configuring the DRBD Service,”...
  • Page 120: Configuring The Drbd Service

    1b Choose Software > Software Management. 1c Change the filter to Patterns. 1d Under Base Technologies, select High Availability. 1e Click Accept. 2 Install the drbd kernel modules on both servers. 2a Log in as the root user or equivalent, then open YaST. 2b Choose Software >...
  • Page 121: Testing The Drbd Service

    7 After the block devices on both nodes are fully synchronized, format the DRBD device on the primary with a file system such as reiserfs. Any Linux file system can be used. For example, enter
    mkfs.reiserfs -f /dev/drbd0
    IMPORTANT: Always use the /dev/drbd<n> name in the command, not the actual...
  • Page 122: Troubleshooting Drbd

    DRBD as a high availability service with HeartBeat 2. For information about installing and configuring HeartBeat 2 for SUSE® Linux Enterprise Server 10, see the HeartBeat 2 Installation and Setup Guide (http://www.novell.com/documentation/sles10/hb2/data/hb2_config.html) on the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10).
  • Page 123: Tcp Port 7788

    9.6.2 HeartBeat2 For information about installing and configuring HeartBeat 2 for SUSE® Linux Enterprise Server 10, see the HeartBeat 2 Installation and Setup Guide (http://www.novell.com/documentation/sles10/hb2/data/hb2_config.html) on the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10).
  • Page 125: Troubleshooting Storage Issues

    Troubleshooting Storage Issues This section describes how to work around known issues for EVMS devices, software RAIDs, multipath I/O, and volumes. Section 10.1, “Is DM-MP Available for the Boot Partition?,” on page 125 Section 10.2, “Rescue System Cannot Find Devices That Are Managed by EVMS,” on page 125 Section 10.3, “Volumes on EVMS Devices Do Not Appear After Reboot,”...
  • Page 126: Volumes On Evms Devices Do Not Appear When Using Iscsi

    10.4 Volumes on EVMS Devices Do Not Appear When Using iSCSI If you have installed and configured an iSCSI SAN, and have created and configured EVMS disks or volumes on that iSCSI SAN, your EVMS volumes might not be visible or accessible after reboot. This problem is caused by EVMS starting before the iSCSI service.
  • Page 127: A Documentation Updates

    Documentation Updates This section contains information about documentation content changes made to the SUSE Linux ® Enterprise Server Storage Administration Guide since the initial release of SUSE Linux Enterprise Server 10. If you are an existing user, review the change entries to readily identify modified content. If you are a new user, simply read the guide in its current state.
  • Page 128: Managing Multipath I/O

    A.2.1 Managing Multipath I/O Location: “Configuring Default Multipath Behavior in /etc/multipath.conf” on page 58. Change: In the default_getuid command line, use the /sbin/scsi_id path as shown in the above example instead of the sample path of /lib/udev/scsi_id that is found in the sample file /usr/share/doc/packages/multipath-...
  • Page 129: Installing And Managing Drbd Services

    Location: page 12. Change: The Novell® Storage Services (NSS) file system is also supported when used with the Novell Open Enterprise Server 2 for SUSE Linux Enterprise Server 10 SP1 (or later versions of OES 2 and SLES 10).
  • Page 130: May 15, 2009

    A.5 May 15, 2009 Updates were made to the following section. The changes are explained below. Section A.5.1, “Managing Multipath I/O,” on page 130 A.5.1 Managing Multipath I/O Location: Section 5.8, “Configuring Multipath I/O for the Root Device.” Change: DM-MP is available but is not supported for /boot...
  • Page 131: June 10, 2008

    A.7 June 10, 2008 Updates were made to the following section. The changes are explained below. Section A.7.1, “Managing Multipath I/O,” on page 131 A.7.1 Managing Multipath I/O Location: Section 5.3.3, “Using mdadm for Multipathed Devices.” Change: For information about modifying the /etc/lvm/lvm.conf...
