Novell SUSE Linux Enterprise Server 10 Storage Administration Guide (July 2007) Administration Manual

Summary of Contents for the Novell SUSE Linux Enterprise Server 10 Storage Administration Guide (July 2007)

  • Page 1 Cover: SLES 10 Storage Administration Guide, Novell SUSE Linux Enterprise Server 10, www.novell.com...
  • Page 2: Legal Notices

    Further, Novell, Inc. reserves the right to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of such changes.
  • Page 3 Novell Trademarks For Novell trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/ trademarks/tmlist.html). Third-Party Materials All third-party trademarks and copyrights are the property of their respective owners. Some content in this document is copied, distributed, and/or modified from the following document under the terms specified in the document’s license.
  • Page 5: Table Of Contents

    Contents: About This Guide; 1 Overview of EVMS; Benefits of EVMS (11); Plug-In Layers...
  • Page 6 Creating Disk Segments (or Partitions) (38); Configuring Mount Options for Devices...
  • Page 7 6.3.1 Adding Mirrors to a RAID 1 Device (69); 6.3.2 Adding Segments to a RAID 4 or 5...
  • Page 8 9.5.2 Host Names (104); 9.5.3 TCP Port 7788...
  • Page 9: About This Guide

    For information about managing storage with the Linux Volume Manager (LVM), see the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10). Documentation Conventions In Novell documentation, a greater-than symbol (>) is used to separate actions within a step and items in a cross-reference path. About This Guide...
  • Page 10 ® A trademark symbol ( , etc.) denotes a Novell trademark. An asterisk (*) denotes a third-party trademark. SLES 10 Storage Administration Guide...
  • Page 11: Overview Of Evms

    Overview of EVMS The Enterprise Volume Management System (EVMS) 2.5.5 for Linux* is an extensible storage management tool that integrates all aspects of volume management, such as disk partitioning, the Logical Volume Manager (LVM), the Multiple-Disk (MD) manager for software RAIDs, the Device Mapper (DM) for multipath I/O configuration, and file system operations.
  • Page 12: Supported File Systems

    FAT (read only). For more information about file systems supported in SUSE® Linux Enterprise Server 10, see the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10). 1.4 Terminology EVMS uses the following terminology in the EVMS user interface:...
  • Page 13: Location Of Device Nodes For Evms Storage Objects

    Term: Description
    Region: An ordered set of logically contiguous sectors that might or might not be physically contiguous. The underlying mapping can be to logical disks, disk segments, or other storage regions.
    Feature (Feature Object, EVMS Feature): A logically contiguous address space created from one or more disks, segments, regions, or other feature objects through the use of an EVMS feature.
  • Page 14 Storage Object / Standard Location of the Device Node / EVMS Location of the Device Node:
    A software RAID device: /dev/md1 -> /dev/evms/md/md1
    An LVM volume: /dev/lvm_group/lvm_volume -> /dev/evms/lvm/lvm_group/lvm_volume
  • Page 15: Using Evms To Manage Devices

    Using EVMS to Manage Devices This section describes how to configure EVMS as the volume manager of your devices. Section 2.1, “Configuring the System Device at Install to Use EVMS,” on page 15 Section 2.2, “Configuring an Existing System Device to Use EVMS,” on page 22 Section 2.3, “Configuring LVM Devices to Use EVMS,”...
  • Page 16 Linux Enterprise Server 10, see “Large File System Support” in the SUSE Linux Enterprise Server 10 Installation and Administration Guide. (http:// www.novell.com/documentation/sles10). Data Loss Considerations for the System Device This install requires that you delete the default partitioning settings created by the install, and create new partitions to use EVMS instead.
  • Page 17: During The Server Install

    SUSE Linux Enterprise 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10/sles_admin/data/bookinfo_book_sles_admin.html). 2 When the installation reaches the Installation Settings screen, delete the proposed LVM-based partitioning solution. This deletes the proposed partitions and the partition table on the system device so that the device can be marked to use EVMS as the volume manager instead of LVM.
  • Page 18 4c Select Primary Partition, then click OK. 4d Select Do Not Format, then select Linux LVM (0x8E) from the list of file system IDs. 4e In Size (End Value field), set the cylinder End value to 5 GB or larger, depending on the combined partition size you need to contain your system and swap volumes.
  • Page 19 6a Select lvm/system, then click Add. 6b In the Create Logical Volume dialog box, select Format, then select Swap from the File System drop-down menu. 6c Specify swap as the volume name. 6d Specify 1 GB (recommended) for the swap volume. The swap size should be 128 MB or larger, with a recommended size of 1 GB.
  • Page 20: After The Server Install

    IMPORTANT: After the install is complete, make sure to perform the mandatory post-install configuration of the related system settings to ensure that the system device functions properly under EVMS. Otherwise, the system fails to boot properly. For information, see “After the Server Install” on page 2.1.3 After the Server Install After the SUSE Linux Enterprise Server 10 install is complete, you must perform the following tasks to ensure that the system device functions properly under EVMS:...
  • Page 21 /dev/sda1 /boot reiser defaults 1 1 3 In the Device Name column, modify the location of the /boot partition from /dev to /dev/ evms so it can be managed by EVMS. Modify only the device name by adding /evms to the path: /dev/evms/sda1 /boot reiser defaults 1 1 4 Save the file.
  • Page 22: Configuring An Existing System Device To Use Evms

    /etc/init.d/boot.evms script to delete the device nodes on system restart, as was required for previous versions of SUSE Linux. 5 Continue with “Restart the Server” on page Restart the Server 1 Restart the server to apply the post-install configuration settings. 2 On restart, if the system device does not appear, it might be because EVMS has not been activated.
  • Page 23: Disable The Boot.lvm And Boot.md Services

    2.2.1 Disable the boot.lvm and boot.md Services You need to disable boot.lvm (which handles devices for the Linux Volume Manager) and boot.md (which handles multiple devices in software RAIDs) so that they do not run at boot time. Instead, you want boot.evms to run at boot time. 1 In YaST, click System >...
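    The same change can also be made from the command line; the following is a minimal sketch using chkconfig, which accomplishes the same thing as the YaST runlevel steps above:
      chkconfig boot.lvm off    # do not activate LVM device nodes at boot
      chkconfig boot.md off     # do not assemble software RAIDs at boot
      chkconfig boot.evms on    # let EVMS activate all devices at boot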
  • Page 24: Edit The Boot Loader File

    Make sure to replace sda1, sda2, and sda3 with the device names you used for your partitions. IMPORTANT: When working in the /etc/fstab file, do not leave any stray characters or spaces in the file. This is a configuration file, and it is highly sensitive to such mistakes. 1 Open the /etc/fstab file in a text editor.
  • Page 25 For example, change the Root Device value from /dev/sda2 /dev/evms/sda2 Replace sda2 with the actual device on your machine. 3c Edit any device paths in the Other Kernel Parameters field. 3d Click OK to save the changes and return to the Boot Loader page. 4 Modify the failsafe image so that the failsafe root file system is mounted as /dev/evms/ instead of /dev/.
  • Page 26: Force The Ram Disk To Recognize The Root Partition

    NOTE: Recent patches to mkinitrd might resolve the need to do this task. For the latest version of mkinitrd, see Recommended Updates for mkinitrd (http://support.novell.com/techcenter/psdb/24c7dfbc3e0c183970b70c1c0b3a6d7d.html) at the Novell Technical Support Center. 1 At a terminal console prompt, enter the EVMS Ncurses command as the root user or equivalent: evmsn 2 Review the output to verify that EVMS shows only the /boot and swap partitions as active in EVMS.
  • Page 27: Configuring Lvm Devices To Use Evms

    You should see the following devices mounted (with your own partition names, of course) under the /dev/evms directory:
      /dev/evms/sda1
      /dev/evms/sda2
      /dev/evms/sda3
    2.3 Configuring LVM Devices to Use EVMS Use the following post-installation procedure to configure data devices (not system devices) to be managed by EVMS.
  • Page 28: Using The Elilo Loader Files (Ia-64)

    If EVMS starts before iSCSI on your system so that your EVMS devices, RAIDs, and volumes are not visible or accessible, you must correct the order in which iSCSI and EVMS are started. Enter the chkconfig command at the Linux server console of every server that is part of your iSCSI SAN. 1 At a terminal console prompt, enter either
      chkconfig evms on
    or
      chkconfig boot.evms on...
  • Page 29 Command Description Starts the graphical interface for EVMS GUI. For information about features in evmsgui this interface, see ”EVMS GUI” (http://evms.sourceforge.net/user_guide/#GUI) in the EVMS User Guide at the EVMS project on SourceForge.net. Starts the text-mode interface for EVMS Ncurses. For information about evmsn features in this interface, see the “EVMS Ncurses Interface”...
  • Page 31: Using Uuids To Mount Devices

    Using UUIDs to Mount Devices This section describes the optional use of UUIDs instead of device names to identify file system devices in the boot loader file and the /etc/fstab file. Section 3.1, “Naming Devices with udev,” on page 31 Section 3.2, “Understanding UUIDs,”...
  • Page 32: Finding The Uuid For A File System Device

    during the install, it might be assigned to /dev/sdg1 after the SAN is connected. One way to avoid this problem is to use the UUID in the boot loader and /etc/fstab files for the boot device. A UUID never changes, no matter where the device is mounted, so it can always be found at boot. In a boot loader file, you typically specify the location of the device (such as /dev/sda1 or /dev/evms/sda1) to mount it at system boot.
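    As a hedged illustration of the idea (the UUID shown is a placeholder, not a value from this guide), udev exposes the UUIDs of file system devices as symbolic links, and the UUID can then replace the device path in /etc/fstab:
      ls -l /dev/disk/by-uuid/     # lists UUID -> device symlinks created by udev
      # /etc/fstab entry using a UUID instead of a device node path:
      UUID=1234abcd-12ab-34cd-56ef-123456abcdef  /  reiserfs  defaults  1 1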
  • Page 33: Using Uuids In The Boot Loader And /Etc/Fstab File (Ia64)

    If you use the Boot Loader option in YaST, there is a defect where it adds some duplicate lines to the boot loader file when you change a value. Use an editor to remove the following duplicate lines:
      color white/blue black/light-gray
      default 0
      timeout 8
      gfxmenu (sd0,1)/boot/message
  • Page 34: Additional Information

    3.5 Additional Information For more information about using udev(8) for managing devices, see “Dynamic Kernel Device Management with udev” (http://www.novell.com/documentation/sles10/sles_admin/data/cha_udev.html) in the SUSE® Linux Enterprise Server 10 Installation and Administration Guide.
  • Page 35: Managing Evms Devices

    Managing EVMS Devices This section describes how to initialize a disk for EVMS management by adding a segment management container to manage the partitions that you later add to the disk. Section 4.1, “Understanding Disk Segmentation,” on page 35 Section 4.2, “Initializing Disks,” on page 36 Section 4.3, “Removing the Segment Manager from a Device,”...
  • Page 36: Disk Segments

    Segment Manager Description A partitioning scheme for Mac-OS partitions. 4.1.2 Disk Segments After you initialize the disk by adding a segment manager, you see metadata and free space segments on the disk. You can then create one or multiple data segments in a disk segment. Disk Segment Types Table 4-2 Segment Type...
  • Page 37: Guidelines

    4.2.2 Guidelines Consider the following guidelines when initializing a disk: EVMS might allow you to create segments without first adding a segment manager for the disk, but it is best to explicitly add a segment manager to avoid problems later. IMPORTANT: You must add a Cluster segment manager if you plan to use the devices for volumes that you want to share as cluster resources.
  • Page 38: Removing The Segment Manager From A Device

    3b From the list, select one of the following types of segment manager, then click Next.
      DOS Segment Manager (the most common choice)
      GPT Segment Manager (for IA-64 platforms)
      Cluster Segment Manager (available only if it is a viable option for the selected disk)
    3c Select the device from the list of Plugin Acceptable Objects, then click Next.
  • Page 39: Configuring Mount Options For Devices

    Primary Partition: Click Yes for a primary partition, or click No for a logical partition. Required settings are denoted in the page by an asterisk (*). All required fields must be completed to make the Create button active. 5 Click Create to create the segment. 6 Verify that the new segment appears in the Segment list.
  • Page 40 Fstab Option Description Data journaling mode For journaling file systems, select the preferred journaling mode: Ordered: Writes data to the file system, then enters the metadata in the journal. This is the default. Journal: Writes data twice; once to the journal, then to the file system. Writeback: Writes data to the file system and writes metadata in the journal, but the writes are performed in any order.
  • Page 41: What's Next

    4.6 What’s Next If multiple paths exist between your host bus adapters (HBAs) and the storage devices, configure multipathing for the devices before creating software RAIDs or file system volumes on the devices. For information, see Chapter 5, “Managing Multipath I/O for Devices,” on page If you want to configure software RAIDs, do it before you create file systems on the devices.
  • Page 43: Managing Multipath I/O For Devices

    Managing Multipath I/O for Devices This section describes how to manage failover and path load balancing for multiple paths between the servers and block storage devices. Section 5.1, “Understanding Multipathing,” on page 43 Section 5.2, “Multipath Management Tools,” on page 44 Section 5.3, “Supported Storage Subsystems,”...
  • Page 44: Benefits Of Multipathing

    5.1.2 Benefits of Multipathing Linux multipathing provides connection fault tolerance and can optionally provide load balancing across the available connections. When multipathing is configured and running, it automatically isolates and identifies device connection failures, and reroutes I/O to alternate connections. Typical connection problems involve faulty adapters, cables, or controllers.
  • Page 45: Device Mapper Multipath I/O Module

    5.2.1 Device Mapper Multipath I/O Module The Device Mapper Multipath I/O (DM-MPIO) module provides the multipathing capability for Linux. Multipath protects against failures in the paths to the device, and not failures in the device itself. If one of the paths is lost (for example, a network adapter breaks or a fiber-optic cable is removed), I/O will be redirected to the remaining paths.
  • Page 46: Using Mdadm For Multipathed Devices

    For a list of files included in this package, see the multipath-tools Package Description (http://www.novell.com/products/linuxpackages/suselinux/multipath-tools.html). Ensure that the multipath-tools package is installed by entering the following at a terminal console prompt: rpm -q multipath-tools 5.2.3 Using mdadm for Multipathed Devices...
  • Page 47: Configuring The System For Multipathing

    If the LUNs are seen by the HBA driver, but there are no corresponding block devices, additional kernel parameters are needed to change the SCSI device scanning behavior, such as to indicate that LUNs are not numbered consecutively. For information, see Options for SCSI Device Scanning (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=http--
  • Page 48: Partitioning Multipathed Devices

    supportnovellcom-techcenter-sdb-en-2005-06-drahnscsiscanninghtml&sliceId=) in the Novell Support Knowledgebase. 5.4.2 Partitioning Multipathed Devices Partitioning devices that have multiple paths is not recommended. However, if you want to partition the device, you should configure its partitions using fdisk or YaST2 before configuring multipathing. This is necessary because partitioning a DM-MPIO device is not supported.
  • Page 49: Adding Multipathd To The Boot Sequence

    To avoid scanning and detection problems for multipathed devices: 1 In a terminal console, log in as the root user. 2 Open the /etc/mdadm.conf file in a text editor, then modify the Devices variable to scan for devices in the /dev/disk/by-id directory, as follows: DEVICE /dev/disk/by-id/* After you start multipath I/O services, the paths are listed under /dev/disk/by-id as these names are persistent.
  • Page 50: Adding Support For The Storage Subsystem To /Etc/Multipath.conf

    5.7 Adding Support for the Storage Subsystem to /etc/multipath.conf If you are using a storage subsystem that is automatically detected (see “Supported Storage Subsystems” on page 46), no further configuration of the /etc/multipath.conf file is required. Otherwise, create the /etc/multipath.conf file and add an appropriate device entry for your storage subsystem.
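    A hedged sketch of what such a device entry can look like (the vendor, product, and policy values are placeholders; consult your storage vendor for the settings that apply to your array):
      devices {
        device {
          vendor                "VENDOR"
          product               "MODEL"
          path_grouping_policy  failover
          path_checker          tur
          failback              immediate
        }
      }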
  • Page 51: Tuning The Failover For Specific Host Bus Adapters

    cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf
    3 Open the /etc/multipath.conf file in a text editor. 4 Uncomment the Defaults directive and its ending bracket. 5 Uncomment the user_friendly_names option, then change its value from No to Yes. 6 Specify names for devices using the ALIAS directive. 7 Save your changes, then close the file.
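    A hedged sketch of the resulting file (the WWID and alias are placeholders; the ALIAS step above corresponds to per-device alias entries of this kind):
      defaults {
        user_friendly_names yes
      }
      multipaths {
        multipath {
          wwid   3600601607cf30e00184589a37a31d911   # placeholder; use the WWID of your LUN
          alias  data_lun1
        }
      }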
  • Page 52: Configuring Multipath I/O For An Existing Software Raid

    5.11 Configuring Multipath I/O for an Existing Software RAID Ideally, you should configure multipathing for devices before you use them as components of a software RAID device. If you add multipathing after creating any software RAID devices, the multipath I/O service might start after the md service on reboot, which makes multipathing appear to be unavailable for RAIDs.
  • Page 53: Using Multipathed Devices

    The devices should now be listed in /dev/disk/by-id, and have symbolic links to their Device Mapper device names. For example:
      lrwxrwxrwx 1 root root 10 Jun 15 09:36 scsi-mpath1 -> ../../dm-1
    6 Restart the boot.md service and the RAID device by entering
      /etc/init.d/boot.md start
    7 Check the status of the software RAID by entering
      mdadm --detail /dev/md0
  • Page 54: Using Mdadm

    This allows LVM2 to scan only the by-id paths and reject everything else. If you are also using LVM2 on non-multipathed devices, make the necessary adjustments to suit your setup. 5.12.3 Using mdadm Just as for LVM2, mdadm requires that the devices be accessed by the UUID rather than by the device node path.
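    A hedged sketch of the corresponding filter line in the devices section of /etc/lvm/lvm.conf (accept the persistent by-id paths, reject everything else; adjust it as noted above if you also run LVM2 on non-multipathed devices):
      filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]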
  • Page 55: Scanning For New Devices Without Rebooting

    The following information is displayed for each group:
      Scheduling policy used to balance I/O within the group, such as round-robin
      Whether the group is active, disabled, or enabled
      Whether the group is the first (highest priority) one
      Paths contained within the group
    The following information is displayed for each path:
      The physical address as host:bus:target:lun, such as 1:0:1:2
      Device node name, such as sda...
  • Page 56: Managing I/O In Error Situations

    Feb 14 01:03 multipathd: mpath4: event checker started
    Feb 14 01:03 multipathd: mpath5: event checker started
    Feb 14 01:03 multipathd: mpath4: remaining active paths: 1
    Feb 14 01:03 multipathd: mpath5: remaining active paths: 1
    5 Repeat Step 2 through Step 4 to add paths through other HBA adapters on the Linux system that are connected to the new device.
  • Page 57: Resolving Stalled I/O

    2 Reactivate queueing by entering the following command at a terminal console prompt: dmsetup message mapname 0 queue_if_no_path 5.17 Additional Information For more information about configuring and using multipath I/O on SUSE Linux Enterprise Server, see How to Setup/Use Multipathing on SLES (http://support.novell.com/techcenter/sdb/en/2005/04/sles_multipathing.html) in the Novell Support Knowledgebase. 5.18 What’s Next If you want to use software RAIDs, create and configure them before you create file systems on the devices.
  • Page 59: Managing Software Raids With Evms

    Managing Software RAIDs with EVMS This section describes how to create and manage software RAIDs with the Enterprise Volume Management System (EVMS). EVMS supports only RAIDs 0, 1, 4, and 5 at this time. For RAID 6 and 10 solutions, see Chapter 7, “Managing Software RAIDs 6 and 10 with mdadm,”...
  • Page 60: Overview Of Raid Levels

    Feature: Linux Software RAID / Hardware RAID
    RAID processing: In the host server’s processor / RAID controller on the disk array
    RAID levels: 0, 1, 4, 5, and 10, plus the mdadm raid10 / Varies by vendor
    Component devices: Disks from the same or different disk arrays / Same disk array
    6.1.2 Overview of RAID Levels...
  • Page 61: Comparison Of Raid Performance

    RAID Level / Description / Performance and Fault Tolerance:
    Description: Stripes data and distributes parity in a round-robin fashion across all disks. If disks are different sizes, the ...
    Performance and Fault Tolerance: Improves disk I/O performance for reads and writes. Write performance is considerably less than for RAID 0, because parity must be calculated and written. Write performance is faster than RAID 4.
  • Page 62: Configuration Options For Raids

    Raid Level Number of Disk Failures Tolerated Data Redundancy Distributed parity to reconstruct data and parity on the failed disk. 6.1.5 Configuration Options for RAIDs In EVMS management tools, the following RAID configuration options are provided: Configuration Options in EVMS Table 6-5 Option Description...
  • Page 63: Raid 5 Algorithms For Distributing Stripes And Parity

    Support” in the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10). In general, each storage object included in the RAID should be from a different physical disk to maximize I/O performance and to achieve disk fault tolerance where supported by the RAID level you use.
  • Page 64 RAID 5 Algorithms Table 6-7 Algorithm EVMS Type Description Left Asymmetric Stripes are written in a round-robin fashion from the first to last member segment. The parity’s position in the striping sequence moves in a round-robin fashion from last to first. For example: sda1 sdb1 sdc1 sde1 Left Symmetric This is the default setting and is considered the fastest method for...
  • Page 65: Multi-Disk Plug-In For Evms

    6.1.8 Multi-Disk Plug-In for EVMS The Multi-Disk (MD) plug-in supports creating software RAIDs 0 (striping), 1 (mirror), 4 (striping with dedicated parity), and 5 (striping with distributed parity). The MD plug-in to EVMS allows you to manage all of these MD features as “regions” with the Regions Manager. 6.1.9 Device Mapper Plug-In for EVMS The Device Mapper plug-in supports the following features in the EVMS MD Region Manager: Multipath I/O: Connection fault tolerance and load balancing for connections between the...
  • Page 66 For IA-64 platforms, this step is necessary to make the RAID 4/5 option available in the Regions Manager. For information about creating segments, see Section 4.4, “Creating Disk Segments (or Partitions),” on page 4a Select Action > Create > Segment to open the DOS Segment Manager. 4b Select the free space segment you want to use.
  • Page 67 5d Specify values for Configuration Options by changing the following default settings as desired. For RAIDs 1, 4, or 5, optionally specify a device to use as the spare disk for the RAID. The default is none. For RAIDs 0, 4, or 5, specify the chunk (stripe) size in KB. The default is 32 KB. For RAIDs 4/5, specify RAID 4 or RAID 5 (default).
  • Page 68 For information about these settings, see “Configuration Options for RAIDs” on page 5e Click Create to create the RAID device under the /dev/evms/md directory. The device is given a name such as md0, so its EVMS mount location is /dev/evms/ md/md0.
  • Page 69: Expanding A Raid

    8a Select Action > File System > Mount. 8b Select the RAID device you created in Step 5, such as /dev/evms/md/md0. 8c Specify the location where you want to mount the device, such as /home. 8d Click Mount. 9 Enable boot.evms to activate EVMS automatically at reboot. 9a In YaST, select System >...
  • Page 70: Adding Segments To A Raid 4 Or 5

    6.3.2 Adding Segments to a RAID 4 or 5 If the RAID region is clean and operating normally, the kernel driver adds the new object as a regular spare, and it acts as a hot standby for future failures. If the RAID region is currently degraded, the kernel driver immediately activates the new spare object and begins synchronizing the data and parity information.
  • Page 71: Adding A Spare Disk When You Create The Raid

    6.4.2 Adding a Spare Disk When You Create the RAID When you create a RAID 1, 4, or 5 in EVMS, specify the Spare Disk in the Configuration Options dialog box. You can browse to select the available device, segment, or region that you want to be the RAID’s spare disk.
  • Page 72: Identifying The Failed Drive

    can survive a single disk failure at a time. A RAID 4 can survive a single disk failure at a time if the disk is not the parity disk. Disks can fail for many reasons such as the following:
      Disk crash
      Disk pulled from the system
      Drive cable removed or loose
      I/O errors
  • Page 73: Replacing A Failed Device With A Spare

    Failed Devices : 0
    Spare Devices : 0
    UUID : 8a9f3d46:3ec09d23:86e1ffbc:ee2d0dd8
    Events : 0.174164
    Number Major Minor RaidDevice State
    removed
    active sync /dev/sdb2
    The “Total Devices : 1”, “Active Devices : 1”, and “Working Devices : 1” indicate that only one of the two devices is currently active.
  • Page 74: Removing The Failed Disk

    6 Monitor the status of the RAID to verify that the process has begun. For information about how to monitor RAID status, see Section 6.6, “Monitoring Status for a RAID,” on page 7 Continue with Section 6.5.4, “Removing the Failed Disk,” on page 6.5.4 Removing the Failed Disk You can remove the failed disk at any time after it has been replaced with the spare disk.
  • Page 75: Monitoring Status With Mdadm

    Status Information / Description / Interpretation:
    Personalities : [raid5] [raid4] / List of the RAIDs on the server by RAID label. / You have two RAIDs defined with labels of raid5 and raid4.
    md0 : active / <device> : <active | not active> <RAID label you specified>... / The RAID is active and mounted at /dev/evms/...
  • Page 76 Version : 00.90.03 Creation Time : Sun Apr 16 11:37:05 2006 Raid Level : raid5 Array Size : 35535360 (33.89 GiB 36.39 GB) Device Size : 8883840 (8.47 GiB 9.10 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Apr 17 05:50:44 2006 State : clean, degraded...
  • Page 77: Monitoring A Remirror Or Reconstruction

    Persistence : Superblock is persistent
    Update Time : Mon Apr 17 05:50:44 2006
    State : clean, degraded, recovering
    Active Devices : 4
    Working Devices : 5
    Failed Devices : 0
    Spare Devices : 1
    Layout : left-symmetric
    Chunk Size : 128K
    Rebuild Status : 3% complete
    UUID : 2e686e87:1eb36d02:d3914df8:db197afe
    Events : 0.189
  • Page 78 RAID Events in mdadm Table 6-8 Trigger RAID Event E-Mail Description Alert Device Disappeared An md array that was previously configured appears to no longer be configured. (syslog priority: Critical) If mdadm was told to monitor an array which is RAID0 or Linear, then it reports DeviceDisappeared with the extra information Wrong-Level.
  • Page 79: Deleting A Software Raid And Its Data

    UUID=1c661ae4:818165c3:3f7a4661:af475fda
    devices=/dev/sdb3,/dev/sdc3
    MAILADDR yourname@example.com
    The MAILADDR line gives an e-mail address that alerts should be sent to when mdadm is running in --monitor mode with the --scan option. There should be only one MAILADDR line in mdadm.conf, and it should have only one address. 3 Start mdadm monitoring by entering the following at the terminal console prompt:
      mdadm --monitor --mail=yourname@example.com --delay=1800 /dev/md0
    The --monitor option causes mdadm to periodically poll a number of md arrays and to...
  • Page 80 6 You must now reinitialize the disks for other uses, just as you would when adding a new disk to your system. SLES 10 Storage Administration Guide...
  • Page 81: Managing Software Raids 6 And 10 With Mdadm

    Managing Software RAIDs 6 and 10 with mdadm This section describes how to create software RAID 6 and 10 devices, using the Multiple Devices Administration (mdadm(8)) tool. You can also use mdadm to create RAIDs 0, 1, 4, and 5. The mdadm tool provides the functionality of legacy programs mdtools and raidtools.
  • Page 82: Creating A Raid 6

    7.1.2 Creating a RAID 6 The procedure in this section creates a RAID 6 device /dev/md0 with four devices: /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1. Make sure to modify the procedure to use your actual device nodes. 1 Open a terminal console, then log in as the root user or equivalent. 2 Create a RAID 6 device.
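    The command itself is truncated in this excerpt; a hedged sketch using the device names named above (options such as the chunk size are omitted here and are not prescribed by this excerpt):
      mdadm --create /dev/md0 --run --level=6 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1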
  • Page 83: Creating Nested Raid 10 (1+0) With Mdadm

    The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus 0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O capability. RAID Levels Supported in EVMS Table 7-2 RAID Level Description Performance and Fault Tolerance 10 (1+0)
  • Page 84: Creating Nested Raid 10 (0+1) With Mdadm

    Scenario for Creating a RAID 10 (1+0) by Nesting (Table 7-3)
    Raw Devices / RAID 1 (mirror) / RAID 1+0 (striped mirrors):
    /dev/sdb1 and /dev/sdc1 -> /dev/md0 -> /dev/md2
    /dev/sdd1 and /dev/sde1 -> /dev/md1 -> /dev/md2
    1 Open a terminal console, then log in as the root user or equivalent. 2 Create 2 software RAID 1 devices, using two different devices for each RAID 1 device.
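    A hedged sketch of those commands, using the device names from the table above (the original manual’s exact options are truncated in this excerpt):
      mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
      mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
      # stripe the two mirrors into the RAID 1+0 device
      mdadm --create /dev/md2 --run --level=0 --raid-devices=2 /dev/md0 /dev/md1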
  • Page 85: Creating A Complex Raid 10 With Mdadm

    The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices. Scenario for Creating a RAID 10 (0+1) by Nesting Table 7-4 Raw Devices RAID 0 (stripe) RAID 0+1 (mirrored stripes) /dev/sdb1...
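    A hedged sketch of the equivalent commands for the 0+1 case, assuming the same four raw devices as in the 1+0 scenario above (only /dev/sdb1 is visible in this excerpt, so the remaining device names are assumptions):
      mdadm --create /dev/md0 --run --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
      mdadm --create /dev/md1 --run --level=0 --raid-devices=2 /dev/sdd1 /dev/sde1
      # mirror the two stripes into the RAID 0+1 device
      mdadm --create /dev/md2 --run --level=1 --raid-devices=2 /dev/md0 /dev/md1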
  • Page 86: Understanding The Mdadm Raid10

    7.3.1 Understanding the mdadm RAID10 In mdadm, the RAID10 level creates a single complex software RAID that combines features of both RAID 0 (striping) and RAID 1 (mirroring). Multiple copies of all data blocks are arranged on multiple drives following a striping discipline. Component devices should be the same size. “Comparison of RAID10 Option and Nested RAID 10 (1+0)”...
  • Page 87 of replicas of each data block. The effective storage size is the number of devices divided by the number of replicas. For example, if you specify 2 replicas for an array created with 5 component devices, a copy of each block is stored on two different devices.
  • Page 88: Creating A Raid10 With Mdadm

    Far layout with an odd number of disks and two replicas: sda1 sdb1 sdc1 sde1 sdf1 . . . 7.3.2 Creating a RAID10 with mdadm The RAID10-level option for mdadm creates a RAID 10 device without nesting. For information about the RAID10-level, see Section 7.3, “Creating a Complex RAID 10 with mdadm,”...
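    A hedged sketch of a single mdadm command that uses the RAID10 level directly (the device names are placeholders; --layout=n2 requests the near layout with two replicas, while f2 would request the far layout described above):
      mdadm --create /dev/md0 --run --level=10 --raid-devices=4 --layout=n2 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1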
  • Page 89: Creating A Degraded Raid Array

    7.4 Creating a Degraded RAID Array A degraded array is one in which some devices are missing. Degraded arrays are supported only for RAID 1, RAID 4, RAID 5, and RAID 6. These RAID types are designed to withstand some missing devices as part of their fault-tolerance features.
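    A hedged sketch of how a degraded array is typically created with mdadm, using the keyword missing in place of the absent device (the device name is a placeholder):
      mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 missing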
  • Page 91: Resizing Software Raid Arrays With Mdadm

    Resizing Software RAID Arrays with mdadm This section describes how to increase or reduce the size of a software RAID 1, 4, 5, or 6 device with the Multiple Device Administration (mdadm(8)) tool. WARNING: Before starting any of the tasks described in this chapter, make sure that you have a valid backup of all of the data.
  • Page 92: Overview Of Tasks

    8.1.2 Overview of Tasks Resizing the RAID involves the following tasks. The order in which these tasks are performed depends on whether you are increasing or decreasing its size. Tasks Involved in Resizing a RAID (Table 8-2) Order If Order If Tasks Description Increasing...
  • Page 93: Increasing The Size Of The Raid Array

    Scenario for Increasing the Size of Component Partitions (Table 8-3)
    RAID Device: /dev/md0; Component Partitions: /dev/sda1, /dev/sdb1, /dev/sdc1
    To increase the size of the component partitions for the RAID: 1 Open a terminal console, then log in as the root user or equivalent. 2 Make sure that the RAID array is consistent and synchronized by entering
      cat /proc/mdstat
    If your RAID array is still synchronizing according to the output of this command, you must...
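    The remaining steps of this procedure are truncated in this excerpt; as a hedged outline, the component partitions are usually grown one member at a time, waiting for resynchronization between members:
      mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1   # take one member out of the array
      # enlarge the /dev/sda1 partition with fdisk or YaST, then return it to the array:
      mdadm -a /dev/md0 /dev/sda1
      cat /proc/mdstat   # wait until the resync completes before repeating for /dev/sdb1 and /dev/sdc1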
  • Page 94: Increasing The Size Of The File System

    The procedure in this section uses the device name /dev/md0 for the RAID device. Make sure to modify the name to use the name of your own device. 1 Open a terminal console, then log in as the root user or equivalent. 2 Check the size of the array and the device size known to the array by entering mdadm -D /dev/md0 | grep -e "Array Size"...
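    The rest of this procedure is truncated in this excerpt; as a hedged sketch, once the component devices are larger, the array itself is typically grown to the maximum size the members allow with:
      mdadm --grow /dev/md0 --size=max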
  • Page 95 To extend the file system to a specific size, enter resize2fs /dev/md0 size The size parameter specifies the requested new size of the file system. If no units are specified, the unit of the size parameter is the block size of the file system. Optionally, the size parameter may be suffixed by one of the following the unit designators: s for 512 byte sectors;...
  • Page 96: Decreasing The Size Of A Software Raid

    ReiserFS As with Ext2 and Ext3, a ReiserFS file system can be increased in size while mounted or unmounted. The resize is done on the block device of your RAID array. 1 Open a terminal console, then log in as the root user or equivalent. 2 Increase the size of the file system on the software RAID device called /dev/md0, using one of the following methods: To extend the file system size to the maximum available size of the device, enter...
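    The commands themselves are truncated in this excerpt; a hedged sketch using the ReiserFS resize utility:
      resize_reiserfs /dev/md0            # grow the file system to the full size of the device
      resize_reiserfs -s +500M /dev/md0   # or grow it by a specific amount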
  • Page 97 ® In SUSE Linux Enterprise Server SP1, only Ext2, Ext3, and ReiserFS provide utilities for shrinking the size of the file system. Use the appropriate procedure below for decreasing the size of your file system. The procedures in this section use the device name /dev/md0 for the RAID device. Make sure to modify commands to use the name of your own device.
  • Page 98: Decreasing The Size Of Component Partitions

    mount -t reiserfs /dev/md0 /mnt/point
    5 Check the effect of the resize on the mounted file system by entering
      df -h
    The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.
  • Page 99: Decreasing The Size Of The Raid Array

    This option is possible only if no other file systems on the original disk are accessed by the system. When the replacement device is added back into the RAID, it takes much longer to synchronize the data. 5 Re-add the partition to the RAID array. For example, to add /dev/sda1, enter mdadm -a /dev/md0 /dev/sda1 Wait until the RAID is synchronized and consistent before continuing with the next partition.
  • Page 101: Installing And Managing Drbd Services

    Installing and Managing DRBD Services This section describes how to install, configure, and manage a device-level software RAID 1 across a network using DRBD* (Distributed Replicated Block Device) for Linux. Section 9.1, “Understanding DRBD,” on page 101 Section 9.2, “Installing DRBD Services,” on page 101 Section 9.3, “Configuring the DRBD Service,”...
  • Page 102: Configuring The Drbd Service

    1b Choose Software > Software Management. 1c Change the filter to Patterns. 1d Under Base Technologies, select High Availability. 1e Click Accept. 2 Install the drbd kernel modules on both servers. 2a Log in as the root user or equivalent, then open YaST. 2b Choose Software >...
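    The DRBD resource itself is described in /etc/drbd.conf; a hedged, minimal sketch (the resource name, host names, IP addresses, and backing disks are placeholders; only port 7788 is taken from this guide, which discusses it in Section 9.5.3):
      resource r0 {
        protocol C;
        on node1 {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   10.0.0.1:7788;
          meta-disk internal;
        }
        on node2 {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   10.0.0.2:7788;
          meta-disk internal;
        }
      }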
  • Page 103: Testing The Drbd Service

    7 After the block devices on both nodes are fully synchronized, format the DRBD device on the primary with a file system such as reiserfs. Any Linux file system can be used. For example, enter mkfs.reiserfs -f /dev/drbd0 IMPORTANT: Always use the /dev/drbd<n> name in the command, not the actual /dev/disk device name.
  • Page 104: Troubleshooting Drbd

    DRBD as a high availability service with HeartBeat 2. For information about installing and configuring HeartBeat 2 for SUSE® Linux Enterprise Server 10, see the HeartBeat 2 Installation and Setup Guide (http://www.novell.com/documentation/sles10/hb2/data/hb2_config.html) on the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10).
  • Page 105: Tcp Port 7788

    Linux High-Availability Project. For information about installing and configuring HeartBeat 2 for SUSE® Linux Enterprise Server 10, see the HeartBeat 2 Installation and Setup Guide (http://www.novell.com/documentation/sles10/hb2/data/hb2_config.html) on the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10).
  • Page 107: Troubleshooting Storage Issues

    Troubleshooting Storage Issues This section describes how to work around known issues for EVMS devices, software RAIDs, multipath I/O, and volumes. Section 10.1, “Is DM-MPIO Available for the Boot Partition?,” on page 107 Section 10.2, “Rescue System Cannot Find Devices That Are Managed by EVMS,” on page 107 Section 10.3, “Volumes on EVMS Devices Do Not Appear After Reboot,”...
  • Page 108: Volumes On Evms Devices Do Not Appear When Using Iscsi

    10.4 Volumes on EVMS Devices Do Not Appear When Using iSCSI If you have installed and configured an iSCSI SAN, and have created and configured EVMS disks or volumes on that iSCSI SAN, your EVMS volumes might not be visible or accessible after reboot. This problem is caused by EVMS starting before the iSCSI service.
