Further, Novell, Inc. reserves the right to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of such changes.
Novell Trademarks
For Novell trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/tmlist.html).
Third-Party Materials
All third-party trademarks and copyrights are the property of their respective owners. Some content in this document is copied, distributed, and/or modified from the following document under the terms specified in the document's license.
Documentation Updates
For the most recent version of the SUSE Linux Enterprise Server 10 Storage Administration Guide for EVMS, visit the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10).
Additional Documentation
For information about managing storage with the Linux Volume Manager (LVM), see the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/...
Documentation Conventions
In Novell documentation, a greater-than symbol (>) is used to separate actions within a step and items in a cross-reference path. A trademark symbol (®, etc.) denotes a Novell trademark. An asterisk (*) denotes a third-party trademark.
Overview of EVMS The Enterprise Volume Management System (EVMS) 2.5.5 management tool for Linux* is an extensible storage management tool that integrates all aspects of volume management, such as disk partitioning, the Logical Volume Manager (LVM), the Multiple-Disk (MD) manager for software RAIDs, the Device Mapper (DM) for multipath I/O configuration, and file system operations.
FAT (read only)
For more information about file systems supported in SUSE Linux Enterprise Server® 10, see the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10). The File System Primer (http://wiki.novell.com/index.php/File_System_Primer) describes the variety of file systems available on Linux and which ones are the best to use for which workloads and data.
Term | Description
Segment | An ordered set of physically contiguous sectors on a single device. It is similar to traditional disk partitions.
Region | An ordered set of logically contiguous sectors that might or might not be physically contiguous. The underlying mapping can be to logical disks, disk segments, or other storage regions.
Table 1-3 Device Node Location
Storage Object | Standard Location of the Device Node | EVMS Location of the Device Node
A disk segment of a disk | /dev/sda5 | /dev/evms/sda5
A software RAID device | /dev/md1 | /dev/evms/md/md1
An LVM volume | /dev/lvm_group/lvm_volume | /dev/evms/lvm/lvm_group/lvm_volume
Using EVMS to Manage Devices This section describes how to configure EVMS as the volume manager of your devices. Section 2.1, “Configuring the System Device at Install to Use EVMS,” on page 15 Section 2.2, “Configuring an Existing System Device to Use EVMS,” on page 22 Section 2.3, “Configuring LVM Devices to Use EVMS,”...
Linux Enterprise Server 10, see "Large File System Support" in the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10).
Data Loss Considerations for the System Device
This install requires that you delete the default partitioning settings created by the install, and create new partitions to use EVMS instead.
SUSE Linux Enterprise 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10/sles_admin/data/bookinfo_book_sles_admin.html).
2 When the installation reaches the Installation Settings screen, delete the proposed LVM-based partitioning solution. This deletes the proposed partitions and the partition table on the system device so that the device can be marked to use EVMS as the volume manager instead of LVM.
4c Select Primary Partition, then click OK.
4d Select Do Not Format, then select Linux LVM (0x8E) from the list of file system IDs.
4e In the Size (End Value) field, set the cylinder End value to 5 GB or larger, depending on the combined partition size you need to contain your system and swap volumes.
6 Create the swap volume in the lvm/system container:
6a Select lvm/system, then click Add.
6b In the Create Logical Volume dialog box, select Format, then select Swap from the File System drop-down menu.
6c Specify swap as the volume name.
6d Specify 1 GB (recommended) for the swap volume.
IMPORTANT: After the install is complete, make sure to perform the mandatory post-install configuration of the related system settings to ensure that the system device functions properly under EVMS. Otherwise, the system fails to boot properly. For information, see "After the Server Install."
2.1.3 After the Server Install
After the SUSE Linux Enterprise Server 10 install is complete, you must perform the following tasks to ensure that the system device functions properly under EVMS:...
/dev/sda1 /boot reiser defaults 1 1
3 In the Device Name column, modify the location of the /boot partition from /dev to /dev/evms so it can be managed by EVMS. Modify only the device name by adding /evms to the path:
/dev/evms/sda1 /boot reiser defaults 1 1
4 Save the file.
NOTE: Effective in SUSE Linux Enterprise 10, the /dev directory is on tmpfs, and the device nodes are automatically re-created on boot. It is no longer necessary to modify the /etc/init.d/boot.evms script to delete the device nodes on system restart, as was required for previous versions of SUSE Linux.
2.2.1 Disable the boot.lvm and boot.md Services
You need to disable boot.lvm (which handles devices for the Linux Volume Manager) and boot.md (which handles multiple devices in software RAIDs) so they do not run at boot time. In the future, you want boot.evms to run at boot time instead.
1 In YaST, click System >...
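As a command-line alternative to the YaST runlevel editor, a minimal sketch of the equivalent chkconfig calls (run as the root user) is:
chkconfig boot.lvm off
chkconfig boot.md off
chkconfig boot.evms on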
IMPORTANT: When working in the /etc/fstab file, do not leave any stray characters or spaces in the file. This is a configuration file, and it is highly sensitive to such mistakes.
1 Open the /etc/fstab file in a text editor.
2 Locate the line that contains the partition.
/dev/evms/sda2
Replace sda2 with the actual device on your machine.
3c Edit any device paths in the Other Kernel Parameters field.
3d Click OK to save the changes and return to the Boot Loader page.
4 Modify the failsafe image so that the failsafe root file system is mounted as /dev/evms/ instead of /dev/...
For the latest version of mkinitrd, see Recommended Updates for mkinitrd (http://support.novell.com/techcenter/psdb/24c7dfbc3e0c183970b70c1c0b3a6d7d.html) at the Novell Technical Support Center.
1 At a terminal console prompt, enter the EVMS Ncurses command as the root user or equivalent:...
2.3 Configuring LVM Devices to Use EVMS
Use the following post-installation procedure to configure data devices (not system devices) to be managed by EVMS. If you need to configure an existing system device for EVMS, see Section 2.2, "Configuring an Existing System Device to Use EVMS," on page 22.
1 In a terminal console, run the EVMSGUI by entering the following as the root user or equivalent:
chkconfig boot.evms on
This ensures that EVMS and iSCSI start in the proper order each time your servers restart.
2.5 Using the ELILO Loader Files (IA-64)
On a SUSE Linux Enterprise Server boot device EFI System Partition, the full paths to the loader and configuration files are:
/boot/efi/SuSE/elilo.efi
/boot/efi/SuSE/elilo.conf
Command | Description
evms | Starts the EVMS command-line interpreter (CLI) interface. For information about evms command options, see "EVMS Command Line Interpreter" (http://evms.sourceforge.net/user_guide/#COMMANDLINE) in the EVMS User Guide at the EVMS project on SourceForge.net.
To stop evmsgui from running automatically on restart:
1 Close evmsgui...
Using UUIDs to Mount Devices
This section describes the optional use of UUIDs instead of device names to identify file system devices in the boot loader file and the /etc/fstab file.
Section 3.1, "Naming Devices with udev," on page 31
Section 3.2, "Understanding UUIDs,"...
3.2.1 Using UUIDs to Assemble or Activate File System Devices The UUID is always unique to the partition and does not depend on the order in which it appears or where it is mounted. With certain SAN devices attached to the server, the system partitions are renamed and moved to be the last device.
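To see the UUIDs assigned to the block devices on a system, you can list the by-uuid symlinks or query a device directly; the device name sda2 here is only an example:
ls -l /dev/disk/by-uuid
blkid /dev/sda2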
kernel /boot/vmlinuz root=/dev/disk/by-uuid/e014e482-1c2d-4d09-84ec-61b3aefde77a
IMPORTANT: Make a copy of the original boot entry, then modify the copy. If you make a mistake, you can boot the server without the SAN connected, and fix the error.
If you use the Boot Loader option in YaST, there is a defect where it adds some duplicate lines to the boot loader file when you change a value.
3.5 Additional Information
For more information about using udev(8) for managing devices, see "Dynamic Kernel Device Management with udev" (http://www.novell.com/documentation/sles10/sles_admin/data/cha_udev.html) in the SUSE Linux Enterprise Server® 10 Installation and Administration Guide.
For more information about udev(8) commands, see its man page.
Managing EVMS Devices This section describes how to initialize a disk for EVMS management by adding a segment management container to manage the partitions that you later add to the disk. Section 4.1, “Understanding Disk Segmentation,” on page 35 Section 4.2, “Initializing Disks,” on page 36 Section 4.3, “Removing the Segment Manager from a Device,”...
Segment Manager | Description
MAC | A partitioning scheme for Mac-OS partitions.
4.1.2 Disk Segments
After you initialize the disk by adding a segment manager, you see metadata and free space segments on the disk. You can then create one or multiple data segments in a disk segment.
Table 4-2 Disk Segment Types
Segment Type...
4.2.2 Guidelines Consider the following guidelines when initializing a disk: EVMS might allow you to create segments without first adding a segment manager for the disk, but it is best to explicitly add a segment manager to avoid problems later. IMPORTANT: You must add a Cluster segment manager if you plan to use the devices for volumes that you want to share as cluster resources.
3b From the list, select one of the following types of segment manager, then click Next.
DOS Segment Manager (the most common choice)
GPT Segment Manager (for IA-64 platforms)
Cluster Segment Manager (available only if it is a viable option for the selected disk)
3c Select the device from the list of Plugin Acceptable Objects, then click Next.
Primary Partition: Click Yes for a primary partition, or click No for a logical partition. Required settings are denoted in the page by an asterisk (*). All required fields must be completed to make the Create button active. 5 Click Create to create the segment. 6 Verify that the new segment appears in the Segment list.
Fstab Option | Description
Data journaling mode | For journaling file systems, select the preferred journaling mode:
Ordered: Writes data to the file system, then enters the metadata in the journal. This is the default.
Journal: Writes data twice; once to the journal, then to the file system.
Writeback: Writes data to the file system and writes metadata in the journal, but the writes are performed in any order.
4.6 What's Next
If multiple paths exist between your host bus adapters (HBAs) and the storage devices, configure multipathing for the devices before creating software RAIDs or file system volumes on the devices. For information, see Chapter 5, "Managing Multipath I/O for Devices," on page 43.
If you want to configure software RAIDs, do it before you create file systems on the devices.
Managing Multipath I/O for Devices This section describes how to manage failover and path load balancing for multiple paths between the servers and block storage devices. Section 5.1, “Understanding Multipathing,” on page 43 Section 5.2, “Planning for Multipathing,” on page 44 Section 5.3, “Multipath Management Tools,”...
Typical connection problems involve faulty adapters, cables, or controllers. When you configure multipath I/O for a device, the multipath driver monitors the active connection between devices. When the multipath driver detects I/O errors for an active path, it fails over the traffic to the device’s designated secondary path.
Disk Management Tasks Perform the following disk management tasks before you attempt to configure multipathing for a physical or logical device that has multiple paths: Use third-party tools to carve physical disks into smaller logical disks. Use third-party tools to partition physical or logical disks. If you change the partitioning in the running system, the Device Mapper Multipath (DM-MP) module does not automatically detect and reflect these changes.
5.2.3 Using LVM2 on Multipath Devices
By default, LVM2 does not recognize multipathed devices. To make LVM2 recognize the multipathed devices as possible physical volumes, you must modify /etc/lvm/lvm.conf. It is important to modify it in a way that it does not scan and use the physical paths, but only accesses the multipath I/O storage through the multipath I/O layer.
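A minimal sketch of the kind of filter this involves, using the /dev/disk/by-id names as an example; verify how the multipath devices are named on your own system before adopting such a filter in /etc/lvm/lvm.conf:
filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]
The accept rule keeps the persistent by-id names (where the multipath devices appear) and the final reject rule keeps LVM2 from scanning the raw /dev/sd* paths directly.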
prior to enabling multipath I/O. If you change the partitioning in the running system, DM-MP does not automatically detect and reflect these changes. The device must be reinitialized, which usually requires a reboot.
5.2.6 Supported Architectures for Multipath I/O
The multipathing drivers and tools in SUSE Linux Enterprise Server® 10 support all seven of the supported processor architectures: IA32, AMD64/EM64T, IPF/IA64, p-Series (32-bit/64-bit), z-...
STK OPENstorage DS280
Sun* StorEdge 3510
Sun T4
In general, most other storage arrays should work. When storage arrays are automatically detected, the default settings for multipathing apply. If you want non-default settings, you must manually create and configure the /etc/multipath.conf file.
DM-MP is the preferred solution for multipathing on SUSE Linux Enterprise Server 10. It is the only multipathing option shipped with the product that is completely supported by Novell® and SUSE. DM-MP features automatic configuration of the multipathing subsystem for a large variety of setups.
configuration, the traffic is balanced across the remaining healthy paths. If all active paths fail, inactive secondary paths must be awakened, so failover occurs with a delay of approximately 30 seconds. If a disk array has more than one storage processor, make sure that the SAN switch has a connection to the storage processor that owns the LUNs you want to access.
For a list of files included in this package, see the multipath-tools Package Description (http://www.novell.com/products/linuxpackages/suselinux/multipath-tools.html).
1 Ensure that the multipath-tools package is installed by entering the following at a terminal...
To verify that mdadm is installed:
1 Ensure that the mdadm package is installed by entering the following at a terminal console prompt:
rpm -q mdadm
If it is installed, the response repeats the package name and provides the version information. For example:
mdadm-2.6-0.11
If it is not installed, the response reads:...
multipath -ll
multipath -ll <device>
Flush all unused multipath device maps (unresolves the multiple paths; it does not delete the device):
multipath -F
multipath -F <device>
Set the group policy:
multipath -p [failover|multibus|group_by_serial|group_by_prio|group_by_node_name]
Specify one of the group policy options that are described in Table 5-3.
Table 5-3 Group Policy Options for the multipath -p Command
SCSI device scanning behavior, such as to indicate that LUNs are not numbered consecutively. For information, see Options for SCSI Device Scanning (http://support.novell.com/techcenter/sdb/en/2005/06/drahn_scsi_scanning.html) in the Novell Support Knowledgebase. 5.4.2 Partitioning Multipathed Devices Partitioning devices that have multiple paths is not recommended, but it is supported.
For example, if your system contains a RAID controller accessed by the cciss driver and multipathed devices connected to a QLogic* controller accessed by the qla2xxx driver, this entry would look like:
INITRD_MODULES="cciss"
Because the QLogic driver is not automatically loaded on start-up, add it here:
INITRD_MODULES="cciss qla2xxx"
"Configuring Default Multipath Behavior in /etc/multipath.conf" on page 58
"Applying Changes Made to the /etc/multipath.conf File" on page 58
Creating the multipath.conf File
If the /etc/multipath.conf file does not exist, copy the example to create the file:
1 In a terminal console, log in as the root user.
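A sketch of the copy step, assuming the example configuration ships in the multipath-tools documentation directory (the exact file name and path can vary by package version, so check /usr/share/doc/packages/multipath-tools on your system):
cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf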
Configuring User-Friendly Names or Alias Names in /etc/multipath.conf A multipath device can be identified by either its WWID or an alias that you assign for it. The WWID (World Wide Identifier) is an identifier for the multipath device that is guaranteed to be globally unique and unchanging.
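The lines below are an illustrative /etc/multipath.conf fragment showing both approaches; the WWID and alias are placeholders, not values taken from this guide:
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid 36006048000028350131253594d303030
        alias oradata1
    }
}
With user_friendly_names enabled, devices typically appear under /dev/mapper with mpath<n> names; an explicit alias entry instead names the device for a specific WWID.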
devnode_blacklist {
    wwid 26353900f02796769
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|sda)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9].*"
}
You can also blacklist only the partitions from a driver instead of the entire array. For example, using the following regular expression would blacklist only partitions from the cciss driver and not the entire array:
^cciss!c[0-9]d[0-9]*[p[0-9]*]
After you modify the...
5.5 Enabling and Starting Multipath I/O Services
To start multipath services and enable them to start at reboot:
1 Open a terminal console, then log in as the root user or equivalent.
2 At the terminal console prompt, enter
chkconfig multipathd on
chkconfig boot.multipath on
If the boot.multipath service does not start automatically on system boot, do the following:
1 Open a terminal console, then log in as the...
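If you need to start the services manually in the running system, a sketch using the init scripts referenced elsewhere in this guide is:
/etc/init.d/boot.multipath start
/etc/init.d/multipathd start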
Policy Option | Description
multibus | All paths in one priority group.
group_by_serial | One priority group per detected serial number.
group_by_prio | One priority group per path priority value. Priorities are determined by callout programs specified as a global, per-controller, or per-multipath option in the /etc/multipath.conf configuration file.
Multipath Attribute | Description | Values
blacklist | Specifies the list of device names to ignore as non-multipathed devices, such as cciss, fd, hd, md, dm, sr, scd, st, ram, raw, loop. | For an example, see "Blacklisting Non-Multipathed Devices in /etc/multipath.conf."
blacklist_exceptions | Specifies the list of device | For an example, see the...
Multipath Attribute | Description | Values
path_checker | Determines the state of the path. | directio (Default in multipath-tools version 0.4.8 and later): Reads the first sector that has direct I/O. This is useful for DASD devices. Logs failure messages in /var/log/messages.
readsector0 (Default in multipath-tools version 0.4.7 and earlier): Reads the first sector of the device. Logs failure messages in /var/log/messages.
Multipath Attribute | Description | Values
prio_callout | Specifies the program and arguments to use to determine the layout of the multipath map. | If no prio_callout attribute is used, all paths are equal. This is the default.
/bin/true: Use this value when the group_by_priority is not being used.
Multipath Attribute | Description | Values
rr_min_io | Specifies the number of I/O transactions to route to a path before switching to the next path in the same path group, as determined by the specified algorithm in the setting. | n (>0): Specify an integer value greater than 0.
1000: Default.
5.6.3 Using a Script to Set Path Priorities
You can create a script that interacts with DM-MP to provide priorities for paths to the LUN when set as a resource for the prio_callout setting.
First, set up a text file that lists information about each device and the priority values you want to assign to each path.
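A minimal sketch of such a callout script, under the assumption that it is invoked with a device name argument (as configured by you on the prio_callout line) and that the lookup file /etc/mpath.prio holds "<device> <priority>" pairs; both the file name and the argument convention are illustrative, not taken from this guide:
#!/bin/sh
# Hypothetical prio_callout helper: print a priority for the path it is given.
PRIO_FILE=/etc/mpath.prio            # assumed file of "<device> <priority>" lines
dev="$1"                             # device name passed in by the callout
prio=`awk -v d="$dev" '$1 == d { print $2 }' "$PRIO_FILE"`
test -n "$prio" || prio=1            # default priority if the device is not listed
echo "$prio"
The script only needs to write an integer priority to standard output; DM-MP uses the value when grouping paths by priority.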
"Options" on page 66
"Return Values" on page 66
Syntax
mpath_prio_alua [-d directory] [-h] [-v] [-V] device [device...]
Prerequisite
SCSI devices
Options
-d directory
Specifies the Linux directory path where the listed device node names can be found. The default directory is . This
Priority Value | Description
| The device is in the standby group.
| All other groups.
Values are widely spaced because of the way the multipath command handles them. It multiplies the number of paths in a group with the priority value for the group, then selects the group with the highest result.
To enable multipathing on the existing root device:
1 Install Linux with only a single path active, preferably one where the by-id symlinks are listed in the partitioner.
2 Mount the devices by using the /dev/disk/by-id path used during the install.
3 After installation, add dm-multipath...
/etc/init.d/boot.multipath start
/etc/init.d/multipathd start
5 After the multipathing services are started, verify that the software RAID's component devices are listed in the /dev/disk/by-id directory. Do one of the following:
Devices Are Listed: The device names should now have symbolic links to their Device Mapper Multipath device names, such as /dev/dm-1.
Devices Are Not Listed: Force the multipath service to recognize them by flushing and...
2 On the Linux system, scan the SAN at a low level to discover the new devices. At a terminal console prompt, enter
echo 1 > /sys/class/fc_host/host<number>/issue_lip
For example, to probe the HBA on host1, enter
echo 1 > /sys/class/fc_host/host1/issue_lip
At this point, the newly added device is not known to the higher layers of the Linux kernel's SCSI subsystem and is not yet usable.
tail -33 /var/log/messages
5 Use a text editor to add a new alias definition for the device in the /etc/multipath.conf file, such as oradata3.
6 Create a partition table for the device by entering
fdisk /dev/dm-8
7 Add a /dev/dm-* link for the new partition by entering
/sbin/kpartx -a -p -part /dev/dm-8
8 Verify that the link was created by entering
0 queue_if_no_path
5.15 Additional Information
For more information about configuring and using multipath I/O on SUSE Linux Enterprise Server, see the following additional resources in the Novell Support Knowledgebase:
How to Setup/Use Multipathing on SLES (http://support.novell.com/techcenter/sdb/en/2005/04/sles_multipathing.html)
Troubleshooting SLES Multipathing (MPIO) Problems (Technical Information Document 3231766) (http://www.novell.com/support/...
Static Load Balancing in Device-Mapper Multipathing (DM-MP) (Technical Information Document 3858277) (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=3858277&sliceId=SAL_Public&dialogID=57872426&stateId=0%200%2057878058)
Troubleshooting SCSI (LUN) Scanning Issues (Technical Information Document 3955167) (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=3955167&sliceId=SAL_Public&dialogID=57868704&stateId=0%200%2057878206)
5.16 What's Next
If you want to use software RAIDs, create and configure them before you create file systems on the devices.
Managing Software RAIDs with EVMS This section describes how to create and manage software RAIDs with the Enterprise Volume Management System (EVMS). EVMS supports only RAIDs 0, 1, 4, and 5 at this time. For RAID 6 and 10 solutions, see Chapter 7, “Managing Software RAIDs 6 and 10 with mdadm,”...
Feature | Linux Software RAID | Hardware RAID
RAID processing | In the host server's processor | RAID controller on the disk array
RAID levels | 0, 1, 4, 5, and 10, plus the mdadm raid10 | Varies by vendor
Component devices | Disks from same or different disk array | Same disk array
6.1.2 Overview of RAID Levels
RAID Level | Description | Performance and Fault Tolerance
5 | Stripes data and distributes parity in a round-robin fashion across all disks. If disks are different sizes, the | Improves disk I/O performance for reads and writes. Write performance is considerably less than for RAID 0, because parity must be calculated and written. Write performance is faster than RAID 4.
RAID Level | Number of Disk Failures Tolerated | Data Redundancy
5 | One | Distributed parity to reconstruct data and parity on the failed disk.
6.1.5 Configuration Options for RAIDs
In EVMS management tools, the following RAID configuration options are provided:
Table 6-5 Configuration Options in EVMS
Option | Description
Support” in the SUSE Linux Enterprise Server 10 Installation and Administration Guide. (http:// www.novell.com/documentation/sles10). In general, each storage object included in the RAID should be from a different physical disk to maximize I/O performance and to achieve disk fault tolerance where supported by the RAID level you use.
6.1.8 RAID 5 Algorithms for Distributing Stripes and Parity
RAID 5 uses an algorithm to determine the layout of stripes and parity. The following table describes the algorithms.
Table 6-7 RAID 5 Algorithms (Algorithm, EVMS Type, Description)
Left Asymmetric: Stripes are written in a round-robin fashion from the first to last member segment.
For information about the layout of stripes and parity with each of these algorithms, see Linux RAID-5 Algorithms (http://www.accs.com/p_and_p/RAID/LinuxRAID.html). 6.1.9 Multi-Disk Plug-In for EVMS The Multi-Disk (MD) plug-in supports creating software RAIDs 0 (striping), 1 (mirror), 4 (striping with dedicated parity), and 5 (striping with distributed parity). The MD plug-in to EVMS allows you to manage all of these MD features as “regions”...
4 If segments have not been created on the disks, create a segment on each disk that you plan to use in the RAID. For x86 platforms, this step is optional if you treat the entire disk as one segment. For IA-64 platforms, this step is necessary to make the RAID 4/5 option available in the Regions Manager.
5d Specify values for Configuration Options by changing the following default settings as desired.
For RAIDs 1, 4, or 5, optionally specify a device to use as the spare disk for the RAID. The default is none.
For RAIDs 0, 4, or 5, specify the chunk (stripe) size in KB. The default is 32 KB.
For RAIDs 4/5, specify RAID 4 or RAID 5 (default).
5e Click Create to create the RAID device under the /dev/evms/md directory. The device is given a name such as md0, so its EVMS mount location is /dev/evms/md/md0.
6 Specify a human-readable label for the device.
6a Select Action > Create > EVMS Volume or Compatible Volume.
6b Select the device that you created in Step 5.
6c Specify a name for the device.
8b Select the RAID device you created in Step 5, such as /dev/evms/md/md0.
8c Specify the location where you want to mount the device, such as /home.
8d Click Mount.
9 Enable boot.evms to activate EVMS automatically at reboot.
9a In YaST, select System > System Services (Run Level).
9b Select Expert Mode.
6.3.2 Adding Segments to a RAID 4 or 5 If the RAID region is clean and operating normally, the kernel driver adds the new object as a regular spare, and it acts as a hot standby for future failures. If the RAID region is currently degraded, the kernel driver immediately activates the new spare object and begins synchronizing the data and parity information.
6.4.2 Adding a Spare Disk When You Create the RAID When you create a RAID 1, 4, or 5 in EVMS, specify the Spare Disk in the Configuration Options dialog box. You can browse to select the available device, segment, or region that you want to be the RAID’s spare disk.
synchronization of the replacement disk, write and read performance are both degraded. A RAID 5 can survive a single disk failure at a time. A RAID 4 can survive a single disk failure at a time if the disk is not the parity disk. Disks can fail for many reasons such as the following: Disk crash Disk pulled from the system...
Persistence : Superblock is persistent
Update Time : Tue Aug 15 18:31:09 2006
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 8a9f3d46:3ec09d23:86e1ffbc:ee2d0dd8
Events : 0.174164
Number Major Minor RaidDevice State
To assign a spare device to the RAID:
1 Prepare the disk as needed to match the other members of the RAID.
2 In EVMS, select Actions > Add > Spare Disk to a Region (the addspare plug-in for the EVMS GUI).
6.6.2 Monitoring Status with /proc/mdstat
A summary of RAID and status information (active/not active) is available in the /proc/mdstat file.
1 Open a terminal console, then log in as the root user or equivalent.
2 View the /proc/mdstat file by entering the following at the console prompt:
cat /proc/mdstat
3 Evaluate the information.
Replace mdx with the RAID device number.
Example 1: A Disk Fails
In the following example, only four of the five devices in the RAID are active (Raid Devices : 5; Total Devices : 4). When it was created, the component devices in the device were numbered 0 to 5 and are ordered according to their alphabetic appearance in the list where they were chosen, such as...
mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Apr 16 11:37:05 2006
Raid Level : raid5
Array Size : 35535360 (33.89 GiB 36.39 GB)
Device Size : 8883840 (8.47 GiB 9.10 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Apr 17 05:50:44 2006
The following table identifies RAID events and indicates which events trigger e-mail alerts. All events cause the program to run. The program is run with two or three arguments: the event name, the array device (such as /dev/md1), and possibly a second device. For Fail, FailSpare, and SpareActive, the second device is the relevant component device.
To configure an e-mail alert:
1 At a terminal console, log in as the root user.
2 Edit the /etc/mdadm/mdadm.conf file to add your e-mail address for receiving alerts. For example, specify the MAILADDR value (using your own e-mail address, of course):
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1c661ae4:818165c3:3f7a4661:af475fda...
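A minimal sketch of what the completed file might look like, with a placeholder address (MAILADDR takes a single address, which mdadm uses when it runs in monitor mode, for example mdadm --monitor --scan):
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1c661ae4:818165c3:3f7a4661:af475fda
MAILADDR admin@example.com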
umount <raid-device>
4 Stop the RAID device and its component devices by entering
mdadm --stop <raid-device>
mdadm --stop <member-devices>
For more information about using mdadm, please see the mdadm(8) man page.
5 Remove the md superblock from each member device by overwriting it with zeroes. Enter
mdadm --misc --zero-superblock <member-devices>...
Managing Software RAIDs 6 and 10 with mdadm
This section describes how to create software RAID 6 and 10 devices, using the Multiple Devices Administration (mdadm(8)) tool. You can also use mdadm to create RAIDs 0, 1, 4, and 5. The mdadm...
7.1.2 Creating a RAID 6
The procedure in this section creates a RAID 6 device /dev/md0 with four devices: /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1. Make sure to modify the procedure to use your actual device nodes.
1 Open a terminal console, then log in as the root user or equivalent.
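A sketch of a typical mdadm invocation for this layout, using standard mdadm syntax as an illustration rather than the guide's exact listing:
mdadm --create /dev/md0 --run --level=raid6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1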
The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus 0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O capability.
Table 7-2 RAID Levels Supported in EVMS
RAID Level | Description | Performance and Fault Tolerance
10 (1+0)
Table 7-3 Scenario for Creating a RAID 10 (1+0) by Nesting
Raw Devices | RAID 1 (mirror) | RAID 1+0 (striped mirrors)
/dev/sdb1, /dev/sdc1 | /dev/md0 | /dev/md2
/dev/sdd1, /dev/sde1 | /dev/md1 | /dev/md2
1 Open a terminal console, then log in as the root user or equivalent.
2 Create 2 software RAID 1 devices, using two different devices for each RAID 1 device.
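A sketch of the mdadm commands that implement this scenario (the chunk size is an illustrative choice, not a value specified here):
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1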
The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.
Table 7-4 Scenario for Creating a RAID 10 (0+1) by Nesting
Raw Devices | RAID 0 (stripe) | RAID 0+1 (mirrored stripes)
/dev/sdb1...
“Number of Replicas in the mdadm RAID10” on page 102 “Number of Devices in the mdadm RAID10” on page 102 “Near Layout” on page 103 “Far Layout” on page 103 Comparison of RAID10 Option and Nested RAID 10 (1+0) The complex RAID 10 is similar in purpose to a nested RAID 10 (1+0), but differs in the following ways: Complex vs.
Near Layout With the near layout, copies of a block of data are striped near each other on different component devices. That is, multiple copies of one data block are at similar offsets in different devices. Near is the default layout for RAID10. For example, if you use an odd number of component devices and two copies of data, some copies are perhaps one chunk further into the device.
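For instance, with four component devices and two replicas, the near layout places blocks roughly as follows (an illustrative sketch; the device names are examples only):
sda1  sdb1  sdc1  sdd1
  0     0     1     1
  2     2     3     3
  4     4     5     5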
. . .
7.3.2 Creating a RAID10 with mdadm
The RAID10-level option for mdadm creates a RAID 10 device without nesting. For information about the RAID10 level, see Section 7.3, "Creating a Complex RAID 10 with mdadm," on page 101.
The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.
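A sketch of the single mdadm command such a procedure builds up to, with hypothetical device names and an illustrative chunk size:
mdadm --create /dev/md3 --run --level=10 --chunk=32 --raid-devices=4 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1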
RAID Type | Allowable Number of Slots Missing
RAID 1 | All but one device
RAID 4 | One slot
RAID 5 | One slot
RAID 6 | One or two slots
To create a degraded array in which some devices are missing, simply give the word missing in place of a device name.
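For example, a sketch of creating a degraded RAID 1 mirror with only one of its two devices present (the device name is illustrative):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing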
Resizing Software RAID Arrays with mdadm
This section describes how to increase or reduce the size of a software RAID 1, 4, 5, or 6 device with the Multiple Device Administration (mdadm(8)) tool.
WARNING: Before starting any of the tasks described in this chapter, make sure that you have a valid backup of all of the data.
8.1.2 Overview of Tasks
Resizing the RAID involves the following tasks. The order in which these tasks are performed depends on whether you are increasing or decreasing its size.
Table 8-2 Tasks Involved in Resizing a RAID
Tasks | Description | Order If Increasing | Order If Decreasing
Table 8-3 Scenario for Increasing the Size of Component Partitions
RAID Device | Component Partitions
/dev/md0 | /dev/sda1, /dev/sdb1, /dev/sdc1
To increase the size of the component partitions for the RAID:
1 Open a terminal console, then log in as the root user or equivalent.
2 Make sure that the RAID array is consistent and synchronized by entering
cat /proc/mdstat
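A sketch of the usual per-partition cycle (remove one component, enlarge its partition, add it back, and let the RAID resynchronize before moving to the next), using standard mdadm syntax as an illustration rather than the guide's exact listing:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# enlarge the /dev/sda1 partition with your partitioning tool, then re-add it:
mdadm -a /dev/md0 /dev/sda1
Repeat for /dev/sdb1 and /dev/sdc1, checking cat /proc/mdstat between devices so that only one component is ever out of the array.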
The procedure in this section uses the device name /dev/md0 for the RAID device. Make sure to modify the name to use the name of your own device.
1 Open a terminal console, then log in as the root user or equivalent.
2 Check the size of the array and the device size known to the array by entering
mdadm -D /dev/md0 | grep -e "Array Size"...
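Once every component partition is larger, the array itself can be grown to use the additional space; a sketch with standard mdadm syntax (shown as an illustration):
mdadm --grow /dev/md0 --size=max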
To extend the file system to a specific size, enter
resize2fs /dev/md0 size
The size parameter specifies the requested new size of the file system. If no units are specified, the unit of the size parameter is the block size of the file system. Optionally, the size parameter can be suffixed by one of the following unit designators: s for 512 byte sectors;...
ReiserFS
As with Ext2 and Ext3, a ReiserFS file system can be increased in size while mounted or unmounted. The resize is done on the block device of your RAID array.
1 Open a terminal console, then log in as the root user or equivalent.
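A sketch of the resize_reiserfs call from reiserfsprogs; run without -s it grows the file system to fill the device, and the -s form (with a hypothetical value) grows it by a set amount:
resize_reiserfs /dev/md0
resize_reiserfs -s +10G /dev/md0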
In SUSE Linux Enterprise Server® 10 SP1, only Ext2, Ext3, and ReiserFS provide utilities for shrinking the size of the file system. Use the appropriate procedure below for decreasing the size of your file system. The procedures in this section use the device name /dev/md0 for the RAID device.
mount -t reiserfs /dev/md0 /mnt/point
5 Check the effect of the resize on the mounted file system by entering
df -h
The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system.
Replace the disk on which the partition resides with a different device. This option is possible only if no other file systems on the original disk are accessed by the system. When the replacement device is added back into the RAID, it takes much longer to synchronize the data.
Installing and Managing DRBD Services This section describes how to install, configure, and manage a device-level software RAID 1 across a network using DRBD* (Distributed Replicated Block Device) for Linux. Section 9.1, “Understanding DRBD,” on page 117 Section 9.2, “Installing DRBD Services,” on page 117 Section 9.3, “Configuring the DRBD Service,”...
1b Choose Software > Software Management.
1c Change the filter to Patterns.
1d Under Base Technologies, select High Availability.
1e Click Accept.
2 Install the drbd kernel modules on both servers.
2a Log in as the root user or equivalent, then open YaST.
2b Choose Software >...
7 After the block devices on both nodes are fully synchronized, format the DRBD device on the primary with a file system such as reiserfs. Any Linux file system can be used. For example, enter
mkfs.reiserfs -f /dev/drbd0
IMPORTANT: Always use the /dev/drbd<n> name in the command, not the actual device...
DRBD as a high availability service with HeartBeat 2.
For information about installing and configuring HeartBeat 2 for SUSE Linux Enterprise Server® 10, see the HeartBeat 2 Installation and Setup Guide (http://www.novell.com/documentation/sles10/hb2/data/hb2_config.html) on the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10).
Linux High-Availability Project
For information about installing and configuring HeartBeat 2 for SUSE Linux Enterprise Server® 10, see the HeartBeat 2 Installation and Setup Guide (http://www.novell.com/documentation/sles10/hb2/data/hb2_config.html) on the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10).
Troubleshooting Storage Issues This section describes how to work around known issues for EVMS devices, software RAIDs, multipath I/O, and volumes. Section 10.1, “Is DM-MP Available for the Boot Partition?,” on page 123 Section 10.2, “Rescue System Cannot Find Devices That Are Managed by EVMS,” on page 123 Section 10.3, “Volumes on EVMS Devices Do Not Appear After Reboot,”...
10.4 Volumes on EVMS Devices Do Not Appear When Using iSCSI If you have installed and configured an iSCSI SAN, and have created and configured EVMS disks or volumes on that iSCSI SAN, your EVMS volumes might not be visible or accessible after reboot. This problem is caused by EVMS starting before the iSCSI service.
For the most recent version of the SUSE Linux Enterprise Server Storage Administration Guide, see the Novell documentation Web site for SUSE Linux Enterprise Server 10 (http:// www.novell.com/documentation/sles10/#additional). In this section, content changes appear in reverse chronological order, according to the publication date.
A.2 November 24, 2008
Updates were made to the following sections. The changes are explained below.
Section A.2.1, "Managing Multipath I/O," on page 126
Section A.2.2, "Using UUIDs to Mount Devices," on page 126
A.2.1 Managing Multipath I/O
The following changes were made to this section:
Location | Change
Section 5.2.7, "Supported Storage Arrays...
A.4 March 20, 2008 (SLES 10 SP2)
Updates were made to the following section. The changes are explained below.
Section A.4.1, "Managing Multipath I/O," on page 127
A.4.1 Managing Multipath I/O
The following changes were made to this section:
Location | Change
Section 5.2.6, "Supported Architectures for Multipath I/O" | This section is new.