Further, Novell, Inc., reserves the right to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of such changes.
Novell Trademarks
For Novell trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/tmlist.html).
Third-Party Materials
All third-party trademarks and copyrights are the property of their respective owners.
SUSE Linux Enterprise Server 11 Installation and Administration Guide (http://www.novell.com/documentation/sles11).
Documentation Conventions
In Novell documentation, a greater-than symbol (>) is used to separate actions within a step and items in a cross-reference path. A trademark symbol (®, TM, etc.) denotes a Novell trademark. An asterisk (*) denotes a third-party trademark.
Overview of File Systems in Linux
SUSE® Linux Enterprise Server ships with a number of different file systems from which to choose, including Ext3, Ext2, ReiserFS, and XFS. Each file system has its own advantages and disadvantages. Professional high-performance setups might require a different choice of file system than a home user’s setup.
The terms data integrity and data consistency, when used in this section, do not refer to the consistency of the user space data (the data your application writes to its files). Whether this data is consistent must be controlled by the application itself. IMPORTANT: Unless stated otherwise in this section, all the steps required to set up or change partitions and file systems can be performed by using YaST.
1.2.2 Ext3 Ext3 was designed by Stephen Tweedie. Unlike all other next-generation file systems, Ext3 does not follow a completely new design principle. It is based on Ext2. These two file systems are very closely related to each other. An Ext3 file system can be easily built on top of an Ext2 file system. The most important difference between Ext2 and Ext3 is that Ext3 supports journaling.
This ensures that the Ext3 file system is recognized as such. The change takes effect after the next reboot.
3 To boot a root file system that is set up as an Ext3 partition, include the module ext3 in the initrd.
3a Edit /etc/sysconfig/kernel, adding...
Better Disk Space Utilization In ReiserFS, all data is organized in a structure called a B*-balanced tree. The tree structure contributes to better disk space utilization because small files can be stored directly in the B* tree leaf nodes instead of being stored elsewhere and just maintaining a pointer to the actual disk location.
The file system originally used by DOS is today used by various operating systems.
ncpfs: File system for mounting Novell® volumes over networks.
nfs: Network File System. Here, data can be stored on any machine in a network and access might be granted via a network.
However, this limit is still out of reach for the currently available hardware. 1.5 Additional Information File System Primer (http://wiki.novell.com/index.php/File_System_Primer) on the Novell Web site describes a variety of file systems for Linux. It discusses the file systems, why there are so many, and which ones are the best to use for which workloads and data.
Each of the file system projects described above maintains its own home page on which to find mailing list information, further documentation, and FAQs:
E2fsprogs: Ext2/3/4 Filesystem Utilities (http://e2fsprogs.sourceforge.net/)
Introducing Ext3 (http://www.ibm.com/developerworks/linux/library/l-fs7.html)
ReiserFSprogs (http://chichkin_i.zelnet.ru/namesys/)
XFS: A High-Performance Journaling Filesystem (http://oss.sgi.com/projects/xfs/)
OCFS2 Project (http://oss.oracle.com/projects/ocfs2/)
A comprehensive multipart tutorial about Linux file systems can be found at IBM developerWorks in the...
Storage and Volume Management in SUSE Linux Enterprise (http://www.novell.com/linux/volumemanagement/strategy.html).
For information about managing storage with EVMS2 on SUSE Linux Enterprise Server 10, see the SUSE Linux Enterprise Server 10 SP3: Storage Administration Guide (http://www.novell.com/documentation/sles10/stor_admin/data/bookinfo.html).
2.2 Ext3 as the Default File System
The Ext3 file system has replaced ReiserFS as the default file system recommended by the YaST tools at installation time and when you create file systems.
2.4 OCFS2 File System Is in the High Availability Release
The OCFS2 file system is fully supported as part of the SUSE Linux Enterprise High Availability Extension.
2.5 /dev/disk/by-name Is Deprecated
The /dev/disk/by-name path is deprecated in SUSE Linux Enterprise Server 11 packages.
2.6 Device Name Persistence in the /dev/disk/by-id Directory...
2.8 User-Friendly Names for Multipathed Devices
A change in how multipathed device names are handled in the /dev/disk/by-id directory (as described in Section 2.6, “Device Name Persistence in the /dev/disk/by-id Directory”) affects your setup for user-friendly names because the two names for the device differ. You must modify the configuration files to scan only the device mapper names after multipathing is configured.
Planning a Storage Solution
Consider what your storage needs are and how you can effectively manage and divide your storage space to best meet your needs. Use the information in this section to help plan your storage deployment for file systems on your SUSE® Linux Enterprise Server 11 server.
SLES 11. For a current list of possible backup and antivirus software vendors, see Novell Open Enterprise Server Partner Support: Backup and Antivirus Support (http://www.novell.com/products/openenterpriseserver/partners_communities.html). This list is updated quarterly.
LVM Configuration This section briefly describes the principles behind Logical Volume Manager (LVM) and its basic features that make it useful under many circumstances. The YaST LVM configuration can be reached from the YaST Expert Partitioner. This partitioning tool enables you to edit and delete existing partitions and create new ones that should be used with LVM.
Figure 4-1 compares physical partitioning (left) with LVM segmentation (right). On the left side, one single disk has been divided into three physical partitions (PART), each with a mount point (MP) assigned so that the operating system can access them. On the right side, two disks have been divided into two and three physical partitions each.
This value is normally set to 4 MB and allows for a maximum size of 256 GB for physical and logical volumes. The physical extent size should only be increased, for example, to 8, 16, or 32 MB, if you need logical volumes larger than 256 GB.
Figure 4-2: Creating a Volume Group
4.4 Configuring Physical Volumes...
Figure 4-3: Physical Volume Setup
To add a previously unassigned partition to the selected volume group, first click the partition, then click Add Volume. At this point, the name of the volume group is entered next to the selected partition. Assign all partitions reserved for LVM to a volume group. Otherwise, the space on the partition remains unused. A command-line sketch of these steps is shown below.
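The same result can also be achieved without YaST. The following is only a minimal command-line sketch; the partitions /dev/sdb1, /dev/sdc1, and /dev/sdd1 and the volume group name system are illustrative assumptions, and the -s option sets the physical extent size discussed above:
# Initialize the partitions as LVM physical volumes
pvcreate /dev/sdb1 /dev/sdc1
# Create a volume group with an 8 MB physical extent size
vgcreate -s 8M system /dev/sdb1 /dev/sdc1
# Later, add another physical volume to the existing volume group
pvcreate /dev/sdd1
vgextend system /dev/sdd1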
Figure 4-4: Logical Volume Management
To create a new logical volume (see Figure 4-5), click Add and fill out the pop-up that opens. For partitioning, specify the size, file system, and mount point. Normally, a file system, such as ReiserFS or Ext2, is created on a logical volume and is then designated a mount point. A command-line equivalent is sketched below.
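As a rough command-line sketch (not the YaST dialog itself), creating and mounting a logical volume might look like the following. The volume group name system, the volume name data, and the mount point /data are illustrative assumptions:
# Create a 20 GB logical volume named data in volume group system
lvcreate -L 20G -n data system
# Put an Ext3 file system on it and mount it
mke2fs -j /dev/system/data
mkdir -p /data
mount /dev/system/data /data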
Creating Logical Volumes Figure 4-5 If you have already configured LVM on your system, the existing logical volumes can be specified now. Before continuing, assign appropriate mount points to these logical volumes too. Click Next to return to the YaST Expert Partitioner and finish your work there. 4.6 Direct LVM Management If you already have configured LVM and only want to change something, there is an alternative method available.
To extend the size of a logical volume:
1 Open a terminal console, log in as the root user.
2 If the logical volume contains file systems that are hosted for a virtual machine (such as a Xen VM), shut down the VM.
3 Dismount the file systems on the logical volume.
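The remaining steps typically grow the volume and then the file system on it. As a hedged sketch only (assuming a hypothetical volume /dev/system/data carrying an Ext3 file system):
# Grow the logical volume by 2 GB
lvextend -L +2G /dev/system/data
# Check the file system, grow it to fill the enlarged volume, then remount
e2fsck -f /dev/system/data
resize2fs /dev/system/data
mount /dev/system/data /data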
Resizing File Systems When your data needs grow for a volume, you might need to increase the amount of space allocated to its file system. Section 5.1, “Guidelines for Resizing,” on page 35 Section 5.2, “Increasing an Ext2 or Ext3 File System,” on page 36 Section 5.3, “Increasing the Size of a Reiser File System,”...
When specifying an exact size for the file system, make sure the new size satisfies the following conditions: The new size must be greater than the size of the existing data; otherwise, data loss occurs. The new size must be equal to or less than the current device size because the file system size cannot extend beyond the space available.
5.3 Increasing the Size of a Reiser File System
A ReiserFS file system can be increased in size while mounted or unmounted.
1 Open a terminal console, then log in as the root user or equivalent.
2 Increase the size of the file system on the device called /dev/sda2, using one of the following methods:...
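The individual methods are not shown in the excerpt above. As a minimal sketch, assuming the resize_reiserfs tool from reiserfsprogs, the resize might look like this:
# Grow the file system to fill the whole device
resize_reiserfs /dev/sda2
# Or grow it by a specific amount, for example 500 MB
resize_reiserfs -s +500M /dev/sda2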
df -h
The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.
5.5 Decreasing the Size of a Reiser File System
Reiser file systems can be reduced in size only if the volume is unmounted.
Using UUIDs to Mount Devices
This section describes the optional use of UUIDs instead of device names to identify file system devices in the boot loader file and the /etc/fstab file.
Section 6.1, “Naming Devices with udev,” on page 39
Section 6.2, “Understanding UUIDs,”...
6.2.1 Using UUIDs to Assemble or Activate File System Devices The UUID is always unique to the partition and does not depend on the order in which it appears or where it is mounted. With certain SAN devices attached to the server, the system partitions are renamed and moved to be the last device.
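To see which UUID belongs to a device, you can query it directly. The following is an illustrative sketch using the standard blkid utility and the same UUID value that appears in the boot loader example that follows; the file system type shown is an assumption:
blkid /dev/sda1
/dev/sda1: UUID="e014e482-1c2d-4d09-84ec-61b3aefde77a" TYPE="ext3"
# A matching /etc/fstab entry would then look like:
UUID=e014e482-1c2d-4d09-84ec-61b3aefde77a  /  ext3  defaults  1 1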
For example, change
kernel /boot/vmlinuz root=/dev/sda1
to
kernel /boot/vmlinuz root=/dev/disk/by-uuid/e014e482-1c2d-4d09-84ec-61b3aefde77a
IMPORTANT: If you make a mistake, you can boot the server without the SAN connected, and fix the error by using the backup copy of the /boot/grub/menu.lst file as a guide.
If you use the Boot Loader option in YaST, there is a defect where it adds some duplicate lines to the boot loader file when you change a value.
IMPORTANT: Do not leave stray characters or spaces in the file.
6.5 Additional Information
For more information about using udev(8) for managing devices, see “Dynamic Kernel Device Management with udev” (http://www.novell.com/documentation/sles11/sles_admin/data/cha_udev.html) in the SUSE® Linux Enterprise Server 11 Installation and Administration Guide.
For more information about udev(8) commands, see its man page.
Managing Multipath I/O for Devices This section describes how to manage failover and path load balancing for multiple paths between the servers and block storage devices. Section 7.1, “Understanding Multipathing,” on page 43 Section 7.2, “Planning for Multipathing,” on page 44 Section 7.3, “Multipath Management Tools,”...
Typical connection problems involve faulty adapters, cables, or controllers. When you configure multipath I/O for a device, the multipath driver monitors the active connection between devices. When the multipath driver detects I/O errors for an active path, it fails over the traffic to the device’s designated secondary path.
Disk Management Tasks Perform the following disk management tasks before you attempt to configure multipathing for a physical or logical device that has multiple paths: Use third-party tools to carve physical disks into smaller logical disks. Use third-party tools to partition physical or logical disks. If you change the partitioning in the running system, the Device Mapper Multipath (DM-MP) module does not automatically detect and reflect these changes.
7.2.3 Using LVM2 on Multipath Devices
By default, LVM2 does not recognize multipathed devices. To make LVM2 recognize the multipathed devices as possible physical volumes, you must modify /etc/lvm/lvm.conf. It is important to modify it so that it does not scan and use the physical paths, but only accesses the multipath I/O storage through the multipath I/O layer.
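As an illustrative sketch only (the exact filter recommended by the guide is not reproduced here), the device filter in /etc/lvm/lvm.conf might be adjusted so that LVM2 accepts only Device Mapper multipath names and rejects the raw /dev/sd* paths:
# Accept device-mapper multipath devices, reject everything else (illustrative)
filter = [ "a|/dev/disk/by-id/dm-uuid-.*mpath-.*|", "r|.*|" ]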
7.2.6 Partitioning Multipath Devices
Behavior changes for how multipathed devices are handled might affect your configuration if you are upgrading.
“SUSE Linux Enterprise Server 11” on page 47
“SUSE Linux Enterprise Server 10” on page 47
“SUSE Linux Enterprise Server 9” on page 47
SUSE Linux Enterprise Server 11
In SUSE Linux Enterprise Server 11, the default multipath setup relies on udev to overwrite the...
“Tested Storage Arrays for Multipathing Support” on page 49
“Storage Arrays that Require Specific Hardware Handlers” on page 49
Storage Arrays That Are Automatically Detected for Multipathing
The multipath-tools package automatically detects the following storage arrays:
3PARdata VV
Compaq* HSV110
Compaq MSA1000
DDN SAN MultiDirector
DEC* HSG80
EMC* CLARiiON* CX...
Consider the following caveats:
Not all of the storage arrays that are automatically detected have been tested on SUSE Linux Enterprise Server. For information, see “Tested Storage Arrays for Multipathing Support” on page 49.
Some storage arrays might require specific hardware handlers. A hardware handler is a kernel module that performs hardware-specific actions when switching path groups and dealing with I/O errors.
DM-MP is the preferred solution for multipathing on SUSE Linux Enterprise Server 11. It is the only multipathing option shipped with the product that is completely supported by Novell® and SUSE. DM-MP features automatic configuration of the multipathing subsystem for a large variety of setups.
Table 7-1: Multipath I/O Features of Storage Arrays
Features of Storage Arrays | Description
Active/passive controllers | One controller is active and serves all LUNs. The second controller acts as a standby. The second controller also presents the LUNs to the multipath component so that the operating system knows about redundant paths.
For a list of files included in this package, see the multipath-tools Package Description (http://www.novell.com/products/linuxpackages/suselinux/multipath-tools.html).
1 Ensure that the multipath-tools package is installed by entering the following at a terminal...
For information about modifying the /etc/lvm/lvm.conf file, see Section 7.2.3, “Using LVM2 on Multipath Devices.”
7.3.4 The Linux multipath(8) Command
Use the Linux multipath(8) command to configure and manage multipathed devices.
General syntax for the multipath(8) command:
multipath [-v verbosity] [-d] [-h|-l|-ll|-f|-F] [-p failover|multibus|group_by_serial|group_by_prio|group_by_node_name]
General Examples
Configure multipath devices:...
multipath -p [failover|multibus|group_by_serial|group_by_prio|group_by_node_name]
Specify one of the group policy options that are described in Table 7-3:
Table 7-3: Group Policy Options for the multipath -p Command
Policy Option | Description
failover | One path per priority group. You can use only one path at a time.
multibus | All paths in one priority group.
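For orientation, a few common invocations of the multipath(8) command are sketched below; option meanings follow the syntax summary above, and map is a placeholder for a device map name:
multipath        # configure (or reconfigure) all multipath devices
multipath -ll    # show the current multipath topology in detail
multipath -f map # flush the named multipath device map
multipath -F     # flush all unused multipath device maps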
SCSI device scanning behavior, such as to indicate that LUNs are not numbered consecutively. For information, see Options for SCSI Device Scanning (http://support.novell.com/techcenter/sdb/en/2005/06/drahn_scsi_scanning.html) in the Novell Support Knowledgebase. 7.4.2 Partitioning Multipathed Devices Partitioning devices that have multiple paths is not recommended, but it is supported.
After changing /etc/sysconfig/kernel, you must re-create the INITRD on your system with the mkinitrd command, then reboot in order for the changes to take effect.
When you are using LILO as a boot manager, reinstall it with the /sbin/lilo command. No further action is required if you are using GRUB.
Creating the multipath.conf File
If the /etc/multipath.conf file does not exist, copy the example to create the file:
1 In a terminal console, log in as the root user.
2 Enter the following command (all on one line, of course) to copy the template:
cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf
3 Use the...
Configuring User-Friendly Names or Alias Names in /etc/multipath.conf
A multipath device can be identified by either its WWID or an alias that you assign for it. The WWID (Worldwide Identifier) is an identifier for the multipath device that is guaranteed to be globally unique and unchanging. An illustrative configuration snippet is shown below.
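The following /etc/multipath.conf fragment is only an illustrative sketch; the WWID and alias values are hypothetical, and the exact sections recommended by the guide may differ:
defaults {
  user_friendly_names yes
}
multipaths {
  multipath {
    wwid  36006048000028350131253594d303030
    alias oradata1
  }
}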
NOTE: The keyword devnode_blacklist has been deprecated and replaced with the keyword blacklist.
For example, to blacklist local devices and all arrays from the cciss driver from being managed by multipath, the blacklist section looks like this:
blacklist {
  wwid 26353900f02796769
  devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|sda)[0-9]*"...
defaults {
  dev_loss_tmo 90
  fast_io_fail_tmo 5
}
The dev_loss_tmo parameter sets the number of seconds to wait before marking a multipath link as bad. When the path fails, any current I/O on that failed path fails. The default value varies according to the device driver being used.
7.6 Configuring Path Failover Policies and Priorities
In a Linux host, when there are multiple paths to a storage controller, each path appears as a separate block device, and results in multiple block devices for a single LUN. The Device Mapper Multipath service detects multiple paths with the same LUN ID, and creates a new multipath device with that ID.
“Configuring for Single Path Failover” on page 66 “Grouping I/O Paths for Round-Robin Load Balancing” on page 66 Understanding Priority Groups and Attributes A priority group is a collection of paths that go to the same physical LUN. By default, I/O is distributed in a round-robin fashion across all paths in the group.
Multipath Attribute: path_grouping_policy
Description: Specifies the path grouping policy for a multipath device hosted by a given controller.
Values:
failover: One path is assigned per priority group so that only one path at a time is used.
multibus: (Default) All valid paths are in one priority group.
Multipath Attribute: path_selector
Description: Specifies the path-selector algorithm to use for load balancing.
Values:
round-robin 0: (Default) The load-balancing algorithm used to balance traffic across all active paths in a priority group.
Beginning in SUSE Linux Enterprise Server 11, the following additional I/O balancing options are available:
least-pending: Provides a least-pending-I/O dynamic load balancing policy for bio based...
Multipath Attribute: prio_callout
Description: Specifies the program and arguments to use to determine the layout of the multipath map. Multipath prio_callouts are located in shared libraries.
Values:
If no prio_callout attribute is used, all paths are equal. This is the default.
/bin/true: Use this value when the...
Multipath Attribute: rr_min_io
Description: Specifies the number of I/O transactions to route to a path before switching to the next path in the same path group, as determined by the specified algorithm in the path_selector setting.
Values:
n (>0): Specify an integer value greater than 0.
1000: Default.
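To show how these attributes fit together, here is an illustrative defaults section for /etc/multipath.conf; the particular values are examples, not the guide’s recommended settings:
defaults {
  path_grouping_policy multibus
  path_selector        "round-robin 0"
  rr_min_io            1000
}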
7.6.3 Using a Script to Set Path Priorities
You can create a script that interacts with Device Mapper - Multipath (DM-MP) to provide priorities for paths to the LUN when set as a resource for the prio_callout setting.
First, set up a text file that lists information about each device and the priority values you want to assign to each path.
“Options” on page 68
“Return Values” on page 68
Syntax
mpath_prio_alua [-d directory] [-h] [-v] [-V] device [device...]
Prerequisite
SCSI devices
Options
-d directory
Specifies the Linux directory path where the listed device node names can be found. The default directory is /dev.
Priority Value | Description
The device is in the standby group.
All other groups.
Values are widely spaced because of the way the multipath command handles them. It multiplies the number of paths in a group with the priority value for the group, then selects the group with the highest result.
3 After installation, add dm-multipath to /etc/sysconfig/kernel:INITRD_MODULES.
4 For System Z, before running mkinitrd, edit the /etc/zipl.conf file to change the by-path information in zipl.conf with the same by-id information that was used in /etc/fstab.
5 Re-run mkinitrd to update the image.
5 After the multipathing services are started, verify that the software RAID’s component devices are listed in the /dev/disk/by-id directory. Do one of the following:
Devices Are Listed: The device names should now have symbolic links to their Device Mapper Multipath device names, such as /dev/dm-1.
Devices Are Not Listed: Force the multipath service to recognize them by flushing and rediscovering the devices.
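As an illustrative sketch, flushing and rediscovering can be done with the multipath(8) options shown earlier:
multipath -F   # flush all unused multipath device maps
multipath      # rediscover and set up the multipath devices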
Options
For most storage subsystems, the script can be run successfully without options. However, some special cases might need to use one or more of the following parameters for the rescan-scsi-bus.sh script:
Option | Description
Activates scanning for LUNs 0-7. [Default: 0]
Activates scanning for LUNs 0 to NUM.
rescan-scsi-bus.sh [options]
3 Check for scanning progress in the system log (the /var/log/messages file). At a terminal console prompt, enter
tail -30 /var/log/messages
This command displays the last 30 lines of the log. For example:
# tail -30 /var/log/messages
. . .
Feb 14 01:03 kernel: SCSI device sde: 81920000
Feb 14 01:03 kernel: SCSI device sdf: 81920000
Feb 14 01:03 multipathd: sde: path checker registered...
mke2fs -j /dev/dm-9
tune2fs -L oradata3 /dev/dm-9
9 Restart DM-MP to let it read the aliases by entering
/etc/init.d/multipathd restart
10 Verify that the device is recognized by multipathd by entering
multipath -ll
11 Use a text editor to add a mount entry in the /etc/fstab file.
Whether the group is the first (highest priority) group
Paths contained within the group
The following information is displayed for each path:
The physical address as host:bus:target:lun, such as 1:0:1:2
Device node name, such as...
Major:minor numbers
Status of the device
7.13 Managing I/O in Error Situations
You might need to configure multipathing to queue I/O if all paths fail concurrently by enabling queue_if_no_path.
0 queue_if_no_path
7.15 Additional Information
For more information about configuring and using multipath I/O on SUSE Linux Enterprise Server, see the following additional resources in the Novell Support Knowledgebase:
How to Setup/Use Multipathing on SLES (http://support.novell.com/techcenter/sdb/en/2005/04/sles_multipathing.html)
Troubleshooting SLES Multipathing (MPIO) Problems (Technical Information Document 3231766) (http://www.novell.com/support/...
7.16 What’s Next
If you want to use software RAIDs, create and configure them before you create file systems on the devices. For information, see the following:
Chapter 8, “Software RAID Configuration,” on page 79
Chapter 10, “Managing Software RAIDs 6 and 10 with mdadm,” on page 89
Software RAID Configuration The purpose of RAID (redundant array of independent disks) is to combine several hard disk partitions into one large virtual hard disk to optimize performance, data security, or both. Most RAID controllers use the SCSI protocol because it can address a larger number of hard disks in a more effective way than the IDE protocol and is more suitable for parallel processing of commands.
8.1.2 RAID 1 This level provides adequate security for your data, because the data is copied to another hard disk 1:1. This is known as hard disk mirroring. If a disk is destroyed, a copy of its contents is available on another mirrored disk.
the same amount of space as the smallest sized partition. The RAID partitions should be stored on different hard disks to decrease the risk of losing data if one is defective (RAID 1 and 5) and to optimize the performance of RAID 0. After creating all the partitions to use with RAID, click RAID >...
Figure 8-2: File System Settings
As with conventional partitioning, set the file system to use as well as encryption and the mount point for the RAID volume. After completing the configuration with Finish, see the /dev/md0 device and others indicated with RAID in the Expert Partitioner.
8.3 Troubleshooting
Check the /proc/mdstat file to find out whether a RAID partition has been damaged.
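For example, a quick health check might look like this; the array name, member devices, and sizes are illustrative:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      8385536 blocks [2/2] [UU]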
Configuring Software RAID for the Root Partition
In SUSE® Linux Enterprise Server 11, the Device Mapper RAID tool has been integrated into the YaST Partitioner. You can use the partitioner at install time to create a software RAID for the system device that contains your root (/) partition.
3 On the Expert Partitioner page, expand Hard Disks in the System View panel to view the default proposal. 4 On the Hard Disks page, select Configure > Configure iSCSI, then click Continue when prompted to continue with initializing the iSCSI initiator configuration. 9.3 Enabling Multipath I/O Support at Install Time If there are multiple I/O paths to the devices you want to use to create a software RAID device for the root (/) partition, you must enable multipath support before you create the software RAID...
5c Under New Partition Size, specify to use the maximum size, then click Next. 5d Under Format Options, select Do not format partition, then select 0xFD Linux RAID from the drop-down list. 5e Under Mount Options, select Do not mount partition. 5f Click Finish.
6d Click Next. 6e Under RAID Options, select the chunk size from the drop-down list. The default chunk size for a RAID 1 (Mirroring) is 4 KB. The default chunk size for a RAID 0 (Striping) is 32 KB. Available chunk sizes are 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, 2 MB, or 4 MB.
The software RAID device is managed by Device Mapper, and creates a device under the /dev/md0 path.
7 On the Expert Partitioner page, click Accept.
The new proposal appears under Partitioning on the Installation Settings page.
8 Continue with the install.
Managing Software RAIDs 6 and 10 with mdadm
This section describes how to create software RAID 6 and 10 devices, using the Multiple Devices Administration (mdadm(8)) tool. You can also use mdadm to create RAIDs 0, 1, 4, and 5. The mdadm...
10.1.2 Creating a RAID 6
The procedure in this section creates a RAID 6 device /dev/md0 with four devices: /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1. Make sure to modify the procedure to use your actual device nodes.
1 Open a terminal console, then log in as the root user or equivalent.
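The creation command itself is not shown in the excerpt above. As a hedged sketch of how such an array is typically built with mdadm (the chunk size is illustrative):
mdadm --create /dev/md0 --run --level=raid6 --chunk=128 --raid-devices=4 \
  /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1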
The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus 0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O capability.
Table 10-2: Nested RAID Levels
RAID Level | Description | Performance and Fault Tolerance
10 (1+0) | RAID 0 (stripe)...
Table 10-3: Scenario for Creating a RAID 10 (1+0) by Nesting
Raw Devices | RAID 1 (mirror) | RAID 1+0 (striped mirrors)
/dev/sdb1, /dev/sdc1 | /dev/md0 | /dev/md2
/dev/sdd1, /dev/sde1 | /dev/md1 | /dev/md2
1 Open a terminal console, then log in as the root user or equivalent.
2 Create 2 software RAID 1 devices, using two different devices for each RAID 1 device.
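As a hedged sketch using the device names from the table above (the chunk size is illustrative), the nested 1+0 setup could be created like this:
# Two RAID 1 mirrors
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# Stripe the two mirrors together as RAID 0
mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1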
Table 10-4: Scenario for Creating a RAID 10 (0+1) by Nesting
Raw Devices | RAID 0 (stripe) | RAID 0+1 (mirrored stripes)
/dev/sdb1, /dev/sdc1 | /dev/md0 | /dev/md2
/dev/sdd1, /dev/sde1 | /dev/md1 | /dev/md2
1 Open a terminal console, then log in as the root user or equivalent.
2 Create two software RAID 0 devices, using two different devices for each RAID 0 device.
“Near Layout” on page 95
“Far Layout” on page 95
Comparing the Complex RAID10 and Nested RAID 10 (1+0)
The complex RAID 10 is similar in purpose to a nested RAID 10 (1+0), but differs in the following ways:
Table 10-5: Complex vs. Nested RAID 10
Feature | mdadm RAID10 Option...
Near Layout With the near layout, copies of a block of data are striped near each other on different component devices. That is, multiple copies of one data block are at similar offsets in different devices. Near is the default layout for RAID10. For example, if you use an odd number of component devices and two copies of data, some copies are perhaps one chunk further into the device.
sda1 sdb1 sdc1 sde1 sdf1 . . .
10.3.2 Creating a RAID 10 with mdadm
The RAID10 option for mdadm creates a RAID 10 device without nesting. For information about RAID10, see Section 10.3, “Creating a Complex RAID 10 with mdadm.” The procedure in this section uses the device names shown in the following table.
RAID Type | Allowable Number of Slots Missing
RAID 1 | All but one device
RAID 4 | One slot
RAID 5 | One slot
RAID 6 | One or two slots
To create a degraded array in which some devices are missing, simply give the word missing in place of a device name.
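For example, a degraded RAID 1 with only one present member might be created like this (a hedged sketch; the device name is illustrative):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing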
Resizing Software RAID Arrays with mdadm
This section describes how to increase or reduce the size of a software RAID 1, 4, 5, or 6 device with the Multiple Device Administration (mdadm(8)) tool.
WARNING: Before starting any of the tasks described in this section, make sure that you have a valid backup of all of the data.
11.1.2 Overview of Tasks
Resizing the RAID involves the following tasks. The order in which these tasks are performed depends on whether you are increasing or decreasing its size.
Table 11-2: Tasks Involved in Resizing a RAID
Tasks | Description | Order If Increasing | Order If Decreasing...
Table 11-3: Scenario for Increasing the Size of Component Partitions
RAID Device | Component Partitions
/dev/md0 | /dev/sda1, /dev/sdb1, /dev/sdc1
To increase the size of the component partitions for the RAID:
1 Open a terminal console, then log in as the root user or equivalent.
2 Make sure that the RAID array is consistent and synchronized by entering
cat /proc/mdstat
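The remaining steps follow the same pattern for each component partition: remove it from the array, enlarge it, then add it back. As a hedged sketch only (device names illustrative):
# Remove one component, letting the array run degraded
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# Enlarge the underlying partition with your partitioning tool, then re-add it
mdadm --add /dev/md0 /dev/sda1
# Wait for resynchronization before repeating with the next partition
cat /proc/mdstat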
The procedure in this section uses the device name /dev/md0 for the RAID device. Make sure to modify the name to use the name of your own device.
1 Open a terminal console, then log in as the root user or equivalent.
2 Check the size of the array and the device size known to the array by entering
mdadm -D /dev/md0 | grep -e "Array Size"...
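After the component partitions have been enlarged, the array itself can be told to use the new space. A hedged sketch using mdadm’s grow mode:
# Grow the array to the maximum space available on its components
mdadm --grow /dev/md0 -z max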
resize2fs /dev/md0 size
The size parameter specifies the requested new size of the file system. If no units are specified, the unit of the size parameter is the block size of the file system. Optionally, the size parameter can be suffixed by one of the following unit designators: s for 512 byte sectors;...
11.3 Decreasing the Size of a Software RAID
Before you begin, review the guidelines in Section 11.1, “Understanding the Resizing Process.”
Section 11.3.1, “Decreasing the Size of the File System,” on page 104
Section 11.3.2, “Decreasing the Size of Component Partitions,” on page 105
Section 11.3.3, “Decreasing the Size of the RAID Array,”...
umount /mnt/point
If the partition you are attempting to decrease in size contains system files (such as the root (/) volume), unmounting is possible only when booting from a bootable CD or floppy.
3 Decrease the size of the file system on the software RAID device called /dev/md0 by entering
resize_reiserfs -s size /dev/md0
cat /proc/mdstat
If your RAID array is still synchronizing according to the output of this command, you must wait until synchronization is complete before continuing.
3 Remove one of the component partitions from the RAID array. For example, to remove /dev/sda1, enter
mdadm -D /dev/md0 | grep -e "Array Size" -e "Device Size" 5 Do one of the following: If your array was successfully resized, you are done. If your array was not resized as you expected, you must reboot, then try this procedure again.
iSNS for Linux
Storage area networks (SANs) can contain many disk drives that are dispersed across complex networks. This can make device discovery and device ownership difficult. iSCSI initiators must be able to identify storage resources in the SAN and determine whether they have access to them. Internet Storage Name Service (iSNS) is a standards-based service that is available beginning with SUSE®...
Figure 12-1: iSNS Discovery Domains and Discovery Domain Sets
Both iSCSI targets and iSCSI initiators use iSNS clients to initiate transactions with iSNS servers by using the iSNS protocol. They then register device attribute information in a common discovery domain, download information about other registered clients, and receive asynchronous notification of events that occur in their discovery domain.
To install iSNS for Linux:
1 Start YaST and select Network Services > iSNS Server.
2 When prompted to install the isns package, click Install.
3 Follow the install dialog instructions to provide the SUSE Linux Enterprise Server 11 installation disks.
When the installation is complete, the iSNS Service configuration dialog opens automatically to the Service tab.
12.3 Configuring iSNS Discovery Domains
In order for iSCSI initiators and targets to use the iSNS service, they must belong to a discovery domain.
IMPORTANT: The iSNS service must be installed and running to configure iSNS discovery domains. For information, see Section 12.4, “Starting iSNS,”...
3 Click the Create Discovery Domain button. You can also select an existing discovery domain and click the Delete button to remove that discovery domain. 4 Specify the name of the discovery domain you are creating, then click OK. 5 Continue with Section 12.3.2, “Creating iSNS Discovery Domain Sets,”...
The Discovery Domain Set Members area lists all discovery domains that are assigned to a selected discovery domain set. Selecting a different discovery domain set refreshes the list with members from that discovery domain set. You can add and delete discovery domains from a selected discovery domain set.
3 Review the list of nodes to make sure that the iSCSI targets and initiators that you want to use the iSNS service are listed. If an iSCSI target or initiator is not listed, you might need to restart the iSCSI service on the node.
2 Click the Discovery Domains Set tab. 3 Select Create Discovery Domain Set to add a new set to the list of discovery domain sets. 4 Choose a discovery domain set to modify. 5 Click Add Discovery Domain, select the discovery domain you want to add to the discovery domain set, then click Add Discovery Domain.
Mass Storage over IP Networks: iSCSI One of the central tasks in computer centers and when operating servers is providing hard disk capacity for server systems. Fibre Channel is often used for this purpose. iSCSI (Internet SCSI) solutions provide a lower-cost alternative to Fibre Channel that can leverage commodity servers and Ethernet networking equipment.
Many storage solutions provide access over iSCSI, but it is also possible to run a Linux server that provides an iSCSI target. In this case, it is important to set up a Linux server that is optimized for file system services. The iSCSI target accesses block devices in Linux. Therefore, it is possible to use RAID solutions to increase disk space as well as a lot of memory to improve data caching.
13.2 Setting Up an iSCSI Target
SUSE® Linux Enterprise Server comes with an open source iSCSI target solution that evolved from the Ardis iSCSI target. A basic setup can be done with YaST, but to take full advantage of iSCSI, a manual setup is required.
each virtual disk from a physical disk or a partition. After you set up the virtual disks for the guest virtual machine, start the guest server, then configure the new blank virtual disks as iSCSI target devices by following the same process as for a physical server. File-backed disk images are created on the Xen host server, then assigned to the Xen guest server.
7c Click Add to add a new iSCSI target. The iSCSI target automatically presents an unformatted partition or block device and completes the Target and Identifier fields. 7d You can accept this, or browse to select a different space. You can also subdivide the space to create LUNs on the device by clicking Add and specifying sectors to allocate to that LUN.
The next menu configures the access restrictions of the target. The configuration is very similar to the configuration of the discovery authentication. In this case, at least an incoming authentication should be set up.
Next finishes the configuration of the new target, and brings you back to the overview page of the Target tab.
cat /proc/net/iet/volume
tid:1 name:iqn.2006-02.com.example.iserv:systems
lun:0 state:0 iotype:fileio path:/dev/mapper/system-v3
lun:1 state:0 iotype:fileio path:/dev/hda4
lun:2 state:0 iotype:fileio path:/var/lib/xen/images/xen-1
There are many more options that control the behavior of the iSCSI target. For more information, see the man page of ietd.conf.
Active sessions are also displayed in the /proc file system.
ietadm can also be used to change various configuration parameters. Obtain a list of the global variables with ietadm --op show --tid=1 --sid=0. The output looks like:
InitialR2T=Yes
ImmediateData=Yes
MaxConnections=1
MaxRecvDataSegmentLength=8192
MaxXmitDataSegmentLength=8192
MaxBurstLength=262144
FirstBurstLength=65536
DefaultTime2Wait=2
DefaultTime2Retain=20
MaxOutstandingR2T=1
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
ErrorRecoveryLevel=0
HeaderDigest=None
DataDigest=None...
13.3.1 Using YaST for the iSCSI Initiator Configuration The iSCSI Initiator Overview in YaST is divided into three tabs: Service: The Service tab can be used to enable the iSCSI initiator at boot time. It also offers to set a unique Initiator Name and an iSNS server to use for the discovery. The default port for iSNS is 3205.
If the server has iBFT (iSCSI Boot Firmware Table) support, the Initiator Name is completed with the corresponding value in the iBFT, and you are not able to change the initiator name in this interface. Use the BIOS Setup to modify it instead. The iBFT is a block of information containing various parameters useful to the iSCSI boot process, including iSCSI target and initiator descriptions for the server.
Setting the Start-up Preference for iSCSI Target Devices
1 In YaST, select iSCSI Initiator, then select the Connected Targets tab to view a list of the iSCSI target devices that are currently connected to the server.
2 Select the iSCSI target device that you want to manage.
3 Click Toggle Start-Up to modify the setting:
Automatic: This option is used for iSCSI targets that are to be connected when the iSCSI service itself starts up. This is the typical configuration.
the discovery or from the node database. Do this with the parameters -m discovery or -m node. Using iscsiadm with just one of these parameters gives an overview of the stored records:
iscsiadm -m discovery
149.44.171.99:3260,1 iqn.2006-02.com.example.iserv:systems
The target name in this example is iqn.2006-02.com.example.iserv:systems.
Important pages for more information about open-iscsi are:
Open-iSCSI Project (http://www.open-iscsi.org/)
AppNote: iFolder on Open Enterprise Server Linux Cluster using iSCSI (http://www.novell.com/coolsolutions/appnote/15394.html)
There is also some online documentation available. See the man pages for iscsiadm, iscsid...
Volume Snapshots A file system snapshot is a copy-on-write technology that monitors changes to an existing volume’s data blocks so that when a write is made to one of the blocks, the block’s value at the snapshot time is copied to a snapshot volume. In this way, a point-in-time copy of the data is preserved until the snapshot volume is deleted.
14.2 Creating Linux Snapshots with LVM
The Logical Volume Manager (LVM) can be used for creating snapshots of your file system.
1 Open a terminal console, log in as the root user, then enter
lvcreate -s -L 1G -n snap_volume source_volume_path
For example:
lvcreate -s -L 1G -n linux01-snap /dev/lvm/linux01
The snapshot is created as the...
Troubleshooting Storage Issues
This section describes how to work around known issues for devices, software RAIDs, multipath I/O, and volumes.
Section 15.1, “Is DM-MPIO Available for the Boot Partition?,” on page 133
15.1 Is DM-MPIO Available for the Boot Partition?
Documentation Updates This section contains information about documentation content changes made to the SUSE Linux ® Enterprise Server Storage Administration Guide since the initial release of SUSE Linux Enterprise Server 11. If you are an existing user, review the change entries to readily identify modified content. If you are a new user, simply read the guide in its current state.
A.2 January 20, 2010
Updates were made to the following section. The changes are explained below.
Section A.2.1, “Managing Multipath I/O,” on page 136
A.2.1 Managing Multipath I/O
Location | Change
“Configuring Default Multipath Behavior in /etc/multipath.conf” | In the command line, use the path default_getuid.
A.3.2 Resizing File Systems
The following changes were made to this section:
Location | Change
Section 5.1.1, “File Systems that Support Resizing,” on page 35 | The resize2fs utility supports online or offline resizing for the ext3 file system.
A.3.3 What’s New
The following change was made to this section:
Location | Change...
Location | Change
“Configuring Default Multipath Behavior in /etc/multipath.conf” on page 59 | Changed getuid_callout to getuid.
“Understanding Priority Groups and Attributes” on page 62 | Changed getuid_callout to getuid.
“path_selector” on page 64 | Added descriptions of least-pending, length-load-balancing, and service-time options.
A.4.3 What’s New
The following change was made to this section:
Location | Change...
Location | Change
Section 7.8, “Configuring Multipath I/O for the Root Device,” on page 69 | Added Step 4 on page 70 and Step 6 on page 70 for System Z.
Section 7.11, “Scanning for New Partitioned Devices without Rebooting,”... | Corrected the syntax for the command lines in Step 2.
A.7.1 Managing Multipath I/O
The following changes were made to this section:
Location | Change
“Storage Arrays That Are Automatically Detected for Multipathing” on page 48 | Testing of the IBM zSeries* device with multipathing has shown that the dev_loss_tmo parameter should be set to 90 seconds, and the fast_io_fail_tmo parameter should be set to 5 seconds.