SLES 10 Storage Administration Guide for EVMS
Novell SUSE Linux Enterprise Server
www.novell.com
Further, Novell, Inc., reserves the right to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of such changes.
Novell Trademarks
For a list of Novell trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/tmlist.html).
Third-Party Materials
All third-party trademarks and copyrights are the property of their respective owners. Some content in this document is copied, distributed, and/or modified from the following document under the terms specified in the document's license: SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10).
Documentation Conventions
In Novell documentation, a greater-than symbol (>) is used to separate actions within a step and items in a cross-reference path. A trademark symbol (®, TM, etc.) denotes a Novell trademark. An asterisk (*) denotes a third-party trademark.
Overview of EVMS
The Enterprise Volume Management System (EVMS) 2.5.5 for Linux* is an extensible storage management tool that integrates all aspects of volume management, such as disk partitioning, the Logical Volume Manager (LVM), the Multiple-Disk (MD) manager for software RAIDs, the Device Mapper (DM) for multipath I/O configuration, and file system operations.
NTFS (read only)
FAT (read only)
For more information about file systems supported in SUSE Linux Enterprise Server 10, see the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10).
1.4 Terminology
EVMS uses the following terminology in the EVMS user interface:...
Term | Description
Region | An ordered set of logically contiguous sectors that might or might not be physically contiguous. The underlying mapping can be to logical disks, disk segments, or other storage regions.
Feature (Feature Object, EVMS Feature) | A logically contiguous address space created from one or more disks, segments, regions, or other feature objects through the use of an EVMS feature.
Storage Object | Standard Location of the Device Node | EVMS Location of the Device Node
A software RAID device | /dev/md1 | /dev/evms/md/md1
An LVM volume | /dev/lvm_group/lvm_volume | /dev/evms/lvm/lvm_group/lvm_volume
Using EVMS to Manage Devices This section describes how to configure EVMS as the volume manager of your devices. Section 2.1, “Configuring the System Device at Install to Use EVMS,” on page 15 Section 2.2, “Configuring an Existing System Device to Use EVMS,” on page 21 Section 2.3, “Configuring LVM Devices After Install to Use EVMS,”...
Linux Enterprise Server 10, see "Large File System Support" in the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10).
Data Loss Considerations for the System Device
This install requires that you delete the default partitioning settings created by the install and create new partitions to use EVMS instead.
2.1.2 During the Server Install WARNING: The following install destroys all data on the system device. To install Linux with EVMS as the volume manager for your boot and system partitions, you must modify the Partitioning configuration in the Installation Settings. 1 Begin the install, according to the instructions provided in the “Deployment”...
4d Select Do Not Format, then select Linux LVM (0x8E) from the list of file system IDs. 4e In Size, set the cylinder End Value to 5 GB or larger, depending on the combined partition size you need to contain your system and swap volumes. You are creating a primary partition that becomes the EVMS container for these two volumes that you create later in this procedure.
6f Click OK. The swap volume is now listed as a volume in the lvm2/system container. 7 In the EVMS Configuration dialog box, create the root (/) volume in the lvm2/system container as follows: 7a From the EVMS Container drop-down menu, select lvm2/system, then click Add. 7b In the Create Logical Volume dialog box, select Format, then select the file system to use from the File System drop-down menu, such as Reiser.
2.1.3 After the Server Install After the SUSE Linux Enterprise Server 10 install is complete, you must perform the following tasks to ensure that the system device functions properly under EVMS: “Edit the /etc/fstab File” on page 20 “Disable the boot.lvm and boot.md Services” on page 21 “Enable the boot.evms Service”...
Disable the boot.lvm and boot.md Services
Disable the boot.lvm and boot.md services so they do not run at boot time (runlevel B); the boot.evms service now handles these tasks at boot.
1 In YaST, click System > System Services (Runlevel) > Expert Mode.
2 Select boot.lvm.
3 Click Set/Reset >...
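As an alternative to the YaST steps above, the same change can be made from a terminal console. A minimal sketch, assuming the chkconfig utility is available on your SLES 10 system (insserv can be used instead):

# disable the LVM and MD boot scripts and enable the EVMS boot script
chkconfig boot.lvm off
chkconfig boot.md off
chkconfig boot.evms on

You can verify the result with chkconfig --list before rebooting.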
If you do not configure the device to use EVMS, you must manage the device and all of its volumes with its current volume manager (the default is LVM), and free space on the device cannot be used for volumes you want to create using EVMS. Beginning with the Linux 2.6 kernel, any given device cannot be managed by multiple volume managers.
4 Click Finish, then click Yes.
The changes do not take effect until the server is restarted. Do not reboot at this time.
NOTE: Effective in SUSE Linux Enterprise 10, the /dev directory is on tmpfs and the device nodes are automatically re-created on boot. It is no longer necessary to modify the /etc/init.d/boot.evms script to delete the device nodes on system reboot as was required for previous versions of SUSE Linux.
2.2.4 Edit the Boot Loader File
When you boot the system, the kernel reads the boot loader file for information about your system. For GRUB, this is the /boot/grub/menu.lst file. For LILO, this is the /etc/lilo.conf file. You must edit the boot loader file to modify the mount location of partitions so they are mounted under the /dev/evms directory.
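As an illustration only (the device names here are hypothetical; use the names reported for your own system), a GRUB entry in /boot/grub/menu.lst that originally pointed at a native partition would change from something like

kernel /boot/vmlinuz root=/dev/sda2 ...

to the corresponding EVMS compatibility volume:

kernel /boot/vmlinuz root=/dev/evms/sda2 ...

Entries in /etc/fstab for the same partitions are adjusted the same way, as described in "Edit the /etc/fstab File".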
NOTE: Recent patches to mkinitrd might resolve the need to do this task. For the latest version of mkinitrd, see Recommended Updates for mkinitrd (http://support.novell.com/techcenter/psdb/24c7dfbc3e0c183970b70c1c0b3a6d7d.html) at the Novell Technical Support Center.
1 At a terminal console prompt, enter the EVMS Ncurses command as the root user or equivalent:
evmsn
2 Review the output to verify that EVMS shows only the /boot and swap partitions as active in EVMS.
2.2.6 Reboot the Server 1 Reboot the server to apply the post-install configuration settings. When your system reboots, the kernel loads the init-ramdisk, which runs the EVMS tools to activate your volumes and mount your root file system. Then your boot scripts run the EVMS tools once more to make sure your /dev/evms/ directory correctly reflects the current state of your volumes.
With these changes, each time your system boots, your file system is mounted using EVMS as the volume manager. 4 Update the boot scripts as follows: The command evms_activate must be run from your boot scripts in order to activate your volumes so they can be mounted.
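A minimal sketch of such a boot-script addition, assuming you place it in /etc/init.d/boot.local (the stock boot.evms init script normally performs this step for you):

# activate all EVMS volumes early in the boot sequence so they can be mounted
/sbin/evms_activate

Adjust the path if evms_activate is installed elsewhere on your system.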
2.6 Starting EVMS If EVMS does not start during the system boot, you must activate it manually. 1 Open a terminal console, then log in as the root user or equivalent. 2 At the terminal console prompt, enter evms_activate 2.7 Starting the EVMS Management Tools Use the following procedure to start the EVMS management tools.
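The management tools themselves are started from a terminal console as the root user. A brief sketch (tool names as shipped with EVMS 2.5.5; availability can vary with the packages installed):

evmsgui     # graphical (GTK) interface
evmsn       # text-mode (Ncurses) interface
evms        # command-line interpreter, suitable for scripting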
Mounting EVMS File System Devices by UUIDs This section discusses the optional use of UUIDs instead of device names to identify file system devices in the boot loader file and the /etc/fstab file. Section 3.1, “Naming Devices with udev,” on page 29 Section 3.2, “Understanding UUIDs,”...
3.2.1 Using UUIDs to Assemble or Activate File System Devices
The UUID is always unique to the partition and does not depend on the order in which it appears or where it is mounted. With certain SAN devices attached to the server, the system partitions can be renamed and moved to become the last devices detected.
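To see which UUID belongs to which device, you can list the udev-created symbolic links or query a partition directly. A short sketch (the device name is hypothetical):

ls -l /dev/disk/by-uuid
blkid /dev/sda1

The first command shows each UUID pointing to its current device node; the second prints the UUID stored in the file system on /dev/sda1.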
kernel /boot/vmlinuz root=/dev/disk/by-uuid/e014e482-1c2d-4d09-84ec-61b3aefde77a
IMPORTANT: Make a copy of the original boot entry, then modify the copy. If you make a mistake, you can boot the server without the SAN connected and fix the error. If you use the Boot Loader option in YaST, there is a defect where it adds some duplicate lines to the boot loader file when you change a value.
4b List all partitions by entering
4c Find the UUID, such as
e014e482-1c2d-4d09-84ec-61b3aefde77a —> /dev/sda1
5 Edit the boot loader file, using the Boot Loader option in YaST2. For example, change
root=/dev/sda1
to
root=/dev/disk/by-uuid/e014e482-1c2d-4d09-84ec-61b3aefde77a
6 Edit the /boot/efi/SuSE/elilo.conf file to modify the system device from the device location to the UUID.
Managing Devices This section discusses how to initialize a disk by adding a segment management container to manage the partitions that you later add to the disk. Section 4.1, “Understanding Disk Segmentation,” on page 33 Section 4.2, “Initializing Disks,” on page 34 Section 4.3, “Removing the Segment Manager from a Device,”...
Segment Manager | Description
... | A partitioning scheme for Mac-OS partitions.
4.1.2 Disk Segments
After you initialize the disk by adding a segment manager, you see metadata and free space segments on the disk. You can then create one or multiple data segments in a disk segment.
Table 4-2 Disk Segment Types
Segment Type...
4.2.2 Guidelines Consider the following guidelines when initializing a disk: EVMS might allow you to create segments without first adding a segment manager for the disk, but it is best to explicitly add a segment manager to avoid problems later. IMPORTANT: You must add a Cluster segment manager if you plan to use the devices for volumes that you want to share as cluster resources.
3b From the list, select one of the following types of segment manager, then click Next:
DOS Segment Manager (the most common choice)
GPT Segment Manager (for IA-64 platforms)
Cluster Segment Manager (available only if it is a viable option for the selected disk)
3c Select the device from the list of Plugin Acceptable Objects, then click Next.
Primary Partition: Click Yes for a primary partition, or click No for a logical partition. Required settings are denoted in the page by an asterisk (*). All required fields must be completed to make the Create button active. 5 Click Create to create the segment. 6 Verify that the new segment appears in the Segment list.
Fstab Option | Description
Data journaling mode | For journaling file systems, select the preferred journaling mode: ordered (default), journal, or writeback.
Ordered: Writes data to the file system, then enters the metadata in the journal.
Journal: Writes data twice; once to the journal, then to the file system.
Writeback: Writes data to the file system and writes metadata in the journal, but the writes are performed in any order.
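For reference, the journaling mode selected here ends up as a mount option in /etc/fstab. A hedged example line (device, mount point, and file system are hypothetical):

/dev/evms/sda3  /home  ext3  defaults,data=ordered  1 2

Replacing data=ordered with data=journal or data=writeback selects the other two modes.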
3 Modify the settings as desired, then click OK to accept your changes. 4.6 What’s Next If multiple paths exist between your host bus adapters (HBAs) and the storage devices, configure multipathing for the devices before creating software RAIDs or file system volumes on the devices. For information, see Chapter 5, “Managing Multipath I/O for Devices,”...
Managing Multipath I/O for Devices This section discusses how to configure multiple paths between the servers and storage devices for automatic failover and optional load balancing. Section 5.1, “Understanding Multipathing,” on page 41 Section 5.2, “Before You Begin,” on page 44 Section 5.3, “Adding multipathd to the Boot Sequence,”...
For a list of supported storage subsystems that allow multiple paths to be detected automatically, see "10.1 Supported Hardware" in the SUSE Linux Enterprise Server 10 Administration Guide (http://www.novell.com/documentation/sles10/sles_admin/data/sec_mpio_supported_hw.html). When configuring devices for multipathing, use the device names in the /dev/disk/by-id directory instead of the default device names (such as /dev/sd*), because the /dev/disk/by-id names persist over reboots.
This does not work for multipathed devices, because all physical device nodes must be omitted and only the devices in /dev/disk/by-id should be scanned; these are the correct multipathed devices. If a previous MD installation exists, modify mdadm.conf to handle the devices correctly by ID instead of by device node path, as shown in the sketch below.
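A minimal sketch of the relevant mdadm.conf line (the by-id pattern is an assumption; match it to the actual names under /dev/disk/by-id on your system):

# /etc/mdadm.conf: scan only the persistent multipathed device names
DEVICE /dev/disk/by-id/*

With this in place, mdadm assembles arrays from the multipathed devices instead of the underlying physical paths.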
For a list of files included in this package, see the multipath-tools Package Description (http://www.novell.com/products/linuxpackages/suselinux/multipath-tools.html). Ensure that the multipath-tools package is installed by entering the following at a terminal console prompt:
rpm -q multipath-tools
5.2 Before You Begin...
LUNs are not numbered consecutively. For information, see Options for SCSI Device Scanning (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=http-- supportnovellcom-techcenter-sdb-en-2005-06-drahnscsiscanninghtml&sliceId=) in the Novell Technical Support Knowledgebase. 5.2.2 Partitioning Devices that Have Multiple Paths Partitioning devices that have multiple paths is not recommended. However, if you want to partition the device, you should configure its partitions using fdisk or YaST2 before configuring multipathing.
5.3 Adding multipathd to the Boot Sequence
If you are using multipath I/O services, add multipathd to the boot sequence, using one of the following methods:
Section 5.3.1, "YaST," on page 46
Section 5.3.2, "Command Line," on page 46
5.3.1 YaST
1 In YaST, click System >...
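A sketch of the command-line method (service names as used on SLES 10; verify them with chkconfig --list or insserv before relying on this):

insserv boot.multipath multipathd

This adds the boot.multipath and multipathd init scripts to the boot sequence.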
5.6 Configuring Multipath I/O for the Root Device In the initial release, the root partition on multipath is supported only if the /boot partition is on a separate, non-multipathed partition. Otherwise, no bootloader is written. NOTE: This issue is resolved in updates since the initial release. To enable multipathing on the existing root device: 1 Install Linux with only a single path active, preferably one where the by-id symlinks are listed in the partitioner.
5.8 Configuring Multipathing for an Existing Software RAID
Ideally, you should configure multipathing for devices before you use them as components of a software RAID device. If you add multipathing after creating any software RAID devices, the multipath service might start after the MD service on reboot, which makes multipathing appear to be unavailable for RAIDs.
The devices should now be listed in /dev/disk/by-id, and have symbolic links to their Device Mapper device names. For example: lrwxrwxrwx 1 root root 10 Jun 15 09:36 scsi-mpath1 -> ../../dm-1 6 Restart the boot.md service and the RAID device by entering /etc/init.d/boot.md start 7 Check the status of the software RAID by entering mdadm --detail /dev/md0...
7 Save your changes, then close the file.
5.10 Managing I/O in Error Situations
You might need to configure multipathing to queue I/O if all paths fail concurrently. In certain scenarios, where the driver, the HBA, or the fabric experiences spurious errors that can lead to a loss of all paths, it is advisable to configure DM MPIO to queue all I/O in that situation and never propagate errors upward.
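One way to express this, assuming you manage the setting in /etc/multipath.conf (shown here in the defaults section; it can also be set per device):

defaults {
        features "1 queue_if_no_path"
}

Restart the multipath services after editing the file so the change takes effect.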
2 Reactivate queueing by entering the following command at a terminal console prompt:
dmsetup message mapname 0 queue_if_no_path
5.12 Additional Information
For more information about configuring and using multipath I/O on SUSE Linux Enterprise Server, see How to Setup/Use Multipathing on SLES (http://support.novell.com/techcenter/sdb/en/2005/04/sles_multipathing.html) in the Novell Technical Support Knowledgebase.
5.13 What's Next
If you want to use software RAIDs, create and configure them before you create file systems on the devices.
Managing Software RAIDs with EVMS This section discusses how to create and manage software RAIDs with the Enterprise Volume Management System (EVMS). EVMS supports only RAIDs 0, 1, 4, and 5 at this time. For RAID 6 and 10 solutions, see Chapter 7, “Managing Software RAIDs 6 and 10 with mdadm,”...
Feature | Linux Software RAID | Hardware RAID
RAID processing | In the host server's processor | RAID controller on the disk array
RAID levels | 0, 1, 4, 5, and 10 plus the mdadm raid10 | Varies by vendor
Component devices | Disks from the same or different disk array | Same disk array
6.1.2 Overview of RAID Levels...
RAID Level | Description | Performance and Fault Tolerance
4 | Stripes data and records parity to a dedicated disk. If disks are different sizes, the smallest disk limits the usable capacity of each member disk. | Improves disk I/O performance for both reads and writes. Write performance is considerably slower than for RAID 0, because parity must be calculated and written.
For information about file system limits for SUSE Linux Enterprise Server® 10, see "Large File System Support" in the SUSE Linux Enterprise Server 10 Installation and Administration Guide (http://www.novell.com/documentation/sles10).
In general, each storage object included in the RAID should be from a different physical disk to maximize I/O performance and to achieve disk fault tolerance where supported by the RAID level you use. In addition, they should be of the same type (disks, segments, or regions). Using component devices of differing speeds might introduce a bottleneck during periods of demanding I/O.
Table 6-7 RAID 5 Algorithms
Algorithm | EVMS Type | Description
Left Asymmetric | | Stripes are written in a round-robin fashion from the first to the last member segment. The parity's position in the striping sequence moves in a round-robin fashion from last to first. The original table illustrates this with member segments sda1, sdb1, sdc1, and sde1.
Left Symmetric | | This is the default setting and is considered the fastest method for...
6.1.8 Multi-Disk Plug-In for EVMS The Multi-Disk (md) plug-in supports creating software RAIDs 0 (striping), 1 (mirror), 4 (striping with dedicated parity), and 5 (striping with distributed parity). The MD plug-in to EVMS allows you to manage all of these MD features as “regions” with the Regions Manager. 6.1.9 Device Mapper Plug-In for EVMS The Device Mapper plug-in supports the following features in the EVMS MD Region Manager: Multipath I/O: Connection fault tolerance and load balancing for connections between the...
For IA-64 platforms, this step is necessary to make the RAID 4/5 option available in the Regions Manager. For information about creating segments, see Section 4.4, “Creating Disk Segments (or Partitions),” on page 4a Select Action > Create > Segment to open the DOS Segment Manager. 4b Select the free space segment you want to use.
5d Specify values for Configuration Options by changing the following default settings as desired. For RAIDs 1, 4, or 5, optionally specify a device to use as the spare disk for the RAID. The default is none. For RAIDs 0, 4, or 5, specify the chunk (stripe) size in KB. The default is 32 KB. For RAIDs 4/5, specify RAID 4 or RAID 5 (default).
For information about these settings, see “Configuration Options for RAIDs” on page 5e Click Create to create the RAID device under the /dev/evms/md directory. The device is given a name such as md0, so its EVMS mount location is /dev/evms/ md/md0.
8a Select Action > File System > Mount. 8b Select the RAID device you created in Step 5, such as /dev/evms/md/md0. 8c Specify the location where you want to mount the device, such as /home. 8d Click Mount. 9 Enable boot.evms to activate EVMS automatically at reboot. 9a In YaST, select System >...
6.3.2 Adding Segments to a RAID 4 or 5
If the RAID region is clean and operating normally, the kernel driver adds the new object as a regular spare, and it acts as a hot standby for future failures. If the RAID region is currently degraded, the kernel driver immediately activates the new spare object and begins synchronizing the data and parity information.
6.4.2 Adding a Spare Disk When You Create the RAID When you create a RAID 1, 4, or 5 in EVMS, specify the Spare Disk in the Configuration Options dialog box. You can browse to select the available device, segment, or region that you want to make the RAID’s spare disk.
can survive a single disk failure at a time. A RAID 4 can survive a single disk failure at a time if the disk is not the parity disk. Disks can fail for many reasons, such as the following:
Disk crash
Disk pulled from the system
Drive cable removed or loose
I/O errors...
Failed Devices : 0
Spare Devices : 0
UUID : 8a9f3d46:3ec09d23:86e1ffbc:ee2d0dd8
Events : 0.174164
Number Major Minor RaidDevice State
removed
active sync /dev/sdb2
The "Total Devices : 1", "Active Devices : 1", and "Working Devices : 1" indicate that only one of the two devices is currently active.
6 Monitor the status of the RAID to verify that the process has begun. For information about how to monitor RAID status, see Section 6.6, "Monitoring Status for a RAID," on page
7 Continue with Section 6.5.4, "Removing the Failed Disk," on page
6.5.4 Removing the Failed Disk
You can remove the failed disk at any time after it has been replaced with the spare disk.
Status Information: Personalities : [raid5] [raid4]
Description: List of the RAIDs on the server by RAID label.
Interpretation: You have two RAIDs defined with labels of raid5 and raid4.
Status Information: md0 : active
Description: <device> : <active | not active> <RAID label you specified>...
Interpretation: The RAID is active and mounted at /dev/evms/...
Version : 00.90.03
Creation Time : Sun Apr 16 11:37:05 2006
Raid Level : raid5
Array Size : 35535360 (33.89 GiB 36.39 GB)
Device Size : 8883840 (8.47 GiB 9.10 GB)
Raid Devices : 5
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Apr 17 05:50:44 2006
State : clean, degraded...
Persistence : Superblock is persistent
Update Time : Mon Apr 17 05:50:44 2006
State : clean, degraded, recovering
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 128K
Rebuild Status : 3% complete
UUID : 2e686e87:1eb36d02:d3914df8:db197afe
Events : 0.189...
Table 6-8 RAID Events in mdadm
RAID Event | Trigger E-Mail Alert | Description
Device Disappeared | | An md array that was previously configured appears to no longer be configured. (syslog priority: Critical) If mdadm was told to monitor an array which is RAID0 or Linear, then it reports DeviceDisappeared with the extra information Wrong-Level.
MAILADDR yourname@example.com The MAILADDR line gives an e-mail address that alerts should be sent to when mdadm is running in --monitor mode with the --scan option. There should be only one MAILADDR line in mdadm.conf, and it should have only one address. 3 Start mdadm monitoring by entering the following at the terminal console prompt: mdadm --monitor --mail=yourname@example.com --delay=1800 /dev/md0 The --monitor option causes mdadm to periodically poll a number of md arrays and to...
Managing Software RAIDs 6 and 10 with mdadm This section discusses how to create software RAID 6 and 10 devices, using the Multiple Devices Administration (mdadm(8)) tool. You can also use mdadm to create RAIDs 0, 1, 4, and 5. The mdadm tool provides the functionality of legacy programs mdtools and raidtools.
7.1.2 Creating a RAID 6 The following procedure creates a RAID 6 with four devices: /dev/sda1, /dev/sdb1, /dev/ sdc1, and /dev/sdd1. Make sure to modify the procedure to use your actual device nodes. 1 Open a terminal console, then log in as the root user or equivalent. 2 Create 2 software RAID 0 devices, using two different devices for each RAID 0 device.
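A minimal sketch of the mdadm command for the four-device RAID 6 described in Section 7.1.2 (device nodes taken from the scenario above; adapt them to your system):

mdadm --create /dev/md0 --run --level=raid6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

After creation, format /dev/md0 with the file system of your choice and add it to /etc/fstab as usual.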
The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus 0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O capability. RAID Levels Supported in EVMS Table 7-2 RAID Level Description Performance and Fault Tolerance 10 (1+0)
Table 7-3 Scenario for Creating a RAID 10 (1+0) by Nesting
Devices | RAID 1 (mirror) | RAID 1+0 (striped mirrors)
/dev/sdb1, /dev/sdc1 | /dev/md0 | /dev/md2
/dev/sdd1, /dev/sde1 | /dev/md1 | /dev/md2
1 Open a terminal console, then log in as the root user or equivalent.
2 Create 2 software RAID 1 devices, using two different devices for each RAID 1 device, as shown in the sketch below.
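A hedged sketch of the three mdadm calls implied by Table 7-3 (the chunk size is an example value, not a requirement):

# two RAID 1 mirrors from the four partitions
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
# stripe the two mirrors together as the RAID 1+0 device
mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1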
The following procedure uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.
Table 7-4 Scenario for Creating a RAID 10 (0+1) by Nesting
Devices | RAID 0 (stripe) | RAID 0+1 (mirrored stripes)
/dev/sdb1, ... | /dev/md0 | ...
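A hedged sketch of the corresponding mdadm calls (only /dev/sdb1 and /dev/md0 survive in the table above; the remaining device names are assumptions chosen to mirror the 1+0 scenario):

# two RAID 0 stripes
mdadm --create /dev/md0 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdd1 /dev/sde1
# mirror the two stripes as the RAID 0+1 device
mdadm --create /dev/md2 --run --level=1 --raid-devices=2 /dev/md0 /dev/md1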
7.3.1 Understanding the mdadm RAID10 In mdadm, the RAID10 level creates a single complex software RAID that combines features of both RAID 0 (striping) and RAID 1 (mirroring). Multiple copies of all data blocks are arranged on multiple drives following a striping discipline. Component devices should be the same size. “Comparison of RAID10 Option and Nested RAID 10 (1+0)”...
of replicas of each data block. The effective storage size is the total size of the component devices divided by the number of replicas. For example, if you specify 2 replicas for an array created with 5 component devices, a copy of each block is stored on two different devices.
Far layout with an odd number of disks and two replicas: the original guide illustrates this block layout across member devices sda1, sdb1, sdc1, sde1, and sdf1.
7.3.2 Creating a RAID10 with mdadm
The RAID10-level option for mdadm creates a RAID 10 device without nesting. For information about the RAID10-level, see Section 7.3, "Creating a Complex RAID 10 with mdadm,"...
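A minimal sketch of a single-command RAID10 (device names and chunk size are examples; --layout is optional and defaults to the near layout with two replicas):

mdadm --create /dev/md0 --run --level=10 --chunk=32 --layout=n2 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1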
7.4 Creating a Degraded RAID Array A degraded array is one in which some devices are missing. Degraded arrays are supported only for RAID 1, RAID 4, RAID 5, and RAID 6. These RAID types are designed to withstand some missing devices as part of their fault-tolerance features.
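For example, a degraded RAID 1 can be created by listing the keyword missing in place of the absent device (a sketch; the device name is hypothetical):

mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 missing

The missing device can be added later with mdadm /dev/md0 --add <device> once it is available.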
Linux High-Availability Project
For information about installing and configuring Heartbeat 2 for SUSE Linux Enterprise Server® 10, see the Heartbeat 2 Installation and Setup Guide (http://www.novell.com/documentation/sles10/hb2/data/hb2_config.html) on the Novell Documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10).
2a Log in as the root user or equivalent, then open YaST. 2b Choose Software > Software Management. 2c Change the filter to Search. 2d Type drbd, then click Search. 2e Select all of the drbd-kmp-* packages. 2f Click Accept. 8.3 Configuring the DRBD Service NOTE: The following procedure uses the server names node 1 and node 2, and the cluster resource name r0.
drbdadm adjust r0
Make sure there are no errors before continuing.
5 Configure node 1 as the primary node by entering
drbdsetup /dev/drbd0 primary --do-what-I-say
6 Start the DRBD service on both systems by entering the following on each node:
service drbd start
7 Check the DRBD service status by entering the following on each node:
service drbd status
8 Format the DRBD device on the primary with a file system such as reiserfs.
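As an illustration of step 8 (reiserfs chosen only because the text mentions it; any supported file system works), on the primary node:

mkfs.reiserfs /dev/drbd0

Format the device only on the primary; the secondary receives the data through DRBD replication.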
drbdadm secondary r0 12c On node 1, promote the DRBD service to primary by entering drbdadm primary r0 12d On node 1, check to see if node 1 is primary by entering service drbd status 13 To get the service to automatically start and fail over if the server has a problem, you can set up DRBD as a high availability service with HeartBeat 2.
Troubleshooting EVMS Devices, RAIDs, and Volumes This section describes how to work around known issues for EVMS devices, software RAIDs, multipath I/O, and volumes. Section 9.1, “EVMS Volumes Might Not Appear When Using iSCSI,” on page 89 Section 9.2, “Device Nodes Are Not Automatically Re-Created on Restart,” on page 89 9.1 EVMS Volumes Might Not Appear When Using iSCSI If you have installed and configured an iSCSI SAN, and have created and configured EVMS Disks...
echo -en "\nDeleting devices nodes"
rm -rf /dev/evms
mount -n -o remount,ro /
rc_status -v
3 Save the file.
4 Continue with "Reboot the Server" on page
For the most recent version of the SUSE Linux Enterprise Server Storage Administration Guide for EVMS, see the Novell documentation Web site for SUSE Linux Enterprise Server 10 (http://www.novell.com/documentation/sles10). In this section, content changes appear in reverse chronological order, according to the publication date.
Location | Change
"... Root Partition," on page 25 | Recommended Updates for mkinitrd (http://support.novell.com/techcenter/psdb/24c7dfbc3e0c183970b70c1c0b3a6d7d.html) at the Novell Technical Support Center.
A.2.3 Managing Multipathing for Devices and Software RAIDs
The following changes were made to this section:
Location | Change
Section 5.1.3, "Guidelines...
Location | Change
Section 5.1.5, "Device Mapper Multipath I/O Module," on page 43 | Changes were made for clarification.
Section 5.1.4, "Device Mapper," on page 42 | DEVICES should be DEVICE.
Section 5.2.3, "Configuring mdadm.conf and lvm.conf to Scan Devices by UUID," on page 45 | DEVICES should be DEVICE.
Section 5.6, "Configuring Multipath I/O for the Root Device" | This section is new.