Creating Nested RAID 10 (1+0) with mdadm - SUSE Linux Enterprise Server 10 SP3 Storage Administration Guide (2-23-2010)

The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus
0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O
capability.

Table 7-2: Nested RAID Levels

RAID Level: 10 (1+0)
Description: RAID 0 (stripe) built with RAID 1 (mirror) arrays
Performance and Fault Tolerance: RAID 1+0 provides high levels of I/O performance, data
redundancy, and disk fault tolerance. Because each member device in the RAID 0 is mirrored
individually, multiple disk failures can be tolerated and data remains available as long as the
disks that fail are in different mirrors. You can optionally configure a spare for each underlying
mirrored array, or configure a spare to serve a spare group that serves all mirrors.

RAID Level: 10 (0+1)
Description: RAID 1 (mirror) built with RAID 0 (stripe) arrays
Performance and Fault Tolerance: RAID 0+1 provides high levels of I/O performance and data
redundancy, but slightly less fault tolerance than a 1+0. If multiple disks fail on one side of the
mirror, the other mirror remains available. However, if disks are lost concurrently on both sides
of the mirror, all data is lost.
This solution offers less disk fault tolerance than a 1+0 solution, but if you need to perform
maintenance or maintain the mirror at a different site, you can take an entire side of the mirror
offline and still have a fully functional storage device. Also, if you lose the connection between
the two sites, either site operates independently of the other. That is not true if you stripe the
mirrored segments, because the mirrors are managed at a lower level.
If a device fails, the RAID 0 on that side of the mirror fails because RAID 0 is not fault-tolerant.
Create a new RAID 0 to replace the failed side, then resynchronize the mirrors.
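The repair path described for RAID 0+1 can be sketched with mdadm's manage mode. The following
is a minimal illustration only, not a procedure from this guide: it assumes a hypothetical layout in
which /dev/md2 is the RAID 1 mirror built from two RAID 0 stripes /dev/md0 and /dev/md1 over the
partitions /dev/sdb1 through /dev/sde1, and that the failed disk has already been replaced. Adjust the
names to your own configuration.

  # Drop the failed RAID 0 side (here assumed to be /dev/md0) from the mirror.
  mdadm /dev/md2 --fail /dev/md0 --remove /dev/md0

  # Stop the failed stripe, then re-create it on working disks.
  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --run --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1

  # Add the rebuilt stripe back to the mirror; it resynchronizes from the surviving side.
  mdadm /dev/md2 --add /dev/md0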

7.2.2 Creating Nested RAID 10 (1+0) with mdadm

A nested RAID 1+0 is built by creating two or more RAID 1 (mirror) devices, then using them as
component devices in a RAID 0.
IMPORTANT: If you need to manage multiple connections to the devices, you must configure
multipath I/O before configuring the RAID devices. For information, see Chapter 5, "Managing
Multipath I/O for Devices," on page 43.
The procedure in this section uses the device names shown in the following table. Be sure to
modify the device names to match the names of your own devices.
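The scenario table is not reproduced here, but as a rough sketch of the procedure this section
describes, the following commands create two RAID 1 mirrors and then stripe them into a nested
RAID 1+0. The partition names /dev/sdb1 through /dev/sde1, the md device names, the 64 KB chunk
size, and the file system and mount point are assumptions for illustration; substitute your own
devices and settings.

  # Create two RAID 1 (mirror) devices from four partitions on four different disks.
  mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

  # Use the two mirrors as component devices in a RAID 0 (stripe) to form the 1+0.
  mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1

  # Create a file system on the nested device and mount it (examples only).
  mkfs.ext3 /dev/md2
  mkdir -p /mnt/raid10
  mount /dev/md2 /mnt/raid10

Depending on your setup, you might also record the arrays in /etc/mdadm.conf (for example, by
appending the output of mdadm --detail --scan) so that they are assembled at boot.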
