failed drive in the RAID10, rebuilding the array is mostly seamless. For example, look at
the array on the second row to the right. P-Drive1 failed, but P-Drive2 is still working
and holds the same data. The array will pull data from P-Drive2 during the rebuild, so
the array can be used normally while P-Drive2 copies ALL of its data to the drive
replacing P-Drive1. The rebuild will only copy 1TB worth of data, because only one
member of one mirror set failed. There will be a performance hit during the rebuild,
which can be prolonged if VERY data-intensive applications are running, but the array
will still be more than fast enough to use effectively while it rebuilds. RAID10 also
rebuilds much more quickly than its predecessor, RAID0+1.
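The difference in rebuild cost can be expressed as a short sketch. The Python below is illustrative only, not anything EVGA's RAID firmware exposes; the function names, the 1TB drive size, and the two-drives-per-stripe layout are assumptions carried over from the example above.

```python
# Illustrative sketch only: how much data a rebuild must copy after a
# single-drive failure, assuming equal-size drives. Function names and
# layout parameters are hypothetical, not part of any EVGA tool.

TB = 10**12  # decimal terabyte, as drive vendors count capacity

def raid10_rebuild_bytes(drive_size: int) -> int:
    """RAID10: only the failed drive's mirror partner is read back,
    so the rebuild copies one drive's worth of data."""
    return drive_size

def raid01_rebuild_bytes(drive_size: int, drives_per_stripe: int) -> int:
    """RAID0+1: the surviving stripe set is rewritten in full onto the
    repaired stripe set, so the rebuild copies the whole array capacity."""
    return drive_size * drives_per_stripe

print(raid10_rebuild_bytes(1 * TB))      # 1 TB copied, as described above
print(raid01_rebuild_bytes(1 * TB, 2))   # 2 TB copied for a 4-drive array
```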
RAID0+1: RAID0+1 is a form of nested RAID that was widely used on previous-generation
boards. Although the X299 series motherboards do not use this type of
array, it is described here to show the improvements made by RAID10, and to clear up a
common misconception that RAID0+1 and RAID10 are the same.
A RAID0+1 array is created from two (2) stripe sets that are mirrored together. Like
RAID10, RAID0+1 requires a minimum of four drives and scales in two-drive
increments. Again, because RAID0+1 is a mirrored array, it has the same 50%
usable capacity, meaning that four 1TB drives in RAID0+1 will result in a 2TB array.

Where RAID0+1 differs from RAID10 is in how the drives are split and how the data is
distributed. While RAID10 is created from two or more mirror sets striped together,
RAID0+1 is two stripe sets mirrored together. When scaling with additional drives (in
multiples of two), RAID10 adds the new drives as another mirror set in the stripe,
whereas RAID0+1 splits the new drives between the two stripe sets to maintain the
mirror. To the end-user, the final result appears very similar; however, the significant
differences lie in fault tolerance and recovery.
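One way to see the structural difference is to map a logical block to the two drives that hold its copies. The sketch below assumes a hypothetical four-drive array with drives named D0 through D3 and a naive round-robin stripe; a real controller adds chunk sizes, metadata, and alignment rules.

```python
# Minimal sketch, assuming four drives (D0..D3) and a naive stripe.

def raid10_copies(block: int) -> tuple:
    # RAID10: stripe across mirror pairs (D0,D1) and (D2,D3);
    # both copies of a block live inside one mirror pair.
    pair = block % 2
    return (f"D{2 * pair}", f"D{2 * pair + 1}")

def raid01_copies(block: int) -> tuple:
    # RAID0+1: stripe set A = (D0,D1) mirrored against set B = (D2,D3);
    # the two copies of a block live in different stripe sets.
    member = block % 2
    return (f"D{member}", f"D{member + 2}")

for block in range(4):
    print(block, raid10_copies(block), raid01_copies(block))
```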
In a RAID0+1, ANY single drive failure effectively fails half of the array. If one drive
fails, its entire stripe set fails, and the mirrored stripe set takes over. When the failed
drive is replaced, the entire capacity of the surviving stripe set must be rewritten to the
repaired stripe set, rather than one drive's worth of data as in RAID10. This makes a
RAID0+1 array more fragile than RAID10, despite being fault tolerant, and its rebuild
times grow with the full capacity of the array rather than with the size of a single drive.

Like RAID10, RAID0+1 can afford to lose up to half the drives in the array and still be
protected; however, this is contingent on all of the failed drives belonging to the same
stripe set. If one drive fails in each stripe set, the entire array is lost.
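The failure rules in the last two paragraphs reduce to a pair of short predicates. Again a sketch under the same hypothetical four-drive layout: RAID10 survives as long as every mirror pair keeps one live drive, while RAID0+1 survives only while at least one stripe set is fully intact.

```python
# Hedged sketch of the fault-tolerance rules above, not controller code.

RAID10_MIRROR_PAIRS = [{"D0", "D1"}, {"D2", "D3"}]  # pairs striped together
RAID01_STRIPE_SETS  = [{"D0", "D1"}, {"D2", "D3"}]  # stripes mirrored together

def raid10_survives(failed: set) -> bool:
    # Every mirror pair must keep at least one working drive.
    return all(pair - failed for pair in RAID10_MIRROR_PAIRS)

def raid01_survives(failed: set) -> bool:
    # At least one stripe set must remain completely untouched.
    return any(not (stripe & failed) for stripe in RAID01_STRIPE_SETS)

print(raid10_survives({"D0", "D2"}))  # True: one failure in each mirror pair
print(raid01_survives({"D0", "D2"}))  # False: both stripe sets are damaged
print(raid01_survives({"D0", "D1"}))  # True: failures confined to one stripe set
```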