Large throughput workload
In a large throughput environment, it typically does not take a high number of disks to reach
the maximum sustained throughput. Because this type of workload usually consists of
sequential I/O, which reduces disk latency, in most cases about 20 to 28 drives are enough to
reach the maximum throughput.
This does, however, require that the drives be spread evenly across the DS3500 to best utilize
the system bandwidth. The storage subsystem firmware is optimized to give increased
throughput when the load is spread across all of its parts, so bringing all the DS3500
resources into play is extremely important. Keeping the drive loops and buses busy with high
data throughput is the winning approach. This is also a perfect model for using high capacity
drives, because the goal is to push a large volume of data, most likely as large blocks of
sequential reads and writes.
Consider building smaller arrays of 4+P or 8+P drives, each with a single logical drive, for
higher combined throughput. If multiple logical drives must be created on an array, it is best
that their number not exceed the number of data drives in the array: the higher the number of
logical drives, the greater the chance of contention for the drives.
Best practice: For high throughput, logical drives should be built on arrays with 4+1 or
8+1 drives in them when using RAID 5. The number of data drives multiplied by the
segment size must equal the host I/O blocksize for a full stripe write. Use multiple logical
drives on separate arrays for maximum throughput.
An example configuration for this environment is a single logical drive/array with 16+1
parity 300 GB disks doing all the transfers through one single path and controller. An
alternative consists of two 8+1 parity arrays defined to the two controllers using separate
paths, running two separate streams of heavy throughput in parallel and filling all the
channels and resources at the same time, which keeps the whole server busy at the cost of
one additional drive.
Further improvements can be gained by splitting the two 8+1 parity arrays into four 4+1 parity
arrays, giving four streams, although three additional drives are needed. A main consideration
here is to plan the array data drive count to be a number such that the host I/O blocksize can
be spread evenly using one of the storage subsystem's segment size selections, which
enables the full stripe write capability discussed in the next section.
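
As a rough illustration of this planning step, the following Python sketch checks which combinations of data drive count and segment size line up with a given host I/O blocksize so that a full stripe write is possible. The segment size list and the function name are assumptions made for the example, not values taken from the DS3500 firmware.

   # Sketch: a full stripe write requires data drives x segment size to equal
   # the host I/O blocksize. The candidate segment sizes below are example
   # values only, not an exhaustive list of DS3500 selections.

   CANDIDATE_SEGMENT_SIZES_KB = [8, 16, 32, 64, 128, 256, 512]

   def full_stripe_options(host_io_kb, data_drive_counts=(4, 8)):
       """Return (data drives, segment size KB) pairs matching the host I/O size."""
       options = []
       for drives in data_drive_counts:
           for segment_kb in CANDIDATE_SEGMENT_SIZES_KB:
               if drives * segment_kb == host_io_kb:
                   options.append((drives, segment_kb))
       return options

   # A host issuing 512 KB I/Os could use 4+1 with 128 KB segments or
   # 8+1 with 64 KB segments and still achieve full stripe writes.
   print(full_stripe_options(512))   # [(4, 128), (8, 64)]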
Regardless of the workload type or the RAID type being used for the array group, building the
array with as equal a number of odd and even drive slots as possible is advantageous to the
performance of the array and its LUNs. This is frequently done by using a "diagonal" or
"orthogonal" layout across all the expansion enclosures, which also attains enclosure loss
protection (shown in Figure 3-20 on page 62).
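
A simplified sketch of such a layout follows; it assumes a round-robin pick across enclosures and invents the enclosure and slot data for illustration, rather than reflecting the Storage Manager interface.

   # Sketch: spread array members round-robin ("diagonally") across the
   # available enclosures so that a single enclosure failure affects as few
   # members as possible. The enclosure/slot data is purely illustrative.

   def diagonal_layout(free_slots, members_needed):
       """free_slots: dict of enclosure id -> list of free slot numbers."""
       picks = []
       while len(picks) < members_needed:
           progress = False
           for enclosure in sorted(free_slots):
               if len(picks) == members_needed:
                   break
               if free_slots[enclosure]:
                   picks.append((enclosure, free_slots[enclosure].pop(0)))
                   progress = True
           if not progress:
               raise ValueError("not enough free slots for the requested array")
       return picks

   # Five members (4+1 parity) spread across three enclosures:
   free = {1: [1, 2, 3, 4], 2: [1, 2, 3, 4], 3: [1, 2, 3, 4]}
   print(diagonal_layout(free, 5))
   # [(1, 1), (2, 1), (3, 1), (1, 2), (2, 2)]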

3.3.4 Hot Spare drives

A hot spare drive is like a replacement drive installed in advance. Hot spare disk drives
provide additional protection that might prove to be essential in case of a disk drive failure in a
fault tolerant array.
When possible, split the hot spares so that they are in separate enclosures and are not on the
same drive loops (see Figure 3-19 on page 60).
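
As a small hedged example of this guideline, the check below flags hot spare placements that share an enclosure or sit on a single drive loop; the (enclosure, loop) tuples are invented sample data.

   # Sketch: confirm hot spares are split across enclosures and drive loops,
   # as recommended above. The placement data is made up for the example.

   def hot_spares_split(spares):
       """spares: list of (enclosure id, drive loop id) for each hot spare."""
       enclosures = [enclosure for enclosure, _ in spares]
       loops = {loop for _, loop in spares}
       shares_enclosure = len(enclosures) != len(set(enclosures))
       single_loop = len(spares) > 1 and len(loops) == 1
       return not shares_enclosure and not single_loop

   print(hot_spares_split([(1, "A"), (3, "B")]))   # True  - split as recommended
   print(hot_spares_split([(2, "A"), (2, "A")]))   # False - same enclosure and loop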

Advertisement

Table of Contents
loading

Table of Contents