Processor Utilization; Effects Of Optional Features; Future Expansion - IBM N Series Hardware Manual

14.2.4 Processor utilization

Generally, a high processor load on a storage controller is not, on its own, a good indicator
of a performance problem. This is due both to the averaging that occurs on multi-core,
multi-processor hardware and to the fact that the system might be running low-priority
housekeeping tasks while otherwise idle (such tasks are preempted to service client I/O).
One of the benefits of Data ONTAP 8.1 is that it takes better advantage of the modern
multi-processor controller hardware.
The optimal initial plan would be for 50% average utilization, with peak periods of 70%
processor utilization. In a two-node storage cluster, this configuration allows the cluster to
fail over to a single node with no performance degradation.
If the processors are regularly running at a much higher utilization (for example, 90%), then
performance might still be acceptable. However, expect some performance degradation in a
fail-over scenario because 90% + 90% adds up to a 180% load on the remaining controller.
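The failover arithmetic above can be sketched as a small planning calculation. This is an illustrative sketch only; the function names and the 100% peak limit are assumptions for the example, not part of Data ONTAP.

```python
def combined_failover_load(util_a: float, util_b: float) -> float:
    """Load the surviving node must absorb if its partner fails.

    Utilizations are fractions of one node's capacity (0.9 == 90%).
    """
    return util_a + util_b


def survives_failover(util_a: float, util_b: float,
                      peak_limit: float = 1.0) -> bool:
    """True if a single node can carry both workloads without saturating."""
    return combined_failover_load(util_a, util_b) <= peak_limit


# Planning at 50% average per node: the survivor runs at 100% of capacity.
print(survives_failover(0.5, 0.5))   # True
# Running both nodes at 90%: the survivor would need 180% of its capacity.
print(survives_failover(0.9, 0.9))   # False
```

The same check applies to any shared-nothing two-node pair: the sum of steady-state utilizations must stay at or below one node's capacity if a failover is to be transparent.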

14.2.5 Effects of optional features

A few optional features affect early planning. Most notably, heavy use of the SnapMirror
option can consume large amounts of processor resources. These resources are removed
directly from the pool available for serving user and application data, so the result is an
apparent overall reduction in performance. SnapMirror can affect available disk I/O
bandwidth and network bandwidth as well. Therefore, if heavy, constant use of SnapMirror is
planned, adjust these factors accordingly.
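The sizing adjustment described above amounts to subtracting the replication overhead from each resource budget. The sketch below illustrates the bookkeeping; all numbers are hypothetical planning fractions, not measured SnapMirror costs.

```python
def remaining_resources(budget: dict, snapmirror_use: dict) -> dict:
    """Fraction of each resource left for client I/O after replication.

    Both arguments map resource names to fractions of total capacity.
    """
    return {k: round(budget[k] - snapmirror_use.get(k, 0.0), 3)
            for k in budget}


# Planned peak budgets per resource (fractions of capacity) -- assumptions.
budget = {"cpu": 0.70, "disk_iops": 1.0, "net_bw": 1.0}
# Assumed constant SnapMirror overhead -- illustrative, not measured.
snapmirror = {"cpu": 0.15, "disk_iops": 0.10, "net_bw": 0.20}

print(remaining_resources(budget, snapmirror))
# {'cpu': 0.55, 'disk_iops': 0.9, 'net_bw': 0.8}
```

In practice, the overhead figures would come from measuring the replication workload during a pilot; the point is that every resource dimension (CPU, disk I/O, network) must be budgeted, not just the processor.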

14.2.6 Future expansion

Many of the resources of the storage system can be expanded dynamically. However, you can
make this expansion easier and less disruptive by planning for possible future requirements
from the start.
Adding disk drives is one simple example. The disk drives and shelves themselves are all
hot-pluggable, and can be added or replaced without service disruption. But if, for example,
all available space in a rack is used by completely full disk shelves, how does a disk drive get
added?
Where possible, a good practice is to avoid fully populating disk shelves from the beginning.
It is much more flexible to install a new storage system with two half-full disk shelves
attached to it rather than a single full shelf. The added cost is generally minimal, and is quickly
recovered the first time additional disks are added.
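The flexibility argument can be made concrete by counting open drive slots. The 24-slot shelf size below is an assumption for illustration; actual N series shelf capacities vary by model.

```python
SHELF_SLOTS = 24  # assumed slots per shelf; varies by shelf model


def free_slots(shelves: int, drives: int) -> int:
    """Open drive bays available for non-disruptive expansion."""
    return shelves * SHELF_SLOTS - drives


# One completely full shelf: no room to grow without racking a new shelf.
print(free_slots(1, 24))   # 0
# Same drive count spread across two half-full shelves: 24 open slots.
print(free_slots(2, 24))   # 24
```

The same drive count either leaves zero expansion headroom or a full shelf's worth, which is the trade-off the paragraph above describes.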
Similar consideration can be given to allocating network resources. For instance, if a storage
system has two available gigabit Ethernet interfaces, it is good practice to install and
configure both interfaces from the beginning. Commonly, one interface is configured for actual
production use and one as a standby in case of failure. However, it is also possible (given a
network environment that supports this) to configure both interfaces to be in use and provide
mutual failover protection to each other. This arrangement provides additional insurance:
because both interfaces are constantly exercised, you will not discover at the moment of
failure that the standby interface is broken.
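The difference between the two interface configurations can be modeled as a simple bandwidth calculation. This is a conceptual sketch, not ONTAP configuration; the function and its parameters are assumptions for illustration.

```python
def usable_bandwidth(links_up: int, link_gbps: float = 1.0,
                     active_active: bool = True) -> float:
    """Aggregate usable bandwidth across two configured interfaces.

    In active/standby mode only one link carries traffic even when
    both are healthy; in active/active (mutual failover) mode every
    healthy link carries traffic.
    """
    if active_active:
        return links_up * link_gbps
    return min(links_up, 1) * link_gbps


# Both links healthy: active/active doubles usable bandwidth.
print(usable_bandwidth(2, active_active=True))    # 2.0
print(usable_bandwidth(2, active_active=False))   # 1.0
# One link failed: both modes degrade to a single link.
print(usable_bandwidth(1, active_active=True))    # 1.0
```

Both modes survive a single link failure; the active/active arrangement simply uses (and therefore continuously verifies) both links in the meantime.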
Overall, it is valuable to consider how the environment might change in the future and to
engineer in flexibility from the beginning.
Chapter 14. Designing an N series solution
