Other Effects Of Snapshot; Capacity Overhead Versus Performance - IBM N Series Hardware Manual

14.2.2 Other effects of Snapshot

It is important to understand the potential effects of creating and retaining Snapshots on both the N series controller and any associated servers and applications. Snapshot creation must also be coordinated with the attached servers and applications to ensure data integrity. The effect of Snapshots depends on the following factors:
N series controller:
– Negligible effect on controller performance: N series Snapshots use a redirect-on-write design, which avoids most of the performance penalty normally associated with Snapshot creation and retention (as seen in traditional copy-on-write snapshots on other platforms).
– Incremental capacity is required to retain changed data: Snapshot technology optimizes storage because only changed blocks are retained. For file access, the change rate is typically in the 1–5% range. For database applications, the change rate might be similar, but in some cases it can be as high as 100%.
Server (SAN-attached):
– Minor effect on server performance when the Snapshot is created (to ensure file system and LUN consistency).
– Negligible ongoing effect on performance to retain the Snapshots.
Application (SAN or NAS attached):
– Minor effect on application performance when the Snapshot is created (to ensure data consistency). This effect depends on the Snapshot frequency: once or several times per day might be acceptable, but more frequent Snapshots can have an unacceptable effect on application performance.
– Negligible ongoing effect on performance to retain the Snapshots.
Workstation (NAS attached):
– No effect on workstation performance. Frequent Snapshots are possible because NAS file system consistency is managed by the N series controller.
– Negligible ongoing effect on performance to retain the Snapshots.
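The incremental-capacity point above can be turned into a rough planning estimate. The following Python sketch is illustrative only (the function name and the simplifying assumption that each day's changed blocks are distinct are ours, not an IBM sizing tool):

```python
def estimate_snapshot_overhead_tb(volume_tb, daily_change_rate, retained_days):
    """Estimate extra capacity consumed by retained daily Snapshots.

    volume_tb         -- active data in the volume (TB)
    daily_change_rate -- fraction of blocks changed per day (e.g. 0.03 for 3%)
    retained_days     -- number of daily Snapshots kept

    Simplification: assumes each day's changed blocks are distinct,
    so overhead scales linearly with the retention window (worst case).
    """
    # Only changed blocks are retained, so overhead grows with the
    # change rate and the number of Snapshots kept.
    return volume_tb * daily_change_rate * retained_days

# A 10 TB file-serving volume at a 3% daily change rate, keeping
# 7 daily Snapshots, needs roughly 2.1 TB of extra capacity.
overhead = estimate_snapshot_overhead_tb(10, 0.03, 7)
print(f"{overhead:.1f} TB")  # -> 2.1 TB
```

At a 100% change rate (the database worst case mentioned above), a single retained Snapshot can consume as much capacity as the volume itself.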

14.2.3 Capacity overhead versus performance

There is considerable commercial pressure to make efficient use of the physical storage media. However, there are also times when using more disk spindles delivers better performance. Consider an example where 100 TB of usable capacity is provisioned on two different arrays:
– 100% raw-to-usable efficiency requires 100 x 1 TB disks, with each disk supporting perhaps 80 IOPS, for a total of 8000 physical IOPS.
– 50% raw-to-usable efficiency requires 200 x 1 TB disks, with each disk supporting perhaps 80 IOPS, for a total of 16,000 physical IOPS.
Obviously this is a simplistic example. Much of the difference might be masked behind the
controller's fast processor and cache memory. But it is important to consider the number of
physical disk spindles when designing for performance.
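The arithmetic in the example above can be sketched as follows. This is a simplified model for illustration only: the 1 TB disk size and 80 IOPS per disk are the example's assumed figures, not fixed hardware values.

```python
import math

def physical_iops(usable_tb, raw_to_usable_efficiency,
                  disk_tb=1.0, iops_per_disk=80):
    """Return (spindle count, aggregate physical IOPS) for a target
    usable capacity at a given raw-to-usable efficiency."""
    # Lower efficiency means more raw capacity, hence more spindles.
    raw_tb = usable_tb / raw_to_usable_efficiency
    spindles = math.ceil(raw_tb / disk_tb)
    return spindles, spindles * iops_per_disk

print(physical_iops(100, 1.0))  # -> (100, 8000)
print(physical_iops(100, 0.5))  # -> (200, 16000)
```

The model deliberately ignores controller cache and RAID write penalties; it only captures the spindle-count effect the text describes.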