Read cache devices, located in a controller slot (internal L2ARC), do not follow data pools
in takeover or failback situations. A read cache device is only active in a particular cluster
node when the pool that is assigned to the read cache device is imported on the node where the
device resides. Absent additional configuration steps, read cache will not be available for a pool
that has migrated due to a failover event. In order to enable a read cache device for a pool that is
not owned by the cluster peer, take over the pool on the non-owning node, and then add storage
and select the cache devices for configuration. Read cache devices in a cluster node should
be configured as described in "Configuring Storage" on page 88. Write-optimized log
devices are located in the storage fabric and are always accessible to whichever controller has
imported the pool.
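The takeover-and-reconfigure step described above can also be driven from the CLI. The
following transcript is only a minimal sketch, assuming the configuration cluster and
configuration storage contexts; the node and pool names are illustrative, and the exact
subcommands and device-selection prompts are those documented in "Configuring Storage" on
page 88.

    node-b:> configuration cluster takeover
      (confirm the takeover; the peer's pool is then imported on this node)
    node-b:> configuration storage set pool=pool-a
      (select the newly imported pool; "pool-a" is an illustrative name)
    node-b:> configuration storage add
      (choose this node's internal read cache devices when prompted, then commit)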
If read cache devices are located in a disk shelf (external L2ARC), read cache is always
available. During a failback or takeover operation, read cache remains sharable between
controllers. In this case, read performance is sustained. For external read cache configuration
details, see
"Disk Shelf Configurations" in Oracle ZFS Storage Appliance Customer Service
Manual.
Configuring NSPF - A second important consideration for storage is the use of pool
configurations with no single point of failure (NSPF). Since the use of clustering implies that
the application places a very high premium on availability, there is seldom a good reason to
configure storage pools in a way that allows the failure of a single disk shelf to cause loss
of availability. The downside to this approach is that NSPF configurations require a greater
number of disk shelves than do configurations with a single point of failure; when the required
capacity is very small, installation of enough disk shelves to provide for NSPF at the desired
RAID level may not be economical.
The following table describes storage pool ownership for cluster configurations.
TABLE 45    Clustering Considerations for Storage Pools

Variable: Total throughput (nominal operation)
    Single controller pool ownership: Up to 50% of total CPU resources, 50% of DRAM, and
    50% of total network connectivity can be used to provide service at any one time. This is
    straightforward: only a single controller is ever servicing client requests, so the other is idle.
    Multiple pools owned by different controllers: All CPU and DRAM resources can be used to
    provide service at any one time. Up to 50% of all network connectivity can be used at any one
    time (dark network devices are required on each controller to support failover).

Variable: Total throughput (failed over)
    Single controller pool ownership: No change in throughput relative to nominal operation.
    Multiple pools owned by different controllers: 100% of the surviving controller's resources
    will be used to provide service. Total throughput relative to nominal operation may range from
    approximately 40% to 100%, depending on utilization during nominal operation.
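To make the 40% to 100% range concrete, consider an illustrative calculation with assumed
utilization figures (not taken from this manual): if each controller nominally runs at about 45%
of its own capacity, the surviving controller faces a combined demand of roughly 90% of one
controller after takeover, so close to 100% of nominal throughput can be sustained. If each
controller is instead near 90% busy during nominal operation, the combined demand (about
180% of one controller) exceeds what the survivor can deliver, and total throughput falls to
roughly 100/180, or about 55% of nominal, before any takeover overhead is counted; with that
overhead, the result approaches the lower end of the quoted range.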