Choosing Hardware - HPE Apollo 4500 Reference Manual

SUSE Enterprise Storage on system servers: choosing density-optimized servers as SUSE Enterprise Storage building blocks

Reference guide
• OSDs that use separate data and journal devices can't be hot swapped.
• Additional setup and planning is required to efficiently make use of SSDs.
• Small object I/O tends to benefit much less than larger objects.
Configuration recommendations
• For bandwidth, four spinning disks to one SSD is a recommended performance ratio for block storage. It's possible to use a higher ratio of spinning to solid state drives to improve capacity density, but this also increases the number of OSDs affected by a single SSD failure.
• SSDs can become a bottleneck at high ratios of disks to SSD journals; balance the SSD ratio against the peak performance of the spinning media. Ratios larger than eight spinning disks to one SSD are typically inferior to simply co-locating the journal with the data.
• Even where application write performance is not critical, it may make sense to add an SSD journal purely for rebuild/rebalance bandwidth
improvements.
• Journals don't require much capacity, but larger SSDs do provide extra wear-leveling headroom. The journal space reserved by SUSE Enterprise Storage should cover 10–20 seconds of writes for the OSD the journal is paired with (see the sizing sketch after this list).
• A RAID 1 of SSDs is not recommended outside of the monitor nodes. Wear leveling makes it likely that mirrored SSDs will need to be replaced at similar times. Doubling the SSDs per node also reduces storage density and increases the price per gigabyte. At massive storage scale, it's better to expect drive failure and to plan so that a failure is easily recoverable and tolerable.
• Erasure coding is very flexible for choosing between storage efficiency and data durability. The sum of your data and coding chunks should
typically be less than or equal to the OSD host count, so that no single host failure can cause the loss of multiple chunks.
• Keeping cluster nodes single function makes it simpler to plan CPU and memory requirements for both typical operation and failure handling.
• Extra RAM on an OSD host can boost GET performance on smaller I/Os through file system caching.
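
As a rough illustration of the 10–20 second guideline above, the following sketch estimates journal capacity from the sustained write rate of the paired OSD. The 150 MB/s write rate and the 4:1 disk-to-SSD ratio are example inputs; only the 10–20 second window comes from this guide.

  # Journal sizing sketch (example inputs, not sizing guidance from this guide)
  def journal_size_mb(osd_write_mb_s, seconds_of_writes=20):
      """Journal capacity (MB) to absorb a given window of writes for one OSD."""
      return osd_write_mb_s * seconds_of_writes

  per_osd_mb = journal_size_mb(150)   # assumed ~150 MB/s sustained writes per OSD
  disks_per_ssd = 4                   # the 4:1 ratio recommended above
  print(f"{per_osd_mb} MB per journal, {per_osd_mb * disks_per_ssd} MB per journal SSD")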

Choosing hardware

The SUSE Enterprise Storage Administration and Deployment Guide provides minimum hardware recommendations. In this section, we expand on that information and focus it on the reference configurations and customer use cases.
Choosing disks
Choose the number of drives needed to meet performance SLAs. That may simply be the number of drives required to meet capacity, but performance or cluster homogeneity considerations may call for more spindles.
Object storage requirements tend to be driven primarily by capacity, so plan how much raw storage will be needed to meet the usable capacity and data durability requirements. Replica count and the data-to-coding-chunk ratio for erasure coding are the biggest factors determining usable storage capacity. There is additional usable-capacity loss from journals co-located with OSD data, XFS/Btrfs overhead, and logical volume reserved sectors. A good rule of thumb for three-way replication is a usable-to-raw storage capacity ratio of 1:3.2.
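The rule of thumb above can be turned into a quick raw-capacity estimate. The sketch below applies the 1:3.2 usable-to-raw ratio for three-way replication and the (k + m) / k expansion for erasure coding; the usable-capacity target, the 4+2 profile, the host count, and the 5% overhead allowance are assumed example values, not recommendations from this guide.

  # Raw-capacity planning sketch (example values, not sizing guidance)
  def raw_for_replication(usable_tb, usable_to_raw=3.2):
      """Raw TB for three-way replication, using the ~1:3.2 rule of thumb."""
      return usable_tb * usable_to_raw

  def raw_for_erasure_coding(usable_tb, k, m, overhead=1.05):
      """Raw TB for k data + m coding chunks, with an assumed ~5% allowance for
      co-located journals, file system overhead, and reserved sectors."""
      return usable_tb * (k + m) / k * overhead

  usable_target_tb = 500        # hypothetical usable-capacity target
  k, m, osd_hosts = 4, 2, 8     # hypothetical erasure-coding profile and host count

  # From the recommendations above: data + coding chunks should not exceed the OSD host count.
  assert k + m <= osd_hosts, "data + coding chunks exceed OSD host count"

  print(f"3x replication: ~{raw_for_replication(usable_target_tb):.0f} TB raw")
  print(f"EC {k}+{m}: ~{raw_for_erasure_coding(usable_target_tb, k, m):.0f} TB raw")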
Some other things to remember around disk performance:
• Replica count or erasure coding chunks mean multiple media writes for each object PUT.
• Peak write performance of spinning media without separate journals is roughly halved, because writes to the journal and data partitions go to the same device.
• With a single 10GbE port, the bandwidth bottleneck on any fully disk-populated HPE Apollo 4510 Gen9 server node is the port rather than the controller or drives (see the sketch after this list).
• At smaller object sizes, the bottleneck tends to be the object gateway's ops/sec capability before the network or disks. In some cases, the bottleneck can be the client's ability to execute object operations.
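
To see why the port rather than the drives limits a densely populated node, the sketch below compares aggregate spinning-disk write bandwidth, halved for co-located journals as noted above, against the line rate of a single 10GbE port. The drive count, per-drive throughput, and network efficiency figure are assumptions, not HPE Apollo 4510 specifications.

  # Node bandwidth sketch (drive count and throughput are assumed figures)
  drives_per_node = 60        # assumed fully populated LFF drive count
  drive_write_mb_s = 120      # assumed sustained sequential write per spinning disk
  journal_penalty = 0.5       # co-located journals roughly halve write throughput

  disk_bw_mb_s = drives_per_node * drive_write_mb_s * journal_penalty
  nic_bw_mb_s = 10_000 / 8 * 0.9   # single 10GbE port at an assumed ~90% efficiency

  print(f"Aggregate disk writes: ~{disk_bw_mb_s:.0f} MB/s")
  print(f"Single 10GbE port:     ~{nic_bw_mb_s:.0f} MB/s")
  # Even with the journal penalty, disk bandwidth far exceeds a single 10GbE port.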