Testing the Memory DIMMs - IBM System x3850 X5 Implementation Manual

Redbooks
Servers that act as processing nodes in a high-performance cluster, database servers, or print servers for graphics printers require the best memory performance possible.
Keep these considerations in mind when installing memory in the System x3850 X5:
Because no single processor has direct access to all PCIe slots, the QPI links that connect the processors carry not only memory requests between processors but also traffic to and from PCIe adapters.
When memory is installed for multiple processors but not all memory cards are populated, only the processing threads that are assigned to a processor without local memory experience a 50% increase in memory latency. On a server with heavy I/O processing, this additional memory traffic also reduces the efficiency of access to the PCIe adapters.
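The 50% remote-latency figure above can be turned into a rough average-latency estimate. A minimal sketch in Python, where the local latency and the remote-access fraction are hypothetical illustrative numbers, not measured x3850 X5 values:

```python
def avg_latency_ns(local_ns, remote_fraction, remote_penalty=1.5):
    """Blend local and remote memory latency for a thread whose accesses
    split between its own processor's memory and a remote processor's.
    remote_penalty=1.5 reflects the ~50% increase cited above."""
    remote_ns = local_ns * remote_penalty
    return local_ns * (1 - remote_fraction) + remote_ns * remote_fraction

# A thread on a processor with no local memory makes every access remote:
print(avg_latency_ns(100, 1.0))  # 150.0 -> the full 50% penalty
print(avg_latency_ns(100, 0.0))  # 100.0 -> all-local baseline
```

The model ignores QPI contention from PCIe traffic, which in practice makes the remote penalty worse on I/O-heavy servers.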
Any memory that is installed on memory cards for an uninstalled or defective processor is
not seen by the other processors on the server.
When multiple processors are installed and the operating system is aware of nonuniform memory access (NUMA), you must install the same amount of memory on each memory card of each installed processor. See 3.8.2, "DIMM population sequence" on page 79 for details.
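The equal-memory-per-processor rule can be checked mechanically before you order or install DIMMs. A small sketch, where the processor numbering and DIMM sizes are hypothetical examples rather than a real population table:

```python
def numa_balanced(layout):
    """layout maps a processor number to the list of DIMM sizes (in GB)
    installed on that processor's memory cards. NUMA-aware operating
    systems expect every installed processor to have the same total."""
    totals = {cpu: sum(dimms) for cpu, dimms in layout.items()}
    return len(set(totals.values())) <= 1

print(numa_balanced({1: [4, 4, 4, 4], 2: [4, 4, 4, 4]}))  # True
print(numa_balanced({1: [8, 8, 8, 8], 2: [4, 4]}))        # False: processor 2 is short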
You can achieve the best processor performance when memory is installed to support Hemisphere Mode. Hemisphere Mode is the required memory configuration to allow two 3850 X5 servers to scale. It is possible to have processors 1 and 4 in Hemisphere Mode, but not processors 2 and 3, and still permit scaling. In this type of installation, having processors 1 and 4 in Hemisphere Mode improves the memory access latency for all processors. To determine the DIMM population for Hemisphere Mode, see 3.8.2, "DIMM population sequence" on page 79.
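Hemisphere Mode depends on matched DIMM population across each processor's two on-die memory controllers. As a simplified illustration only (the pairing of memory cards to controllers below is an assumption; see 3.8.2 for the authoritative population sequence):

```python
def hemisphere_candidate(cards):
    """cards: per-memory-card DIMM-size lists (GB) for one processor,
    ordered so the first half attaches to one on-die memory controller
    and the second half to the other (an assumed ordering). Hemisphere
    Mode needs both halves populated identically."""
    half = len(cards) // 2
    return len(cards) % 2 == 0 and cards[:half] == cards[half:]

print(hemisphere_candidate([[4, 4], [4, 4]]))  # True: both halves match
print(hemisphere_candidate([[8, 8], [4, 4]]))  # False: mismatched halves
```

This is exactly the kind of check the operating system never performs for you: as noted below, nothing reports that a processor dropped out of Hemisphere Mode.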
Consider installing DIMMs that are all the same field-replaceable unit (FRU) to avoid conflicts that can prevent NUMA compliance or Hemisphere Mode support. These problems are most likely to occur when DIMMs of multiple sizes are installed. The server supports installing DIMMs of different sizes in each rank, but the configuration becomes complex and difficult to maintain. Operating systems that depend on NUMA compliance inform you when the server is not NUMA-compliant; however, nothing informs you that the processors are not in Hemisphere Mode.
It is better to install more smaller DIMMs than fewer larger DIMMs to ensure that all of the
memory channels and buffers of each processor have access to the same amount of
memory. This approach allows the processors to fully use their interleave algorithms to
access memory faster by spreading access over multiple paths.
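The "more smaller DIMMs" advice follows from channel coverage: interleaving can only spread accesses over channels that actually hold memory. A sketch using a hypothetical eight-channel processor (the channel count and round-robin placement are illustrative assumptions, not the x3850 X5 population rules):

```python
def channels_used(dimm_sizes_gb, channels=8):
    """Place DIMMs round-robin across a processor's memory channels
    (channel count is illustrative) and report how many channels
    receive memory -- the paths available for interleaving."""
    fill = [0] * channels
    for i, size in enumerate(dimm_sizes_gb):
        fill[i % channels] += size
    return sum(1 for gb in fill if gb > 0)

# 32 GB either way, but a very different interleave width:
print(channels_used([4] * 8))   # 8 -> all channels carry memory
print(channels_used([16] * 2))  # 2 -> interleaving limited to two paths
```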

6.3.1 Testing the memory DIMMs

DIMMs are among the server components that are most sensitive to electrostatic discharge, and improper handling of memory is the most likely cause of DIMM failures. Working in an extremely dry location dramatically increases the chance that you will build up a static charge. Always use an electrostatic discharge (ESD) strap connected between you and a grounding point to reduce static buildup.
The best practice when installing memory is to run the memory quick test in the diagnostics to verify that all of the memory is functional. Memory might not be functional for the following reasons:
The DIMM is the wrong type for your server. Ensure that only IBM-approved DIMMs are installed in your server.
The DIMM is not fully seated. Ensure that the DIMM retaining clips are in the locked position so that the DIMM cannot work loose from its slot.
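The IBM quick test itself runs from the server's diagnostics menu, but the idea behind such a test can be sketched in ordinary code: write known patterns, read them back, and flag any location that does not hold its value. A walking-ones illustration over a plain bytearray (not a substitute for the real diagnostic, which exercises the physical DIMMs directly):

```python
def walking_ones_test(buf):
    """Write each single-bit pattern to every byte of buf, read it back,
    and return the offsets that failed to retain the pattern."""
    bad = set()
    for bit in range(8):
        pattern = 1 << bit
        for i in range(len(buf)):
            buf[i] = pattern
        for i, value in enumerate(buf):
            if value != pattern:
                bad.add(i)
    return sorted(bad)

print(walking_ones_test(bytearray(4096)))  # [] -> every byte held every pattern
```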
IBM eX5 Implementation Guide

This manual is also suitable for: BladeCenter HX5, System x3690 X5, System x3950 X5
Table of Contents