
IBM NeXtScale System Planning and Implementation Manual




Summary of Contents for IBM NeXtScale System

  • Page 1: Front Cover

    Front cover IBM NeXtScale System Planning and Implementation Guide Introduces the new high density x86 solution for scale-out environments Covers the n1200 Enclosure and nx360 M4 Compute Node Addresses power, cooling, racking, and management David Watts Jordi Caubet Duncan Furniss David Latino ibm.com/redbooks...
  • Page 3 International Technical Support Organization IBM NeXtScale System Planning and Implementation Guide July 2014 SG24-8152-01...
  • Page 4 “Notices” on page vii. Second Edition (July 2014) This edition applies to: IBM NeXtScale n1200 Enclosure, machine type 5456 IBM NeXtScale nx360 M4, machine type 5455 © Copyright International Business Machines Corporation 2013, 2014. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP...
  • Page 5: Table Of Contents

    Stay connected to IBM Redbooks ........xiii...
  • Page 6 4.8 IBM NeXtScale Storage Native Expansion Tray ....86 4.9 IBM NeXtScale PCIe Native Expansion Tray ..... . 90 4.10 GPU and coprocessor adapters .
  • Page 7 6.1 IBM standards ........
  • Page 8 8.1 eXtreme Cloud Administration Toolkit (xCAT)..... 214 8.2 IBM Platform Cluster Manager ....... . 217 8.3 IBM General Parallel File System .
  • Page 9: Notices

    IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead.
  • Page 10: Trademarks

    (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml...
  • Page 11: Preface

    Preface IBM® NeXtScale System is a new, dense offering from IBM. It is based on our experience with IBM iDataPlex® and IBM BladeCenter®, with a tight focus on emerging and future client requirements. The IBM NeXtScale n1200 Enclosure and IBM NeXtScale nx360 M4 Compute Node are designed to optimize density and performance within typical data center infrastructure limits.
  • Page 12 Jordi Caubet is an IT Specialist with IBM in Spain. He has seven years of experience with IBM and several years of experience in high-performance computing (HPC), ranging from systems design and development to systems support. He holds a degree in Computer Science from the Technical University of Catalonia (UPC).
  • Page 13 Thanks to the following people for their contributions to this project: From IBM Marketing: Mathieu Bordier Jill Caugherty Gaurav Chaudhry Kelly Chiu Jimmy Chou Chuck Fang Andrew Huang Camille Lee Brendan Paget Scott Tease Swarna Tsai Matt Ziegler From IBM Development:...
  • Page 14: Now You Can Become A Published Author, Too

    We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways: Use the online Contact us review Redbooks form found at this website: http://www.ibm.com/redbooks...
  • Page 15: Stay Connected To Ibm Redbooks

    Stay connected to IBM Redbooks Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html...
  • Page 16 IBM NeXtScale System Planning and Implementation Guide...
  • Page 17: Summary Of Changes

    Summary of Changes for SG24-8152-01 for IBM NeXtScale System Planning and Implementation Guide as created or updated on July 7, 2014. July 2014 This revision reflects the addition, deletion, or modification of new and changed information described below.
  • Page 18 New RDIMM memory options Support for 2.5-inch SSD options and other drive options Support for ServeRAID M5120 RAID controller for external SAS storage expansion IBM NeXtScale System Planning and Implementation Guide...
  • Page 19: Chapter 1. Introduction

    Chapter 1. Introduction. IBM is introducing our next generation of scale-out x86 servers, called NeXtScale System. In this chapter, we describe the client requirements that led us to its design, the computing environment it is meant to work in, and how this architecture was created to meet current and future business and technical challenges.
  • Page 20: Evolution Of Data Centers

    As the number of servers in clusters grows and as data center real estate cost increases, the number of servers in a unit of space (also known as the compute density) becomes an increasingly important consideration. IBM NeXtScale System is designed to optimize density while addressing other objectives, such...
  • Page 21: Executive Summary Of Ibm Nextscale System

    M4 compute node. 1.2.1 IBM NeXtScale n1200 Enclosure The IBM NeXtScale System is based on a six-rack unit (6U) high chassis with 12 half-width bays, as shown in Figure 1-1. Figure 1-1 Front of NeXtScale n1200 chassis, with 12 half-wide servers
  • Page 22: Ibm Nextscale Nx360 M4 Compute Node

    1.2.2 IBM NeXtScale nx360 M4 compute node The first server that is available for NeXtScale System is the nx360 M4 compute node. It fits in a half-width bay in the n1200 Enclosure, as shown in Figure 1-3 on page 5.
  • Page 23 DDR3 DIMMs, and a hard drive carrier. Hard disk drive carrier options include one 3.5-inch drive, two 2.5-inch drives, or four 1.8-inch solid-state drives. The server is shown in Figure 1-4. Figure 1-4 IBM NeXtScale nx360 M4 with one 3.5-inch hard disk drive Chapter 1. Introduction...
  • Page 24: Design Points Of The System

    1.3 Design points of the system This section introduces some of the design points that went into IBM NeXtScale System: The system is designed for flexibility. The power supplies and fans, in the back of the chassis, are modular and hot swappable.
  • Page 25: This Book

    Intelligent Cluster™ solutions. Next, we provide information about managing the NeXtScale chassis and nodes. We finish by covering some of the software that is available from IBM that is commonly used in a solution with NeXtScale servers. Chapter 1. Introduction...
  • Page 26 IBM NeXtScale System Planning and Implementation Guide...
  • Page 27: Chapter 2. Positioning

    This chapter describes how IBM NeXtScale System is positioned in the marketplace compared with other systems that are equipped with Intel processors. The information helps you to understand the NeXtScale target audience and the types of workloads for which it is intended.
  • Page 28: Market Positioning

    2.1 Market positioning The IBM NeXtScale System is a new x86 offering that introduces a new category of dense computing into the marketplace. IBM NeXtScale System includes the following key characteristics: Strategically, this is the next generation dense system from System x: –...
  • Page 29: Three Key Messages With Nextscale

    “Departmental” uses in which a small solution can increase the speed of outcome prediction, engineering analysis, and design and modeling. 2.1.1 Three key messages with NeXtScale The three key messages about IBM NeXtScale System are that it is flexible, simple, and scalable, as shown in Figure 2-1.
  • Page 30 ) with which you can select the systems you need based on the needs of the applications that you run. Fit it into your data center seamlessly in an IBM rack or most 19-inch standard racks. The IBM NeXtScale n1200 Enclosure is designed to be installed in the IBM 42U 1100mm Enterprise V2 Dynamic Rack because it provides the best cabling features.
  • Page 31 Figure 2-2 shows the front of the IBM NeXtScale nx360 M4, including the KVM port and the pull-out label tab. Because the cables do not clog up the back of the rack, air flow is improved and thus energy efficiency also is improved. The harder the fans work to move air through the server, the more power they use.
  • Page 32 Cloud); both are viable and bring value. Scalable to the container level. IBM NeXtScale System can meet the needs of clients who want to add IT at the rack level or even at the container level. Racks can be fully configured, cabled, labeled, and programmed before they are shipped.
  • Page 33: Optimized For Workloads

    HPC and technical computing have many of the same attributes as cloud; a key factor is the need for the top-bin 130 W processors. NeXtScale System can support top-bin 130 W processors, which means more cores and higher frequencies than others.
  • Page 34: Ibm System X Overview

    The world is evolving, and the way that our clients do business is evolving with it. That is why IBM has the broadest x86 portfolio in our history and is expanding even further to meet the needs of our clients, whatever those needs might be.
  • Page 35: Nextscale System Versus Idataplex

    The IBM NeXtScale n1200 Enclosure is designed to fit in the IBM 42U 1100mm Enterprise V2 Dynamic Rack, but it also fits in many standard 19-inch racks.
  • Page 36 In Table 2-1, the features of IBM NeXtScale System are compared to those features of IBM iDataPlex.
  • Page 37: Nextscale System Versus Flex System

    NeXtScale System is an x86-only architecture and its aim is to be the perfect server for clients that require a scale-out infrastructure. NeXtScale System uses industry-standard components, including I/O cards and top-of-rack networking switches, for flexibility of choice and ease of adoption.
  • Page 38: Nextscale System Versus Rack-Mounted Servers

    2.5 NeXtScale System versus rack-mounted servers Although the NeXtScale System compute nodes are included in a chassis, this chassis is only there to provide shared power and cooling. As a consequence, the approach to design, buy, or upgrade a solution that is based on IBM NeXtScale System or regular 1U/2U rack-mounted servers is similar.
  • Page 39: Ordering And Fulfillment

    For medium to large installations of nodes that require up to 128 GB, IBM NeXtScale System should be the system of choice. It allows users to reduce their initial cost of acquisition and their operating cost through higher density (up to 4X compared to 2U servers) and higher energy efficiency (because of the shared power and cooling infrastructure).
  • Page 40 IBM NeXtScale System is as easy to configure, order, and price as standard System x servers such as the x3650 M4 server, as shown in Table 2-3. Table 2-3 Comparing configuration, ordering, and pricing tools Tool System x iDataPlex NeXtScale...
  • Page 41: Chapter 3. Ibm Nextscale N1200 Enclosure

    Chapter 3. IBM NeXtScale n1200 Enclosure. The foundation on which IBM NeXtScale System is built is the IBM NeXtScale n1200 Enclosure. Designed to provide shared, high-efficiency power and cooling for up to 12 compute nodes, this chassis scales with your business needs. Adding compute, storage, or acceleration capability is as simple as adding nodes to the chassis.
  • Page 42: Overview

    1U rack servers. Figure 3-1 IBM NeXtScale n1200 Enclosure with 12 compute nodes The founding principle behind IBM NeXtScale System is to allow clients to adopt this new hardware with minimal or no changes to their existing data center infrastructure, management tools, protocols, and best practices.
  • Page 43 Figure 3-2 shows the IBM NeXtScale n1200 Enclosure components, including the rail kit and node fillers. The IBM NeXtScale n1200 Enclosure includes the following components: up to 12 compute nodes; six power supplies, each separately powered; a total of 10 fan modules in two cooling zones; and one Fan and Power Controller. Chapter 3.
  • Page 44: Front Components

    3.1.1 Front components The IBM NeXtScale n1200 Enclosure supports up to 12 1U half-wide compute nodes, as shown in Figure 3-3. All compute nodes are front accessible with front cabling as shown in Figure 3-3. From this angle, the chassis looks to be simple because it was designed to be simple, low cost, and efficient.
  • Page 45: Rear Components

    3.1.2 Rear components The n1200 provides shared high-efficiency power supplies and fan modules. As with IBM BladeCenter and IBM Flex System, the NeXtScale System compute nodes connect to a midplane, but this connection is for power and control only; the midplane does not provide any I/O connectivity.
  • Page 46: Fault Tolerance Features

    You might notice that the enclosure does not contain space for network switches. All I/O is routed directly out of the servers to top-of-rack switches. This configuration provides choice and flexibility and keeps the IBM NeXtScale n1200 Enclosure flexible and low cost.
  • Page 47: Standard Chassis Models

    C13 (part number 39Y7937) 3.3 Supported compute nodes The IBM NeXtScale nx360 M4 is the only compute node that is supported in the n1200 enclosure. However, the number of compute nodes that can be powered on depends on the following factors:...
  • Page 48 UEFI settings To size for a specific configuration, you can use the IBM Power Configurator that is available at this website: http://ibm.com/systems/bladecenter/resources/powerconfig.html The following tables show the number of nodes that can be operated with no performance compromise within the chassis, depending on the power policy that is required.
  • Page 49 For example, “5 + 1” means that the supported combination is five servers with the GPU Tray attached, plus one server without a GPU Tray attached. In such a configuration, the one remaining server bay in the chassis must remain empty. Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 50 5 + 1 4 + 1 5 + 1 a. OVS (oversubscription) of the power system allows for more efficient use of the available system power. b. See shaded box on page 31 IBM NeXtScale System Planning and Implementation Guide...
  • Page 51 130 W 4 + 1 3 + 1 4 + 1 a. OVS (oversubscription) of the power system allows for more efficient use of the available system power. b. See shaded box on page 31 Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 52 3 + 1 4 + 1 130 W 3 + 1 a. OVS (oversubscription) of the power system allows for more efficient use of the available system power. b. See shaded box on page 31 IBM NeXtScale System Planning and Implementation Guide...
  • Page 53 3 + 2 4 + 1 5 + 1 3 + 2 a. OVS (oversubscription) of the power system allows for more efficient use of the available system power. b. See shaded box on page 31 Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 54 High line AC input (200-240 V) 50 W 60 W 70 W 80 W 95 W 115 W 130 W a. OVS (oversubscription) of the power system allows for more efficient use of the available system power. IBM NeXtScale System Planning and Implementation Guide...
  • Page 55 Low line AC input (100-127 V) 50 W 60 W 70 W 80 W 95 W 115 W 130 W a. OVS (oversubscription) of the power system allows for more efficient use of the available system power. Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 56: Power Supplies

    3.4 Power supplies The IBM NeXtScale n1200 Enclosure supports up to six high-efficiency autoranging power supplies. The standard model includes all six power supplies. A single power supply is shown in Figure 3-5. Figure 3-5 900 W power supply Table 3-9 lists the ordering information for the supported power supplies.
  • Page 57 Figure 3-6 shows the IBM NeXtScale n1200 Enclosure rear view with power supply numbering and the internal USB key. The power supplies that are used in IBM NeXtScale System are hot-swap, high-efficiency 80 PLUS Platinum power supplies that are operating at 94% efficiency. The efficiency varies by load, as shown in Table 3-10. The 80 PLUS report is available at this website: http://www.plugloadsolutions.com/psu_reports/IBM_7001700-XXXX_900W_SO-5...
  • Page 58 This means you can use the power capacity of all installed power supplies while still preserving power supply redundancy if there is a power supply failure. For more information, see “Power supply oversubscription” on page 54. IBM NeXtScale System Planning and Implementation Guide...
  • Page 59 Removing a power supply: To maintain proper system cooling, do not operate the IBM NeXtScale n1200 Enclosure without a power supply (or power supply filler) in every power supply bay. Install a power supply within 1 minute of the removal of a power supply.
  • Page 60: Fan Modules

    3.5 Fan modules The IBM NeXtScale n1200 Enclosure supports ten 80 mm fan modules. All fan modules are at the rear of the chassis and are numbered as shown in Figure 3-7.
  • Page 61 There are two logical cooling zones in the enclosure, as shown in Figure 3-9 on page 44. Five fan modules on each side correspond to the six compute nodes on that same side of the chassis. Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 62 The FPC varies the speeds of the fans in zone 1 and zone 2 by at most a 20% difference to avoid unbalanced air flow distribution. Fan removal: To maintain proper system cooling, do not operate the IBM NeXtScale n1200 Enclosure without a fan module (or fan module filler) in every fan module bay.
  • Page 63: Midplane

    The midplane was designed with no active components to improve reliability and minimize serviceability. Unlike BladeCenter, for example, the midplane is removed by removing a cover on the top of the chassis. Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 64 Figure 3-11 shows the connectivity of the chassis components (servers, fan modules, power supplies, and the Fan and Power Controller) through the midplane. IBM NeXtScale System Planning and Implementation Guide...
  • Page 65: Fan And Power Controller

    Figure 3-12. The FPC is a hot-swap component, as indicated by the orange handle. Figure 3-12 Rear view of the chassis that shows the location of the FPC Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 66: Ports And Connectors

    When this LED is lit (yellow), it indicates that a system error occurred. Ethernet port activity LED When this LED is flashing (green), it indicates that there is activity through the remote management and console (Ethernet) port over the management network. IBM NeXtScale System Planning and Implementation Guide...
  • Page 67: Internal Usb Memory Key

    FPC is replaced. 3.7.2 Internal USB memory key The FPC also includes a USB key that is housed inside the unit, as shown in Figure 3-14. Figure 3-14 Internal view of the FPC Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 68: Overview Of Functions

    – Oversubscription, which can be enabled with N+1 and N+N policies. Supports the updating of FPC firmware. Monitors and reports fan, power, and chassis status and other failures with an event log and corresponding LEDs. IBM NeXtScale System Planning and Implementation Guide...
  • Page 69: Web Gui Interface

    – Host name, DNS, Domain, IP, and IP version configuration – SNMP traps email alert configuration – Web server (http or https) ports configuration Perform a Virtual Reset or Virtual Reseat of each compute node Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 70: Power Management

    Perform user account management 3.8 Power management The FPC controls the power on the IBM NeXtScale n1200 Enclosure. If there is sufficient power available, the FPC allows a compute node to be powered on. The power permission includes the following two-step process: 1.
  • Page 71: Power Capping

    One installed power supply is reserved as a redundant supply. The failure of one power supply is tolerated without affecting the system’s operation or performance (performance can be affected if oversubscription mode is enabled). This mode can be combined with oversubscription mode. Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 72: Power Supply Oversubscription

    2700 W (= 3 x 900 W) Enabled 3240 W (= 3 x 900 x 120%) a. The power budget that is presented in this table is based on power supply ratings. Actual power budget can vary. IBM NeXtScale System Planning and Implementation Guide...
  • Page 73 The chassis is powered off only if after throttling the compute nodes, the enclosure power requirement still exceeds the power that is available on the remaining power supplies. Chapter 3. IBM NeXtScale n1200 Enclosure...
  • Page 74: Acoustic Mode

    As a result, these settings increase the possibility that the node might have to be throttled to maintain cooling within the fan speed limitation. The acoustic mode setting should be balanced with the customer’s workload for best performance. IBM NeXtScale System Planning and Implementation Guide...
  • Page 75: Specifications

    3.9 Specifications This section describes the specifications of the IBM NeXtScale n1200 Enclosure. 3.9.1 Physical specifications The enclosure features the following physical specifications: Dimensions: – Height: 263.3 mm (10.37 in.) – Depth: 914.5 mm (36 in.) – Width: 447 mm (17.6 in.) Weight: –...
  • Page 76 Chassis is removed from original shipping container and is installed but not in use; for example, during repair, maintenance, or upgrade. The equipment acclimation period is 1 hour per 20 °C of temperature change from the shipping environment to the operating environment. Condensation is acceptable, but not rain. IBM NeXtScale System Planning and Implementation Guide...
  • Page 77: Chapter 4. Ibm Nextscale Nx360 M4 Compute Node

    Chapter 4. IBM NeXtScale nx360 M4 compute node. The IBM NeXtScale nx360 M4 compute node, machine type 5455, is a half-wide, dual-socket server that is designed for data centers that require high performance but are constrained by floor space. It supports Intel Xeon E5-2600 v2 series processors with up to 12 cores, which provide more performance per server than previous generation systems.
  • Page 78: Overview

    Figure 4-1 IBM NeXtScale nx360 M4 compute node The nx360 M4 compute nodes fit into the IBM NeXtScale n1200 Enclosure that provides common power and cooling resources. As shown in Figure 4-1, the nx360 M4 compute node can support the following components: One or two Intel Xeon E5-2600 v2 series processors.
  • Page 79: Physical Design

    Figure 4-2 shows the front view of the IBM NeXtScale nx360 M4, with callouts for the LEDs, the dual-port 1 GbE controller, the IMM management ports (dedicated and shared), and the mezzanine adapter slot. Chapter 4. IBM NeXtScale nx360 M4 compute node
  • Page 80 Figure 4-3 shows the inside view of the IBM NeXtScale nx360 M4, with callouts for PCIe 3.0 riser slots 1 and 2 (riser slot 2 is used with the PCIe Tray only), the SATA port and cable, the USB hypervisor socket, CPU 1 and its four DIMMs, and the midplane connector. IBM NeXtScale System Planning and Implementation Guide...
  • Page 81: System Architecture

    Figure 4-4 Exploded view of the nx360 M4 4.2 System architecture The IBM NeXtScale nx360 M4 compute node features the Intel Xeon E5-2600 v2 series processors. The Xeon E5-2600 v2 series processor has models with either 6, 8, 10, or 12 cores per processor, with up to 24 threads per socket.
  • Page 82 Address sizes: 46 bits physical, 48 bits virtual. Cores: up to 8 (E5-2600) versus up to 12 (E5-2600 v2). Threads per socket: up to 16 versus up to 24. Last-level cache (LLC): up to 20 MB versus up to 30 MB. IBM NeXtScale System Planning and Implementation Guide...
  • Page 83 15 W or higher versus 10.5 W or higher; 12 W versus 7.5 W for low-voltage SKUs. Figure 4-5 shows the IBM NeXtScale nx360 M4 building block...
  • Page 84 Note: Although the Socket R (LGA-2011) processor socket on the nx360 M4 physically fits Xeon E5-2600 series and Xeon E5-2600 v2 series processor, only the latter are supported as processor options for this platform. IBM NeXtScale System Planning and Implementation Guide...
  • Page 85: Specifications

    RAID-10. Implemented in the Intel C600 chipset. Optional hardware RAID with supported 6 Gbps RAID controllers. Optical drive bays: No internal bays; use an external USB drive, such as the IBM and Lenovo part number 73P4515 or 73P4516. Tape drive bays: No internal bays.
  • Page 86 Predictive Failure Analysis, Light Path Diagnostics, Automatic Server Restart, IBM Systems Director and IBM Systems Director Active Energy Manager, IBM ServerGuide. Browser-based chassis management via Ethernet port on the Fan and Power Controller Module on the rear of the enclosure. IMM2 upgrades available to IMM2 Standard and IMM2 Advanced for web GUI and remote presence features.
  • Page 87: Standard Models

    Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD. Service and support: Optional service upgrades are available through IBM ServicePacs®: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and some IBM and OEM software.
  • Page 88: Processor Options

    Intel Xeon E5-2695 v2 12C 2.4GHz 30MB 1866MHz 115W 46W2721 A42D / A42P Intel Xeon E5-2697 v2 12C 2.7GHz 30MB 1866MHz 130W a. The first feature code corresponds to the first processor; the second feature code corresponds to the second processor. IBM NeXtScale System Planning and Implementation Guide...
  • Page 89: Memory Options

    2 x 10 x 2.8 x 8 = 448 Gflops 4.6 Memory options IBM DDR3 memory is compatibility tested and tuned for optimal IBM System x performance and throughput. IBM memory specifications are integrated into the light path diagnostic tests for immediate system performance feedback and optimum system uptime.
  • Page 90 CL13 ECC DDR3 1866MHz LP RDIMM RDIMMs - 1600 MHz 00D5024 A3QE 4GB (1x4GB, 1Rx4, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM 46W0735 A3ZD 4GB (1x4 GB, 2Rx8, 1.35 V) PC3-12800 CL13 ECC DDR3 1600MHz LP RDIMM IBM NeXtScale System Planning and Implementation Guide...
  • Page 91: Dimm Installation Order

    DIMMs. 4.6.1 DIMM installation order The IBM NeXtScale nx360 M4 boots with only one memory DIMM installed per processor. However, the suggested memory configuration is to balance the memory across all the memory channels on each processor to use the available memory bandwidth.
  • Page 92 Table 4-7 on page 75 shows DIMM installation if you have two processors that are installed. A minimum of two memory DIMMs (one for each processor) are required when two processors are installed. IBM NeXtScale System Planning and Implementation Guide...
  • Page 93 Table 4-8 Memory population with one processor installed: Mirrored channel mode Processor 1 Number of DIMMs DIMM 1 DIMM 2 DIMM 3 DIMM 4 Table 4-9 on page 76 shows DIMM installation for memory mirroring if two processors are installed. Chapter 4. IBM NeXtScale nx360 M4 compute node
  • Page 94 1600 MHz 1866 MHz a. The maximum quantity that is supported is shown for two installed processors. When one processor is installed, the maximum quantity that is supported is half of that shown. IBM NeXtScale System Planning and Implementation Guide...
  • Page 95: Internal Disk Storage Options

    (SSDs) that are supported also are listed. The IBM NeXtScale nx360 M4 server supports one of the following drive options: One 3.5-inch simple-swap SATA drive Up to two 2.5-inch simple-swap SAS or SATA HDDs or SSDs Up to four 1.8-inch simple-swap SSDs
  • Page 96 Seven 3.5-inch simple-swap SATA drives and four 1.8-inch simple-swap SSDs We describe the Native Expansion Tray in detail in 4.8, “IBM NeXtScale Storage Native Expansion Tray” on page 86. Drive cages for the drives internal to the nx360 M4 are as listed in Table 4-12.
  • Page 97: Controllers For Internal Storage

    N2115 SAS/SATA HBA for IBM System x: A high-performance host bus adapter for internal drive connectivity. Table 4-13 lists the ordering information for RAID controllers and SAS HBA.
  • Page 98 Depending on the operating system version, drivers might need to be downloaded separately. There is no support for VMware, Hyper-V, or Xen. For more information, see the list of IBM Redbooks Product Guides in the RAID adapters category at this website: http://www.redbooks.ibm.com/portals/systemx?Open&page=pg&cat=raid
  • Page 99 Supports drive sizes greater than 2 TB for RAID 0, 1E, and 10 (not RAID 1) Fixed stripe size of 64 KB For more information, see the list of IBM Redbooks Product Guides in the RAID adapters category at this website: http://www.redbooks.ibm.com/portals/systemx?Open&page=pg&cat=raid...
  • Page 100 MegaRAID Storage Manager management software N2115 SAS/SATA HBA The N2115 SAS/SATA HBA for IBM System x is an ideal solution for System x servers that require high-speed internal storage connectivity. This eight-port host bus adapter supports direct attachment to SAS and SATA internal HDDs and SSDs.
  • Page 101 The two controllers must be: (1) either onboard SATA or ServeRAID C100 for internal drive(s), and (2) N2115 or ServeRAID M1115 for drives in the storage tray b. Requires the 3.5-inch RAID cage (feature A4GE, option 00Y8615) Chapter 4. IBM NeXtScale nx360 M4 compute node
  • Page 102: Using The Serveraid C100 With 1.8-Inch Ssds

    (Columns: part number, feature code, description, maximum number supported.) 3.5-inch Simple-Swap SATA HDDs 00AD025 A4GC IBM 4TB 7.2K 6Gbps SATA 3.5" HDD for NeXtScale System 1 / 8 00AD020 A489 IBM 3TB 7.2K 6Gbps SATA 3.5" HDD for NeXtScale System 1 / 8 00AD015 A488 IBM 2TB 7.2K 6Gbps SATA 3.5"...
  • Page 103 (Columns: part number, feature code, description, maximum number supported.) 00AD005 A486 IBM 500GB 7.2K 6Gbps SATA 3.5" HDD for NeXtScale System 1 / 8 2.5-inch Simple-Swap 10K SAS HDDs 00FN040 A5NC IBM 1.2TB 10K 6Gbps SAS 2.5" HDD for NeXtScale System 00AD065 A48G IBM 900GB 10K 6Gbps SAS 2.5"...
  • Page 104: Ibm Nextscale Storage Native Expansion Tray

    1 drive supported without the Storage Native Expansion Tray. 8 drives supported with the Storage Native Expansion Tray. See 4.8, “IBM NeXtScale Storage Native Expansion Tray” on page 86. For more information about SSDs, see Enterprise Solid State Drives for IBM BladeCenter and System x Servers, TIPS0792, which is available at this website: http://www.redbooks.ibm.com/abstracts/tips0792.html?Open...
  • Page 105 Figure 4-10 shows the storage tray attached to an nx360 M4. Figure 4-10 IBM NeXtScale Storage Native Expansion Tray attached to an nx360 M4 compute node Ordering information is listed in Table 4-16. Table 4-16 IBM NeXtScale System Internal Storage tray...
  • Page 106 When the Storage Native Expansion Tray is used, one of the following disk controller adapters must be installed in the PCIe slot in the nx360 M4: ServeRAID M1115 SAS/SATA Controller for IBM System x, 81Y4448 N2115 SAS/SATA HBA for IBM System x, 46C8988 The storage tray connects to both ports on the M1115 controller.
  • Page 107 Filler Filler Bay 3 Filler Filler Filler Filler Bay 4 Empty Empty Empty Empty Empty Bay 5 Empty Empty Empty Empty Empty Empty Bay 6 Empty Empty Empty Empty Empty Empty Empty Chapter 4. IBM NeXtScale nx360 M4 compute node...
  • Page 108: Ibm Nextscale Pcie Native Expansion Tray

    Table 4-19 lists HDD options for the Storage tray. Table 4-19 Disk drive options for IBM NeXtScale System Internal Storage tray for nx360 (columns: part number, feature code, description, maximum number supported) 3.5-inch Simple-Swap SATA HDDs 00AD025 A4GC IBM 4TB 7.2K 6Gbps SATA 3.5" HDD for NeXtScale System...
  • Page 109 (shown with the top cover removed). The figure shows two NVIDIA GPUs installed. Figure 4-13 IBM NeXtScale PCIe Native Expansion Tray attached to an nx360 M4 compute node (with two NVIDIA GPUs installed) Ordering information is listed in Table 4-20.
  • Page 110: Gpu And Coprocessor Adapters

    4.18, “Operating systems support” on page 112. Configuration rules are as follows: The use of GPUs or coprocessors requires the use of the IBM NeXtScale PCIe Native Expansion Tray. One or two GPUs or coprocessors can be installed
  • Page 111 GPUs are used for everything from consumer gaming and professional graphics to high performance computing to virtualized and cloud environments. Two of these applications, in particular, are pertinent to NeXtScale System: High-performance computing (HPC): The sheer volume of additional cores...
  • Page 112 Table 4-22 on page 95 provides additional details about each of the NVIDIA GPUs that can be used with an nx360 M4 with the attached PCIe Native Expansion Tray. IBM NeXtScale System Planning and Implementation Guide...
  • Page 113 NVIDIA GPU Boost base core clock: 745 MHz / 732 MHz / 706 MHz / 745 MHz / 850 MHz. Boost clocks: 810 MHz / not applicable / 875 MHz. Memory error protection: Yes, enabled (external and internal) / Yes (external only). Chapter 4. IBM NeXtScale nx360 M4 compute node
  • Page 114: Embedded 1 Gb Ethernet Controller

    4.11 Embedded 1 Gb Ethernet controller The IBM NeXtScale nx360 M4 includes an embedded 1 Gb Ethernet controller that is built into the system board. It offers two 1 Gb Ethernet ports with the following features: Intel I350 Gb Ethernet controller IEEE 802.3 Ethernet interface for 1000BASE-T, 100BASE-TX, and 10BASE-T applications (802.3, 802.3u, and 802.3ab)
  • Page 115: Pci Express I/O Adapters

    4.12 PCI Express I/O adapters The IBM NeXtScale nx360 M4 supports one onboard PCIe card through a mezzanine card and another full-height, half-length PCIe adapter through a 1U single-slot riser card. 4.12.1 Mezzanine adapters The mezzanine card is supported by a dedicated PCIe x8 slot at the front of the server, as shown in Figure 4-16.
  • Page 116: Single-Slot Riser Card

    PCIe Native Expansion Tray: If the PCIe Native Expansion Tray is selected, then this single-slot riser cage is not used. Instead, the tray includes a two-slot riser card. See 4.9, “IBM NeXtScale PCIe Native Expansion Tray” on page 90. Table 4-24 PCIe riser cage option
  • Page 117: Network Adapters

    Mellanox ConnectX-3 FDR VPI IB/E Adapter for IBM System x 10 Gb Ethernet 94Y5180 A4Z6 Broadcom NetXtreme Dual Port 10GbE SFP+ Adapter for IBM System x 49Y7910 A18Y Broadcom NetXtreme II Dual Port 10GBaseT Adapter for IBM System x 00D8540...
  • Page 118 A3PN Mellanox ConnectX-3 FDR VPI IB/E Adapter for IBM System x More network adapters are offered as part of the IBM Intelligent Cluster program, as listed in Table 4-26. Table 4-26 Network adapters that are offered as part of the Intelligent Cluster program...
  • Page 119: Storage Host Bus Adapters

    Part number Feature code Description Fibre Channel - 16 Gb 81Y1668 A2XU Brocade 16Gb FC Single-port HBA for IBM System x 81Y1675 A2XV Brocade 16Gb FC Dual-port HBA for IBM System x 81Y1655 A2W5 Emulex 16Gb FC Single-port HBA for IBM System x...
  • Page 120: Integrated Virtualization

    Customized VMware vSphere images can be downloaded from this website: http://ibm.com/systems/x/os/vmware/ Figure 4-18 shows the USB port inside the nx360 M4 where the IBM USB Memory for VMware ESXi is connected. Figure 4-18 Location of the internal USB port for the virtualization key option
  • Page 121: Local Server Management

    00Y8366 A4AK Console breakout cable (KVM Dongle cable) Same cable as Flex System: This is the same cable that is used with IBM Flex System, but it has a different part number because of the included materials. Chapter 4. IBM NeXtScale nx360 M4 compute node...
  • Page 122 Figure 4-20 shows the IBM NeXtScale nx360 M4 operator panel, with its check log LED (yellow), system error LED (yellow), and QR code. The LEDs are defined as follows: Power button (green): Indicates the following power state of the server: – Off: AC power is not present.
  • Page 123: Remote Server Management

    (MTM), followed by the serial number of the server. 4.15 Remote server management Each IBM NeXtScale nx360 M4 compute node has an Integrated Management Module II (IMM2) onboard and uses the Unified Extensible Firmware Interface (UEFI) to replace the older interface.
  • Page 124 IMM restarts the server when the IMM detects an operating-system hang condition. A system administrator can use the blue-screen capture to help determine the cause of the hang condition. Table 4-30 on page 107 lists the remote management options. IBM NeXtScale System Planning and Implementation Guide...
  • Page 125: External Disk Storage Expansion

    ServeRAID M5120 SAS/SATA Controller for IBM System x Hardware upgrades 81Y4487 A1J4 ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM System x 81Y4559 A1WY ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x Features on Demand Upgrades*...
  • Page 126 Based on the LSI SAS2208 6 Gbps ROC controller Supports connectivity to the EXP2512 and EXP2524 storage expansion enclosures For more information, see the IBM Redbooks Product Guide ServeRAID M5120 SAS/SATA Controller for IBM System x, TIPS0858: http://www.redbooks.ibm.com/abstracts/tips0858.html?Open The ServeRAID M5120 SAS/SATA Controller supports connectivity to the IBM System Storage external expansion enclosures that are listed in Table 4-32.
  • Page 127 2.5" NL SAS HS HDDs 49Y1898 500GB 7,200 rpm 6Gb SAS NL 2.5" HDD 81Y9952 1TB 7,200 rpm 6Gb SAS NL 2.5" HDD 2.5" SAS HS HDDs 49Y1896 146GB 15,000 rpm 6Gb SAS 2.5" HDD Chapter 4. IBM NeXtScale nx360 M4 compute node...
  • Page 128: Physical Specifications

    200GB 6Gb SAS 2.5" SSD 49Y6077 400GB 6Gb SAS 2.5" SSD 4.17 Physical specifications The IBM NeXtScale nx360 M4 features the following physical specifications: Dimensions: – Height: 41 mm (1.6 in) – Depth: 658.8 mm (25.9 in) – Width: 216 mm (8.5 in) –...
  • Page 129 Chassis is removed from original shipping container and is installed but not in use, for example, during repair, maintenance, or upgrade. The equipment acclimation period is 1 hour per 20°C of temperature change from the shipping environment to the operating environment. Condensation, but not rain, is acceptable. Chapter 4. IBM NeXtScale nx360 M4 compute node...
  • Page 130: Operating Systems Support

    For more information about the limits for particulates and gases, see “Particulate contamination” on page 229 in the IBM NeXtScale nx360 M4 Installation and Service Guide. 4.18 Operating systems support The server supports the following operating systems:...
  • Page 131 VMware vSphere (ESXi) 5.1 (U2) VMware vSphere (ESXi) 5.5 a. Support is planned For the latest information about the specific versions and service levels that are supported and any other prerequisites, see the following IBM ServerProven® website: http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/matrix.shtml Chapter 4. IBM NeXtScale nx360 M4 compute node
  • Page 132 IBM NeXtScale System Planning and Implementation Guide...
  • Page 133: Chapter 5. Rack Planning

    Rack planning Chapter 5. A NeXtScale System configuration can consist of many chassis, nodes, switches, cables, and racks. In many cases, it is relevant for planning purposes to think of a system in terms of racks or multiple racks. In this chapter, we describe best practices for configuration of the individual racks.
  • Page 134: Power Planning

    PRS4401, which is available at this website: http://ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401 NeXtScale System offers N+1 and N+N power supply redundancy policies at the chassis level. To minimize system cost, it is expected that N+1 or non-redundant power configurations are used; therefore, this is how the power system was optimized.
  • Page 135: Examples

    Tip: In some countries, single-phase power can also be used in such configurations. For more information, see IBM NeXtScale System Power Requirements Guide, PRS4401, which is available at this website: http://ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401 5.1.1 Examples...
  • Page 136 Figure 5-1 Six chassis and six switches that are connected to three PDUs The PDUs that are shown are in vertical pockets in the rear of an IBM 42U 1100mm Enterprise V2 Dynamic Rack. The rack is described in 5.4.1, “The IBM 42U 1100mm Enterprise V2 Dynamic Rack”
  • Page 137 This configuration requires “Y” power cables for the chassis power supplies. Part numbers are listed in Table 5-1. Table 5-1 Y cable part numbers Part number Description 00Y3046 1.345 m, 2x C13 to C14 Jumper Cord, Rack Power Cable 00Y3047 2.054 m, 2x C13 to C14 Jumper Cord, Rack Power Cable Figure 5-2 on page 120 shows the connections from four 1U PDUs.
  • Page 138 This configuration can provide power redundancy to the chassis, servers, and optional devices that are installed in the rack, depending on the specifications of the equipment. IBM NeXtScale System Planning and Implementation Guide...
  • Page 139: Pdus

    200 - 240 V 60 A, three-phase power or 380 - 415 V 32 A, three-phase power. 0U PDUs: The IBM 0U PDU should not be used in the IBM 42U 1100mm Enterprise V2 Dynamic Rack with the NeXtScale n1200 Enclosure because there is inadequate clearance between the rear of the chassis and the PDU.
  • Page 140: Ups Units

    For more information about PDUs, see the IBM Redbooks Product Guides that are available at this website: http://www.redbooks.ibm.com/portals/systemx?Open&page=pg&cat=power 5.1.3 UPS units There are several rack-mounted UPS units that can be used with the NeXtScale systems, which are listed in Table 5-4. In larger configurations, UPS service is often supplied at the data center level.
  • Page 141 The blank filler panel kits that are listed in Table 5-5 can be used next to the NeXtScale n1200 Enclosure. However, positions next to switches that are mounted in the front of the rack require the use of the IBM 1U Pass Through Bracket (part number 00Y3011), as described in 5.4.3, “Rack options” on page 135. This bracket is required if there is an empty 1U space above or below a front-mounted switch to prevent air recirculation from the rear to the front of the switch.
  • Page 142: Density

    It also often requires the rack or row to be powered off if the enclosure must be opened for maintenance of the equipment or the enclosure. IBM considered the merits of these options, and suggests the use of rear door heat exchangers, as described in 5.5, “Rear Door Heat Exchanger” on page 141.
  • Page 143: Racks

    5.4 Racks In this section, we describe installing NeXtScale System in the IBM 42U 1100mm Enterprise V2 Dynamic Rack because this is the rack that is used in our Intelligent Cluster solution. It is also the rack we recommend for NeXtScale System implementations.
  • Page 144 Exchanger (which is described in 5.5, “Rear Door Heat Exchanger” on page 141) is added, the rack is two standard data center floor tiles (1200 mm) deep. The Enterprise V2 in the name signifies that it is IBM’s second version of open systems rack for the enterprise.
  • Page 145 The following features also are included: A front stabilizer bracket, which can be used to secure the rack to the floor, as shown in Figure 5-6. Bolts Stabilizer bracket Figure 5-6 Rack with stabilizer bracket attached, being bolted to the floor As shown in Figure 5-7 on page 128, a recirculation plate is used to prevent warm air that is coming from the rear of the rack from passing under the rack and into the front of the servers.
  • Page 146 Height of less than 80 inches, which enables it to fit through the doors of most elevators and doorways. Reusable, ship-loadable packaging. For more information about the transportation system, see this website: http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5091922 IBM NeXtScale System Planning and Implementation Guide...
  • Page 147 Six 1U vertical mounting brackets in the rear post flanges, which can be used for power distribution units, switches, or other 1U devices, as shown in Figure 5-8. Cable openings M6 clip nuts Figure 5-8 1U Power distribution unit mounted in 1U pocket on flange of rear post Chapter 5.
  • Page 148 126. Also included are attachment points above and below these openings to hold cables out of the way and reduce cable clutter. New versions of this rack have four openings in each side wall.
  • Page 149: Installing Nextscale System In Other Racks

    5.4.2 Installing NeXtScale System in other racks It is possible to install NeXtScale System chassis in other racks. The IBM NeXtScale n1200 Enclosure can be installed in most industry-standard, four-post server racks. Figure 5-10 on page 132 shows the dimensions of the IBM NeXtScale n1200 Enclosure and included rail kit.
  • Page 150 B. The thickness of rack EIA flanges should be 2 mm - 4.65 mm. IBM provided cage and clip nuts do not fit on material thicker than 3 mm.
  • Page 151 M. Mounts use 7.1 mm round or 9.5 mm square hole racks. Tapped hole racks with standard rail kit are not supported. N. NeXtScale System data and management cables attach to the front of the node (chassis). Cable routing space must be provided at the front of the rack, inside or outside the rack.
  • Page 152: Cable Routing

    Figure 5-12 shows a top view of the cable management bracket kit and the potential locations for cable bundles in front of the switch.
  • Page 153: Rack Options

    Seal openings under the front of the racks. 5.4.3 Rack options The rack options that are listed in Table 5-8 are available for the IBM Enterprise V2 Dynamic Racks and other racks. Table 5-8 Rack option part numbers...
  • Page 154 43V6147 IBM Single Cable USB Conversion Option (UCO) 39M2895 IBM USB Conversion Option (four Pack UCO) 39M2897 IBM Long KVM Conversion Option (four Pack Long KCO) 46M5383 IBM Virtual Media Conversion Option Gen2 (VCO2) 46M5382 IBM Serial Conversion Option (SCO) The options that are listed in Table 5-9 are unique to configuring NeXtScale environments.
  • Page 155 The IBM 1U Pass Through Bracket is shown in Figure 5-13. The front of the component has brushes to block the airflow around any cables that are passed through it. It can also serve to block air flow around a switch that is recessed in the rack (for cable routing reasons) or around a blank filler panel.
  • Page 156 Figure 5-14. This kit includes four brackets, which are enough for one rack (two brackets are installed on each side of the rack). Cable management brackets (x2) M5 screws (x10) M5 clip nut (x10) Figure 5-14 Cable management bracket IBM NeXtScale System Planning and Implementation Guide...
  • Page 157 The IBM Rack and Switch Seal Kit (part number 00Y3001) has the following purposes: Provides a means to seal the opening in the bottom front of the IBM 42U 1100 mm Enterprise V2 Dynamic Rack. The opening at the bottom front of the rack must be sealed if a switch is at the front of the rack in U space one.
  • Page 158 Figure 5-16 shows the components of the IBM Rack and Switch Seal Kit: switch seals (the kit contains 12 pieces, enough for six switches, two per switch) and seal kit foam blocks (the kit contains two pieces, enough for one rack, two per rack). Figure 5-17 shows where these pieces are used.
  • Page 159: Rear Door Heat Exchanger

    5.5 Rear Door Heat Exchanger There is a Rear Door Heat Exchanger available for the IBM 42U 1100mm Enterprise V2 Dynamic Rack. It has the following features: It attaches in place of the perforated rear door and adds 100 mm, which makes the overall package 1200 mm (the depth of two standard data center floor tiles).
  • Page 160 The part number for the Rear Door Heat eXchanger for IBM 42U 1100 mm rack is shown in Table 5-10 on page 143.
  • Page 161: Top Of Rack Switches

    Table 5-10 Part number for Rear Door Heat eXchanger for IBM 42U 1100 mm rack Part number Description 175642X Rear Door Heat eXchanger for IBM 42U 1100 mm Enterprise V2 Dynamic Racks For more information, see Rear Door Heat eXchanger V2 Type 1756 Installation and Maintenance Guide, which is available at this website: http://www.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5089575...
  • Page 162 As part of the Intelligent Cluster program, there are original equipment manufacturer (OEM) switches (which are listed in Table 5-12) that can be ordered from IBM. These switches are covered under the same warranty or maintenance agreement as the rest of the Intelligent Cluster.
  • Page 163: Infiniband Switches

    SAN storage, which then share storage via a parallel file system (such as IBM’s GPFS) to the rest of the servers. Table 5-14 on page 146 lists the current IBM 16 Gb Fibre Channel switch offerings. These switches should be mounted facing the rear of the rack because they all have rear to front (port side) cooling air flow.
  • Page 164: Rack-Level Networking: Sample Configurations

    Cisco MDS 9148 for IBM System Storage 2498B80 IBM System Storage SAN80B-4 For more information about IBM Fibre Channel switches, see this website: http://www.ibm.com/systems/networking/switches/san/ 5.7 Rack-level networking: Sample configurations In this section, we describe the following sample configurations that use...
  • Page 165: Non-Blocking Infiniband

    For more information about networking with IBM NeXtScale System, see IBM NeXtScale System Network and Management Cable Guide, PRS4401, which is available at this website: http://ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401 5.7.1 Non-blocking InfiniBand Figure 5-19 shows a non-blocking InfiniBand configuration. On the left side is a rack with six chassis for a total of 72 compute nodes.
  • Page 166: 50% Blocking Infiniband

    Figure 5-20 shows the 50% blocking InfiniBand configuration (the diagram labels include a 48-port 1 GbE switch with 36 ports down and two 10 G uplinks). Filler panel: Filler panels (part number 00Y3011) are placed in rack units 21 and 41 to prevent hot air recirculation.
  • Page 167: Gb Ethernet, One Port Per Node

    5.7.3 10 Gb Ethernet, one port per node Figure 5-21 shows a network with one 10 Gb Ethernet connection per compute node. On the left side is a rack with six chassis for a total of 72 CPU nodes. Two 48 port 10 Gb switches and two 48 port 1 Gbps Ethernet switches provide the network connectivity.
  • Page 168: Gb Ethernet, Two Ports Per Node

    The location of the chassis and the switches within the rack are shown in a way that optimizes the cabling of the solution. The chassis and switches are color-coded to indicate which InfiniBand or Ethernet switches support which chassis. IBM NeXtScale System Planning and Implementation Guide...
  • Page 169: Management Network

    In Figure 5-22 on page 150, each node has two colors. This indicates that each node is connected to two different switches to provide redundancy. Filler panel: A 1U filler (part number 00Y3011) is placed in rack unit 22 to prevent hot air recirculation.
  • Page 170 Controller, which is at the rear of the chassis. Each PDU can also have a management port. Note: The management cables that connect to devices at the rear of the chassis should be routed to the front of the chassis via the cable channels. IBM NeXtScale System Planning and Implementation Guide...
  • Page 171: Chapter 6. Factory Integration And Testing

    Factory integration and Chapter 6. testing IBM provides factory integration and testing as part of the Intelligent Cluster offering. Recently, the Intelligent Cluster testing process was enhanced, and all NeXtScale System configurations that are integrated by IBM benefit. This chapter describes what IBM factory integration provides, what testing is performed, and the documentation that is supplied.
  • Page 172: Ibm Standards

    6.1 IBM standards IBM standards for factory integration are based on meeting a broad range of criteria, including the following criteria: Racks are one standard floor tile wide (or deep in the case of iDataPlex), and fit through 80-inch doorways.
  • Page 173: Testing

    6.2 Testing The following tasks are typical of the testing IBM does at the factory. Other testing might be done based on unique hardware configurations or client requirements: All servers are fully tested as individual units before rack integration. After they are installed in the rack, there is a post assembly inspection to assure all components are installed and positioned correctly.
  • Page 174 This testing is meant to verify the hardware and cabling only. Any software installations require additional Cluster Enablement Team, IBM Global Services, or third-party services.
  • Page 175: Documentation That Is Provided

    6.3 Documentation that is provided This section describes the documentation that IBM provides with the hardware. IBM manufacturing uses the extreme Cloud Administration Toolkit (xCAT) to set up and test systems. xCAT is also used to document the manufacturing setup.
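    For illustration, the following minimal sketch shows the kind of xCAT commands that can be used to verify a delivered system against the shipped documentation (the node name compute01 is a placeholder, and command availability depends on the installed xCAT version):

      lsdef compute01          # show the stored xCAT definition of the node
      rpower compute01 stat    # query the node power state through its service processor
      rinv compute01 serial    # read serial numbers for inventory checking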
  • Page 176 Figure 6-1 Example of HPLinpack output graph Although open systems can be racked, cabled, configured, and tested by users, we encourage clients to evaluate the benefits of having IBM integrate and test the system before delivery. We also suggest engaging the IBM Lab Services cluster enablement team (CET) to speed the commissioning after the equipment arrives.
  • Page 177: Chapter 7. Hardware Management

    Hardware management Chapter 7. This chapter describes the available options from IBM for managing an IBM NeXtScale System environment. We describe the management capabilities and interfaces that are integrated in the system and the middleware and software layers that are often used in High Performance Computing to manage clusters.
  • Page 178: Managing Compute Nodes

    105. 7.1.1 Integrated Management Module II In 2009, IBM introduced the IPMI-compliant service processor that is called the Integrated Management Module (IMM). The IMM and the second-generation IMM2 are common across all IBM x86 machines, including the IBM NeXtScale nx360 M4 compute node.
  • Page 179 The IMM2 can be accessed and controlled through any of the following methods: a command-line interface (CLI over Telnet or SSH); a web interface (if the IMM Standard and Advanced FoD upgrades are available); IPMI 2.0 (local or remote); the Advanced Settings Utility (ASU); and SNMP v1 and v3. Figure 7-1 shows the available IMM2 access methods.
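    Because the IMM2 is IPMI 2.0-compliant, standard IPMI tooling can also reach it out of band. The following is a minimal sketch that uses the open source ipmitool utility; the IP address and the USERID/PASSW0RD credentials are placeholders for your environment:

      ipmitool -I lanplus -H 192.168.70.125 -U USERID -P PASSW0RD chassis status
      ipmitool -I lanplus -H 192.168.70.125 -U USERID -P PASSW0RD sel list    # read the event log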
  • Page 180 IMM and the Ethernet port. Figure 7-2 displays both ports that allow access to the IMM: the Eth1/IMM2 shared port and the IMM2 dedicated port.
  • Page 181 Dedicated and shared mode are exclusive. If shared mode is selected, the IMM2 dedicated port becomes disabled and the IMM can be accessed only through the Eth1 interface. The way IMM2 is accessed can be selected manually through the F1 UEFI setup menu or by using the ASU tool, which allows you to modify firmware settings through a command-line interface (CLI), remotely or locally on the node.
  • Page 182: Unified Extensible Firmware Interface

    > /opt/ibm/toolscenter/asu/asu64 set IMM.SharedNicMode Shared
    IBM Advanced Settings Utility version 9.41.81K
    Licensed Materials - Property of IBM
    (C) Copyright IBM Corp. 2007-2013 All Rights Reserved
    Successfully discovered the IMM via SLP.
    Discovered IMM at IP address 169.254.95.118
    Connected to IMM at IP address 169.254.95.118
    IMM.SharedNicMode=Shared...
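    To confirm such a change, the corresponding ASU query commands can be used; the showvalues command lists the values that a setting accepts. A minimal sketch (output abbreviated):

      > /opt/ibm/toolscenter/asu/asu64 show IMM.SharedNicMode
      IMM.SharedNicMode=Shared
      > /opt/ibm/toolscenter/asu/asu64 showvalues IMM.SharedNicMode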
  • Page 183 Complete out-of-band coverage by ASU simplifies remote setup. More functionality, better user interface, easier management for users. For more information about the UEFI, see the IBM white paper, Introducing UEFI-Compliant Firmware on IBM System x and BladeCenter servers, which is available at this website: http://www.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5083207...
  • Page 184 In most operating conditions, the default settings provide the best performance possible without wasting energy during off-peak usage. However, for certain workloads, it might be appropriate to change these settings to meet specific power to performance requirements. IBM NeXtScale System Planning and Implementation Guide...
  • Page 185 The UEFI provides several predefined setups for commonly wanted operating conditions. These predefined values are referred to as operating modes. Access the menu in UEFI by selecting System Settings → Operating Modes → Choose Operating Mode. You see the five operating modes from which to choose, as shown in Figure 7-6.
  • Page 186 QPI link, and memory subsystem to the lowest working frequency. Minimal Power provides less heat and the lowest power usage at the expense of performance. Figure 7-7 UEFI operation mode: Minimal Power
  • Page 187 Efficiency - Favor Power Figure 7-8 shows the Efficiency - Favor Power predetermined values. They emphasize power-saving server operation by setting the processors, QPI link, and memory subsystem to a balanced working frequency. Efficiency - Favor Power provides more performance than Minimal Power, but favors power usage. Figure 7-8 UEFI operation mode: Efficiency - Favor Power
  • Page 188 Figure 7-9 shows the Efficiency - Favor Performance predetermined values. They emphasize performance server operation by setting the processors, QPI link, and memory subsystem to a high working frequency. Figure 7-9 UEFI operation mode: Efficiency - Favor Performance
  • Page 189 Custom Mode By using Custom Mode, users can select the specific values that they want, as shown in Figure 7-10. The recommended factory default setting values provide optimal performance with reasonable power usage. However, with this mode, users can individually set the power-related and performance-related options. Figure 7-10 UEFI operation mode: Custom Mode
  • Page 190 This section describes the UEFI settings that are related to system performance. In most cases, increasing system performance increases the power usage of the system.
  • Page 191 Processors Processor settings control the various performance and power features that are available on the installed Xeon processor. Figure 7-12 shows the UEFI Processors system settings window with the default values. Figure 7-12 UEFI Processor system settings panel The following processor feature options are available: Turbo Mode (Default: Enable) This mode enables the processor to increase its clock speed dynamically if the CPU does not exceed the Thermal Design Power (TDP) for which it was...
  • Page 192 This option enables the stream prefetcher. Some applications and benchmarks can benefit from having it enabled. DCU IP Prefetcher (Default: Enable) This option enables the Instruction Pointer prefetcher. Some applications and benchmarks can benefit from having it disabled.
  • Page 193 Memory The Memory settings window provides the available memory operation options, as shown in Figure 7-13. Figure 7-13 UEFI Memory system settings panel The following memory feature options are available: Memory Mode (Default: Independent) This option selects memory mode at initialization. Independent, mirroring, or sparing memory mode can be selected.
  • Page 194: ASU

    – Adaptive: Use Adaptive Page Policy to decide the memory page state. 7.1.3 ASU By using the IBM ASU tool, users can modify firmware settings from the command line on multiple operating-system platforms. You can perform the following tasks by using the utility: Modify selected basic input/output system (BIOS) CMOS settings without restarting the system to access F1 settings.
  • Page 195 Note: By using the ASU utility, you can generate a BIOS definition file from an existing system. A standard definition file is not provided on the IBM support site. By using the ASU utility command line, you can read or modify firmware settings of a single node.
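Settings can also be read or changed over the LAN without logging in to the node's operating system; a minimal sketch, assuming the node's IMM2 is reachable at the hypothetical address 192.168.0.50 and uses the default IMM credentials:

> /opt/ibm/toolscenter/asu/asu64 show IMM.SharedNicMode --host 192.168.0.50 --user USERID --password PASSW0RD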
  • Page 196: Firmware Upgrade

    Bootable Media Creator IBM released a tool that is called Bootable Media Creator with which you can bundle multiple System x updates from UpdateXpress System Packs and create a single bootable media (such as CD/DVD, USB flash drive, or a file set for PXE boot).
  • Page 197: Managing The Chassis

The IBM NeXtScale n1200 Enclosure does not contain a fully functional management module that allows management operations to be performed on the installed nodes. The FPC module is kept simple and, as with the IBM iDataPlex system, compute nodes are managed through their IMM interfaces rather than through a rich management module in the chassis.
  • Page 198: FPC Web Browser Interface

Figure 7-15 Fan and Power Controller log in page 3. Click Log in. Note: The web browser interface supports the following minimum browser levels: Internet Explorer 6 or later Firefox 2.0.x or later
  • Page 199 After you are logged in, the main page shows the following main functions on the left side of the page, as shown in Figure 7-16: Summary: Displays the enclosure overall status and information. It introduces the chassis front view and rear view components and provides the status of the components (compute nodes, power supply units, fans, and so on).
  • Page 200 Virtual Reseat: Users can remotely power cycle an entire node. Reseat provides a way to emulate the physical disconnection of a node. After a virtual reset or reseat, the node IMM takes up to two minutes to become ready.
  • Page 201 Rear Overview tab The rear overview window provides a graphical rear view of the enclosure and a table that lists the status and information regarding power supply units, system fans, and FPC module information. It displays characteristics of the available elements and a summary of health conditions, with which the system administrator can easily identify the source of a problem.
  • Page 202 3.8, “Power management” on page 52. Table 7-3 System fan status table Column Description Status Possible values: Present: fan installed; Not Present: no fan installed Type Possible values: Low performance; High performance (for future use)
  • Page 203 Power The Power function provides the power information about the different enclosure elements. You can configure power redundancy modes, power capping and power-saving policies, and the power restore policy to be used. The following tabs are available: “Power Overview tab” on page 185 “PSU Configuration tab”...
  • Page 204 The power consumption of the chassis depends on the node configuration and the actual load. Figure 7-18 was taken with a minimum node configuration and no load. Power consumption numbers differ in each case.
  • Page 205 N+1: One of the power supplies is redundant, so a single faulty power supply is allowed. N+N: Half of the PSUs that are installed are redundant, so the enclosure can support up to N faulty power supply units. Figure 7-19 Power supply redundancy mode configuration The oversubscription (OVS) option is selectable only when the N+1 or N+N redundancy mode is enabled.
  • Page 206 Figure 7-21 shows power capping windows at node level. The range that is suggested is based on the minimum and maximum power consumption for the nodes that are selected. Figure 7-21 Power capping at node level
  • Page 207 Similar to power capping, power saving can be set at chassis or node level. The following four modes can be selected: Disabled (default static maximum performance mode): The system runs at full speed (no throttling), regardless of the workload. Mode 1 (static minimum power): The system runs in a throttling state regardless of the workload.
  • Page 208 The Cooling function provides information about fan speed for system fans and power supply unit fans. The following tabs are available: Cooling Overview PSU Fan Speed Acoustic Mode These tabs are described next.
  • Page 209 Cooling Overview tab As shown in Figure 7-24, the Cooling Overview tab displays the system fan speeds and their health condition. Each fan is equipped with dual motors, so A displays the primary fan motor speed and B displays the redundant fan motor speed. System fan speed normally operates in the range of 2,000 - 13,000 rpm.
  • Page 210 As shown in Figure 7-26, the Acoustic Mode is set in the Acoustic Mode tab and is intended to reduce the noise of the IBM NeXtScale n1200 Enclosure. The following acoustic levels (which are applied to the chassis as a whole) can be...
  • Page 211 Acoustic mode might force nodes to be power capped to avoid over-heating conditions. For more information, see 3.8.5, “Acoustic mode” on page 56. System Information tab The System Information tab provides fixed Vital Product Data (VPD) information for the chassis, the midplane, and the FPC module. Figure 7-27 and Figure 7-28 show the system information VPD windows for the chassis and the midplane.
  • Page 212 USB: Selected power supply redundancy policy; oversubscription mode; power capping and power-saving values at chassis and node level; acoustic mode settings; power restore policy
  • Page 213 All of these settings are volatile, so when FPC is rebooted, the FPC restores the settings from the internal USB automatically. The configuration settings that are related to network, SNMP, and so on, are non-volatile, so they remain in FPC memory between reboots.
  • Page 214 A table shows the actual firmware version, the new firmware version, and a preserve existing settings option that must be selected to keep the settings. After the firmware update is performed, the FPC is rebooted. Figure 7-31 Selection window for firmware update
  • Page 215 SMTP tab The FPC module allows SMTP to be configured so that events are sent to the destination email addresses through an SMTP server, as shown in Figure 7-32. The Global Alerting Enable option on the Platform Event Filters (PEF) tab must be selected to enable SMTP alerts, with no filtering applied so that all the events are sent.
  • Page 216 Platform Event Filter (PEF) tab. The Global Alerting Enable option in the PEF tab must be selected so that the SNMP traps are enabled. Figure 7-33 SNMP configuration tab
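Once SNMP is configured, the FPC can also be queried with standard open source tools. A minimal sketch using the Net-SNMP snmpwalk utility, assuming SNMPv1, a hypothetical community string of public, and an FPC at 192.168.0.100 (this also assumes that the FPC agent answers SNMP GET requests, not only traps):

snmpwalk -v1 -c public 192.168.0.100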
  • Page 217 Platform Filter Events tab In the PEF tab, you can configure the type of events that are sent as SNMP traps, as shown in Figure 7-34. You also can enable or disable SNMP and SMTP alerting by selecting the Global Alerting Enable option. Figure 7-34 Platform Event Filters window Network Configuration tab In the Network Configuration tab, users can configure the network setting for the...
  • Page 218 Administrator: Full access to all web pages and settings. Operator: Full access to all web pages and settings except the User Account page. User: Full access to all pages and settings except the Configuration function tab.
  • Page 219 Figure 7-37 User Configuration window Web Service tab Users can configure the web interface ports for HTTP and HTTPS access in the Web Service tab, as shown in Figure 7-38. Figure 7-38 Configuration of the HTTP/HTTPS ports for the web browser interface
  • Page 220: FPC IPMI Interface

7.2.2 FPC IPMI interface The FPC module on the IBM NeXtScale n1200 Enclosure supports IPMI over LAN access. The FPC complies with the IPMI v2.0 standard and uses extensions and OEM IPMI commands to access FPC module-specific features. The IPMI interface can be used to develop wrappers with which users can remotely manage and configure multiple FPC modules at the same time.
  • Page 221 Table 7-9 on page 206 lists Fan IPMI commands Table 7-10 on page 207 lists LED IPMI commands Table 7-11 on page 207 lists Node IPMI commands Table 7-12 on page 209 lists Miscellaneous IPMI commands “Examples” on page 209 shows the usage of IPMI interface with some examples Table 7-4 Power supply unit (PSU) IPMI commands Description...
  • Page 222 Chassis: 0x0d Response Data: Byte 1: Completion code (0x00) or out of range (0xC9) Byte 2: Capping disable / enable Byte 3: Capping value LSB Byte 4: Capping value MSB Byte 5: Saving mode
  • Page 223 Table 7-6 Power redundancy IPMI commands Description NetFn Data Get PSU policy 0x32 0xa2 Request Data: None Response Data: Byte 1: Completion code (0x00) Byte 2: PSU Policy 0: No redundancy 1: N+1 2: N+N Byte 3: Oversubscription mode (0: disable; 1: enable) Byte 4: Power bank LSB Byte 5: Power bank MSB Set PSU policy...
  • Page 224 Byte 1: PSU FAN number (0x01-0x06 FAN 1-6) Response Data: Byte 1: Fan speed LSB (rpm) Byte 2: Fan speed MSB (rpm) Byte 3: Fan speed (0-100%) Byte 4: Fan health 0: Not present 1: Abnormal 2: Normal
  • Page 225 Table 7-10 LED IPMI commands Description NetFn Data Get Sys LED (command to get FPC LED status) 0x32 0x96 Request Data: None Response Data: Byte 1: Completion code (0x00) Byte 2: SysLocater LED Byte 3: CheckLog LED Possible values are 0: Off 1: On 2: Blink (SysLocater LED only) Set Sys LED: 0x32 0x97...
  • Page 226 Byte 1: Completion code (0x00) Byte 2: Power minimum (LSB) Byte 3: Power minimum (MSB) Byte 4: Power average (LSB) Byte 5: Power average (MSB) Byte 6: Power maximum (LSB) Byte 7: Power maximum (MSB)
  • Page 227 Table 7-12 Miscellaneous IPMI commands Description NetFn Data Set Time 0x32 0xa1 Request Data: Byte 1: Year MSB (1970 - 2037) Byte 2: Year LSB (1970 - 2037) Byte 3: Month (0x01-0x12) Byte 4: Date (0x01-0x31) Byte 5: Hour (0x00-0x23) Byte 6: Minute (0x00-0x59) Byte 7: Second (0x00-0x59) Example: Year 2010 (byte1: 0x20;...
  • Page 228 To get the power supply fan status, use the following command (see Table 7-9 on page 206 for the syntax): ipmitool -I lanplus -U USERID -P PASSW0RD -H 192.168.0.100 raw 0x32 0xa5 0x1 The response is: b0 14 14 02
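Reading the response against Table 7-9: bytes b0 and 14 are the fan speed LSB and MSB (0x14b0 = 5296 rpm), the third byte 14 is the fan speed as a percentage (0x14 = 20%), and the final byte 02 indicates a healthy fan. Other FPC queries follow the same pattern. For example, the power supply redundancy policy (Table 7-6) and the FPC LED status (Table 7-10) can be read with the following commands, which reuse the same placeholder address and credentials:

ipmitool -I lanplus -U USERID -P PASSW0RD -H 192.168.0.100 raw 0x32 0xa2
ipmitool -I lanplus -U USERID -P PASSW0RD -H 192.168.0.100 raw 0x32 0x96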
  • Page 229: ServeRAID C100 Drivers

IBM ServeRAID C100 RAID support must be enabled by pressing F1 at the setup menu. By using the F1 setup menu, the MegaCLI command-line utility, or the MegaRAID Storage Manager, a storage configuration must be created before the software RAID capabilities can be used.
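As an illustration of the command-line route, the following MegaCLI sketch lists the virtual drives that are defined on the first controller. The binary name and installation path vary by operating system and MegaCLI version, so treat both as assumptions:

/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0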
  • Page 230: VMware vSphere Hypervisor (ESXi)

    The VMware ESXi embedded hypervisor software is a virtualization platform with which multiple operating systems can be run on a host system at the same time. IBM provides different versions of VMware ESXi customized for IBM hardware that can be downloaded from this website: http://ibm.com/systems/x/os/vmware/...
  • Page 231: Chapter 8. Software Stack

Chapter 8. Software stack. This chapter describes the available software options from IBM for managing an IBM NeXtScale cluster. The products that are described in this chapter are part of the software stack that IBM provides for technical computing, high-performance computing, analytics, and cloud environments.
  • Page 232: eXtreme Cloud Administration Toolkit (xCAT)

8.1 eXtreme Cloud Administration Toolkit (xCAT) The eXtreme Cloud Administration Toolkit (xCAT) 2 is an open source initiative that was developed by IBM to support the deployment of large high-performance computing (HPC) clusters that are based on various hardware platforms. xCAT 2 is not an evolution of the earlier xCAT 1.
  • Page 233 Figure 8-1 xCAT hierarchical cluster management (the figure shows the xCAT CLI/GUI, the xcatd management path, the out-of-band path, and the monitoring plug-in modules) Automatic discovery: single power button press, physical location-based discovery and configuration capability.
  • Page 234 The following hardware control features are included: Power control (power on, off, cycle, and current state) Event logs Boot device control (full boot sequence on IBM System BladeCenter, next boot device on other systems) Sensor readings (temperature, fan speed, voltage, current, and fault...
  • Page 235: IBM Platform Cluster Manager

    Machines that are based on the Intelligent Platform Management Interface (IPMI) Because only IBM hardware is available for testing, this is the only hardware supported. However, other vendors’ hardware can be managed with xCAT. For more information and the latest list of supported hardware, see this website: http://xcat.sourceforge.net/...
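The hardware control features translate into short commands that are run from the xCAT management node. A minimal sketch, assuming a compute node that is defined in the xCAT database under the hypothetical name node01:

rpower node01 stat      (query the current power state)
rsetboot node01 net     (set the next boot device to network)
rpower node01 boot      (power cycle or power on the node)
rvitals node01 temp     (read the temperature sensors)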
  • Page 236 Table 8-1 IBM Platform Cluster Manager SE versus IBM Platform Cluster Manager AE Feature Standard Edition Advanced Edition Physical provisioning Server monitoring Other hardware monitoring IBM Platform HPC integration VM provisioning Multiple cluster support Self-service portal Storage management Network management Supported Environments IBM Platform Load Sharing Facility®...
  • Page 237 MPI drivers, and the application software. Administrators can use the portal to rapidly deploy a mix of technologies, such as IBM Platform Load Sharing Facility (LSF), IBM Platform Symphony, Hadoop, and most other third-party workload managers.
  • Page 238: IBM General Parallel File System

    IBM GPFS provides the following features: Clustered Network File System (CNFS) By using IBM GPFS, you can configure a subset of the nodes in the cluster to provide a highly available solution for exporting GPFS file systems by using Network File System (NFS). The participating nodes are designated as Cluster NFS (CNFS) member nodes and the entire setup is frequently referred to as CNFS or a CNFS cluster.
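CNFS member nodes are designated by assigning each one a CNFS IP address. A minimal sketch, assuming an existing GPFS cluster, a member node with the hypothetical name gpfsnode01, and 10.1.1.10 as the address that NFS clients use:

mmchnode --cnfs-interface=10.1.1.10 -N gpfsnode01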
  • Page 239 AFM is easy to deploy because it relies on open standards for high-performance file serving and does not require any proprietary hardware or software to be installed at the home cluster. The following features that use IBM GPFS capabilities are available through a different licensing: IBM GPFS File Placement Optimizer (FPO) IBM GPFS FPO extends GPFS for a new class of data-intensive applications, which are commonly referred to as big data applications.
  • Page 240: IBM GPFS FPO

JBOD enclosures, and IBM GPFS Native RAID, which easily enables the procurement of a scalable storage system. For more information about IBM GPFS FPO and IBM GPFS Native RAID, see the following sections: 8.3.1, “IBM GPFS FPO”...
  • Page 241 Figure 8-3 Map Reduce Environment that uses GPFS FPO For organizations that want to run more applications on their cluster in a multi-tenant environment, IBM Platform Symphony Advanced Edition can also be deployed. Platform Symphony is a complementary technology to GPFS FPO that provides the flexibility to run Hadoop MapReduce applications with various other applications on the same shared computing environment.
  • Page 242: IBM System x GPFS Storage Server

    Deploying solutions for technical computing, HPC, analytics, and cloud environments can place a significant burden on IT. The IBM System x GPFS Storage Server, which is fulfilled through the IBM Intelligent Cluster, uses decades of IBM experience to reduce the complexity of deployment with integrated, delivered, and fully supported solutions that match best-in-industry components with optimized solution design.
  • Page 243 Two base configurations: Model 24 (four JBODs) and Model 26 (six JBODs). They allow an easy configuration of a high-performance storage solution that can be attached to the IBM NeXtScale system through InfiniBand or Ethernet networks. Scalable building block approach. The two basic configurations are scalable to large storage configurations by using them as building blocks.
  • Page 244 In the case of a disk failure, all of the disks participate in the reconstruction while the input and output demand is serviced. No hardware controllers. Disk management and RAID are performed by the GPFS Native RAID feature.
  • Page 245: IBM Platform LSF Family

    Administrators must monitor cluster resources and workloads, manage software licenses, identify bottlenecks, monitor service-level agreements (SLAs), and plan capacity. The IBM Platform LSF software family helps address all of these issues. IBM Platform LSF is a powerful workload management platform for demanding, distributed mission-critical HPC environments.
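From the user's perspective, work enters an LSF cluster through batch submission. A minimal sketch, assuming a configured cluster and a hypothetical application binary named myapp:

bsub -n 128 -o output.%J ./myapp    (submit a 128-way job; %J is replaced by the job ID)
bjobs                               (list the jobs of the current user)
bhosts                              (show the load and status of the hosts)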
  • Page 246 IBM Platform Analytics IBM Platform Analytics is an advanced analysis and visualization tool for analyzing massive amounts of IBM Platform LSF and IBM Platform Symphony workload data. You can correlate job, resource, and license data from multiple clusters for data-driven decision making.
  • Page 247: IBM Platform HPC

IBM Platform HPC includes the following components: A user portal that is based on IBM Platform Application Center (IBM PAC) A workload manager that is based on IBM Platform LSF IBM Platform Cluster Manager Standard Edition
  • Page 248 HPC clusters. Cluster provisioning This function is provided by the elements of IBM Platform Cluster Manager Standard Edition that are included in the IBM Platform HPC product. Physical machines (hosts) are provisioned via network boot (Dynamic Host Configuration Protocol [DHCP]) and image transfer (TFTP/HTTP).
  • Page 249 Monitoring and reporting After a cluster is provisioned, IBM Platform HPC provides the means to monitor the status of the cluster resources and jobs to display alerts when there are resource shortages or abnormal conditions, and to produce reports on the throughput and usage of the cluster.
  • Page 250: IBM Platform Symphony Family

Optimized, low-latency MapReduce implementation. Support of compute-intensive and data-intensive problems on a single shared grid of resources. Figure 8-7 on page 233 shows an overview of the elements of IBM Platform Symphony.
  • Page 251: IBM Parallel Environment for x86

C, or C++ programs. IBM PE Runtime Edition consists of components and command-line tools for developing, running, debugging, profiling, and tuning parallel programs. IBM PE Runtime Edition is required on all the compute nodes that run parallel applications.
  • Page 252: IBM Parallel Environment Runtime for x86

    IBM PE Developer Edition consists of two major integrated components: – A client workbench that runs on a desktop or notebook computer – A server that runs on select IBM Power Systems, IBM PureFlex Systems servers, IBM System x systems, iDataPlex servers, and IBM NeXtScale System 8.7.1 IBM Parallel Environment Runtime for x86...
  • Page 253 MPI standard. Collective communications routines for 64-bit programs were enhanced to use shared memory for better performance. The IBM MPI collective communication is designed to use an optimized communication algorithm according to job and data size.
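Parallel jobs are started with the poe command, whose command-line flags mirror the MP_* environment variables. A minimal sketch, assuming a compiled MPI binary named myprog and a host list file named host.list (both hypothetical names):

poe ./myprog -procs 64 -hostfile host.list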
  • Page 254: IBM Parallel Environment Developer Edition for x86

    The IBM PE Developer Edition client (also known as the IBM PE Developer Edition Workbench) software is delivered in the following packages: An all-in-one bundle with Eclipse basics and the IBM PE Developer Edition additions.
  • Page 255 C/C++ Development Tooling (CDT) The CDT provides a fully functional C and C++ IDE that is based on the Eclipse platform and includes the following features: – Support for project creation and managed build for various toolchains – Standard make build –...
  • Page 256 C or Fortran and are running on Red Hat Enterprise Linux 6 on IBM System x with the Intel microarchitecture. The Xprof GUI also supports C++ applications.
  • Page 257: Abbreviations and Acronyms

CMOS complementary metal oxide semiconductor; CNA Converged Network Adapter; CNFS Clustered Network File System; FPC Fan and Power Controller; FPO File Placement Optimizer; FSP Flexible Service Processor; GB gigabyte; GFLOPS Giga floating-point operations per second
  • Page 258 IMPI; IOPS I/O operations per second; IP Internet Protocol; IPMI Intelligent Platform Management Interface; NEMA National Electrical Manufacturers Association; NFS network file system; NIC network interface card; NTP Network Time Protocol
  • Page 259 NVRAM non-volatile random access memory; OEM other equipment manufacturer; OFED OpenFabrics Enterprise Distribution; OS operating system; OVS Oversubscription; PAC Platform Application Center; PAMI Parallel Active Messaging...; RHEL Red Hat Enterprise Linux; RMC Resource Monitoring and Control; ROC RAID-on-card; RSS Receive-side scaling; RTM release to manufacturing; SAN storage area network; SAS Serial Attached SCSI; SATA Serial ATA
  • Page 260 UPC Unified Parallel C; UPS uninterruptible power supply; URL Uniform Resource Locator; USB universal serial bus; UXSP UpdateXpress System Packs; VFA Virtual Fabric Adapter; VLAN virtual LAN; VM virtual machine; VPD vital product data; VPI Virtual Protocol Interconnect; XML Extensible Markup Language
  • Page 261: Related Publications

    IBM Redbooks The following IBM Redbooks publications provide more information about the topic in this document. Some publications that are referenced in this list might be available in softcopy only:...
  • Page 262: Online Resources

ServerProven hardware compatibility page for the IBM NeXtScale n1200 Enclosure: http://ibm.com/systems/info/x86servers/serverproven/compat/us/NeXtScale/5456.html IBM Redbooks Product Guides for IBM System x servers and options: http://www.redbooks.ibm.com/portals/systemx?Open&page=pgbycat Configuration and Option Guide: http://www.ibm.com/systems/xbc/cog/ IBM System x Support Portal: http://ibm.com/support/entry/portal/ Help from IBM IBM Support and downloads http://www.ibm.com/support...

This manual is also suitable for:

NeXtScale n1200, NeXtScale nx360 M4
