Microsoft Windows HPC Server 2008 Installation Guide

2 Supported Configurations

2.1 Configuration Constraints

The configuration of an HP Cluster Platform is flexible, so it can support several different
operating environments. To support HPCS, a cluster must have the following specific
characteristics (a programmatic sketch of these checks follows the list):
•   The cluster consists of a single head node and a number of compute nodes.
•   Only servers (not workstations) are supported as the head node or compute nodes.
•   The head node requires an optical drive.
•   Nodes can differ by model, provided the models have identical CPU type, memory, and
    I/O configurations, compliant with the basic system requirements for HP Cluster Platform.
    Clusters configured this way support grouping servers into up to four pools (pool one
    through pool four).
•   Only the following types of system interconnects are supported as the high-speed Message
    Passing Interface (MPI) network:
    — InfiniBand, which is typically employed in clusters that require a dedicated high-speed
      MPI network, segregating job traffic from cluster administration and job management
      traffic.
    — Gigabit Ethernet, which might be employed for a dedicated MPI network, or as an
      in-band network where cluster administration and job management traffic passes over
      the same network used for the MPI traffic.
•   The HP Cluster Platform concept of separate console and administrative networks does not
    apply where HPCS is employed as the operating environment.
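
The constraints above lend themselves to a simple programmatic check. The following Python
sketch is purely illustrative and is not part of HPCS or any HP tool; the ClusterConfig fields
and the violations function are hypothetical names invented for this example.

    from dataclasses import dataclass

    # Interconnects supported for the high-speed MPI network (per the list above).
    SUPPORTED_INTERCONNECTS = {"infiniband", "gigabit_ethernet"}
    MAX_POOLS = 4  # servers can be grouped into pool one through pool four

    @dataclass
    class ClusterConfig:
        """Hypothetical cluster description; field names are illustrative."""
        head_nodes: int
        compute_nodes: int
        head_node_has_optical_drive: bool
        all_nodes_are_servers: bool  # workstations are not supported
        mpi_interconnect: str
        pool_count: int = 1

    def violations(cfg: ClusterConfig) -> list[str]:
        """Return the Section 2.1 constraints that cfg fails to meet."""
        problems = []
        if cfg.head_nodes != 1:
            problems.append("the cluster must have exactly one head node")
        if not cfg.all_nodes_are_servers:
            problems.append("only servers are supported as head or compute nodes")
        if not cfg.head_node_has_optical_drive:
            problems.append("the head node requires an optical drive")
        if cfg.mpi_interconnect not in SUPPORTED_INTERCONNECTS:
            problems.append(f"unsupported MPI interconnect: {cfg.mpi_interconnect}")
        if not 1 <= cfg.pool_count <= MAX_POOLS:
            problems.append("servers may be grouped into at most four pools")
        return problems

An empty return value means the configuration passes these checks; each string names the
constraint that was violated.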

2.1.1 Maximum System Limits

In this release of HPCS, supported HP Cluster Platform configurations are constrained as
follows (these limits are restated programmatically in the sketch after this list):
•   HP Cluster Platform Express is constrained to clusters ranging from 5 to 33 nodes.
•   HP Cluster Platform is constrained to clusters ranging from 5 to 512 nodes for CP3000 and
    CP4000 models. Models employing blade and c-Class blade servers are constrained to 1024
    nodes.
•   The InfiniBand interconnect is supported at Single Data Rate (SDR) and Double Data Rate
    (DDR).
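
The node-count limits can be captured the same way. The ranges below restate the figures in
this section; note that the lower bound for blade models is not stated here, so the value of
5 used below is an assumption carried over from the other models, and the dictionary keys are
informal labels rather than official product names.

    # Node-count limits from Section 2.1.1 as (minimum, maximum) pairs.
    NODE_LIMITS = {
        "cluster_platform_express": (5, 33),
        "cp3000": (5, 512),
        "cp4000": (5, 512),
        "blade_models": (5, 1024),  # lower bound assumed; only the 1024-node cap is stated
    }

    def within_limits(model: str, node_count: int) -> bool:
        """Check whether node_count is in the supported range for model."""
        low, high = NODE_LIMITS[model]
        return low <= node_count <= high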

2.2 Hardware Requirements

2.2.1 Supported Servers

HP Cluster Platform hardware utilizes several CPU architectures and supports many models of
HP servers.
For the current list of supported servers, see the ProLiant OS support matrix for Windows
Server, available at:
http://h71028.www7.hp.com/enterprise/cache/461942-0-0-0-121.html
Windows HPCS requires 64-bit processors, such as Intel Xeon processors with Intel Extended
Memory 64 Technology (EM64T) or the AMD Opteron family of processors. Windows HPCS supports
both single-core and dual-core processors.
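
On a running Windows system, the 64-bit requirement can be confirmed by inspecting the
reported machine architecture. This is a generic Python check, not an HP or Microsoft utility:

    import platform

    def is_64bit_x86() -> bool:
        """Return True for 64-bit x86 machines such as Intel EM64T (Xeon)
        or AMD Opteron systems. 64-bit Windows reports "AMD64"."""
        return platform.machine().lower() in {"amd64", "x86_64"}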

2.3 Supported Topologies

HPCS supports five different cluster topologies. Each topology has implications for performance
and accessibility. The topologies involve up to three different networks: public, private, and
MPI. In most cases, network address translation (NAT), using the Routing and Remote Access
Service (RRAS), is automatically installed on the head node.
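
Because this section does not enumerate the five topologies, the sketch below fills them in
from Microsoft's standard topology numbering for Windows HPC Server 2008; treat the network
assignments, and the needs_nat helper, as assumptions for illustration rather than statements
from this manual.

    # The five HPC Server 2008 topologies, as the set of networks each uses
    # (numbering per Microsoft's documentation; an assumption, see above).
    TOPOLOGIES = {
        1: {"private"},                   # compute nodes on a private network only
        2: {"public", "private"},         # all nodes public; compute nodes also private
        3: {"private", "mpi"},            # private network plus a dedicated MPI network
        4: {"public", "private", "mpi"},  # all three networks
        5: {"public"},                    # every node directly on the public network
    }

    def needs_nat(topology: int) -> bool:
        """NAT through RRAS on the head node matters when compute nodes sit
        on a private network with no direct public connection."""
        networks = TOPOLOGIES[topology]
        return "private" in networks and "public" not in networks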