2 Understanding Supported Cluster Configurations

This release of Microsoft Windows Compute Cluster 2003 is constrained to certain models and
specific configurations of HP Cluster Platform. Supported configurations are automatically
verified during the order and build process for factory-installed clusters. The information in this
chapter defines the supported configurations and components.
The following components and topics are discussed:
• For information about the limitations on cluster designs, see Section 2.1.
• For definitions of supported cluster topologies, see Section 2.2.
• For the maximum size of a cluster configuration, see Section 2.3.
• To identify the supported InfiniBand HCAs, see Section 2.4.
• To identify the server models supported as nodes, see Section 2.5.
• For information about adding or removing components, see Section 2.6.

2.1 Understanding Configuration Constraints

The configuration of an HP Cluster Platform is flexible, so that it might support several different
operating environments. The cluster architecture is explained in HP Cluster Platform, Cluster
Platform Overview.
To support Microsoft Windows Compute Cluster 2003, a cluster must have the following specific
characteristics:
• The cluster consists of a single head node and a number of compute nodes. The HP Cluster Platform concept of a utility node is not supported.
• Only servers (not workstations) are supported as the head node or compute nodes. The supported server types are defined in Section 2.5.
• The head node requires an optical drive (CD reader).
• Nodes may differ by model, provided the models have identical CPU type, memory, and I/O configurations, compliant with the basic system requirements for HP Cluster Platform. Clusters configured this way support grouping servers into a maximum of four pools (pool 1 through pool 4); a sketch of a check for these constraints follows this list.
• Only the following types of system interconnect are supported as the high-speed Message Passing Interface (MPI) network:
— InfiniBand, which is typically employed in clusters that require a dedicated high-speed
MPI network, segregating job traffic from cluster administration and job management
traffic.
— Gigabit Ethernet, which might be employed for a dedicated MPI network, or as an
in-band network where cluster administration and job management traffic passes over
the same network used for the MPI traffic. (Typical for implementations of Microsoft
Windows Compute Cluster 2003 on HP Cluster Platform Express.)
• The HP Cluster Platform concept of separate console and administrative networks does not apply where Microsoft Windows Compute Cluster 2003 is employed as the operating environment.
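The node and pool constraints above lend themselves to a mechanical check before an order is finalized. The following Python sketch is illustrative only, not an HP or Microsoft tool: the inventory entries, model names, and attribute fields are hypothetical, and a real check would draw this data from the actual order or from each node's management processor.

from collections import defaultdict

# Hypothetical inventory; the values are illustrative, not real HP part data.
nodes = [
    {"name": "n001", "model": "model-A", "cpu": "cpu-X", "mem_gb": 8, "io": "pci-e"},
    {"name": "n002", "model": "model-A", "cpu": "cpu-X", "mem_gb": 8, "io": "pci-e"},
    {"name": "n003", "model": "model-B", "cpu": "cpu-X", "mem_gb": 8, "io": "pci-e"},
]

def validate(nodes, max_pools=4):
    # Nodes may differ by model only if CPU type, memory, and I/O match.
    signatures = {(n["cpu"], n["mem_gb"], n["io"]) for n in nodes}
    if len(signatures) != 1:
        raise ValueError("nodes differ in CPU type, memory, or I/O configuration")
    # Group servers into pools by model, pool 1 through pool 4 at most.
    pools = defaultdict(list)
    for n in nodes:
        pools[n["model"]].append(n["name"])
    if len(pools) > max_pools:
        raise ValueError("more than %d pools requested" % max_pools)
    return {"pool %d" % i: names for i, names in enumerate(pools.values(), start=1)}

print(validate(nodes))  # {'pool 1': ['n001', 'n002'], 'pool 2': ['n003']}

Grouping by model is only one plausible pooling policy; the requirement stated above is simply that the models be interchangeable in CPU, memory, and I/O terms and that no more than four pools exist.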

2.2 Identifying Supported Topologies

Microsoft Windows Compute Cluster 2003 supports five different cluster topologies. Only two
of these topologies map to HP Cluster Platform topologies.
Figure 2-1 shows the simplest topology, based on in-band (shared) use of the cluster's HP ProCurve switch as the MPI fabric. The same network provides routing for cluster administrative and job management traffic. A second network is provided via the site LAN connection.
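In practice, the difference between this in-band topology and a dedicated MPI network comes down to whether MPI traffic and administrative traffic share an IP subnet. The short Python sketch below is a hypothetical illustration of that distinction, not part of the product: the subnet and address values are invented, and on Microsoft Windows Compute Cluster 2003 the MPI traffic would then typically be steered to the chosen subnet through MS-MPI (for example, via the MPICH_NETMASK setting it inherits from MPICH2).

from ipaddress import ip_address, ip_network

# Hypothetical values for illustration; not shipped HP Cluster Platform settings.
ADMIN_SUBNET = ip_network("10.128.0.0/16")  # administration and job-management network
MPI_SUBNET = ip_network("172.16.0.0/16")    # dedicated MPI fabric (GigE or IPoIB), if present

# Addresses one compute node might hold, one per connected fabric.
node_addresses = [ip_address("10.128.0.21"), ip_address("172.16.0.21")]

mpi_addresses = [a for a in node_addresses if a in MPI_SUBNET]

if MPI_SUBNET == ADMIN_SUBNET or not mpi_addresses:
    # One shared fabric: the in-band topology of Figure 2-1, where MPI,
    # administration, and job-management traffic use the same network.
    print("in-band topology: MPI traffic shares the administration network")
else:
    # Separate fabric: MPI traffic uses the dedicated subnet, while
    # administration and job management stay on the other network.
    print("dedicated MPI network: bind MPI traffic to %s" % mpi_addresses[0])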
