007-5483-001
The primary hardware components in an SGI Virtu VN200 system are:
• Head node(s) (SGI Altix XE250 servers)
• Compute nodes (SGI Altix XE320 servers)
• Graphics nodes (SGI Virtu VN200 graphics nodes)
• Network interconnect components (Gigabit Ethernet switches, InfiniBand switches, PCI cards, and cables)
• System console, monitor, keyboard, and mouse
The head node is connected to the interconnect network and also to the public network, typically via the local area network (LAN). The head node is the submission point for all MPI application jobs on the cluster: an MPI job is started from the head node, which distributes the sub-processes to the cluster compute nodes, and the main process on the head node then waits for the sub-processes to finish. For large clusters, or clusters that run many MPI jobs, multiple head nodes may be used to distribute the load.
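The launch-and-wait flow described above can be sketched in miniature: a main process (standing in for the head node) starts worker sub-processes and blocks until they all finish. This is only an illustration of the pattern using Python's multiprocessing module, not the actual MPI machinery; the worker function and node count are hypothetical.

```python
# Sketch of the head-node pattern: the main process launches
# sub-processes (the "compute nodes") and waits for them to finish.
from multiprocessing import Process, Queue

def worker(rank, results):
    # Stand-in for the work an MPI rank would do on a compute node.
    results.put((rank, rank * rank))

def run_job(num_nodes=4):
    results = Queue()
    procs = [Process(target=worker, args=(r, results))
             for r in range(num_nodes)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()  # the "head node" process blocks here until all finish
    return dict(results.get() for _ in range(num_nodes))

if __name__ == "__main__":
    print(run_job())
```

In a real cluster the equivalent step is an `mpirun`/`mpiexec` invocation on the head node, which starts the ranks on the compute nodes over the interconnect and returns when they complete.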
The compute nodes are identical computing systems that run the primary processes of
MPI applications. These compute nodes are connected to each other through the
interconnect network.
A graphics node is similar to a compute node in that it contains processors and memory,
but it has an additional high-performance 3D graphics card installed.
The network interconnect components are typically Gigabit Ethernet or InfiniBand. The
MPI messages are passed across this network between the processes. This compute node
network does not connect directly to the public network because mixing external and
internal cluster network traffic could impact application performance. Graphics (visualization) nodes may be connected to the public network to act as a login or application gateway for remote visualization.
Note: Refer to "Related Publications" on page xiv for a listing of relevant SGI Technical Publications that can provide more detailed information about SGI cluster head nodes, compute nodes, and the system rack.