HP Cluster Platform InfiniBand Interconnect Installation and User's Guide

1 InfiniBand Technology Overview

InfiniBand™ is a specification of the InfiniBand® Trade Association, of which Hewlett-Packard
is a member. The trade association has generated specifications for a 10 Gb/s communications
protocol for high-bandwidth, low-latency server clusters. The same communications protocol
can operate across all system components for computing, communications, and storage as a
distributed fabric. InfiniBand supplements other interconnect technologies such as SCSI I/O,
providing an interconnect for storage and communications networks that is efficient, reliable,
and scalable. The InfiniBand specification defines the physical layers, application layers,
application programming interfaces, and fabric management (the complete I/O stack).
An InfiniBand switched network provides a fabric that has the following features:
•  A high-performance, channel-based interconnect architecture that is modular and highly
   scalable, enabling networks to grow as needed.
•  Hardware management features such as device discovery, device failover, remote boot,
   and I/O sharing (a device-discovery sketch follows this list).
•  High-speed communication among devices such as servers, storage, and I/O, avoiding the
   serialization of data transfers required by shared I/O buses.
•  Inter-processor communication and memory sharing at speeds from 2.5 Gb/s to 30 Gb/s.
•  Advanced fault isolation controls that provide fault tolerance.
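The device discovery mentioned above is visible to cluster software through the OpenFabrics
verbs interface. The following is a minimal sketch, not taken from this guide, of how a program
on a cluster node might enumerate its HCAs and query the first port of each; the single-port
assumption and the build command are illustrative only.

/* Minimal sketch (not from this guide): enumerate InfiniBand HCAs with the
 * OpenFabrics verbs library (libibverbs) and report the state of port 1.
 * Assumes single-port HCAs; build with something like: cc ib_discover.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);

    if (devices == NULL || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (ctx == NULL)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* active_width and active_speed are encoded values defined by the
             * verbs API (for example, width 2 = 4X; speed 1 = 2.5 Gb/s SDR). */
            printf("%s: state=%d lid=0x%x width=%u speed=%u\n",
                   ibv_get_device_name(devices[i]),
                   (int)port.state, (unsigned)port.lid,
                   (unsigned)port.active_width, (unsigned)port.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}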
InfiniBand Network Features
The InfiniBand architecture consists of single data rate (SDR) and double data rate (DDR) channels
created by linking host channel adapters (HCAs) and target channel adapters (TCAs) through
InfiniBand interconnects (switches). The host channel adapters are PCI bus devices installed in a
server, or are integrated into devices such as storage arrays. An HP Cluster Platform consists of
many servers connected in a fat tree (Clos) topology through one or more InfiniBand interconnects;
a fabric-sizing sketch follows the feature list below. Where necessary, a target channel adapter
connects the cluster to remote storage and networks such as Ethernet, creating an InfiniBand
fabric. Features of an InfiniBand fabric are:
Performance:
— The following bandwidth options are supported by the architecture (see the sketch following
   this list):
   • 1X (2.5 Gb/s)
   • 4X (10 Gb/s or 20 Gb/s, depending on configuration)
   • 12X (30 Gb/s)
   • DDR: up to 60 Gb/s
   Note: 1X bandwidth occurs only if a 4X or 12X link auto-negotiates down to 1X because of
   link problems.
— Low latency
— Reduced CPU utilization
— Fault tolerance through automatic path migration
— Physical multiplexing through virtual lanes
— Physical link aggregation
Flexibility:
— Linear scalability
— Industry-standard components and open standards
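The widths and rates above combine multiplicatively: a link's signaling rate is its lane count
times the per-lane rate (2.5 Gb/s for SDR, 5 Gb/s for DDR), and SDR/DDR links carry
8b/10b-encoded data, so roughly 80 percent of the signaling rate is available for payload. The
short sketch below is illustrative arithmetic rather than material from this guide; it prints
both figures for each width.

/* Sketch: how the signaling rates quoted above derive from lane count and
 * per-lane rate, and the usable data rate after 8b/10b encoding (a property
 * of SDR/DDR InfiniBand links, not stated in this guide). */
#include <stdio.h>

int main(void)
{
    const double sdr_lane_gbps = 2.5;        /* per-lane SDR signaling rate */
    const int widths[] = {1, 4, 12};         /* 1X, 4X, 12X link widths */
    const struct { const char *name; double mult; } rates[] = {
        {"SDR", 1.0}, {"DDR", 2.0}
    };

    for (int r = 0; r < 2; r++) {
        for (int w = 0; w < 3; w++) {
            double signaling = widths[w] * sdr_lane_gbps * rates[r].mult;
            double data = signaling * 8.0 / 10.0;   /* 8b/10b encoding overhead */
            printf("%2dX %s: %5.1f Gb/s signaling, %5.1f Gb/s data\n",
                   widths[w], rates[r].name, signaling, data);
        }
    }
    return 0;
}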
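The fat tree (Clos) topology mentioned earlier determines how many servers a fabric of
fixed-radix switches can support without oversubscription. As a hedged illustration (the 24-port
switch radix is an assumption, not a statement about any particular HP interconnect), a two-level
fat tree built from k-port switches dedicates half of each leaf switch's ports to servers and
half to spine uplinks, giving at most k * k/2 servers.

/* Sketch (assumption, not from this guide): sizing a non-blocking two-level
 * fat-tree (Clos) fabric built from identical k-port InfiniBand switches.
 * Each leaf switch uses k/2 ports for servers and k/2 uplinks to spines. */
#include <stdio.h>

int main(void)
{
    const int k = 24;                 /* ports per switch (assumed 24-port switch) */
    int servers_per_leaf = k / 2;     /* downlinks to servers */
    int max_leaves = k;               /* each spine reaches every leaf on one port */
    int spines = k / 2;               /* one spine per leaf uplink */
    int max_servers = max_leaves * servers_per_leaf;

    /* For k = 24 this prints 288 servers, 24 leaf switches, 12 spine switches. */
    printf("%d-port switches: up to %d servers using %d leaf and %d spine switches\n",
           k, max_servers, max_leaves, spines);
    return 0;
}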