
Scalable High Speed Clustering - QLogic SANbox 1400 Supplementary Manual



QLogic InfiniPath host channel adapters and switches provide a low latency, high bandwidth connection between servers in a cluster. Compared to the Fibre Channel technology that QLogic is well known for, these products use InfiniBand for server switching instead of storage switching.
Server switching is increasingly being used to deploy applications that previously ran on mainframes and Unix servers onto clusters of commodity servers and server blades. Early adopters of InfiniBand server switching technology in the high performance computing market have already deployed clusters of up to 4,000 server nodes. Server switching technology is also beginning to be used to deploy clustered databases on commodity servers and server blades. QLogic entered the InfiniBand market in 2006, gaining InfiniBand silicon and host channel adapter technology through the acquisition of PathScale, and InfiniBand Edge switches and multi-core Fabric directors through the acquisition of SilverStorm.
ESG Lab Review
Although QLogic claims an industry leading low latency of 1.3 microseconds at the hardware level, ESG
Lab focused its review on higher level application and messaging results which provide a better real-world
indicator of clustered application performance and scalability.
A pair of high performance servers configured in a cluster, as shown in Figure Seven, was observed by ESG Lab. HyperTransport adapters were observed exchanging 7.7 million MPI messages per second that were 32 bytes in length, and throughput of 953 MB/sec was observed when passing messages of 4 KB in length or greater. The MPI benchmark used for this test was derived from a benchmark utility developed by the Supercomputer Center at Ohio State University.
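For context, message rate and bandwidth figures of this kind are typically gathered with a simple two-rank MPI timing loop. The sketch below is an illustrative example only, not the OSU-derived utility used in the ESG Lab test; the 4 KB message size matches the review, while the iteration count, buffer handling, and output format are assumptions made for the example.

/*
 * Minimal two-rank MPI bandwidth / message-rate sketch (illustrative only;
 * not the OSU-derived benchmark used in the ESG Lab test).
 * Build: mpicc -o bw_sketch bw_sketch.c    Run: mpirun -np 2 ./bw_sketch
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE   4096     /* bytes per message (4 KB, as cited in the review) */
#define ITERATIONS 10000    /* assumed iteration count for the example */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with at least 2 ranks, e.g. mpirun -np 2\n");
        MPI_Finalize();
        return 1;
    }

    char *buf = malloc(MSG_SIZE);

    /* Synchronize both ranks, then time a stream of one-way messages. */
    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0)
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        double mbytes = (double)MSG_SIZE * ITERATIONS / (1024.0 * 1024.0);
        printf("Messages/sec: %.0f\n", ITERATIONS / elapsed);
        printf("Throughput:   %.1f MB/sec\n", mbytes / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Production benchmarks such as the OSU suite additionally pipeline many outstanding messages per timing window and sweep across message sizes, which is how peak message-rate and bandwidth figures like those above are normally reported.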
Why This Matters
Data center managers have begun to deploy new applications on clusters of commodity servers. As
these clusters grow in size and importance, the traditional use of Ethernet for messaging between
clustered servers is becoming a performance bottleneck. Taking a cue from the high performance
computing industry, early adopters are taking advantage of the low latency and high bandwidth of
InfiniBand to address this performance problem. Early adopters have also noticed that InfiniBand technology is surprisingly affordable (especially when compared to emerging 10 Gigabit Ethernet). ESG Lab has verified that QLogic InfiniBand adapters are extremely fast and scalable at the application level, where it matters most.
Figure Seven – A Two Node InfiniPath Enabled Cluster
