Table 5. I/O Cluster Details - Dell PowerEdge EL Manual
Improving NFS Performance on HPC Clusters with Dell Fluid Cache for DAS
Table 5. I/O cluster details

I/O cluster configuration
CLIENTS: 64 PowerEdge M420 blade servers, 32 blades in each of two PowerEdge M1000e chassis.
CHASSIS CONFIGURATION: Two PowerEdge M1000e chassis, each with 32 blades, two Mellanox M4001F FDR10 I/O modules, and two PowerConnect M6220 I/O switch modules.
INFINIBAND FABRIC (for I/O traffic): Each PowerEdge M1000e chassis has two Mellanox M4001 FDR10 I/O module switches. Each FDR10 I/O module has four uplinks to a rack Mellanox SX6025 FDR switch, for a total of 16 uplinks. The FDR rack switch has a single FDR link to the NFS server.
ETHERNET FABRIC (for cluster deployment and management): Each PowerEdge M1000e chassis has two PowerConnect M6220 Ethernet switch modules. Each M6220 switch module has one link to a rack PowerConnect 5224 switch. There is one link from the rack PowerConnect switch to an Ethernet interface on the cluster master node.

I/O compute node configuration
CLIENT: PowerEdge M420 blade server
PROCESSORS: Dual Intel(R) Xeon(R) E5-2470 @ 2.30 GHz
MEMORY: 48 GB (6 x 8 GB 1600 MT/s RDIMMs)
INTERNAL DISK: One 50 GB SATA SSD
INTERNAL RAID CONTROLLER: PERC H310 Embedded
CLUSTER ADMINISTRATION INTERCONNECT: Broadcom NetXtreme II BCM57810
I/O INTERCONNECT: Mellanox ConnectX-3 FDR10 mezzanine card

I/O cluster software and firmware
BIOS: 1.3.5
iDRAC: 1.23.23 (Build 1)
OPERATING SYSTEM: Red Hat Enterprise Linux (RHEL) 6.2
KERNEL: 2.6.32-220.el6.x86_64
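The headline figures in Table 5 follow directly from the per-chassis counts: two chassis, each with two FDR10 I/O modules, each module with four uplinks, and each blade populated with six 8 GB RDIMMs. A quick arithmetic sketch of those totals (variable names are illustrative, values copied from the table):

```python
# Sanity-check the derived totals in Table 5.
chassis = 2                     # PowerEdge M1000e chassis
blades_per_chassis = 32         # PowerEdge M420 blades per chassis
ib_modules_per_chassis = 2      # Mellanox M4001 FDR10 I/O modules
uplinks_per_module = 4          # uplinks to the rack SX6025 FDR switch
dimms, dimm_gb = 6, 8           # 6 x 8 GB 1600 MT/s RDIMMs per blade

print(chassis * blades_per_chassis)                        # 64 client blades
print(chassis * ib_modules_per_chassis * uplinks_per_module)  # 16 FDR uplinks
print(dimms * dimm_gb)                                     # 48 GB per node
```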