Maximizing Network Cable Reduction - HP 279720-B21 - ProLiant BL p-Class F-GbE Interconnect Overview

HP ProLiant BL p-Class GbE Interconnect Switch Overview - White Paper
Maximizing network cable reduction

For maximum (97 percent) cable reduction, the 32 Ethernet signals within the server blade enclosure
can be concentrated into a single external Ethernet port. This results in a total of seven Ethernet
connections for a fully configured 42U rack of six server blade enclosures containing 192 network
adapters.
Applications that utilize a single uplink port include testing and evaluation systems, server blade
enclosures with a few installed servers, and applications that require minimal bandwidth. On a
heavily utilized system, using a single uplink port for all 32 network adapters can cause a traffic
bottleneck. For example, using one uplink from interconnect switch A requires the traffic from all the
network adapters routed to switch B to travel over the two crosslinks (a 200 Mb/s path), previously
shown in Figure 2. The crosslinks are intended primarily as a failover route and are generally not
used as a primary path. For better performance, at least one uplink port on each interconnect
switch should be used. However, system administrators may use any combination from one to all
twelve external Ethernet ports to increase bandwidth, to separate network and management data onto
physically isolated ports, or to add redundant connections to the Ethernet network backbone.
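The crosslink bottleneck described above can be illustrated with some back-of-the-envelope arithmetic. This is a hedged sketch only: the paper states that the two crosslinks form a 200 Mb/s path, but the assumption that 16 of the 32 network adapters home to switch B is mine (the standard p-Class design gives each blade one NIC per switch), and the figures assume every adapter drives its full 100 Mb/s simultaneously, which is a worst case.

```python
# Worst-case load on the crosslink path when only one uplink
# (on switch A) is used, per the scenario in the text.
adapters_on_b = 16            # assumption: half of the 32 NICs route to switch B
adapter_speed_mbps = 100      # 10/100 NICs running at 100 Mb/s
crosslink_path_mbps = 200     # two crosslinks, per the paper

offered_load_mbps = adapters_on_b * adapter_speed_mbps      # 1600 Mb/s
oversubscription = offered_load_mbps / crosslink_path_mbps  # 8.0

print(f"crosslink oversubscription: {oversubscription:.0f}:1")
```

Under these assumptions the crosslink path is oversubscribed 8:1, which is why the text recommends at least one uplink per switch on a heavily utilized system.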
Another means to achieve network cable reduction is to link the GbE Interconnect Switches with other
ProLiant BL e-Class and p-Class interconnect switches from different server blade enclosures. This is
ideal for customers with multiple blade enclosures within a rack. It also allows the network
administrator to define the desired level of network blocking or oversubscription.
For example, Figure 3 shows a configuration with three fully populated BL p-Class server blade
enclosures, each with the C-GbE Interconnect Kit installed. The interconnect switches are linked, or
daisy chained, together in a redundant configuration using the four gigabit uplinks that, in turn, are
connected to the Ethernet network backbone. Each enclosure contains thirty-two 10/100 Ethernet
network adapters with an aggregate bandwidth of 3.2 gigabits per second (Gb/s), for a total
bandwidth of 9.6 Gb/s for the entire system of three enclosures. However, since the uplinks are daisy
chained together, the maximum system throughput for this configuration is 4.0 Gb/s (the combined
throughput of the four daisy-chained gigabit ports). This configuration creates a 2.4x blocking ratio
(9.6 Gb/s versus 4.0 Gb/s); nevertheless, it reduces the total Ethernet network cables at the rack level
from 96 to 4, a 96 percent cable reduction, while maintaining redundant connections to the Ethernet
network backbone.
Figure 3. ProLiant BL p-Class GbE Interconnect Switch linking
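The Figure 3 numbers follow directly from the stated configuration. The short sketch below reproduces that arithmetic using only figures given in the text; the variable names are illustrative, not part of any HP tooling.

```python
# Blocking ratio and cable reduction for the Figure 3 configuration:
# three enclosures, 32 x 10/100 NICs each, four gigabit uplinks total.
enclosures = 3
adapters_per_enclosure = 32
adapter_speed_gbps = 0.1      # each 10/100 NIC at 100 Mb/s
uplinks = 4
uplink_speed_gbps = 1.0

aggregate_gbps = enclosures * adapters_per_enclosure * adapter_speed_gbps  # 9.6
uplink_capacity_gbps = uplinks * uplink_speed_gbps                         # 4.0
blocking_ratio = aggregate_gbps / uplink_capacity_gbps                     # 2.4

cables_without_switches = enclosures * adapters_per_enclosure  # 96
cables_with_switches = uplinks                                 # 4
reduction = 1 - cables_with_switches / cables_without_switches # ~0.96

print(f"{blocking_ratio:.1f}x blocking, {reduction:.0%} cable reduction")
```

This confirms the figures in the text: a 2.4x blocking ratio (9.6 Gb/s of aggregate adapter bandwidth versus 4.0 Gb/s of uplink capacity) and roughly a 96 percent reduction in rack-level cabling.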