Active Fabric Manager Deployment Guide 1.5

For redundancy, each leaf in a large core design can connect to 2 to 16 spines. The Type 1: Extra Large Distributed Core design uses a 1:2 spine-to-leaf ratio. As a result, the maximum number of spines for this design is 16 and the maximum number of leaves is 32.
Each Z9000 leaf for the Type 1: Extra Large Distributed Core design has the following:
• Six hundred forty Gigabit of maximum fabric interlink capacity to the spine (16 x 40 Gb)
• Forty-eight 10 GbE ports for server and WAN connectivity
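
To make the sizing arithmetic above concrete, the following Python sketch computes the leaf count and per-leaf capacities for a Type 1 fabric. It assumes one 40 Gb interlink from each leaf to every spine, which is how the 16 x 40 Gb maximum is reached; the function and variable names are illustrative and are not part of AFM.

    def type1_sizing(num_spines):
        """Sizing sketch for a Type 1 (Z9000) distributed core fabric."""
        assert 2 <= num_spines <= 16, "Type 1 supports 2 to 16 spines per leaf"
        num_leaves = num_spines * 2      # 1:2 spine-to-leaf ratio
        interlink_gb = num_spines * 40   # one 40 Gb interlink per spine
        downlink_gb = 48 * 10            # forty-eight 10 GbE server/WAN ports
        return num_leaves, interlink_gb, downlink_gb

    # Maximum build-out: 16 spines -> 32 leaves, 640 Gb of interlink per leaf.
    print(type1_sizing(16))   # (32, 640, 480)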
Type 2: Large Distributed Core Fabric
Use the Type 2: Large Distributed Core fabric design when:
• You require a fabric interlink bandwidth of 10 GbE between the spines and leaves.
• The current and planned future uplinks and downlinks on the leaves for the fabric are less than or equal to 2048 ports (see the sizing sketch after the Type 2 leaf list below).
• The leaves act as a switch or ToR (top-of-rack) leaf switch. Within the ToR, the downlink protocol can be either VLAN or VLAN and LAG.
With a Type 2: Large Distributed Core fabric design, the S4810 spines connect to the S4810 leaves at a fixed rate of 10 GbE. The maximum number of spines is 32 and the maximum number of leaves is 64, as shown in the following figure.
Figure 5. Type 2: Large Distributed Core Fabric Design
Each S4810 leaf for the Type 2: Large Distributed Core fabric design has the following:
• Three hundred twenty Gigabit of maximum fabric interlink capacity to the spine (32 x 10 Gb)
• Thirty-two 10 Gigabit ports are used for the fabric interlink and thirty-two 10 Gb ports are used for the downlinks
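
As a rough check on the figures above, the sketch below tallies the Type 2 maximums. It reads the 2048-port guideline from the earlier list as the total downlink ports of a fully built fabric (64 leaves x 32 downlinks per leaf); that interpretation, and every name in the sketch, are assumptions made for illustration.

    MAX_SPINES = 32
    MAX_LEAVES = 64
    INTERLINK_PORTS_PER_LEAF = 32   # 10 Gb ports reserved for fabric interlink
    DOWNLINK_PORTS_PER_LEAF = 32    # 10 Gb ports left for downlinks

    def type2_fits(required_downlink_ports):
        """Check a downlink-port requirement against a full Type 2 fabric."""
        max_downlinks = MAX_LEAVES * DOWNLINK_PORTS_PER_LEAF   # 64 x 32 = 2048
        return required_downlink_ports <= max_downlinks

    # Per-leaf interlink capacity: 32 ports x 10 Gb = 320 Gb.
    print(INTERLINK_PORTS_PER_LEAF * 10)   # 320
    print(type2_fits(1500))                # True: fits in a Type 2 fabric
    print(type2_fits(3000))                # False: exceeds the 2048-port guideline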