
NVIDIA MMA4Z00-NS 800Gb/s Twin-port OSFP, 2x400Gb/s Multimode 2xSR4, 50m


4. Switch-to-DGX H100 GPU Systems
The DGX H100 contains eight "Hopper" H100 GPUs in the top chassis section, with two
CPUs, storage, and InfiniBand and/or Ethernet networking in the bottom server
section. The server section contains eight 400Gb/s ConnectX-7 ICs mounted on two mezzanine
boards called "Cedar-7" cards for GPU-to-GPU InfiniBand or Ethernet networking.
The cards' I/Os are routed internally to four 800G Twin-port OSFP cages, mounted on the
front panel, with internal riding heat sinks on top of the cages. This requires the
use of flat-top transceivers, ACCs, and DACs in the DGX H100. The 400G IB/EN
switches require finned-top 2x400G transceivers for additional cooling due to the
reduced airflow inlets in the switches.
The Cedar-7-to-switch links can use either single-mode or multimode optics or ACC
active copper cables, in either InfiniBand or Ethernet.
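As a quick reference, the sketch below summarizes the end-to-end link options described in the two paragraphs above. The Python structure and its field names are purely illustrative and are not part of any NVIDIA tool or configuration schema.

```python
# Illustrative summary (not NVIDIA software) of the Cedar-7-to-switch
# GPU-fabric link options described above for the DGX H100.
DGX_H100_GPU_FABRIC_LINK = {
    "dgx_end": {
        "cage": "800G Twin-port OSFP with riding heat sink",
        "allowed_devices": ["flat-top transceiver", "ACC active copper cable", "DAC"],
    },
    "switch_end": {
        "switches": ["Quantum-2 (InfiniBand)", "Spectrum-4 (Ethernet)"],
        "allowed_devices": ["finned-top 2x400G transceiver"],
    },
    "media": ["single-mode optics", "multimode optics", "ACC active copper"],
    "protocols": ["InfiniBand", "Ethernet"],
}
```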
Each Twin-port 2x400G transceiver provides two 400G ConnectX-7 links from the
DGX to the Quantum-2 or Spectrum-4 switch. This reduces the ConnectX-7 card
redundancy, complexity, and the number of transceivers compared to the DGX A100,
which uses 8 separate HCAs, 8 transceivers or AOCs, and two additional
ConnectX-6s for InfiniBand or Ethernet storage.
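A small worked count, using only the figures quoted above (eight 400G ConnectX-7 links per DGX H100, two 400G links per twin-port transceiver), illustrates the reduction. The script is a sanity check only, not NVIDIA tooling.

```python
# Back-of-the-envelope count of GPU-fabric transceivers per system,
# based on the figures given in the text above.
connectx7_links_per_dgx_h100 = 8   # eight 400G ConnectX-7 ICs on the Cedar-7 cards
links_per_twin_port_osfp = 2       # each 2x400G transceiver carries two 400G links

twin_port_transceivers = connectx7_links_per_dgx_h100 // links_per_twin_port_osfp
print(f"DGX H100 GPU fabric: {twin_port_transceivers} twin-port OSFP transceivers")  # -> 4

# For comparison, the DGX A100 GPU fabric described above uses
# 8 separate HCAs with 8 transceivers or AOCs.
print("DGX A100 GPU fabric: 8 transceivers or AOCs")
```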
Additionally, for traditional networking to storage, clusters, and management, the
DGX H100 supports up to four ConnectX-7 cards and/or two BlueField-3 DPUs in
InfiniBand and/or Ethernet for storage I/O and additional networking, using 400G or
200G OSFP or QSFP112 devices. These PCIe card slots are located on both
sides of the OSFP GPU cages and use separate cables and/or transceivers.
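The slot options for storage, cluster, and management networking can likewise be tabulated. Again, this is only an illustrative summary of the paragraph above, with made-up field names, not an official configuration schema.

```python
# Illustrative summary of the additional DGX H100 networking slots
# described above (storage, cluster, and management I/O).
DGX_H100_ADDITIONAL_IO = {
    "max_connectx7_cards": 4,
    "max_bluefield3_dpus": 2,
    "protocols": ["InfiniBand", "Ethernet"],
    "speeds_gbps": [400, 200],
    "form_factors": ["OSFP", "QSFP112"],
    "slot_location": "PCIe slots on both sides of the OSFP GPU cages",
}
```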