NVIDIA ConnectX-6 Manual, Page 11

InfiniBand/Ethernet Adapter Cards for OCP Spec 3.0

InfiniBand HDR100
A standard InfiniBand data rate, where each lane of a 2X port runs at a bit rate of 53.125Gb/s with 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s. (See the worked example after the InfiniBand EDR entry below.)

Up to 200 Gigabit Ethernet
NVIDIA adapters comply with the following IEEE 802.3 standards:
• 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
• IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
• IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
• IEEE 802.3ba 40 Gigabit Ethernet
• IEEE 802.3by 25 Gigabit Ethernet
• IEEE 802.3ae 10 Gigabit Ethernet
• IEEE 802.3ap based auto-negotiation and KR startup
• Proprietary Ethernet protocols (20/40GBASE-R2, 50GBASE-R4)
• IEEE 802.3ad, 802.1AX Link Aggregation
• IEEE 802.1Q, 802.1P VLAN tags and priority
• IEEE 802.1Qau (QCN) Congestion Notification
• IEEE 802.1Qaz (ETS)
• IEEE 802.1Qbb (PFC)
• IEEE 802.1Qbg
• IEEE 1588v2
• Jumbo frame support (9.6KB)

InfiniBand EDR
A standard InfiniBand data rate, where each lane of a 4X port runs at a bit rate of 25.78125Gb/s with 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s.

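Both InfiniBand entries above follow the same arithmetic: effective bandwidth equals lane count times per-lane bit rate, scaled by the 64b/66b encoding efficiency (64/66). The short C program below is written for this page as an illustration, not taken from the manual, and reproduces both figures:

    #include <stdio.h>

    int main(void)
    {
        const double enc = 64.0 / 66.0;        /* 64b/66b encoding efficiency */
        double hdr100 = 2 * 53.125 * enc;      /* HDR100: 2X port, 53.125Gb/s lanes */
        double edr    = 4 * 25.78125 * enc;    /* EDR: 4X port, 25.78125Gb/s lanes */
        printf("HDR100 effective: %.2f Gb/s\n", hdr100); /* ~103.03, quoted as 100Gb/s */
        printf("EDR effective:    %.2f Gb/s\n", edr);    /* exactly 100.00 Gb/s */
        return 0;
    }
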
Memory Components
• EEPROM - The EEPROM capacity is 32Kbit. The FRU I2C address is 0x50 and is accessible through the PCIe SMBus. (Note: Address 0x58 is reserved.)
• SPI Quad - Includes a 256Mbit SPI Quad Flash device (MX25L25645GXDI-08G device by Macronix).

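For illustration only, the sketch below shows one way host software on Linux could read the first bytes of an FRU EEPROM at address 0x50 through the generic i2c-dev interface. The bus number (/dev/i2c-1) and the single-byte offset write are assumptions made for the example; on a real system, the path to the adapter's SMBus depends on the platform's I2C topology.

    #include <fcntl.h>
    #include <linux/i2c-dev.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);   /* hypothetical bus number */
        if (fd < 0) { perror("open"); return 1; }
        if (ioctl(fd, I2C_SLAVE, 0x50) < 0) {  /* FRU EEPROM address */
            perror("ioctl");
            return 1;
        }
        unsigned char offset = 0x00, data[16];
        /* Set the read offset, then read the first 16 bytes of the FRU area. */
        if (write(fd, &offset, 1) != 1 ||
            read(fd, data, sizeof(data)) != (ssize_t)sizeof(data)) {
            perror("eeprom i/o");
            return 1;
        }
        for (int i = 0; i < 16; i++)
            printf("%02x ", data[i]);
        printf("\n");
        close(fd);
        return 0;
    }
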
Overlay Networks
To better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-6 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.

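To make the encapsulation concrete: VXLAN wraps the inner Ethernet frame in a UDP datagram (destination port 4789) behind the 8-byte header defined in RFC 7348, which is why the inner TCP segment is invisible to a NIC without overlay-aware offloads. The helper below is written for this page as an illustration and packs that header:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* VXLAN header (RFC 7348): 8-bit flags (I flag marks a valid VNI),
       24 reserved bits, 24-bit VNI, 8 reserved bits. */
    static void vxlan_header(uint8_t out[8], uint32_t vni)
    {
        memset(out, 0, 8);
        out[0] = 0x08;                 /* I flag: VNI field is valid */
        out[4] = (vni >> 16) & 0xff;   /* 24-bit VNI, network byte order */
        out[5] = (vni >> 8) & 0xff;
        out[6] = vni & 0xff;
    }

    int main(void)
    {
        uint8_t hdr[8];
        vxlan_header(hdr, 42);         /* example VNI */
        for (int i = 0; i < 8; i++)
            printf("%02x ", hdr[i]);
        printf("\n");                  /* prints: 08 00 00 00 00 00 2a 00 */
        return 0;
    }
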
RDMA and RDMA over Converged Ethernet (RoCE)
ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-6 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

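Applications typically reach these RDMA services through the verbs API. As an illustrative sketch, not taken from the manual, a minimal libibverbs program that enumerates RDMA-capable devices such as ConnectX-6 ports looks like this (compile with gcc file.c -libverbs):

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        /* Query the list of RDMA devices visible to the host. */
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < num; i++)
            printf("RDMA device: %s\n", ibv_get_device_name(list[i]));
        ibv_free_device_list(list);
        return 0;
    }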