1.2 Features and Benefits

Table 4 - Features

100Gb/s Virtual Protocol Interconnect (VPI) Adapter
ConnectX-5 offers the highest throughput VPI adapter, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet and enabling any standard networking, clustering, or storage to operate seamlessly over any converged network leveraging a consolidated software stack.

InfiniBand Architecture Specification v1.3 compliant
ConnectX-5 delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. ConnectX-5 is InfiniBand Architecture Specification v1.3 compliant.

PCI Express (PCIe)
Uses PCIe Gen 3.0 (8GT/s) and Gen 4.0 (16GT/s) through an x16 edge connector. Gen 1.1 and 2.0 compatible. (A worked bandwidth calculation follows Table 4.)

Up to 100 Gigabit Ethernet
Mellanox adapters comply with the following IEEE 802.3 standards:
– 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
– IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
– IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
– IEEE 802.3ba 40 Gigabit Ethernet
– IEEE 802.3ae 10 Gigabit Ethernet
– IEEE 802.3ap based auto-negotiation and KR startup
– Proprietary Ethernet protocols (20/40GBASE-R2, 50GBASE-R4)
– IEEE 802.3ad, 802.1AX Link Aggregation
– IEEE 802.1Q, 802.1P VLAN tags and priority
– IEEE 802.1Qau (QCN) – Congestion Notification
– IEEE 802.1Qaz (ETS)
– IEEE 802.1Qbb (PFC)
– IEEE 802.1Qbg
– IEEE 1588v2
– Jumbo frame support (9.6KB)

InfiniBand EDR
A standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 25.78125Gb/s with a 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s. (A worked calculation follows Table 4.)

Memory
– PCI Express - stores and accesses InfiniBand and/or Ethernet fabric connection information and packet data.
– SPI Quad - includes 128Mbit SPI Quad Flash device (W25Q128FVSIG device by ST Microelectronics).
– VPD EEPROM - The EEPROM capacity is 128Kbit.

Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-5 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol. (An encapsulation sketch follows Table 4.)
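
The PCIe row above quotes raw transfer rates and lane count. A minimal sketch of the usable link bandwidth those numbers imply, assuming the 128b/130b line coding defined for PCIe Gen 3.0 and Gen 4.0 (general PCIe parameters, not values taken from this manual):

    # Rough usable PCIe bandwidth per direction; the encoding efficiency is
    # a public PCIe Gen3/Gen4 parameter, not a value from this manual.
    def pcie_link_gbps(transfer_rate_gt, encoding_efficiency, lanes):
        """Usable line rate in Gb/s for one direction of a PCIe link."""
        return transfer_rate_gt * encoding_efficiency * lanes

    gen3_x16 = pcie_link_gbps(8.0, 128 / 130, 16)    # ~126 Gb/s (~15.8 GB/s)
    gen4_x16 = pcie_link_gbps(16.0, 128 / 130, 16)   # ~252 Gb/s (~31.5 GB/s)
    print(f"Gen3 x16: {gen3_x16:.0f} Gb/s, Gen4 x16: {gen4_x16:.0f} Gb/s")

A Gen 3.0 x16 link therefore carries roughly 126 Gb/s per direction, enough for a single 100Gb/s port, while Gen 4.0 roughly doubles that headroom.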
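
As a quick check of the InfiniBand EDR row, the 100Gb/s figure follows directly from the per-lane rate, lane count, and 64b/66b encoding quoted there:

    # Worked example of the EDR figure from Table 4:
    # 4 lanes x 25.78125 Gb/s per lane with 64b/66b encoding.
    lanes = 4
    lane_rate_gbps = 25.78125      # signaling rate per lane
    encoding_efficiency = 64 / 66  # 64b/66b line coding
    effective_gbps = lanes * lane_rate_gbps * encoding_efficiency
    print(f"EDR 4X effective bandwidth: {effective_gbps:.0f} Gb/s")  # 100 Gb/s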
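
To illustrate the Overlay Networks row, the sketch below computes how deep the guest's TCP header sits inside a VXLAN-encapsulated frame. It is illustrative only; the header sizes are standard Ethernet/IPv4/UDP/VXLAN lengths, not figures from this manual. A NIC whose parser stops at the outer headers cannot offload work on that inner flow, which is the load the ConnectX-5 encapsulation/de-capsulation engines take off the host CPU:

    # Byte offset of the guest (inner) TCP header in a VXLAN-encapsulated
    # frame; sizes are standard protocol header lengths, not manual values.
    OUTER_ETH = 14   # outer Ethernet header
    OUTER_IP4 = 20   # outer IPv4 header, no options
    OUTER_UDP = 8    # outer UDP header (VXLAN runs over UDP)
    VXLAN_HDR = 8    # VXLAN header carrying the 24-bit VNI
    INNER_ETH = 14   # encapsulated guest Ethernet header
    INNER_IP4 = 20   # guest IPv4 header, no options

    inner_tcp_offset = (OUTER_ETH + OUTER_IP4 + OUTER_UDP
                        + VXLAN_HDR + INNER_ETH + INNER_IP4)
    print(f"Guest TCP header starts {inner_tcp_offset} bytes into the frame")  # 84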
