NVIDIA Mellanox ConnectX-6 MCX654106A-ECAT User Manual
InfiniBand/VPI Adapter Cards

Overlay Networks
In order to better scale their networks, datacenter operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-6 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
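
As a practical illustration only (not part of the manual), the sketch below checks which tunnel-related offload flags the driver currently exposes on a Linux host. It assumes the ethtool utility is installed and uses "eth0" as a placeholder for the ConnectX-6 interface name; the exact feature-flag names (for example tx-udp_tnl-segmentation) depend on the kernel and driver version.

import subprocess

def tunnel_offload_flags(iface="eth0"):
    """Print tunnel-related offload flags reported by `ethtool -k`.

    Assumes a Linux host with ethtool installed; "eth0" is a placeholder
    for the ConnectX-6 netdev name.
    """
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        # Flags such as tx-udp_tnl-segmentation cover UDP-encapsulated
        # tunnel traffic; exact names vary with kernel and driver versions.
        if "tnl" in line.lower() or "tunnel" in line.lower():
            print(line.strip())

if __name__ == "__main__":
    tunnel_offload_flags()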

RDMA and RDMA over Converged Ethernet (RoCE)
ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging Data Center Bridging (DCB) capabilities as well as ConnectX-6 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
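
For orientation, here is a minimal Python sketch (not from the manual) that reads the standard Linux sysfs layout to show whether each RDMA port of the adapter runs a native InfiniBand link layer or an Ethernet link layer, i.e. RoCE. It assumes the mlx5 driver and rdma-core stack are loaded so that /sys/class/infiniband is populated.

from pathlib import Path

def rdma_link_layers():
    """List RDMA devices and the link layer of each port.

    A port whose link_layer reads "Ethernet" carries RDMA as RoCE;
    "InfiniBand" means native IB transport. Assumes the sysfs layout
    exposed by the mlx5 driver on Linux.
    """
    for dev in sorted(Path("/sys/class/infiniband").iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            layer = (port / "link_layer").read_text().strip()
            print(f"{dev.name} port {port.name}: {layer}")

if __name__ == "__main__":
    rdma_link_layers()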

Mellanox PeerDirect™
PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-6 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

CPU Offload
Adapter functionality that reduces CPU overhead, leaving more CPU available for computation tasks:
- Flexible match-action flow tables
- Open vSwitch (OVS) offload using ASAP² (see the sketch after this list)
- Tunneling encapsulation / decapsulation
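
The following is a minimal sketch, not an official procedure, of how OVS hardware offload is typically switched on from Python on a Linux host with Open vSwitch installed. The ovs-vsctl keys shown are the standard upstream ones; the change normally takes effect only after the openvswitch service is restarted.

import subprocess

def enable_ovs_hw_offload():
    """Turn on Open vSwitch hardware offload and read the setting back.

    Assumes ovs-vsctl is installed and the script runs with sufficient
    privileges; restarting the openvswitch service afterwards is usually
    required for the setting to take effect.
    """
    subprocess.run(["ovs-vsctl", "set", "Open_vSwitch", ".",
                    "other_config:hw-offload=true"], check=True)
    value = subprocess.run(["ovs-vsctl", "get", "Open_vSwitch", ".",
                            "other_config:hw-offload"],
                           capture_output=True, text=True, check=True).stdout
    print("hw-offload =", value.strip())

if __name__ == "__main__":
    enable_ovs_hw_offload()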

Quality of Service (QoS)
Support for port-based Quality of Service, enabling various application requirements for latency and SLA.

Hardware-based I/O Virtualization
ConnectX-6 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.

Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage:
- RDMA for high-performance storage access
- NVMe over Fabric offloads for the target machine (a connection sketch follows this list)
- Erasure Coding
- T10-DIF Signature Handover
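
As an illustration only, the sketch below shows how an initiator host might attach to an NVMe over Fabrics target across an RDMA (RoCE or InfiniBand) network using the standard nvme-cli tool. The target address, port, and NQN are placeholders, and nvme-cli must be installed with the nvme-rdma kernel module available.

import subprocess

def nvmeof_connect(addr="192.0.2.10", port="4420",
                   nqn="nqn.2014-08.org.example:target1"):
    """Connect to an NVMe over Fabrics target over an RDMA transport.

    All three arguments are placeholders for illustration; replace them
    with the real target address, service port, and subsystem NQN.
    Requires nvme-cli and the nvme-rdma kernel module.
    """
    subprocess.run(["nvme", "connect", "-t", "rdma",
                    "-a", addr, "-s", port, "-n", nqn], check=True)
    # List NVMe namespaces visible to the host after connecting.
    subprocess.run(["nvme", "list"], check=True)

if __name__ == "__main__":
    nvmeof_connect()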

SR-IOV
ConnectX-6 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
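
A minimal sketch, assuming a Linux host where the ConnectX-6 physical function appears as eth0 and SR-IOV is enabled in firmware and BIOS: it uses the standard sysfs interface to create virtual functions and list the resulting VF PCI devices. The interface name and VF count are placeholders.

from pathlib import Path

def create_vfs(iface="eth0", num_vfs=4):
    """Create SR-IOV virtual functions through the kernel sysfs interface.

    "eth0" and num_vfs are placeholders; SR-IOV must already be enabled
    in the adapter firmware and the platform BIOS.
    """
    dev = Path(f"/sys/class/net/{iface}/device")
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"adapter supports at most {total} VFs")
    (dev / "sriov_numvfs").write_text(str(num_vfs))
    # Each created VF shows up as a virtfn<N> symlink to its PCI device.
    for vf in sorted(dev.glob("virtfn*")):
        print(vf.name, "->", vf.resolve().name)

if __name__ == "__main__":
    create_vfs()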

High-Performance Accelerations
- Tag Matching and Rendezvous Offloads
- Adaptive Routing on Reliable Transport
- Burst Buffer Offloads for Background Checkpointing