ConnectX-6 InfiniBand/Ethernet Adapter Card
Overlay Networks
In order to better scale their networks, datacenter operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-6 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
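For reference, the VXLAN encapsulation that these engines parse carries a fixed 8-byte header inside an outer UDP datagram (destination port 4789, per RFC 7348). A minimal didactic sketch of that layout in C; the struct and helper are illustrative only, not NVIDIA driver code:

```c
#include <stdint.h>
#include <arpa/inet.h>

/* VXLAN header (RFC 7348): 8 bytes carried inside an outer UDP
 * datagram. The offload engine must parse past this header to
 * reach the inner TCP packet it accelerates. */
struct vxlan_hdr {
    uint8_t  flags;        /* bit 0x08 set => VNI field is valid */
    uint8_t  reserved1[3];
    uint32_t vni_rsvd;     /* upper 24 bits: VXLAN Network Identifier */
};

/* Hypothetical helper: extract the 24-bit VNI from a received header. */
static inline uint32_t vxlan_vni(const struct vxlan_hdr *h)
{
    return ntohl(h->vni_rsvd) >> 8;
}
```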
RDMA and RDMA over Converged Ethernet (RoCE)
ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance communication over InfiniBand and Ethernet networks. Leveraging Data Center Bridging (DCB) capabilities as well as ConnectX-6 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
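Applications consume these RDMA services through the standard libibverbs API, which is identical for InfiniBand and RoCE. A minimal sketch of opening the first RDMA device and creating the basic objects a connection is built from; this is a generic verbs example with abbreviated error handling, not ConnectX-specific code:

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    /* Enumerate RDMA-capable devices (InfiniBand or RoCE alike). */
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);            /* protection domain */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    /* A queue pair is the endpoint for RDMA reads/writes and sends. */
    struct ibv_qp_init_attr attr = {
        .send_cq = cq, .recv_cq = cq,
        .qp_type = IBV_QPT_RC,                        /* reliable connected */
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);

    printf("opened %s, qp_num=0x%x\n",
           ibv_get_device_name(list[0]), qp->qp_num);

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```

Build with `gcc -libverbs` on a host with the rdma-core userspace installed.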
NVIDIA PeerDirect™
PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-6 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
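The usage pattern PeerDirect enables can be sketched as registering GPU memory directly with the RDMA stack, so the adapter DMAs to and from it without a bounce through host memory. This sketch assumes CUDA plus a loaded PeerDirect kernel module (nvidia-peermem on current driver stacks) and reuses a `pd` protection domain as created in the previous sketch:

```c
#include <infiniband/verbs.h>
#include <cuda_runtime_api.h>   /* link against cudart */

/* Register device (GPU) memory for RDMA. With PeerDirect support
 * loaded, ibv_reg_mr() accepts a pointer returned by cudaMalloc(),
 * so the adapter reads/writes GPU memory directly over PCIe. */
struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t len)
{
    void *gpu_buf = NULL;
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess)
        return NULL;

    return ibv_reg_mr(pd, gpu_buf, len,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}
```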
CPU Offload
Adapter functionality enables reduced CPU overhead, leaving more CPU resources available for computation tasks. Open vSwitch (OVS) offload using ASAP²™ provides:
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
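As a conceptual illustration of the first bullet, a match-action flow table maps a flow key to a forwarding action; ASAP²™ performs this lookup in adapter hardware instead of in the host's OVS datapath. The sketch below only models the idea, with a hypothetical 5-tuple key and toy action set, not the real hardware tables:

```c
#include <stdint.h>
#include <string.h>

/* A flow key: the fields a rule matches on (a classic 5-tuple).
 * Entries are assumed zero-initialized so padding compares equal. */
struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

enum flow_action { ACT_FORWARD, ACT_ENCAP_VXLAN, ACT_DROP };

struct flow_entry {
    struct flow_key  key;
    enum flow_action action;
};

/* Linear lookup for clarity; hardware uses TCAM/hash structures. */
enum flow_action lookup(const struct flow_entry *tbl, int n,
                        const struct flow_key *k)
{
    for (int i = 0; i < n; i++)
        if (memcmp(&tbl[i].key, k, sizeof(*k)) == 0)
            return tbl[i].action;
    return ACT_DROP;   /* miss: punt to slow path or drop */
}
```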
Quality of Service (QoS)
Support for port-based Quality of Service, enabling various application requirements for latency and SLA.
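On Linux, one common way an application participates in port-based QoS is to tag its traffic with a socket priority, which the stack can map to an egress traffic class. A minimal sketch using the standard SO_PRIORITY socket option; the priority value 5 and its mapping to a traffic class are local configuration assumptions, not something this code establishes:

```c
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int prio = 5;   /* placeholder priority for illustration */

    /* Tag the socket's packets; the kernel maps this priority to
     * an egress traffic class that port-based QoS then schedules. */
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY,
                   &prio, sizeof(prio)) < 0) {
        perror("setsockopt(SO_PRIORITY)");
        return 1;
    }
    return 0;
}
```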
Hardware-based I/O Virtualization
ConnectX-6 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage:
• RDMA for high-performance storage access
• NVMe over Fabrics offloads for the target machine
• Erasure Coding
• T10-DIF Signature Handover
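For context on the last item: T10-DIF appends an 8-byte protection-information tuple to each logical block, and signature handover means the adapter generates and validates those fields in hardware as data moves between the wire and storage. A didactic sketch of the standard tuple layout, not NVIDIA driver code:

```c
#include <stdint.h>

/* T10-DIF protection information: an 8-byte tuple appended to each
 * (typically 512-byte) logical block. With signature handover the
 * adapter computes and checks these fields in hardware, so the CPU
 * never touches them. Fields are big-endian on the wire. */
struct t10_dif_tuple {
    uint16_t guard_tag;  /* CRC16 over the block's data */
    uint16_t app_tag;    /* application-defined */
    uint32_t ref_tag;    /* typically lower 32 bits of the LBA */
};
```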
SR-IOV
ConnectX-6 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
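On Linux, SR-IOV virtual functions are typically instantiated through the kernel's standard sysfs interface once SR-IOV is enabled in the adapter firmware (e.g., with mlxconfig). A minimal sketch; the PCI address 0000:03:00.0 is a placeholder:

```c
#include <stdio.h>

/* Create SR-IOV virtual functions via the standard Linux sysfs
 * interface. Each VF appears as its own PCI device that can be
 * passed through to a virtual machine. */
int main(void)
{
    const char *path =
        "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return 1;
    }
    fprintf(f, "4\n");   /* request four virtual functions */
    fclose(f);
    return 0;
}
```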