NVIDIA MCX621102AC-ADAT User Manual, page 14

ConnectX-6 Dx Ethernet Adapter Cards
Features

Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-6 Dx effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
RDMA over Converged Ethernet (RoCE)
ConnectX-6 Dx, utilizing RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities, as well as ConnectX-6 Dx advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
NVIDIA PeerDirect®
NVIDIA PeerDirect® communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-6 Dx advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
CPU Offload
Adapter functionality enables reduced CPU overhead, leaving more CPU resources available for computation tasks.
Open vSwitch (OVS) offload using ASAP²™
• Flexible match-action flow tables
• Tunneling encapsulation/de-capsulation
Quality of Service (QoS)
Support for port-based Quality of Service, enabling various application requirements for latency and SLA.
Hardware-based I/O Virtualization
ConnectX-6 Dx provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage:
• RDMA for high-performance storage access
• NVMe over Fabric offloads for the target machine
SR-IOV
ConnectX-6 Dx SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing