RDMA and RDMA over Converged Ethernet (RoCE)
ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging Data Center Bridging (DCB) capabilities as well as ConnectX-6 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
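
As a rough illustration of how applications reach these RDMA services, the minimal sketch below enumerates the RDMA-capable devices on a host and queries port state through the standard libibverbs API. This is an illustrative example rather than anything taken from this document; port 1 is simply the adapter's first port in the verbs numbering scheme.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devices[i]);
            if (!ctx)
                continue;
            struct ibv_port_attr attr;
            /* Port numbering starts at 1 in the verbs API. */
            if (ibv_query_port(ctx, 1, &attr) == 0)
                printf("%s: port 1 state %d, active MTU code %d\n",
                       ibv_get_device_name(devices[i]),
                       attr.state, attr.active_mtu);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devices);
        return 0;
    }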

NVIDIA PeerDirect™
PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-6 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
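
The host-side pattern PeerDirect enables is registering device memory (for example, GPU memory) directly with the RDMA stack, so the NIC can DMA to and from it without a bounce through host memory. Below is a minimal sketch using the CUDA runtime and libibverbs; it assumes peer-memory support (such as the nvidia-peermem kernel module) is loaded, and the buffer size and access flags are illustrative only.

    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }
        struct ibv_context *ctx = ibv_open_device(devices[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Allocate the buffer in GPU memory rather than host memory. */
        void *gpu_buf = NULL;
        size_t len = 1 << 20;  /* 1 MiB, illustrative */
        if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
            fprintf(stderr, "cudaMalloc failed\n");
            return 1;
        }

        /* Register the GPU buffer with the NIC; with peer-memory support
         * in place, the NIC can then DMA directly to/from GPU memory,
         * with no intermediate copy into a host staging buffer. */
        struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr)
            perror("ibv_reg_mr on GPU memory");
        else
            printf("registered GPU buffer, lkey=0x%x\n", mr->lkey);

        if (mr) ibv_dereg_mr(mr);
        cudaFree(gpu_buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }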

CPU Offload
Adapter functionality enables reduced CPU overhead, leaving more CPU resources available for computation tasks.
Open vSwitch (OVS) offload using ASAP²™ (see the note after this list):
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
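
As general context for the OVS item above (not a procedure from this document), kernel-based OVS hardware offload is commonly enabled by setting other_config:hw-offload=true on the vswitch via ovs-vsctl and restarting Open vSwitch, after which eligible flows are matched and forwarded in the adapter's embedded switch rather than in host software.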

Quality of Service (QoS)
Support for port-based Quality of Service, meeting varied application requirements for latency and SLA.

Hardware-based I/O Virtualization
ConnectX-6 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.

Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage:
• RDMA for high-performance storage access
• NVMe over Fabrics (NVMe-oF) offloads for the target machine (a setup sketch follows this list)
• Erasure Coding
• T10-DIF Signature Handover
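
To ground the NVMe-oF target item, below is a minimal sketch of exposing a local block device as an NVMe-oF target over RDMA through the Linux nvmet configfs interface. The NQN, backing device, and IP address are placeholders; the nvmet and nvmet-rdma kernel modules must be loaded, and root privileges are required. This is an illustrative sketch, not a procedure from this document.

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Write a single string value to a configfs attribute file. */
    static int put(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fprintf(f, "%s\n", val);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        const char *sub =
            "/sys/kernel/config/nvmet/subsystems/nqn.2024-01.io.example:sketch";
        const char *ns =
            "/sys/kernel/config/nvmet/subsystems/nqn.2024-01.io.example:sketch/namespaces/1";
        const char *prt = "/sys/kernel/config/nvmet/ports/1";
        char path[512];

        /* Create the subsystem; allow any host to connect (demo only). */
        mkdir(sub, 0755);
        snprintf(path, sizeof(path), "%s/attr_allow_any_host", sub);
        put(path, "1");

        /* Add namespace 1, backed by a placeholder block device. */
        mkdir(ns, 0755);
        snprintf(path, sizeof(path), "%s/device_path", ns);
        put(path, "/dev/nvme0n1");
        snprintf(path, sizeof(path), "%s/enable", ns);
        put(path, "1");

        /* Create an RDMA transport port on the standard NVMe-oF port 4420. */
        mkdir(prt, 0755);
        snprintf(path, sizeof(path), "%s/addr_trtype", prt);  put(path, "rdma");
        snprintf(path, sizeof(path), "%s/addr_adrfam", prt);  put(path, "ipv4");
        snprintf(path, sizeof(path), "%s/addr_traddr", prt);  put(path, "192.0.2.1");
        snprintf(path, sizeof(path), "%s/addr_trsvcid", prt); put(path, "4420");

        /* Expose the subsystem through the port via a symlink. */
        snprintf(path, sizeof(path),
                 "%s/subsystems/nqn.2024-01.io.example:sketch", prt);
        symlink(sub, path);
        return 0;
    }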

SR-IOV
ConnectX-6 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
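
To make the SR-IOV row concrete, the sketch below shows the common Linux mechanism for instantiating virtual functions via sysfs. The interface name ens1f0 and the VF count are placeholders, and SR-IOV must already be enabled in the adapter firmware and system BIOS; this is an illustrative sketch, not a procedure from this document.

    #include <stdio.h>

    int main(void)
    {
        /* "ens1f0" is a placeholder for the ConnectX-6 netdev name. */
        const char *path = "/sys/class/net/ens1f0/device/sriov_numvfs";
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return 1;
        }
        /* Request four virtual functions; each appears as its own PCIe
         * device that can be passed through to a virtual machine. */
        fprintf(f, "4\n");
        fclose(f);
        return 0;
    }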

High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing
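
In practice, these accelerations are consumed through communication middleware rather than programmed against directly; tag matching and rendezvous offloads, for instance, are exposed to MPI implementations through the UCX framework.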