NVIDIA Mellanox User Manual, page 10

ConnectX-6 InfiniBand/VPI Adapter Cards for OCP Spec 3.0

Overlay Networks:
In order to better scale their networks, data center operators often create
overlay networks that carry traffic from individual virtual machines over
logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this
solves network scalability issues, it hides the TCP packet from the hardware
offloading engines, placing higher loads on the host CPU. ConnectX-6
effectively addresses this by providing advanced NVGRE and VXLAN hardware
offloading engines that encapsulate and de-capsulate the overlay protocol.

RDMA and RDMA over Converged Ethernet (RoCE):
ConnectX-6, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA
over Converged Ethernet) technology, delivers low latency and high performance
over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB)
capabilities as well as ConnectX-6 advanced congestion control hardware
mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and
Layer 3 networks. (A minimal verbs-API sketch appears after this table.)

Mellanox PeerDirect™:
PeerDirect™ communication provides high-efficiency RDMA access by eliminating
unnecessary internal data copies between components on the PCIe bus (for
example, from GPU to CPU), and therefore significantly reduces application run
time. ConnectX-6 advanced acceleration technology enables higher cluster
efficiency and scalability to tens of thousands of nodes.

CPU Offload:
Adapter functionality that reduces CPU overhead, leaving more CPU cycles
available for computation tasks.
Open vSwitch (OVS) offload using ASAP²™
• Flexible match-action flow tables
• Tunneling encapsulation/de-capsulation

Quality of Service (QoS):
Support for port-based Quality of Service, enabling various application
requirements for latency and SLA.

Hardware-based I/O Virtualization:
ConnectX-6 provides dedicated adapter resources and guaranteed isolation and
protection for virtual machines within the server.

Storage Acceleration:
A consolidated compute and storage network achieves significant
cost-performance advantages over multi-fabric networks. Standard block and
file access protocols can leverage RDMA for high-performance storage access.
• NVMe over Fabrics offloads for the target machine
• Erasure Coding
• T10-DIF Signature Handover

SR-IOV:
ConnectX-6 SR-IOV technology provides dedicated adapter resources and
guaranteed isolation and protection for virtual machines (VMs) within the
server. (A sysfs configuration sketch appears after this table.)

NC-SI:
The adapter supports a Network Controller Sideband Interface (NC-SI), MCTP
over SMBus and MCTP over PCIe - Baseboard Management Controller interface.

High-Performance Accelerations:
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing

Wake-on-LAN (WoL):
The adapter supports Wake-on-LAN (WoL), a computer networking standard that
allows an adapter to be turned on or awakened by a network message. TBD: In
STBY mode, only port 0 is available. (An ethtool-based sketch appears after
this table.)

Reset-on-LAN (RoL):
Supported
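
The RDMA and RoCE features above are exposed to applications through the
standard verbs API. The following is a minimal sketch using the generic
libibverbs (rdma-core) interface rather than anything ConnectX-6-specific:
it opens an RDMA device and registers a buffer. The file name, build line,
and the choice of the first device are assumptions for illustration.

/*
 * Minimal verbs sketch (not ConnectX-6-specific): open the first RDMA
 * device and register a buffer with libibverbs. Queue-pair setup and
 * address exchange are omitted. Assumed build line:
 *   gcc verbs_open.c -o verbs_open -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    /* Open the device and allocate a protection domain that memory
     * regions and queue pairs will share. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "ibv_open_device failed\n");
        return 1;
    }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the adapter can DMA to/from it directly;
     * remote peers address it through the rkey. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }

    printf("device %s: lkey 0x%x rkey 0x%x\n",
           ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}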
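
As a companion to the SR-IOV entry, the sketch below uses the generic Linux
sysfs interface for creating virtual functions. The PCI address and requested
VF count are placeholders; the VFs actually available depend on the adapter's
firmware configuration.

/*
 * Sketch: create SR-IOV virtual functions through the generic Linux
 * sysfs interface. Typically run as root with the driver bound to the
 * physical function.
 */
#include <stdio.h>

#define PF_PATH "/sys/bus/pci/devices/0000:03:00.0"  /* placeholder address */

int main(void)
{
    char path[256];
    unsigned int total = 0;

    /* sriov_totalvfs reports how many VFs the physical function exposes. */
    snprintf(path, sizeof(path), "%s/sriov_totalvfs", PF_PATH);
    FILE *f = fopen(path, "r");
    if (!f || fscanf(f, "%u", &total) != 1) {
        perror("sriov_totalvfs");
        return 1;
    }
    fclose(f);

    /* Writing N to sriov_numvfs asks the driver to instantiate N VFs,
     * each of which can then be assigned to a virtual machine. */
    unsigned int want = total < 4 ? total : 4;   /* placeholder count */
    snprintf(path, sizeof(path), "%s/sriov_numvfs", PF_PATH);
    f = fopen(path, "w");
    if (!f) {
        perror("sriov_numvfs");
        return 1;
    }
    fprintf(f, "%u\n", want);
    fclose(f);

    printf("requested %u of %u virtual functions\n", want, total);
    return 0;
}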

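
For the Wake-on-LAN entry, the sketch below queries and enables magic-packet
wake-up through the generic Linux SIOCETHTOOL ioctl; it is not adapter-
specific. The interface name is a placeholder, the set operation typically
requires root, and the ethtool command-line utility exposes the same controls.

/*
 * Sketch: query and enable magic-packet Wake-on-LAN through the generic
 * Linux SIOCETHTOOL ioctl.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
    const char *ifname = argc > 1 ? argv[1] : "eth0";   /* placeholder */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&wol;

    /* Read the modes the port supports and which are currently enabled. */
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("ETHTOOL_GWOL");
        return 1;
    }
    printf("supported 0x%x, enabled 0x%x\n", wol.supported, wol.wolopts);

    /* Turn on wake-by-magic-packet if the hardware reports support. */
    if (wol.supported & WAKE_MAGIC) {
        wol.cmd = ETHTOOL_SWOL;
        wol.wolopts |= WAKE_MAGIC;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            perror("ETHTOOL_SWOL");
    }
    close(fd);
    return 0;
}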