NVIDIA ConnectX-6 Dx User Manual

Ethernet Adapter Cards for OCP Spec 3.0
NVIDIA PeerDirect®: PeerDirect® communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX®-6 Dx advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
CPU Offload: Adapter functionality enables reduced CPU overhead, leaving more CPU resources available for computation tasks. Includes Open vSwitch (OVS) offload using ASAP² (a host-side configuration sketch follows this entry):
• Flexible match-action flow tables
• Tunneling encapsulation/decapsulation
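ASAP² OVS offload is configured from the host operating system rather than from the adapter itself. As a rough illustration only, the Python sketch below drives the standard Linux tools (devlink and ovs-vsctl) commonly used to put the NIC's embedded switch into switchdev mode and turn on OVS hardware offload. The PCI address, service name, and overall provisioning sequence are placeholder assumptions, not values taken from this manual.

```python
import subprocess

def run(cmd):
    """Run a host command and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# PCI address of the ConnectX-6 Dx physical function -- placeholder, adjust per system.
PF_PCI = "0000:03:00.0"

# Switch the embedded switch (eswitch) into switchdev mode so OVS flows can be
# offloaded to the NIC (most systems require VFs to be unbound first).
run(["devlink", "dev", "eswitch", "set", f"pci/{PF_PCI}", "mode", "switchdev"])

# Tell Open vSwitch to offload its datapath flows to hardware.
run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])

# OVS must be restarted for the hw-offload setting to take effect
# (service name varies by distribution -- placeholder).
run(["systemctl", "restart", "openvswitch-switch"])
```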
Quality of Service (QoS): Support for port-based Quality of Service, enabling applications with differing latency and SLA requirements.
Hardware-based I/O Virtualization: ConnectX®-6 Dx provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
Storage Acceleration: A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.
• NVMe over Fabrics (NVMe-oF) offloads for the target machine
SR-IOV: ConnectX®-6 Dx SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
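SR-IOV virtual functions are typically instantiated from the host through the Linux kernel's sysfs interface. The short Python sketch below shows that flow under the assumption of a physical-function netdev named enp3s0f0 (a placeholder); it reads the maximum VF count the device advertises and then requests a number of VFs.

```python
from pathlib import Path

PF_IFACE = "enp3s0f0"   # physical function netdev name -- placeholder
NUM_VFS = 4             # how many virtual functions to create

dev = Path(f"/sys/class/net/{PF_IFACE}/device")

# The device advertises how many VFs it can expose.
total = int((dev / "sriov_totalvfs").read_text())
if NUM_VFS > total:
    raise SystemExit(f"{PF_IFACE} supports at most {total} VFs")

# The kernel requires the VF count to be reset to 0 before it can be changed.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(NUM_VFS))

print(f"created {NUM_VFS} VFs on {PF_IFACE}")
```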
NC-SI: The adapter supports the Network Controller Sideband Interface (NC-SI), MCTP over SMBus, and MCTP over PCIe to the Baseboard Management Controller (BMC) interface.
High-Performance Accelerations:
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing
Host Management: NVIDIA host management sideband implementations enable remote monitoring and control capabilities using RBT, MCTP over SMBus, and MCTP over PCIe to the Baseboard Management Controller (BMC), supporting both the NC-SI and PLDM management protocols over these interfaces. NVIDIA OCP 3.0 adapters support these protocols to offer host management features such as PLDM for Firmware Update, network boot in the UEFI driver, UEFI secure boot, and more.
Secure Boot: Hardware Root-of-Trust (RoT) Secure Boot and secure firmware update using RSA cryptography, plus cloning protection via a device-unique secret key.
Crypto: IPsec and TLS data-in-motion inline encryption and decryption offload, and AES-XTS block-level data-at-rest encryption and decryption offload.
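The adapter performs AES-XTS in hardware as data moves to or from storage; functionally it is the same transform shown in the minimal software sketch below, written with the third-party Python cryptography package purely to illustrate the block-level, per-sector (tweak-driven) nature of XTS. The key size, sector size, and sector-number tweak are illustrative assumptions, not values from this manual.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-256-XTS uses a 512-bit key (two 256-bit halves: data key + tweak key).
key = os.urandom(64)

def xts_encrypt_sector(key: bytes, sector_number: int, plaintext: bytes) -> bytes:
    """Encrypt one logical block; the sector number serves as the XTS tweak."""
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

def xts_decrypt_sector(key: bytes, sector_number: int, ciphertext: bytes) -> bytes:
    tweak = sector_number.to_bytes(16, "little")
    decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()

sector = os.urandom(4096)                 # one 4 KiB block of data at rest
ct = xts_encrypt_sector(key, 7, sector)
assert xts_decrypt_sector(key, 7, ct) == sector
```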
Wake-on-LAN (WoL): The adapter supports Wake-on-LAN (WoL), a computer networking standard that allows an adapter to be turned on or awakened by a network message. In STBY mode, only port 0 is available.
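Wake-on-LAN works by having another machine on the LAN broadcast a "magic packet": 6 bytes of 0xFF followed by the target adapter's MAC address repeated 16 times. The Python sketch below builds and sends such a packet over UDP; the MAC address and broadcast port are placeholder values.

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast a Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    # Magic packet: 6 x 0xFF, then the MAC repeated 16 times.
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

# Placeholder MAC address of the adapter to wake.
send_magic_packet("00:11:22:33:44:55")
```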
Reset-on-LAN (RoL): Supported.
NVIDIA Multi-Host: NVIDIA® Mellanox® Multi-Host™ technology enables next-generation Cloud, Web 2.0, and high-performance data centers to design and build new scale-out heterogeneous compute and storage racks with direct connectivity between multiple hosts and the centralized network controller. This enables direct data access with the lowest latency, significantly improves densities, and maximizes data transfer rates. For more information, please visit NVIDIA Multi-Host Solutions.