Nvidia 900-9X4B0-0012-0T1 User Manual page 9

ConnectX-4 Lx Ethernet Adapter Cards

Feature: Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 Lx effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.

Feature: RDMA and RDMA over Converged Ethernet (RoCE)
ConnectX-4 Lx, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities, as well as ConnectX-4 Lx advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

Feature: NVIDIA PeerDirect™
PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 Lx advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

Feature: CPU Offload
Adapter functionality that reduces CPU overhead, leaving more CPU cycles available for computation tasks.

Feature: Quality of Service (QoS)
Support for port-based Quality of Service, enabling a range of application requirements for latency and SLA.

Feature: Hardware-based I/O Virtualization
ConnectX-4 Lx provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.

Feature: Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.
• NVMe over Fabrics offloads for the target machine

Feature: SR-IOV
ConnectX-4 Lx SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.

Feature: NC-SI
The adapter supports a Network Controller Sideband Interface (NC-SI), MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller interface.

Feature: High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing

Feature: UEFI
UEFI is a standard firmware interface designed to replace BIOS. The NVIDIA UEFI network driver allows boot over the network via PXE (Preboot eXecution Environment). This network driver allows remote boot over Ethernet, or Boot over iSCSI (Bo-iSCSI) in UEFI mode, and also supports the Secure Boot standard. The UEFI network driver gives IT managers the flexibility to deploy servers with a single adapter card into Ethernet networks while also enabling booting from LAN or remote storage targets. In addition to boot capabilities, the NVIDIA UEFI network driver provides firmware management and diagnostic protocols compliant with the UEFI specification. For further information, refer to the NVIDIA PreBoot Drivers User Manual.
Supported in MCX4111A-ACUT, MCX4121A-XCHT, MCX4121A-ACUT, and MCX4121A-ACHT.
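As context for the overlay-network offload described above, the VXLAN encapsulation that such engines parse is defined in RFC 7348 as a fixed 8-byte header in front of the inner Ethernet frame. A minimal Python sketch of that header layout follows; the function names here are illustrative, not NVIDIA driver APIs:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame.

    Layout (RFC 7348): 1 byte flags, 3 reserved bytes,
    then a 24-bit VNI followed by 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # The VNI occupies the top 3 bytes of the final 32-bit word.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the 24-bit VNI from a VXLAN header."""
    (word,) = struct.unpack_from("!I", header, 4)
    return word >> 8
```

The hardware offload spares the host CPU from composing and stripping exactly this kind of header on every packet, so the inner TCP segment remains visible to the NIC's stateless offload engines.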
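The port-based QoS feature above maps traffic classes to priorities on the wire; on Ethernet this is commonly carried in the 3-bit PCP field of an 802.1Q VLAN tag. A small sketch of that tag layout, as general background rather than the adapter's specific mechanism:

```python
import struct

TPID_8021Q = 0x8100  # Tag Protocol Identifier marking an 802.1Q VLAN tag

def build_vlan_tag(pcp: int, dei: int, vid: int) -> bytes:
    """Pack an 802.1Q tag: 16-bit TPID plus a 16-bit TCI
    (3-bit priority code point, 1-bit drop-eligible indicator, 12-bit VLAN ID)."""
    if not (0 <= pcp < 8 and dei in (0, 1) and 0 <= vid < 4096):
        raise ValueError("field out of range")
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID_8021Q, tci)

def parse_pcp(tag: bytes) -> int:
    """Extract the 3-bit priority code point from an 802.1Q tag."""
    _, tci = struct.unpack("!HH", tag)
    return tci >> 13
```

Schedulers that honor latency and SLA requirements per application typically key off this priority value (or the equivalent DSCP bits at Layer 3).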