RDMA and RDMA over Converged Ethernet (RoCE)
The DPU, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced hardware congestion control mechanisms, RoCE provides efficient, low-latency RDMA services over Layer 2 and Layer 3 networks.
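For illustration, the following minimal sketch shows how an application on the host or on the DPU Arm cores might consume these RDMA services through libibverbs, the standard userspace verbs API. The choice of the first enumerated device, the buffer size, and the access flags are illustrative assumptions, not values taken from this manual.

/* Minimal libibverbs sketch: open the first RDMA device and register a
 * buffer for remote access. Device selection and sizes are illustrative. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);  /* e.g. a DPU port */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);               /* protection domain */

    size_t len = 4096;
    void *buf = malloc(len);
    /* Register the buffer so the adapter can read/write it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    printf("registered MR lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}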
NVIDIA PeerDirect
NVIDIA PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), significantly reducing application run time. DPU advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
Quality of Service (QoS)
Support for port-based Quality of Service, enabling different application requirements for latency and SLA to be met.
Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.
- NVMe over Fabrics offloads for the target machine
- T10-DIF signature handover
The BlueField-2 DPU may operate as a co-processor offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic using the BlueField-2 Arm cores. On the storage initiator side, the BlueField-2 DPU can provide an efficient solution for hyper-converged systems, enabling the host CPU to focus on computing while all of the storage interface is handled through the Arm cores.
NVMe-oF
Nonvolatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage I/O requests over RDMA to transfer data between a host computer and a target solid-state storage device or system over a network. The BlueField-2 DPU may operate as a co-processor offloading specific storage tasks from the host using its powerful NVMe over Fabrics offload accelerator.
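On a Linux initiator, an NVMe-oF/RDMA connection is established by writing a connect string to the fabrics control device, which is the step the nvme-cli "nvme connect" command performs under the hood. The sketch below illustrates that mechanism; the target address, port, and subsystem NQN are placeholders, not values from this manual.

/* Sketch: initiate an NVMe-oF connection over RDMA by writing a connect
 * string to the Linux fabrics control device. Address, port, and NQN are
 * placeholders; in practice nvme-cli performs this step. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *opts =
        "transport=rdma,"
        "traddr=192.168.1.100,"                   /* placeholder target IP */
        "trsvcid=4420,"                           /* standard NVMe-oF/RDMA port */
        "nqn=nqn.2014-08.org.example:subsystem1"; /* placeholder subsystem NQN */

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) {
        perror("open /dev/nvme-fabrics");
        return 1;
    }
    if (write(fd, opts, strlen(opts)) < 0) {
        perror("nvme-of connect");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}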
SR-IOV
DPU SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
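As a sketch of how virtual functions are typically enabled on Linux, the following writes a VF count to the port's standard sysfs node; the interface name and VF count are assumptions made for illustration.

/* Sketch: enable 4 SR-IOV virtual functions on a DPU network port via
 * sysfs. The interface name (ens1f0) and VF count are placeholders. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/ens1f0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return 1;
    }
    fprintf(f, "4\n");   /* request 4 VFs; each can be assigned to a VM */
    fclose(f);
    return 0;
}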
GPU Direct
The latest advancement in GPU-GPU communications is GPUDirect RDMA. This technology provides a direct peer-to-peer (P2P) data path between GPU memory and the HCA device, significantly decreasing GPU-GPU communication latency and completely offloading the CPU, removing it from all GPU-GPU communications across the network. The DPU uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications.
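A minimal sketch of the host-side programming model, assuming CUDA, libibverbs, and the GPUDirect RDMA kernel module (nvidia-peermem) are installed: a buffer allocated with cudaMalloc is registered directly as an RDMA memory region, so the adapter can DMA to and from GPU memory without staging through host RAM. Device selection, buffer size, and build details are assumptions for illustration.

/* Sketch: register GPU memory for RDMA (GPUDirect RDMA). Assumes CUDA,
 * libibverbs, and the GPUDirect RDMA module are present; error handling
 * is reduced for brevity. */
#include <cuda_runtime.h>
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    void *gpu_buf = NULL;
    size_t len = 1 << 20;                 /* 1 MiB device buffer */
    cudaMalloc(&gpu_buf, len);

    /* With GPUDirect RDMA, the device pointer can be registered directly;
     * the HCA then reads/writes GPU memory without a host bounce buffer. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (!mr)
        perror("ibv_reg_mr on GPU memory");
    else
        printf("GPU buffer registered, rkey=0x%x\n", mr->rkey);

    if (mr) ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}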
Crypto
The crypto-enabled versions of the BlueField-2 DPU include a BlueField-2 IC that supports accelerated cryptographic operations. In addition to specialized instructions for bulk cryptographic processing in the Arm cores, an offload hardware engine accelerates public-key cryptography, and random number generation is supported.
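As one example of how software can consume hardware random number generation on Linux, the kernel hw_random framework exposes /dev/hwrng when a hardware RNG driver is loaded. Whether a given BlueField-2 software image provides this device node is an assumption made for illustration.

/* Sketch: read entropy from the Linux hardware RNG character device.
 * Availability of /dev/hwrng on a particular system is an assumption. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];
    int fd = open("/dev/hwrng", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/hwrng");
        return 1;
    }
    if (read(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf)) {
        for (int i = 0; i < (int)sizeof(buf); i++)
            printf("%02x", buf[i]);
        printf("\n");
    }
    close(fd);
    return 0;
}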
Security Accelerators
A consolidated compute and network solution based on the DPU achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage BlueField-2 compute capabilities and network offloads for security application solutions such as a Layer 4 stateful firewall.