SCSI RDMA Protocol; InfiniBand Link Operation; Conclusion - HP 226824-001 - ProLiant - ML750 Introduction Manual

Improving network performance

SCSI RDMA Protocol

SCSI RDMA Protocol (SRP) encapsulates SCSI commands over InfiniBand for SAN networking.
Operating at the kernel level, SRP uses RDMA to copy SCSI commands and data between systems,
providing low-latency communication with storage systems.

InfiniBand link operation

Since some network devices can send data faster than the destination device can receive it, InfiniBand
uses a queue pair (one send, one receive) system similar to the one for RDMA over TCP. InfiniBand
queue pairs may be located in the HCA or TCA of each device or, if necessary, in main memory.
When a connection between two channel adapters is established, the transport layer's
communications protocol is selected and queue pairs are assigned to a virtual lane.
The transport layer communications protocol can be implemented in hardware, allowing much of the
work to be off-loaded from the system's processor. The transport layer can handle four types of data
transfers for the Send queue:
• Send/Receive – Typical operation where one node sends a message and another node receives the
message.
• RDMA Write – Operation where one node writes data directly into a memory buffer of a remote
node.
• RDMA Read – Operation where one node reads data directly from a memory buffer of a remote
node.
• RDMA Atomics – Combined operation of reading a memory location, comparing the value, and
changing/updating the value if necessary.
The only operation available for the receive queue is Post Receive Buffer transfer, which identifies a
buffer that a client may send to or receive from using a Send, RDMA Write, or RDMA Read data
transfer.
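The queue-pair operations above can be sketched in code. The following is a minimal, simplified simulation (not real InfiniBand verbs programming); the `QueuePair` class and its method names are hypothetical illustrations of the four send-queue transfer types and the Post Receive Buffer operation.

```python
# Hypothetical sketch of an InfiniBand-style queue pair. Real applications
# would use a verbs API against an HCA; this only models the semantics.

class QueuePair:
    """One send queue and one receive queue, as paired on each endpoint."""

    def __init__(self, name):
        self.name = name
        self.recv_buffers = []   # buffers posted for incoming messages

    def post_receive_buffer(self, buf):
        # The only receive-queue operation: advertise a buffer to the peer.
        self.recv_buffers.append(buf)

    def send(self, peer, message):
        # Send/Receive: the peer must have posted a buffer to land the data in.
        buf = peer.recv_buffers.pop(0)
        buf[:len(message)] = message
        return buf

    def rdma_write(self, remote_memory, offset, data):
        # RDMA Write: place data directly into the remote node's memory.
        remote_memory[offset:offset + len(data)] = data

    def rdma_read(self, remote_memory, offset, length):
        # RDMA Read: fetch data directly from the remote node's memory.
        return remote_memory[offset:offset + length]

    def rdma_atomic_cmp_swap(self, remote_memory, offset, expect, new):
        # RDMA Atomic: read the remote value, compare it, and update it
        # only if it matches the expected value; return the old value.
        old = remote_memory[offset]
        if old == expect:
            remote_memory[offset] = new
        return old

# Two nodes, each with a queue pair; node B exposes a memory region.
a, b = QueuePair("A"), QueuePair("B")
b_memory = bytearray(16)

b.post_receive_buffer(bytearray(8))
landed = a.send(b, b"hello")                           # Send/Receive
a.rdma_write(b_memory, 0, b"\x2a")                     # RDMA Write
value = a.rdma_read(b_memory, 0, 1)                    # RDMA Read
old = a.rdma_atomic_cmp_swap(b_memory, 0, 0x2a, 0x63)  # RDMA Atomic
```

Note how the Send/Receive path fails if no buffer has been posted, while the RDMA operations bypass the receive queue entirely and act on remote memory directly; this is the distinction the transfer types above describe.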

Conclusion

RDMA operations can provide iWARP functionality to today's Ethernet networks and relieve the
congestion that 10-Gb Ethernet might otherwise cause. High-performance infrastructures such as
InfiniBand use RDMA as a core function to efficiently handle high data throughput that previously
required specialized networks.
HP is a founding member of the RDMA Consortium, an independent group formed to develop the
architectural specifications necessary to implement products that provide RDMA capabilities over
existing network interconnects. Many of the concepts and technologies leveraged for RDMA come
from high-end servers, such as HP NonStop and SuperDome servers, and from established networking
used in storage area networks (SANs) and local area networks (LANs). HP has led the development in
networking and communication technologies, including cluster interconnects such as ServerNet, the
Virtual Interface (VI) Architecture, and InfiniBand, which represent the origins and evolution of RDMA
technology.
Today, HP is at the forefront of the RDMA technology initiative and is a trusted advisor on future data
center directions that provide lasting value for IT customers. HP is committed to supporting RDMA as
applied to both Ethernet and InfiniBand infrastructures, and to helping customers choose the most
cost-effective fabric interconnect solution for their environments.