It provides a number of nonblocking collective communication subroutines for
parallel programming that are extensions of the MPI standard. Collective
communication routines for 64-bit programs were enhanced to use shared
memory for better performance. IBM MPI collective communication is designed
to select an optimized communication algorithm according to job and data
size.
IBM MPI provides a highly scalable, low memory usage implementation. It
minimizes its own memory usage so that the application program can use as
much of the system resources as possible, and it is designed to support
parallel jobs of up to one million tasks.
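As a minimal sketch, the following example uses the standard MPI-3
nonblocking collective interface that IBM MPI implements. It starts an
allreduce, can overlap it with other work, and then waits for completion.
The compiler wrapper name (for example, mpicc) depends on the installed
environment.

/* Minimal sketch: a nonblocking allreduce by using the standard MPI-3
 * interface that IBM MPI implements. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    int local = rank + 1;   /* each task contributes one value    */
    int global = 0;         /* receives the sum across all tasks  */
    MPI_Request req;

    /* Start the collective without blocking ... */
    MPI_Iallreduce(&local, &global, 1, MPI_INT, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... overlap other computation here, then wait for completion. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("Sum over %d tasks: %d\n", ntasks, global);

    MPI_Finalize();
    return 0;
}

When the program is started with several tasks (for example, through poe or
mpirun), task 0 prints the sum of all contributions after the collective
completes.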
IBM Parallel Active Messaging Interface (PAMI)
IBM PAMI is a converged messaging API that covers point-to-point and
collective communications. PAMI exploits the low-level user space interface to
the Host Fabric Interface (HFI) and TCP/IP by using UDP sockets.
PAMI has a rich set of collective operations that are designed to support MPI
and PGAS semantics, multiple algorithm selection, and nonblocking
operation. It supports nonblocking and ad hoc geometry
(group/communicator) creation and nonblocking collective allreduce, reduce,
broadcast, gather(v), scatter(v), alltoall(v), reduce scatter, and (ex)scan
operations. The geometry can support multiple algorithms, including
hardware-accelerated versions of broadcast, barrier, allreduce, and reduce.
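PAMI's own C interface (clients, contexts, and geometries) is not listed in
this manual. As an analogue sketch only, the following example expresses the
same two ideas, ad hoc group creation and a nonblocking collective, in
standard MPI: it builds a communicator from the even-numbered tasks and runs
a nonblocking broadcast within it.

/* Analogue sketch only: the PAMI calls themselves are not shown here,
 * so ad hoc group creation plus a nonblocking collective are expressed
 * with standard MPI. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* Build an ad hoc "geometry": the even-numbered tasks only. */
    MPI_Group world_grp, even_grp;
    MPI_Comm_group(MPI_COMM_WORLD, &world_grp);

    int nranks = (ntasks + 1) / 2;
    int *ranks = malloc(nranks * sizeof(int));
    for (int i = 0; i < nranks; i++)
        ranks[i] = 2 * i;
    MPI_Group_incl(world_grp, nranks, ranks, &even_grp);

    /* Only the group members call MPI_Comm_create_group. */
    MPI_Comm even_comm = MPI_COMM_NULL;
    if (rank % 2 == 0)
        MPI_Comm_create_group(MPI_COMM_WORLD, even_grp, 0, &even_comm);

    /* Nonblocking broadcast inside the new group. */
    if (even_comm != MPI_COMM_NULL) {
        int value = (rank == 0) ? 42 : 0;
        MPI_Request req;
        MPI_Ibcast(&value, 1, MPI_INT, 0, even_comm, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("Task %d received %d\n", rank, value);
        MPI_Comm_free(&even_comm);
    }

    MPI_Group_free(&even_grp);
    MPI_Group_free(&world_grp);
    free(ranks);
    MPI_Finalize();
    return 0;
}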
Low-level application programming interface (LAPI)
The low-level application programming interface (LAPI) is a message-passing
API that provides a one-sided communication model. In this model, one task
starts a communication operation to a second task. The completion of the
communication does not require the second task to take complementary
action. The LAPI library provides basic operations to "put" data to and "get"
data from one or more virtual addresses of a remote task. LAPI also provides
an active message infrastructure. With active messaging, programmers can
install a set of handlers that are called and run in the address space of a
target task on behalf of the task that is originating the active message. Among
other uses, these handlers can be used to dynamically determine the target
address (or addresses) where data from the originating task must be stored.
You can use this generic interface to customize LAPI functions for your
environment.
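LAPI itself provides calls such as LAPI_Init, LAPI_Put, LAPI_Get, and
LAPI_Amsend; their signatures are not reproduced here. As an analogue sketch,
the following example shows the same one-sided model with standard MPI
one-sided communication: task 0 puts a value directly into a window that
task 1 exposes, and task 1 takes no complementary receive action.

/* Analogue sketch only: the one-sided "put" model that LAPI provides is
 * illustrated here with standard MPI one-sided communication. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
    if (ntasks < 2) {
        if (rank == 0)
            fprintf(stderr, "Run this sketch with at least two tasks\n");
        MPI_Finalize();
        return 1;
    }

    /* Each task exposes one integer that remote tasks can write into. */
    int exposed = -1;
    MPI_Win win;
    MPI_Win_create(&exposed, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0) {
        /* Task 0 "puts" a value directly into task 1's window; task 1
         * takes no complementary receive action. */
        int value = 99;
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("Task 1 window now holds %d\n", exposed);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}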