Chapter IV. iWARP (RDMA)
iv. Next, build and install the benchmarks using:
[root@host]# gmake -f make_mpich
This step builds and installs the IMB-MPI1, IMB-IO and IMB-EXT benchmarks in the current working directory (i.e., src).
v. Change your working directory to the MPI installation directory. In the case of Open MPI, this is /usr/mpi/gcc/openmpi-x.y.z/
vi. Create a directory called tests and then another directory called imb under tests.
vii. Copy the benchmarks built and installed in step (iv) to the imb directory.
viii. Repeat steps (v), (vi) and (vii) on all the nodes, as sketched below.
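For example, the following commands (a sketch; the openmpi-x.y.z version string and the path to the IMB src directory are placeholders) create the imb directory on a node and copy the benchmarks into it:
[root@host]# cd /usr/mpi/gcc/openmpi-x.y.z/
[root@host]# mkdir -p tests/imb
[root@host]# cp /path/to/imb/src/IMB-MPI1 /path/to/imb/src/IMB-IO /path/to/imb/src/IMB-EXT tests/imb/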
4.2.5. Running MPI applications
Run an Open MPI application as:
mpirun --host node1,node2 -mca btl openib,sm,self /usr/mpi/gcc/openmpi-x.y.z/tests/imb/IMB-MPI1
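To launch a specific number of processes, the -np option can be added. The following is a sketch with one process per node; the node names, process count and openmpi-x.y.z path are placeholders:
mpirun -np 2 --host node1,node2 -mca btl openib,sm,self /usr/mpi/gcc/openmpi-x.y.z/tests/imb/IMB-MPI1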
Note: For OpenMPI/RDMA clusters with node counts greater than or equal to 8 and process counts greater than or equal to 64, you may experience the following RDMA address resolution error when running MPI jobs with the default OpenMPI settings:
The RDMA CM returned an event error while attempting to make a connection.
This type of error usually indicates a network configuration error.
Local host:   core96n3.asicdesigners.com
Local device: Unknown
Error name:   RDMA_CM_EVENT_ADDR_ERROR
Peer:         core96n8
Workaround: Increase the OpenMPI RDMA route resolution timeout. The default is 1000 (i.e., 1000 ms). Increase it to 30000 with this parameter:
--mca btl_openib_connect_rdmacm_resolve_timeout 30000
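For example, the earlier mpirun command with the increased timeout would look like this (a sketch; node names and the openmpi-x.y.z path are placeholders):
mpirun --host node1,node2 --mca btl openib,sm,self --mca btl_openib_connect_rdmacm_resolve_timeout 30000 /usr/mpi/gcc/openmpi-x.y.z/tests/imb/IMB-MPI1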