Chapter IV. iWARP (RDMA)
x. Set MVAPICH2:
[root@host~]# mpi-selector --set mvapich2 --yes
xi. Log out and log back in.
xii. Populate mpd.hosts with node names (see the example after this list).
xiii. On each node, create /etc/mv2.conf with a single line containing the IP address of the local
adapter interface. This is how MVAPICH2 picks which interface to use for RDMA traffic.
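For illustration, assume a hypothetical two-node cluster with hosts named node1 and node2 whose adapter interfaces are 10.10.10.1 and 10.10.10.2 (example names and addresses only). The two files would then look like this:
[root@host~]# cat mpd.hosts
node1
node2
[root@host~]# cat /etc/mv2.conf
10.10.10.1
On node2, /etc/mv2.conf would instead contain 10.10.10.2, the address of that node's local adapter interface.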
Building MPI Tests
i. Download Intel's MPI Benchmarks from http://software.intel.com/en-us/articles/intel-mpi-benchmarks
ii. Untar the package and change your current working directory to the src directory.
iii. Edit the make_mpich file and set the MPI_HOME variable to the MPI installation against which you want to build the benchmarks. For example, for OpenMPI-1.6.4 set the variable as:
MPI_HOME=/usr/mpi/gcc/openmpi-1.6.4/
iv. Next, build and install the benchmarks using:
[root@host~]# gmake -f make_mpich
The above step will install IMB-MPI1, IMB-IO and IMB-EXT benchmarks in the current working
directory (i.e. src).
v. Change your working directory to the MPI installation directory. For OpenMPI, this is /usr/mpi/gcc/openmpi-x.y.z/
vi. Create a directory called tests and then another directory called imb under tests.
vii. Copy the benchmarks built and installed in step (iv) to the imb directory.
viii. Repeat steps (v), (vi) and (vii) on all the nodes; a command sketch for these steps is shown below.
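As a sketch of steps (v) through (vii), assuming OpenMPI-1.6.4 as in step (iii) and that the benchmarks were built in /root/imb/src (an example path; substitute your actual build directory):
[root@host~]# cd /usr/mpi/gcc/openmpi-1.6.4/
[root@host~]# mkdir -p tests/imb
[root@host~]# cp /root/imb/src/IMB-MPI1 /root/imb/src/IMB-IO /root/imb/src/IMB-EXT tests/imb/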
Running MPI Applications
• Run Intel MPI applications as:
mpdboot -n <no_of_nodes_in_cluster> -r ssh
mpdtrace
mpiexec -ppn <processes_per_node> -n 2 /opt/intel/impi/3.1/tests/IMB-3.1/IMB-MPI1
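For illustration, on a hypothetical two-node cluster running one process per node (example values only), the sequence above might look like:
mpdboot -n 2 -r ssh
mpdtrace
mpiexec -ppn 1 -n 2 /opt/intel/impi/3.1/tests/IMB-3.1/IMB-MPI1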