Chapter IV. iWARP (RDMA)
4.2.3. Configuration of various MPIs (Installation and Setup)
Intel-MPI
i. Download the latest Intel MPI from the Intel website.
ii. Copy the license file (.lic file) into the l_mpi_p_x.y.z directory.
iii. Create machines.LINUX (list of node names) in l_mpi_p_x.y.z.
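For reference, machines.LINUX is a plain-text file with one node name per line. A minimal sketch, assuming four hypothetical nodes:
compute00
compute01
compute02
compute03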
iv. Select advanced options during installation and register the MPI.
v. Install software on every node.
[root@host]# ./install.py
vi. Set IntelMPI with mpi-selector (do this on all nodes).
[root@host]# mpi-selector --register intelmpi --source-dir /opt/intel/impi/3.1/bin/
[root@host]# mpi-selector --set intelmpi
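To verify the selection, the registered MPIs and the current default can be listed with mpi-selector's query options:
[root@host]# mpi-selector --list
[root@host]# mpi-selector --query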
vii. Edit .bashrc and add these lines:
export RSH=ssh
export DAPL_MAX_INLINE=64
export I_MPI_DEVICE=rdssm:chelsio
export MPIEXEC_TIMEOUT=180
export MPI_BIT_MODE=64
viii. Log out and log back in.
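Once logged back in, you can confirm the variables are active (the grep pattern here is only illustrative):
[root@host]# env | grep -E 'I_MPI|DAPL|MPIEXEC|MPI_BIT'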
ix. Populate mpd.hosts with node names. The hosts in this file should be Chelsio interface IP addresses.
Note: I_MPI_DEVICE=rdssm:chelsio assumes you have an entry named chelsio in /etc/dat.conf. The MPIEXEC_TIMEOUT value may need to be increased if heavy traffic is going across the systems.
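For illustration only, a DAPL 1.2-style chelsio entry in /etc/dat.conf might look like the line below; the library name, version fields, and the eth2 interface in the ia_params field are assumptions that must match your system's DAPL library and Chelsio interface:
chelsio u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "eth2 0" ""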
x. Contact Intel to obtain their MPI with DAPL support.
xi. To run Intel MPI applications:
mpdboot -n <no_of_nodes_in_cluster> -r ssh
mpdtrace
mpiexec -ppn 1 -n 2 /opt/intel/impi/3.1/tests/IMB-3.1/IMB-MPI1
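For example, on a hypothetical two-node cluster, the sequence below boots the MPD ring and runs just the IMB PingPong benchmark with one rank per node:
mpdboot -n 2 -r ssh
mpdtrace
mpiexec -ppn 1 -n 2 /opt/intel/impi/3.1/tests/IMB-3.1/IMB-MPI1 PingPong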
Performance is best with the NIC MTU set to 9000 bytes.
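For example, the MTU can be raised on the Chelsio interface with the standard iproute2 command below (eth2 is a placeholder; persist the setting through your distribution's network configuration):
[root@host]# ip link set dev eth2 mtu 9000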