InfiniPath Install Guide
Version 2.0
IB0056101-00 C
QLogic Corporation reserves the right to change product specifications at any time without notice. Applications described in this document for any of these products are for illustrative purposes only. QLogic Corporation makes no representation nor warranty that such applications are suitable for the specified use without further testing or modification.
The InfiniPath Install Guide contains instructions for installing the QLogic InfiniPath Interconnect hardware adapters and the InfiniPath and OpenFabrics software. The adapters covered in this guide are the QLE7140, the QLogic InfiniPath PCI Express™ adapter; and the QMI7140, which runs on Power PC systems, particularly the IBM®...
InfiniPath Interconnect. The nodes are Linux-based computers, each having up to eight processors. The InfiniPath interconnect is InfiniBand 4X, with a raw data rate of 10 Gb/s (data rate of 8 Gb/s). InfiniPath utilizes standard, off-the-shelf InfiniBand 4X switches and cabling.
Single Port 10GBS InfiniBand IBM Blade Center Adapter

This version of InfiniPath provides support for all QLogic’s HCAs, including:
■ InfiniPath QLE7140, which is supported on systems with PCIe x8 or x16 slots
■ InfiniPath QMI7140, which runs on Power PC systems, particularly on the IBM®...
2.6.16 (x86_64 and ppc64)

NOTE: IBM Power systems run only with the SLES 10 distribution. The SUSE 10 release series is no longer supported as of this InfiniPath 2.0 release. Fedora Core 4 kernels prior to 2.6.16 are also no longer supported.

Software Components...
■ OpenFabrics libraries and utilities

OpenFabrics kernel module support is now built and installed as part of the InfiniPath RPM install. The InfiniPath release 2.0 runs on the same code base as OpenFabrics Enterprise Distribution (OFED) version 1.1. It also includes the OpenFabrics 1.1-based library and utility RPMs.
Conventions Used in this Guide

This Guide uses these typographical conventions:

Table 1-3. Typographical Conventions (Convention: Meaning)
Fixed-space font: used for literal items such as commands, command functions, programs, files and pathnames, and program output;...
■ Readme file
■ The Troubleshooting Appendix for installation, InfiniPath and OpenFabrics administration, and MPI issues is located in the InfiniPath Interconnect User Guide.

Visit the QLogic support Web site for documentation and the latest software updates:
http://www.qlogic.com
This chapter lists the requirements and provides instructions for installing the QLogic InfiniPath Interconnect adapters. Instructions are included for the QLogic InfiniPath PCI Express Adapter and PCIe riser card, QLE7140, the QMI7140 for IBM Blade Center H processor blades, and the InfiniPath QHT7040 or QHT7140 adapter hardware and HTX riser card.
Note that the motherboard vendor is the optimal source for information on the layout and use of HyperTransport and PCIe-enabled expansion slots on supported motherboards. For the most up-to-date listing of InfiniPath PCIe and HTX Adapter model numbers and the motherboards in which they are supported, please go to our web site: http://www.qlogic.com...
InfiniPath Interconnect User Guide.

2.1.2 Cabling and Switches

InfiniPath utilizes standard, off-the-shelf InfiniBand cabling and 4X switches. The InfiniPath interconnect is designed to work with all InfiniBand-compliant switches. A standard InfiniBand copper cable up to a length of twenty meters is required.
MTRR (Memory Type Range Registers) is used by the InfiniPath driver to enable write combining to the InfiniPath on-chip transmit buffers. This improves write bandwidth to the InfiniPath chip by writing multiple words in a single bus transaction (typically 64). This applies only to x86_64 systems.
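To confirm that write combining is actually in effect on a given node, one quick check on x86_64 Linux systems is to inspect the MTRR table. This is a hedged illustration only; which entry (if any) corresponds to the InfiniPath transmit buffers, and the exact addresses shown, vary by system:

$ cat /proc/mtrr

An entry marked write-combining in the output indicates that such a range has been set up; if none appears, consult the Troubleshooting Appendix in the InfiniPath Interconnect User Guide.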
The contents are illustrated in the figures below. The IBA6120 and the IBA6110 are the QLogic InfiniPath Interconnect ASICs, which are the central components of the interconnect. The IBA6120 is shown in figure 2-1, and the IBA6110 is shown in figure 2-2.
When unpacking, ground yourself before removing the InfiniPath Adapter from the anti-static bag.

1. Grasping the InfiniPath Adapter by its face plate, pull the adapter out of the anti-static bag. Handle the adapter only by its edges or the IB connector. Do not allow the InfiniPath Adapter or any adapter card components to touch any metal parts.
Installation of the InfiniPath QLE7140 in a 1U or 2U chassis requires installation with a PCI Express riser card. This results in a horizontal installation of the QLE7140; this type of installation is described first. Installation in a 3U chassis is described in the next section.
7. Remove the InfiniPath QLE7140 from the anti-static bag.

8. Locate the face plate on the connector edge of the card.

9. Connect the InfiniPath adapter and PCIe riser card together, forming the assembly that you’ll insert into your motherboard. To do this, first visually line up the card slot connector edge with the edge connector of the PCIe riser card.
The result is a combined L-shaped assembly of the PCIe riser card and InfiniPath Adapter. This assembly is what you’ll insert into the PCIe slot on the motherboard in the next step.

11. Turn the assembly so that the riser card connector edge is facing the PCIe slot on the motherboard, and the face plate is toward the front of the chassis.
Figure 2-5. Assembled QLE7140 with Riser

14. Secure the face plate to the chassis. The InfiniPath adapter has a screw hole on the side of the face plate which can be attached to the chassis with a retention screw.
usually need to be done before installing the InfiniPath adapter; see section 2.1.4.

2. Shut down the power supply to the system into which you’ll be installing the InfiniPath adapter.

3. Take precautions to avoid damage to the cards by grounding yourself or touching the metal chassis to discharge static electricity before handling them.
HTX riser card edge connector, as shown in Figure 3 above. The result is a combined L-shaped assembly of the HTX riser card and InfiniPath Adapter. This assembly is what you’ll insert into the HTX slot on the motherboard in the next step.
Figure 2-8. Assembled QHT7140 with Riser

14. Secure the face plate to the chassis. The InfiniPath adapter has a screw hole on the side of the face plate which can be attached to the chassis with a retention screw.
3. Take precautions to avoid damage to the cards by grounding yourself or touching the metal chassis to discharge static electricity before handling them.

4. If you are installing the InfiniPath Adapter into a covered system, you will first need to remove the cover screws and cover plate to expose the system’s motherboard.
4. The InfiniBand cables are symmetric; either end can be plugged into the switch. Connect the InfiniBand cable to the connector on the InfiniPath QLE7140 or QHT7140. Depress the side latches of the cable when connecting. (On some cables this latch is located at the top of the cable connector.) Make sure the...
2.6.2 Optical Fibre Option

The InfiniPath adapter also supports connection to the switch by means of optical fibres through optical media converters such as the Emcore QT2400. Not all switches support these types of converters. For more information on the Emcore converter, visit www.emcore.com.
■ OpenFabrics libraries and utilities

OpenFabrics kernel module support is now built and installed as part of the InfiniPath RPM install. The InfiniPath release 2.0 runs on the same code base as OpenFabrics Enterprise Distribution (OFED) version 1.1. It also includes the OpenFabrics 1.1-based library and utility RPMs.
OpenFabrics are defined below. See section 3.2.

3. For each release, download the InfiniPath/OpenFabrics software from the QLogic web site to a local server directory, and from there install the appropriate packages on each cluster node as described under section 3.3, section 3.4,...
2.6.16 (x86_64 and ppc64)

NOTE: IBM Power systems run only with the SLES 10 distribution. The SUSE 10 release series is no longer supported as of this InfiniPath 2.0 release. Fedora Core 4 kernels prior to 2.6.16 are also no longer supported.

3.2.2
Kernel Patches

Some kernels, such as some versions of Fedora Core 4 (2.6.16), have CONFIG_PCI_MSI=n as the default. If the InfiniPath driver is being compiled on a machine without CONFIG_PCI_MSI=y configured, you will get a compilation error. This default may also be introduced with updates to other Linux distributions or local configuration changes.
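Before compiling the driver, you can check whether the running kernel was built with MSI support. This is a hedged example that assumes your distribution installs its kernel configuration under /boot, as Red Hat, Fedora, and SUSE typically do:

$ grep CONFIG_PCI_MSI /boot/config-$(uname -r)

If the result is CONFIG_PCI_MSI=y, no patching is needed; CONFIG_PCI_MSI=n (or no output) means the kernel must be reconfigured or patched as described above.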
InfiniPath Software RPMs

Linux distributions of InfiniPath software are installed from binary RPMs. RPM is a Linux packaging and installation tool used by Red Hat, SUSE, and CentOS. Each set of RPMs uses a build identifier and a distribution identifier.
The locations of the install directories are given in the InfiniPath Interconnect User Guide in section 2.2. The InfiniPath OFED 1.1 library and utility source tar file may also be downloaded from the web site.
/lib/modules/`uname -r`/updates

This avoids replacing kernel modules which may be provided by your Linux distribution, so they will be available if the InfiniPath software is removed. In this release the module ipath_core.ko has been renamed to ib_ipath.ko, and conflicts can arise if ipath_core.ko is also present. If it is found during installation of the infinipath-kernel RPM, ipath_core.ko is renamed to...
Installation of the InfiniPath driver RPM (infinipath-kernel-2.0-xxx-yyy) builds kernel modules for the currently running kernel version. These InfiniPath modules will only work with that kernel. If a different kernel is booted, the InfiniPath driver RPM must be re-installed or rebuilt. See section 3.14 for more information.
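As a hedged sketch (the exact RPM filename, including the xxx-yyy build and distribution identifiers, differs from release to release), re-installing the driver RPM after booting a different kernel might look like:

# rpm -U --replacepkgs infinipath-kernel-2.0-xxx-yyy.x86_64.rpm

The --replacepkgs option tells rpm to install the package even though the same version is already recorded as installed, which causes the kernel modules to be built against the currently running kernel, as described above.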
Installing for Your Distribution

You may be using a kernel which is compatible with one of the supported distributions, but which may not be picked up correctly during infinipath-kernel installation. This may also happen when using make-install.sh to manually recompile the drivers.
We extend the normal Rocks compute node appliance .xml file by adding two functions: one to install the QLogic PathScale compilers, and an install script that loads the drivers after kickstart reboots the machine.
3. Download and install the config and kickstart RPMs onto the front end node.

4. Download and install the InfiniPath RPMs onto the front end node.

5. Create a directory that contains the config, kickstart, and InfiniPath RPMs on the front end node.
Note that only the prefixes of the RPM names for infinipath, mpi, and the compiler are included; the full names will vary from release to release. See your current release for the complete names.
6. Create the file:
/home/install/site-profiles/4.0.0/nodes/extend-compute.xml

Use the following contents:

<?xml version="1.0" standalone="no"?>
<kickstart>
  <description>
  </description>
  <changelog>
  </changelog>
  <!-- QLogic PathScale Compilers -->
  <package>infinipath-sub-client</package>
  <package>infinipath-base</package>
  <package>infinipath-compilers-libs</package>
  <!-- InfiniPath Drivers -->
  <package>mpi-doc</package>
  <package>mpi-frontend</package>
  <package>mpi-libs</package>
  <package>mpi-benchmark</package>
  <package>mpi-devel</package>...
</post>
</kickstart>

The important thing to note in this file is that the installation of the InfiniPath drivers is done in the <post> section, as it is a "live" install. This file can be used as a guideline: it may be cut and pasted, and then modified to suit your needs.
To find more information on Red Hat Enterprise Linux 4, and on using kickstart, see: http://www.redhat.com/

InfiniPath and OpenFabrics Driver Overview

The ib_ipath module provides low level InfiniPath hardware support, and is the base driver for both the InfiniPath and OpenFabrics software components. The ib_ipath module does hardware initialization, handles InfiniPath-specific memory management, and provides services to other InfiniPath and OpenFabrics modules.
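As a quick, hedged check that the base driver is available to the running kernel (modinfo is a standard module utility; the fields it prints vary by distribution), you can query the module directly:

$ /sbin/modinfo ib_ipath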
Normally this configuration file is set up correctly at installation and the driver(s) are loaded automatically during system boot once the RPMs have been installed. Assuming that all the InfiniPath and OpenFabrics software has been installed, the default settings upon startup will be:

■ InfiniPath ib_ipath is enabled
■...
Typically on servers there are two Ethernet devices present, numbered as 0 (eth0) and 1 (eth1). This example assumes we create a third device, eth2.

NOTE: When multiple InfiniPath chips are present, the configuration for eth3, eth4, and so on follow the same format as for adding eth2 in the examples below. Similarly, in step 2, add one to the unit number, so...
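The numbered steps referenced below are not reproduced in full here. As a hedged sketch only (the alias mechanism and file names are assumptions based on typical Red Hat-style setups; verify the exact procedure in the InfiniPath Interconnect User Guide), creating eth2 as an ipath_ether device generally involves adding a module alias to /etc/modprobe.conf (or /etc/modprobe.conf.local on SUSE), a line of the form:

alias eth2 ipath_ether

and an ifcfg-eth2 file under /etc/sysconfig/network-scripts (or your distribution's equivalent) to assign the IP address, after which the interface can be brought up with ifup eth2.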
If either step 1 or step 2 fails in some fashion, the problem must be found and corrected before continuing. Verify that the RPMs are installed correctly, and that infinipath has correctly been started. If problems continue, run ipathbug-helper and report the results to your reseller or InfiniPath support organization.
OpenFabrics Configuration and Startup In the prior InfiniPath 1.3 release the InfiniPath (ipath_core) and OpenFabrics (ib_ipath) modules were separate. In this release there is now one module, ib_ipath, which provides both low level InfiniPath support and management functions for OpenFabrics protocols.
Instructions are given here to manually configure your OpenFabrics IPoIB network interface. This example assumes that you are using sh or bash as your shell, and that all required InfiniPath and OpenFabrics RPMs are installed, and your startup scripts have been run, either manually or at system boot.
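A minimal sketch of such a manual configuration follows, assuming the standard OpenFabrics IPoIB module name ib_ipoib and a placeholder address (substitute values appropriate to your fabric); you would typically make the setting persistent through your distribution's normal network scripts rather than relying on a one-off ifconfig:

# modprobe ib_ipoib
# ifconfig ib0 10.1.17.3 netmask 255.255.255.0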
In this release SRP is provided as a technology preview. Add ib_srp to the module list in /etc/sysconfig/infinipath to have it automatically loaded.

NOTE: SRP does not yet work with IBM Power Systems. This will be fixed in a future release.
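For a one-time test before editing the configuration file, the module can also be loaded by hand (a hedged example; remember that SRP is a technology preview in this release):

# modprobe ib_srp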
The InfiniPath driver software runs as a system service, normally started at system startup. You will not usually need to restart the software, but you may wish to do so after installing a new InfiniPath release, after changing driver options, or when doing manual testing.
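The start form of the service command appears later in this chapter; assuming the usual init script conventions, a restart would be issued as root:

# /etc/init.d/infinipath restart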
If there is output, you should look at the output from this command to determine if it is configured:

$ /sbin/ifconfig -a

Finally, if you need to find which InfiniPath and OpenFabrics modules are running, try the following command:

$ lsmod | egrep 'ipath_|ib_|rdma_|findex'
Switch Configuration and Monitoring

These topics are also covered in more detail in the InfiniPath Interconnect User Guide.

3.15 MPI Over uDAPL

Some MPI implementations can be run over uDAPL. uDAPL is the user mode version of the Direct Access Provider Library (DAPL).
InfiniPath fabric. It is to be run on a front end node, and requires specification of a hosts file:

$ ipath_checkout [options] hostsfile

where hostsfile designates a file listing the hostnames of the nodes of the cluster, one hostname per line.
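For example, a hosts file for a small cluster is just a plain list of names (the hostnames below are purely illustrative), and ipath_checkout is then pointed at it:

$ cat hostsfile
node01
node02
node03
$ ipath_checkout hostsfile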
Turn on -x and -v flags in bash. In most cases of failure, the script suggests recommended actions. Please see the ipath_checkout man page for further information and updates. Also refer to appendix C (The Troubleshooting Appendix) in the InfiniPath Interconnect User Guide.
Removing Software Packages

Instructions for uninstalling or downgrading InfiniPath and OpenFabrics software are given below. To uninstall the InfiniPath software packages on any node, using a bash shell, type the command (as root):

# rpm -e $(rpm -qa 'InfiniPath-MPI/mpi*' 'InfiniPath/infinipath*')

This will uninstall the InfiniPath and MPI software RPMs on that node.
6. Verify that no InfiniPath or OpenFabrics modules are present in the directory /lib/modules/$(uname -r)/updates (a quick check is sketched after this list).

7. If not yet installed, install the InfiniPath and OpenFabrics modules from your alternate set of RPMs.

8. Reload all modules by using this command (as root):

# /etc/init.d/infinipath start
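A minimal sketch of the check in step 6, using the updates directory named above (an empty listing, or a "No such file or directory" message, means no leftover modules remain):

$ ls /lib/modules/$(uname -r)/updates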
The number of nodes used for development will vary. Although QLogic recommends installing all RPMs on all nodes, not all InfiniPath software is required on all nodes. See Table A-1 for information on installation of software RPMs on specific types of nodes.
Documentation and InfiniPath RPMs

Table A-1. Documentation/RPMs
infinipath-doc-2.0-xxx_yyy.noarch.rpm (InfiniPath man pages and other documents): Front end Optional, Compute Optional, Development Optional
mpi-doc-2.0-xxx_yyy.noarch.rpm (Man pages for MPI functions and other MPI documents): Front end Optional, Compute Optional, Development Optional
ofed-docs-2.0-xxx.1_1.yyy.x86_64.rpm (OpenFabrics documentation): Front end Optional, Compute Optional, Development Optional

Table A-2.
Utilities and source code; InfiniPath configuration files

infinipath-kernel-2.0-xxx_yyy.x86_64.rpm (InfiniPath drivers, OpenFabrics kernel modules): Front end Optional, Compute Required, Development Optional
infinipath-libs-2.0-xxx_yyy.i386.rpm (InfiniPath protocol shared libraries for 32-bit and 64-bit systems): Front end Optional, Compute Required, Development Optional

Table A-4. InfiniPath/RPMs
infinipath-2.0-xxx_yyy.x86_64.rpm (Utilities and source code): Front end Optional, Compute Required, Development Optional
For this library to be useful, a device-specific plug-in module should also be installed.

libibverbs-utils-2.0-xxx.1_0_4.yyy.x86_64.rpm (Useful libibverbs example programs such as ibv_devinfo, which displays information about InfiniBand devices): Required for OpenFabrics
libipathverbs-2.0-xxx.1_0.yyy.x86_64.rpm (Provides device-specific userspace driver for QLogic HCAs): Required for OpenFabrics
Table A-6. OpenFabrics/RPMs (Continued)
libopensm-2.0-xxx.2_0_0.yyy.x86_64.rpm (libopensm provides the library for OpenSM; used by OpenSM): Optional for OpenFabrics
libosmcomp-2.0-xxx.2_0_0.yyy.x86_64.rpm (libosmcomp provides the OS component library for OpenSM; used by OpenSM): Optional for OpenFabrics
libosmvendor-2.0-xxx.2_0_0.yyy.x86_64.rpm: Optional for OpenFabrics...
Table A-7. OpenFabrics-Devel/RPMs
libibmad-devel-2.0-xxx.1_0.yyy.x86_64.rpm (Development files for the libibmad library): Optional for OpenFabrics
libibumad-devel-2.0-xxx.1_0.yyy.x86_64.rpm (Development files for the libibumad library): Optional for OpenFabrics
libibverbs-devel-2.0-xxx.1_0_4.yyy.x86_64.rpm (Static libraries and header files for the libibverbs verbs library): Optional for OpenFabrics
libipathverbs-devel-2.0-xxx.1_0.yyy.x86_64.rpm: Optional for OpenFabrics...
Table A-9. OtherHCAs/RPMs
libmthca-2.0-xxx.1_0_3.yyy.x86_64.rpm (Provides a device-specific userspace driver for Mellanox HCAs, MT23108 InfiniHost and MT25208 InfiniHost III Ex, for use with the libibverbs library): Optional
mstflint-2.0-xxx.1_0.yyy.x86_64.rpm (Contains a burning tool for Mellanox manufactured HCA cards): Optional