User’s Guide—Converged Network Adapters 41xxx Series
Document Revision History
Revision A, April 28, 2017
Revision B, August 24, 2017
Revision C, October 1, 2017
Revision D, January 24, 2018
Revision E, March 15, 2018
Revision F, April 19, 2018
Revision G, May 22, 2018
Revision H, August 23, 2018
Revision J, January 24, 2019
Changes...
OFED. Following Step 12, added a fourth bullet to the note: “Switch dependent teaming (IEEE 802.3ad LACP and Generic/Static Link Aggregation (Trunking)) cannot use a switch independent partitioned virtual adapter.” (See “Configuring Microsoft Initiator to Use Cavium’s iSCSI Offload” on page 192.)
Added support for Windows Server 2019 to the “Windows Server 2012 R2, 2016, and 2019 iSCSI Boot Installation” section on page 199.
In the To create a Hyper-V virtual switch with an RDMA NIC procedure (see “Creating a Hyper-V Virtual Switch with an RDMA NIC”)...
Table of Contents
Preface
    Supported Products
    Intended Audience
Installing the Linux Drivers with RDMA
    Linux Driver Optional Parameters
    Linux Driver Operation Defaults
Boot from SAN Configuration
    iSCSI Boot from SAN
    iSCSI Out-of-Box and Inbox Support
Configuring FCoE Boot from SAN on Linux
    Prerequisites for Linux FCoE Boot from SAN
    Configuring Linux FCoE Boot from SAN
iSCSI Offload in Windows Server
    Installing Cavium QLogic Drivers
    iSCSI Offload FAQs
    Windows Server 2012 R2, 2016, and 2019 iSCSI Boot Installation
RoCE over Switch Embedded Teaming
    Enabling RDMA on SET
    Assigning a vLAN ID on SET
Troubleshooting
    Troubleshooting Checklist
    Verifying that Current Drivers Are Loaded
List of Figures
    Dell Update Package Window
    QLogic InstallShield Wizard: Welcome Window
This preface lists the supported products, specifies the intended audience, explains the typographic conventions used in this guide, and describes legal notices.
Supported Products
This user’s guide describes the following Cavium® products:
QL41112HFCU-DE 10Gb Converged Network Adapter, full-height bracket ...
What Is in This Guide
Following this preface, the remainder of this guide is organized into the following chapters and appendices:
Chapter 1 Product Overview provides a product functional description, a list of features, and the adapter specifications. ...
Chapter 15 Windows Server 2019 describes the Windows Server 2019 features.
Chapter 16 Troubleshooting describes a variety of troubleshooting methods and resources.
Appendix A Adapter LEDs lists the adapter LEDs and their significance.
Appendix B Cables and Optical Modules lists the cables, optical modules, ...
Text in Courier font indicates a file name, directory path, or command line text. For example:
To return to the root directory from anywhere in the file structure, type cd /root and press ENTER.
Issue the following command: sh ./install.bin. ...
| (vertical bar) indicates mutually exclusive options; select one option only. For example:
on|off
1|2|3|4
... (ellipsis) indicates that the preceding item may be repeated. For example, x... means one or more instances of x...
Agency Certification
The following sections summarize the EMC and EMI test specifications performed on the 41xxx Series Adapters to comply with emission, immunity, and product safety standards.
EMI and EMC Requirements
FCC Part 15 compliance: Class A
FCC compliance information statement: This device complies with Part 15 of the FCC Rules.
KCC: Class A
Korea RRA Class A Certified
Product Name/Model: Converged Network Adapters and Intelligent Ethernet Adapters
Certification holder: QLogic Corporation
Manufactured date: Refer to date code listed on product
Manufacturer/Country of origin: QLogic Corporation/USA
A class equipment: As this equipment has undergone EMC registration for business purpose, the seller and/or the buyer is asked to beware (Business purpose...
“Adapter Specifications” on page 3
Functional Description
The Cavium FastLinQ 41000 Series Adapters include 10 and 25Gb Converged Network Adapters and Intelligent Ethernet Adapters that are designed to perform accelerated data networking for server systems. The 41000 Series Adapter includes a 10/25Gb Ethernet MAC with full-duplex capability.
Performance features:
TCP, IP, UDP checksum offloads
TCP segmentation offload (TSO)
Large segment offload (LSO)
Generic segment offload (GSO)
Large receive offload (LRO)
Receive segment coalescing (RSC)
Microsoft® dynamic virtual machine queue (VMQ), and Linux Multiqueue ...
EM64T processor support
iSCSI and FCoE boot support
Adapter Specifications
The 41xxx Series Adapter specifications include the adapter’s physical characteristics and standards-compliance references.
Physical Characteristics
The 41xxx Series Adapters are standard PCIe cards and ship with either a full-height or a low-profile bracket for use in a standard PCIe slot.
“Preinstallation Checklist” on page 6
“Installing the Adapter” on page 6
System Requirements
Before you install a Cavium 41xxx Series Adapter, verify that your system meets the hardware and operating system requirements shown in Table 2-1 and Table 2-2. For a complete list of supported operating systems, visit the Marvell Web site.
Never attempt to install a damaged adapter.
Installing the Adapter
The following instructions apply to installing the Cavium 41xxx Series Adapters in most systems. For details about performing these tasks, refer to the manuals that were supplied with the system.
Applying even pressure at both corners of the card, push the adapter card into the slot until it is firmly seated. When the adapter is properly seated, the adapter port connectors are aligned with the slot opening, and the adapter faceplate is flush against the system chassis.
Driver Installation
This chapter provides the following information about driver installation:
Installing Linux Driver Software
“Installing Windows Driver Software” on page 17
“Installing VMware Driver Software” on page 27
Installing Linux Driver Software
This section describes how to install Linux drivers with or without remote direct memory access (RDMA).
Table 3-1 describes the 41xxx Series Adapter Linux drivers.
Table 3-1. Cavium QLogic 41xxx Series Adapters Linux Drivers
qed: The qed core driver module directly controls the firmware, handles interrupts, and provides the low-level API for the protocol specific driver set.
The following source code TAR BZip2 (BZ2) compressed file installs Linux drivers on RHEL and SLES hosts:
fastlinq-<version>.tar.bz2
NOTE
For network installations through NFS, FTP, or HTTP (using a network boot disk), you may require a driver disk that contains the qede driver. Compile the Linux boot drivers by modifying the makefile and the make environment.
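As a quick illustration of the overall out-of-box build flow described in this section (a minimal sketch only; the archive name and version are placeholders, and the detailed per-distribution steps follow later in this chapter):
# tar -xjf fastlinq-<version>.tar.bz2
# cd fastlinq-<version>
# make clean; make install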
rmmod qed
depmod -a
For RHEL:
cd /lib/modules/<version>/extra/qlgc-fastlinq
rm -rf qed.ko qede.ko qedr.ko
For SLES:
cd /lib/modules/<version>/updates/qlgc-fastlinq
rm -rf qed.ko qede.ko qedr.ko
To remove Linux drivers in a non-RDMA environment:
To get the path to the currently installed drivers, issue the following command:
modinfo <driver name>...
To remove Linux drivers in an RDMA environment:
To get the path to the installed drivers, issue the following command:
modinfo <driver name>
Unload and remove the Linux drivers.
modprobe -r qedr
modprobe -r qede
modprobe -r qed
depmod -a
Remove the driver module files:...
For SLES:
cd /usr/src/packages
rpmbuild -bb SPECS/fastlinq-<version>.spec
Install the newly compiled RPM:
rpm -ivh RPMS/<arch>/qlgc-fastlinq-<version>.<arch>.rpm
NOTE
The --force option may be needed on some Linux distributions if conflicts are reported.
The drivers will be installed in the following paths.
For SLES:
/lib/modules/<version>/updates/qlgc-fastlinq
For RHEL:
/lib/modules/<version>/extra/qlgc-fastlinq
Change to the recently created directory, and then install the drivers:
cd fastlinq-<version>
make clean; make install
The qed and qede drivers will be installed in the following paths.
For SLES:
/lib/modules/<version>/updates/qlgc-fastlinq
For RHEL:
/lib/modules/<version>/extra/qlgc-fastlinq
Test the drivers by loading them (unload the existing drivers first, if necessary):...
To build and install the libqedr user space library, issue the following command:
make libqedr_install
Test the drivers by loading them as follows:
modprobe qedr
Linux Driver Optional Parameters
Table 3-2 describes the optional parameters for the qede driver. Table 3-2.
Table 3-3. Linux Driver Operation Defaults (Continued)
Operation | qed Driver Default | qede Driver Default
Flow Control | — | Auto-negotiation with RX and TX advertised
MTU | — | 1500 (range is 46–9600)
Rx Ring Size | — | 1000
Tx Ring Size | — | 4078 (range is 128–8191)
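For reference, the MTU and ring-size defaults listed in Table 3-3 can be inspected and adjusted at run time with standard Linux tools (a generic sketch; ethX is a placeholder interface name and the values are examples only, not recommendations):
# ethtool -g ethX                  (show current and maximum Rx/Tx ring sizes)
# ethtool -G ethX rx 2048 tx 4096  (change the ring sizes)
# ip link set ethX mtu 9000        (set an MTU within the 46–9600 range)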
Reboot the system.
Review the list of certificates that are prepared to be enrolled:
# mokutil --list-new
Reboot the system again.
When the shim launches MokManager, enter the root password to confirm the certificate importation to the Machine Owner Key (MOK) list.
To determine if the newly imported key was enrolled:
# mokutil --list-enrolled
To launch MOK manually and enroll the QLogic public key:
Managing Adapter Properties
Setting Power Management Options
Installing the Windows Drivers
Install Windows driver software using the Dell Update Package (DUP):
Running the DUP in the GUI
DUP Installation Options
...
3–Driver Installation Installing Windows Driver Software Figure 3-2. QLogic InstallShield Wizard: Welcome Window Complete the following in the wizard’s License Agreement window (Figure 3-3): Read the End User Software License Agreement. To continue, select I accept the terms in the license agreement. Click Next.
3–Driver Installation Installing Windows Driver Software Figure 3-3. QLogic InstallShield Wizard: License Agreement Window Complete the wizard’s Setup Type window (Figure 3-4) as follows: Select one of the following setup types: Click Complete to install all program features. Click Custom to manually select the features to be installed.
3–Driver Installation Installing Windows Driver Software Figure 3-4. InstallShield Wizard: Setup Type Window If you selected Custom in Step 5, complete the Custom Setup window (Figure 3-5) as follows: Select the features to install. By default, all features are selected. To change a feature’s install setting, click the icon next to it, and then select one of the following options: ...
Figure 3-5. InstallShield Wizard: Custom Setup Window
In the InstallShield Wizard’s Ready To Install window (Figure 3-6), click Install. The InstallShield Wizard installs the QLogic Adapter drivers and Management Software Installer.
Figure 3-6. InstallShield Wizard: Ready to Install the Program Window
3–Driver Installation Installing Windows Driver Software When the installation is complete, the InstallShield Wizard Completed window appears (Figure 3-7). Click Finish to dismiss the installer. Figure 3-7. InstallShield Wizard: Completed Window In the Dell Update Package window (Figure 3-8), “Update installer operation was successful”...
3–Driver Installation Installing Windows Driver Software Figure 3-8. Dell Update Package Window DUP Installation Options To customize the DUP installation behavior, use the following command line options. To extract only the driver components to a directory: /drivers=<path> NOTE This command requires the /s option. ...
3–Driver Installation Installing Windows Driver Software NOTE This command requires the /s option. DUP Installation Examples The following examples show how to use the installation options. To update the system silently: <DUP_file_name>.exe /s To extract the update contents to the C:\mydir\ directory: <DUP_file_name>.exe /s /e=C:\mydir To extract the driver components to the C:\mydir\ directory: <DUP_file_name>.exe /s /drivers=C:\mydir...
3–Driver Installation Installing VMware Driver Software Setting Power Management Options You can set power management options to allow the operating system to turn off the controller to save power or to allow the controller to wake up the computer. If the device is busy (servicing a call, for example), the operating system will not shut down the device.
For ESXi 6.5, the NIC and RoCE drivers have been packaged together and can be installed as a single offline bundle using the standard ESXi installation commands. The package name is qedentv_3.0.7.5_qedrntv_3.0.7.5.1_signed_drivers.zip. The recommended installation sequence is NIC and RoCE drivers, followed by FCoE and iSCSI drivers.
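For reference, an offline bundle of this kind is installed with the standard ESXi depot command (a sketch only; the datastore path shown is an assumed example, and a host reboot is typically required afterward):
# esxcli software vib install -d /vmfs/volumes/datastore1/qedentv_3.0.7.5_qedrntv_3.0.7.5.1_signed_drivers.zip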
Place the host in maintenance mode by issuing the following command:
# esxcli system maintenanceMode set -e true
NOTE
The maximum number of supported qedentv Ethernet interfaces on an ESXi host is 32 because the vmkernel allows only 32 interfaces to register for management callback.
VMware NIC Driver Optional Parameters
Table 3-6 describes the optional parameters that can be supplied as command line arguments to the esxcfg-module command.
Table 3-6. VMware NIC Driver Optional Parameters
hw_vlan: Globally enables (1) or disables (0) hardware vLAN insertion and removal. Disable this parameter when the upper layer needs to send or receive fully formed packets. ...
Table 3-6. VMware NIC Driver Optional Parameters (Continued)
auto_fw_reset: Enables (1) or disables (0) the driver automatic firmware recovery capability. When this parameter is enabled, the driver attempts to recover from events such as transmit timeouts, firmware asserts, and adapter parity errors. ...
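As an illustration of how the Table 3-6 options are applied (a sketch; the parameter and value shown are arbitrary examples, and the driver module must be reloaded or the host rebooted for the change to take effect):
# esxcfg-module -s "hw_vlan=0" qedentv
# esxcfg-module -g qedentv    (display the option string currently configured for the module)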
Table 3-7. VMware Driver Parameter Defaults (Continued)
Number of Queues: Enabled (eight RX/TX queue pairs)
Wake on LAN (WoL): Disabled
Removing the VMware Driver
To remove the .vib file (qedentv), issue the following command:
# esxcli software vib remove --vibname qedentv
To remove the driver, issue the following command:
# vmkload_mod -u qedentv
3–Driver Installation Installing VMware Driver Software iSCSI Support The QLogic VMware iSCSI qedil Host Bus Adapter (HBA) driver, similar to qedf, is a kernel mode driver that provides a translation layer between the VMware SCSI stack and the QLogic iSCSI firmware and hardware. The qedil driver leverages the services provided by the VMware iscsid infrastructure for session management and IP services.
Upgrading the Firmware This chapter provides information about upgrading the firmware using the Dell Update Package (DUP). The firmware DUP is a Flash update utility only; it is not used for adapter configuration. You can run the firmware DUP by double-clicking the executable file.
4–Upgrading the Firmware Running the DUP by Double-Clicking Follow the on-screen instructions. In the Warning dialog box, click Yes to continue the installation. The installer indicates that it is loading the new firmware, as shown in Figure 4-2. Figure 4-2. Dell Update Package: Loading New Firmware When complete, the installer indicates the result of the installation, as shown Figure 4-3.
4–Upgrading the Firmware Running the DUP from a Command Line Click Yes to reboot the system. Click Finish to complete the installation, as shown in Figure 4-4. Figure 4-4. Dell Update Package: Finish Installation Running the DUP from a Command Line Running the firmware DUP from the command line, with no options specified, results in the same behavior as double-clicking the DUP icon.
4–Upgrading the Firmware Running the DUP Using the .bin File Figure 4-5 shows the options that you can use to customize the Dell Update Package installation. Figure 4-5. DUP Command Line Options Running the DUP Using the .bin File The following procedure is supported only on Linux OS. To update the DUP using the .bin file: Copy the file to the system or...
4–Upgrading the Firmware Running the DUP Using the .bin File Example output from the SUT during the DUP update: ./Network_Firmware_NJCX1_LN_08.07.26.BIN Collecting inventory... Running validation... BCM57810 10 Gigabit Ethernet rev 10 (p2p1) The version of this Update Package is the same as the currently installed version.
Adapter Preboot Configuration During the host boot process, you have the opportunity to pause and perform adapter management tasks using the Human Infrastructure Interface (HII) application. These tasks include the following: “Getting Started” on page 41 “Displaying Firmware Image Properties” on page 44 ...
5–Adapter Preboot Configuration Getting Started Getting Started To start the HII application: Open the System Setup window for your platform. For information about launching the System Setup, consult the user guide for your system. In the System Setup window (Figure 5-1), select Device Settings, and then press ENTER.
5–Adapter Preboot Configuration Getting Started The Main Configuration Page (Figure 5-3) presents the adapter management options where you can set the partitioning mode. Figure 5-3. Main Configuration Page Under Device Level Configuration, set the Partitioning Mode to NPAR to add the NIC Partitioning Configuration option to the Main Configuration Page, as shown in Figure 5-4.
5–Adapter Preboot Configuration Getting Started Device Level Configuration (see “Configuring Device-level Parameters” on page NIC Configuration (see “Configuring NIC Parameters” on page iSCSI Configuration (if iSCSI remote boot is allowed by enabling iSCSI offload in NPAR mode on the port’s third partition) (see “Configuring iSCSI Boot”...
Family Firmware Version is the multiboot image version, which comprises several firmware component images. MBI Version is the Cavium QLogic bundle image version that is active on the device. Controller BIOS Version is the management firmware version.
5–Adapter Preboot Configuration Configuring Device-level Parameters Configuring Device-level Parameters NOTE The iSCSI physical functions (PFs) are listed when the iSCSI Offload feature is enabled in NPAR mode only. The FCoE PFs are listed when the FCoE Offload feature is enabled in NPAR mode only. Not all adapter models support iSCSI Offload and FCoE Offload.
5–Adapter Preboot Configuration Configuring NIC Parameters NParEP Mode configures the maximum quantity of partitions per adapter. This parameter is visible when you select either NPAR or NPar + SR-IOV as the Virtualization Mode in Step Enabled allows you to configure up to 16 partitions per adapter. ...
5–Adapter Preboot Configuration Configuring NIC Parameters Select one of the following Link Speed options for the selected port. Not all speed selections are available on all adapters. Auto Negotiated enables Auto Negotiation mode on the port. FEC mode selection is not available for this speed mode. 1 Gbps enables 1GbE fixed speed mode on the port.
5–Adapter Preboot Configuration Configuring NIC Parameters For Boot Mode, select one of the following values: PXE enables PXE boot. FCoE enables FCoE boot from SAN over the hardware offload pathway. The FCoE mode is available only if FCoE Offload is enabled on the second partition in NPAR mode (see “Configuring Partitions”...
5–Adapter Preboot Configuration Configuring Data Center Bridging To configure the port to use RDMA: NOTE Follow these steps to enable RDMA on all partitions of an NPAR mode port. Set NIC + RDMA Mode to Enabled. Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset.
5–Adapter Preboot Configuration Configuring Data Center Bridging CEE enables the legacy Converged Enhanced Ethernet (CEE) protocol DCBX mode on this port. IEEE enables the IEEE DCBX protocol on this port. Dynamic enables dynamic application of either the CEE or IEEE ...
5–Adapter Preboot Configuration Configuring FCoE Boot Configuring FCoE Boot NOTE The FCoE Boot Configuration Menu is only visible if FCoE Offload Mode is enabled on the second partition in NPAR mode (see Figure 5-18 on page 60). It is not visible in non-NPAR mode. To configure the FCoE boot configuration parameters: On the Main Configuration Page, select FCoE Configuration, and then select the following as needed:...
5–Adapter Preboot Configuration Configuring iSCSI Boot Figure 5-10. FCoE Target Configuration Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset. Configuring iSCSI Boot NOTE The iSCSI Boot Configuration Menu is only visible if iSCSI Offload Mode is enabled on the third partition in NPAR mode (see Figure 5-19 on page 61).
5–Adapter Preboot Configuration Configuring iSCSI Boot To configure the iSCSI boot configuration parameters: On the Main Configuration Page, select iSCSI Boot Configuration Menu, and then select one of the following options: iSCSI General Configuration iSCSI Initiator Configuration iSCSI First Target Configuration ...
iSCSI Second Target Parameters (Figure 5-14):
Connect
IPv4 Address
TCP Port
Boot LUN
iSCSI Name
CHAP ID
CHAP Secret
Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset.
5–Adapter Preboot Configuration Configuring Partitions Configuring Partitions You can configure bandwidth ranges for each partition on the adapter. For information specific to partition configuration on VMware ESXi 6.0/6.5, see Partitioning for VMware ESXi 6.0 and ESXi 6.5. To configure the maximum and minimum bandwidth allocations: On the Main Configuration Page, select NIC Partitioning Configuration, and then press ENTER.
5–Adapter Preboot Configuration Configuring Partitions On the Global Bandwidth Allocation page (Figure 5-16), click each partition minimum and maximum TX bandwidth field for which you want to allocate bandwidth. There are eight partitions per port in dual-port mode. Figure 5-16. Global Bandwidth Allocation Page ...
5–Adapter Preboot Configuration Configuring Partitions When prompted, click Yes to save the changes. Changes take effect after a system reset. To configure partitions: To examine a specific partition configuration, on the NIC Partitioning Configuration page (Figure 5-15 on page 57), select Partition n Configuration.
5–Adapter Preboot Configuration Configuring Partitions FCoE Mode enables or disables the FCoE-Offload personality on the second partition. If you enable this mode on the second partition, you should disable NIC Mode. Because only one offload is available per port, if FCoE-Offload is enabled on the port’s second partition, iSCSI-Offload cannot be enabled on the third partition of that same NPAR mode port.
5–Adapter Preboot Configuration Configuring Partitions To configure the third partition, select Partition 3 Configuration to open the Partition 3 Configuration page (Figure 5-19). If iSCSI Offload is present, the Partition 3 Configuration shows the following parameters: NIC Mode (Disabled) ...
5–Adapter Preboot Configuration Configuring Partitions Figure 5-20. Partition 4 Configuration Partitioning for VMware ESXi 6.0 and ESXi 6.5 If the following conditions exist on a system running either VMware ESXi 6.0 or ESXi 6.5, you must uninstall and reinstall the drivers: The adapter is configured to enable NPAR with all NIC partitions.
In the preceding command output, notice that vmnic4 and vmnic10 are actually storage adapter ports. To prevent this behavior, you should enable storage functions at the same time that you configure the adapter for NPAR mode. For example, assuming that the adapter is in Single Function mode by default, you should:
Enable NPAR mode.
SAN boot enables deployment of diskless servers in an environment where the boot disk is located on storage connected to the SAN. The server (initiator) communicates with the storage device (target) through the SAN using the Cavium Converged Network Adapter (CNA) Host Bus Adapter (HBA).
6–Boot from SAN Configuration iSCSI Boot from SAN iSCSI Out-of-Box and Inbox Support Table 6-1 lists the operating systems’ inbox and out-of-box support for iSCSI boot from SAN (BFS). Table 6-1. iSCSI Out-of-Box and Inbox Boot from SAN Support Out-of-Box Inbox Hardware Hardware...
iSCSI SW (also known as non-offload path with Microsoft/Open-iSCSI initiator) ISCSI HW (offload path with the Cavium FastLinQ offload iSCSI driver). This option can be set using Boot Mode. For VMware ESXi operating systems, only the iSCSI SW method is supported.
Figure 6-1. System Setup: Boot Settings
6–Boot from SAN Configuration iSCSI Boot from SAN Enabling NPAR and the iSCSI HBA To enable NPAR and the iSCSI HBA: In the System Setup, Device Settings, select the QLogic device (Figure 6-2). Refer to the OEM user guide on accessing the PCI device configuration menu.
6–Boot from SAN Configuration iSCSI Boot from SAN Selecting the iSCSI UEFI Boot Protocol Before selecting the preferred boot mode, ensure that the Device Level Configuration menu setting is Enable NPAR and that the NIC Partitioning Configuration menu setting is Enable iSCSI HBA. The Boot Mode option is listed under iSCSI Configuration (Figure 6-3) for the...
Initiator IQN CHAP ID and secret Configuring iSCSI Boot Parameters Configure the Cavium QLogic iSCSI boot software for either static or dynamic configuration. For configuration options available from the General Parameters window, see Table 6-2 on page 76, which lists parameters for both IPv4 and IPv6.
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring BIOS Boot Mode To configure the boot mode: Restart the system. Access the System BIOS menu (Figure 6-4). NOTE SAN boot is supported in UEFI environment only. Make sure the system boot option is UEFI, and not legacy. Figure 6-4.
6–Boot from SAN Configuration iSCSI Boot from SAN On the Main Configuration Page, select NIC Configuration (Figure 6-5), and then press ENTER. Figure 6-5. Selecting NIC Configuration AH0054602-00 J...
6–Boot from SAN Configuration iSCSI Boot from SAN On the NIC Configuration page (Figure 6-6), for the Boot Protocol option, select UEFI iSCSI HBA (requires NPAR mode). Figure 6-6. System Setup: NIC Configuration, Boot Protocol NOTE Use the Virtual LAN Mode and Virtual LAN ID options on this page only for PXE boot.
For information on configuration options, see Table 6-2 on page 76.
To configure the iSCSI boot parameters using static configuration:
In the Device HII Main Configuration Page, select iSCSI Configuration (Figure 6-7), and then press ENTER.
Figure 6-7.
6–Boot from SAN Configuration iSCSI Boot from SAN On the iSCSI General Parameters page (Figure 6-9), press the DOWN ARROW key to select a parameter, and then press the ENTER key to input the following values (Table 6-2 on page 76 provides descriptions of these parameters): ...
Table 6-2. iSCSI General Parameters
TCP/IP Parameters via DHCP: This option is specific to IPv4. Controls whether the iSCSI boot host software acquires the IP address information using DHCP (Enabled) or using a static IP configuration (Disabled).
iSCSI Parameters via DHCP: Controls whether the iSCSI boot host software acquires its iSCSI target parameters using DHCP (Enabled) or through a static...
6–Boot from SAN Configuration iSCSI Boot from SAN Select iSCSI Initiator Parameters (Figure 6-10), and then press ENTER. Figure 6-10. System Setup: Selecting iSCSI Initiator Parameters On the iSCSI Initiator Parameters page (Figure 6-11), select the following parameters, and then type a value for each: ...
6–Boot from SAN Configuration iSCSI Boot from SAN NOTE For the preceding items with asterisks (*), note the following: The label will change to IPv6 or IPv4 (default) based on the IP version set on the iSCSI General Parameters page (Figure 6-9 on page 75).
6–Boot from SAN Configuration iSCSI Boot from SAN Select iSCSI First Target Parameters (Figure 6-12), and then press ENTER. Figure 6-12. System Setup: Selecting iSCSI First Target Parameters On the iSCSI First Target Parameters page, set the Connect option to Enabled for the iSCSI target.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-13. System Setup: iSCSI First Target Parameters Return to the iSCSI Boot Configuration page, and then press ESC. AH0054602-00 J...
If you want to configure a second iSCSI target device, select iSCSI Second Target Parameters (Figure 6-14), and enter the parameter values as you did in Step 10. This second target is used if the first target cannot be connected to. Otherwise, proceed to Step
Figure 6-14.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-15. System Setup: Saving iSCSI Changes After all changes have been made, reboot the system to apply the changes to the adapter’s running configuration. Dynamic iSCSI Boot Configuration In a dynamic configuration, ensure that the system’s IP address and target (or initiator) information are provided by a DHCP server (see IPv4 and IPv6 configurations in “Configuring the DHCP Server to Support iSCSI Boot”...
6–Boot from SAN Configuration iSCSI Boot from SAN NOTE When using a DHCP server, the DNS server entries are overwritten by the values provided by the DHCP server. This override occurs even if the locally provided values are valid and the DHCP server provides no DNS server information.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-16. System Setup: iSCSI General Parameters Enabling CHAP Authentication Ensure that the CHAP authentication is enabled on the target. To enable CHAP authentication: Go to the iSCSI General Parameters page. Set CHAP Authentication to Enabled. In the Initiator Parameters window, type values for the following: ...
Configuring vLANs for iSCSI Boot DHCP iSCSI Boot Configurations for IPv4 DHCP includes several options that provide configuration information to the DHCP client. For iSCSI boot, Cavium QLogic adapters support the following DHCP configurations: DHCP Option 17, Root Path DHCP Option 43, Vendor-specific Information ...
Table 6-3. DHCP Option 17 Parameter Definitions (Continued)
<LUN>: Logical unit number to use on the iSCSI target. The value of the LUN must be represented in hexadecimal format. A LUN with an ID of 64 must be configured as 40 within the Option 17 parameter on the DHCP server.
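To make the Option 17 root path format concrete, the following is an illustrative DHCP server entry in ISC dhcpd syntax (the address, port, LUN, and IQN are invented examples; the protocol value 6 denotes TCP):
option root-path "iscsi:192.168.100.10:6:3260:0:iqn.2005-03.com.example:storage.target1";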
DHCPv6 Option 17, Vendor-Specific Information NOTE The DHCPv6 standard Root Path option is not yet available. Cavium suggests using Option 16 or Option 17 for dynamic iSCSI boot IPv6 support. DHCPv6 Option 16, Vendor Class Option DHCPv6 Option 16 (vendor class option) must be present and must contain a string that matches your configured DHCP Vendor ID parameter.
6–Boot from SAN Configuration iSCSI Boot from SAN Table 6-5 lists the DHCP Option 17 sub-options. Table 6-5. DHCP Option 17 Sub-option Definitions Sub-option Definition First iSCSI target information in the standard root path format: "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN> ": "<targetname>" Second iSCSI target information in the standard root path format: "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN>...
6–Boot from SAN Configuration iSCSI Boot from SAN Select VLAN ID to enter and set the vLAN value, as shown in Figure 6-17. Figure 6-17. System Setup: iSCSI General Parameters, VLAN ID Configuring iSCSI Boot from SAN on Windows Adapters support iSCSI boot to enable network boot of operating systems to diskless systems.
41xxx Series Adapters is not supported in legacy BIOS. Cavium recommends that you disable the Integrated RAID Controller. Selecting the Preferred iSCSI Boot Mode To select the iSCSI boot mode on Windows: On the NIC Partitioning Configuration page for a selected partition, set the iSCSI Offload Mode to Enabled.
Virtual LAN ID: (Optional) You can isolate iSCSI traffic on the network in a Layer 2 vLAN to segregate it from general traffic. To segregate traffic, make the iSCSI interface on the adapter a member of the Layer 2 vLAN by setting this value.
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring the iSCSI Targets You can set up the iSCSI first target, second target, or both at once. To set the iSCSI target parameters on Windows: From the Main Configuration page, select iSCSI Configuration, and then select iSCSI First Target Parameters.
6–Boot from SAN Configuration iSCSI Boot from SAN The output from the preceding command shown in Figure 6-18 indicates that the iSCSI LUN was detected successfully at the preboot level. Figure 6-18. Detecting the iSCSI LUN Using UEFI Shell (Version 2) On the newly detected iSCSI LUN, select an installation source such as using a WDS server, mounting the with an integrated Dell Remote...
6–Boot from SAN Configuration iSCSI Boot from SAN Inject the latest QLogic drivers by mounting drivers in the virtual media: Click Load driver, and then click Browse (see Figure 6-20). Figure 6-20. Windows Setup: Selecting Driver to Install Navigate to the driver location and choose the qevbd driver. Choose the adapter on which to install the driver, and then click Next to continue.
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring iSCSI Boot from SAN for RHEL 7.5 and Later To install RHEL 7.5 and later: Boot from the RHEL 7.x installation media with the iSCSI target already connected in UEFI. Install Red Hat Enterprise Linux 7.x Test this media &...
After a successful system boot, edit the /etc/modprobe.d/anaconda-blacklist.conf file to remove the blacklist entry for the selected driver.
Edit the /etc/default/grub file as follows:
Locate the string in double quotes as shown in the following example. The command line is a specific reference to help find the string.
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring iSCSI Boot from SAN for Other Linux Distributions For distributions such as RHEL 6.9/6.10/7.2/7.3, SLES 11 SP4, and SLES 12 SP1/2, the inbox iSCSI user space utility (Open-iSCSI tools) lacks support for qedi iSCSI transport and cannot perform user space-initiated iSCSI functionality.
Where the DUD parameter is dud=1 for RHEL 7.x and SLES 12.x.
Install the OS on the target LUN.
Migrating from Software iSCSI Installation to Offload iSCSI
Migrate from the non-offload interface to an offload interface by following the instructions for your operating system.
Open the /boot/efi/EFI/redhat/grub.conf file, make the following changes, and save the file:
Remove ifname=eth5:14:02:ec:ce:dc:6d
Remove ip=ibft
selinux=0
For example:
kernel /vmlinuz-2.6.32-696.el6.x86_64 ro root=/dev/mapper/vg_prebooteit-lv_root rd_NO_LUKS iscsi_firmware LANG=en_US.UTF-8 ifname=eth5:14:02:ec:ce:dc:6d rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_DM rd_LVM_LV=vg_prebooteit/lv_swap ip=ibft KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_prebooteit/lv_root rhgb quiet
6–Boot from SAN Configuration iSCSI Boot from SAN Migrating to Offload iSCSI for SLES 11 SP4 To migrate from a software iSCSI installation to an offload iSCSI for SLES 11 SP4: Update Open-iSCSI tools and iscsiuio to the latest available versions by issuing the following commands: # rpm -ivh qlgc-open-iscsi-2.0_873.111.sles11sp4-3.x86_64.rpm --force # rpm -ivh iscsiuio-2.11.5.3-2.sles11sp4.x86_64.rpm --force...
6–Boot from SAN Configuration iSCSI Boot from SAN Create a backup of initrd, and then build a new initrd by issuing the following commands. cd /boot/ mkinitrd Reboot the server, and then open the UEFI HII. In the UEFI HII, disable iSCSI boot from BIOS, and then enable iSCSI HBA or boot for the adapter as follows: Select the adapter port, and then select Device Level Configuration.
6–Boot from SAN Configuration iSCSI Boot from SAN Enable iscsid and iscsiuio services (if they are not already enabled) by issuing the following commands: # systemctl enable iscsid # systemctl enable iscsiuio Issue the following command: cat /proc/cmdline Check if the OS has preserved any boot options, such as ip=ibft rd.iscsi.ibft ...
6–Boot from SAN Configuration iSCSI Boot from SAN Open the NIC Partitioning Configuration page and set the iSCSI Offload Mode to Enabled. (iSCSI HBA support is on partition 3 for a two--port adapter and on partition 2 for a four-port adapter.) Open the NIC Configuration menu and set the Boot Protocol to UEFI iSCSI.
Edit the /boot/efi/EFI/redhat/grub.conf file, make the following changes, and then save the file:
Remove ifname=eth5:14:02:ec:ce:dc:6d
Remove ip=ibft
selinux=0
Build the initramfs file by issuing the following command:
# dracut -f
Reboot and change the adapter boot settings to use L4 or iSCSI (HW) for both ports and to boot through L4.
6–Boot from SAN Configuration iSCSI Boot from SAN Update Open-iSCSI tools and iscsiuio by issuing the following commands: # rpm -ivh qlgc-open-iscsi-2.0_873.111.rhel7u3-3.x86_64.rpm --force # rpm -ivh iscsiuio-2.11.5.5-6.rhel7u3.x86_64.rpm --force Reload all the daemon services by issuing the following command: # systemctl daemon-reload Restart iscsid and iscsiuio services by issuing the following commands: # systemctl restart iscsiuio # systemctl restart iscsid...
Reboot the server and boot into the OS with multipath.
NOTE
For any additional changes in the /etc/multipath.conf file to take effect, you must rebuild the initrd image and reboot the server.
Migrating and Configuring MPIO to Offloaded Interface for SLES 11 SP4
To migrate from L2 to L4 and configure MPIO to boot the OS over an offloaded interface for SLES 11 SP4:
NOTE
If the multipath.conf file is not present, copy it from the /usr/share/doc/packages/multipath-tools folder:
cp /usr/share/doc/packages/multipath-tools/multipath.conf.defaults /etc/multipath.conf
Edit the multipath.conf file to enable the default section.
Rebuild the initrd image to include MPIO support:
# mkinitrd -f multipath
Reboot the server and boot the OS with multipath support.
6–Boot from SAN Configuration iSCSI Boot from SAN To load the multipath module, issue the following command: # modprobe dm_multipath To enable the multipath daemon, issue the following commands: # systemctl start multipathd.service # systemctl enable multipathd.service # systemctl start multipathd.socket To run the multipath utility, issue the following commands: # multipath (may not show the multipath devices because it is booted with a single path on L2)
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring iSCSI Boot from SAN on VMware Because VMware does not natively support iSCSI boot from SAN offload, you must configure BFS through the software in preboot, and then transition to offload upon OS driver loads.
6–Boot from SAN Configuration iSCSI Boot from SAN Go to the Main Configuration Page and select NIC Partitioning Configuration. On the NIC Partitioning Configuration page, select Partition 1 Configuration. Complete the Partition 1 Configuration page as follows: For Link Speed, select either Auto Neg, 10Gbps, or 1Gbps. Ensure that the link is up.
6–Boot from SAN Configuration iSCSI Boot from SAN Configuring the System BIOS for iSCSI Boot (L2) To configure the System BIOS on VMware: On the System BIOS Settings page, select Boot Settings. Complete the Boot Settings page as shown in Figure 6-23.
6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-24. Integrated NIC: System BIOS, Connection 1 Settings for VMware Complete the target details, and for Authentication Type, select either CHAP (to set CHAP details) or None (the default). Figure 6-25 shows an example.
6–Boot from SAN Configuration iSCSI Boot from SAN Save all configuration changes, and then reboot the server. During system boot up, press the F11 key to start the Boot Manager. In the Boot Manager under Boot Menu, Select UEFI Boot Option, select the Embedded SATA Port AHCI Controller.
Figure 6-27. VMware iSCSI Boot from SAN Successful FCoE Boot from SAN Cavium 41xxx Series Adapters support FCoE boot to enable network boot of operating systems to diskless systems. FCoE boot allows a Windows, Linux, or VMware operating system to boot from a Fibre Channel or FCoE target machine located remotely over an FCoE supporting network.
6–Boot from SAN Configuration FCoE Boot from SAN FCoE Out-of-Box and Inbox Support Table 6-6 lists the operating systems’ inbox and out-of-box support for FCoE boot from SAN (BFS). Table 6-6. FCoE Out-of-Box and Inbox Boot from SAN Support Inbox Out-of-Box Hardware Offload Hardware Offload...
6–Boot from SAN Configuration FCoE Boot from SAN Specifying the BIOS Boot Protocol FCoE boot from SAN is supported in UEFI mode only. Set the platform in boot mode (protocol) using the system BIOS configuration to UEFI. NOTE FCoE BFS is not supported in legacy BIOS mode. Configuring Adapter UEFI Boot Mode To configure the boot mode to FCOE: Restart the system.
6–Boot from SAN Configuration FCoE Boot from SAN On the Device Settings page, select the QLogic adapter (Figure 6-29). Figure 6-29. System Setup: Device Settings, Port Selection AH0054602-00 J...
6–Boot from SAN Configuration FCoE Boot from SAN On the Main Configuration Page, select NIC Configuration (Figure 6-30), and then press ENTER. Figure 6-30. System Setup: NIC Configuration On the NIC Configuration page, select Boot Mode, press ENTER, and then select FCoE as a preferred boot mode.
6–Boot from SAN Configuration FCoE Boot from SAN Figure 6-31. System Setup: FCoE Mode Enabled To configure the FCoE boot parameters: On the Device UEFI HII Main Configuration Page, select FCoE Configuration, and then press ENTER. On the FCoE Configuration Page, select FCoE General Parameters, and then press ENTER.
6–Boot from SAN Configuration FCoE Boot from SAN Figure 6-32. System Setup: FCoE General Parameters Return to the FCoE Configuration page. Press ESC, and then select FCoE Target Parameters. Press ENTER. In the FCoE General Parameters Menu, enable Connect to the preferred FCoE target.
Windows Server 2012 R2 and 2016 FCoE Boot Installation For Windows Server 2012R2/2016 boot from SAN installation, Cavium requires the use of a “slipstream” DVD, or ISO image, with the latest Cavium QLogic drivers injected. See “Injecting (Slipstreaming) Adapter Drivers into Windows Image Files”...
6–Boot from SAN Configuration FCoE Boot from SAN Load the latest Cavium QLogic FCoE boot images into the adapter NVRAM. Configure the FCoE target to allow a connection from the remote device. Ensure that the target has sufficient disk space to hold the new OS installation.
Extract the driver package to a local folder or network location; for example, type c:\temp. Follow the driver installer instructions to install the drivers in the specified folder. In this example, the Cavium QLogic driver files are installed here:
c:\temp\Program File 64\QLogic Corporation\QDrivers
Download the Windows Assessment and Deployment Kit (ADK) version 10 from Microsoft:
https://developer.microsoft.com/en-us/windows/hardware/
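The extracted drivers are then injected into the Windows boot and installation images. A minimal sketch of that slipstream step using the deployment tools installed with the ADK (the mount directory, image index, and file names are assumed examples, not taken from this guide):
Dism /Mount-Image /ImageFile:C:\slipstream\sources\boot.wim /Index:1 /MountDir:C:\mount
Dism /Image:C:\mount /Add-Driver /Driver:"c:\temp\Program File 64\QLogic Corporation\QDrivers" /Recurse
Dism /Unmount-Image /MountDir:C:\mount /Commit
Repeat the Add-Driver step against install.wim so that the installed operating system also contains the injected drivers.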
Prerequisites for Linux FCoE Boot from SAN The following are required for Linux FCoE boot from SAN to function correctly with the Cavium FastLinQ 41xxx 10/25GbE Controller. General You no longer need to use the FCoE disk tabs in the Red Hat and SUSE installers because the FCoE interfaces are not exposed from the network interface and are automatically activated by the qedf driver.
The dud=1 installer parameter is required to ensure that the installer will ask for the driver update disk.
Do not use the withfcoe=1 installer parameter, because the software FCoE will conflict with the hardware offload if network interfaces from qede are exposed.
6–Boot from SAN Configuration FCoE Boot from SAN At the Select the file which is your driver disk image prompt, select the out-of-box driver (FastLinQ driver update disk), and then click OK. Figure 6-34 shows an example. Figure 6-34. Selecting the Driver Disk Image Load the FastLinQ driver update disk and then click Next to continue with the installation.(You can skip the media test.) At the What type of devices will your installation involve? prompt, select...
Configuring FCoE Boot from SAN on VMware For VMware ESXi 6.5/6.7 boot from SAN installation, Cavium requires that you use a customized ESXi ISO image that is built with the latest Cavium QLogic Converged Network Adapter bundle injected. This section covers the following VMware FCoE boot from SAN procedures.
Use the new DVD to install the ESXi OS. Installing the Customized ESXi ISO Load the latest Cavium QLogic FCOE boot images into the adapter NVRAM. Configure the FCOE target to allow a valid connection with the remote machine. Ensure that the target has sufficient free disk space to hold the new OS installation.
6–Boot from SAN Configuration FCoE Boot from SAN Follow the on-screen instructions. On the window that shows the list of disks available for the installation, the FCOE target disk should be visible because the injected Converged Network Adapter bundle is inside the customized ESXi ISO. Figure 6-37 shows an example.
6–Boot from SAN Configuration FCoE Boot from SAN In the example shown in Figure 6-38, the first two ports indicate Cavium QLogic adapters. Figure 6-38. VMware Generic USB Boot Options AH0054602-00 J...
RoCE Configuration This chapter describes RDMA over converged Ethernet (RoCE v1 and v2) configuration on the 41xxx Series Adapter, the Ethernet switch, and the Windows, Linux, or VMware host, including: Supported Operating Systems and OFED “Planning for RoCE” on page 132 “Preparing the Adapter”...
7–RoCE Configuration Planning for RoCE Table 7-1. OS Support for RoCE v1, RoCE v2, iWARP, iSER, and OFED (Continued) Operating System Inbox OFED-4.8-2 GA Windows Server 2016 Windows Server 2019 RoCE v1, RoCE v2, iWARP RHEL 6.9 RoCE v1, iWARP RHEL 6.10 RoCE v1, iWARP RHEL 7.5...
7–RoCE Configuration Preparing the Adapter OFED and RDMA applications that depend on libibverbs also require the QLogic RDMA user space library, libqedr. Install libqedr using the libqedr RPM or source packages. RoCE supports only little endian. RoCE does not work over a VF in an SR-IOV environment. ...
7–RoCE Configuration Preparing the Ethernet Switch Configuring the Cisco Nexus 6000 Ethernet Switch Steps for configuring the Cisco Nexus 6000 Ethernet Switch for RoCE include configuring class maps, configuring policy maps, applying the policy, and assigning a vLAN ID to the switch port. To configure the Cisco switch: Open a config terminal session as follows: Switch# config terminal...
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server switch(config)# service-policy type qos input roce switch(config)# service-policy type queuing output roce switch(config)# service-policy type queuing input roce switch(config)# service-policy type network-qos roce Assign a vLAN ID to the switch port to match the vLAN ID assigned to the adapter (5).
Table 7-2. Advanced Properties for RoCE (Continued)
Quality of Service: For RoCE v1/v2, always select Enabled to allow Windows DCB-QoS service to control and monitor DCB. For more information, see “Configuring QoS by Disabling DCBX on the Adapter”...
Using Windows PowerShell, verify that RDMA is enabled on the adapter. The Get-NetAdapterRdma command lists the adapters that support RDMA—both ports are enabled.
NOTE
If you are configuring RoCE over Hyper-V, do not assign a vLAN ID to the physical interface.
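A minimal PowerShell sketch of this check (the adapter name used with Enable-NetAdapterRdma is an assumed example):
PS C:\> Get-NetAdapterRdma | Format-Table Name, Enabled
PS C:\> Enable-NetAdapterRdma -Name "SLOT 2 Port 1"    (enable RDMA on a specific adapter if it is currently disabled)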
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Viewing RDMA Counters The following procedure also applies to iWARP. To view RDMA counters for RoCE: Launch Performance Monitor. Open the Add Counters dialog box. Figure 7-2 shows an example. Figure 7-2.
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server NOTE If Cavium RDMA counters are not listed in the Performance Monitor Add Counters dialog box, manually add them by issuing the following command from the driver location: Lodctr /M:qend.man Select one of the following counter types: ...
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Figure 7-3 shows three examples of the counter monitoring output. Figure 7-3. Performance Monitor: Cavium FastLinQ Counters Table 7-3 provides details about error counters. Table 7-3. Cavium FastLinQ RDMA Error Counters...
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Table 7-3. Cavium FastLinQ RDMA Error Counters (Continued) Applies Applies RDMA Error Description Troubleshooting Counter RoCE? iWARP? Posted work requests may be Occurs when the Requestor flushed by sending completions with...
7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Table 7-3. Cavium FastLinQ RDMA Error Counters (Continued) Applies Applies RDMA Error Description Troubleshooting Counter RoCE? iWARP? Remote side could not complete the A software issue at the Requestor operation requested due to a local...
7–RoCE Configuration Configuring RoCE on the Adapter for Linux Table 7-3. Cavium FastLinQ RDMA Error Counters (Continued) Applies Applies RDMA Error Description Troubleshooting Counter RoCE? iWARP? An internal QP consistency error Indicates a software Responder was detected while processing this issue.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux RoCE Configuration for RHEL To configure RoCE on the adapter, the Open Fabrics Enterprise Distribution (OFED) must be installed and configured on the RHEL host. To prepare inbox OFED for RHEL: While installing or upgrading the operating system, select the InfiniBand and OFED support packages.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux perftest-x.x.x.x86_64.rpm (required for bandwidth and latency applications) Install the Linux drivers, as described in “Installing the Linux Drivers with RDMA” on page Verifying the RoCE Configuration on Linux After installing OFED, installing the Linux driver, and loading the RoCE drivers, verify that the RoCE devices were detected on all Linux operating systems.
ib_ucm, ib_iser, ib_srpt, ib_umad, ib_uverbs, rdma_ucm, ib_ipoib, ib_isert
Configure the IP address and enable the port using a configuration method such as ifconfig. For example:
# ifconfig ethX 192.168.10.10/24 up
Issue the ibv_devinfo command. For each PCI function, you should see a separate hca_id, as shown in the following example:
7–RoCE Configuration Configuring RoCE on the Adapter for Linux The following are examples of successful ping pong tests on the server and the client. Server Ping: root@captain:~# ibv_rc_pingpong -d qedr0 -g 0 local address: LID 0x0000, QPN 0xff0000, PSN 0xb3e07e, GID fe80::20e:1eff:fe50:c7c0 remote address: LID 0x0000, QPN 0xff0000, PSN 0x934d28, GID fe80::20e:1eff:fe50:c570...
7–RoCE Configuration Configuring RoCE on the Adapter for Linux NOTE The default GID value is zero (0) for back-to-back or pause settings. For server and switch configurations, you must identify the proper GID value. If you are using a switch, refer to the corresponding switch configuration documents for the correct settings.
7–RoCE Configuration Configuring RoCE on the Adapter for Linux GID[ 3ffe:ffff:0000:0f21:0000:0000:0000:0004 GID[ 0000:0000:0000:0000:0000:ffff:c0a8:6403 GID[ 0000:0000:0000:0000:0000:ffff:c0a8:6403 Verifying the RoCE v1 or RoCE v2 GID Index and Address from sys and class Parameters Use one of the following options to verify the RoCE v1 or RoCE v2 GID Index and address from the sys and class parameters: Option 1: ...
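One common way to read this information (an illustrative sketch rather than the exact option text of this guide; the device name qedr0, port 1, and GID index 0 are assumed examples) is directly from sysfs:
# cat /sys/class/infiniband/qedr0/ports/1/gid_attrs/types/0    (reports IB/RoCE v1 or RoCE v2 for that index)
# cat /sys/class/infiniband/qedr0/ports/1/gids/0               (reports the GID address at that index)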
Verifying RoCE v2 Through Different Subnets NOTE You must first configure the route settings for the switch and servers. On the adapter, set the RoCE priority and DCBX mode using either the HII, UEFI user interface, or one of the Cavium management utilities. AH0054602-00 J...
7–RoCE Configuration Configuring RoCE on the Adapter for Linux To verify RoCE v2 through different subnets: Set the route configuration for the server and client using the DCBX-PFC configuration. System Settings: Server VLAN IP : 192.168.100.3 and Gateway :192.168.100.1 Client VLAN IP : 192.168.101.3 and Gateway :192.168.101.1 ...
7–RoCE Configuration Configuring RoCE on the Adapter for Linux Server Switch Settings: Figure 7-4. Switch Settings, Server Client Switch Settings: Figure 7-5. Switch Settings, Client AH0054602-00 J...
7–RoCE Configuration Configuring RoCE on the Adapter for Linux Configuring RoCE v1 or RoCE v2 Settings for RDMA_CM Applications To configure RoCE, use the following scripts from the FastLinQ source package: # ./show_rdma_cm_roce_ver.sh qedr0 is configured to IB/RoCE v1 qedr1 is configured to IB/RoCE v1 # ./config_rdma_cm_roce_ver.sh v2 configured rdma_cm for qedr0 to RoCE v2 configured rdma_cm for qedr1 to RoCE v2...
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Configuring RoCE on the Adapter for VMware ESX This section provides the following procedures and information for RoCE configuration: Configuring RDMA Interfaces Configuring MTU RoCE Mode and Statistics ...
To associate the QLogic NIC port to the vSwitch, issue the following command:
# esxcli network vswitch standard uplink add -u <uplink device> -v <roce vswitch>
For example:
# esxcli network vswitch standard uplink add -u vmnic0 -v roce_vs
To create a new port group on this vSwitch, issue the following command:
# esxcli network vswitch standard portgroup add -p roce_pg -v roce_vs
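The RDMA interface listing that follows is the kind of output produced by querying the host's RDMA devices; a minimal sketch of that query (command assumed here rather than quoted from this step):
# esxcli rdma device list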
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX vmrdma0 qedrntv Active 2048 25 Gbps vmnic0 QLogic FastLinQ QL45xxx RDMA Interface vmrdma1 qedrntv Active 1024 25 Gbps vmnic1 QLogic FastLinQ QL45xxx RDMA Interface RoCE Mode and Statistics For the RoCE mode, ESXi requires concurrent support of both RoCE v1 and v2. The decision regarding which RoCE mode to use is made during queue pair creation.
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Address handles allocated: 0 Memory windows allocated: 0 Configuring a Paravirtual RDMA Device (PVRDMA) To configure PVRDMA using a vCenter interface: Create and configure a new distributed virtual switch as follows: ®...
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Manage VMkernel network adapters. Accept the default, and then click Next. Migrate VM networking. Assign the port group created in Step Assign a vmknic for PVRDMA to use on ESX hosts: Right-click a host, and then click Settings.
7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Figure 7-10 shows an example. Figure 7-10. Setting the Firewall Rule Set up the VM for PVRDMA as follows: Install the following supported guest OS: RHEL 7.5 and 7.6 Install OFED-3.18.
7–RoCE Configuration Configuring DCQCN Configuring DCQCN Data Center Quantized Congestion Notification (DCQCN) is a feature that determines how an RoCE receiver notifies a transmitter that a switch between them has provided an explicit congestion notification (notification point), and how a transmitter reacts to such notification (reaction point). This section provides the following information about DCQCN configuration: ...
3 is used for the FCoE traffic group, and 4 is used for the iSCSI-TLV traffic group. You may encounter DCB mismatch issues if you attempt to reuse these numbers on networks that also support FCoE or iSCSI-TLV traffic. Cavium recommends that you use numbers 1–2 or 5–7 for RoCE-related traffic groups. ...
7–RoCE Configuration Configuring DCQCN DCB-related Parameters Use DCB to map priorities to traffic classes (priority groups). DCB also controls which priority groups are subject to PFC (lossless traffic), and the related bandwidth allocation (ETS). Global Settings on RDMA Traffic Global settings on RDMA traffic include configuration of vLAN priority, ECN, and DSCP.
dcqcn_cnp_vlan_priority: vLAN priority assigned to CNPs. Values range between 0..7. FCoE-Offload uses 3 and iSCSI-Offload-TLV generally uses 4. Cavium recommends that you specify a number from 1–2 or 5–7. Use this same value throughout the entire network.
7–RoCE Configuration Configuring DCQCN Table 7-4. DCQCN Algorithm Parameters (Continued) Parameter / Description and Values:
dcqcn_k_us: Alpha update interval.
dcqcn_timeout_us: DCQCN timeout.
MAC Statistics To view MAC statistics, including per-priority PFC statistics, issue the phy_mac_stat command. For example, to view statistics on port 1, issue the following command: ./debugfs.sh -n eth0 -d phy_mac_stat -P 1 Script Example...
DCQCN mode currently supports only up to 64 QPs. Cavium adapters can determine vLAN priority for PFC purposes from vLAN priority or from DSCP bits in the ToS field. However, in the presence of both, vLAN takes precedence.
iWARP Configuration Internet wide area RDMA protocol (iWARP) is a computer networking protocol that implements RDMA for efficient data transfer over IP networks. iWARP is designed for multiple environments, including LANs, storage networks, data center networks, and WANs. This chapter provides instructions for: ...
8–iWARP Configuration Configuring iWARP on Windows In the Warning - Saving Changes message box, click Yes to save the configuration. In the Success - Saving Changes message box, click OK. Repeat Step 2 through Step 7 to configure the NIC and iWARP for the other ports.
8–iWARP Configuration Configuring iWARP on Windows Using Windows PowerShell, verify that RDMA is enabled. The command output (Figure 8-1) shows the adapters Get-NetAdapterRdma that support RDMA. Figure 8-1. Windows PowerShell Command: Get-NetAdapterRdma Using Windows PowerShell, verify that is enabled. The NetworkDirect command output (Figure...
8–iWARP Configuration Configuring iWARP on Windows Figure 8-3 shows an example. Figure 8-3. Perfmon: Add Counters AH0054602-00 J...
If iWARP traffic is running, counters appear as shown in the Figure 8-4 example. Figure 8-4. Perfmon: Verifying iWARP Traffic NOTE For more information on how to view Cavium RDMA counters in Windows, see “Viewing RDMA Counters” on page 138. To verify the SMB connection:...
[fe80::71ea:bdd2:ae41:b95f%60]:445 NA Kernel 60 Listener 192.168.11.20:16159 192.168.11.10:445 Configuring iWARP on Linux Cavium 41xxx Series Adapters support iWARP on the Linux Open Fabric Enterprise Distributions (OFEDs) listed in Table 7-1 on page 131. iWARP configuration on a Linux system includes the following: ...
8–iWARP Configuration Configuring iWARP on Linux Use the following command syntax to change the RDMA protocol by loading the qed driver with a port interface PCI ID (xx:xx.x) and an RDMA protocol value (p): # modprobe -v qed rdma_protocol_map=<xx:xx.x-p> The RDMA protocol (p) values are as follows:
8–iWARP Configuration Configuring iWARP on Linux Issue the ibv_devinfo command, and then verify the transport type. If the command is successful, each PCI function will show a separate hca_id. For example (if checking the second port of the above dual-port adapter): [root@localhost ~]# ibv_devinfo -d qedr1
8–iWARP Configuration Configuring iWARP on Linux Running Perftest for iWARP All perftest tools are supported over the iWARP transport type. You must run the tools using the RDMA connection manager (with the option). Example: On one server, issue the following command (using the second port in this example): # ib_send_bw -d qedr1 -F -R On one client, issue the following command (using the second port in this...
8–iWARP Configuration Configuring iWARP on Linux NOTE For latency applications (send/write), if the perftest version is the latest (for example, ), use the supported perftest-3.0-0.21.g21dc344.x86_64.rpm inline size value: 0-128 Configuring NFS-RDMA NFS-RDMA for iWARP includes both server and client configuration steps. To configure the NFS server: Create an nfs-server directory and grant permission by issuing the following commands:...
8–iWARP Configuration Configuring iWARP on Linux To configure the NFS client: NOTE This procedure for NFS client configuration also applies to RoCE. Create an nfs-client directory and grant permission by issuing the following commands: # mkdir /tmp/nfs-client # chmod 777 /tmp/nfs-client Load the xprtrdma module as follows: # modprobe xprtrdma Mount the NFS file system as appropriate for your version:...
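The exact mount options differ slightly between distributions, but a commonly used form of the RDMA mount is shown below; the server address, export path, and NFS version are assumptions for this sketch:
# mount -t nfs -o vers=3,proto=rdma,port=20049 192.168.2.4:/tmp/nfs-server /tmp/nfs-client
# mount | grep /tmp/nfs-client
Port 20049 is the port conventionally used for NFS over RDMA.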
8–iWARP Configuration Configuring iWARP on Linux Install all OS-dependent packages/libraries as described in the RDMA-Core README. For CentOS, issue the following command: # yum install cmake gcc libnl3-devel libudev-devel make pkgconfig valgrind-devel For SLES 12 SP3 (ISO/SDK kit), install the following RPMs: cmake-3.5.2-18.3.x86_64.rpm (OS ISO) libnl-1_1-devel-1.1.4-4.21.x86_64.rpm (SDK ISO) (SDK ISO)
8–iWARP Configuration Configuring iWARP on Linux For example: # /usr/bin/rping -c -v -C 5 -a 192.168.22.3 (or) rping -c -v -C 5 -a 192.168.22.3 ping data: rdma-ping-0: ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqr ping data: rdma-ping-1: BCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrs ping data: rdma-ping-2: CDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrst ping data: rdma-ping-3: DEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstu ping data: rdma-ping-4: EFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuv client DISCONNECT EVENT...
iSER Configuration This chapter provides procedures for configuring iSCSI Extensions for RDMA (iSER) for Linux (RHEL and SLES) and VMware ESXi 6.7, including: Before You Begin “Configuring iSER for RHEL” on page 180 “Configuring iSER for SLES 12” on page 183 “Using iSER with iWARP on RHEL and SLES”...
9–iSER Configuration Configuring iSER for RHEL Configuring iSER for RHEL To configure iSER for RHEL: Install inbox OFED as described in “RoCE Configuration for RHEL” on page 144. NOTE Out-of-box OFEDs are not supported for iSER because the ib_isert module is not available in the out-of-box OFED 3.18-2 GA/3.18-3 GA versions.
9–iSER Configuration Configuring iSER for RHEL Figure 9-1 shows an example of a successful RDMA ping. Figure 9-1. RDMA Ping Successful You can use a Linux TCM-LIO target to test iSER. The setup is the same for any iSCSI target, except that you issue the command on the applicable portals.
9–iSER Configuration Configuring iSER for RHEL To change the transport mode to iSER, issue the iscsiadm command. For example: iscsiadm -m node -T iqn.2015-06.test.target1 -o update -n iface.transport_name -v iser To connect to or log in to the iSER target, issue the iscsiadm command.
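A typical login and a quick confirmation that the session is using the iSER transport look like the following sketch; the target IQN is the one shown in the example above:
# iscsiadm -m node -T iqn.2015-06.test.target1 -l
# iscsiadm -m session -P 2 | grep -i transport
After a successful login, the new iSCSI device should appear, as shown in Figure 9-4.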
9–iSER Configuration Configuring iSER for SLES 12 Figure 9-4. Checking for New iSCSI Device Configuring iSER for SLES 12 Because the targetcli is not inbox on SLES 12.x, you must complete the following procedure. To configure iSER for SLES 12: To install targetcli, copy and install the following RPMs from the ISO image (x86_64 and noarch location): lio-utils-4.1-14.6.x86_64.rpm...
9–iSER Configuration Using iSER with iWARP on RHEL and SLES Start the targetcli utility, and configure your targets on the iSER target system. NOTE targetcli versions are different in RHEL and SLES. Be sure to use the proper backstores to configure your targets: ...
9–iSER Configuration Using iSER with iWARP on RHEL and SLES Figure 9-5 shows the target configuration for LIO. Figure 9-5. LIO Target Configuration To configure an initiator for iWARP: To discover the iSER LIO target using port 3261, issue the iscsiadm command as follows: # iscsiadm -m discovery -t st -p 192.168.21.4:3261 -I iser...
9–iSER Configuration Optimizing Linux Performance Optimizing Linux Performance Consider the following Linux performance configuration enhancements described in this section. Configuring CPUs to Maximum Performance Mode Configuring Kernel sysctl Settings Configuring IRQ Affinity Settings Configuring Block Device Staging Configuring CPUs to Maximum Performance Mode Configure the CPU scaling governor to performance by using the following script to set all CPUs to maximum performance mode:...
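A typical form of such a script, assuming the cpufreq sysfs interface is available on the system, is the following sketch:
# for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > $g; done
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
Each scaling_governor file should read performance before you run benchmarks.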
9–iSER Configuration Configuring iSER on ESXi 6.7 Configuring IRQ Affinity Settings The following example sets CPU core 0, 1, 2, and 3 to interrupt request (IRQ) XX, YY, ZZ, and XYZ respectively. Perform these steps for each IRQ assigned to a port (default is eight queues per port).
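A minimal sketch of those steps is shown below; the IRQ numbers are the placeholders used above, and the grep pattern used to find the adapter's IRQs is illustrative. The bitmask values 1, 2, 4, and 8 select CPU cores 0, 1, 2, and 3 respectively:
# systemctl stop irqbalance
# grep -i qed /proc/interrupts
# echo 1 > /proc/irq/XX/smp_affinity
# echo 2 > /proc/irq/YY/smp_affinity
# echo 4 > /proc/irq/ZZ/smp_affinity
# echo 8 > /proc/irq/XYZ/smp_affinity
Stopping the irqbalance service first prevents it from overwriting the manual affinity settings.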
9–iSER Configuration Configuring iSER on ESXi 6.7 vmk0 Management Network IPv6 fe80::e2db:55ff:fe0c:5f94 e0:db:55:0c:5f:94 1500 65535 true STATIC, PREFERRED defaultTcpipStack The iSER target is configured to communicate with the iSER initiator. Configuring iSER for ESXi 6.7 To configure iSER for ESXi 6.7: Add iSER devices by issuing the following commands: esxcli rdma iser add esxcli iscsi adapter list...
9–iSER Configuration Configuring iSER on ESXi 6.7 esxcli iscsi networkportal add -A vmhba67 -n vmk1 esxcli iscsi networkportal list esxcli iscsi adapter get -A vmhba65 vmhba65 Name: iqn.1998-01.com.vmware:localhost.punelab.qlogic.com qlogic.org qlogic.com mv.qlogic.com:1846573170:65 Alias: iser-vmnic5 Vendor: VMware Model: VMware iSCSI over RDMA (iSER) Adapter Description: VMware iSCSI over RDMA (iSER) Adapter Serial Number: vmnic5 Hardware Version:...
9–iSER Configuration Configuring iSER on ESXi 6.7 Console Device: /vmfs/devices/cdrom/mpx.vmhba0:C0:T4:L0 Devfs Path: /vmfs/devices/cdrom/mpx.vmhba0:C0:T4:L0 Vendor: TSSTcorp Model: DVD-ROM SN-108BB Revis: D150 SCSI Level: 5 Is Pseudo: false Status: on Is RDM Capable: false Is Removable: true Is Local: true Is SSD: false Other Names: vml.0005000000766d686261303a343a30 VAAI Status: unsupported...
HBA. iSCSI offload increases network performance and throughput while helping to optimize server processor use. This section covers how to configure the Windows iSCSI offload feature for the Cavium 41xxx Series Adapters. With the proper iSCSI offload licensing, you can configure your iSCSI-capable 41xxx Series Adapter to offload iSCSI processing from the host processor.
After the IP address is configured for the iSCSI adapter, you must use Microsoft Initiator to configure and add a connection to the iSCSI target using the Cavium QLogic iSCSI adapter. For more details on Microsoft Initiator, see the Microsoft user guide.
10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-1. iSCSI Initiator Properties, Configuration Page In the iSCSI Initiator Name dialog box, type the new initiator IQN name, and then click OK. (Figure 10-2) Figure 10-2. iSCSI Initiator Node Name Change On the iSCSI Initiator Properties, click the Discovery tab.
10–iSCSI Configuration iSCSI Offload in Windows Server On the Discovery page (Figure 10-3) under Target portals, click Discover Portal. Figure 10-3. iSCSI Initiator—Discover Target Portal In the Discover Target Portal dialog box (Figure 10-4): In the IP address or DNS name box, type the IP address of the target. Click Advanced.
10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-4. Target Portal IP Address In the Advanced Settings dialog box (Figure 10-5), complete the following under Connect using: For Local adapter, select the QLogic <name or model> Adapter. For Initiator IP, select the adapter IP address. Click OK.
10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-5. Selecting the Initiator IP Address On the iSCSI Initiator Properties, Discovery page, click OK. AH0054602-00 J...
10–iSCSI Configuration iSCSI Offload in Windows Server Click the Targets tab, and then on the Targets page (Figure 10-6), click Connect. Figure 10-6. Connecting to the iSCSI Target AH0054602-00 J...
10–iSCSI Configuration iSCSI Offload in Windows Server On the Connect To Target dialog box (Figure 10-7), click Advanced. Figure 10-7. Connect To Target Dialog Box In the Local Adapter dialog box, select the QLogic <name or model> Adapter, and then click OK. Click OK again to close Microsoft Initiator.
Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019 support booting and installing in either the offload or non-offload paths. Cavium requires that you use a slipstream DVD with the latest Cavium QLogic drivers injected. See “Injecting (Slipstreaming) Adapter Drivers into Windows Image Files”...
41xxx Series Adapters. No additional configurations are required to configure iSCSI crash dump generation. iSCSI Offload in Linux Environments The Cavium QLogic FastLinQ 41xxx iSCSI software consists of a single kernel module called (qedi). The qedi module is dependent on additional parts qedi.ko...
Some key differences exist between qedi—the driver for the Cavium FastLinQ 41xxx Series Adapter (iSCSI)—and the previous QLogic iSCSI offload driver— bnx2i for the Cavium 8400 Series Adapters. Some of these differences include: qedi directly binds to a PCI function exposed by the CNA.
10–iSCSI Configuration iSCSI Offload in Linux Environments To verify that the iSCSI interfaces were detected properly, issue the following command. In this example, two iSCSI CNA devices are detected with SCSI host numbers 4 and 5. # dmesg | grep qedi [0000:00:00.0]:[qedi_init:3696]: QLogic iSCSI Offload Driver v8.15.6.0.
10–iSCSI Configuration iSCSI Offload in Linux Environments 192.168.25.100:3260,1 iqn.2003- 04.com.sanblaze:virtualun.virtualun.target-05000001 192.168.25.100:3260,1 iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000002 Log into the iSCSI target using the IQN obtained in Step 5. To initiate the login procedure, issue the following command (where the last character in the command is a lowercase letter “L”): #iscsiadm -m node -p 192.168.25.100 -T iqn.2003-04.com.sanblaze:virtualun.virtualun.target-0000007 -l Logging in to [iface: qedi.00:0e:1e:c4:e1:6c,...
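After the login completes, the session and the block devices it exposes can be confirmed with the following commands; this is a sketch, and the output will vary by target:
# iscsiadm -m session
# iscsiadm -m session -P 3 | grep -i "attached scsi"
# lsblk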
Chapter 6 Boot from SAN Configuration. Configuring Linux FCoE Offload The Cavium FastLinQ 41xxx Series Adapter FCoE software consists of a single kernel module called qedf.ko (qedf). The qedf module is dependent on additional parts of the Linux kernel for specific functionality: ...
No explicit configuration is required for qedf.ko. The driver automatically binds to the exposed FCoE functions of the CNA and begins discovery. This functionality is similar to the functionality and operation of the Cavium QLogic FC driver, qla2xx, as opposed to the older bnx2fc driver.
11–FCoE Configuration Configuring Linux FCoE Offload To load the qedf.ko kernel module, issue the following commands: # modprobe qed # modprobe libfcoe # modprobe qedf Verifying FCoE Devices in Linux Follow these steps to verify that the FCoE devices were detected correctly after installing and loading the qedf kernel module.
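Before checking for discovered LUNs, it can help to confirm that the modules loaded and that the FCoE hosts reached the Online state; this is a minimal sketch using the standard fc_host sysfs entries:
# lsmod | grep qedf
# cat /sys/class/fc_host/host*/port_state
# cat /sys/class/fc_host/host*/speed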
11–FCoE Configuration Configuring Linux FCoE Offload Check for discovered FCoE devices using the lsscsi lsblk -S commands. An example of each command follows. # lsscsi [0:2:0:0] disk DELL PERC H700 2.10 /dev/sda [2:0:0:0] cd/dvd TEAC DVD-ROM DV-28SW R.2A /dev/sr0 [151:0:0:0] disk P2000G3 FC/iSCSI T252 /dev/sdb...
SR-IOV Configuration Single root input/output virtualization (SR-IOV) is a specification by the PCI SIG that enables a single PCI Express (PCIe) device to appear as multiple, separate physical PCIe devices. SR-IOV permits isolation of PCIe resources for performance, interoperability, and manageability. NOTE Some SR-IOV features may not be fully enabled in the current release.
12–SR-IOV Configuration Configuring SR-IOV on Windows Figure 12-1. System Setup for SR-IOV: Integrated Devices On the Main Configuration Page for the selected adapter, click Device Level Configuration. On the Main Configuration Page - Device Level Configuration (Figure 12-2): Set the Virtualization Mode to SR-IOV, or NPAR+SR-IOV if you are using NPAR mode.
12–SR-IOV Configuration Configuring SR-IOV on Windows To enable SR-IOV on the miniport adapter: Access Device Manager. Open the miniport adapter properties, and then click the Advanced tab. On the Advanced properties page (Figure 12-3) under Property, select SR-IOV, and then set the value to Enabled. Click OK.
12–SR-IOV Configuration Configuring SR-IOV on Windows NOTE Be sure to enable SR-IOV when you create the vSwitch. This option is unavailable after the vSwitch is created. Figure 12-4. Virtual Switch Manager: Enabling SR-IOV The Apply Networking Changes message box advises you that Pending changes may disrupt network connectivity.
12–SR-IOV Configuration Configuring SR-IOV on Windows To get the virtual machine switch capability, issue the following Windows PowerShell command: PS C:\Users\Administrator> Get-VMSwitch -Name SR-IOV_vSwitch | fl Output of the command includes the following SR-IOV Get-VMSwitch capabilities: IovVirtualFunctionCount : 96 IovVirtualFunctionsInUse To create a virtual machine (VM) and export the virtual function (VF) in the Create a virtual machine.
Configuring SR-IOV on Windows Figure 12-5. Settings for VM: Enabling SR-IOV Install the Cavium QLogic drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for your host OS (do not use inbox drivers).
12–SR-IOV Configuration Configuring SR-IOV on Windows After installing the drivers, the Cavium QLogic adapter is listed in the VM. Figure 12-6 shows an example. Figure 12-6. Device Manager: VM with QLogic Adapter To view the SR-IOV VF details, issue the following Windows PowerShell command: PS C:\Users\Administrator>...
12–SR-IOV Configuration Configuring SR-IOV on Linux Configuring SR-IOV on Linux To configure SR-IOV on Linux: Access the server BIOS System Setup, and then click System BIOS Settings. On the System BIOS Settings page, click Integrated Devices. On the System Integrated Devices page (see Figure 12-1 on page 209): Set the SR-IOV Global Enable option to Enabled.
12–SR-IOV Configuration Configuring SR-IOV on Linux On the Device Settings page, select Port 1 for the Cavium QLogic adapter. On the Device Level Configuration page (Figure 12-9): Set the Virtualization Mode to SR-IOV. Click Back. Figure 12-9. System Setup for SR-IOV: Integrated Devices On the Main Configuration Page, click Finish, save your settings, and then reboot the system.
12–SR-IOV Configuration Configuring SR-IOV on Linux Figure 12-10. Editing the grub.conf File for SR-IOV Save the file and then reboot the system. grub.conf To verify that the changes are in effect, issue the following command: dmesg | grep iommu A successful input–output memory management unit (IOMMU) command output should show, for example: Intel-IOMMU: enabled To view VF details (number of VFs and total VFs), issue the following...
12–SR-IOV Configuration Configuring SR-IOV on Linux Review the command output (Figure 12-11) to confirm that actual VFs were created on bus 4, device 2 (from the 0000:00:02.0 parameter), functions 0 through 7. Note that the actual device ID is different on the PFs (8070 in this example) versus the VFs (8090 in this example).
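If you prefer to create or resize the VF pool from the command line rather than through the preboot menus, most current distributions also expose the SR-IOV VF counts through sysfs; the interface name p5p1 and the VF count below are assumptions for this sketch:
# cat /sys/class/net/p5p1/device/sriov_totalvfs
# echo 8 > /sys/class/net/p5p1/device/sriov_numvfs
# cat /sys/class/net/p5p1/device/sriov_numvfs
# lspci | grep -i QLogic
If a nonzero VF count is already configured, write 0 to sriov_numvfs before writing a new value.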
12–SR-IOV Configuration Configuring SR-IOV on Linux Ensure that the VF interface is up and running with the assigned MAC address. Power off the VM and attach the VF. (Some OSs support hot-plugging of VFs to the VM.) In the Virtual Machine dialog box (Figure 12-13), click Add Hardware.
12–SR-IOV Configuration Configuring SR-IOV on Linux Figure 12-14. Add New Virtual Hardware Power on the VM, and then issue the following command to check the attached devices: lspci -vv | grep -i ether Install the drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for your host OS (do not use inbox drivers).
12–SR-IOV Configuration Configuring SR-IOV on VMware To enable IOMMU for SR-IOV on SUSE 12.x: In the /etc/default/grub file, locate GRUB_CMDLINE_LINUX_DEFAULT, and then add the intel_iommu=on boot parameter. To update the grub configuration file, issue the following command: grub2-mkconfig -o /boot/grub2/grub.cfg Configuring SR-IOV on VMware To configure SR-IOV on VMware: Access the server BIOS System Setup, and then click System BIOS
12–SR-IOV Configuration Configuring SR-IOV on VMware [root@localhost:~] esxcfg-module -g qedentv qedentv enabled = 1 options = 'max_vfs=16,16' To verify whether actual VFs were created, issue the lspci command as follows: [root@localhost:~] lspci | grep -i QLogic | grep -i 'ethernet\|network' | more 0000:05:00.0 Network controller: QLogic Corp.
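If the module option still needs to be applied on a host (the esxcfg-module -g output above shows a host where it already is set), it can typically be configured with either of the following commands; the per-port VF counts are illustrative:
# esxcfg-module -s "max_vfs=16,16" qedentv
(or)
# esxcli system module parameters set -m qedentv -p "max_vfs=16,16"
Reboot the host for the new VF count to take effect.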
12-15) as follows: In the New Device box, select Network, and then click Add. For Adapter Type, select SR-IOV Passthrough. For Physical Function, select the Cavium QLogic VF. To save your configuration changes and close this dialog box, click AH0054602-00 J...
12–SR-IOV Configuration Configuring SR-IOV on VMware Figure 12-15. VMware Host Edit Settings To validate the VFs per port, issue the esxcli command as follows: [root@localhost:~] esxcli network sriovnic vf list -n vmnic6 VF ID Active PCI Address Owner World ID ----- ------ -----------
005:03.7 Install the Cavium QLogic drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for your host OS (do not use inbox drivers). The same driver version must be installed on the host and the...
NVMe-oF Configuration with RDMA Non-Volatile Memory Express over Fabrics (NVMe-oF) enables the use of alternate transports to PCIe to extend the distance over which an NVMe host device and an NVMe storage drive or subsystem can connect. NVMe-oF defines a common architecture that supports a range of storage networking fabrics for the NVMe block storage protocol over a storage networking fabric.
13–NVMe-oF Configuration with RDMA Figure 13-1 illustrates an example network. 41xxx Series Adapter 41xxx Series Adapter Figure 13-1. NVMe-oF Network The NVMe-oF configuration process covers the following procedures: Installing Device Drivers on Both Servers Configuring the Target Server ...
13–NVMe-oF Configuration with RDMA Installing Device Drivers on Both Servers Installing Device Drivers on Both Servers After installing your operating system (SLES 12 SP3), install device drivers on both servers. To upgrade the kernel to the latest Linux upstream kernel, go to: https://www.kernel.org/pub/linux/kernel/v4.x/ Install and load the latest FastLinQ drivers (qed, qede, libqedr/qedr) following all installation instructions in the README.
13–NVMe-oF Configuration with RDMA Configuring the Target Server Enable and start the RDMA service as follows: # systemctl enable rdma.service # systemctl start rdma.service Disregard the RDMA Service Failed error. All OFED modules required by qedr are already loaded.
13–NVMe-oF Configuration with RDMA Configuring the Target Server Table 13-1. Target Parameters (Continued)
Command: # echo -n /dev/nvme0n1 > namespaces/1/device_path
Description: Sets the NVMe device path. The NVMe device path can differ between systems. Check the device path using the lsblk command. This system has two NVMe devices: nvme0n1 and nvme1n1.
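Taken together, a minimal target setup over the nvmet configfs interface typically looks like the following sketch; it reuses the subsystem name, address, port, and namespace device shown in this chapter, but the exact sequence should be adapted to your environment:
# modprobe nvmet
# modprobe nvmet-rdma
# mkdir /sys/kernel/config/nvmet/subsystems/nvme-qlogic-tgt1
# echo 1 > /sys/kernel/config/nvmet/subsystems/nvme-qlogic-tgt1/attr_allow_any_host
# mkdir /sys/kernel/config/nvmet/subsystems/nvme-qlogic-tgt1/namespaces/1
# echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nvme-qlogic-tgt1/namespaces/1/device_path
# echo 1 > /sys/kernel/config/nvmet/subsystems/nvme-qlogic-tgt1/namespaces/1/enable
# mkdir /sys/kernel/config/nvmet/ports/1
# echo 1.1.1.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
# echo rdma > /sys/kernel/config/nvmet/ports/1/addr_trtype
# echo 1023 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
# echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
# ln -s /sys/kernel/config/nvmet/subsystems/nvme-qlogic-tgt1 /sys/kernel/config/nvmet/ports/1/subsystems/nvme-qlogic-tgt1
The symbolic link in the last step is what exposes the subsystem on the RDMA port; dmesg on the target typically reports that the port was enabled.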
13–NVMe-oF Configuration with RDMA Configuring the Initiator Server Configuring the Initiator Server You must configure the initiator server after the reboot process. After the server is operating, you cannot change the configuration without rebooting. If you are using a startup script to configure the initiator server, consider pausing the script (using the wait command or something similar) as needed to ensure that each command finishes before executing the next command.
13–NVMe-oF Configuration with RDMA Preconditioning the Target Server Connect to the discovered NVMe-oF target (nvme-qlogic-tgt1) using the NQN. Issue the following command after each server reboot. For example: # nvme connect -t rdma -n nvme-qlogic-tgt1 -a 1.1.1.1 -s 1023 Confirm the NVMe-oF target connection with the NVMe-oF device as follows: # dmesg | grep nvme
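Once connected, the NVMe-oF namespace appears as a regular NVMe block device on the initiator and can be listed or, when needed, detached; a brief sketch:
# nvme list
# lsblk | grep nvme
# nvme disconnect -n nvme-qlogic-tgt1
The disconnect command is useful when reconfiguring or rebooting the target, because it removes the remote namespace cleanly from the initiator.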
13–NVMe-oF Configuration with RDMA Testing the NVMe-oF Devices Testing the NVMe-oF Devices Compare the latency of the local NVMe device on the target server with that of the NVMe-oF device on the initiator server to show the latency that NVMe-oF adds to the system.
13–NVMe-oF Configuration with RDMA Optimizing Performance In this example, the target NVMe device latency is 8µsec. The total latency that results from the use of NVMe-oF is the difference between the initiator device NVMe-oF latency (30µsec) and the target device NVMe-oF latency (8µsec), or 22µsec.
13–NVMe-oF Configuration with RDMA Optimizing Performance Set the IRQ affinity for all 41xxx Series Adapters. The multi_rss-affin.sh file is a script that is listed in “IRQ Affinity (multi_rss-affin.sh)” on page 235. # systemctl stop irqbalance # ./multi_rss-affin.sh eth1 NOTE A different version of this script, qedr_affin.sh, is in the 41xxx Linux
13–NVMe-oF Configuration with RDMA Optimizing Performance
CPUID=$((CPUID*OFFSET))
for ((A=1; A<=${NUM_FP}; A=${A}+1)) ; do
INT=`grep -m $A $eth /proc/interrupts | tail -1 | cut -d ":" -f 1`
SMP=`echo $CPUID 16 o p | dc`
echo ${INT} smp affinity set to ${SMP}
echo $((${SMP})) >
13–NVMe-oF Configuration with RDMA Optimizing Performance NOTE The following commands apply only to the initiator server. # echo 0 > /sys/block/nvme0n1/queue/add_random # echo 2 > /sys/block/nvme0n1/queue/nomerges AH0054602-00 J...
Windows Server 2016 This chapter provides the following information for Windows Server 2016: Configuring RoCE Interfaces with Hyper-V “RoCE over Switch Embedded Teaming” on page 244 “Configuring QoS for RoCE” on page 245 “Configuring VMMQ” on page 254 ...
14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Creating a Hyper-V Virtual Switch with an RDMA NIC Follow the procedures in this section to create a Hyper-V virtual switch and then enable RDMA in the host VNIC. To create a Hyper-V virtual switch with an RDMA virtual NIC: On all physical interfaces, set the value of the NetworkDirect Functionality parameter to Enabled.
14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V On the Advanced page (Figure 14-2): Under Property, select Network Direct (RDMA). Under Value, select Enabled. Click OK. Figure 14-2. Hyper-V Virtual Ethernet Adapter Properties To enable RDMA, issue the following Windows PowerShell command: PS C:\Users\Administrator>...
14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Figure 14-3 shows the command output. Figure 14-3. Windows PowerShell Command: Get-VMNetworkAdapter To set the vLAN ID to the host virtual NIC, issue the following Windows PowerShell command: PS C:\Users\Administrator> Set-VMNetworkAdapterVlan -VMNetworkAdapterName "New Virtual Switch" -VlanId 5 -Access -ManagementOS NOTE Note the following about adding a vLAN ID to a host virtual NIC:...
14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Adding Host Virtual NICs (Virtual Ports) To add host virtual NICs: To add a host virtual NIC, issue the following command: Add-VMNetworkAdapter -SwitchName "New Virtual Switch" -Name SMB -ManagementOS Enable RDMA on host virtual NICs as shown in “To enable RDMA in a host virtual NIC:”
14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Figure 14-5. Add Counters Dialog Box If the RoCE traffic is running, counters appear as shown in Figure 14-6. Figure 14-6. Performance Monitor Shows RoCE Traffic AH0054602-00 J...
14–Windows Server 2016 RoCE over Switch Embedded Teaming RoCE over Switch Embedded Teaming Switch Embedded Teaming (SET) is Microsoft’s alternative NIC teaming solution available to use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016 Technical Preview. SET integrates limited NIC Teaming functionality into the Hyper-V Virtual Switch.
14–Windows Server 2016 Configuring QoS for RoCE Figure 14-8 shows command output. Figure 14-8. Windows PowerShell Command: Get-NetAdapter To enable RDMA on SET, issue the following Windows PowerShell command: PS C:\Users\Administrator> Enable-NetAdapterRdma "vEthernet (SET)" Assigning a vLAN ID on SET To assign a vLAN ID on SET: ...
14–Windows Server 2016 Configuring QoS for RoCE Configuring QoS by Disabling DCBX on the Adapter All configuration must be completed on all of the systems in use before configuring QoS by disabling DCBX on the adapter. The priority-based flow control (PFC), enhanced transition services (ETS), and traffic classes configuration must be the same on the switch and server.
14–Windows Server 2016 Configuring QoS for RoCE Figure 14-9. Advanced Properties: Enable QoS Assign the vLAN ID to the interface as follows: Open the miniport properties, and then click the Advanced tab. On the adapter properties’ Advanced page (Figure 14-10) under Property, select VLAN ID, and then set the value.
14–Windows Server 2016 Configuring QoS for RoCE Figure 14-10. Advanced Properties: Setting VLAN ID To enable PFC for RoCE on a specific priority, issue the following command: PS C:\Users\Administrators> Enable-NetQoSFlowControl -Priority 5 NOTE If configuring RoCE over Hyper-V, do not assign a vLAN ID to the physical interface.
14–Windows Server 2016 Configuring QoS for RoCE True Global False Global False Global To configure QoS and assign relevant priority to each type of traffic, issue the following commands (where Priority 5 is tagged for RoCE and Priority 0 is tagged for TCP): PS C:\Users\Administrators>...
14–Windows Server 2016 Configuring QoS for RoCE [Default] 1-4,6-7 Global RDMA class Global TCP class Global To see the network adapter QoS from the preceding configuration, issue the following Windows PowerShell command: PS C:\Users\Administrator> Get-NetAdapterQos Name : SLOT 4 Port 1 Enabled : True Capabilities...
14–Windows Server 2016 Configuring QoS for RoCE NOTE If the switch does not have a way of designating the RoCE traffic, you may need to set the RoCE Priority to the number used by the switch. ® Arista switches can do so, but some other switches cannot. To install the DCB role in the host, issue the following Windows PowerShell command: PS C:\Users\Administrators>...
14–Windows Server 2016 Configuring QoS for RoCE Figure 14-11. Advanced Properties: Enabling QoS Assign the vLAN ID to the interface (required for PFC) as follows: Open the miniport properties, and then click the Advanced tab. On the adapter properties’ Advanced page (Figure 14-12) under Property, select VLAN ID, and then set the value.
14–Windows Server 2016 Configuring QoS for RoCE Figure 14-12. Advanced Properties: Setting VLAN ID To configure the switch, issue the following Windows PowerShell command: PS C:\Users\Administrators> Get-NetAdapterQoS Name : Ethernet 5 Enabled : True Capabilities Hardware Current -------- ------- MacSecBypass : NotSupported NotSupported DcbxSupport : CEE...
14–Windows Server 2016 Configuring VMMQ NetDirect 445 RemoteTrafficClasses : TC TSA Bandwidth Priorities -- --- --------- ---------- 0 ETS 0-4,6-7 1 ETS RemoteFlowControl : Priority 5 Enabled RemoteClassifications : Protocol Port/Type Priority -------- --------- -------- NetDirect 445 NOTE The preceding example is taken when the adapter port is connected to an Arista 7060X switch.
14–Windows Server 2016 Configuring VMMQ Figure 14-13. Advanced Properties: Enabling Virtual Switch RSS Creating a Virtual Machine Switch with or Without SR-IOV To create a virtual machine switch with or without SR-IOV: Launch the Hyper-V Manager. Select Virtual Switch Manager (see Figure 14-14).
14–Windows Server 2016 Configuring VMMQ Figure 14-14. Virtual Switch Manager Click OK. Enabling VMMQ on the Virtual Machine Switch To enable VMMQ on the virtual machine switch: Issue the following Windows PowerShell command: PS C:\Users\Administrators> Set-VMSwitch -name q1 -defaultqueuevmmqenabled $true -defaultqueuevmmqqueuepairs 4 AH0054602-00 J...
14–Windows Server 2016 Configuring VMMQ Getting the Virtual Machine Switch Capability To get the virtual machine switch capability: Issue the following Windows PowerShell command: PS C:\Users\Administrator> Get-VMSwitch -Name ql | fl Figure 14-15 shows example output. Figure 14-15. Windows PowerShell Command: Get-VMSwitch Creating a VM and Enabling VMMQ on VMNetworkadapters in the VM To create a virtual machine (VM) and enable VMMQ on VMNetworksadapters...
PowerShell command: PS C:\Users\Administrator> get-netadapterstatistics | fl NOTE Cavium supports the new parameter added for Windows Server 2016 and Windows Server 2019 to configure the maximum quantity of queue pairs on a virtual port. For details, see “Max Queue Pairs (L2) Per VPort” on page 269.
14–Windows Server 2016 Configuring VXLAN Enabling VXLAN Offload on the Adapter To enable VXLAN offload on the adapter: Open the miniport properties, and then click the Advanced tab. On the adapter properties’ Advanced page (Figure 14-16) under Property, select VXLAN Encapsulated Task Offload. Figure 14-16.
14–Windows Server 2016 Configuring Storage Spaces Direct Configuring Storage Spaces Direct Windows Server 2016 introduces Storage Spaces Direct, which allows you to build highly available and scalable storage systems with local storage. For more information, refer to the following Microsoft TechNet link: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces /storage-spaces-direct-windows-server-2016 Configuring the Hardware...
14–Windows Server 2016 Configuring Storage Spaces Direct Deploying a Hyper-Converged System This section includes instructions to install and configure the components of a Hyper-Converged system using the Windows Server 2016. The act of deploying a Hyper-Converged system can be divided into the following three high-level phases: ...
14–Windows Server 2016 Configuring Storage Spaces Direct Example Dell switch configuration:
no ip address
mtu 9416
portmode hybrid
switchport
dcb-map roce_S2D
protocol lldp
dcbx version cee
no shutdown
Enable Network Quality of Service. NOTE Network Quality of Service is used to ensure that the Software Defined Storage system has enough bandwidth to communicate between the nodes to ensure resiliency and performance.
14–Windows Server 2016 Configuring Storage Spaces Direct To configure the host virtual NIC to use a vLAN, issue the following commands: Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_1" -VlanId 5 -Access -ManagementOS Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_2" -VlanId 5 -Access -ManagementOS NOTE These commands can be on the same or different vLANs. To verify that the vLAN ID is set, issue the following command: Get-VMNetworkAdapterVlan -ManagementOS To disable and enable each host virtual NIC adapter so that the vLAN...
14–Windows Server 2016 Configuring Storage Spaces Direct Step 1. Running a Cluster Validation Tool Run the cluster validation tool to make sure server nodes are configured correctly to create a cluster using Storage Spaces Direct. To validate a set of servers for use as a Storage Spaces Direct cluster, issue the following Windows PowerShell command: Test-Cluster -Node <MachineName1, MachineName2, MachineName3, MachineName4>...
14–Windows Server 2016 Configuring Storage Spaces Direct Step 6. Creating Virtual Disks If the Storage Spaces Direct was enabled, it creates a single pool using all of the disks. It also names the pool (for example S2D on Cluster1), with the name of the cluster that is specified in the name.
Windows Server 2019 This chapter provides the following information for Windows Server 2019: RSSv2 for Hyper-V “Windows Server 2019 Behaviors” on page 268 “New Adapter Properties” on page 269 RSSv2 for Hyper-V In Windows Server 2019, Microsoft added support for Receive Side Scaling version 2 (RSSv2) with Hyper-V (RSSv2 per vPort).
However, in Windows Server 2019, you will get 4 VNICs with VMMQ acceleration, each with 16 queue pairs, and 30 VNICs with no acceleration. Because of this functionality, Cavium introduced a new user property, Max Queue Pairs (L2) Per VPort. For more details, see New Adapter Properties.
Max Queue Pairs for Default vPort = value Max Queue Pairs for Non-Default vPort = value Network Direct Technology Cavium supports the new Network Direct Technology parameter that allows you to select the underlying RDMA technology that adheres to the following Microsoft specification: https://docs.microsoft.com/en-us/windows-hardware/drivers/network/inf-requirem ents-for-ndkpi This option replaces the RDMA Mode parameter.
15–Windows Server 2019 New Adapter Properties Virtualization Resources Table 15-1 lists the maximum quantities of virtualization resources in Windows 2019 for Dell 41xxx Series Adapters. Table 15-1. Windows 2019 Virtualization Resources for Dell 41xxx Series Adapters Two-port NIC-only Single Quantity Function Non-CNA Maximum VMQs Maximum VFs...
Windows PowerShell commands: Default-VPort: Set-VMSwitch -Name <vswitch name> -DefaultQueueVmmqEnabled:1 -DefaultQueueVmmqQueuePairs:<number> NOTE Cavium does not recommend that you disable VMMQ or decrease the quantity of queue pairs for the Default-VPort, because it may impact system performance. AH0054602-00 J...
15–Windows Server 2019 New Adapter Properties PF Non-Default VPort: For the host: Set-VMNetworkAdapter -ManagementOS -VmmqEnabled:1 -VmmqQueuePairs:<number> For the VM: Set-VMNetworkAdapter -VMName <vm name> -VmmqEnabled:1 -VmmqQueuePairs:<number> VF Non-Default VPort: Set-VMNetworkAdapter -VMName <vm name> -IovWeight:100 -IovQueuePairsRequested:<number> NOTE The default quantity of QPs assigned for a VF (IovQueuePairsRequested) is still 1.
Troubleshooting This chapter provides the following troubleshooting information: Troubleshooting Checklist “Verifying that Current Drivers Are Loaded” on page 274 “Testing Network Connectivity” on page 275 “Microsoft Virtualization with Hyper-V” on page 276 “Linux-specific Issues” on page 276 ...
16–Troubleshooting Verifying that Current Drivers Are Loaded Replace the failed adapter with one that is known to work properly. If the second adapter works in the slot where the first one failed, the original adapter is probably defective. Install the adapter in another functioning system, and then run the tests again.
In this example, the last entry identifies the driver that will be active upon reboot.
# dmesg | grep -i "Cavium" | grep -i "qede"
[   10.097526] QLogic FastLinQ 4xxxx Ethernet Driver qede x.x.x.x
[   23.093526] QLogic FastLinQ 4xxxx Ethernet Driver qede x.x.x.x
[   34.975396] QLogic FastLinQ 4xxxx Ethernet Driver qede x.x.x.x
16–Troubleshooting Microsoft Virtualization with Hyper-V Testing Network Connectivity for Linux To verify that the Ethernet interface is up and running: To check the status of the Ethernet interface, issue the ifconfig command. To check the statistics on the Ethernet interface, issue the netstat -i command.
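On distributions where ifconfig and netstat are no longer installed by default, the iproute2 and ethtool equivalents provide the same information; the interface name eth0 and the remote address are placeholders in this sketch:
# ip -s link show eth0
# ethtool eth0
# ping -c 4 -I eth0 <remote IP>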
16–Troubleshooting Collecting Debug Data Problem: In an ESXi environment, with the iSCSI driver (qedil) installed, sometimes, the VI-client cannot access the host. This is due to the termination of the hostd daemon, which affects connectivity with the VI-client. Solution: Contact VMware technical support. Collecting Debug Data Use the commands in Table 16-1...
Adapter LEDS Table A-1 lists the LED indicators for the state of the adapter port link and activity. Table A-1. Adapter Port Link and Activity LEDs Port LED LED Appearance Network State No link (cable disconnected) Link LED Continuously illuminated Link No port activity Activity LED...
Cables and Optical Modules This appendix provides the following information for the supported cables and optical modules: Supported Specifications “Tested Cables and Optical Modules” on page 280 “Tested Switches” on page 284 Supported Specifications The 41xxx Series Adapters support a variety of cables and optical modules that comply with SFF8024.
B–Cables and Optical Modules Tested Cables and Optical Modules Tested Cables and Optical Modules Cavium does not guarantee that every cable or optical module that satisfies the compliance requirements will operate with the 41xxx Series Adapters. Cavium has tested the components listed in...
B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Manufac- Cable Part Number Type Gauge Factor turer Length TCPM2 QSFP+40G-to-4xSFP+10G 40G Copper QSFP Dell 27GG5 QSFP+40G-to-4xSFP+10G Splitter (4 × 10G) P8T4W QSFP+40G-to-4xSFP+10G 8T47V SFP+ to 1G RJ...
B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Manufac- Cable Part Number Type Gauge Factor turer Length Optical Solutions AFBR-703SMZ SFP+ SR ® Avago AFBR-701SDZ SFP+ LR Y3KJN SFP+ SR 1G/10G WTRD1 SFP+ SR...
B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Manufac- Cable Part Number Type Gauge Factor turer Length 470-ABLV SFP+ AOC 470-ABLZ SFP+ AOC 470-ABLT SFP+ AOC 470-ABML SFP+ AOC 470-ABLU SFP+ AOC 470-ABMD SFP+ AOC...
B–Cables and Optical Modules Tested Switches Tested Switches Table B-2 lists the switches that have been tested for interoperability with the 41xxx Series Adapters. This list is based on switches that are available at the time of product release, and is subject to change over time as new switches enter the market or are discontinued.
Dell Z9100 Switch Configuration The 41xxx Series Adapters support connections with the Dell Z9100 Ethernet Switch. However, until the auto-negotiation process is standardized, the switch must be explicitly configured to connect to the adapter at 25Gbps. To configure a Dell Z9100 switch port to connect to the 41xxx Series Adapter at 25Gbps: Establish a serial port connection between your management workstation and the switch.
C–Dell Z9100 Switch Configuration Quad port mode with 25G speed Dell(conf)#stack-unit 1 port 5 portmode quad speed 25G For information about changing the adapter link speed, see “Testing Network Connectivity” on page 275. Verify that the port is operating at 25Gbps: Dell# Dell#show running-config | grep "port 5"...
PFs on the same port for storage in NPAR Mode. Concurrent RoCE and iWARP Is Not Supported on the Same Port RoCE and iWARP are not supported on the same port. HII and Cavium QLogic management tools do not allow users to configure both concurrently.
Glossary ACPI bandwidth The Advanced Configuration and Power A measure of the volume of data that can Interface (ACPI) specification provides an be transmitted at a specific transmission open standard for unified operating rate. A 1Gbps or 2Gbps Fibre Channel system-centric device configuration and port can transmit or receive at nominal power management.
User’s Guide—Converged Network Adapters 41xxx Series CHAP DCBX Challenge-handshake authentication Data center bridging exchange. A protocol protocol (CHAP) is used for remote logon, used by devices to exchange config- usually between a client and server or a uration information with directly connected Web browser and Web server.
User’s Guide—Converged Network Adapters 41xxx Series Energy-efficient Ethernet. A set of Enhanced transmission selection. A enhancements to the twisted-pair and standard that specifies the enhancement backplane Ethernet family of computer of transmission selection to support the networking standards that allows for less allocation of bandwidth among traffic power consumption during periods of low classes.
User’s Guide—Converged Network Adapters 41xxx Series File transfer protocol. A standard network Internet protocol. A method by which data protocol used to transfer files from one is sent from one computer to another over host to another host over a TCP-based the Internet.
User’s Guide—Converged Network Adapters 41xxx Series Layer 2 maximum transmission unit Refers to the data link layer of the multilay- See MTU. ered communication model, Open message signaled interrupts Systems Interconnection (OSI). The function of the data link layer is to move MSI, MSI-X.
User’s Guide—Converged Network Adapters 41xxx Series NPAR quality of service partitioning. The division of a single See QoS. NIC port into multiple physical functions or partitions, each with a user-configurable bandwidth and personality (interface type). Physical function. Personalities include NIC, FCoE, and RDMA iSCSI.
User’s Guide—Converged Network Adapters 41xxx Series SCSI A target is a device that responds to a requested by an initiator (the host system). Small computer system interface. A Peripherals are targets, but for some high-speed interface used to connect commands (for example, a SCSI COPY devices, such as hard drives, CD drives, command), the peripheral may act as an printers, and scanners, to a computer.
User’s Guide—Converged Network Adapters 41xxx Series virtual port User datagram protocol. A connectionless See vPort. transport protocol without any guarantee vLAN of packet sequence or delivery. It functions directly on top of IP. Virtual logical area network (LAN). A group of hosts with a common set of require- UEFI ments that communicate as if they were...