Cavium QL41112HLRJ-CK User Manual

Converged Network Adapters and Intelligent Ethernet Adapters
User's Guide
Converged Network Adapters and
Intelligent Ethernet Adapters
FastLinQ 41000 Series
AH0054601-00 B


Summary of Contents for Cavium QL41112HLRJ-CK

  • Page 1 User’s Guide Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series AH0054601-00 B...
  • Page 2 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Document Revision History Revision 01, February 7, 2017 Revision A, June 26, 2017 Revision B, November 21, 2017 Changes Sections Affected Updated EMI/EMC requirements: “EMI and EMC Requirements” on page xxii ...
  • Page 3: Table Of Contents

    Table of Contents Preface Supported Products ......... . . Intended Audience .
  • Page 4 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Hardware Installation System Requirements ......... Safety Precautions .
  • Page 5 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Upgrading Adapter Firmware on Windows Nano ..... Adapter Preboot Configuration Getting Started .
  • Page 6 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Configuring iWARP on Linux ........Installing the Driver .
  • Page 7 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Configuring DHCP iSCSI Boot for IPv6 ......DHCPv6 Option 16, Vendor Class Option .
  • Page 8 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series NVMe-oF Configuration with RDMA Installing Device Drivers on Both Servers ......Configuring the Target Server .
  • Page 9 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Configuring the Hardware ........Deploying a Hyper-Converged System .
  • Page 10 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Index AH0054601-00 B...
  • Page 11: List Of Figures

    User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series List of Figures Figure Page Setting Advanced Adapter Properties ........Power Management Options .
  • Page 12 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Port Level Configuration, Boot Mode ........Selecting iSCSI Boot Configuration .
  • Page 13 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series 12-3 Confirm NVMe-oF Connection ......... 12-4 FIO Utility Installation .
  • Page 14: List Of Tables

    User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series List of Tables Table Page Host Hardware Requirements ......... Minimum Host Operating System Requirements .
  • Page 15: Preface

    Supported Products This user’s guide describes the following Cavium® FastLinQ products:  10Gb Intelligent Ethernet Adapter: QL41112HLCU-CK/SP/BK ...
  • Page 16 Preface What Is in This Guide  Chapter 5 Adapter Preboot Configuration describes the preboot adapter configuration tasks using the Human Interface Infrastructure (HII) application.  Chapter 6 RoCE Configuration describes how to configure the adapter, the Ethernet switch, and the host to use RDMA over converged Ethernet (RoCE).
  • Page 17: Related Materials

    Preface Related Materials Related Materials For additional information, refer to the following documents that are available on the Downloads and Documentation page of the QLogic Web site: http://driverdownloads.qlogic.com  Installation Guide—QConvergeConsole GUI (part number SN0051105-00) contains detailed information on how to install and use the QConvergeConsole GUI management tool.
  • Page 18  “Installation Checklist” on page  For more information, visit www.cavium.com.  Text in bold font indicates user interface elements such as menu items, buttons, check boxes, or column headings. For example: ...
  • Page 19: License Agreements

    Preface License Agreements  (angle brackets) indicate a variable whose value you must < > specify. For example:  <serial_number> NOTE For CLI commands only, variable names are always indicated using angle brackets instead of italics.  (square brackets) indicate an optional parameter. For example: ...
  • Page 20: Technical Support

    Preface Technical Support Technical Support Customers should contact their authorized maintenance provider for technical support of their QLogic products. QLogic-direct customers may contact Technical Support; others will be redirected to their authorized maintenance provider. Visit the Support Web site listed in Contact Information for the latest firmware and software updates.
  • Page 21: Contact Information

    Preface Legal Notices Contact Information Technical Support for products under warranty is available during local standard working hours excluding Cavium Observed Holidays. For Support phone numbers, see the Contact Support link: support.qlogic.com Support Headquarters Cavium, Inc. 12900 Whitewater Drive Suite 140...
  • Page 22: Laser Safety-Fda Notice

    Preface Legal Notices Laser Safety—FDA Notice This product complies with DHHS Rules 21CFR Chapter I, Subchapter J. This product has been designed and manufactured according to IEC60825-1, as stated on the safety label of the laser product. CLASS I LASER Class 1 Caution—Class 1 laser radiation when open Laser Product Do not view directly with optical instruments Appareil laser...
  • Page 23: Kcc: Class A

    Preface Legal Notices Immunity Standards EN61000-4-2 : ESD EN61000-4-3 : RF Electro Magnetic Field EN61000-4-4 : Fast Transient/Burst EN61000-4-5 : Fast Surge Common/ Differential EN61000-4-6 : RF Conducted Susceptibility EN61000-4-8 : Power Frequency Magnetic Field EN61000-4-11 : Voltage Dips and Interrupt VCCI: 2015-04;...
  • Page 24: Product Safety Compliance

    Preface Legal Notices Product Safety Compliance UL, cUL product safety: UL 60950-1 (2nd Edition) A1 + A2 2014-10-14 CSA C22.2 No.60950-1-07 (2nd Edition) A1 +A2 2014-10 Use only with listed ITE or equivalent. Complies with 21 CFR 1040.10 and 1040.11, 2014/30/EU, 2014/35/EU. 2006/95/EC low voltage directive: TUV EN60950-1:2006+A11+A1+A12+A2 2nd Edition TUV IEC 60950-1: 2005 2nd Edition Am1: 2009 + Am2: 2013 CB...
  • Page 25: Product Overview

    Product Overview This chapter provides the following information for the 41000 Series Adapters:  Functional Description  Features  Adapter Management  Adapter Specifications Functional Description The QLogic FastLinQ 41000 Series Adapters include 10 and 25Gb Converged Network Adapters and Intelligent Ethernet Adapters that are designed to perform accelerated data networking for server systems.
  • Page 26 1–Product Overview Features  Data center bridging (DCB):  Enhanced transmission selection (ETS; IEEE 802.1Qaz)  Priority-based flow control (PFC; IEEE 802.1Qbb)  Data center bridging eXchange protocol (DCBX; CEE version 1.01, IEEE)  Single-chip solution:  10/25Gb MAC  SerDes interface for direct attach copper (DAC) transceiver connection ...
  • Page 27: Adapter Management

    1–Product Overview Adapter Management  Advanced network features:  Jumbo frames (up to 9,600 bytes). The OS and the link partner must support jumbo frames.  Virtual LANs (VLAN)  Flow control (IEEE Std 802.3x)  Logical link control (IEEE Std 802.2) ...
  • Page 28: Qlogic Control Suite Cli

    1–Product Overview Adapter Management QLogic Control Suite CLI QLogic Control Suite (QCS) CLI is a console application that you can run from a Windows command prompt or a Linux terminal console. Use QCS CLI to manage QLogic FastLinQ 3400/8400/41000/45000 Series Adapters and any QLogic adapter based on 57xx/57xxx controllers on both local and remote computer systems.
  • Page 29: Qconvergeconsole Powerkit

    1–Product Overview Adapter Specifications QConvergeConsole PowerKit The QConvergeConsole PowerKit lets you manage QLogic FastLinQ 3400/8400/41000/45000 Series Adapters on the system using QLogic cmdlets in the Windows PowerShell® application. Windows PowerShell is a Microsoft-developed scripting language for performing task automation and configuration management both locally and remotely.
  • Page 30: Standards Specifications

    1–Product Overview Adapter Specifications Standards Specifications Supported standards specifications include:  PCI Express Base Specification, rev. 3.0  PCI Express Card Electromechanical Specification, rev. 3.0  PCI Bus Power Management Interface Specification, rev. 1.2  IEEE Specifications:  802.3-2012 IEEE Standard for Ethernet (flow control) ...
  • Page 31: Hardware Installation

    Hardware Installation This chapter provides the following hardware installation information:  System Requirements  Safety Precautions  Preinstallation Checklist  Installing the Adapter AH0054601-00 B...
  • Page 32: System Requirements

    2–Hardware Installation System Requirements System Requirements Before you install a QLogic 41000 Series Adapter, verify that your system meets the hardware and operating system requirements shown in Table 2-1 and Table 2-2. For a complete list of supported operating systems, visit the QLogic Downloads and Documentation page: driverdownloads.qlogic.com Table 2-1.
  • Page 33: Safety Precautions

    2–Hardware Installation Safety Precautions NOTE Table 2-2 denotes minimum host OS requirements. For a complete list of supported operating systems, visit the QLogic Downloads and Documentation page: driverdownloads.qlogic.com Safety Precautions WARNING The adapter is being installed in a system that operates with voltages that can be lethal.
  • Page 34: Installing The Adapter

    2–Hardware Installation Installing the Adapter Remove the adapter from its shipping package and place it on an anti-static surface. Check the adapter for visible signs of damage, particularly on the edge connector. Never attempt to install a damaged adapter. Installing the Adapter The following instructions apply to installing the QLogic 41000 Series Adapters in most systems.
  • Page 35: Driver Installation

    Driver Installation This chapter provides the following information about driver installation:  Installing Linux Driver Software  “Installing Windows Driver Software” on page 19  “Installing VMware Driver Software” on page 25 Installing Linux Driver Software This section describes how to install Linux drivers with or without RDMA and iWARP.
  • Page 36 3–Driver Installation Installing Linux Driver Software Table 3-1. QLogic 41000 Series Adapters Linux Drivers (Continued) Linux Driver / Description: qede: Linux Ethernet driver for the 41000 Series Adapter. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the Linux host networking stack.
  • Page 37: Installing The Linux Drivers Without Rdma

    3–Driver Installation Installing Linux Driver Software The following source code TAR BZip2 (BZ2) compressed file installs Linux drivers on RHEL and SLES hosts:  fastlinq-<version>.tar.bz2 NOTE For network installations through NFS, FTP, or HTTP (using a network boot disk), a driver disk that contains the qede driver may be needed. Linux boot drivers can be compiled by modifying the makefile and the make environment.
  • Page 38 3–Driver Installation Installing Linux Driver Software To remove Linux drivers in a non-RDMA environment, unload and remove the drivers: Follow the procedure that relates to the original installation method and the OS.  If the Linux drivers were installed using an RPM package, issue the following commands: rmmod qede rmmod qed...
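    As a minimal removal sketch for an RPM-based install (not quoted from the guide; the package name qlgc-fastlinq is an assumption and should be confirmed with rpm -qa | grep -i fastlinq):
        rmmod qede               # unload the Ethernet driver
        rmmod qed                # unload the core driver
        rpm -e qlgc-fastlinq     # remove the driver package (name is illustrative)
        depmod -a                # refresh module dependency information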
  • Page 39: Installing Linux Drivers Using The Src Rpm Package

    3–Driver Installation Installing Linux Driver Software  If the drivers were installed using a TAR file, issue the following commands for your operating system: For RHEL and CentOS: cd /lib/modules/<version>/extra/qlgc-fastlinq rm -rf qed.ko qede.ko qedr.ko For SLES: cd /lib/modules/<version>/updates/qlgc-fastlinq rm -rf qed.ko qede.ko qedr.ko Installing Linux Drivers Using the src RPM Package To install Linux drivers using the src RPM package: Issue the following at a command prompt:...
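    The src RPM flow above can be sketched roughly as follows; the exact rpmbuild tree, spec file name, and resulting package name vary by distribution and release and are assumptions here:
        rpm -ivh fastlinq-<version>.src.rpm          # unpack the sources into the rpmbuild tree
        cd ~/rpmbuild                                # or /usr/src/packages on SLES
        rpmbuild -bb SPECS/fastlinq-<version>.spec   # build the binary RPM (spec name is illustrative)
        rpm -ivh RPMS/x86_64/qlgc-fastlinq-<version>.x86_64.rpm
        depmod -a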
  • Page 40: Installing Linux Drivers Using The Kmp/Kmod Rpm Package

    3–Driver Installation Installing Linux Driver Software Turn on all ethX interfaces as follows: ifconfig <ethX> up For SLES, use YaST to configure the Ethernet interfaces to automatically start at boot by setting a static IP address or enabling DHCP on the interface.
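    For reference, the same interface bring-up can also be done with the iproute2 tools, which many current distributions prefer over ifconfig (the interface name eth0 and the address are illustrative):
        ip link set dev eth0 up                  # bring the interface up
        ip addr add 192.168.10.10/24 dev eth0    # optionally assign a static address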
  • Page 41: Installing The Linux Drivers With Rdma

    3–Driver Installation Installing Linux Driver Software Test the drivers by loading them (unload the existing drivers first, if necessary): rmmod qede rmmod qed modprobe qed modprobe qede Installing the Linux Drivers with RDMA For information on iWARP, see Chapter 7 iWARP Configuration.
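    A quick sanity check after the test load above might look like the following (a sketch, not from the guide):
        lsmod | grep -E '^qed'     # qed and qede should be listed
        dmesg | grep -i qed        # driver probe and link messages
        ip link show               # the adapter's ethX interfaces should appear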
  • Page 42: Linux Driver Optional Parameters

    3–Driver Installation Installing Linux Driver Software To build and install the libqedr user space library, issue the following command: 'make libqedr_install' Test the drivers by loading them as follows: modprobe qed modprobe qede modprobe qedr Linux Driver Optional Parameters Table 3-2 describes the optional parameters for the qede driver.
  • Page 43: Linux Driver Messages

    3–Driver Installation Installing Windows Driver Software Table 3-3. Linux Driver Operation Defaults (Continued) Operation (qed Driver Default / qede Driver Default): Flow Control — / Auto-negotiation with RX and TX advertised; MTU — / 1500 (range is 46–9600); Rx Ring Size — / 1000; Tx Ring Size — / 4078 (range is 128–8191); ...
  • Page 44: Installing The Windows Drivers

    3–Driver Installation Installing Windows Driver Software Installing the Windows Drivers NOTE  There is no separate procedure to install RoCE-supported drivers in Windows.  For information on building the Windows Nano virtual hard disk, go to: https://technet.microsoft.com/en-us/windows-server-docs/compute/nano-server/getting-started-with-nano-server To install the Windows drivers: Download the Windows device drivers for the 41000 Series Adapter from
  • Page 45: Managing Adapter Properties

    3–Driver Installation Installing Windows Driver Software Managing Adapter Properties To view or change the 41000 Series Adapter properties: In the Control Panel, click Device Manager. On the properties of the selected adapter, click the Advanced tab. On the Advanced page (Figure 3-1), select an item under Property and then change the Value for that item as needed.
  • Page 46: Setting Power Management Options

    3–Driver Installation Installing Windows Driver Software Setting Power Management Options You can set power management options to allow the operating system to turn off the controller to save power or to allow the controller to wake up the computer. If the device is busy (servicing a call, for example), the operating system will not shut down the device.
  • Page 47: Creating A Nano Iso Image, Injecting Drivers, And Updating The Multiboot/Flash Image On A Nano Server

    3–Driver Installation Installing Windows Driver Software Table 3-4 lists some of the Windows drivers. Table 3-4. Windows Drivers Windows Driver Description QeVBD Core driver QeND Ethernet networking driver QeOIS iSCSI-Offload driver QeFCoE FCoE-Offload driver QxDiag Diagnostics driver Delete the oem0.inf package by issuing the following command: ...
  • Page 48 3–Driver Installation Installing Windows Driver Software Place the extracted individual files in a temporary folder. NOTE You will use the QLogic drivers files during Nano Server image creation. For more information on a specific command, refer to the Injecting Drivers section in the Microsoft link noted in Step To use Microsoft's pnputil tool to upgrade or install the QLogic drivers: Copy the QLogic driver files to the Nano Server.
  • Page 49: Installing Vmware Driver Software

    3–Driver Installation Installing VMware Driver Software Installing VMware Driver Software This section describes the qedentv VMware ESXi driver for the 41000 Series Adapters:  VMware Drivers and Driver Packages  Installing VMware Drivers  VMware Driver Optional Parameters  VMware Driver Parameter Defaults ...
  • Page 50: Installing Vmware Drivers

    3–Driver Installation Installing VMware Driver Software Table 3-6. ESXi Driver Packages by Release (Continued) ESXi Release Protocol Driver Name Driver Version ESXi 6.0u3 qedentv 2.0.7.5 FCoE qedf 1.2.24.0 iSCSI qedil 1.0.19.0 Install individual drivers using either:  Standard ESXi package installation commands (see Installing VMware Drivers) ...
  • Page 51 3–Driver Installation Installing VMware Driver Software You can place the file anywhere that is accessible to the ESX console shell. NOTE If you do not have a Linux machine, you can use the vSphere datastore file browser to upload the files to the server. Place the host in maintenance mode by issuing the following command: #esxcli --maintenance-mode Select one of the following installation options:
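    As a hedged example of the two installation options (the file paths are illustrative, and the host should already be in maintenance mode as noted above):
        esxcli software vib install -v /vmfs/volumes/datastore1/qedentv-<version>.vib            # individual VIB
        esxcli software vib install -d /vmfs/volumes/datastore1/qedentv-bundle-<version>.zip     # offline bundle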
  • Page 52: Vmware Driver Optional Parameters

    3–Driver Installation Installing VMware Driver Software VMware Driver Optional Parameters Table 3-7 describes the optional parameters that can be supplied as command line arguments to the esxcfg-module command. Table 3-7. VMware Driver Optional Parameters Parameter Description: hw_vlan: Globally enables (1) or disables (0) hardware VLAN insertion and removal. Disable this parameter when the upper layer needs to send or receive fully formed packets.
  • Page 53: Vmware Driver Parameter Defaults

    3–Driver Installation Installing VMware Driver Software Table 3-7. VMware Driver Optional Parameters (Continued) Parameter Description Enables (1) or disables (0) the driver automatic firmware recovery capability. auto_fw_reset When this parameter is enabled, the driver attempts to recover from events such as transmit timeouts, firmware asserts, and adapter parity errors. The default is auto_fw_reset=1.
  • Page 54: Removing The Vmware Driver

    3–Driver Installation Installing VMware Driver Software Table 3-8. VMware Driver Parameter Defaults (Continued) Parameter Default: Number of Queues: Numbered value; Wake on LAN (WoL): Disabled. Removing the VMware Driver To remove the .vib file (qedentv), issue the following command: # esxcli software vib remove --vibname qedentv To remove the driver, issue the following command: # vmkload_mod -u qedentv FCoE Support...
  • Page 55: Iscsi Support

    3–Driver Installation Installing VMware Driver Software iSCSI Support Table 3-10 describes the iSCSI driver. Table 3-10. QLogic 41000 Series Adapter iSCSI Driver Driver Description qedil The qedil driver is the QLogic VMware iSCSI HBA driver. Similar to qedf, qedil is a kernel mode driver that provides a translation layer between the VMware SCSI stack and the QLogic iSCSI firmware and hardware.
  • Page 56: Firmware Upgrade Utility

    Firmware Upgrade Utility QLogic provides scripts to automate the adapter firmware and boot code upgrade process for Windows and Linux systems. Each script identifies all 41000 Series Adapters and upgrades all firmware components. To upgrade adapter firmware on VMware systems, see the User’s Guide—FastLinQ ESXCLI VMware Plug-in or the User’s Guide—QConvergeConsole Plug-ins for vSphere.
  • Page 57: Upgrading Adapter Firmware On Linux

    4–Firmware Upgrade Utility Upgrading Adapter Firmware on Linux When the MFW requires verification, the hash is calculated on the MFW. The hash is compared with the signature after running the opposite algorithm using the equivalent public-key.  Signature = Digital-Signature-Algorithm (Private-Key, HASH) ...
  • Page 58: Upgrading Adapter Firmware On Windows Nano

    4–Firmware Upgrade Utility Upgrading Adapter Firmware on Windows Nano Upgrading Adapter Firmware on Windows Nano To upgrade adapter firmware on a Windows Nano system: Install the Windows eVBD or qeVBD (as applicable) driver. Download the Firmware Upgrade Utility for Windows from QLogic: driverdownloads.qlogic.com Unzip the Firmware Upgrade Utility on the system where the adapter is installed.
  • Page 59: Adapter Preboot Configuration

    Adapter Preboot Configuration During the host boot process, you have the opportunity to pause and perform adapter management tasks using the Human Interface Infrastructure (HII) application. These tasks include the following:  Displaying Firmware Image Properties  Configuring Device-level Parameters ...
  • Page 60: Getting Started

    5–Adapter Preboot Configuration Getting Started Getting Started To start the HII application: Open the System Setup window for your platform. For information about launching the System Setup, consult the user guide for your system. In the System Setup window (Figure 5-1), select Device Settings, and then press ENTER.
  • Page 61 5–Adapter Preboot Configuration Getting Started The Main Configuration Page presents the adapter management options where you can set the partitioning mode.  If you are not using NPAR, set the Partitioning Mode to Default, as shown in Figure 5-3. Figure 5-3. Main Configuration Page, Setting Default Partitioning Mode AH0054601-00 B...
  • Page 62 5–Adapter Preboot Configuration Getting Started  Setting the Partitioning Mode to NPAR adds the Partitions Configuration option to the Main Configuration Page, as shown in Figure 5-4. Figure 5-4. Main Configuration Page, Setting NPAR Partitioning Mode Figure 5-3 and , the Main Configuration Page shows the following: ...
  • Page 63 5–Adapter Preboot Configuration Getting Started Table 5-1. Adapter Properties (Continued) Adapter Property Description PCI Address PCI device address in bus-device function format Link Status External link status Permanent MAC Address Manufacturer-assigned permanent device MAC address Virtual MAC Address User-defined device MAC address iSCSI MAC Address Manufacturer-assigned permanent device iSCSI Offload MAC address...
  • Page 64: Displaying Firmware Image Properties

    5–Adapter Preboot Configuration Displaying Firmware Image Properties Displaying Firmware Image Properties To view the properties for the firmware image, select Firmware Image Properties on the Main Configuration Page, and then press ENTER. The Firmware Information page (Figure 5-5) specifies the following view-only data: ...
  • Page 65: Configuring Device-Level Parameters

    5–Adapter Preboot Configuration Configuring Device-level Parameters Configuring Device-level Parameters NOTE The iSCSI physical functions (PFs) are listed when the iSCSI Offload feature is enabled. The FCoE PFs are listed when the FCoE Offload feature is enabled. Not all adapter models support iSCSI Offload and FCoE Offload. Device-level configuration includes the following parameters: ...
  • Page 66: Configuring Port-Level Parameters

    5–Adapter Preboot Configuration Configuring Port-level Parameters Configuring Port-level Parameters Port-level configuration comprises the following parameters:  Link Speed  Boot Mode  Link Speed  DCBX Protocol  RoCE Priority  iSCSI Offload  FCoE Offload  PXE VLAN Mode ...
  • Page 67 The IEEE standards do not provide a standards-based method to auto-negotiate between a 10G switch and a 25G adapter or between DACs and optics. Cavium’s SmartAN provides an automatic and convenient method to detect the switch and to determine and set the link speed, FEC types, media type and length, and so on.
  • Page 68 5–Adapter Preboot Configuration Configuring Port-level Parameters If you selected a Link Speed of 10 Gbps or 25 Gbps in Step 2, the FEC Mode control appears as shown in Figure 5-7. Select one of the following fixed-speed FEC Mode values: ...
  • Page 69 5–Adapter Preboot Configuration Configuring Port-level Parameters For DCBX Protocol support, select one of the following options:  Dynamic automatically determines the DCBX type currently in use on the attached switch.  IEEE uses only IEEE DCBX protocol.  CEE uses only CEE DCBX protocol. ...
  • Page 70: Configuring Fcoe Boot

    5–Adapter Preboot Configuration Configuring FCoE Boot Configuring FCoE Boot FCoE general parameters include the following:  FIP VLAN ID is usually set to 0, but if the FIP VLAN ID is known beforehand, you can set the value here. If a non-zero value is used, FIP VLAN discovery is not performed.
  • Page 71 5–Adapter Preboot Configuration Configuring FCoE Boot Choose values for the FCoE General or FCoE Target Configuration parameters. Figure 5-9. FCoE General Parameters Figure 5-10. FCoE Target Configuration Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset.
  • Page 72: Configuring Iscsi Boot

    5–Adapter Preboot Configuration Configuring iSCSI Boot Configuring iSCSI Boot To configure the iSCSI boot configuration parameters: On the Main Configuration Page, select iSCSI Boot Configuration Menu, and then select one of the following options:  iSCSI General Configuration  iSCSI Initiator Configuration ...
  • Page 73 5–Adapter Preboot Configuration Configuring iSCSI Boot  iSCSI Second Target Configuration (Figure 5-14 on page  Connect  IPv4 Address  TCP Port  Boot LUN  iSCSI Name  CHAP ID  CHAP Secret Click Back. AH0054601-00 B...
  • Page 74 5–Adapter Preboot Configuration Configuring iSCSI Boot When prompted, click Yes to save the changes. Changes take effect after a system reset. Figure 5-11. iSCSI General Configuration Figure 5-12. iSCSI Initiator Configuration Figure 5-13. iSCSI First Target Configuration AH0054601-00 B...
  • Page 75: Configuring Partitions

    5–Adapter Preboot Configuration Configuring Partitions Figure 5-14. iSCSI Second Target Configuration Configuring Partitions You can configure bandwidth ranges for each partition on the adapter. To configure the maximum and minimum bandwidth allocations: On the Main Configuration Page, select Partitions Configuration, and then press ENTER.
  • Page 76 5–Adapter Preboot Configuration Configuring Partitions Figure 5-16 shows the page when NPAR mode is enabled with FCoE Offload and iSCSI Offload enabled. Figure 5-16. Partitions Configuration Page (with FCoE Offload and iSCSI Offload) AH0054601-00 B...
  • Page 77 5–Adapter Preboot Configuration Configuring Partitions On the Global Bandwidth Allocation page (Figure 5-17), click each partition minimum and maximum TX bandwidth field for which you want to allocate bandwidth. There are eight partitions per port in dual-port mode. Figure 5-17. Global Bandwidth Allocation Page ...
  • Page 78 5–Adapter Preboot Configuration Configuring Partitions When prompted, click Yes to save the changes. Changes take effect after a system reset. To configure partitions: To examine a specific partition configuration, on the Partitions Configuration page (Figure 5-15 on page 51), select Partition n Configuration. To configure the first partition, select Partition 1 Configuration to open the Partition 1 Configuration page (Figure...
  • Page 79 5–Adapter Preboot Configuration Configuring Partitions  World Wide Node Name  Virtual World Wide Node Name  PCI Device ID  PCI (bus) Address Figure 5-19. Partition 2 Configuration: FCoE Offload To configure the third partition, select Partition 3 Configuration to open the Partition 3 Configuration page (Figure 5-18).
  • Page 80 5–Adapter Preboot Configuration Configuring Partitions  PCI Address Figure 5-20. Partition 3 Configuration: iSCSI Offload To configure the remaining Ethernet partitions, including the previous (if not offload-enabled), open the page for a partition 2 or greater Ethernet partition. The Personality shows as Ethernet (Figure 5-21) and includes the following additional parameters:...
  • Page 81: Roce Configuration

    RoCE Configuration This chapter describes RDMA over converged Ethernet (RoCE v1 and v2) configuration on the 41000 Series Adapter, the Ethernet switch, and the Windows or Linux host, including:  Supported Operating Systems and OFED  Planning for RoCE  Preparing the Adapter ...
  • Page 82: Planning For Roce

    6–RoCE Configuration Planning for RoCE Table 6-1. OS Support for RoCE, RoCEv2, iWARP, iSER, and OFED Operating System Inbox OFED 3.18-3 GA RHEL 7.3 RoCEv1, RoCEv2, iWARP, iSER SLES 11 SP4 RoCEv1, iWARP RoCEv1, iWARP SLES 12 SP1 RoCEv1, iWARP, iSER RoCEv1, iWARP SLES 12 SP2 RoCEv1, RoCEv2, iWARP, iSER...
  • Page 83: Preparing The Adapter

    6–RoCE Configuration Preparing the Adapter Preparing the Adapter Follow these steps to enable DCBX and specify the RoCE priority using the HII management application. For information about the HII application, see Chapter 5 Adapter Preboot Configuration. To prepare the adapter: In the Main Configuration Page, select Data Center Bridging (DCB) Settings, and then click Finish.
  • Page 84 6–RoCE Configuration Preparing the Ethernet Switch To configure the Cisco switch: Open a config terminal session as follows: Switch# config terminal switch(config)# Configure quality of service (QoS) class map and set the RoCE priority to match the adapter (5) as follows: switch(config)# class-map type qos class-roce switch(config)# match cos 5 Configure queuing class maps as follows:...
  • Page 85: Configuring The Dell Z9100 Ethernet Switch

    6–RoCE Configuration Preparing the Ethernet Switch Assign a VLAN ID to the switch port to match the VLAN ID assigned to the adapter (5). switch(config)# interface ethernet x/x switch(config)# switchport mode trunk switch(config)# switchport trunk allowed vlan 1,5 Configuring the Dell Z9100 Ethernet Switch Configuring the Dell Z9100 Ethernet Switch for RoCE comprises configuring a DCB map for RoCE, configuring priority-based flow control (PFC) and enhanced transmission selection (ETS), verifying the DCB map, applying the DCB map to...
  • Page 86 6–RoCE Configuration Preparing the Ethernet Switch Verify the ETS and PFC configuration on the port. The following examples show summarized interface information for ETS and detailed interface information for PFC. Dell(conf-if-tf-1/8/1)# do show interfaces twentyFiveGigE 1/8/1 ets summary Interface twentyFiveGigE 1/8/1 Max Supported TC is 4 Number of Traffic Classes is 8 Admin mode is on...
  • Page 87: Configuring The Arista 7060X Ethernet Switch

    6–RoCE Configuration Preparing the Ethernet Switch Remote ISCSI PriorityMap is 0x20 66 Input TLV pkts, 99 Output TLV pkts, 0 Error pkts, 0 Pause Tx pkts, 0 Pause Rx pkts 66 Input Appln Priority TLV pkts, 99 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts Configure the DCBX protocol (CEE in this example).
  • Page 88: Configuring Roce On The Adapter For Windows Server

    6–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Setting Up ETS In the following example, traffic class 0 is configured with 5 percent bandwidth, and traffic class 1 is configured with 95 percent bandwidth: Arista-7060X-EIT(config)#dcbx ets traffic-class 0 bandwidth 5 Arista-7060X-EIT(config)#dcbx ets traffic-class 1 bandwidth 95 Configuring the Interface The same VLAN ID must be assigned to the server adapter ports.
  • Page 89 6–RoCE Configuration Configuring RoCE on the Adapter for Windows Server On the Advanced page, configure the properties listed in Table 6-2 selecting each item under Property and choosing an appropriate Value for that item. Then click OK. Table 6-2. Advanced Properties for RoCE Property Value or Description Network Direct Functionality...
  • Page 90 6–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Figure 6-1 shows an example of configuring a property value. Figure 6-1. Configuring RoCE Properties Using Windows PowerShell, verify that RDMA is enabled on the adapter. The Get-NetAdapterRdma command lists the adapters that support RDMA—both ports are enabled.
  • Page 91 6–RoCE Configuration Configuring RoCE on the Adapter for Windows Server ReceiveSegmentCoalescing : Enabled Chimney : Disabled TaskOffload : Enabled NetworkDirect : Enabled NetworkDirectAcrossIPSubnets : Blocked PacketCoalescingFilter : Disabled Connect a server message block (SMB) drive, run RoCE traffic, and verify the results.
  • Page 92: Configuring Roce On The Adapter For Linux

    6–RoCE Configuration Configuring RoCE on the Adapter for Linux Configuring RoCE on the Adapter for Linux This section describes the RoCE configuration procedure for RHEL and SLES. It also describes how to verify the RoCE configuration and provides some guidance about using group IDs (GIDs) with VLAN interfaces.
  • Page 93: Roce Configuration For Sles

    6–RoCE Configuration Configuring RoCE on the Adapter for Linux RoCE Configuration for SLES To configure RoCE on the adapter for an SLES host, OFED must be installed and configured on the SLES host. To install inbox OFED for SLES Linux: While installing or upgrading the operating system, select the InfiniBand support packages.
  • Page 94 6–RoCE Configuration Configuring RoCE on the Adapter for Linux NOTE You might see a few packages already installed because of a dependency, but make sure all of the preceding packages are installed (dpkg --get-selections | grep <pkg_name>), and follow the package installation method prescribed by Ubuntu.
  • Page 95 6–RoCE Configuration Configuring RoCE on the Adapter for Linux # make libqedr_install  Option 2: # cd fastlinq-X.X.X.X/libqedr-X.X.X.X/ # ./configure --prefix=/usr --libdir=${exec_prefix}/lib --sysconfdir=/etc # make install Before loading the QLogic Ethernet and RDMA drivers, uninstall the existing out-of-box or inbox drivers by issuing the following commands: # modprobe -r qede # depmod -a # modprobe -v qedr
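    Once qed, qede, and qedr are loaded as shown above, a quick verification sketch (device names such as qedr0 depend on the system and are illustrative) is:
        lsmod | grep qedr        # RDMA driver loaded
        ibv_devices              # should list qedr0 and qedr1
        ibv_devinfo -d qedr0     # port state should read PORT_ACTIVE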
  • Page 96: Verifying The Roce Configuration On Linux

    6–RoCE Configuration Configuring RoCE on the Adapter for Linux Verifying the RoCE Configuration on Linux After installing OFED, installing the Linux driver, and loading the RoCE drivers, verify that the RoCE devices were detected on all Linux operating systems. To verify RoCE configuration on Linux: Stop firewall tables using commands.
  • Page 97 6–RoCE Configuration Configuring RoCE on the Adapter for Linux Configure the IP address and enable the port using a configuration method such as ifconfig: # ifconfig ethX 192.168.10.10/24 up Issue the ibv_devinfo command. For each PCI function, you should see a separate device, as shown in the following example:
  • Page 98: Vlan Interfaces And Gid Index Values

    6–RoCE Configuration Configuring RoCE on the Adapter for Linux local address: LID 0x0000, QPN 0xff0000, PSN 0xb3e07e, GID fe80::20e:1eff:fe50:c7c0 remote address: LID 0x0000, QPN 0xff0000, PSN 0x934d28, GID fe80::20e:1eff:fe50:c570 8192000 bytes in 0.05 seconds = 1436.97 Mbit/sec 1000 iters in 0.05 seconds = 45.61 usec/iter Client Ping: root@lambodar:~# ibv_rc_pingpong -d qedr0 -g 0 192.168.10.165 local address:...
  • Page 99: Roce V2 Configuration For Linux

    6–RoCE Configuration Configuring RoCE on the Adapter for Linux NOTE The default GID value is zero (0) for back-to-back or pause settings. For server/switch configurations, you must identify the proper GID value. If you are using a switch, refer to the corresponding switch configuration documents for the proper settings.
  • Page 100: Sys And Class Parameters

    6–RoCE Configuration Configuring RoCE on the Adapter for Linux GID[ 0000:0000:0000:0000:0000:ffff:c0a8:6403 GID[ 0000:0000:0000:0000:0000:ffff:c0a8:6403 Verifying RoCE v1 or RoCE v2 GID Index and Address from sys and class Parameters Use one of the following options to verify the RoCE v1 or RoCE v2 GID Index and address from the sys and class parameters: ...
  • Page 101: Verifying Roce V1 Or Roce V2 Functionality Through Perftest Applications

    6–RoCE Configuration Configuring RoCE on the Adapter for Linux Verifying RoCE v1 or RoCE v2 Functionality Through perftest Applications This section shows how to verify RoCE v1 or RoCE v2 functionality through perftest applications. In this example, the following server IP and client IP are used: ...
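    As a minimal perftest invocation pair (a sketch; the device name qedr0 and the server address, reused from the VLAN example that follows, are illustrative), RoCE v1 or v2 is selected through the GID index passed with -x, as described on the preceding pages:
        ib_send_bw -d qedr0 -x 0 -F                    # on the server
        ib_send_bw -d qedr0 -x 0 -F 192.168.100.3      # on the client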
  • Page 102 6–RoCE Configuration Configuring RoCE on the Adapter for Linux  Server Configuration: #/sbin/ip link add link p4p1 name p4p1.100 type vlan id 100 #ifconfig p4p1.100 192.168.100.3/24 up #ip route add 192.168.101.0/24 via 192.168.100.1 dev p4p1.100  Client Configuration: #/sbin/ip link add link p4p1 name p4p1.101 type vlan id 101 #ifconfig p4p1.101 192.168.101.3/24 up #ip route add 192.168.100.0/24 via 192.168.101.1 dev p4p1.101 Set the switch settings using the following procedure.
  • Page 103 6–RoCE Configuration Configuring RoCE on the Adapter for Linux Client Switch Settings: Figure 6-3. Switch Settings, Client Configuring RoCE v1 or RoCE v2 Settings for RDMA_CM Applications To configure RoCE, use the following scripts from the FastLinQ source package: # ./show_rdma_cm_roce_ver.sh qedr0 is configured to IB/RoCE v1 qedr1 is configured to IB/RoCE v1 # ./config_rdma_cm_roce_ver.sh v2...
  • Page 104: Configuring Roce On The Adapter For Esx

    6–RoCE Configuration Configuring RoCE on the Adapter for ESX Client Settings: Figure 6-5. Configuring RDMA_CM Applications: Client Configuring RoCE on the Adapter for ESX This section provides the following procedures and information for RoCE configuration:  Configuring RDMA Interfaces  Configuring MTU ...
  • Page 105 6–RoCE Configuration Configuring RoCE on the Adapter for ESX To view a list of the RDMA devices, issue the esxcli rdma device list command. For example: esxcli rdma device list Name Driver State Speed Paired Uplink Description ------- ------- ------ ---- ------- -------------...
  • Page 106: Configuring Mtu

    6–RoCE Configuration Configuring RoCE on the Adapter for ESX Configuring MTU To modify MTU for RoCE interface, change the MTU of the corresponding vSwitch. Set the MTU size of the RDMA interface based on MTU of the vSwitch by issuing the following command: # esxcfg-vswitch -m <new MTU>...
  • Page 107: Configuring A Paravirtual Rdma Device (Pvrdma)

    6–RoCE Configuration Configuring RoCE on the Adapter for ESX Queue pairs in SQD state: 0 Queue pairs in SQE state: 0 Queue pairs in ERR state: 0 Queue pair events: 0 Completion queues allocated: 1 Completion queue events: 0 Shared receive queues allocated: 0 Shared receive queue events: 0 Protection domains allocated: 1 Memory regions allocated: 3...
  • Page 108 6–RoCE Configuration Configuring RoCE on the Adapter for ESX Configure a distributed virtual switch as follows: In the VMware vSphere Web Client, expand the RoCE node in the left pane of the Navigator window. Right-click RoCE-VDS, and then click Add and Manage Hosts. Under Add and Manage Hosts, configure the following: ...
  • Page 109 6–RoCE Configuration Configuring RoCE on the Adapter for ESX On the Firewall Summary page, click Edit. In the Edit Security Profile dialog box under Name, scroll down, select the pvrdma check box, and then select the Set Firewall check box. Figure 6-8 shows an example.
  • Page 110: Iwarp Configuration

    iWARP Configuration Internet wide area RDMA protocol (iWARP) is a computer networking protocol that implements RDMA for efficient data transfer over IP networks. iWARP is designed for multiple environments, including LANs, storage networks, data center networks, and WANs. This chapter provides instructions for: ...
  • Page 111: Configuring Iwarp On The Windows

    7–iWARP Configuration Configuring iWARP on the Windows In the Warning - Saving Changes message box, click Yes to save the configuration. In the Success - Saving Changes message box, click OK. Repeat Step 2 through Step 7 to configure the NIC and iWARP for the other ports.
  • Page 112 7–iWARP Configuration Configuring iWARP on the Windows Using Windows PowerShell, verify that NetworkDirect is enabled. The Get-NetOffloadGlobalSetting command output (Figure 7-2) shows NetworkDirect Enabled. Figure 7-2. Windows PowerShell Command: Get-NetOffloadGlobalSetting To verify iWARP traffic: Map SMB drives and run iWARP traffic. Launch Performance Monitor (Perfmon).
  • Page 113 7–iWARP Configuration Configuring iWARP on the Windows Figure 7-3 shows an example. Figure 7-3. Perfmon: Add Counters AH0054601-00 B...
  • Page 114 7–iWARP Configuration Configuring iWARP on the Windows If iWARP traffic is running, counters appear as shown in the Figure 7-4 example. Figure 7-4. Perfmon: Verifying iWARP Traffic To verify the SMB connection: At a command prompt, issue the net use command as follows: C:\Users\Administrator>...
  • Page 115: Configuring Iwarp On Linux

    7–iWARP Configuration Configuring iWARP on Linux Kernel 60 Listener [fe80::71ea:bdd2:ae41:b95f%60]:445 NA Kernel 60 Listener 192.168.11.20:16159 192.168.11.10:445 Configuring iWARP on Linux QLogic 41000 Series Adapters support iWARP on the Linux Open Fabric Enterprise Distributions (OFEDs) listed in Table 6-1 on page iWARP configuration on a Linux system includes the following: ...
  • Page 116: Detecting The Device

    7–iWARP Configuration Configuring iWARP on Linux Load the RDMA driver by issuing the following command: #modprobe -v qedr The following example shows the command entries to change the RDMA protocol to iWARP on multiple NPAR interfaces: # modprobe qed rdma_protocol_map=04:00.1-3,04:00.3-3,04:00.5-3, 04:00.7-3,04:01.1-3,04:01.3-3,04:01.5-3,04:01.7-3 # modprobe -v qedr # ibv_devinfo |grep iWARP...
  • Page 117: Supported Iwarp Applications

    7–iWARP Configuration Configuring iWARP on Linux port: state: PORT_ACTIVE (4) max_mtu: 4096 (5) active_mtu: 1024 (3) sm_lid: port_lid: port_lmc: 0x00 link_layer: Ethernet Supported iWARP Applications Linux-supported RDMA applications for iWARP include the following:  ibv_devinfo, ib_devices  ib_send_bw/lat, ib_write_bw/lat, ib_read_bw/lat, ib_atomic_bw/lat For iWARP, all applications must use the RDMA communication manager (rdma_cm) using the option.
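    For example, in the perftest suite the -R flag selects the RDMA connection manager; a sketch of a bandwidth test over rdma_cm (the device name and server address are illustrative) is:
        ib_send_bw -d qedr0 -F -R                     # on the server
        ib_send_bw -d qedr0 -F -R 192.168.11.20       # on the client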
  • Page 118: Configuring Nfs-Rdma

    7–iWARP Configuration Configuring iWARP on Linux Connection type : RC Using SRQ : OFF TX depth : 128 CQ Moderation : 100 : 1024[B] Link type : Ethernet GID index Max inline data : 0[B] rdma_cm QPs : ON Data ex. method : rdma_cm ---------------------------------------------------------------------------- local address: LID 0000 QPN 0x0192 PSN 0xcde932 GID: 00:14:30:196:192:110:00:00:00:00:00:00:00:00:00:00...
  • Page 119: Iwarp Rdma-Core Support On Sles 12 Sp3, Rhel 7.4, And Ofed 4.8X

    7–iWARP Configuration Configuring iWARP on Linux Include the default RDMA port 20049 into this file as follows: # echo rdma 20049 > /proc/fs/nfsd/portlist To make local directories available for NFS clients to mount, issue the exportfs command as follows: # exportfs -v To configure the NFS client: NOTE This procedure for NFS client configuration also applies to RoCE.
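    On the client side, a hedged sketch of mounting the export over RDMA on port 20049 (the server address, export path, and mount point are illustrative):
        modprobe xprtrdma                                        # NFS/RDMA client transport
        mount -o rdma,port=20049 192.168.22.3:/export /mnt/nfs   # mount the export over RDMA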
  • Page 120 7–iWARP Configuration Configuring iWARP on Linux Otherwise, go to https://github.com/linux-rdma/rdma-core.git and click Clone or download. Install all OS-dependent packages/libraries as described in the RDMA-Core README. For RHEL and CentOS, issue the following command: # yum install cmake gcc libnl3-devel libudev-devel make pkgconfig valgrind-devel For SLES 12 SP3 (ISO/SDK kit), install the following RPMs: cmake-3.5.2-18.3.x86_64.rpm (OS ISO)
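    Once the dependencies are installed, the rdma-core tree builds with its bundled script; a sketch, assuming the repository was cloned into a directory named rdma-core:
        cd rdma-core
        bash build.sh        # builds the libraries and utilities under ./build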
  • Page 121 7–iWARP Configuration Configuring iWARP on Linux # /usr/bin/rping -c -v -C 5 -a 192.168.22.3 (or) rping -c -v -C 5 -a 192.168.22.3 ping data: rdma-ping-0: ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqr ping data: rdma-ping-1: BCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrs ping data: rdma-ping-2: CDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrst ping data: rdma-ping-3: DEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstu ping data: rdma-ping-4: EFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuv client DISCONNECT EVENT...
  • Page 122: Iser Configuration

    iSER Configuration This chapter provides procedures for configuring iSCSI Extensions for RDMA (iSER) for Linux (RHEL, SLES, and Ubuntu), including:  Before You Begin  Configuring iSER for RHEL  Configuring iSER for SLES 12  Using iSER with iWARP on RHEL and SLES ...
  • Page 123 8–iSER Configuration Configuring iSER for RHEL versions. The inbox ib_isert module does not work with any out-of-box OFED versions. Unload any existing FastLinQ drivers as described in “Removing the Linux Drivers” on page Install the latest FastLinQ driver and libqedr packages as described in “Installing the Linux Drivers with RDMA”...
  • Page 124 8–iSER Configuration Configuring iSER for RHEL You can use a Linux TCM-LIO target to test iSER. The setup is the same for any iSCSI target, except that you issue the enable_iser Boolean=true command on the applicable portals. The portal instances are identified as iser in Figure 8-2.
  • Page 125 8–iSER Configuration Configuring iSER for RHEL Confirm that the Iface Transport is iser in the target connection, as shown in Figure 8-3. Issue the iscsiadm command; for example: iscsiadm -m session -P2 Figure 8-3. Iface Transport Confirmed To check for a new iSCSI device, as shown in Figure 8-4, issue the command.
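    For reference, a typical discovery-and-login sequence that forces the iser transport looks like the following (the portal address and target IQN are illustrative):
        iscsiadm -m discovery -t st -p 192.168.100.9:3260 -I iser
        iscsiadm -m node -T iqn.2015-06.test.target1 -p 192.168.100.9:3260 -I iser -l
        iscsiadm -m session -P2 | grep -i 'iface transport'    # should report iser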
  • Page 126: Configuring Iser For Sles 12

    8–iSER Configuration Configuring iSER for SLES 12 Configuring iSER for SLES 12 Because the targetcli is not inbox on SLES 12.x, you must complete the following procedure. To configure iSER for SLES 12: To install targetcli, copy and install the following RPMs from the ISO image (x86_64 and noarch location): lio-utils-4.1-14.6.x86_64.rpm python-configobj-4.7.2-18.10.noarch.rpm...
  • Page 127: Using Iser With Iwarp On Rhel And Sles

    8–iSER Configuration Using iSER with iWARP on RHEL and SLES Using iSER with iWARP on RHEL and SLES Configure the iSER initiator and target similar to RoCE to work with iWARP. You ™ can use different methods to create a Linux-IO Target (LIO );...
  • Page 128: Configuring Iser For Ubuntu

    8–iSER Configuration Configuring iSER for Ubuntu To configure an initiator for iWARP: To discover the iSER LIO target using port 3261, issue the iscsiadm command as follows: # iscsiadm -m discovery -t st -p 192.168.21.4:3261 -I iser 192.168.21.4:3261,1 iqn.2017-04.com.org.iserport1.target1 Change the transport mode to as follows: iser # iscsiadm -m node -o update -T iqn.2017-04.com.org.iserport1.target1 -n...
  • Page 129 8–iSER Configuration Configuring iSER for Ubuntu All rights reserved. /> ls o- /...........[...] o- backstores ........[...] | o- fileio ......[0 Storage Object] | o- iblock ......[0 Storage Object] | o- pscsi ......[0 Storage Object] | o- rd_dr ......[0 Storage Object] | o- rd_mcp ......
  • Page 130 8–iSER Configuration Configuring iSER for Ubuntu Created target iqn.2004-01.com.qlogic.iSERPort1.Target1. Selected TPG Tag 1. Successfully created TPG 1. /> ls o- / ..........[...] o- backstores ........[...] | o- fileio ......[0 Storage Object] | o- iblock ......[0 Storage Object] | o- pscsi ......
  • Page 131 8–iSER Configuration Configuring iSER for Ubuntu o- iscsi ........[1 Target] | o- iqn.2004-01.com.qlogic.iSERPort1.Target1 ..[1 TPG] o- tpgt1 ........[enabled] o- acls ........[0 ACLs] o- luns ........[1 LUN] | o- lun0 ... [rd_mcp/iSERPort1-1 (ramdisk)] o- portals ......[0 Portals] o- loopback ........
  • Page 132 8–iSER Configuration Configuring iSER for Ubuntu Enable iSER on the portal by issuing the following command: /> /iscsi/iqn.2004-01.com.qlogic.iSERPort1.Target1/tpgt1/port als/192.168.10.103:3260 iser_enable iser operation has been enabled /> ls o- / ..........[...] o- backstores ........[...] | o- fileio ......[0 Storage Object] | o- iblock ......
  • Page 133 8–iSER Configuration Configuring iSER for Ubuntu | o- pscsi ......[0 Storage Object] | o- rd_dr ......[0 Storage Object] | o- rd_mcp ......[1 Storage Object] o- iSERPort1-1 ....[ramdisk activated] o- ib_srpt ........[0 Targets] o- iscsi ........[1 Target] | o- iqn.2004-01.com.qlogic.iSERPort1.Target1 ..
  • Page 134: Configuring The Initiator

    8–iSER Configuration Configuring iSER for Ubuntu Generated LIO-Target config: /etc/target/backup/lio_backup-2015-06-09_19:07:37.855693.sh Making backup of Target_Core_Mod/ConfigFS with timestamp: 2015-06-09_19:07:37.855693 Generated Target_Core_Mod config: /etc/target/backup/tcm_backup-2015-06-09_19:07:37.855693.sh Successfully updated default config /etc/target/lio_start.sh Successfully updated default config /etc/target/tcm_start.sh /> Configuring the Initiator To configure the initiator: Load the ib_iser module and confirm that it is loaded properly by issuing the following commands: # sudo modprobe ib_iser # lsmod | grep ib_iser...
  • Page 135: Optimizing Linux Performance

    8–iSER Configuration Optimizing Linux Performance Login to [iface: default, target: iqn.2004-01.com.qlogic.iSERPort1.Target1, portal: 192.168.10.5,3260] successful. Verify that the LUNs are visible by issuing the following commands: # sudo apt-get install lsscsi # lsscsi [1:0:0:0] cd/dvd DVD D DS8D9SH JHJ4 /dev/sr0 [2:0:0:0] disk LOGICAL VOLUME 4.68...
  • Page 136: Configuring Kernel Sysctl Settings

    8–iSER Configuration Optimizing Linux Performance Configuring Kernel sysctl Settings Set the kernel sysctl settings as follows: sysctl -w net.ipv4.tcp_mem="4194304 4194304 4194304" sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304" sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304" sysctl -w net.core.wmem_max=4194304 sysctl -w net.core.rmem_max=4194304 sysctl -w net.core.wmem_default=4194304 sysctl -w net.core.rmem_default=4194304 sysctl -w net.core.netdev_max_backlog=250000 sysctl -w net.ipv4.tcp_timestamps=0...
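    These values apply only until the next reboot; to make them persistent, they can be written to a sysctl configuration file and reloaded, for example (the file name is illustrative):
        echo "net.core.rmem_max = 4194304" >> /etc/sysctl.d/99-iser-tuning.conf
        echo "net.core.wmem_max = 4194304" >> /etc/sysctl.d/99-iser-tuning.conf
        sysctl -p /etc/sysctl.d/99-iser-tuning.conf    # reload the file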
  • Page 137: Iscsi Configuration

    iSCSI Configuration This chapter provides the following iSCSI configuration information:  iSCSI Boot  Configuring iSCSI Boot  Configuring the DHCP Server to Support iSCSI Boot  Configuring iSCSI Boot from SAN for SLES 12  Configuring iSCSI Boot from SAN for RHEL 7.4 ...
  • Page 138: Iscsi Boot

    9–iSCSI Configuration iSCSI Boot iSCSI Boot QLogic 4xxxx Series gigabit Ethernet (GbE) adapters support iSCSI boot to enable network boot of operating systems to diskless systems. iSCSI boot allows a Windows, Linux, or VMware operating system to boot from an iSCSI target machine located remotely over a standard IP network.
  • Page 139: Configuring Iscsi Boot Parameters

    9–iSCSI Configuration iSCSI Boot Associate an iSCSI initiator with the iSCSI target. Record the following information:  iSCSI target name  TCP port number  iSCSI Logical Unit Number (LUN)  initiator iSCSI qualified name (IQN)  CHAP authentication details After configuring the iSCSI target, obtain the following: ...
  • Page 140 9–iSCSI Configuration iSCSI Boot Table 9-1. Configuration Options (Continued) Option Description: IP Version: This option is specific to IPv6. Toggles between IPv4 and IPv6. All IP settings are lost if you switch from one protocol version to another. DHCP Request Timeout: Allows you to specify a maximum wait time, in seconds, for a DHCP request and response to complete.
  • Page 141: Adapter Uefi Boot Mode Configuration

    9–iSCSI Configuration iSCSI Boot Adapter UEFI Boot Mode Configuration To configure the boot mode: Restart the system. Press the OEM hotkey to enter the System setup or configuration menu. This is also known as UEFI HII. For example, the HPE Gen 9 systems use F9 as a hotkey to access the System Utilities menu at boot time (Figure 9-1).
  • Page 142 9–iSCSI Configuration iSCSI Boot On the Main Configuration Page, select Port Level Configuration (Figure 9-3), and then press ENTER. Figure 9-3. Selecting Port Level Configuration AH0054601-00 B...
  • Page 143: Configuring Iscsi Boot

    9–iSCSI Configuration Configuring iSCSI Boot On the Port Level Configuration page (Figure 9-4), select Boot Mode, and then press ENTER to select one of the following iSCSI boot modes:  iSCSI (SW)  iSCSI (HW) Figure 9-4. Port Level Configuration, Boot Mode NOTE The iSCSI (HW) option is not listed if the iSCSI Offload feature is disabled at port level.
  • Page 144: Static Iscsi Boot Configuration

    9–iSCSI Configuration Configuring iSCSI Boot Static iSCSI Boot Configuration In a static configuration, you must enter data for the following:  System’s IP address  System’s initiator IQN  Target parameters (obtained in “Configuring the iSCSI Target” on page 114) For information on configuration options, see Table 9-1 on page 115.
  • Page 145 9–iSCSI Configuration Configuring iSCSI Boot In the iSCSI Boot Configuration Menu, select iSCSI General Parameters (Figure 9-6), and then press ENTER. Figure 9-6. Selecting General Parameters On the iSCSI General Parameters page (Figure 9-7), press the UP ARROW and DOWN ARROW keys to select a parameter, and then press the ENTER key to select or input the following values: ...
  • Page 146 9–iSCSI Configuration Configuring iSCSI Boot Select iSCSI Initiator Parameters (Figure 9-8), and then press ENTER. Figure 9-8. Selecting iSCSI Initiator Parameters On the iSCSI Initiator Configuration page (Figure 9-9), select the following parameters, and then type a value for each: ...
  • Page 147 9–iSCSI Configuration Configuring iSCSI Boot NOTE Note the following for the preceding items with asterisks (*):  The label will change to IPv6 or IPv4 (default) based on the IP version set on the iSCSI General Parameters page (Figure 9-7 on page 121).
  • Page 148 9–iSCSI Configuration Configuring iSCSI Boot On the iSCSI First Target Configuration page, set the Connect option to Enabled for the iSCSI target. Type values for the following parameters for the iSCSI target, and then press ENTER:  IPv4* Address  TCP Port ...
  • Page 149 9–iSCSI Configuration Configuring iSCSI Boot If you want to configure a second iSCSI target device, select iSCSI Second Target Parameters (Figure 9-12), and enter the parameter values as you did in Step 10. Otherwise, proceed to Step Figure 9-12. iSCSI Second Target Configuration Press ESC once, and a second time to exit.
  • Page 150: Dynamic Iscsi Boot Configuration

    9–iSCSI Configuration Configuring iSCSI Boot Press the Y key to save changes, or follow the OEM guidelines to save the device-level configuration. For example, in a HPE Gen 9 system, press Y, c to confirm setting change (Figure 9-13). Figure 9-13. Saving iSCSI Changes After all changes have been made, reboot the system to apply the changes to the adapter’s running configuration.
  • Page 151 9–iSCSI Configuration Configuring iSCSI Boot For information on configuration options, see Table 9-1 on page 115. NOTE When using a DHCP server, the DNS server entries are overwritten by the values provided by the DHCP server. This override occurs even if the locally provided values are valid and the DHCP server provides no DNS server information.
  • Page 152: Enabling Chap Authentication

    9–iSCSI Configuration Configuring iSCSI Boot  Target Login Timeout: Default value or as required  DHCP Vendor ID: As required Figure 9-14. iSCSI General Configuration Enabling CHAP Authentication Ensure that the CHAP authentication is enabled on the target. To enable CHAP authentication: Go to the iSCSI General Configuration page.
  • Page 153: Configuring The Dhcp Server To Support Iscsi Boot

    9–iSCSI Configuration Configuring the DHCP Server to Support iSCSI Boot Configuring the DHCP Server to Support iSCSI Boot The DHCP server is an optional component, and is only necessary if you will be doing a dynamic iSCSI boot configuration setup (see “Dynamic iSCSI Boot Configuration”...
  • Page 154: Dhcp Option 43, Vendor-Specific Information

    9–iSCSI Configuration Configuring the DHCP Server to Support iSCSI Boot Table 9-2. DHCP Option 17 Parameter Definitions (Continued) Parameter Definition Target name in either IQN or EUI format. For details on both IQN <targetname> and EUI formats, refer to RFC 3720. An example IQN name is iqn.1995-05.com.QLogic:iscsi-target.
  • Page 155: Configuring The Dhcp Server

    9–iSCSI Configuration Configuring the DHCP Server to Support iSCSI Boot Configuring the DHCP Server Configure the DHCP server to support Option 16, 17, or 43. NOTE The formats of DHCPv6 Option 16 and Option 17 are fully defined in RFC 3315. If you use Option 43, you must also configure Option 60.
  • Page 156: Configuring Vlans For Iscsi Boot

    9–iSCSI Configuration Configuring the DHCP Server to Support iSCSI Boot Table 9-4 lists the DHCP Option 17 sub-options. Table 9-4. DHCP Option 17 Sub-option Definitions Sub-option Definition First iSCSI target information in the standard root path format: "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN> ": "<targetname>" Second iSCSI target information in the standard root path format: "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN>...
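    As an illustration only (not taken from the guide), an ISC dhcpd host entry that supplies Option 17 in the root path format shown above might look like this; the MAC address, portal address, LUN, and IQN are placeholders:
        host iscsi-initiator1 {
          hardware ethernet 00:0e:1e:aa:bb:cc;
          option root-path "iscsi:192.168.10.100::3260:0:iqn.1995-05.com.qlogic:iscsi-target";
        }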
  • Page 157: Configuring Iscsi Boot From San For Sles 12

    9–iSCSI Configuration Configuring iSCSI Boot from SAN for SLES 12 Select VLAN ID to enter and set the VLAN value, as shown in Figure 9-15. Figure 9-15. iSCSI Initiator Configuration, VLAN ID Configuring iSCSI Boot from SAN for SLES 12 Perform L2 to L4 iSCSI boot from SAN through Microsoft Multipath I/O (MPIO) on FastLinQ 41000 Series Adapters for SLES 12 SP1 on UEFI-based systems.
  • Page 158 9–iSCSI Configuration Configuring iSCSI Boot from SAN for SLES 12 On the iSCSI Initiator Configuration page, configure the parameters as shown in Figure 9-16. Figure 9-16. System Configuration: Setting iSCSI Initiator Parameters On the iSCSI Boot Configuration Menu, select iSCSI General Parameters.
  • Page 159 9–iSCSI Configuration Configuring iSCSI Boot from SAN for SLES 12 Go to /etc/default/grub and change the rd.iscsi.ibft parameter to rd.iscsi.firmware. Issue the following command: grub2-mkconfig -o /boot/efi/EFI/suse/grub.cfg To load the multipath module, issue the following command: modprobe dm_multipath To enable the multipath daemon, issue the following commands: systemctl start multipathd.service systemctl enable multipathd.service systemctl start multipathd.socket...
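    To make that edit concrete, the following is a minimal sketch of the /etc/default/grub change; the other kernel arguments shown as "..." are placeholders for whatever your system already passes:

        # /etc/default/grub (before): GRUB_CMDLINE_LINUX="... rd.iscsi.ibft"
        # /etc/default/grub (after):
        GRUB_CMDLINE_LINUX="... rd.iscsi.firmware"

    After saving the file, regenerate grub.cfg with the grub2-mkconfig command shown above.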
  • Page 160: Configuring Iscsi Boot From San For Rhel 7.4

    9–iSCSI Configuration Configuring iSCSI Boot from SAN for RHEL 7.4 Configuring iSCSI Boot from SAN for RHEL 7.4 To install RHEL 7.4 and later: Boot from the RHEL 7.x installation media with the iSCSI target already connected in UEFI, and then select Install Red Hat Enterprise Linux 7.x or Test this media &...
  • Page 161 9–iSCSI Configuration Configuring iSCSI Boot from SAN for RHEL 7.4 The installation process prompts you to install the out-of-box driver as shown in the Figure 9-17 example. Figure 9-17. Prompt for Out-of-Box Installation If required for your setup, load the FastLinQ driver update disk when prompted for additional driver disks.
  • Page 162 9–iSCSI Configuration Configuring iSCSI Boot from SAN for RHEL 7.4 In the Configuration window (Figure 9-18), select the language to use during the installation process, and then click Continue. Figure 9-18. Red Hat Enterprise Linux 7.4 Configuration In the Installation Summary window, click Installation Destination. The disk label is sda, indicating a single-path installation.
  • Page 163: Iscsi Offload In Windows Server

    9–iSCSI Configuration iSCSI Offload in Windows Server iSCSI Offload in Windows Server iSCSI offload is a technology that offloads iSCSI protocol processing overhead from host processors to the iSCSI HBA. iSCSI offload increases network performance and throughput while helping to optimize server processor use. This section covers how to configure the Windows iSCSI offload feature for the QLogic FastLinQ 41000 Series Adapters.
  • Page 164: Configuring Microsoft Initiator To Use Qlogic's Iscsi Offload

    9–iSCSI Configuration iSCSI Offload in Windows Server Configuring Microsoft Initiator to Use QLogic’s iSCSI Offload After the IP address is configured for the iSCSI adapter, you must use Microsoft Initiator to configure and add a connection to the iSCSI target using the QLogic iSCSI adapter.
  • Page 165 9–iSCSI Configuration iSCSI Offload in Windows Server In the iSCSI Initiator Name dialog box, type the new initiator IQN name, and then click OK. (Figure 9-20) Figure 9-20. iSCSI Initiator Node Name Change On the iSCSI Initiator Properties, click the Discovery tab. AH0054601-00 B...
  • Page 166 9–iSCSI Configuration iSCSI Offload in Windows Server On the Discovery page (Figure 9-21) under Target portals, click Discover Portal. Figure 9-21. iSCSI Initiator—Discover Target Portal AH0054601-00 B...
  • Page 167 9–iSCSI Configuration iSCSI Offload in Windows Server In the Discover Target Portal dialog box (Figure 9-22): In the IP address or DNS name box, type the IP address of the target. Click Advanced. Figure 9-22. Target Portal IP Address In the Advanced Settings dialog box (Figure 9-23), complete the following under Connect using:...
  • Page 168 9–iSCSI Configuration iSCSI Offload in Windows Server Click OK. Figure 9-23. Selecting the Initiator IP Address On the iSCSI Initiator Properties, Discovery page, click OK. AH0054601-00 B...
  • Page 169 9–iSCSI Configuration iSCSI Offload in Windows Server Click the Targets tab, and then on the Targets page (Figure 9-24), click Connect. Figure 9-24. Connecting to the iSCSI Target AH0054601-00 B...
  • Page 170: Iscsi Offload Faqs

    9–iSCSI Configuration iSCSI Offload in Windows Server On the Connect To Target dialog box (Figure 9-25), click Advanced. Figure 9-25. Connect To Target Dialog Box In the Local Adapter dialog box, select the QLogic <name or model> Adapter, and then click OK. Click OK again to close the Microsoft Initiator.
  • Page 171: Windows Server 2012 R2 And 2016 Iscsi Boot Installation

    9–iSCSI Configuration iSCSI Offload in Windows Server Question: What configurations should be avoided? Answer: The IP address should not be the same as the LAN. Windows Server 2012 R2 and 2016 iSCSI Boot Installation Windows Server 2012 R2 and 2016 support booting and installing in either the offload or non-offload paths.
  • Page 172: Iscsi Offload In Linux Environments

    9–iSCSI Configuration iSCSI Offload in Linux Environments iSCSI Offload in Linux Environments The QLogic FastLinQ 41000 Series iSCSI software consists of a single kernel module called qedi.ko (qedi). The qedi module is dependent on additional parts of the Linux kernel for specific functionality: ...
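    As a quick sanity check after loading the driver, the following is a minimal sketch showing that qedi and its supporting modules are resident (only the qedi and qed module names are assumed here):

        # modprobe qedi
        # lsmod | grep qed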
  • Page 173: Configuring Qedi.ko

    9–iSCSI Configuration Configuring qedi.ko Configuring qedi.ko The qedi driver automatically binds to the exposed iSCSI functions of the CNA, and the target discovery and binding is done through the open-iscsi tools. This functionality and operation is similar to that of the bnx2i driver. NOTE For more information on how to install FastLinQ drivers, see Chapter 3...
  • Page 174 9–iSCSI Configuration Verifying iSCSI Interfaces in Linux ..[0000:42:00.4]:[qedi_link_update:928]:59: Link Up event..[0000:42:00.5]:[__qedi_probe:3563]:60: QLogic FastLinQ iSCSI Module qedi 8.15.6.0, FW 8.15.3.0 ..[0000:42:00.5]:[qedi_link_update:928]:59: Link Up event Use open-iscsi tools to verify that IP is configured properly. Issue the following command: # iscsiadm -m iface | grep qedi qedi.00:0e:1e:c4:e1:6d qedi,00:0e:1e:c4:e1:6d,192.168.101.227,<empty>,iqn.1994-05.com.redhat:534ca9b6...
  • Page 175 9–iSCSI Configuration Verifying iSCSI Interfaces in Linux 192.168.25.100:3260,1 iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000002 Log into the iSCSI target using the IQN obtained in Step 5. To initiate the login procedure, issue the following command (where the last character in the command is a lowercase letter “L”): #iscsiadm -m node -p 192.168.25.100 -T iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000007 -l Logging in to [iface: qedi.00:0e:1e:c4:e1:6c,...
  • Page 176: Open-Iscsi And Boot From San Considerations

    9–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations Open-iSCSI and Boot from SAN Considerations In current distributions (for example, RHEL 6/7 and SLES 11/12), the inbox iSCSI user space utility (Open-iSCSI tools) lacks support for the qedi iSCSI transport and cannot perform user space-initiated iSCSI functionality. During boot from SAN installation, you can update the qedi driver using a driver update disk (DUD).
  • Page 177 9–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations To migrate from a non-offload interface to an offload interface: Upgrade qedi transport-supported Open-iSCSI tools such as iscsiadm, iscsid, iscsiuio, and iscsistart. Use the following Open-iSCSI RPM package: qlgc-open-iscsi-2.0_873.107-1.x86_64.rpm Issue the following command: rpm -ivh qlgc-open-iscsi-2.0_873.107-1.x86_64.rpm --force To reload all the daemon services, issue the following command: # systemctl daemon-reload...
  • Page 178 9–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations ExecStart=/usr/sbin/iscsid ExecStop=/sbin/iscsiadm -k 0 2 Comment out the preceding lines as shown in the following: #ExecStart=/usr/sbin/iscsid #ExecStop=/sbin/iscsiadm -k 0 2 Edit the following iscsiuio configuration file: #vi /usr/lib/systemd/system/iscsiuio.service Locate the following lines: Requires=iscsid.service BindTo=iscsid.service Comment out the preceding lines as shown in the following:...
  • Page 179 9–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations Create a backup of the original grub.cfg file, which is in the following locations:  For legacy boot: /boot/grub2/grub.cfg  For UEFI boot: /boot/efi/EFI/redhat/grub.cfg or /boot/grub2/grub.cfg NOTE Steps 7 and 8 describe how to replace the correct file.
  • Page 180 9–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations BindTo=iscsid.service Reboot the server. On the adapter’s preboot iSCSI Boot Configuration Menu, change the following values to boot through the iSCSI offload path: On the iSCSI Boot Configuration Menu, set iSCSI Offload to Enable. Set HBA Mode to Enable. NOTE The OS can now boot through the offload interface.
  • Page 181: Fcoe Configuration

    FCoE Configuration This chapter provides the following Fibre Channel over Ethernet (FCoE) configuration information:  FCoE Boot from SAN  Injecting (Slipstreaming) Adapter Drivers into Windows Image Files  Configuring Linux FCoE Offload  Differences Between qedf and bnx2fc  Configuring qedf.ko ...
  • Page 182: Preparing System Bios For Fcoe Build And Boot

    10–FCoE Configuration FCoE Boot from SAN Preparing System BIOS for FCoE Build and Boot To prepare the system BIOS, modify the system boot order and specify the BIOS boot protocol, if required. Specifying the BIOS Boot Protocol FCoE boot from SAN is supported in UEFI mode only. Set the platform in boot mode (protocol) using the system BIOS configuration to UEFI.
  • Page 183 10–FCoE Configuration FCoE Boot from SAN In System HII, select the QLogic device (Figure 10-2). Refer to the OEM user guide on accessing PCI device configuration menu. For example, on an HPE Gen 9 server, the System Utilities for QLogic devices are listed under the System Configuration menu.
  • Page 184 10–FCoE Configuration FCoE Boot from SAN On the Port Level Configuration page (Figure 10-4), select Boot Mode, and then press ENTER to select FCoE as the preferred boot mode. Figure 10-4. Boot Mode in Port Level Configuration NOTE FCoE is not listed as a boot option if the FCoE Offload feature is disabled at the port level.
  • Page 185 10–FCoE Configuration FCoE Boot from SAN Figure 10-5. FCoE Offload Enabled To configure the FCoE boot parameters: On the Device HII Main Configuration Page, select FCoE Configuration, and then press ENTER. In the FCoE Boot Configuration Menu, select FCoE General Parameters (Figure 10-6), and then press ENTER.
  • Page 186 10–FCoE Configuration FCoE Boot from SAN  Target Login Retry Count: Default value or as required Figure 10-7. FCoE General Parameters Return to the FCoE Boot Configuration page. Press ESC, and then select FCoE Target Parameters. Press ENTER. In the FCoE Target Parameters Menu, enable Connect to the preferred FCoE target.
  • Page 187: Windows Fcoe Boot From San

    10–FCoE Configuration FCoE Boot from SAN Where the value of n is between 1 and 8, enabling you to configure 8 FCoE targets. Figure 10-8. FCoE Target Configuration Windows FCoE Boot from SAN FCoE boot from SAN information for Windows includes: ...
  • Page 188: Configuring Fcoe

    10–FCoE Configuration FCoE Boot from SAN The following procedure prepares the image for installation and booting in FCoE mode. To set up Windows Server 2012R2/2016 FCoE boot: Remove any local hard drives on the system to be booted (remote system). Prepare the Windows OS installation media by following the slipstreaming steps in “Injecting (Slipstreaming) Adapter Drivers into Windows Image...
  • Page 189: Injecting (Slipstreaming) Adapter Drivers Into Windows Image Files

    10–FCoE Configuration Injecting (Slipstreaming) Adapter Drivers into Windows Image Files Injecting (Slipstreaming) Adapter Drivers into Windows Image Files To inject adapter drivers into the Windows image files: Obtain the latest driver package for the applicable Windows Server version (2012, 2012 R2, or 2016). Extract the driver package to a working directory: Open a command line session and navigate to the folder that contains the driver package.
  • Page 190: Configuring Linux Fcoe Offload

    10–FCoE Configuration Configuring Linux FCoE Offload NOTE Note the following regarding the operating system installation media:  Operating system installation media is expected to be a local drive. Network paths for operating system installation media are not supported.  The slipstream.bat script injects the driver components into all the SKUs that are supported by the operating system installation...
  • Page 191: Differences Between Qedf And Bnx2Fc

    10–FCoE Configuration Differences Between qedf and bnx2fc Differences Between qedf and bnx2fc Significant differences exist between qedf—the driver for QLogic FastLinQ 41000 Series 10/25GbE Controller (FCoE)—and the previous QLogic FCoE offload driver, bnx2fc. Differences include:  qedf directly binds to a PCI function exposed by the CNA. ...
  • Page 192: Verifying Fcoe Devices In Linux

    10–FCoE Configuration Verifying FCoE Devices in Linux Verifying FCoE Devices in Linux Follow these steps to verify that the FCoE devices were detected correctly after installing and loading the qedf kernel module. To verify FCoE devices in Linux: Check lsmod to verify that the qedf and associated kernel modules were loaded: # lsmod | grep qedf libfcoe 69632...
  • Page 193: Boot From San Considerations

    10–FCoE Configuration Boot from SAN Considerations 5:0:0:5 disk SANBlaze VLUN P2T1L5 V7.3 fc 5:0:0:6 disk SANBlaze VLUN P2T1L6 V7.3 fc 5:0:0:7 disk SANBlaze VLUN P2T1L7 V7.3 fc 5:0:0:8 disk SANBlaze VLUN P2T1L8 V7.3 fc 5:0:0:9 disk SANBlaze VLUN P2T1L9 V7.3 fc Configuration information for the host is located in /sys/class/fc_host/hostX, where X is the number of the FCoE host.
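    To inspect that per-host information, the following is a minimal sketch; host5 is a hypothetical host number, and the attribute names are standard fc_host sysfs entries rather than values taken from this guide:

        # ls /sys/class/fc_host/
        # cat /sys/class/fc_host/host5/port_name
        # cat /sys/class/fc_host/host5/port_state
        # cat /sys/class/fc_host/host5/speed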
  • Page 194: Sr-Iov Configuration

    SR-IOV Configuration Single root input/output virtualization (SR-IOV) is a specification by the PCI SIG that enables a single PCI Express (PCIe) device to appear as multiple, separate physical PCIe devices. SR-IOV permits isolation of PCIe resources for performance, interoperability, and manageability. NOTE Some SR-IOV features may not be fully enabled in the current release.
  • Page 195 11–SR-IOV Configuration Configuring SR-IOV on Windows On the Main Configuration Page - Device Level Configuration (Figure 11-1): Set the Virtualization Mode to SR-IOV. Click Back. Figure 11-1. Device Level Configuration On the Main Configuration Page, click Finish. In the Warning - Saving Changes message box, click Yes to save the configuration.
  • Page 196 11–SR-IOV Configuration Configuring SR-IOV on Windows Click OK. Figure 11-2. Adapter Properties, Advanced: Enabling SR-IOV To create a Virtual Machine Switch with SR-IOV (Figure 11-3 on page 173): Launch the Hyper-V Manager. Select Virtual Switch Manager. In the Name box, type a name for the virtual switch. Under Connection type, select External network.
  • Page 197 11–SR-IOV Configuration Configuring SR-IOV on Windows Select the Enable single-root I/O virtualization (SR-IOV) check box, and then click Apply. NOTE Be sure to enable SR-IOV when you create the vSwitch. This option is unavailable after the vSwitch is created. Figure 11-3. Virtual Switch Manager: Enabling SR-IOV AH0054601-00 B...
  • Page 198 11–SR-IOV Configuration Configuring SR-IOV on Windows The Apply Networking Changes message box advises you that Pending changes may disrupt network connectivity. To save your changes and continue, click Yes. To get the virtual machine switch capability, issue the following Windows PowerShell command: PS C:\Users\Administrator>...
  • Page 199 11–SR-IOV Configuration Configuring SR-IOV on Windows In the Settings for VM <VM_Name> dialog box (Figure 11-4), Hardware Acceleration page, under Single-root I/O virtualization, select the Enable SR-IOV check box, and then click OK. NOTE After the virtual adapter connection is created, the SR-IOV setting can be enabled or disabled at any time (even while traffic is running).
  • Page 200 11–SR-IOV Configuration Configuring SR-IOV on Windows Install the QLogic drivers for VF in the VM. NOTE Be sure to use the same driver package on both the VM and the host system. For example, use the same qeVBD and qeND driver version on the Windows VM and in the Windows Hyper-V host.
  • Page 201: Configuring Sr-Iov On Linux

    11–SR-IOV Configuration Configuring SR-IOV on Linux Figure 11-6 shows example output. Figure 11-6. Windows PowerShell Command: Get-NetadapterSriovVf Configuring SR-IOV on Linux To configure SR-IOV on Linux: Access the server BIOS System Setup, and then click System BIOS Settings. On the System BIOS Settings page, click Integrated Devices. On the System Integrated Devices page: Set the SR-IOV Global Enable option to Enabled.
  • Page 202 11–SR-IOV Configuration Configuring SR-IOV on Linux On the Main Configuration Page, click Finish, save your settings, and then reboot the system. To enable and verify virtualization: Open the grub.conf file and configure the iommu parameter as shown in Figure 11-8.  ...
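    A minimal sketch of that kernel-parameter change, assuming an Intel platform (use amd_iommu=on on AMD systems); whether your distribution uses a legacy grub.conf kernel line or /etc/default/grub with grub2-mkconfig is system-dependent, and the values in Figure 11-8 take precedence:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX="... intel_iommu=on"
        # Regenerate the GRUB configuration, then reboot:
        grub2-mkconfig -o /boot/grub2/grub.cfg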
  • Page 203 11–SR-IOV Configuration Configuring SR-IOV on Linux To view VF details (number of VFs and total VFs), issue the following command: find /sys/ | grep -i sriov For a specific port, enable a quantity of VFs. Issue the following command to enable, for example, 8 VFs on PCI instance 04:00.0 (bus 4, device 0, function 0): [root@ah-rh68 ~]# echo 8 >...
  • Page 204 11–SR-IOV Configuration Configuring SR-IOV on Linux Figure 11-10 shows example output. Figure 11-10. Command Output for ip link show Command Assign and verify MAC addresses: To assign a MAC address to the VF, issue the following command: ip link set <pf device> vf <vf index> mac <mac address> Ensure that the VF interface is up and running with the assigned MAC address.
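    For illustration, a usage example of the command template above; the interface name ens1f0, VF index 0, and MAC address are hypothetical:

        # ip link set ens1f0 vf 0 mac 02:15:5d:aa:bb:01
        # ip link show ens1f0     # the VF 0 entry should now list the assigned MAC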
  • Page 205 11–SR-IOV Configuration Configuring SR-IOV on Linux Power off the VM and attach the VF. (Some OSs support hot-plugging of VFs to the VM.) In the Virtual Machine dialog box (Figure 11-11), click Add Hardware. Figure 11-11. RHEL68 Virtual Machine In the left pane of the Add New Virtual Hardware dialog box (Figure 11-12), click PCI Host Device.
  • Page 206: Configuring Sr-Iov On Vmware

    11–SR-IOV Configuration Configuring SR-IOV on VMware Click Finish. Figure 11-12. Add New Virtual Hardware Power on the VM, and then issue the following command to check the device: lspci -vv | grep -i ether If no inbox driver is available, install the driver. As needed, add more VFs in the VM. Configuring SR-IOV on VMware To configure SR-IOV on VMware: Access the server BIOS System Setup, and then click System BIOS...
  • Page 207 11–SR-IOV Configuration Configuring SR-IOV on VMware Save the configuration settings and reboot the system. To enable the needed quantity of VFs per port (in this example, 16 on each port of a dual-port adapter), issue the following command: esxcfg-module -s "max_vfs=16,16" qedentv NOTE Each Ethernet function of the 41000 Series Adapter must have its own entry.
  • Page 208 11–SR-IOV Configuration Configuring SR-IOV on VMware 0000:05:0e.3 Network controller: QLogic Corp. QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (SR-IOV VF) [PF_0.5.1_VF_3] 0000:05:0f.6 Network controller: QLogic Corp. QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (SR-IOV VF) [PF_0.5.1_VF_14] 0000:05:0f.7 Network controller: QLogic Corp. QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (SR-IOV VF) [PF_0.5.1_VF_15] To validate the VFs per port, issue the command as follows:...
  • Page 209 11–SR-IOV Configuration Configuring SR-IOV on VMware For Physical Function, select the QLogic VF. To save your configuration changes and close this dialog box, click OK. Figure 11-13. VMware Host Edit Settings Power on the VM, and then issue the ifconfig -a command to verify that the added network interface is listed.
  • Page 210 11–SR-IOV Configuration Configuring SR-IOV on VMware If no inbox driver is available, install the driver. As needed, add more VFs in the VM. AH0054601-00 B...
  • Page 211: Nvme-Of Configuration With Rdma

    NVMe-oF Configuration with RDMA Non-Volatile Memory Express over Fabrics (NVMe-oF) enables the use of alternate transports to PCIe to extend the distance over which an NVMe host device and an NVMe storage drive or subsystem can connect. NVMe-oF defines a common architecture that supports a range of storage networking fabrics for the NVMe block storage protocol over a storage networking fabric.
  • Page 212: Installing Device Drivers On Both Servers

    12–NVMe-oF Configuration with RDMA Installing Device Drivers on Both Servers Figure 12-1 illustrates an example network. Figure 12-1. NVMe-oF Network The NVMe-oF configuration process covers the following procedures: Installing Device Drivers on Both Servers Configuring the Target Server Configuring the Initiator Server Preconditioning the Target Server...
  • Page 213: Configuring The Target Server

    12–NVMe-oF Configuration with RDMA Configuring the Target Server Configuring the Target Server You configure the target server after the reboot process. After the server is operating, you cannot change the configuration without rebooting. If you are using a startup script to configure the target server, consider pausing the script (using the wait command or something similar) as needed to ensure that each command finishes before executing the next command.
  • Page 214: Configuring The Initiator Server

    12–NVMe-oF Configuration with RDMA Configuring the Initiator Server Table 12-1. Target Parameters (Continued) Command Description Create NVMe port 1. # mkdir /sys/kernel/config/nvmet/ports/1 # cd /sys/kernel/config/nvmet/ports/1 # echo 1.1.1.1 > addr_traddr Set the same IP address. For example, 1.1.1.1 is the IP address for the target port of the 41000 Series Adapter.
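    Pulling the Table 12-1 steps together, the following is a minimal sketch of an NVMe-oF target built through the nvmet configfs interface; the subsystem name, namespace device path, and service ID are placeholders, and only the port and address commands shown in the table above are taken directly from this guide:

        # modprobe nvmet-rdma
        # Create a subsystem and allow any host to connect (placeholder subsystem name)
        mkdir /sys/kernel/config/nvmet/subsystems/nvme-subsystem-1
        echo 1 > /sys/kernel/config/nvmet/subsystems/nvme-subsystem-1/attr_allow_any_host
        # Add a namespace backed by the local NVMe drive (placeholder device path)
        mkdir /sys/kernel/config/nvmet/subsystems/nvme-subsystem-1/namespaces/1
        echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nvme-subsystem-1/namespaces/1/device_path
        echo 1 > /sys/kernel/config/nvmet/subsystems/nvme-subsystem-1/namespaces/1/enable
        # Create RDMA port 1 on the adapter's target IP address
        mkdir /sys/kernel/config/nvmet/ports/1
        echo 1.1.1.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
        echo rdma > /sys/kernel/config/nvmet/ports/1/addr_trtype
        echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
        echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
        # Expose the subsystem on the port
        ln -s /sys/kernel/config/nvmet/subsystems/nvme-subsystem-1 /sys/kernel/config/nvmet/ports/1/subsystems/nvme-subsystem-1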
  • Page 215 12–NVMe-oF Configuration with RDMA Configuring the Initiator Server Download, compile, and install the nvme-cli initiator utility. Issue these commands at the first configuration; you do not need to issue these commands after each reboot. # git clone https://github.com/linux-nvme/nvme-cli.git # cd nvme-cli # make &&...
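    Once nvme-cli is installed, discovery and connection from the initiator typically look like the following minimal sketch; the subsystem name is a placeholder, and the address and service ID match the target values used above:

        # Discover the target's subsystems over RDMA
        nvme discover -t rdma -a 1.1.1.1 -s 4420
        # Connect to a discovered subsystem (placeholder subsystem name)
        nvme connect -t rdma -n nvme-subsystem-1 -a 1.1.1.1 -s 4420
        # Confirm the remote namespace appears as a local block device
        nvme list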
  • Page 216: Preconditioning The Target Server

    12–NVMe-oF Configuration with RDMA Preconditioning the Target Server Figure 12-3 shows an example. Figure 12-3. Confirm NVMe-oF Connection Preconditioning the Target Server NVMe target servers that are tested out-of-the-box show a higher-than-expected performance. Before running a benchmark, the target server needs to be prefilled or preconditioned.
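    One common way to precondition is a full sequential-write pass over the target's NVMe device. The following fio invocation is a sketch assuming /dev/nvme0n1 is the device to be prefilled; the flags mirror the benchmark commands used later in this chapter:

        fio --name=precondition --filename=/dev/nvme0n1 --rw=write --bs=128k \
            --iodepth=32 --ioengine=libaio --direct=1 --numjobs=1 --loops=1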
  • Page 217: Testing The Nvme-Of Devices

    12–NVMe-oF Configuration with RDMA Testing the NVMe-oF Devices Testing the NVMe-oF Devices Compare the latency of the local NVMe device on the target server with that of the NVMe-oF device on the initiator server to show the latency that NVMe adds to the system.
  • Page 218 12–NVMe-oF Configuration with RDMA Testing the NVMe-oF Devices Run FIO to measure bandwidth of the local NVMe device on the target server. Issue the following command: fio --verify=crc32 --do_verify=1 --bs=8k --numjobs=1 --iodepth=32 --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --time_based --runtime=60 --filename=/dev/nvme0n1 --name=Write-BW-to-NVMe-Device --rw=randwrite can be...
  • Page 219: Optimizing Performance

    12–NVMe-oF Configuration with RDMA Optimizing Performance Optimizing Performance To optimize performance on both initiator and target servers: Configure the following system BIOS settings:  Power Profiles = 'Max Performance' or equivalent  ALL C-States = disabled  Hyperthreading = disabled Configure the Linux kernel parameters by editing the grub file...
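    As an illustration of the kernel-parameter step, the following /etc/default/grub line is a sketch; the specific arguments shown (disabling deep C-states) are common latency tunings and are assumptions here, not values taken from this guide:

        GRUB_CMDLINE_LINUX="... intel_idle.max_cstate=0 processor.max_cstate=1"
        grub2-mkconfig -o /boot/grub2/grub.cfg    # then reboot to apply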
  • Page 220: Irq Affinity (Multi_Rss-Affin.sh)

    12–NVMe-oF Configuration with RDMA Optimizing Performance IRQ Affinity (multi_rss-affin.sh) The following script sets the IRQ affinity. #!/bin/bash #RSS affinity setup script #input: the device name (ethX) #OFFSET=0 0/1/2 0/1/2/3 #FACTOR=1 OFFSET=0 FACTOR=1 LASTCPU=`cat /proc/cpuinfo | grep processor | tail -n1 | cut -d":" -f2` MAXCPUID=`echo 2 $LASTCPU ^ p | dc` OFFSET=`echo 2 $OFFSET ^...
  • Page 221: Cpu Frequency (Cpufreq.sh)

    12–NVMe-oF Configuration with RDMA Optimizing Performance CPU Frequency (cpufreq.sh) The following script sets the CPU frequency. #Usage "./nameofscript.sh" grep -E '^model name|^cpu MHz' /proc/cpuinfo cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do [ -f $CPUFREQ ] || continue; echo -n performance > $CPUFREQ; done cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor To configure the network or memory settings: sysctl -w net.ipv4.tcp_mem="16777216 16777216 16777216"...
  • Page 222: Configuring Roce Interfaces With Hyper-V

    Windows Server 2016 This chapter provides the following information for Windows Server 2016:  Configuring RoCE Interfaces with Hyper-V  RoCE over Switch Embedded Teaming  Configuring QoS for RoCE  Configuring VMMQ  Configuring VXLAN  Configuring Storage Spaces Direct ...
  • Page 223: Creating A Hyper-V Virtual Switch With An Rdma Virtual Nic

    13–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Creating a Hyper-V Virtual Switch with an RDMA Virtual NIC Follow the procedures in this section to create a Hyper-V virtual switch and then enable RDMA in the host VNIC. To create a Hyper-V virtual switch with an RDMA virtual NIC: Launch Hyper-V Manager.
  • Page 224: Adding A Vlan Id To Host Virtual Nic

    13–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Click OK. Figure 13-2. Hyper-V Virtual Ethernet Adapter Properties To enable RDMA, issue the following Windows PowerShell command: PS C:\Users\Administrator> Enable-NetAdapterRdma "vEthernet (New Virtual Switch)" PS C:\Users\Administrator> Adding a VLAN ID to Host Virtual NIC To add VLAN ID to a host virtual NIC: To find the host virtual NIC name, issue the following Windows PowerShell command:...
  • Page 225: Verifying If Roce Is Enabled

    13–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V To set the VLAN ID to the host virtual NIC, issue the following Windows PowerShell command: PS C:\Users\Administrator> Set-VMNetworkAdapterVlan -VMNetworkAdapterName "New Virtual Switch" -VlanId 5 -Access -ManagementOS NOTE Note the following about adding a VLAN ID to a host virtual NIC: ...
  • Page 226: Mapping The Smb Drive And Running Roce Traffic

    13–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V To assign a VLAN ID to the virtual port, issue the following command: Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB -VlanId 5 -Access -ManagementOS Mapping the SMB Drive and Running RoCE Traffic To map the SMB drive and run the RoCE traffic: Launch the Performance Monitor (Perfmon).
  • Page 227: Roce Over Switch Embedded Teaming

    13–Windows Server 2016 RoCE over Switch Embedded Teaming If the RoCE traffic is running, counters appear as shown in Figure 13-6. Figure 13-6. Performance Monitor Shows RoCE Traffic RoCE over Switch Embedded Teaming Switch Embedded Teaming (SET) is Microsoft’s alternative NIC teaming solution available to use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016 Technical Preview.
  • Page 228: Creating A Hyper-V Virtual Switch With Set And Rdma Virtual Nics

    13–Windows Server 2016 RoCE over Switch Embedded Teaming Creating a Hyper-V Virtual Switch with SET and RDMA Virtual NICs To create a Hyper-V virtual switch with SET and RDMA virtual NICs:  To create a SET, issue the following Windows PowerShell command: PS C:\Users\Administrator>...
  • Page 229: Running Rdma Traffic On Set

    13–Windows Server 2016 Configuring QoS for RoCE NOTE Note the following when adding a VLAN ID to a host virtual NIC:  Make sure that the VLAN ID is not assigned to the physical Interface when using host virtual NIC for RoCE. ...
  • Page 230 13–Windows Server 2016 Configuring QoS for RoCE Enable QoS in the miniport as follows: Open the miniport window, and then click the Advanced tab. On the adapter’s Advanced Properties page (Figure 13-9) under Property, select Quality of Service, and then set the value to Enabled.
  • Page 231 13–Windows Server 2016 Configuring QoS for RoCE Click OK. NOTE The preceding step is required for priority flow control (PFC). Figure 13-10. Advanced Properties: Setting VLAN ID To enable priority flow control for RoCE on a specific priority, issue the following command: PS C:\Users\Administrators>...
  • Page 232 13–Windows Server 2016 Configuring QoS for RoCE To disable priority flow control on any other priority, issue the following commands: PS C:\Users\Administrator> Disable-NetQosFlowControl 0,1,2,3,5,6,7 PS C:\Users\Administrator> Get-NetQosFlowControl Priority Enabled PolicySet IfIndex IfAlias -------- ------- --------- ------- ------- False Global False Global False Global...
  • Page 233 13–Windows Server 2016 Configuring QoS for RoCE To configure ETS for all traffic classes defined in the previous step, issue the following commands: PS C:\Users\Administrators> New-NetQosTrafficClass -name "RDMA class" -priority 4 -bandwidthPercentage 50 -Algorithm ETS PS C:\Users\Administrators> New-NetQosTrafficClass -name "TCP class" -priority 0 -bandwidthPercentage 30 -Algorithm ETS PS C:\Users\Administrator>...
  • Page 234: Configuring Qos By Enabling Dcbx On The Adapter

    13–Windows Server 2016 Configuring QoS for RoCE Configuring QoS by Enabling DCBX on the Adapter All configuration must be completed on all of the systems in use. The PFC, ETS, and traffic classes configuration must be the same on the switch and server. To configure QoS by enabling DCBX: Enable DCBX (IEEE, CEE, or Dynamic).
  • Page 235 13–Windows Server 2016 Configuring QoS for RoCE Enable QoS in the miniport as follows: On the adapter’s Advanced Properties page (Figure 13-11) under Property, select Quality of Service, and then set the value to Enabled. Click OK. Figure 13-11. Advanced Properties: Enabling QoS Assign the VLAN ID to the interface (required for PFC) as follows: Open the miniport window, and then click the Advanced tab.
  • Page 236 13–Windows Server 2016 Configuring QoS for RoCE Click OK. Figure 13-12. Advanced Properties: Setting VLAN ID To configure the switch, issue the following Windows PowerShell command: PS C:\Users\Administrators> Get-NetAdapterQoS Name : Ethernet 5 Enabled : True Capabilities Hardware Current -------- ------- MacSecBypass : NotSupported NotSupported...
  • Page 237: Configuring Vmmq

    13–Windows Server 2016 Configuring VMMQ NetDirect 445 RemoteTrafficClasses : TC TSA Bandwidth Priorities -- --- --------- ---------- 0 ETS 0-3,5-7 1 ETS RemoteFlowControl : Priority 4 Enabled RemoteClassifications : Protocol Port/Type Priority -------- --------- -------- NetDirect 445 NOTE The preceding example is taken when the adapter port is connected to an Arista 7060X switch.
  • Page 238: Enabling Vmmq On The Adapter

    13–Windows Server 2016 Configuring VMMQ Enabling VMMQ on the Adapter To enable VMMQ on the adapter: Open the miniport window, and then click the Advanced tab. On the Advanced Properties page (Figure 13-13) under Property, select Virtual Switch RSS, and then set the value to Enabled. Click OK.
  • Page 239: Creating A Virtual Machine Switch With Or Without Sr-Iov

    13–Windows Server 2016 Configuring VMMQ If applicable, adjust the Value for the selected property. Figure 13-14. Advanced Properties: Setting VMMQ Click OK. Creating a Virtual Machine Switch with or Without SR-IOV To create a virtual machine switch with or without SR-IOV: Launch the Hyper-V Manager.
  • Page 240 13–Windows Server 2016 Configuring VMMQ Under Connection type: Click External network. Select the Allow management operating system to share this network adapter check box. Figure 13-15. Virtual Switch Manager Click OK. AH0054601-00 B...
  • Page 241: Enabling Vmmq On The Virtual Machine Switch

    13–Windows Server 2016 Configuring VMMQ Enabling VMMQ on the Virtual Machine Switch To enable VMMQ on the virtual machine switch:  Issue the following Windows PowerShell command: PS C:\Users\Administrators> Set-VMSwitch -name q1 -defaultqueuevmmqenabled $true -defaultqueuevmmqqueuepairs 4 Getting the Virtual Machine Switch Capability To get the virtual machine switch capability: ...
  • Page 242: Creating A Vm And Enabling Vmmq On Vmnetworkadapters

    13–Windows Server 2016 Configuring VMMQ Creating a VM and Enabling VMMQ on VMNetworkadapters in the VM To create a virtual machine (VM) and enable VMMQ on VMNetworksadapters in the VM: Create a VM. Add the VMNetworkadapter to the VM. Assign a virtual switch to the VMNetworkadapter. To enable VMMQ on the VM, issue the following Windows PowerShell command: PS C:\Users\Administrators>...
  • Page 243: Default And Maximum Vmmq Virtual Nic

    13–Windows Server 2016 Configuring VMMQ Ethernet 3 00-15-5D-36-0A-07 Activated Adaptive Ethernet 3 00-15-5D-36-0A-08 0:16 Activated Adaptive Ethernet 3 00-15-5D-36-0A-09 Activated Adaptive Ethernet 3 00-15-5D-36-0A-0A Activated Adaptive Ethernet 3 00-15-5D-36-0A-0B Activated Adaptive Ethernet 3 00-15-5D-36-0A-F4 0:16 Activated Adaptive Ethernet 3 00-15-5D-36-0A-F5 Activated Adaptive Ethernet 3...
  • Page 244: Configuring Vxlan

    13–Windows Server 2016 Configuring VXLAN Configuring VXLAN VXLAN configuration information includes:  Enabling VXLAN Offload on the Adapter  Deploying a Software Defined Network Enabling VXLAN Offload on the Adapter To enable VXLAN offload on the adapter: Open the miniport window, and then click the Advanced tab. On the Advanced Properties page (Figure 13-17) under Property, select...
  • Page 245: Deploying A Software Defined Network

    13–Windows Server 2016 Configuring Storage Spaces Direct Deploying a Software Defined Network To take advantage of VXLAN encapsulation task offload on virtual machines, you must deploy a Software Defined Networking (SDN) stack that utilizes a Microsoft Network Controller. For more details, refer to the following Microsoft TechNet link on Software Defined Networking: https://technet.microsoft.com/en-us/windows-server-docs/networking/sdn/ software-defined-networking--sdn-...
  • Page 246: Configuring The Hardware

    13–Windows Server 2016 Configuring Storage Spaces Direct Configuring the Hardware Figure 13-18 shows an example of hardware configuration. Figure 13-18. Example Hardware Configuration NOTE The disks used in this example are 4 × 400G NVMe™ and 12 × 200G SSD disks.
  • Page 247: Deploying The Operating System

    13–Windows Server 2016 Configuring Storage Spaces Direct Deploying the Operating System To deploy the operating systems: Install the operating system. Install the Windows server roles (Hyper-V). Install the following features:  Failover Cluster  Data center bridging (DCB) Connect the nodes to the domain and add domain accounts. Configuring the Network To deploy Storage Spaces Direct, the Hyper-V switch must be deployed with RDMA-enabled host virtual NICs.
  • Page 248 13–Windows Server 2016 Configuring Storage Spaces Direct Enable Network Quality of Service. NOTE Network Quality of Service is used to ensure that the Software Defined Storage system has enough bandwidth to communicate between the nodes to ensure resiliency and performance. To configure QoS on the adapter, see “Configuring QoS for RoCE”...
  • Page 249 13–Windows Server 2016 Configuring Storage Spaces Direct To verify that the VLAN ID is set, issue the following command: Get-VMNetworkAdapterVlan -ManagementOS To disable and enable each host virtual NIC adapter so that the VLAN is active, issue the following command: Disable-NetAdapter "vEthernet (SMB_1)"...
  • Page 250: Configuring Storage Spaces Direct

    13–Windows Server 2016 Configuring Storage Spaces Direct Configuring Storage Spaces Direct Configuring Storage Spaces Direct in Windows Server 2016 includes the following steps:  Step 1. Running Cluster Validation Tool  Step 2. Creating a Cluster  Step 3. Configuring a Cluster Witness ...
  • Page 251 13–Windows Server 2016 Configuring Storage Spaces Direct Step 4. Cleaning Disks Used for Storage Spaces Direct The disks intended to be used for Storage Spaces Direct must be empty, and without partitions or other data. If a disk has partitions or other data, it will not be included in the Storage Spaces Direct system.
  • Page 252: Deploying And Managing A Nano Server

    13–Windows Server 2016 Deploying and Managing a Nano Server Step 5. Enabling Storage Spaces Direct After creating the cluster, issue the Enable-ClusterStorageSpacesDirect Windows PowerShell cmdlet. The cmdlet places the storage system into the Storage Spaces Direct mode and automatically does the following: ...
  • Page 253: Roles And Features

    13–Windows Server 2016 Deploying and Managing a Nano Server Roles and Features Table 13-1 shows the roles and features that are available in this release of Nano Server, along with the Windows PowerShell options that will install the packages for them. Some packages are installed directly with their own Windows PowerShell options (such as ).
  • Page 254: Deploying A Nano Server On A Physical Server

    13–Windows Server 2016 Deploying and Managing a Nano Server Table 13-1. Roles and Features of Nano Server (Continued) Role or Feature Options System Center Virtual Machine Manager Agent -Packages Microsoft-Windows-Server- SCVMM-Package -Packages Microsoft-Windows-Server- SCVMM-Compute-Package Note: Use this package only if you are monitoring Hyper-V.
  • Page 255 13–Windows Server 2016 Deploying and Managing a Nano Server Import the NanoServerImageGenerator script by issuing the following command: Import-Module .\NanoServerImageGenerator.psm1 -Verbose To create a VHD that sets a computer name and includes the OEM drivers and Hyper-V, issue the following Windows PowerShell command: NOTE This command will prompt you for an administrator password for the new VHD.
  • Page 256 13–Windows Server 2016 Deploying and Managing a Nano Server INFO : Windows path (I:) has been assigned. INFO : System volume location: I: INFO : Applying image to VHD. This could take a while... INFO : Image was applied successfully. INFO : Making image bootable...
  • Page 257: Deploying A Nano Server In A Virtual Machine

    13–Windows Server 2016 Deploying and Managing a Nano Server Deploying a Nano Server in a Virtual Machine To create a Nano Server virtual hard drive (VHD) to run in a virtual machine: Download the Windows Server 2016 OS image. Go to the folder from the downloaded file in Step NanoServer...
  • Page 258 13–Windows Server 2016 Deploying and Managing a Nano Server .\Nano1\VM_NanoServer.vhd -ComputerName Nano-VM1 –GuestDrivers cmdlet New-NanoServerImage at command pipeline position 1 Supply values for the following parameters: Windows(R) Image to Virtual Hard Disk Converter for Windows(R) 10 Copyright (C) Microsoft Corporation. All rights reserved.
  • Page 259: Managing A Nano Server Remotely

    13–Windows Server 2016 Deploying and Managing a Nano Server Managing a Nano Server Remotely Options for managing Nano Server remotely include Windows PowerShell, Windows Management Instrumentation (WMI), Windows Remote Management, and Emergency Management Services (EMS). This section describes how to access Nano Server using Windows PowerShell remoting.
  • Page 260: Managing Qlogic Adapters On A Windows Nano Server

    13–Windows Server 2016 Deploying and Managing a Nano Server You can now run Windows PowerShell commands on the Nano Server as usual. However, not all Windows PowerShell commands are available in this release of Nano Server. To see which commands are available, issue the command .
  • Page 261 13–Windows Server 2016 Deploying and Managing a Nano Server Figure 13-19 shows example output. Figure 13-19. Windows PowerShell Command: Get-NetAdapter To verify whether the RDMA is enabled on the adapter, issue the following Windows PowerShell command: [172.28.41.152]: PS C:\Users\Administrator\Documents> Get-NetAdapterRdma Figure 13-20 shows example output.
  • Page 262 13–Windows Server 2016 Deploying and Managing a Nano Server [172.28.41.152]: PS C:\> New-SMBShare -Name "smbshare" -Path c:\smbshare -FullAccess Everyone Figure 13-22 shows example output. Figure 13-22. Windows PowerShell Command: New-SMBShare To map the SMBShare as a network drive in the client machine, issue the following Windows PowerShell command: NOTE The IP address of an interface on the Nano Server is 192.168.10.10.
  • Page 263: Troubleshooting

    Troubleshooting This chapter provides the following troubleshooting information:  Troubleshooting Checklist  Verifying that Current Drivers Are Loaded  Testing Network Connectivity  Microsoft Virtualization with Hyper-V  Linux-specific Issues  Miscellaneous Issues  Collecting Debug Data Troubleshooting Checklist CAUTION Before you open the server cabinet to add or remove the adapter, review the “Safety Precautions”...
  • Page 264: Verifying That Current Drivers Are Loaded

    14–Troubleshooting Verifying that Current Drivers Are Loaded  Replace the failed adapter with one that is known to work properly. If the second adapter works in the slot where the first one failed, the original adapter is probably defective.  Install the adapter in another functioning system, and then run the tests again.
  • Page 265: Verifying Drivers In Vmware

    14–Troubleshooting Testing Network Connectivity If you loaded a new driver, but have not yet rebooted, the modinfo command will not show the updated driver information. Instead, issue the dmesg command to view the logs. In this example, the last entry identifies the driver that will be active upon reboot.
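    For example, a minimal check of the loaded Ethernet driver on Linux; qede is the FastLinQ Ethernet driver name used elsewhere in this guide, and the grep patterns are illustrative:

        # modinfo qede | grep -E "^filename|^version"
        # dmesg | grep -i qede | tail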
  • Page 266: Testing Network Connectivity For Linux

    14–Troubleshooting Microsoft Virtualization with Hyper-V Testing Network Connectivity for Linux To verify that the Ethernet interface is up and running: To check the status of the Ethernet interface, issue the ifconfig command. To check the statistics on the Ethernet interface, issue the netstat -i command.
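    A minimal sketch of such a check, assuming eth0 is the FastLinQ interface and 192.168.1.100 is a reachable peer (both placeholders):

        # ifconfig eth0              # interface is up and has an address
        # netstat -i                 # per-interface packet and error counters
        # ping -c 4 192.168.1.100    # basic reachability test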
  • Page 267: Troubleshooting Windows Fcoe And Iscsi Boot From San

    14–Troubleshooting Troubleshooting Windows FCoE and iSCSI Boot from SAN Troubleshooting Windows FCoE and iSCSI Boot from SAN If any USB flash drive is connected while Windows setup is loading files for installation, an error message will appear when you provide the drivers and then select the SAN disk for the installation.
  • Page 268: Collecting Debug Data

    14–Troubleshooting Collecting Debug Data Use the commands in Table 14-1 to collect debug data. Table 14-1. Collecting Debug Data Commands: dmesg -T (kernel logs), ethtool -d (register dump), sys_info.sh (system information; available in the driver bundle) AH0054601-00 B...
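    For example, collected into files for a support case; eth0 is a placeholder interface name, and sys_info.sh is run from the driver bundle directory:

        dmesg -T > kernel_log.txt            # kernel logs
        ethtool -d eth0 > register_dump.txt  # adapter register dump
        sh sys_info.sh                       # system information (from the driver bundle)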
  • Page 269: Adapter Leds

    Adapter LEDS Table A-1 lists the LED indicators for the state of the adapter port link and activity. Table A-1. Adapter Port Link and Activity LEDs Port LED LED Appearance Network State No link (cable disconnected) Link LED Continuously illuminated Link No port activity Activity LED...
  • Page 270: Cables And Optical Modules

    Cables and Optical Modules This appendix provides the following information for the supported cables and optical modules:  Supported Specifications  Tested Cables and Optical Modules  Tested Switches Supported Specifications The 41000 Series Adapters support a variety of cables and optical modules that comply with SFF8024.
  • Page 271: Tested Cables And Optical Modules

    B–Cables and Optical Modules Tested Cables and Optical Modules Tested Cables and Optical Modules QLogic does not guarantee that every cable or optical module that satisfies the compliance requirements will operate with the 41000 Series Adapters. QLogic has tested the components listed in Table B-1 and presents this list for your convenience.
  • Page 272 B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Manufac- Cable Part Number Type Gauge Factor turer Length NDCCGJ0003 SFP28 to SFP28 NDCCGJ0005 SFP28 to SFP28 ® Amphenol NDCCGF0001 SFP28 to SFP28 NDCCGF0003 SFP28 to SFP28 25G DAC...
  • Page 273 B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Manufac- Cable Part Number Type Gauge Factor turer Length 07R9N9 Rev A00 QSFP100GB to 4XSFP28GB 0YFNDD Rev A00 QSFP100GB to Dell 4XSFP28GB 026FN3 Rev A00 QSFP100GB to 4XSFP28GB...
  • Page 274 B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Manufac- Cable Part Number Type Gauge Factor turer Length FTLF8536P4BCL SFP28 Optical Transceiver SR Finisar FTLF8538P4BCL SFP28 Optical Transceiver SR no 25G Optical Transceivers Mellanox MMA2P00-AS...
  • Page 275: Tested Switches

    B–Cables and Optical Modules Tested Switches Tested Switches Table B-2 lists the switches that have been tested for interoperability with the 41000 Series Adapters. This list is based on switches that are available at the time of product release, and is subject to change over time as new switches enter the market or are discontinued.
  • Page 276: Feature Constraints

    Feature Constraints This appendix provides information about feature constraints implemented in the current release. These feature coexistence constraints may be removed in a future release. At that time, you should be able to use the feature combinations without any additional configuration steps beyond what would be usually required to enable the features.
  • Page 277 C–Feature Constraints  For the HII mapping specification, the Virtualization Mode field supports only the values None, NPAR, and SR-IOV. The value NPARSRIOV is not currently supported. NOTE The 41000 Series Adapters will support the NPAR+SR-IOV combination in a future software release. RoCE and iWARP Configuration Is Not Supported if NPAR Is Already Configured If NPAR is already configured on the adapter, you cannot configure RoCE or...
  • Page 278: Glossary

    Glossary ACPI bandwidth The Advanced Configuration and Power A measure of the volume of data that can Interface (ACPI) specification provides an be transmitted at a specific transmission open standard for unified operating rate. A 1Gbps or 2Gbps Fibre Channel system-centric device configuration and port can transmit or receive at nominal power management.
  • Page 279 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series data center bridging exchange dynamic host configuration protocol See DCBX. See DHCP. eCore Data center bridging. Provides enhance- A layer between the OS and the hardware ments to existing 802.1 bridge specifica- and firmware.
  • Page 280 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series human interface infrastructure Enhanced transmission selection. A See HII. standard that specifies the enhancement of transmission selection to support the allocation of bandwidth among traffic Human interface infrastructure. A specifi- classes.
  • Page 281 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series iWARP Internet wide area RDMA protocol. A Large send offload. LSO Ethernet adapter networking protocol that implements feature that allows the TCP\IP network RDMA for efficient data transfer over IP stack to build a large (up to 64KB) TCP networks.
  • Page 282 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series network interface card PCI Express (PCIe) See NIC. A third-generation I/O standard that allows enhanced Ethernet network performance beyond that of the older peripheral compo- Network interface card. Computer card nent interconnect (PCI) and PCI extended installed to enable a dedicated network (PCI-X) desktop and server slots.
  • Page 283 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series RoCE target RDMA over Converged Ethernet. A The storage-device endpoint of a SCSI network protocol that allows remote direct session. Initiators request data from memory access (RDMA) over a converged targets.
  • Page 284 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series transmission control protocol/Internet virtual logical area network protocol See VLAN. See TCP/IP. virtual machine type-length-value See VM. See TLV. VLAN Virtual logical area network (LAN). A group User datagram protocol. A connectionless of hosts with a common set of require- transport protocol without any guarantee ments that communicate as if they were...
  • Page 285 Index VLAN ID to host VNIC address ACC specifications, supported MAC, permanent and virtual ACPI definition of ADK, downloading Windows manageability feature supported Advanced Configuration and Power Interface, activity LED indicator See ACPI adapter affinity settings, IRQ See also adapter port agency certifications xxii definition of...
  • Page 286 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series bnx2fc driver, differences from qedf Coalesce Rx Microseconds boot Coalesce Tx Microseconds See also boot code and boot from SAN collecting debug data installation, Windows Server compliance mode, adapter UEFI, configuring product laser safety xxii parameters, configuring FCoE...
  • Page 287 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series constraints on features in this release protocol supported contacting Cavium debug data, collecting controller, power management options debug parameter 18, conventions, documentation xvii definitions of terms converged enhanced Ethernet, See CEE...
  • Page 288 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Windows ADK Linux, non-RoCE Windows drivers Linux, RoCE Windows Server 2016 OS image VMware driver Windows See also driver installation dynamic host configuration protocol, See DHCP See also driver removal definition of downloading updates for Linux, verifying loaded...
  • Page 289 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series FastLinQ ESXCLI VMware Plug-in adapter management tool definition of FCoE network installation definition of functional description of adapter boot configuration parameters boot from SAN boot from SAN considerations boot from SAN, troubleshooting boot from SAN, Windows index values, VLAN boot installation, Windows Server...
  • Page 290 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series Hyper-V Internet small computer system interface, See iSCSI Microsoft Virtualization Internet wide area RDMA protocol, See RoCE interfaces, configuring with iWARP hypervisor virtualization for Windows interoperability, switches checksum offloads support definition of I/O control, network IPv4 standards support...
  • Page 291 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series enabling iSCSI boot from SAN, configuring for SLES initiator, configuring iSER and iWARP on iSER with on Linux issues, troubleshooting iWARP, configuring kernel module qed.ko 148, minimum host OS jumbo frames performance, optimizing definition of...
  • Page 292 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series max_vfs parameter network interface card, See NIC maximum bandwidth, allocating network state indicators maximum transmission unit, See MTU Nexus 6000 Ethernet switch memory required for host hardware NFS network installation message signaled interrupts, See MSI, MSI-X NIC partitioning, See NPAR messages, Linux driver...
  • Page 293 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series iSCSI, FAQs definition of Linux FCoE, configuring max PFs per open fabric enterprise distribution, See OFED physical characteristics, adapter operating system physical function, See PF OFED support pnputil tool requirements, host adding and installing driver package with RoCE v1/v2 support...
  • Page 294 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series QCS CLI quality of service, See QoS downloading queues, Tx/Rx 28, management applications qed driver defaults, Linux description of RDMA upgrading on Linux definition of qed.ko, in Linux FCoE offload adapters, supported qed.ko, in Linux iSCSI offload applications...
  • Page 295 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series RoCE scsi_transport_iscsi.ko in Linux iSCSI offload configuration for ESX SCSI, definition of configuration for RHEL configuration for SLES deploying configuring on Windows Nano Server in RoCE over SET definition of SerDes, interface for DAC transceiver driver serializer/deserializer, See SerDes...
  • Page 296 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series device-level parameter configuration tested cables and optical modules enabling tested switches VM switch, creating testing network connectivity standards specifications Linux statistics, driver Windows Storage Spaces Direct, configuring 221, support, accessing technical definition of supported iSCSI lossless...
  • Page 297 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series iSER, configuring virtual interface, definition of OS support virtual LAN, See VLAN virtual logical area network, See VLAN checksum offloads support virtual machine, See VM definition of virtual NIC, See VNIC UEFI virtual switch definition of...
  • Page 298 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 41000 Series VMNetworkadapters in VM drivers, verifying current VMware FCoE boot from SAN driver parameters, defaults firmware, updating on driver parameters, optional Hyper-Converged system, deploying driver, FCoE (qedf) image files, injecting driver, installing iWARP, configuring driver, iSCSI (qedil)
  • Page 299 Cavium, Inc. All other brand and product names are trademarks or registered trademarks of their respective owners. This document is provided for informational purposes only and may contain errors. Cavium reserves the right, without notice, to make changes to this document or in product design or specifications.
