Cavium FastLinQ 45000 Series User Manual

Converged network adapters and intelligent ethernet adapters

User's Guide: Converged Network Adapters and Intelligent Ethernet Adapters, FastLinQ 45000 Series (BC0154501-00 P)

Summary of Contents for Cavium FastLinQ 45000 Series

  • Page 1 User’s Guide Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series BC0154501-00 P...
  • Page 2 Revision N, August 24, 2018; Revision P, January 17, 2019. Changes / Sections Affected: Updated the Cavium logos, preface content, and copyright information (Front cover, Preface, and back page). Removed references to Windows Nano Server (“System Requirements” on page 7, Table...
  • Page 3 In “…SP3 and Later” on page 91: “Although they are not necessarily required for iSCSI boot from SAN for SUSE, Cavium recommends that you also complete Step 2 through Step 15 of the Configuring iSCSI Boot from SAN for RHEL 7.5 and Later procedure”...
  • Page 4 Version 4. Following Step 12, added a fourth bullet to the note: “Switch dependent teaming (IEEE 802.3ad LACP and Generic/Static Link Aggregation (Trunking)) cannot use a switch independent partitioned virtual adapter.” (“Configuring Microsoft Initiator to Use Cavium’s iSCSI Offload” on page 197)
  • Page 5 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series In the To create a Hyper-V virtual switch with an RDMA NIC procedure (“Creating a Hyper-V Virtual Switch with an RDMA NIC” on page 242):  Changed the section title to “...RDMA NIC” (was “...RDMA Virtual NIC”).
  • Page 6: Table Of Contents

    Table of Contents: Preface, Supported Products … xviii, Intended Audience …
  • Page 7 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series: Driver Installation, Installing Linux Driver Software …
  • Page 8 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series: Configuring iSCSI Boot …
  • Page 9 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series: Configuring FCoE Boot from SAN on Windows …, Windows Server 2012 R2 and 2016 FCoE Boot Installation …
  • Page 10 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series: Configuring DCQCN …
  • Page 11 Offload in Windows Server …, Installing Cavium QLogic Drivers …
  • Page 12 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series: Windows Server 2016, Configuring RoCE Interfaces with Hyper-V …
  • Page 13 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series: Ingress Packet Redirection …
  • Page 14 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series: List of Figures (Figure / Page), Setting Advanced Adapter Properties …
  • Page 15 Add Counters Dialog Box …, Performance Monitor: Cavium FastLinQ Counters …
  • Page 16 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series: 10-5 Selecting the Initiator IP Address …
  • Page 17 Advanced Properties for RoCE …, Cavium FastLinQ RDMA Error Counters …
  • Page 18: Preface

    To obtain the new GUI, download QCC GUI for your adapter from the Marvell Web site (see “Downloading Updates and Documentation” on page xxiii). Supported Products This user’s guide describes the following Cavium FastLinQ products:  25Gb Intelligent Ethernet Adapters: QL45211HLCU-BK/SP/CK  QL45212HLCU-BK/SP/CK ...
  • Page 19: Intended Audience

    Preface Intended Audience Intended Audience This guide is intended for system administrators and other technical staff members responsible for configuring and managing adapters installed on servers ® ® in Windows, Linux , or VMware environments. What Is in This Guide Following this preface, the remainder of this guide is organized into the following chapters and appendices: ...
  • Page 20: Related Materials

    User’s Guide—FastLinQ ESXCLI VMware Plug-in (part number BC0151101-00) describes the plug-in that extends the capabilities of the ® CLI to manage Cavium QLogic 3400, 8400, 41000, and 45000 Series Adapters installed in VMware ESX/ESXi hosts. For information about downloading documentation from the Marvell Web site, see “Downloading Updates and Documentation”...
  • Page 21: Documentation Conventions

    Preface Documentation Conventions In addition the QConvergeConsole GUI help system provides topics available while using the QCC GUI. Documentation Conventions This guide uses the following documentation conventions: NOTE  provides additional information. CAUTION  without an alert symbol indicates the presence of a hazard that could cause damage to equipment or loss of data.
  • Page 22 Preface Documentation Conventions  Text in italics indicates terms, emphasis, variables, or document titles. For example:  What are shortcut keys? To enter the date type mm/dd/yyyy (where mm is the month, dd is the  day, and yyyy is the year). Topic titles between quotation marks identify related topics either within this ...
  • Page 23: Technical Support

    Technical Support Customers should contact their authorized maintenance provider for technical support of their Cavium QLogic products. Technical support for QLogic-direct products under warranty is available with a Cavium support account. To set up a support account (if needed) and submit a case: Go to www.marvell.com.
  • Page 24: Knowledgebase

    Legal Notices Knowledgebase The Cavium QLogic knowledgebase is an extensive collection of product information that you can search for specific solutions. Cavium is constantly adding to the collection of information in the database to provide answers to your most urgent questions.
  • Page 25: Agency Certification

    Preface Legal Notices Agency Certification The following sections summarize the EMC and EMI test specifications performed on the 45000 Series Adapters. EMI and EMC Requirements FCC Part 15 compliance: Class A FCC compliance information statement: This device complies with Part 15 of the FCC Rules.
  • Page 26: Kcc: Class A

    Preface Legal Notices KCC: Class A Korea RRA Class A Certified Product Name/Model: Converged Network Adapters and Intelligent Ethernet Adapters Certification holder: QLogic Corporation Manufactured date: Refer to date code listed on product Manufacturer/Country of origin: QLogic Corporation/USA A class equipment As this equipment has undergone EMC registration for busi- ness purpose, the seller and/or the buyer is asked to beware (Business purpose...
  • Page 27 Preface Legal Notices 2006/95/EC low voltage directive: TUV EN60950-1:2006+A11+A1+A12+A2 2nd Edition TUV IEC 60950-1: 2005 2nd Edition Am1: 2009 + Am2: 2013 CB CB Certified to IEC 60950-1 2nd Edition xxvii BC0154501-00 P...
  • Page 28: Product Overview

    “Adapter Specifications” on page 5  Functional Description The Cavium FastLinQ 45000 Series Adapters include 10, 25, 40, and 100Gb Converged Network Adapters and Intelligent Ethernet Adapters that are designed to perform accelerated data networking for server systems. The 45000 Series Adapters include a 10/25/40/50/100Gb Ethernet MAC with full-duplex capability.
  • Page 29 1–Product Overview Features  Data center bridging eXchange protocol (DCBX; CEE version 1.01, IEEE)  Single-chip solution: 25/40/100Gb MAC   SerDes interface for direct attach copper (DAC) transceiver connection  PCI Express ® (PCIe ® ) 3.0 x8  Zero copy capable hardware ...
  • Page 30: Adapter Management

    1–Product Overview Adapter Management  Advanced network features:  Jumbo frames (up to 9,600 bytes). The OS and the link partner must support jumbo frames. Virtual LANs (vLANs)   Flow control (IEEE Std 802.3x) Virtualization (SR-IOV Hypervisor)   VMware NetQueue ®...
  • Page 31: Qlogic Control Suite Cli

    Cavium QLogic Fibre Channel Adapters, Converged Network Adapters, and Intelligent Ethernet Adapters. You can use QCC GUI on Windows and Linux platforms to manage Cavium QLogic adapters on both local and remote computer systems. QCC GUI is dependent upon additional software (a management agent) for the adapter.
  • Page 32: Qconvergeconsole Powerkit

    1–Product Overview Adapter Specifications QConvergeConsole PowerKit QConvergeConsole PowerKit lets you manage Cavium QLogic FastLinQ 3400/8400/41000/45000 Series Adapters on the system using Cavium cmdlets in ® the Windows PowerShell application. Windows PowerShell is a Microsoft-developed scriptable language for performing task automation and configuration management both locally and remotely.
  • Page 33 1–Product Overview Adapter Specifications  802.3-2015 IEEE Standard for Ethernet (flow control) 802.3-2015 Clause 78 Energy Efficient Ethernet (EEE)  1588-2002 PTPv1 (Precision Time Protocol)  1588-2008 PTPv2  IPv4 (RFC 791)  IPv6 (RFC 2460) BC0154501-00 P
  • Page 34: Hardware Installation

    “Preinstallation Checklist” on page 9 “Installing the Adapter” on page 9  System Requirements Before you install a Cavium 45000 Series Adapter, verify that your system meets the hardware and operating system requirements shown in Table 2-1 and Table 2-2. For a complete list of supported operating systems, visit the Marvell Web site.
  • Page 35: Safety Precautions

    2–Hardware Installation Safety Precautions Table 2-2. Minimum Host Operating System Requirements (Operating System / Requirement): Windows: Server 2012, 2012 R2, 2016; Linux: RHEL 6.6, 7.0, and higher; SLES 11 SP4, SLES 12, and higher; CentOS 7.0 and higher; Ubuntu 14.04 LTS, 16.04 LTS; VMware: ESXi 6.0 U2 and later for 25G adapters...
  • Page 36: Preinstallation Checklist

    Never attempt to install a damaged adapter. Installing the Adapter The following instructions apply to installing the Cavium 45000 Series Adapters in most systems. For details about performing these tasks, refer to the manuals that were supplied with the system.
  • Page 37 2–Hardware Installation Installing the Adapter Applying even pressure at both corners of the card, push the adapter card into the slot until it is firmly seated. When the adapter is properly seated, the adapter port connectors are aligned with the slot opening, and the adapter faceplate is flush against the system chassis.
  • Page 38: Driver Installation

    Driver Installation NOTE Cavium now supports QConvergeConsole (QCC) GUI as the only GUI management tool across all Cavium QLogic adapters. QLogic Control Suite (QCS) GUI is no longer supported for the 45000 Series Adapters and adapters based on 57xx/57xxx controllers, and has been replaced by the QCC GUI management tool.
  • Page 39 “Downloading Updates and Documentation” on page xxiii. Table 3-1 describes the 45000 Series Adapter Linux drivers. Table 3-1. Cavium QLogic 45000 Series Adapters Linux Drivers (Linux Driver / Description): qed: The qed core driver module directly controls the firmware, handles interrupts, and provides the low-level API for the protocol-specific driver set.
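    As an illustrative check (not a step from the manual), the driver and firmware versions reported by the qed/qede modules can be read with the standard ethtool utility; eth0 is only a placeholder interface name:
        # ethtool -i eth0      (reports the driver name, driver version, and firmware-version for that interface)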
  • Page 40: Installing The Linux Drivers Without Rdma

    3–Driver Installation Installing Linux Driver Software The following kernel module (kmod) RPM installs Linux drivers on SLES hosts running the Xen Hypervisor:  qlgc-fastlinq-kmp-xen-<version>.<OS>.<arch>.rpm The following source RPM installs the RDMA library code on RHEL and SLES hosts:  qlgc-libqedr-<version>.<OS>.<arch>.src.rpm The following source code TAR BZip2 (BZ2) compressed file installs Linux drivers on RHEL and SLES hosts: ...
  • Page 41: Removing The Linux Drivers

    3–Driver Installation Installing Linux Driver Software Removing the Linux Drivers There are two procedures for removing Linux drivers: one for a non-RDMA environment and another for an RDMA environment. Choose the procedure that matches your environment. To remove Linux drivers in a non-RDMA environment, unload and remove the drivers: Follow the procedure that relates to the original installation method and the OS.
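    For reference, a minimal non-RDMA removal sequence (a sketch that assumes the drivers were installed from the kmod RPM named elsewhere in this guide; the exact steps for your installation method are on the pages that follow) is:
        # rmmod qede
        # rmmod qed
        # rpm -e qlgc-fastlinq-kmp-default-<version>.<arch>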
  • Page 42: Installing Linux Drivers Using The Src Rpm Package

    3–Driver Installation Installing Linux Driver Software Remove the driver module files:  If the drivers were installed using an RPM package, issue the following command: rpm -e qlgc-fastlinq-kmp-default-<version>.<arch>  If the drivers were installed using a TAR file, issue the following commands for your operating system: For RHEL and CentOS: cd /lib/modules/<version>/extra/qlgc-fastlinq...
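    The source-RPM installation that this page's heading refers to is truncated here; as a rough sketch only (the output path varies by distribution and is an assumption), a source RPM such as qlgc-libqedr-<version>.<OS>.<arch>.src.rpm is typically rebuilt and installed as follows:
        # rpmbuild --rebuild qlgc-libqedr-<version>.<OS>.<arch>.src.rpm
        # rpm -ivh /root/rpmbuild/RPMS/<arch>/qlgc-libqedr-<version>.<arch>.rpm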
  • Page 43: Installing Linux Drivers Using The Kmp/Kmod Rpm Package

    3–Driver Installation Installing Linux Driver Software The drivers will be installed in the following paths. For SLES: /lib/modules/<version>/updates/qlgc-fastlinq For RHEL and CentOS: /lib/modules/<version>/extra/qlgc-fastlinq Turn on all ethX interfaces as follows: ifconfig <ethX> up For SLES, use YaST to configure the Ethernet interfaces to automatically start at boot by setting a static IP address or enabling DHCP on the interface.
  • Page 44: Installing The Linux Drivers With Rdma

    3–Driver Installation Installing Linux Driver Software For RHEL and CentOS: /lib/modules/<version>/extra/qlgc-fastlinq Test the drivers by loading them (unload the existing drivers first, if necessary): rmmod qede rmmod qed modprobe qed modprobe qede Installing the Linux Drivers with RDMA NOTE When using the following procedures in a CentOS environment, follow the instructions for RHEL.
  • Page 45: Linux Driver Optional Parameters

    3–Driver Installation Installing Linux Driver Software Test the drivers by loading them as follows: modprobe qed modprobe qede modprobe qedr Linux Driver Optional Parameters Table 3-2 describes the optional parameters for the qede driver. Table 3-2. qede Driver Optional Parameters Parameter Description Controls driver verbosity level similar to ethtool -s <dev>...
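    To confirm that the RDMA stack came up after loading qedr, the following checks can be used; this is a sketch that assumes the libibverbs utilities (ibv_devices) are installed and is not a required step from the manual:
        # lsmod | grep qed      (qed, qede, and qedr should all be listed)
        # ibv_devices           (the qedr RDMA device(s) should appear)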
  • Page 46: Linux Driver Messages

    3–Driver Installation Installing Linux Driver Software Table 3-3. Linux Driver Operation Defaults (Continued) (Operation / qed Driver Default / qede Driver Default): MTU: — / 1500 (range is 46–9600); Rx Ring Size: — / 1000; Tx Ring Size: — / 4078 (range is 128–8191); Coalesce Rx Microseconds: — / 24 (range is 0–255); ...
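    The qede defaults in Table 3-3 can be inspected and adjusted at run time with standard ethtool options; this is an illustrative sketch (eth0 is a placeholder interface name), not a procedure from the manual:
        # ethtool -g eth0                      (show current and maximum ring sizes)
        # ethtool -G eth0 rx 1000 tx 4078      (set ring sizes; the values shown match the table defaults)
        # ethtool -C eth0 rx-usecs 24          (set receive interrupt coalescing in microseconds)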
  • Page 47: Installing Windows Driver Software

    3–Driver Installation Installing Windows Driver Software Review the list of certificates that are prepared to be enrolled: # mokutil --list-new Reboot the system again. When the shim launches MokManager, enter the root password to confirm the certificate importation to the Machine Owner Key (MOK) list. To determine if the newly imported key was enrolled: # mokutil --list-enrolled To launch MOK manually and enroll the QLogic public key:...
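    For context, the certificate being enrolled is normally staged with mokutil before the reboot described above; a minimal sketch, assuming the QLogic public key has been saved locally as qlogic_pubkey.der (a hypothetical file name), is:
        # mokutil --import qlogic_pubkey.der     (sets a one-time password; enrollment completes in MokManager on the next reboot)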
  • Page 48: Installing The Windows Drivers

    3–Driver Installation Installing Windows Driver Software  Managing Adapter Properties  Setting Power Management Options Installing the Windows Drivers NOTE No other separate procedure is required to install RoCE-supported drivers in Windows. To install the Windows drivers: Download the Windows device drivers for the 45000 Series Adapter from the Marvell Web site.
  • Page 49: Managing Adapter Properties

    3–Driver Installation Installing Windows Driver Software Managing Adapter Properties To view or change the 45000 Series Adapter properties: In the Control Panel, click Device Manager. On the properties of the selected adapter, click the Advanced tab. On the Advanced page (Figure 3-1), select an item under Property and then change the Value for that item as needed.
  • Page 50: Setting Power Management Options

    3–Driver Installation Installing VMware Driver Software Setting Power Management Options You can set power management options to allow the operating system to turn off the controller to save power or to allow the controller to wake up the computer. If the device is busy (servicing a call, for example), the operating system will not shut down the device.
  • Page 51: Vmware Drivers And Driver Packages

    The certified RoCE driver is not included in this release. The uncertified driver may be available as an early preview. Cavium has certified qedrntv for ESXi 6.5 and ESXi 6.7, which is bundled along with qedentv as one package in vCG listings.
  • Page 52: Installing Vmware Drivers

    3–Driver Installation Installing VMware Driver Software Install individual drivers using either:  Standard ESXi package installation commands (see Installing VMware Drivers) Procedures in the individual driver Read Me files   Procedures in the following VMware KB article: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US& cmd=displayKC&externalId=2137853 You should install the NIC driver first, followed by the storage drivers.
  • Page 53: Vmware Nic Driver Optional Parameters

    3–Driver Installation Installing VMware Driver Software Select one of the following installation options:  Option 1: Install the driver bundle (which will install all of the driver VIBs at one time) by issuing the following command: # esxcli software vib install -d /tmp/qedentv-2.0.3.zip ...
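    After either installation option, the installed VIBs can be confirmed with the standard esxcli inventory command; this is an illustrative check rather than a documented step:
        # esxcli software vib list | grep qed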
  • Page 54 3–Driver Installation Installing VMware Driver Software Table 3-6. VMware NIC Driver Optional Parameters (Continued) (Parameter / Description): num_queues: Specifies the number of TX/RX queue pairs. num_queues can be 1–11 or one of the following:  –1 allows the driver to determine the optimal number of queue pairs (default).
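    As a hedged example of applying a parameter such as num_queues (the value 8 is arbitrary, and the manual may describe a different method), ESXi module parameters are commonly set and verified with:
        # esxcli system module parameters set -m qedentv -p "num_queues=8"
        # esxcli system module parameters list -m qedentv | grep num_queues
    A reboot (or module reload) is generally required for the new value to take effect.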
  • Page 55: Vmware Driver Parameter Defaults

    3–Driver Installation Installing VMware Driver Software Table 3-6. VMware NIC Driver Optional Parameters (Continued) (Parameter / Description): vxlan_filter_en: Enables (1) or disables (0) the VXLAN filtering based on the outer MAC, the inner MAC, and the VXLAN network (VNI), directly matching traffic to a specific queue.
  • Page 56: Removing The Vmware Driver

    3–Driver Installation Installing VMware Driver Software Removing the VMware Driver To remove the .vib file (qedentv), issue the following command: # esxcli software vib remove --vibname qedentv To remove the driver, issue the following command: # vmkload_mod -u qedentv FCoE Support The QLogic VMware FCoE qedf driver included in the VMware software package supports QLogic FCoE converged network interface controllers (C-NICs).
  • Page 57: Firmware Upgrade Utility

    Firmware Upgrade Utility Cavium provides scripts to automate the adapter firmware and boot code upgrade process for Windows and Linux systems. Each script identifies all 45000 Series Adapters and upgrades all firmware components. To upgrade adapter firmware on VMware systems, see the User’s Guide—FastLinQ ESXCLI VMware Plug-in or the User’s Guide—QConvergeConsole Plug-ins for vSphere.
  • Page 58: Image Verification

    Point to Support, and then under Driver Downloads, click Marvell QLogic/FastLinQ Drivers. On the Cavium Downloads and Documentation page, select your adapter model to locate and download the Firmware Upgrade Utility for Linux. Unzip the Firmware Upgrade Utility on the system where the adapter is installed.
  • Page 59: Converting A 100G Adapter To Four-Port 25G Adapter

    4–Firmware Upgrade Utility Converting a 100G Adapter to Four-port 25G Adapter On the Cavium Downloads and Documentation page, select your adapter model to locate and download the Firmware Upgrade Utility for Windows. Unzip the Firmware Upgrade Utility on the system where the adapter is installed.
  • Page 60: Adapter Preboot Configuration

    Adapter Preboot Configuration During the host boot process, you have the opportunity to pause and perform adapter management tasks using the Human Infrastructure Interface (HII) application. These tasks include the following: “Getting Started” on page 34   “Displaying Firmware Information” on page 38 ...
  • Page 61: Getting Started

    5–Adapter Preboot Configuration Getting Started Getting Started To start the HII application: Open the System Setup window for your platform. For information about launching the System Setup, consult the user guide for your system. In the System Setup window (Figure 5-1), select Device Settings, and then press ENTER.
  • Page 62 5–Adapter Preboot Configuration Getting Started The Main Configuration Page presents the adapter management options where you can set the partitioning mode.  If you are not using NPAR, set the Partitioning Mode to Default, as shown in Figure 5-3. Figure 5-3. Main Configuration Page, Setting Default Partitioning Mode BC0154501-00 P...
  • Page 63 Partitions Configuration (if NPAR is selected as the Partitioning Mode) (see “Configuring Partitions” on page NOTE The NPAR option is not available on Cavium 100Gb Intelligent Ethernet Adapters: QL45611HLCU and QL45631HOCU. The QL456x1 (1×100G) adapter does not currently support NPAR mode on Windows. BC0154501-00 P...
  • Page 64 5–Adapter Preboot Configuration Getting Started In addition, the Main Configuration Page presents the adapter properties listed in Table 5-1. Table 5-1. Adapter Properties Adapter Property Description Partitioning Mode Values are Default or NPAR Device Name Factory-assigned device name Chip Type ASIC version PCI Device ID Unique vendor-specific PCI device ID...
  • Page 65: Displaying Firmware Information

    5–Adapter Preboot Configuration Displaying Firmware Information Displaying Firmware Information To view the properties for the firmware image, select Firmware Image Properties on the Main Configuration Page, and then press ENTER. The Firmware Information page (Figure 5-5) specifies the following view-only data: ...
  • Page 66 The Link Speed you choose applies to both adapter ports. The link speed and the forward error correction (FEC) must match that of the connected switch or device port, unless Cavium FastLinQ SmartAN™ mode is used.  SmartAN sets the port link speed to use Smart Auto Negotiation.
  • Page 67 5–Adapter Preboot Configuration Configuring Device-level Parameters Negotiated mode. 10GBASE-T is also auto-negotiated per standards. Auto Negotiated is the IEEE and Consortium Auto Negotiation mode for 25G and above DAC link speeds. Auto Negotiated mode is for DAC cables only; it does not work on optics (discrete modules or active optical cables).
  • Page 68: Configuring Port-Level Parameters

    5–Adapter Preboot Configuration Configuring Port-level Parameters Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset. Configuring Port-level Parameters Port-level configuration comprises the following parameters:  Boot Mode  DCBX Protocol  RoCE Priority ...
  • Page 69 5–Adapter Preboot Configuration Configuring Port-level Parameters Figure 5-7. Port Level Configuration Page: Setting Boot Mode For DCBX Protocol support, select one of the following options:  Dynamic automatically determines the DCBX type currently in use on the attached switch.  IEEE uses only IEEE DCBX protocol.
  • Page 70: Configuring Fcoe Boot

    5–Adapter Preboot Configuration Configuring FCoE Boot For Link Up Delay, select a value from 0 to 30 seconds. The delay value specifies how long a remote boot should wait for the switch to enable the port, such as when the switch port is using spanning tree protocol loop detection.
  • Page 71 5–Adapter Preboot Configuration Configuring FCoE Boot Figure 5-8. FCoE General Parameters Figure 5-9. FCoE Target Configuration Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset. BC0154501-00 P...
  • Page 72: Configuring Iscsi Boot

    5–Adapter Preboot Configuration Configuring iSCSI Boot Configuring iSCSI Boot To configure the iSCSI boot configuration parameters: On the Main Configuration Page, select iSCSI Boot Configuration Menu, and then select one of the following options:  iSCSI General Configuration  iSCSI Initiator Configuration ...
  • Page 73 5–Adapter Preboot Configuration Configuring iSCSI Boot  iSCSI Second Target Configuration (Figure 5-13 on page  Connect  IPv4 Address  TCP Port  Boot LUN  iSCSI Name  CHAP ID  CHAP Secret Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset.
  • Page 74: Configuring Partitions

    5–Adapter Preboot Configuration Configuring Partitions Figure 5-12. iSCSI First Target Configuration Figure 5-13. iSCSI Second Target Configuration Configuring Partitions You can configure bandwidth ranges for each partition on the adapter. To configure the maximum and minimum bandwidth allocations: On the Main Configuration Page, select Partitions Configuration, and then press ENTER.
  • Page 75 5–Adapter Preboot Configuration Configuring Partitions Figure 5-14 shows the Global Bandwidth Configuration page when FCoE Offload and iSCSI Offload are disabled; that is, Ethernet is enabled instead. Figure 5-14. Partitions Configuration Page (No FCoE Offload or iSCSI Offload) Figure 5-15 shows the page when NPAR mode is enabled with FCoE Offload and iSCSI Offload enabled.
  • Page 76 5–Adapter Preboot Configuration Configuring Partitions Figure 5-16. Global Bandwidth Allocation Page Partition n Minimum TX Bandwidth is the minimum transmit  bandwidth of the selected partition expressed as a percentage of the maximum physical port link speed. Values can be –...
  • Page 77 5–Adapter Preboot Configuration Configuring Partitions To configure partitions: To examine a specific partition configuration, on the Partitions Configuration page (Figure 5-14 on page 48), select Partition n Configuration. To configure the first partition, select Partition 1 Configuration to open the Partition 1 Configuration page (Figure 5-17), which shows the following...
  • Page 78 5–Adapter Preboot Configuration Configuring Partitions  World Wide Node Name  Virtual World Wide Node Name  PCI Device ID  PCI (bus) Address Figure 5-18. Partition 2 Configuration: FCoE Offload To configure the third partition, select Partition 3 Configuration to open the Partition 3 Configuration page (Figure 5-19).
  • Page 79 5–Adapter Preboot Configuration Configuring Partitions Figure 5-19. Partition 3 Configuration: iSCSI Offload To configure the remaining Ethernet partitions, including the previous (if not offload-enabled), open the page for a partition 2 or greater Ethernet partition. The Personality is specified as Ethernet (Figure 5-20) and includes the following additional parameters:...
  • Page 80: Changing The Adapter Port Mode

    Before changing the adapter port mode, ensure that your adapter has the most current Flash version. For details on obtaining the latest Flash version from the Cavium Downloads and Documentation Web page, see “Downloading Updates and Documentation” on page xxiii.
  • Page 81: Converting A 25G Adapter To Single-Port 100G Adapter

    Figure 5-21. Changing the Port Mode to 4x25 Save the changes and then power cycle the system. CAUTION Adapter port mode changes alter the base PCIe configuration. Cavium recommends that you perform a cold boot (power cycle) to prevent undefined results.
  • Page 82: Converting A 40G Adapter To A Four-Port 10G Adapter

    Figure 5-22. Changing the Port Mode to 1x100 Save the changes and then power cycle the system. CAUTION Adapter port mode changes alter the base PCIe configuration. Cavium recommends that you perform a cold boot (power cycle) to prevent undefined results.
  • Page 83: Converting A 10G Adapter To A Four-Port 40G Adapter

    Figure 5-23. Changing the Port Mode to 4x10 Save the changes and then power cycle the system. CAUTION Adapter port mode changes alter the base PCIe configuration. Cavium recommends that you perform a cold boot (power cycle) to prevent undefined results.
  • Page 84 Figure 5-24. Changing the Port Mode to 1x40 Save the changes and then power cycle the system. CAUTION Adapter port mode changes alter the base PCIe configuration. Cavium recommends that you perform a cold boot (power cycle) to prevent undefined results.
  • Page 85: Boot From San Configuration

    SAN boot enables deployment of diskless servers in an environment where the boot disk is located on storage connected to the SAN. The server (initiator) communicates with the storage device (target) through the SAN using the Cavium Converged Network Adapter (CNA) Host Bus Adapter (HBA).
  • Page 86: Iscsi Out-Of-Box And Inbox Support

    6–Boot from SAN Configuration iSCSI Boot from SAN iSCSI Out-of-Box and Inbox Support Table 6-1 lists the operating systems’ inbox and out-of-box support for iSCSI boot from SAN (BFS). Table 6-1. iSCSI Out-of-Box and Inbox Boot from SAN Support Out-of-Box Inbox Hardware Hardware...
  • Page 87: Iscsi Preboot Configuration

    Configuring the DHCP Server to Support iSCSI Boot  Supported Operating Systems for iSCSI Boot The Cavium FastLinQ 45000 Series Adapters support iSCSI boot on the following operating systems:  Windows Server 2012 and later 64-bit (supports offload and non-offload paths) ...
  • Page 88: Configuring Adapter Uefi Boot Mode

    6–Boot from SAN Configuration iSCSI Boot from SAN Configuring Adapter UEFI Boot Mode To configure the boot mode: Restart the system. Press the OEM hotkey to enter the System setup or configuration menu. This is also known as UEFI HII. For example, the HPE Gen 9 systems use F9 as a hotkey to access the System Utilities menu at boot time (Figure 6-1).
  • Page 89 6–Boot from SAN Configuration iSCSI Boot from SAN In the System HII, select the QLogic device (Figure 6-2). Refer to the OEM user guide on accessing the PCI device configuration menu. For example, on an HPE Gen 9 server, the System Utilities for QLogic devices are listed on the System Configuration menu.
  • Page 90 6–Boot from SAN Configuration iSCSI Boot from SAN On the Port Level Configuration page (Figure 6-4), select Boot Mode, and then press ENTER to select one of the following iSCSI boot modes:  iSCSI (SW)  iSCSI (HW) Figure 6-4. Port Level Configuration, Boot Mode NOTE The iSCSI (HW) option is not listed if the iSCSI Offload feature is disabled at the port level.
  • Page 91: Selecting The Iscsi Uefi Boot Protocol

    6–Boot from SAN Configuration iSCSI Boot from SAN Selecting the iSCSI UEFI Boot Protocol The Boot Mode option is listed under Port Level Configuration (Figure 6-5) for the adapter, and the setting is port specific. Refer to the OEM user manual for direction on accessing the device-level configuration menu under UEFI HII, which is the System Configuration interface in UEFI mode.
  • Page 92: Configuring Iscsi Boot Parameters

     CHAP ID and secret  Configuring iSCSI Boot Parameters Configure the Cavium QLogic iSCSI boot software for either static or dynamic configuration. For configuration options available from the General Parameters window, see Table 6-2 on page 70, which lists parameters for both IPv4 and IPv6.
  • Page 93 6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-6. Systems Utilities at Boot Time On the Main Configuration Page, select Port Level Configuration (Figure 6-7), and then press ENTER. Figure 6-7. Selecting Port Level Configuration On the Port Level Configuration page (Figure 6-8), select Boot Mode, and then press ENTER to select one of the following iSCSI boot modes:...
  • Page 94: Configuring Iscsi Boot Options

    6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-8. Port Level Configuration, Boot Mode NOTE The iSCSI (HW) option is not listed if the iSCSI Offload feature is disabled at port level. If the preferred boot mode is iSCSI (HW), make sure the iSCSI offload feature is enabled.
  • Page 95 6–Boot from SAN Configuration iSCSI Boot from SAN Static iSCSI Boot Configuration In a static configuration, you must enter data for the following:  Initiator IP address  Initiator IQN  Target parameters (obtained in “Configuring the iSCSI Target” on page For information on configuration options, see Table 6-2 on page To configure the iSCSI boot parameters using static configuration:...
  • Page 96 6–Boot from SAN Configuration iSCSI Boot from SAN In the iSCSI Boot Configuration Menu, select iSCSI General Parameters (Figure 6-10), and then press ENTER. Figure 6-10. Selecting General Parameters On the iSCSI General Configuration page (Figure 6-11), press the DOWN ARROW key to select a parameter, and then press the ENTER key to input the following values (Table 6-2 on page 70...
  • Page 97 6–Boot from SAN Configuration iSCSI Boot from SAN Table 6-2. iSCSI General Configuration Options (Option / Description): TCP/IP Parameters Via DHCP: This option is specific to IPv4. Controls whether the iSCSI boot host software acquires the IP address information using DHCP (Enabled) or using a static IP configuration (Disabled).
  • Page 98 6–Boot from SAN Configuration iSCSI Boot from SAN On the iSCSI Initiator Configuration page (Figure 6-13), select the following parameters, and then type a value for each:  IPv4* Address IPv4* Subnet Mask   IPv4* Default Gateway IPv4* Primary DNS ...
  • Page 99 6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-13. iSCSI Initiator Configuration Return to the iSCSI Boot Configuration Menu, and then press ESC. Select iSCSI First Target Parameters (Figure 6-14), and then press ENTER. Figure 6-14. iSCSI First Target Parameters On the iSCSI First Target Parameters page, set the Connect option to Enabled for the iSCSI target.
  • Page 100 6–Boot from SAN Configuration iSCSI Boot from SAN NOTE For the preceding parameters with an asterisk (*), the label will change to IPv6 or IPv4 (default) based on IP version set on the iSCSI General Configuration page, as shown in Figure 6-15.
  • Page 101 6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-17. Saving iSCSI Changes After all changes have been made, reboot the system to apply the changes to the adapter’s running configuration. Dynamic iSCSI Boot Configuration In a dynamic configuration, ensure that the system’s IP address and target (or initiator) information are provided by a DHCP server (see IPv4 and IPv6 configurations in “Configuring the DHCP Server to Support iSCSI Boot”...
  • Page 102 6–Boot from SAN Configuration iSCSI Boot from SAN NOTE When using a DHCP server, the DNS server entries are overwritten by the values provided by the DHCP server. This override occurs even if the locally provided values are valid and the DHCP server provides no DNS server information.
  • Page 103 6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-18. iSCSI General Configuration Enabling CHAP Authentication Ensure that the CHAP authentication is enabled on the target. To enable CHAP authentication: Go to the iSCSI General Configuration page. Set CHAP Authentication to Enabled. In the Initiator Parameters window, type values for the following: ...
  • Page 104: Configuring The Dhcp Server To Support Iscsi Boot

    Configuring vLANs for iSCSI Boot DHCP iSCSI Boot Configurations for IPv4 DHCP includes several options that provide configuration information to the DHCP client. For iSCSI boot, Cavium QLogic adapters support the following DHCP configurations: DHCP Option 17, Root Path ...
  • Page 105 6–Boot from SAN Configuration iSCSI Boot from SAN Table 6-3. DHCP Option 17 Parameter Definitions (Continued) (Parameter / Definition): <LUN>: Logical unit number to use on the iSCSI target. The value of the LUN must be represented in hexadecimal format. A LUN with an ID of 64 must be configured as 40 within the Option 17 parameter on the DHCP server.
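    As a worked example of the hexadecimal LUN rule above (decimal LUN 64 = hex 40), an ISC dhcpd host entry carrying Option 17 might look like the following sketch; the MAC address, target IP address, and IQN are hypothetical, and the root-path string follows the iscsi:<servername>:<protocol>:<port>:<LUN>:<targetname> format shown on page 107:
        host iscsi-initiator1 {
          hardware ethernet 00:0e:1e:aa:bb:cc;
          option root-path "iscsi:192.168.10.20:6:3260:40:iqn.2005-03.com.example:target1";
        }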
  • Page 106 DHCPv6 Option 17, Vendor-Specific Information  NOTE The DHCPv6 standard Root Path option is not yet available. Cavium suggests using Option 16 or Option 17 for dynamic iSCSI boot IPv6 support. DHCPv6 Option 16, Vendor Class Option DHCPv6 Option 16 (vendor class option) must be present and must contain a string that matches your configured DHCP Vendor ID parameter.
  • Page 107 6–Boot from SAN Configuration iSCSI Boot from SAN Table 6-5 lists the DHCP Option 17 sub-options. Table 6-5. DHCP Option 17 Sub-option Definitions Sub-option Definition First iSCSI target information in the standard root path format: "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN> ": "<targetname>" Second iSCSI target information in the standard root path format: "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN>...
  • Page 108: Configuring Iscsi Boot From San On Windows

    6–Boot from SAN Configuration iSCSI Boot from SAN Figure 6-19. iSCSI Initiator Configuration, VLAN ID Configuring iSCSI Boot from SAN on Windows Adapters support iSCSI boot to enable network boot of operating systems to diskless systems. iSCSI boot allows a Windows operating system to boot from an iSCSI target machine located remotely over a standard IP network.
  • Page 109: Selecting The Preferred Iscsi Boot Mode

    6–Boot from SAN Configuration iSCSI Boot from SAN  Cavium recommends that you disable the Integrated RAID Controller. Selecting the Preferred iSCSI Boot Mode To select the iSCSI boot mode on Windows: On the NIC Partitioning Configuration page for a selected partition, set the iSCSI Offload Mode to Enabled.
  • Page 110: Configuring The Iscsi Targets

    6–Boot from SAN Configuration iSCSI Boot from SAN  IPv4* Default Gateway  IPv4* Primary DNS  IPv4* Secondary DNS  Virtual LAN ID: (Optional) You can isolate iSCSI traffic on the network in a Layer 2 vLAN to segregate it from general traffic. To segregate traffic, make the iSCSI interface on the adapter a member of the Layer 2 vLAN by setting this value.
  • Page 111: Detecting The Iscsi Lun And Injecting The Qlogic Drivers

    6–Boot from SAN Configuration iSCSI Boot from SAN  CHAP ID CHAP Secret  NOTE For the preceding parameters with an asterisk (*), the label will change to IPv6 or IPv4 (default) based on IP version set on the iSCSI General Configuration page, as shown in Figure 6-11 on page If you want to configure a second iSCSI target device, select iSCSI Second...
  • Page 112 6–Boot from SAN Configuration iSCSI Boot from SAN In the Windows Setup window (Figure 6-21), select the drive name on which to install the driver. Figure 6-21. Windows Setup: Selecting Installation Destination Inject the latest QLogic drivers by mounting drivers in the virtual media: Click Load driver, and then click Browse (see Figure 6-22).
  • Page 113: Configuring Iscsi Boot From San On Linux

    6–Boot from SAN Configuration iSCSI Boot from SAN The server will undergo a reboot multiple times as part of the installation process, and then will boot up from the iSCSI boot from SAN LUN. If it does not automatically boot, access the Boot Menu and select the specific port boot entry to boot from the iSCSI LUN.
  • Page 114 6–Boot from SAN Configuration iSCSI Boot from SAN The installation process prompts you to install the out-of-box driver as shown in the Figure 6-23 example. Figure 6-23. Prompt for Out-of-Box Installation If required for your setup, load the FastLinQ driver update disk when prompted for additional driver disks.
  • Page 115 6–Boot from SAN Configuration iSCSI Boot from SAN In the Configuration window (Figure 6-24), select the language to use during the installation process, and then click Continue. Figure 6-24. Red Hat Enterprise Linux 7.4 Configuration In the Installation Summary window, click Installation Destination. The disk label is sda, indicating a single-path installation.
  • Page 116 6–Boot from SAN Configuration iSCSI Boot from SAN Reboot the server, and then add the following parameters in the command line: rd.iscsi.firmware rd.driver.pre=qed,qedi (to load all drivers: pre=qed,qedi,qede,qedf) selinux=0 After a successful system boot, edit the /etc/modprobe.d/anaconda-blacklist.conf file to remove the blacklist entry for the selected driver.
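    Pulled together, the kernel command-line additions described above look like this (append them to the existing boot entry; the kernel path and other arguments are left unchanged):
        rd.iscsi.firmware rd.driver.pre=qed,qedi selinux=0
        rd.iscsi.firmware rd.driver.pre=qed,qedi,qede,qedf selinux=0     (variant that pre-loads all drivers)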
  • Page 117: Configuring Iscsi Boot From San For Rhel 7.5 And Later

    6–Boot from SAN Configuration iSCSI Boot from SAN Configuring iSCSI Boot from SAN for RHEL 7.5 and Later To install RHEL 7.5 and later: Boot from the RHEL 7.x installation media with the iSCSI target already connected in UEFI. Install Red Hat Enterprise Linux 7.x Test this media &...
  • Page 118: Configuring Iscsi Boot From San For Sles 12 Sp3

    Boot from the SLES 12 SP3 installation media with the iSCSI target pre-configured and connected in UEFI. NOTE Although they are not necessarily required for iSCSI boot from SAN for SUSE, Cavium recommends that you also complete Step 2 through Step 15 of the Configuring iSCSI Boot from SAN for RHEL 7.5 and...
  • Page 119: Configuring Iscsi Boot From San For Other Linux Distributions

    6–Boot from SAN Configuration iSCSI Boot from SAN Known Issue in DHCP Configuration In DHCP configuration for SLES 12 SP3 and later, the first boot after an OS installation may fail if the initiator IP address acquired from the DHCP server is in a different range than the target IP address.
  • Page 120 6–Boot from SAN Configuration iSCSI Boot from SAN For RHEL 6.9: Boot into the OS, and press the E key. At the prompt, grub edit> edit the kernel boot parameters by issuing the following command: grub edit> kernel /images/pxeboot/vmlinuz linux dd ip=ibft Press ENTER until you are prompted for the DUD.
  • Page 121 6–Boot from SAN Configuration iSCSI Boot from SAN Migrating to Offload iSCSI for RHEL 6.9/6.10 To migrate from a software iSCSI installation to an offload iSCSI for RHEL 6.9 or 6.10: Boot into the iSCSI non-offload/L2 boot from SAN operating system. Issue the following commands to install the Open-iSCSI and iscsiuio RPMs: # rpm -ivh --force qlgc-open-iscsi-2.0_873.111-1.x86_64.rpm # rpm -ivh --force iscsiuio-2.11.5.2-1.rhel6u9.x86_64.rpm...
  • Page 122 6–Boot from SAN Configuration iSCSI Boot from SAN initrd /initramfs-2.6.32-696.el6.x86_64.img kernel /vmlinuz-2.6.32-696.el6.x86_64 ro root=/dev/mapper/vg_prebooteit-lv_root rd_NO_LUKS iscsi_firmware LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_DM rd_LVM_LV=vg_prebooteit/lv_swap KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_prebooteit/lv_root selinux=0 initrd /initramfs-2.6.32-696.el6.x86_64.img Build the initramfs file by issuing the following command: # dracut -f For RHEL 6.10 boot from SAN multipath, edit the /etc/iscsi/iscsid.conf file as follows:
  • Page 123 6–Boot from SAN Configuration iSCSI Boot from SAN Migrating to Offload iSCSI for RHEL 7.2/7.3 To migrate from a software iSCSI installation to an offload iSCSI for RHEL 7.2/7.3: Update Open-iSCSI tools and iscsiuio by issuing the following commands: #rpm -ivh qlgc-open-iscsi-2.0_873.111.rhel7u3-3.x86_64.rpm --force #rpm -ivh iscsiuio-2.11.5.3-2.rhel7u3.x86_64.rpm --force NOTE To retain the original contents of the inbox RPM during installation, you...
  • Page 124 6–Boot from SAN Configuration iSCSI Boot from SAN In the HII, open System Setup, and then select System BIOS, Device Settings. On the Device Settings page, select the adapter port, and then select Port Level Configuration. On the Port Level Configuration page, set the Boot Mode to iSCSI (HW) and set iSCSI Offload to Enabled.
  • Page 125 6–Boot from SAN Configuration iSCSI Boot from SAN INITRD_MODULES="ahci qedi" Save the file. Edit the /etc/modprobe.d/unsupported-modules file, change the allow_unsupported_modules value to 1, and then save the file: allow_unsupported_modules 1 Locate and delete the following files:  /etc/init.d/boot.d/K01boot.open-iscsi  /etc/init.d/boot.open-iscsi Create a backup of initrd, and then build a new initrd by issuing the following commands.
  • Page 126 6–Boot from SAN Configuration iSCSI Boot from SAN # rpm -ivh iscsiuio-2.11.5.3-2.sles12sp2.x86_64.rpm --force NOTE To retain the original contents of the inbox RPM during installation, you must use the -ivh option (instead of the -Uvh option), followed by the --force option. Reload all the daemon services by issuing the following command: # systemctl daemon-reload Enable iscsid and iscsiuio services (if they are not already enabled) by...
  • Page 127 6–Boot from SAN Configuration iSCSI Boot from SAN Build the initramfs file by issuing the following command: # dracut -f Reboot the server, and then open the HII. In the HII, open System Configuration, select the adapter port, and then select Port Level Configuration.
  • Page 128 6–Boot from SAN Configuration iSCSI Boot from SAN Migrating and Configuring MPIO to Offloaded Interface for RHEL 6.9/6.10 To migrate from L2 to L4 and configure MPIO to boot the OS over an offloaded interface for RHEL 6.9/6.10: Configure the iSCSI boot settings for L2 BFS on both ports of the adapter. The boot will log in through only one port, but will create an iSCSI Boot Firmware Table (iBFT) interface for both ports.
  • Page 129 6–Boot from SAN Configuration iSCSI Boot from SAN Reboot the server and boot into OS with multipath. NOTE For any additional changes in the /etc/multipath.conf file to take effect, you must rebuild the initrd image and reboot the server. Migrating and Configuring MPIO to Offloaded Interface for RHEL 7.2/7.3 To migrate from L2 to L4 and configure MPIO to boot the OS over an offloaded interface for RHEL 7.2/7.3: Configure the iSCSI boot settings for L2 BFS on both ports of the adapter.
  • Page 130 6–Boot from SAN Configuration iSCSI Boot from SAN Change this statement to: GRUB_CMDLINE_LINUX="rd.iscsi.firmware" Create a new grub.cfg file by issuing the following command: # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg Build the initramfs file by issuing the following command: # dracut -f Reboot and change the adapter boot settings to use L4 for both ports and boot through L4.
  • Page 131 6–Boot from SAN Configuration iSCSI Boot from SAN On the Main Configuration Page under Port Level Configuration, set Boot Mode to iSCSI (HW) and enable iSCSI Offload. On the Main Configuration Page under iSCSI Configuration, perform the necessary iSCSI configurations. Reboot the server and boot into OS.
  • Page 132 6–Boot from SAN Configuration iSCSI Boot from SAN Migrating and Configuring MPIO to Offloaded Interface for SLES 12 SP1/SP2 To migrate from L2 to L4 and configure MPIO to boot the OS over an offloaded interface for SLES 12 SP1/SP2: Configure the iSCSI boot settings for L2 BFS on both ports of the adapter.
  • Page 133 6–Boot from SAN Configuration iSCSI Boot from SAN Change the UEFI configuration to use L4 iSCSI boot: Open System Configuration, select the adapter port, and then select Port Level Configuration. On the Port Level Configuration page, set the Boot Mode to iSCSI (HW) and set iSCSI Offload to Enabled.
  • Page 134: Configuring Iscsi Boot From San On Vmware

    FCoE Boot from SAN Cavium 45000 Series Adapters support FCoE boot to enable network boot of operating systems to diskless systems. FCoE boot allows a Windows, Linux, or VMware operating system to boot from a Fibre Channel or FCoE target machine located remotely over an FCoE supporting network.
  • Page 135: Fcoe Preboot Configuration

    6–Boot from SAN Configuration FCoE Boot from SAN Table 6-6. FCoE Out-of-Box and Inbox Boot from SAN Support Inbox Out-of-Box Hardware Offload Hardware Offload OS Version FCoE BFS Support FCoE BFS Support RHEL 6.10 RHEL 7.4 RHEL 7.5 SLES 12 SP4 SLES 15 ESXi 6.0 U3 ESXi 6.5 U1...
  • Page 136: Configuring Adapter Uefi Boot Mode

    6–Boot from SAN Configuration FCoE Boot from SAN Configuring Adapter UEFI Boot Mode To configure the boot mode to FCOE: Restart the system. Press the OEM hot key to enter System Setup or the configuration menu (Figure 6-25). This is also known as UEFI HII. For example, the HPE Gen 9 system uses F9 as a hotkey to access the System Utilities menu at boot time.
  • Page 137 6-26). Refer to the OEM user guide on accessing PCI device configuration menu. For example, on an HPE Gen 9 server, the System Utilities for Cavium QLogic devices are listed under the System Configuration menu. Figure 6-26. System Configuration, Port Selection...
  • Page 138 6–Boot from SAN Configuration FCoE Boot from SAN On the Port Level Configuration (Figure 6-28) page, select Boot Mode, press ENTER, and then select FCoE as a preferred boot mode. Figure 6-28. Boot Mode in Port Level Configuration NOTE FCoE is not listed as a boot option if the FCoE Offload feature is disabled at the port level.
  • Page 139 6–Boot from SAN Configuration FCoE Boot from SAN To configure the FCoE boot parameters: On the Device HII Main Configuration Page, select FCoE Configuration (Figure 6-30), and then press ENTER. Figure 6-30. Selecting FCoE Boot Configuration In the FCoE Boot Configuration Menu, select FCoE General Parameters (Figure 6-31), and then press ENTER.
  • Page 140 6–Boot from SAN Configuration FCoE Boot from SAN In the FCoE General Parameters menu (Figure 6-32), press the UP ARROW and DOWN ARROW keys to select a parameter, and then press ENTER to select and input the following values:  FIP VLAN ID: As required (if not set, adapter will attempt FIP VLAN discovery) ...
  • Page 141: Configuring Fcoe Boot From San On Windows

    Windows Server 2012 R2 and 2016 FCoE Boot Installation For Windows Server 2012 R2/2016 boot from SAN installation, Cavium requires the use of a “slipstream” DVD, or ISO image, with the latest Cavium QLogic drivers injected. See “Injecting (Slipstreaming) Adapter Drivers into Windows Image Files”
  • Page 142: Configuring Fcoe On Windows

    Files” on page 116. Load the latest Cavium QLogic FCoE boot images into the adapter NVRAM. Configure the FCoE target to allow a connection from the remote device. Ensure that the target has sufficient disk space to hold the new OS installation.
  • Page 143: Image Files

    Extract the driver package to a network location; for example, type c:\temp. Follow the driver installer instructions to install the drivers in the specified folder. In this example, the Cavium QLogic driver files are installed here: c:\temp\Program File 64\QLogic Corporation\QDrivers Download the Windows Assessment and Deployment Kit (ADK) version 10 from Microsoft: https://developer.microsoft.com/en-us/windows/hardware/
  • Page 144: Configuring Fcoe Boot From San On Linux

    Prerequisites for Linux FCoE Boot from SAN The following are required for Linux FCoE boot from SAN to function correctly with the Cavium FastLinQ 45000 10/25GbE Controller. General You no longer need to use the FCoE disk tabs in the Red Hat and SUSE installers because the FCoE interfaces are not exposed from the network interface and are automatically activated by the qedf driver.
  • Page 145: Configuring Linux Fcoe Boot From San

    6–Boot from SAN Configuration FCoE Boot from SAN  The dud=1 installer parameter is required to ensure that the installer will ask for the driver update disk.  Do not use the withfcoe=1 installer parameter because the software FCoE will conflict with the hardware offload if network interfaces from qede are exposed.
  • Page 146 6–Boot from SAN Configuration FCoE Boot from SAN Modify the kernel line by typing linux dd at the end, as shown in the following: grub edit> kernel /images/pxeboot/vmlinuz nomodeset askmethod linux dd Press ENTER to initiate the driver update process. Press the B key to continue with the installation.
  • Page 147 6–Boot from SAN Configuration FCoE Boot from SAN At the Please select the drives... prompt, on the Basic Devices page, select the FCoE LUN. Figure 6-35 shows an example. Figure 6-35. Selecting the Drives Complete the installation and boot to the OS. Turn off the lldpad and fcoe services that are used for software FCoE.
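    On a systemd-based distribution, stopping and disabling the software-FCoE services named above is typically done as follows (a sketch; the manual may give distribution-specific commands):
        # systemctl stop fcoe lldpad
        # systemctl disable fcoe lldpad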
  • Page 148 6–Boot from SAN Configuration FCoE Boot from SAN The installation process prompts you to install the out-of-box driver as shown in the Figure 6-23 example. Figure 6-36. Prompt for Out-of-Box Installation If required for your setup, load the FastLinQ driver update disk when prompted for additional driver disks.
  • Page 149 6–Boot from SAN Configuration FCoE Boot from SAN In the Configuration window (Figure 6-24), select the language to use during the installation process, and then click Continue. Figure 6-37. Red Hat Enterprise Linux 7.4 Configuration In the Installation Summary window, click Installation Destination. The disk label is sda, indicating a single-path installation.
  • Page 150: Configuring Fcoe Boot From San On Vmware

    Configuring FCoE Boot from SAN on VMware For VMware ESXi 6.5/6.7 boot from SAN installation, Cavium requires that you use a customized ESXi ISO image that is built with the latest Cavium QLogic Converged Network Adapter bundle injected. This section covers the following VMware FCoE boot from SAN procedures.
  • Page 151: Installing The Customized Esxi Iso

    Use the new DVD to install the ESXi OS. Installing the Customized ESXi ISO Load the latest Cavium QLogic FCOE boot images into the adapter NVRAM. Configure the FCOE target to allow a valid connection with the remote machine. Ensure that the target has sufficient free disk space to hold the new OS installation.
  • Page 152 6–Boot from SAN Configuration FCoE Boot from SAN Save the settings and reboot the system. The initiator should connect to an FCOE target and then boot the system from the DVD-ROM device. Boot from the DVD and begin installation. Follow the on-screen instructions. On the window that shows the list of disks available for the installation, the FCOE target disk should be visible because the injected Converged Network Adapter bundle is inside the customized ESXi ISO.
  • Page 153 6–Boot from SAN Configuration FCoE Boot from SAN In the example shown in Figure 6-40, the first two ports indicate Cavium QLogic adapters. Figure 6-40. VMware Generic USB Boot Options BC0154501-00 P...
  • Page 154: Roce Configuration

    RoCE Configuration This chapter describes RDMA over converged Ethernet (RoCE v1 and v2) configuration on the 45000 Series Adapter, the Ethernet switch, and the Windows, Linux, or VMware host, including:  Supported Operating Systems and OFED  “Planning for RoCE” on page 128 “Preparing the Adapter”...
  • Page 155: Planning For Roce

    7–RoCE Configuration Planning for RoCE Table 7-1. OS Support for RoCE v1, RoCE v2, iWARP, iSER, and OFED (Continued) (Operating System / Inbox OFED / OFED 4.8-2 GA): SLES 12 SP3: Inbox OFED: RoCE v1, RoCE v2, iWARP, iSER; OFED 4.8-2 GA: RoCE v1, RoCE v2, iWARP. SLES 12 SP4: Inbox OFED: RoCE v1, RoCE v2, iWARP, iSER. SLES 15...
  • Page 156: Preparing The Adapter

    7–RoCE Configuration Preparing the Adapter Preparing the Adapter Follow these steps to enable DCBX and specify the RoCE priority using the HII management application. For information about the HII application, see Chapter 5 Adapter Preboot Configuration. To prepare the adapter: On the Main Configuration Page, select Data Center Bridging (DCB) Settings, and then click Finish.
  • Page 157 7–RoCE Configuration Preparing the Ethernet Switch To configure the Cisco switch: Open a config terminal session as follows: Switch# config terminal switch(config)# Configure the quality of service (QoS) class map and set the RoCE priority to match the adapter (5) as follows: switch(config)# class-map type qos class-roce switch(config)# match cos 5 Configure queuing class maps as follows:
  • Page 158: Configuring The Dell Z9100 Ethernet Switch

    7–RoCE Configuration Preparing the Ethernet Switch Assign a vLAN ID to the switch port to match the vLAN ID assigned to the adapter (5). switch(config)# interface ethernet x/x switch(config)# switchport mode trunk switch(config)# switchport trunk allowed vlan 1,5 Configuring the Dell Z9100 Ethernet Switch Configuring the Dell Z9100 Ethernet Switch for RoCE comprises configuring a DCB map for RoCE, configuring priority-based flow control (PFC) and enhanced transmission selection (ETS), verifying the DCB map, applying the DCB map to the port, verifying PFC...
  • Page 159 7–RoCE Configuration Preparing the Ethernet Switch Apply the DCB map to the port. Dell(conf)# interface twentyFiveGigE 1/8/1 Dell(conf-if-tf-1/8/1)# dcb-map roce Verify the ETS and PFC configuration on the port. The following examples show summarized interface information for ETS and detailed interface information for PFC.
  • Page 160: Configuring The Arista 7060X Ethernet Switch

    7–RoCE Configuration Preparing the Ethernet Switch ISCSI TLV Tx Status is enabled Local FCOE PriorityMap is 0x0 Local ISCSI PriorityMap is 0x20 Remote ISCSI PriorityMap is 0x200 66 Input TLV pkts, 99 Output TLV pkts, 0 Error pkts, 0 Pause Tx pkts, 0 Pause Rx pkts 66 Input Appln Priority TLV pkts, 99 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts...
  • Page 161 7–RoCE Configuration Preparing the Ethernet Switch Arista-7060X-EIT(config)#dcbx ets qos map cos 2 traffic-class 0 Arista-7060X-EIT(config)#dcbx ets qos map cos 3 traffic-class 0 Arista-7060X-EIT(config)#dcbx ets qos map cos 4 traffic-class 0 Arista-7060X-EIT(config)#dcbx ets qos map cos 5 traffic-class 1 Arista-7060X-EIT(config)#dcbx ets qos map cos 6 traffic-class 0 Arista-7060X-EIT(config)#dcbx ets qos map cos 7 traffic-class 0 Setting Up ETS In the following example,...
  • Page 162: Configuring Roce On The Adapter For Windows Server

    7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Configuring RoCE on the Adapter for Windows Server Configuring RoCE on the adapter for Windows Server host comprises enabling RoCE on the adapter and verifying the Network Direct MTU size. To configure RoCE on a Windows Server host: Enable RoCE on the adapter.
  • Page 163 7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Figure 7-1 shows an example of configuring a property value. Figure 7-1. Configuring RoCE Properties Using Windows PowerShell, verify that RDMA is enabled on the adapter. The Get-NetAdapterRdma command lists the adapters that support RDMA—both ports are enabled.
  • Page 164 7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Using Windows PowerShell, verify that NetworkDirect is enabled on the host operating system. The Get-NetOffloadGlobalSetting command output shows NetworkDirect is enabled. PS C:\Users\Administrators> Get-NetOffloadGlobalSetting ReceiveSideScaling : Enabled ReceiveSegmentCoalescing : Enabled Chimney : Disabled TaskOffload : Enabled...
  • Page 165: Viewing Rdma Counters

    7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Viewing RDMA Counters The following procedure also applies to iWARP. To view RDMA counters for RoCE: Launch Performance Monitor. Open the Add Counters dialog box. Figure 7-2 shows an example. Figure 7-2.
  • Page 166 7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server NOTE If Cavium RDMA counters are not listed in the Performance Monitor Add Counters dialog box, manually add them by issuing the following command from the driver location: Lodctr /M:qend.man Select one of the following counter types: ...
  • Page 167 7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Figure 7-3 shows three examples of the counter monitoring output. Figure 7-3. Performance Monitor: Cavium FastLinQ Counters Table 7-3 provides details about error counters. Table 7-3. Cavium FastLinQ RDMA Error Counters...
  • Page 168 7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Table 7-3. Cavium FastLinQ RDMA Error Counters (Continued) Applies Applies RDMA Error Description Troubleshooting Counter RoCE? iWARP? Posted work requests may be Occurs when the Requestor flushed by sending completions with...
  • Page 169 7–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Table 7-3. Cavium FastLinQ RDMA Error Counters (Continued) Applies Applies RDMA Error Description Troubleshooting Counter RoCE? iWARP? Remote side could not complete the A software issue at the Requestor operation requested due to a local...
  • Page 170: Configuring Roce On The Adapter For Linux

    7–RoCE Configuration Configuring RoCE on the Adapter for Linux Table 7-3. Cavium FastLinQ RDMA Error Counters (Continued) Applies Applies RDMA Error Description Troubleshooting Counter RoCE? iWARP? An internal QP consistency error Indicates a software Responder was detected while processing this issue.
  • Page 171: Roce Configuration For Rhel

    7–RoCE Configuration Configuring RoCE on the Adapter for Linux RoCE Configuration for RHEL To configure RoCE on the adapter, the Open Fabrics Enterprise Distribution (OFED) must be installed and configured on the RHEL host. To prepare inbox OFED for RHEL: While installing or upgrading the operating system, select the InfiniBand and OFED support packages.
  • Page 172: Roce Configuration For Ubuntu

    7–RoCE Configuration Configuring RoCE on the Adapter for Linux perftest-x.x.x.x86_64.rpm (required for bandwidth and latency applications) Install the Linux drivers, as described in “Installing the Linux Drivers with RDMA” on page RoCE Configuration for Ubuntu To configure RoCE on the adapter for an Ubuntu host, RDMA must be installed and configured on the Ubuntu host.
  • Page 173 7–RoCE Configuration Configuring RoCE on the Adapter for Linux KERNEL=="rdma_cm", NAME="infiniband/%k", MODE="0666" Edit the /etc/security/limits.conf file to increase the amount of memory that can be locked by a non-root user. Add the following lines, and then log out: * soft memlock unlimited * hard memlock unlimited root soft memlock unlimited root hard memlock unlimited...
  • Page 174: Verifying The Roce Configuration On Linux

    7–RoCE Configuration Configuring RoCE on the Adapter for Linux Load the RDMA modules by issuing the following commands. You must perform this step whenever you reboot the system. # modprobe rdma_cm # modprobe ib_uverbs # modprobe rdma_ucm # modprobe ib_ucm # modprobe ib_umad To list RoCE devices, issue the following command: # ibv_devinfo...
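    These modules can also be loaded automatically at boot instead of after every reboot. A minimal sketch, assuming a systemd-based distribution (the qed-rdma.conf file name is only an example):
    # printf "%s\n" rdma_cm ib_uverbs rdma_ucm ib_ucm ib_umad > /etc/modules-load.d/qed-rdma.conf
    # cat /etc/modules-load.d/qed-rdma.conf
    On the next boot, systemd-modules-load loads the listed modules before any RoCE application starts.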
  • Page 175 7–RoCE Configuration Configuring RoCE on the Adapter for Linux On RHEL or CentOS: Use the service rdma status command to check whether the rdma service has started:  If RDMA has not started, issue the following command: # service rdma start  If RDMA does not start, issue either of the following alternative commands: # /etc/init.d/rdma start # systemctl start rdma.service
  • Page 176 7–RoCE Configuration Configuring RoCE on the Adapter for Linux port: state: PORT_ACTIVE (1) max_mtu: 4096 (5) active_mtu: 1024 (3) sm_lid: port_lid: port_lmc: 0x00 link_layer: Ethernet Verify the L2 and RoCE connectivity between all servers: one server acts as a server, another acts as a client. Verify the L2 connection using a simple command.
  • Page 177: Vlan Interfaces And Gid Index Values

    7–RoCE Configuration Configuring RoCE on the Adapter for Linux  To display RoCE statistics, issue the following commands, where X is the device number: > mount -t debugfs nodev /sys/kernel/debug > cat /sys/kernel/debug/qedr/qedrX/stats vLAN Interfaces and GID Index Values If you are using vLAN interfaces on both the server and the client, you must also configure the same vLAN ID on the switch.
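    As a convenience, the same statistics can be dumped for every qedr device in one pass. A minimal sketch, assuming debugfs is already mounted as shown above:
    # for d in /sys/kernel/debug/qedr/qedr*; do echo "=== $d ==="; cat $d/stats; done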
  • Page 178: Identifying The Roce V2 Gid Index Or Address

    7–RoCE Configuration Configuring RoCE on the Adapter for Linux Enable L3 routing on the switch. NOTE You can configure RoCE v1 and RoCE v2 by using RoCE v2-supported kernels. These kernels allow you to run RoCE traffic over the same subnet, as well as over different subnets such as RoCE v2 and any routable environment.
  • Page 179: Verifying The Roce V1 Or Roce V2 Function Through

    7–RoCE Configuration Configuring RoCE on the Adapter for Linux  Option 2: Use the scripts from the FastLinQ source package. #/../fastlinq-8.x.x.x/add-ons/roce/show_gids.sh PORT INDEX IPv4 ---- ----- ------------ qedr0 fe80:0000:0000:0000:020e:1eff:fec4:1b20 p4p1 qedr0 fe80:0000:0000:0000:020e:1eff:fec4:1b20 p4p1 qedr0 0000:0000:0000:0000:0000:ffff:1e01:010a 30.1.1.10 p4p1 qedr0 0000:0000:0000:0000:0000:ffff:1e01:010a 30.1.1.10 p4p1 qedr0 3ffe:ffff:0000:0f21:0000:0000:0000:0004...
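    If the show_gids.sh script is not available, much of the same information can be read directly from sysfs on kernels that expose the gid_attrs entries. A minimal sketch for qedr0 port 1 (GID index 3 is only an example):
    # cat /sys/class/infiniband/qedr0/ports/1/gids/3
    # cat /sys/class/infiniband/qedr0/ports/1/gid_attrs/types/3
    # cat /sys/class/infiniband/qedr0/ports/1/gid_attrs/ndevs/3
    The first command prints the GID itself, the second prints its RoCE version, and the third prints the network interface that backs it.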
  • Page 180 You must first configure the route settings for the switch and servers. On the adapter, set the RoCE priority and DCBX mode using either the HII, UEFI user interface, or one of the Cavium management utilities (see “Adapter Management” on page...
  • Page 181 7–RoCE Configuration Configuring RoCE on the Adapter for Linux  If you are using PFC configuration and L3 routing, run RoCE v2 traffic over the vLAN using a different subnet, and use the RoCE v2 vLAN GID index. Server# ib_send_bw -d qedr0 -F -x 5 Client# ib_send_bw -d qedr0 -F -x 5 192.168.100.3 Server Switch Settings: Figure 7-4.
  • Page 182 7–RoCE Configuration Configuring RoCE on the Adapter for Linux Client Switch Settings: Figure 7-5. Switch Settings, Client Configuring RoCE v1 or RoCE v2 Settings for RDMA_CM Applications To configure RoCE, use the following scripts from the FastLinQ source package: # ./show_rdma_cm_roce_ver.sh qedr0 is configured to IB/RoCE v1 qedr1 is configured to IB/RoCE v1 # ./config_rdma_cm_roce_ver.sh v2...
  • Page 183: Configuring Roce On The Adapter For Vmware Esx

    7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Client Settings: Figure 7-7. Configuring RDMA_CM Applications: Client Configuring RoCE on the Adapter for VMware ESX This section provides the following procedures and information for RoCE configuration:  Configuring RDMA Interfaces Configuring MTU ...
  • Page 184 7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX To view a list of the RDMA devices, issue the esxcli rdma device list command. For example: esxcli rdma device list Name Driver State Speed Paired Uplink Description ------- ------- ------ ---- -------...
  • Page 185: Configuring Mtu

    7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Configuring MTU To modify the MTU for an RoCE interface, change the MTU of the corresponding vSwitch. Set the MTU size of the RDMA interface based on the MTU of the vSwitch by issuing the following command: # esxcfg-vswitch -m <new MTU>...
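    For example, a hedged sketch that sets a 4,096-byte MTU on a vSwitch named RDMA-vSwitch (the vSwitch name is only an example) and then rechecks the RDMA interfaces:
    # esxcfg-vswitch -m 4096 RDMA-vSwitch
    # esxcfg-vswitch -l
    # esxcli rdma device list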
  • Page 186: Configuring A Paravirtual Rdma Device (Pvrdma)

    7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Queue pairs in RTS state: 0 Queue pairs in SQD state: 0 Queue pairs in SQE state: 0 Queue pairs in ERR state: 0 Queue pair events: 0 Completion queues allocated: 1 Completion queue events: 0 Shared receive queues allocated: 0 Shared receive queue events: 0...
  • Page 187 7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Figure 7-8 shows an example. Figure 7-8. Configuring a New Distributed Switch Configure a distributed virtual switch as follows: In the VMware vSphere Web Client, expand the RoCE node in the left pane of the Navigator window.
  • Page 188 7–RoCE Configuration Configuring RoCE on the Adapter for VMware ESX Figure 7-9 shows an example. Figure 7-9. Assigning a vmknic for PVRDMA Set the firewall rule for the PVRDMA: Right-click a host, and then click Settings. On the Settings page, expand the System node, and then click Security Profile.
  • Page 189: Configuring Dcqcn

    7–RoCE Configuration Configuring DCQCN Set up the VM for PVRDMA as follows: Install one of the following supported guest OSs:  RHEL 7.5 and 7.6  Ubuntu 14.04 (kernel version 4.0) Install OFED-3.18. Compile and install the PVRDMA guest driver and library. Add a new PVRDMA network adapter to the VM as follows: ...
  • Page 190: Dcqcn Terminology

    3 is used for the FCoE traffic group, and 4 is used for the iSCSI-TLV traffic group. You may encounter DCB mismatch issues if you attempt to reuse these numbers on networks that also support FCoE or iSCSI-TLV traffic. Cavium recommends that you use numbers 1–2 or 5–7 for RoCE-related traffic groups. ...
  • Page 191: Dcb-Related Parameters

    7–RoCE Configuration Configuring DCQCN  All traffic of the specified priority is affected, even if there is a subset of specific connections that are causing the congestion.  PFC is a single-hop mechanism. That is, if a receiver experiences congestion and indicates the congestion through a PFC packet, only the nearest neighbor will react.
  • Page 192: Setting Ecn On Rdma Traffic

    7–RoCE Configuration Configuring DCQCN Setting ECN on RDMA Traffic Use the rdma_glob_ecn node to enable ECN for a specified RoCE priority. For example, to enable ECN on RoCE traffic using priority 5, issue the following command: ./debugfs.sh -n eth0 -t rdma_glob_ecn 1 This command is typically required when DCQCN is enabled.
  • Page 193: Dcqcn Algorithm Parameters

    dcqcn_cnp_vlan_priority: vLAN priority of transmitted CNPs. Values range between 0..7. FCoE-Offload uses 3 and iSCSI-Offload-TLV generally uses 4. Cavium recommends that you specify a number from 1–2 or 5–7. Use this same value throughout the entire network.
  • Page 194: Script Example

     DCQCN mode currently supports only up to 64 QPs.  Cavium adapters can determine vLAN priority for PFC purposes from vLAN priority or from DSCP bits in the ToS field. However, in the presence of both, vLAN takes precedence.
  • Page 195: Iwarp Configuration

    iWARP Configuration Internet wide area RDMA protocol (iWARP) is a computer networking protocol that implements RDMA for efficient data transfer over IP networks. iWARP is designed for multiple environments, including LANs, storage networks, data center networks, and WANs. This chapter provides instructions for: ...
  • Page 196: Configuring Iwarp On Windows

    8–iWARP Configuration Configuring iWARP on Windows In the Success - Saving Changes message box, click OK. Repeat Step 2 through Step 7 to configure the NIC and iWARP for the other ports. To complete adapter preparation of both ports: On the Device Settings page, click Finish. On the main menu, click Finish.
  • Page 197 8–iWARP Configuration Configuring iWARP on Windows Using Windows PowerShell, verify that NetworkDirect is enabled. The Get-NetOffloadGlobalSetting command output (Figure 8-2) shows NetworkDirect : Enabled. Figure 8-2. Windows PowerShell Command: Get-NetOffloadGlobalSetting To verify iWARP traffic: Map SMB drives and run iWARP traffic. Launch Performance Monitor (Perfmon).
  • Page 198 If iWARP traffic is running, counters appear as shown in the Figure 8-4 example. Figure 8-4. Perfmon: Verifying iWARP Traffic NOTE For more information on how to view Cavium RDMA counters in Windows, see “Viewing RDMA Counters” on page 138. To verify the SMB connection:...
  • Page 199: Configuring Iwarp On Linux

    [fe80::71ea:bdd2:ae41:b95f%60]:445 NA Kernel 60 Listener 192.168.11.20:16159 192.168.11.10:445 Configuring iWARP on Linux Cavium 45000 Series Adapters support iWARP on the Linux Open Fabric Enterprise Distributions (OFEDs) listed in Table 7-1 on page 127. iWARP configuration on a Linux system includes the following: ...
  • Page 200: Detecting The Device

    8–iWARP Configuration Configuring iWARP on Linux For example, to change the interface on the port given by 04:00.0 from RoCE to iWARP, issue the following command: # modprobe -v qed rdma_protocol_map=04:00.0-3 Load the RDMA driver by issuing the following command: # modprobe -v qedr The following example shows the command entries to change the RDMA protocol to iWARP on multiple NPAR interfaces:...
  • Page 201: Supported Iwarp Applications

    8–iWARP Configuration Configuring iWARP on Linux vendor_part_id: 5718 hw_ver: phys_port_cnt: port: state: PORT_ACTIVE (4) max_mtu: 4096 (5) active_mtu: 1024 (3) sm_lid: port_lid: port_lmc: 0x00 link_layer: Ethernet Supported iWARP Applications Linux-supported RDMA applications for iWARP include the following:  ibv_devinfo, ib_devices ...
  • Page 202: Configuring Nfs-Rdma

    8–iWARP Configuration Configuring iWARP on Linux Send BW Test Dual-port : OFF Device : qedr1 Number of qps Transport type : IW Connection type : RC Using SRQ : OFF TX depth : 128 CQ Moderation : 100 : 1024[B] Link type : Ethernet GID index...
  • Page 203 8–iWARP Configuration Configuring iWARP on Linux /tmp/nfs-server *(fsid=0,async,insecure,no_root_squash) Ensure that you use a different file system identification (FSID) for each directory that you export. Load the svcrdma module as follows: # modprobe svcrdma Load the service as follows:  For SLES, enable and start the NFS server alias: # systemctl enable|start|status nfsserver ...
  • Page 204: Iwarp Rdma-Core Support On Sles 12 Sp3 And Rhel 7.4

    8–iWARP Configuration Configuring iWARP on Linux For NFS Version 4: # mount -t nfs4 -o rdma,port=20049 192.168.2.4:/tmp/nfs-server /tmp/nfs-client NOTE The default port for NFSoRDMA is 20049. However, any other port that is aligned with the NFS client will also work. Verify that the file system is mounted by issuing the command.
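    As a hedged sketch of the server-side steps that pair with this client mount (the export path and port are the examples used in this section), the RDMA transport is typically published like this:
    Server# modprobe svcrdma
    Server# exportfs -a
    Server# echo "rdma 20049" > /proc/fs/nfsd/portlist
    On the client, loading the xprtrdma module before issuing the mount is also common:
    Client# modprobe xprtrdma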
  • Page 205 8–iWARP Configuration Configuring iWARP on Linux To run all OFED applications from the current RDMA-Core-master location, issue the following command: # ls <rdma-core-master>/build/bin cmpost ib_acme ibv_devinfo ibv_uc_pingpong iwpmd rdma_client rdma_xclient rping ucmatose umad_compile_test cmtime ibv_asyncwatch ibv_rc_pingpong ibv_ud_pingpong mckey rdma-ndd rdma_xserver rstream udaddy umad_reg2 ibacm ibv_devices...
  • Page 206: Iser Configuration

    iSER Configuration This chapter provides procedures for configuring iSCSI Extensions for RDMA (iSER) for Linux (RHEL, SLES, and Ubuntu) and VMware ESXi 6.7, including:  Before You Begin  “Configuring iSER for RHEL” on page 180  “Configuring iSER for SLES 12” on page 183 “Configuring iSER for Ubuntu”...
  • Page 207: Configuring Iser For Rhel

    9–iSER Configuration Configuring iSER for RHEL Configuring iSER for RHEL To configure iSER for RHEL: Install inbox OFED as described in “RoCE Configuration for RHEL” on page 144. NOTE Out-of-box OFEDs are not supported for iSER because the ib_isert module is not available in the out-of-box OFED 3.18-2 GA/3.18-3 GA versions.
  • Page 208 9–iSER Configuration Configuring iSER for RHEL Figure 9-1 shows an example of a successful RDMA ping. Figure 9-1. RDMA Ping Successful You can use a Linux TCM-LIO target to test iSER. The setup is the same for any iSCSI target, except that you issue the command on the applicable portals.
  • Page 209 9–iSER Configuration Configuring iSER for RHEL To change the transport mode to iSER, issue the iscsiadm command. For example: iscsiadm -m node -T iqn.2015-06.test.target1 -o update -n iface.transport_name -v iser To connect to or log in to the iSER target, issue the iscsiadm command.
  • Page 210: Configuring Iser For Sles 12

    9–iSER Configuration Configuring iSER for SLES 12 Figure 9-4. Checking for New iSCSI Device Configuring iSER for SLES 12 Because the targetcli is not inbox on SLES 12.x, you must complete the following procedure. To configure iSER for SLES 12: To install targetcli, copy and install the following RPMs from the ISO image (x86_64 and noarch location): lio-utils-4.1-14.6.x86_64.rpm...
  • Page 211: Configuring Iser For Ubuntu

    9–iSER Configuration Configuring iSER for Ubuntu Start the targetcli utility, and configure your targets on the iSER target system. NOTE targetcli versions are different in RHEL and SLES. Be sure to use the proper backstores to configure your targets:  RHEL uses ramdisk ...
  • Page 212 9–iSER Configuration Configuring iSER for Ubuntu o- tcm_fc ........[0 Targets] /> Create a disk of type rd_mcp with size 1G named by issuing iSERPort1-1 the following command: /> backstores/rd_mcp create name=iSERPort1-1 size=1G Generating a wwn serial. Created rd_mcp ramdisk iSERPort1-1 with size 1G. />...
  • Page 213 9–iSER Configuration Configuring iSER for Ubuntu | o- iqn.2004-01.com.qlogic.iSERPort1.Target ..[1 TPG] o- tpgt1 ........[enabled] o- acls ........[0 ACLs] o- luns ........[0 LUNs] o- portals ......[0 Portals] o- loopback ........[0 Targets] o- qla2xxx ........[0 Targets] o- tcm_fc ........
  • Page 214 9–iSER Configuration Configuring iSER for Ubuntu Using default IP port 3260 Successfully created network portal 192.168.10.5:3260. /> ls o- / ..........[...] o- backstores ........[...] | o- fileio ......[0 Storage Object] | o- iblock ......[0 Storage Object] | o- pscsi ......
  • Page 215 9–iSER Configuration Configuring iSER for Ubuntu o- tpgt1 ........[enabled] o- acls ........[0 ACLs] o- luns ........[1 LUN] | o- lun0 ... [rd_mcp/iSERPort1-1 (ramdisk)] o- portals ......[1 Portal] o- 192.168.10.103:3260 ..[OK, iser enabled] o- loopback ........[0 Targets] o- qla2xxx ........
  • Page 216 9–iSER Configuration Configuring iSER for Ubuntu o- tcm_fc ........[0 Targets] /> Save the configuration by issuing the following command: /> saveconfig WARNING: Saving ratan-ProLiant-DL380p-Gen8 current configuration to disk will overwrite your boot settings. The current target configuration will become the default boot config.
  • Page 217: Configuring The Initiator

    9–iSER Configuration Configuring iSER for Ubuntu Configuring the Initiator To configure the initiator: Load the ib_iser module and confirm that it is loaded properly by issuing the following commands: # sudo modprobe ib_iser # lsmod | grep ib_iser ib_isert 56835 iscsi_target_mod 307333 6 ib_isert...
  • Page 218: Optimizing Linux Performance

    9–iSER Configuration Optimizing Linux Performance [3:0:0:0] disk LIO-ORG RAMDISK-MCP /dev/sdb Optimizing Linux Performance Consider the following Linux performance configuration enhancements described in this section.  Configuring CPUs to Maximum Performance Mode  Configuring Kernel sysctl Settings  Configuring IRQ Affinity Settings ...
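    For the first item in that list, a minimal sketch that switches every CPU core to the performance governor (assuming the platform exposes the cpufreq scaling_governor interface):
    # for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > $g; done
    # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    performance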
  • Page 219: Configuring Irq Affinity Settings

    9–iSER Configuration Configuring iSER on ESXi 6.7 Configuring IRQ Affinity Settings The following example sets CPU core 0, 1, 2, and 3 to interrupt request (IRQ) XX, YY, ZZ, and XYZ respectively. Perform these steps for each IRQ assigned to a port (default is eight queues per port).
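    A hedged sketch of how such assignments are typically made (the IRQ numbers 50 through 53 are placeholders; find the real ones for the port in /proc/interrupts):
    # grep eth1 /proc/interrupts
    # echo 1 > /proc/irq/50/smp_affinity
    # echo 2 > /proc/irq/51/smp_affinity
    # echo 4 > /proc/irq/52/smp_affinity
    # echo 8 > /proc/irq/53/smp_affinity
    Each value written is a hexadecimal CPU mask, so 1, 2, 4, and 8 select cores 0, 1, 2, and 3 respectively.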
  • Page 220: Configuring Iser For Esxi 6.7

    9–iSER Configuration Configuring iSER on ESXi 6.7 vmk0 Management Network IPv6 fe80::e2db:55ff:fe0c:5f94 e0:db:55:0c:5f:94 1500 65535 true STATIC, PREFERRED defaultTcpipStack  The iSER target is configured to communicate with the iSER initiator. Configuring iSER for ESXi 6.7 To configure iSER for ESXi 6.7: Add iSER devices by issuing the following commands: esxcli rdma iser add esxcli iscsi adapter list...
  • Page 221 9–iSER Configuration Configuring iSER on ESXi 6.7 esxcli iscsi networkportal add -A vmhba67 -n vmk1 esxcli iscsi networkportal list esxcli iscsi adapter get -A vmhba65 vmhba65 Name: iqn.1998-01.com.vmware:localhost.punelab.qlogic.com qlogic.org qlogic.com mv.qlogic.com:1846573170:65 Alias: iser-vmnic5 Vendor: VMware Model: VMware iSCSI over RDMA (iSER) Adapter Description: VMware iSCSI over RDMA (iSER) Adapter Serial Number: vmnic5 Hardware Version:...
  • Page 222 9–iSER Configuration Configuring iSER on ESXi 6.7 Console Device: /vmfs/devices/cdrom/mpx.vmhba0:C0:T4:L0 Devfs Path: /vmfs/devices/cdrom/mpx.vmhba0:C0:T4:L0 Vendor: TSSTcorp Model: DVD-ROM SN-108BB Revis: D150 SCSI Level: 5 Is Pseudo: false Status: on Is RDM Capable: false Is Removable: true Is Local: true Is SSD: false Other Names: vml.0005000000766d686261303a343a30 VAAI Status: unsupported...
  • Page 223: Iscsi Configuration

    HBA. iSCSI offload increases network performance and throughput while helping to optimize server processor use. This section covers how to configure the Windows iSCSI offload feature for the Cavium FastLinQ 45000 Series Adapters. With the proper iSCSI offload licensing, you can configure your iSCSI-capable FastLinQ 45000 Series Adapter to offload iSCSI processing from the host processor.
  • Page 224: Installing Cavium Qlogic Drivers

    After the IP address is configured for the iSCSI adapter, you must use Microsoft Initiator to configure and add a connection to the iSCSI target using the Cavium QLogic iSCSI adapter. For more details on Microsoft Initiator, see the Microsoft user guide.
  • Page 225 10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-1. iSCSI Initiator Properties, Configuration Page In the iSCSI Initiator Name dialog box, type the new initiator IQN name, and then click OK. (Figure 10-2) Figure 10-2. iSCSI Initiator Node Name Change On the iSCSI Initiator Properties, click the Discovery tab.
  • Page 226 10–iSCSI Configuration iSCSI Offload in Windows Server On the Discovery page (Figure 10-3) under Target portals, click Discover Portal. Figure 10-3. iSCSI Initiator—Discover Target Portal In the Discover Target Portal dialog box (Figure 10-4): In the IP address or DNS name box, type the IP address of the target. Click Advanced.
  • Page 227 10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-4. Target Portal IP Address In the Advanced Settings dialog box (Figure 10-5), complete the following under Connect using: For Local adapter, select the QLogic <name or model> Adapter. For Initiator IP, select the adapter IP address. Click OK.
  • Page 228 10–iSCSI Configuration iSCSI Offload in Windows Server Figure 10-5. Selecting the Initiator IP Address On the iSCSI Initiator Properties, Discovery page, click OK. BC0154501-00 P...
  • Page 229 10–iSCSI Configuration iSCSI Offload in Windows Server Click the Targets tab, and then on the Targets page (Figure 10-6), click Connect. Figure 10-6. Connecting to the iSCSI Target BC0154501-00 P...
  • Page 230: Iscsi Offload Faqs

    10–iSCSI Configuration iSCSI Offload in Windows Server On the Connect To Target dialog box (Figure 10-7), click Advanced. Figure 10-7. Connect To Target Dialog Box In the Local Adapter dialog box, select the QLogic <name or model> Adapter, and then click OK. Click OK again to close Microsoft Initiator.
  • Page 231: Windows Server 2012 R2, 2016, And 2019 Iscsi Boot Installation

    Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019 support booting and installing in either the offload or non-offload paths. Cavium requires that you use a slipstream DVD with the latest Cavium QLogic drivers injected. See “Injecting (Slipstreaming) Adapter Drivers into Windows Image Files”...
  • Page 232: Iscsi Crash Dump

    45000 Series Adapters. No additional configuration is required to enable iSCSI crash dump generation. iSCSI Offload in Linux Environments The Cavium QLogic FastLinQ 45000 iSCSI software consists of a single kernel module called qedi.ko (qedi). The qedi module is dependent on additional parts
  • Page 233: Differences From Bnx2I

    Some key differences exist between qedi—the driver for the Cavium FastLinQ 45000 Series Adapter (iSCSI)—and the previous QLogic iSCSI offload driver— bnx2i for the Cavium 8400 Series Adapters. Some of these differences include:  qedi directly binds to a PCI function exposed by the CNA.
  • Page 234 10–iSCSI Configuration iSCSI Offload in Linux Environments To verify that the iSCSI interfaces were detected properly, issue the following command. In this example, two iSCSI CNA devices are detected with SCSI host numbers 4 and 5. # dmesg | grep qedi [0000:00:00.0]:[qedi_init:3696]: QLogic iSCSI Offload Driver v8.15.6.0.
  • Page 235 10–iSCSI Configuration iSCSI Offload in Linux Environments 192.168.25.100:3260,1 iqn.2003- 04.com.sanblaze:virtualun.virtualun.target-05000001 192.168.25.100:3260,1 iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000002 Log into the iSCSI target using the IQN obtained in Step 5. To initiate the login procedure, issue the following command (where the last character in the command is a lowercase letter “L”): #iscsiadm -m node -p 192.168.25.100 -T iqn.2003-04.com.sanblaze:virtualun.virtualun.target-0000007 -l Logging in to [iface: qedi.00:0e:1e:c4:e1:6c,...
  • Page 236: Fcoe Configuration

    Chapter 6 Boot from SAN Configuration. Configuring Linux FCoE Offload The Cavium FastLinQ 45000 Series Adapter FCoE software consists of a single kernel module called qedf.ko (qedf). The qedf module is dependent on additional parts of the Linux kernel for specific functionality: ...
  • Page 237: Differences Between Qedf And Bnx2Fc

    No explicit configuration is required for qedf.ko. The driver automatically binds to the exposed FCoE functions of the CNA and begins discovery. This functionality is similar to the functionality and operation of the Cavium QLogic FC driver, qla2xx, as opposed to the older bnx2fc driver.
  • Page 238: Verifying Fcoe Devices In Linux

    11–FCoE Configuration Configuring Linux FCoE Offload Load the qedf.ko kernel module by issuing the following: # modprobe qed # modprobe libfcoe # modprobe qedf Verifying FCoE Devices in Linux Follow these steps to verify that the FCoE devices were detected correctly after installing and loading the qedf kernel module.
  • Page 239 11–FCoE Configuration Configuring Linux FCoE Offload Check for discovered FCoE devices using the lsscsi and lsblk -S commands. An example of each command follows. # lsscsi [0:2:0:0] disk DELL PERC H700 2.10 /dev/sda [2:0:0:0] cd/dvd TEAC DVD-ROM DV-28SW R.2A /dev/sr0 [151:0:0:0] disk P2000G3 FC/iSCSI T252 /dev/sdb
  • Page 240: Sr-Iov Configuration

    SR-IOV Configuration Single root input/output virtualization (SR-IOV) is a specification by the PCI SIG that enables a single PCI Express (PCIe) device to appear as multiple, separate physical PCIe devices. SR-IOV permits isolation of PCIe resources for performance, interoperability, and manageability. NOTE Some SR-IOV features may not be fully enabled in the current release.
  • Page 241 12–SR-IOV Configuration Configuring SR-IOV on Windows On the Main Configuration Page - Device Level Configuration (Figure 12-1): Set the Virtualization Mode to SR-IOV. Click Back. Figure 12-1. Device Level Configuration On the Main Configuration Page, click Finish. In the Warning - Saving Changes message box, click Yes to save the configuration.
  • Page 242 12–SR-IOV Configuration Configuring SR-IOV on Windows Figure 12-2. Adapter Properties, Advanced: Enabling SR-IOV To create a Virtual Machine Switch (vSwitch) with SR-IOV (Figure 12-3 on page 216): Launch the Hyper-V Manager. Select Virtual Switch Manager. In the Name box, type a name for the virtual switch. Under Connection type, select External network.
  • Page 243 12–SR-IOV Configuration Configuring SR-IOV on Windows Figure 12-3. Virtual Switch Manager: Enabling SR-IOV The Apply Networking Changes message box advises you that Pending changes may disrupt network connectivity. To save your changes and continue, click Yes. To get the virtual machine switch capability, issue the following Windows PowerShell command: PS C:\Users\Administrator>...
  • Page 244 12–SR-IOV Configuration Configuring SR-IOV on Windows Output of the Get-VMSwitch command includes the following SR-IOV capabilities: IovVirtualFunctionCount : 96 IovVirtualFunctionsInUse To create a virtual machine (VM) and export the virtual function (VF) in the VM: Create a virtual machine. Add the VMNetworkadapter to the virtual machine. Assign a virtual switch to the VMNetworkadapter.
  • Page 245 Configuring SR-IOV on Windows Figure 12-4. Settings for VM: Enabling SR-IOV Install the Cavium QLogic drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for your host OS (do not use inbox drivers).
  • Page 246 12–SR-IOV Configuration Configuring SR-IOV on Windows After installing the drivers, the Cavium QLogic adapter is listed in the VM. Figure 12-5 shows an example. Figure 12-5. Device Manager: VM with QLogic Adapter To view the SR-IOV VF details, issue the following Windows PowerShell command: PS C:\Users\Administrator>...
  • Page 247: Configuring Sr-Iov On Linux

    On the Processor Settings page: Set the Virtualization Technology option to Enabled. Click Back. On the System Setup page, select Device Settings. On the Device Settings page, select Port 1 for the Cavium QLogic adapter. On the Device Level Configuration page (Figure 12-7): Set the Virtualization Mode to SR-IOV.
  • Page 248 12–SR-IOV Configuration Configuring SR-IOV on Linux Figure 12-8. Editing the grub.conf File for SR-IOV Save the grub.conf file, and then reboot the system. To verify that the changes are in effect, issue the following command: dmesg | grep iommu A successful input–output memory management unit (IOMMU) command output should show, for example: Intel-IOMMU: enabled To view VF details (number of VFs and total VFs), issue the following...
  • Page 249 12–SR-IOV Configuration Configuring SR-IOV on Linux Review the command output (Figure 12-9) to confirm that actual VFs were created on bus 4, device 2 (from the 0000:00:02.0 parameter), functions 0 through 7. Note that the actual device ID is different on the PFs (8070 in this example) versus the VFs (8090 in this example).
  • Page 250 12–SR-IOV Configuration Configuring SR-IOV on Linux Ensure that the VF interface is up and running with the assigned MAC address. Power off the VM and attach the VF. (Some OSs support hot-plugging of VFs to the VM.) In the Virtual Machine dialog box (Figure 12-11), click Add Hardware.
  • Page 251: Configuring Sr-Iov On Vmware

    12–SR-IOV Configuration Configuring SR-IOV on VMware Figure 12-12. Add New Virtual Hardware Power on the VM, and then issue the following command to check the adapters: lspci -vv|grep -I ether Install the drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for your host OS (do not use inbox drivers).
  • Page 252 12–SR-IOV Configuration Configuring SR-IOV on VMware On the Main Configuration Page, click Finish. Save the configuration settings and reboot the system. To enable the needed quantity of VFs per port (in this example, 16 on each port of a dual-port adapter), issue the following command: "esxcfg-module -s "max_vfs=16,16"...
  • Page 253 12-13) as follows: In the New Device box, select Network, and then click Add. For Adapter Type, select SR-IOV Passthrough. For Physical Function, select the Cavium QLogic VF. To save your configuration changes and close this dialog box, click BC0154501-00 P...
  • Page 254 12–SR-IOV Configuration Configuring SR-IOV on VMware Figure 12-13. VMware Host Edit Settings To validate the VFs per port, issue the command as follows: esxcli [root@localhost:~] esxcli network sriovnic vf list -n vmnic6 VF ID Active PCI Address Owner World ID ----- ------ -----------...
  • Page 255 005:03.7 Install the Cavium QLogic drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for your host OS (do not use inbox drivers). The same driver version must be installed on the host and the...
  • Page 256: Nvme-Of Configuration With Rdma

    NVMe-oF Configuration with RDMA Non-Volatile Memory Express over Fabrics (NVMe-oF) enables the use of alternate transports to PCIe to extend the distance over which an NVMe host device and an NVMe storage drive or subsystem can connect. NVMe-oF defines a common architecture that supports a range of storage networking fabrics for the NVMe block storage protocol over a storage networking fabric.
  • Page 257 13–NVMe-oF Configuration with RDMA Figure 13-1 illustrates an example network. 45000 Series Adapter 45000 Series Adapter Figure 13-1. NVMe-oF Network The NVMe-oF configuration process covers the following procedures:  Installing Device Drivers on Both Servers Configuring the Target Server  ...
  • Page 258: Installing Device Drivers On Both Servers

    13–NVMe-oF Configuration with RDMA Installing Device Drivers on Both Servers Installing Device Drivers on Both Servers After installing your operating system (RHEL 7.4 or SLES 12 SP3), install device drivers on both servers. To upgrade the kernel to the latest Linux upstream kernel, go to: https://www.kernel.org/pub/linux/kernel/v4.x/ Install and load the latest FastLinQ drivers (qed, qede, libqedr/qedr)
  • Page 259: Configuring The Target Server

    13–NVMe-oF Configuration with RDMA Configuring the Target Server Enable and start the RDMA service as follows: # systemctl enable rdma.service # systemctl start rdma.service Disregard the RDMA Service Failed error. All OFED modules required by qedr are already loaded. Configuring the Target Server Configure the target server after the reboot process.
  • Page 260 13–NVMe-oF Configuration with RDMA Configuring the Target Server
    Table 13-1. Target Parameters (Continued)
    Command: # echo -n /dev/nvme0n1 > namespaces/1/device_path
    Description: Sets the NVMe device path. The NVMe device path can differ between systems. Check the device path using the lsblk command. This system has two NVMe devices: nvme0n1 and nvme1n1.
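    Taken together, the individual parameters in Table 13-1 amount to a configfs sequence like the following hedged sketch (the nvme-qlogic-tgt1 subsystem name, the 1.1.1.1 address, port 1023, and the /dev/nvme0n1 device are the examples used in this chapter):
    # modprobe nvmet
    # modprobe nvmet-rdma
    # cd /sys/kernel/config/nvmet
    # mkdir subsystems/nvme-qlogic-tgt1
    # echo 1 > subsystems/nvme-qlogic-tgt1/attr_allow_any_host
    # mkdir subsystems/nvme-qlogic-tgt1/namespaces/1
    # echo -n /dev/nvme0n1 > subsystems/nvme-qlogic-tgt1/namespaces/1/device_path
    # echo 1 > subsystems/nvme-qlogic-tgt1/namespaces/1/enable
    # mkdir ports/1
    # echo 1.1.1.1 > ports/1/addr_traddr
    # echo rdma > ports/1/addr_trtype
    # echo 1023 > ports/1/addr_trsvcid
    # echo ipv4 > ports/1/addr_adrfam
    # ln -s /sys/kernel/config/nvmet/subsystems/nvme-qlogic-tgt1 ports/1/subsystems/nvme-qlogic-tgt1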
  • Page 261: Configuring The Initiator Server

    13–NVMe-oF Configuration with RDMA Configuring the Initiator Server Configuring the Initiator Server You must configure the initiator server after the reboot process. After the server is operating, you cannot change the configuration without rebooting. If you are using a startup script to configure the initiator server, consider pausing the script (using the wait command or something similar) as needed to ensure that each command finishes before executing the next command.
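    Before connecting, the initiator typically discovers the target's subsystem NQN first. A minimal sketch using nvme-cli with the address and port used in this chapter:
    # modprobe nvme-rdma
    # nvme discover -t rdma -a 1.1.1.1 -s 1023
    The discovery output lists the subsystem NQN (nvme-qlogic-tgt1 in this chapter), which is then passed to the nvme connect command shown later in this chapter.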
  • Page 262: Preconditioning The Target Server

    13–NVMe-oF Configuration with RDMA Preconditioning the Target Server Connect to the discovered NVMe-oF target (nvme-qlogic-tgt1) using the NQN. Issue the following command after each server reboot. For example: # nvme connect -t rdma -n nvme-qlogic-tgt1 -a 1.1.1.1 -s 1023 Confirm the NVMe-oF target connection with the NVMe-oF device as follows: # dmesg | grep nvme
  • Page 263: Testing The Nvme-Of Devices

    13–NVMe-oF Configuration with RDMA Testing the NVMe-oF Devices Testing the NVMe-oF Devices Compare the latency of the local NVMe device on the target server with that of the NVMe-oF device on the initiator server to show the latency that NVMe adds to the system.
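    One hedged way to make that comparison is a single-outstanding-I/O random-read test with fio, run first against the local device on the target server and then against the corresponding NVMe-oF device on the initiator server (the device name is an example):
    # fio --name=latency-test --filename=/dev/nvme0n1 --rw=randread --bs=4k \
          --iodepth=1 --ioengine=libaio --direct=1 --runtime=60 --time_based
    The difference between the two reported average completion latencies approximates the latency added by the fabric.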
  • Page 264: Optimizing Performance

    13–NVMe-oF Configuration with RDMA Optimizing Performance In this example, the target NVMe device latency is 8µsec. The total latency that results from the use of NVMe-oF is the difference between the initiator device NVMe-oF latency (30µsec) and the target device NVMe-oF latency (8µsec), or 22µsec.
  • Page 265: Irq Affinity (Multi_Rss-Affin.sh)

    13–NVMe-oF Configuration with RDMA Optimizing Performance Set the IRQ affinity for all 45000 Series Adapters. The multi_rss-affin.sh file is a script that is listed in "IRQ Affinity (multi_rss-affin.sh)" on page 238. # systemctl stop irqbalance # ./multi_rss-affin.sh eth1 NOTE A different version of this script, qedr_affin.sh, is in the 41xxx Linux
  • Page 266: Cpu Frequency (Cpufreq.sh)

    13–NVMe-oF Configuration with RDMA Optimizing Performance
    CPUID=$((CPUID*OFFSET))
    for ((A=1; A<=${NUM_FP}; A=${A}+1)) ; do
      INT=`grep -m $A $eth /proc/interrupts | tail -1 | cut -d ":" -f 1`
      SMP=`echo $CPUID 16 o p | dc`
      echo ${INT} smp affinity set to ${SMP}
      echo $((${SMP})) >
  • Page 267 13–NVMe-oF Configuration with RDMA Optimizing Performance NOTE The following commands apply only to the initiator server. # echo 0 > /sys/block/nvme0n1/queue/add_random # echo 2 > /sys/block/nvme0n1/queue/nomerges BC0154501-00 P...
  • Page 268: Configuring Roce Interfaces With Hyper-V

    Windows Server 2016 This chapter provides the following information for Windows Server 2016:  Configuring RoCE Interfaces with Hyper-V  “RoCE over Switch Embedded Teaming” on page 247  “Configuring QoS for RoCE” on page 248 “Configuring VMMQ” on page 257 ...
  • Page 269: Creating A Hyper-V Virtual Switch With An Rdma Nic

    14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Creating a Hyper-V Virtual Switch with an RDMA NIC Follow the procedures in this section to create a Hyper-V virtual switch and then enable RDMA in the host VNIC. To create a Hyper-V virtual switch with an RDMA virtual NIC: On all physical interfaces, set the value of the NetworkDirect Functionality parameter to Enabled.
  • Page 270: Adding A Vlan Id To Host Virtual Nic

    14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V On the Advanced page (Figure 14-2): Under Property, select Network Direct (RDMA). Under Value, select Enabled. Click OK. Figure 14-2. Hyper-V Virtual Ethernet Adapter Properties To enable RDMA, issue the following Windows PowerShell command: PS C:\Users\Administrator>...
  • Page 271: Verifying If Roce Is Enabled

    14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Figure 14-3 shows the command output. Figure 14-3. Windows PowerShell Command: Get-VMNetworkAdapter To set the vLAN ID to the host virtual NIC, issue the following Windows PowerShell command: PS C:\Users\Administrator> Set-VMNetworkAdapterVlan -VMNetworkAdapterName "New Virtual Switch" -VlanId 5 -Access -ManagementOS NOTE Note the following about adding a vLAN ID to a host virtual NIC:
  • Page 272: Adding Host Virtual Nics (Virtual Ports)

    14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Adding Host Virtual NICs (Virtual Ports) To add host virtual NICs: To add a host virtual NIC, issue the following command: Add-VMNetworkAdapter -SwitchName "New Virtual Switch" -Name SMB - ManagementOS Enable RDMA on host virtual NICs as shown in “To enable RDMA in a host virtual NIC:”...
  • Page 273 14–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V Figure 14-5. Add Counters Dialog Box If the RoCE traffic is running, counters appear as shown in Figure 14-6. Figure 14-6. Performance Monitor Shows RoCE Traffic BC0154501-00 P...
  • Page 274: Roce Over Switch Embedded Teaming

    14–Windows Server 2016 RoCE over Switch Embedded Teaming RoCE over Switch Embedded Teaming Switch Embedded Teaming (SET) is Microsoft’s alternative NIC teaming solution available to use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016 Technical Preview. SET integrates limited NIC Teaming functionality into the Hyper-V Virtual Switch.
  • Page 275: Assigning A Vlan Id On Set

    14–Windows Server 2016 Configuring QoS for RoCE Figure 14-8 shows command output. Figure 14-8. Windows PowerShell Command: Get-NetAdapter To enable RDMA on SET, issue the following Windows PowerShell command: PS C:\Users\Administrator> Enable-NetAdapterRdma "vEthernet (SET)" Assigning a vLAN ID on SET To assign a vLAN ID on SET: ...
  • Page 276: Configuring Qos By Disabling Dcbx On The Adapter

    14–Windows Server 2016 Configuring QoS for RoCE Configuring QoS by Disabling DCBX on the Adapter All configuration must be completed on all of the systems in use before configuring QoS by disabling DCBX on the adapter. The priority-based flow control (PFC), enhanced transmission selection (ETS), and traffic classes configuration must be the same on the switch and server.
  • Page 277 14–Windows Server 2016 Configuring QoS for RoCE Figure 14-9. Advanced Properties: Enable QoS Assign the vLAN ID to the interface as follows: Open the miniport properties, and then click the Advanced tab. On the adapter properties’ Advanced page (Figure 14-10) under Property, select VLAN ID, and then set the value.
  • Page 278 14–Windows Server 2016 Configuring QoS for RoCE Figure 14-10. Advanced Properties: Setting VLAN ID To enable PFC for RoCE on a specific priority, issue the following command: PS C:\Users\Administrators> Enable-NetQoSFlowControl -Priority 5 NOTE If configuring RoCE over Hyper-V, do not assign a vLAN ID to the physical interface.
  • Page 279 14–Windows Server 2016 Configuring QoS for RoCE False Global False Global True Global False Global False Global To configure QoS and assign relevant priority to each type of traffic, issue the following commands (where Priority 5 is tagged for RoCE and Priority 0 is tagged for TCP): PS C:\Users\Administrators>...
  • Page 280: Configuring Qos By Enabling Dcbx On The Adapter

    14–Windows Server 2016 Configuring QoS for RoCE Name Algorithm Bandwidth(%) Priority PolicySet IfIndex IfAlias ---- --------- ------------ -------- --------- ------- ------- [Default] 1-4,6-7 Global RDMA class Global TCP class Global To see the network adapter QoS from the preceding configuration, issue the following Windows PowerShell command: PS C:\Users\Administrator>...
  • Page 281 14–Windows Server 2016 Configuring QoS for RoCE NOTE If the switch does not have a way of designating the RoCE traffic, you may need to set the RoCE Priority to the number used by the switch. Arista switches can do so, but some other switches cannot. To install the DCB role in the host, issue the following Windows PowerShell command: PS C:\Users\Administrators>...
  • Page 282 14–Windows Server 2016 Configuring QoS for RoCE Figure 14-11. Advanced Properties: Enabling QoS Assign the vLAN ID to the interface (required for PFC) as follows: Open the miniport properties, and then click the Advanced tab. On the adapter properties’ Advanced page (Figure 14-12) under Property, select VLAN ID, and then set the value.
  • Page 283 14–Windows Server 2016 Configuring QoS for RoCE Figure 14-12. Advanced Properties: Setting VLAN ID To configure the switch, issue the following Windows PowerShell command: PS C:\Users\Administrators> Get-NetAdapterQoS Name : Ethernet 5 Enabled : True Capabilities Hardware Current -------- ------- MacSecBypass : NotSupported NotSupported DcbxSupport : CEE...
  • Page 284: Configuring Vmmq

    14–Windows Server 2016 Configuring VMMQ -- --- --------- ---------- 0 ETS 0-4,6-7 1 ETS RemoteFlowControl : Priority 5 Enabled RemoteClassifications : Protocol Port/Type Priority -------- --------- -------- NetDirect 445 NOTE The preceding example is taken when the adapter port is connected to an Arista 7060X switch.
  • Page 285: Enabling Vmmq On The Adapter

    14–Windows Server 2016 Configuring VMMQ Enabling VMMQ on the Adapter To enable VMMQ on the adapter: Open the miniport properties, and then click the Advanced tab. On the adapter properties’ Advanced page (Figure 14-13) under Property, select Virtual Switch RSS, and then set the value to Enabled. Click OK.
  • Page 286: Enabling Vmmq On The Virtual Machine Switch

    14–Windows Server 2016 Configuring VMMQ Figure 14-14. Virtual Switch Manager Click OK. Enabling VMMQ on the Virtual Machine Switch To enable VMMQ on the virtual machine switch:  Issue the following Windows PowerShell command: PS C:\Users\Administrators> Set-VMSwitch -name q1 -defaultqueuevmmqenabled $true -defaultqueuevmmqqueuepairs 4 BC0154501-00 P...
  • Page 287: Getting The Virtual Machine Switch Capability

    14–Windows Server 2016 Configuring VMMQ Getting the Virtual Machine Switch Capability To get the virtual machine switch capability:  Issue the following Windows PowerShell command: PS C:\Users\Administrator> Get-VMSwitch -Name ql | fl Figure 14-15 shows example output. Figure 14-15. Windows PowerShell Command: Get-VMSwitch Creating a VM and Enabling VMMQ on VMNetworkadapters in the VM To create a virtual machine (VM) and enable VMMQ on VMNetworkadapters
  • Page 288 14–Windows Server 2016 Configuring VMMQ To enable VMMQ on the VM, issue the following Windows PowerShell command: PS C:\Users\Administrators> set-vmnetworkadapter -vmname vm1 -VMNetworkAdapterName "network adapter" -vmmqenabled $true -vmmqqueuepairs 4 NOTE For an SR-IOV capable virtual switch: If the VM switch and hardware acceleration is SR-IOV-enabled, you must create 10 VMs with 8 virtual NICs each to utilize VMMQ.
  • Page 289: Default And Maximum Vmmq Virtual Nic

    14–Windows Server 2016 Configuring VXLAN Ethernet 3 00-15-5D-36-0A-F9 Activated Adaptive Ethernet 3 00-15-5D-36-0A-FA Activated Adaptive PS C:\Users\Administrator> get-netadaptervmq Name InterfaceDescription Enabled BaseVmqProcessor MaxProcessors NumberOfReceive Queues ---- -------------------- ------- ---------------- ------------- --------------- Ethernet 4 QLogic FastLinQ 45000 False Default and Maximum VMMQ Virtual NIC According to the current implementation, a maximum quantity of 4 VMMQs is available per virtual NIC;...
  • Page 290: Enabling Vxlan Offload On The Adapter

    14–Windows Server 2016 Configuring VXLAN Enabling VXLAN Offload on the Adapter To enable VXLAN offload on the adapter: Open the miniport properties, and then click the Advanced tab. On the adapter properties’ Advanced page (Figure 14-16) under Property, select VXLAN Encapsulated Task Offload. Figure 14-16.
  • Page 291: Configuring Storage Spaces Direct

    14–Windows Server 2016 Configuring Storage Spaces Direct Configuring Storage Spaces Direct Windows Server 2016 introduces Storage Spaces Direct, which allows you to build highly available and scalable storage systems with local storage. For more information, refer to the following Microsoft TechNet link: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces /storage-spaces-direct-windows-server-2016 Configuring the Hardware...
  • Page 292: Deploying A Hyper-Converged System

    14–Windows Server 2016 Configuring Storage Spaces Direct Deploying a Hyper-Converged System This section includes instructions to install and configure the components of a Hyper-Converged system using the Windows Server 2016. The act of deploying a Hyper-Converged system can be divided into the following three high-level phases: ...
  • Page 293 14–Windows Server 2016 Configuring Storage Spaces Direct Example Dell switch configuration: no ip address mtu 9416 portmode hybrid switchport dcb-map roce_S2D protocol lldp dcbx version cee no shutdown Enable Network Quality of Service. NOTE Network Quality of Service is used to ensure that the Software Defined Storage system has enough bandwidth to communicate between the nodes to ensure resiliency and performance.
  • Page 294: Configuring Storage Spaces Direct

    14–Windows Server 2016 Configuring Storage Spaces Direct To configure the host virtual NIC to use a vLAN, issue the following commands: Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_1" -VlanId 5 -Access -ManagementOS Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_2" -VlanId 5 -Access -ManagementOS NOTE These commands can be on the same or different vLANs. To verify that the vLAN ID is set, issue the following command: Get-VMNetworkAdapterVlan -ManagementOS To disable and enable each host virtual NIC adapter so that the vLAN...
  • Page 295 14–Windows Server 2016 Configuring Storage Spaces Direct Step 1. Running a Cluster Validation Tool Run the cluster validation tool to make sure server nodes are configured correctly to create a cluster using Storage Spaces Direct. To validate a set of servers for use as a Storage Spaces Direct cluster, issue the following Windows PowerShell command: Test-Cluster -Node <MachineName1, MachineName2, MachineName3, MachineName4>...
  • Page 296 14–Windows Server 2016 Configuring Storage Spaces Direct icm (Get-Cluster -Name HCNanoUSClu3 | Get-ClusterNode) { Update-StorageProviderCache Get-StoragePool |? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue Get-StoragePool |? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue Get-StoragePool |? IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue...
  • Page 297 14–Windows Server 2016 Configuring Storage Spaces Direct Step 6. Creating Virtual Disks If the Storage Spaces Direct was enabled, it creates a single pool using all of the disks. It also names the pool (for example S2D on Cluster1), with the name of the cluster that is specified in the name.
  • Page 298: Traffic Control Offload

    Traffic Control Offload Use traffic control (TC) offload in the 45000 Series Adapters to control flows based on packet attributes (primarily on the receive side) and to choose different traffic classes (for example, Multiqueue Priority Qdisc [MQPRIO] offload) on the adapter based on flow priorities on the transmission side.
  • Page 299: Ingress Packet Redirection

    15–Traffic Control Offload Ingress Packet Redirection Ingress Packet Redirection Ingress packet redirection for traffic control includes redirection for SR-IOV VFs and redirection with both MAC-vLAN offloaded devices and SR-IOV VFs. Ingress Packet Redirection for SR-IOV VFs Ingress packet redirection for SR-IOV VFs includes the following: tc qdisc add dev p5p1 ingress tc filter add dev p5p1 protocol ip parent ffff: pref 0x2 flower skip_sw dst_ip...
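    A hedged sketch of one complete rule of this form, with a placeholder destination IP address and a hypothetical VF representor name (p5p1_vf0) as the redirect target:
    # tc qdisc add dev p5p1 ingress
    # tc filter add dev p5p1 protocol ip parent ffff: pref 0x2 flower skip_sw \
          dst_ip 192.168.10.2 action mirred egress redirect dev p5p1_vf0
    # tc filter show dev p5p1 ingress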
  • Page 300: Tc Drop Action Support (Ingress Drop)

    You can provide a priority to a traffic control map to enable packets with corresponding priorities to use those traffic classes on which to transmit packets. Cavium supports a maximum of four different traffic classes. For example: ethtool -K ethx hw-tc-offload on...
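    A minimal sketch of an MQPRIO setup that offloads four traffic classes to the hardware (the priority-to-class map and queue layout are examples only):
    # ethtool -K ethx hw-tc-offload on
    # tc qdisc add dev ethx root mqprio num_tc 4 \
          map 0 1 2 3 0 0 0 0 0 0 0 0 0 0 0 0 \
          queues 1@0 1@1 1@2 1@3 hw 1
    # tc qdisc show dev ethx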
  • Page 301: Troubleshooting

    Troubleshooting This chapter provides the following troubleshooting information:  Troubleshooting Checklist  “Verifying that Current Drivers Are Loaded” on page 275  “Testing Network Connectivity” on page 276 “Microsoft Virtualization with Hyper-V” on page 277   “Linux-specific Issues” on page 277 ...
  • Page 302: Verifying That Current Drivers Are Loaded

    16–Troubleshooting Verifying that Current Drivers Are Loaded  Replace the failed adapter with one that is known to work properly. If the second adapter works in the slot where the first one failed, the original adapter is probably defective.  Install the adapter in another functioning system, and then run the tests again.
  • Page 303: Verifying Drivers In Vmware

    In this example, the last entry identifies the driver that will be active upon reboot. # dmesg | grep -i "Cavium" | grep -i "qede" [ 10.097526] QLogic FastLinQ 4xxxx Ethernet Driver qede x.x.x.x [ 23.093526] QLogic FastLinQ 4xxxx Ethernet Driver qede x.x.x.x [ 34.975396] QLogic FastLinQ 4xxxx Ethernet Driver qede x.x.x.x
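    The bound driver and its version can also be confirmed per interface. A minimal sketch (eth0 is an example interface name):
    # ethtool -i eth0
    driver: qede
    version: x.x.x.x
    firmware-version: ...
    # modinfo qede | grep ^version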
  • Page 304: Testing Network Connectivity For Linux

    16–Troubleshooting Microsoft Virtualization with Hyper-V Testing Network Connectivity for Linux To verify that the Ethernet interface is up and running: To check the status of the Ethernet interface, issue the ifconfig command. To check the statistics on the Ethernet interface, issue the netstat -i command.
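    For example, a minimal connectivity check against a peer address (192.168.1.100 is a placeholder):
    # ifconfig eth0
    # ping -c 4 192.168.1.100
    # netstat -i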
  • Page 305: Troubleshooting Windows Fcoe And Iscsi Boot From San

    16–Troubleshooting Troubleshooting Windows FCoE and iSCSI Boot from SAN Troubleshooting Windows FCoE and iSCSI Boot from SAN If any USB flash drive is connected while Windows setup is loading files for installation, an error message will appear when you provide the drivers and then select the SAN disk for the installation.
  • Page 306: Collecting Debug Data

    16–Troubleshooting Collecting Debug Data Collecting Debug Data Use the commands in Table 16-1 to collect debug data. Table 16-1. Collecting Debug Data Commands Debug Data Description Kernel logs demesg-T Register dump ethtool-d System information; available in the driver bundle sys_info.sh BC0154501-00 P...
  • Page 307: Adapter Leds

    Adapter LEDs Table A-1 lists the LED indicators for the state of the adapter port link and activity.
    Table A-1. Adapter Port Link and Activity LEDs
    Port LED      | LED Appearance           | Network State
    Link LED      |                          | No link (cable disconnected)
    Link LED      | Continuously illuminated | Link
    Activity LED  |                          | No port activity
  • Page 308: Cables And Optical Modules

    Cables and Optical Modules This appendix provides the following information for the supported cables and optical modules: Supported Specifications   “Tested Cables and Optical Modules” on page 282  “Tested Switches” on page 287 Supported Specifications The 45000 Series Adapters support a variety of cables and optical modules that comply with SFF8024.
  • Page 309: Tested Cables And Optical Modules

    100G IEEE 802.3 Clause 92 (100GBASE-CR4) Tested Cables and Optical Modules Cavium does not guarantee that every cable or optical module that satisfies the compliance requirements will operate with the 45000 Series Adapters. Cavium has tested the components listed in...
  • Page 310 B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules Speed/Form Cable Manufacturer Part Number Type Factor Length Cables COPQAA4JAA SFP Twin-axial 10G COPQAA6JAA SFP Twin-axial 10G Cisco COPQAA5JAA SFP Twin-axial 10G 37-0962-01 SFP Twin-axial 10G 407-BBBK SFP Twin-axial 10G 10G DAC...
  • Page 311 B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Cable Manufacturer Part Number Type Factor Length NDAQGF-0001 QSFP100GB to QSFP100GB NDAAFF-0001 QSFP100GB to QSFP100GB NDAQGF-0003 QSFP100GB to QSFP100GB Amphenol NDAAFF-0003 QSFP100GB to QSFP100GB NDAAFJ-0004 QSFP100GB to...
  • Page 312 B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Cable Manufacturer Part Number Type Factor Length NDAQGJ-0001 QSFP100GB to 4X SFP28GB NDAQGF-0002 QSFP100GB to 4X SFP28GB Amphenol NDAQGF-0003 QSFP100GB to 4X SFP28GB NDAQGJ-0005 QSFP100GB to 4X...
  • Page 313 B–Cables and Optical Modules Tested Cables and Optical Modules Table B-1. Tested Cables and Optical Modules (Continued) Speed/Form Cable Manufacturer Part Number Type Factor Length 10-2672-02 QSFP-40G SR4 QSFP40G SR Cisco Optical Transceiver 40G Optical FTL410QE2C QSFP-40G QSFP40G SR Finisar Transceiver Optical Transceiver JQP-04SRAB1...
  • Page 314: Tested Switches

    To view the most current list of supported switches, view the Cavium FastLinQ 45000 Series Interoperability Matrix located here: LineCards/Cavium_FastLinQ_45000_Series_Interoperability_Matrix.pdf Table B-2.
  • Page 315: Feature Constraints

    100G 45000 Series Adapters do not support iWARP at this time. Concurrent RoCE and iWARP Is Not Supported on the Same Port RoCE and iWARP are not supported on the same port. HII and Cavium QLogic management tools do not allow users to configure both concurrently.
  • Page 316 Currently, RDMA can be enabled on all PFs, and the RDMA transport type (RoCE or iWARP) can be configured on a per-port basis. HII and Cavium QLogic management tools reflect the per-port configuration in the per-PF settings.
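    Because the transport type is set per port, it can be useful to confirm from the host which transport a port is actually running. The following is a minimal sketch assuming a Linux host with the rdma-core utilities installed; the device name qedr0 is a placeholder.

        # Confirm the RDMA driver is loaded and check the transport each device reports.
        lsmod | grep qedr
        ibv_devices                               # list RDMA-capable devices
        ibv_devinfo -d qedr0 | grep -i transport  # qedr0 is a placeholder device name
        # RoCE devices report "InfiniBand"; iWARP devices report "iWARP".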
  • Page 317: Glossary

    Glossary
    ACPI: The Advanced Configuration and Power Interface (ACPI) specification provides an open standard for unified operating system-centric device configuration and power management.
    bandwidth: A measure of the volume of data that can be transmitted at a specific transmission rate. A 1Gbps or 2Gbps Fibre Channel port can transmit or receive at nominal...
  • Page 318
    CHAP: Challenge-handshake authentication protocol (CHAP) is used for remote logon, usually between a client and server or a Web browser and Web server.
    DCBX: Data center bridging exchange. A protocol used by devices to exchange configuration information with directly connected...
  • Page 319
    EEE: Energy-efficient Ethernet. A set of enhancements to the twisted-pair and backplane Ethernet family of computer...
    ETS: Enhanced transmission selection. A standard that specifies the enhancement of transmission selection to support the...
  • Page 320
    FTP: File transfer protocol. A standard network protocol used to transfer files from one host to another host over a TCP-based...
    IP: Internet protocol. A method by which data is sent from one computer to another over the Internet.
  • Page 321
    Layer 2: Refers to the data link layer of the multilayered communication model, Open...
    MSI-X (defined in PCI 3.0) allows a device to allocate any number of interrupts between 1 and 2,048 and gives each interrupt separate data and address registers.
  • Page 322
    NVRAM: Non-volatile random access memory. A type of memory that retains data (configuration settings) even when power is removed.
    RDMA: Remote direct memory access. The ability for one node to write directly to the memory of another (with address and size...
  • Page 323
    SerDes: Serializer/deserializer. A pair of functional blocks commonly used in high-speed communications to compensate for limited...
    TCP: Transmission control protocol. A set of rules to send data in packets over the Internet protocol.
  • Page 324
    UEFI: Unified extensible firmware interface. A specification detailing an interface that helps hand off control of the system for the...
    vLAN: Virtual logical area network (LAN). A group of hosts with a common set of requirements...
  • Page 325 Index Add Counters dialog box 138, adding ACC specifications, supported host VNIC Accelerated Receive Flow Steering, support VLAN ID to host VNIC address ACPI MAC, permanent and virtual definition of manageability feature supported ADK, downloading Windows activity LED indicator Advanced Configuration and Power Interface, adapter See ACPI See also adapter port...
  • Page 326 Tech Support xxiii DHCP server for iSCSI boot Cavium FastLinQ 41000 Series DSCP-PFC Interoperability Matrix, accessing Ethernet switch for RoCE Cavium FastLinQ 45000 Series FCoE Interoperability Matrix, accessing FCoE boot parameters Cavium FastLinQ error counters FCoE crash dump Cavium Technical Support...
  • Page 327 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series connections configuring (continued) hardware DAC, SerDes interface initiator server for NVMe-oF inspecting iSCSI boot from SAN for other Linux distros L2, verifying network, verifying iSCSI boot from SAN for RHEL 86, 90,...
  • Page 328 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series device (continued) data center bridging, See DCB Data Center Quantized Congestion name Notification, See DCQCN NVMe-oF, testing database, knowledge xxiv Device Manage, verifying Windows driver DHCP DCQCN-related parameters definition of...
  • Page 329 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series driver (continued) for Linux, verifying loaded eCore for VMware, verifying loaded definition of for Windows, verifying loaded qed.ko Linux kernel module 205, injecting into VMware image files injecting into Windows image files...
  • Page 330 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series FCoE (continued) boot from SAN, VMware boot from SAN, Windows index values, VLAN boot installation, Windows Server VLAN interfaces, configuring RoCE boot mode global bandwidth allocation boot mode, configuring adapter UEFI...
  • Page 331 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series interfaces, configuring RoCE Hyper-V Microsoft Virtualization Internet Protocol, definition of RoCE interfaces, configuring with Internet small computer system interface, See iSCSI hypervisor virtualization for Windows Internet wide area RDMA protocol, See...
  • Page 332 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series iSCSI (continued) preferred boot mode L2B firmware version qedil driver, VMware support large send offload, See LRO software installation, migrating to offload large send offload, See LSO iSCSI laser safety...
  • Page 333 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series migrating from software iSCSI installation to Linux drivers (continued) offload iSCSI installing minimum bandwidth, allocating installing with RDMA models, supported adapters xviii optional, qede modules, tested optical MokManager, importing public key with...
  • Page 334 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series NFS-RDMA, configuring server and client OFED (continued) NIC driver, VMware optional parameters preparing for RHEL NIC partitioning, See NPAR preparing for SLES non-volatile random access memory, See preparing for Ubuntu...
  • Page 335 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series product PCIe definition of functional description card for adapter overview of connector slot product safety compliance xxvi public key, importing for Secure Boot host hardware requirement PVRDMA, configuring standards specifications...
  • Page 336 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series RDMA (continued) qedi module for Linux iSCSI offload qedi.ko, configuring qedi driver verifying qedil driver, iSCSI driver for VMware 24, virtual switch, creating Hyper-V qedr driver installation RDMA over Converged Ethernet, See RoCE...
  • Page 337 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series server RoCE (continued) RDMA counters, viewing initiator, configuring for NVMe-oF statistics, ESXi performance, optimizing statistics, viewing on Linux target, configuring for NVMe-oF traffic, running server message block, See SMB...
  • Page 338 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series technical support standards specifications statistics, driver contacting xxiii Storage Spaces Direct, configuring 264, downloading updates and documentation xxiii support account, registering for xxiii knowledgebase xxiv support case, submitting xxiii...
  • Page 339 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series troubleshooting (continued) VMware driver VCCI Class A certification xxvi Windows driver vCenter Plug-In for QCC GUI verbose level Linux driver operations verifying VMware driver parameter FCoE devices in Linux...
  • Page 340 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series VMware (continued) VLAN ID adding to host VNIC minimum host OS requirements assigning on SET NIC driver parameters, optional assigning to switch port 129, SR-IOV, configuring VMware Update Manager, installing driver with...
  • Page 341 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 45000 Series Windows Server FCoE boot installation iSCSI boot installation iSCSI offload, configuring minimum host OS requirements RoCE, configuring Windows Server R2, Microsoft Virtualization with Hyper-V Windows Setup dialog box, installation error...
  • Page 342 International Offices UK | Ireland | Germany | France | India | Japan | China | Hong Kong | Singapore | Taiwan | Israel Copyright © 2016–2019 Marvell. All rights reserved. Cavium LLC, and QLogic LLC are subsidiaries of Marvell. Cavium, QLogic, Marvell, and the Marvell logo are registered trademarks of Marvell.
