Cavium FastLinQ 3400 Series User Manual

Converged Network Adapters and Intelligent Ethernet Adapters
Summary of Contents for Cavium FastLinQ 3400 Series

  • Page 1 User’s Guide Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series 83840-546-00 N...
  • Page 2 Revision L, August 15, 2017; Revision M, June 22, 2018; Revision N, January 18, 2019. Changes and sections affected: Changed all www.cavium.com and www.qlogic.com Web site references to www.marvell.com. In the NOTE in “Preface” on page xxi, added instructions for uninstalling the QCS GUI in Linux systems.
  • Page 3 “Creating an iSCSI Boot Image with the dd Method” on page 109: …Web site for the related packages. “iSCSI Offload in Windows Server” on page 112: in the first paragraph, clarified the first sentence to “Cavium’s 8400 iSCSI-Offload…” 83840-546-00 N...
  • Page 4 “VLAN Overview” on page 187 VLANs that can be defined, and information about teaming. Added that “Multiple VLANs can be defined for each Cavium adapter on your server, depending on the amount of memory available in your system.” Added a new second paragraph about configuring a VLAN from a physical function, and configuring teams.
  • Page 5 User’s Guide–Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series Added QLASP to title and text where appropriate. “QLASP SLB Team Connected to a Single Hub” on page 237 Added QLASP to title and text where appropriate. “QLASP Teaming with Microsoft NLB”...
  • Page 6: Table Of Contents

    Table of Contents: Preface; Supported Products; Intended Audience
  • Page 7 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: QLogic QConvergeConsole PowerKit; QLogic Comprehensive Configuration Management
  • Page 8 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: Linux Driver Software; Introduction; Limitations
  • Page 9 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: qs_per_cos; cos_min_rate
  • Page 10 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: cos_min_rate; RSS
  • Page 11 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: qfle3i_debug_level; rq_size
  • Page 12 Offload in Windows Server; Installing Cavium Drivers
  • Page 13 FCoE Boot Configuration in UEFI Boot Mode; Preparing Cavium Multi-Boot Agent for FCoE Boot
  • Page 14 DCB in Windows Server 2012; Using Cavium Teaming Services; Executive Summary
  • Page 15 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: Hardware Requirements; Repeater Hub
  • Page 16 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: Teaming with Hubs (for troubleshooting purposes only); Hub Usage in Teaming Network Configurations; QLASP SLB Teams
  • Page 17 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: Running User Diagnostics in DOS; Introduction; System Requirements
  • Page 18 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: List of Figures; MBA Configuration Menu; Power Management Options
  • Page 19 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series: 10-12 SLES 11 and 12 Installation: Boot Options; 10-13 SLES 11 and 12 Installation: Driver Update Medium
  • Page 20 8400/3400 Series Windows Drivers; Cavium 8400/3400 Series Linux Drivers
  • Page 21: Preface

    QConvergeConsole (QCC) GUI is the only GUI management tool across all Cavium adapters. The QLogic Control Suite (QCS) GUI is no longer supported for the 8400/3400 Series Adapters and adapters based on 57xx/57xxx controllers, and has been replaced by the QCC GUI management tool.
  • Page 22: Intended Audience

    Preface Intended Audience  10GbE Ethernet Adapters  QLE3440-CU-CK  QLE3440-SR-CK  QLE3442-CU-CK  QLE3442-RJ-CK  QLE3442-SR-CK  QLE34152HOCU Intended Audience This guide is intended for personnel responsible for installing and maintaining computer networking equipment. What Is in This Guide This guide describes the features, installation, and configuration of the FastLinQ 8400/3400 Series Converged Network Adapters and Intelligent Ethernet Adapters.
  • Page 23: Related Materials

    Chapter 15 Configuring Data Center Bridging describes the DCB capabilities configuration, and requirements.  Chapter 16 Using Cavium Teaming Services describes the use of teaming to group physical devices to provide fault tolerance and load balancing.  Chapter 17 Configuring Teaming in Windows Server describes the teaming configuration for Windows Server operating systems.
  • Page 24 Preface Documentation Conventions WARNING  indicates the presence of a hazard that could cause serious injury or death.  Text in blue font indicates a hyperlink (jump) to a figure, table, or section in this guide, and links to Web sites are shown in underlined blue.
  • Page 25: License Agreements

    Preface License Agreements  CLI command syntax conventions include the following:  Plain text indicates items that you must type as shown. For example:  cfg iSCSIBoot CDUMP=Enable  < > (angle brackets) indicate a variable whose value you must specify. For example: ...
  • Page 26: Technical Support

    Preface Technical Support Technical Support Customers should contact their authorized maintenance provider for technical support of their Marvell QLogic products. Technical support for QLogic-direct products under warranty is available with a Marvell support account. To set up a support account (if needed) and submit a case: Go to www.marvell.com.
  • Page 27: Knowledge Database

    Preface Legal Notices Knowledge Database The Marvell knowledgebase is an extensive collection of product information that you can search for specific solutions. Marvell is constantly adding to the collection of information in the database to provide answers to your most urgent questions. To access the knowledgebase: Go to www.marvell.com.
  • Page 28: Laser Safety-Fda Notice

    Do not look directly into the device with optical instruments. Agency Certification The following sections contain a summary of EMC and EMI test specifications performed on the Cavium adapters to comply with emission and product safety standards. EMI and EMC Requirements CFR Title 47, Part 15, Subpart B:2013 Class A This device complies with Title 47, Part 15 of the United States Code of Federal Regulations.
  • Page 29: Kcc: Class A

    Preface Legal Notices VCCI 2015-04; Class A AS/NZS CISPR22 AS/NZS; CISPR 32:2015 Class A KC-RRA KN22 KN24(2013) Class A KCC: Class A Korea RRA Class A Certified Product Name/Model: Converged Network Adapters and Intelligent Ethernet Adapters Certification holder: QLogic Corporation Manufactured date: Refer to date code listed on product Manufacturer/Country of origin: QLogic Corporation/USA A class equipment...
  • Page 30: Vcci: Class A

    Preface Legal Notices VCCI: Class A This is a Class A product based on the standard of the Voluntary Control Council for Interference (VCCI). If this equipment is used in a domestic environment, radio interference may occur, in which case the user may be required to take corrective actions.
  • Page 31: Product Overview

     “Standards Specifications” on page 8 Functional Description Cavium 8400/3400 Series Adapters are based on 578xx gigabit Ethernet (GbE) and 10GbE converged network interface controllers (C-NIC) that can simultaneously perform accelerated data networking and storage networking on a standard Ethernet network. The C-NIC offers acceleration for popular protocols used in the data center, such as: ...
  • Page 32: Features

    100/1000Mbps or 10Gbps physical layer (PHY). The transceiver is fully compatible with the IEEE 802.3 standard for auto-negotiation of speed. Using the Cavium teaming software, you can split your network into virtual LANs (VLANs) and group multiple network adapters together into teams to provide network load balancing and fault tolerance.
  • Page 33 1–Product Overview Features  Adaptive interrupts  Receive side scaling (RSS)  Transmit side scaling (TSS)  Hardware transparent packet aggregation (TPA)  Manageability  QCC GUI for Windows and Linux. For information, see the Installation Guide: QConvergeConsole GUI (part number SN0051105-00) and QConvergeConsole GUI Help system.
  • Page 34 1–Product Overview Features  Advanced network features  Jumbo frames (up to 9,600 bytes). The OS and the link partner must support jumbo frames. Virtual LANs   IEEE Std 802.3ad Teaming Smart Load Balancing™ (SLB) teaming supported by  QLogic Advanced QLASP) NIC teaming driver on 32-bit/64-bit Windows Server Program (...
  • Page 35: Iscsi

    TCP layer. iSCSI processing can also be offloaded, thereby reducing CPU use even further. Cavium 8400/3400 Series Adapters target best system performance, maintain flexibility as systems change, and support current and future OS convergence and integration.
  • Page 36: Adaptive Interrupt Frequency

    QLogic Control Suite CLI QCS CLI is a console application that you can run from a Windows command prompt or a Linux terminal console. Use QCS CLI to manage Cavium 8400/3400 Series Adapters or any Cavium adapter based on 57xx/57xxx controllers on both local and remote computer systems.
  • Page 37: Qlogic Qconvergeconsole Vcenter Plug-In

    Plug-In, see the User’s Guide: FastLinQ ESXCLI VMware Plug-in (part number BC0151101-00). QLogic QConvergeConsole PowerKit The QLogic QCC PowerKit allows you to manage your Cavium adapters locally and remotely through the PowerShell interface on Windows and Linux. For information about installing and using the QCC PowerKit, see the User’s Guide: PowerShell (part number BC0054518-00).
  • Page 38: Physical Characteristics

    PCIe slot or an optional spare low profile bracket for use in a low profile PCIe slot. Low profile slots are typically found in compact servers. Standards Specifications The Cavium 8400/3400 Series Adapters support the following standards specifications:  IEEE 802.3ae (10Gb Ethernet) ...
  • Page 39: System Requirements

    “Connecting Network Cables and Optical Modules” on page 12 System Requirements Before you install the Cavium 8400/3400 Series Adapters, verify that your system meets the following hardware and operating system requirements. Hardware Requirements  IA32- or EM64T-based computer that meets operating system requirements ®...
  • Page 40: Safety Precautions

    2–Installing the Hardware Safety Precautions Safety Precautions WARNING The adapter is being installed in a system that operates with voltages that can be lethal. Before you open the case of your system, observe the following precautions to protect yourself and to prevent damage to the system components.
  • Page 41: Installing The Network Adapter

    Installing the Network Adapter Installing the Network Adapter Follow these steps to install Cavium 8400/3400 Series Adapters in most systems. For details about performing these tasks on your specific system, refer to the manuals that were supplied with your system.
  • Page 42: Connecting Network Cables And Optical Modules

    Connecting Network Cables and Optical Modules Connecting Network Cables and Optical Modules Cavium 8400/3400 Series Adapters have either an RJ45 connector used for attaching the system to an Ethernet copper-wire segment, or a fiber optic connector for attaching the system to an Ethernet fiber optic segment.
  • Page 43 2–Installing the Hardware Connecting Network Cables and Optical Modules 8400/3400 Series Adapters support standards-compliant 10G DAC cables. Table 2-2 lists the optical modules that have been tested on the 8400/3400 Series Adapters. Table 2-2. Supported Optical Modules Adapter Model Optical Modules ®...
  • Page 44: Setting Up Multi-Boot Agent Driver Software

    (RPL), iSCSI, and bootstrap protocol (BOOTP). MBA is a software module that allows your network computer to boot with the images provided by remote servers across the network. The Cavium MBA driver complies with the PXE 2.1 specification and is released with split binary images. The MBA provides flexibility to users in different environments where the motherboard may or may not have built-in base code.
  • Page 45: Setting Up Mba In A Client Environment

    Cavium network adapter using the Comprehensive Configuration Management (CCM) utility. To configure the MBA driver on LAN on motherboard (LOM) models of the Cavium network adapter, check your system documentation. Both the MBA driver and the CCM utility reside on the adapter Flash memory.
  • Page 46 3–Setting Up Multi-Boot Agent Driver Software Setting Up MBA in a Client Environment To configure the MBA driver with CCM: Restart your system. Press the CTRL+S keys within four seconds after you are prompted to do so. A list of adapters appears. Select the adapter to configure, and then press the ENTER key.
  • Page 47: Controlling Eee

    The Initrd.img file distributed with Red Hat Enterprise Linux, however, does not have a Linux network driver for the Cavium 8400/3400 Series Adapters. This version requires a driver disk for drivers that are not part of the standard distribution. You can create a driver disk for the Cavium 8400/3400 Series Adapters from the image distributed with the installation CD.
  • Page 48: Ms-Dos Undi Or Intel Apitest

    3–Setting Up Multi-Boot Agent Driver Software Setting Up MBA in a Server Environment MS-DOS UNDI or Intel APITEST To boot in MS-DOS mode and connect to a network for the MS-DOS environment, download the Intel PXE Plug-in Development Kit (PDK) from the Intel Web site. This PXE PDK comes with a TFTP/ProxyDHCP/Boot server.
  • Page 49: Windows Driver Software

    “Setting Power Management Options” on page 25 NOTE The QCC GUI is the only GUI management tool across all Cavium adapters. The QLogic Control Suite (QCS) GUI is no longer supported for the 8400/3400 Series Adapters and adapters based on 57xx/57xxx controllers, and has been replaced by the QCC GUI management tool.
  • Page 50: Windows Drivers

    4–Windows Driver Software Windows Drivers Windows Drivers Table 4-1 describes the Windows drivers for the 8400/3400 Series Adapters. Table 4-1. 8400/3400 Series Windows Drivers Windows Description Driver bxvbd This system driver manages all PCI device resources (registers, host interface queues) on the 5706/5708/5709/5716 1GbE network adapters.
  • Page 51: Installing The Driver Software

    When Windows first starts after a hardware device has been installed (such as Cavium 8400/3400 Series Adapters), or after the existing device driver has been removed, the operating system automatically detects the hardware and prompts you to install the driver software for that device.
  • Page 52: Using The Installer

    4–Windows Driver Software Installing the Driver Software Using the Installer If supported and if you will use the Cavium iSCSI Crash Dump utility, it is important to follow the installation sequence: Run the installer. Install the Microsoft iSCSI Software Initiator along with the patch.
  • Page 53: Using Silent Installation

    4–Windows Driver Software Installing the Driver Software Using Silent Installation NOTE  All commands are case sensitive.  For detailed instructions and information about unattended installs, refer to the silent.txt file in the folder. To perform a silent install from within the installer source folder: Issue the following command: setup /s /v/qn To perform a silent upgrade from within the installer source folder:...
  • Page 54: Removing The Device Drivers

    Removing the Device Drivers Removing the Device Drivers Uninstall the Cavium 8400/3400 Series device drivers from your system only through the InstallShield wizard. Uninstalling the device drivers with Device Manager or any other means may not provide a clean uninstall and may cause the system to become unstable.
  • Page 55: Setting Power Management Options

    4–Windows Driver Software Setting Power Management Options Setting Power Management Options You can set power management options to allow the operating system to turn off the controller to save power. If the device is busy doing something (servicing a call, for example) however, the operating system will not shut down the device. The operating system attempts to shut down every possible device only when the computer attempts to go into hibernation.
  • Page 56: Windows Server 2016 And 2019

    Windows Server 2016 and 2019 This chapter describes how to configure VXLAN for Windows Server 2016 and 2019, and Windows Nano Server. Configuring VXLAN This section provides procedures for enabling the virtual extensible LAN (VXLAN) offload and deploying a software-defined network. Enabling VXLAN Offload on the Adapter To enable VXLAN offload on a FastLinQ adapter: Open the FastLinQ adapter properties.
  • Page 57: Deploying A Software Defined Network

    5–Windows Server 2016 and 2019 Configuring VXLAN Figure 5-1 shows the FastLinQ adapter properties on the Advanced page. Figure 5-1. Enabling VXLAN Encapsulated Task Offload Deploying a Software Defined Network To take advantage of VXLAN Encapsulation Task Offload on virtual machines, you must deploy a software defined network (SDN) that utilizes a Microsoft Network Controller.
  • Page 58: Linux Driver Software

    “Teaming with Channel Bonding” on page 46  “Statistics” on page 47 Introduction This section describes the Linux drivers for the Cavium 8400/3400 Series network adapters. Table 6-1 lists the 8400/3400 Series Linux drivers. For information about iSCSI offload in Linux server, see “iSCSI Offload in Linux Server”...
  • Page 59: Limitations

    Linux FCoE kernel mode driver used to provide a translation layer between the Linux SCSI stack and the Cavium FCoE firmware and hardware. In addition, the driver interfaces with the networking layer to transmit and receive encapsulated FCoE frames on behalf of open-fcoe’s libfc/libfcoe for FIP/device discovery.
  • Page 60: Bnx2Fc Driver

    6–Linux Driver Software Packaging bnx2fc Driver The current version of the driver has been tested on 2.6.x kernels, starting from 2.6.32 kernel, which is included in RHEL 6.1 distribution. This driver may not compile on older kernels. Testing was limited to i386 and x86_64 architectures, ®...
  • Page 61: Installing Linux Driver Software

    6–Linux Driver Software Installing Linux Driver Software Installing Linux Driver Software Linux driver software installation steps include:  Installing the Binary RPM Package  Installing the KMP/KMOD Package  Installing Drivers on Ubuntu NOTE If a bnx2x, bnx2i, or bnx2fc driver is loaded and the Linux kernel is updated, you must recompile the driver module if the driver module was installed using the source RPM or the TAR package.
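    If the driver was installed from the source RPM, a typical rebuild after a kernel update is sketched below; the package file name and the rpmbuild output path are placeholders, not values taken from this guide.
        # Rebuild the driver package against the running kernel, then reinstall it.
        rpmbuild --rebuild netxtreme2-<version>.src.rpm
        rpm -ivh --force ~/rpmbuild/RPMS/x86_64/netxtreme2-<version>.x86_64.rpm
        # Reload the driver so the newly built module is used.
        rmmod bnx2x && modprobe bnx2x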
  • Page 62: Installing The Kmp/Kmod Package

    6–Linux Driver Software Installing Linux Driver Software For FCoE offload, after rebooting, create configuration files for all FCoE ethX interfaces: cd /etc/fcoe cp cfg-ethx cfg-<ethX FCoE interface name> NOTE Note that your distribution might have a different naming scheme for Ethernet devices (pXpX or emX instead of ethX).
  • Page 63: Installing Drivers On Ubuntu

    6–Linux Driver Software Installing Linux Driver Software To install the KMP/KMOD package, and load the driver for FCoE without rebooting: Remove FCoE interfaces by issuing the following command: fcoeadm -d <interface_name> Remove the drivers in sequence by issuing the following commands: # rmmod bnx2fc # rmmod cnic # rmmod bnx2x...
  • Page 64: Load And Run Necessary Iscsi Software Components

    For RoCE, FCoE, and iSCSI drivers, refer to the deb packages. Load and Run Necessary iSCSI Software Components The Cavium iSCSI Offload software suite comprises three kernel modules and a user daemon. The required software components can be loaded either manually or through system services.
  • Page 65: Uninstalling Qcc Gui

    6–Linux Driver Software Patching PCI Files (Optional) To unload the driver, use ifconfig to bring down all eth# interfaces opened by the driver, and then issue the following command: rmmod bnx2x NOTE The rmmod bnx2x command will not remove the cnic module or the offload protocol driver modules.
  • Page 66: Network Installations

    6–Linux Driver Software Network Installations Network Installations For network installations through NFS, FTP, or HTTP (using a network boot disk or PXE), you may need a driver disk that contains the bnx2x driver. The driver disk includes images for the most recent Red Hat and SUSE versions. Boot drivers for other Linux versions can be compiled by modifying the makefile and the make environment.
  • Page 67: Disable_Tpa

    6–Linux Driver Software Setting Values for Optional Properties disable_tpa Use the optional parameter disable_tpa to disable the transparent packet aggregation (TPA) feature. By default, the driver aggregates TCP packets. To disable the TPA feature on all 8400/3400 Series network adapters in the system, set the disable_tpa parameter to 1: insmod bnx2x.ko disable_tpa=1 modprobe bnx2x disable_tpa=1
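    To keep a module option such as disable_tpa across reboots, a common approach (standard modprobe configuration, not specific to this guide) is:
        # Apply the option every time the bnx2x module loads.
        echo "options bnx2x disable_tpa=1" > /etc/modprobe.d/bnx2x.conf
        # If the driver is loaded from the initramfs, rebuild it (for example, dracut -f
        # on RHEL-based systems) so the option also takes effect at boot.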
  • Page 68: Multi_Mode

    6–Linux Driver Software Setting Values for Optional Properties multi_mode Use the optional parameter multi_mode on systems that support multiqueue networking. Multiqueue networking on the receive side depends only on MSI-X capability of the system; multiqueue networking on the transmit side is supported only on kernels starting from 2.6.27.
  • Page 69: Num_Vfs

    6–Linux Driver Software Setting Values for Optional Properties num_vfs Use the num_vfs parameter to activate SR-IOV. For example: modprobe bnx2x num_vfs=<num of VFs, 1..64> The actual number of VFs is derived from this parameter, as well as the VF maximum value configured by CCM. autogreen The autogreen parameter forces the specific AutoGrEEEN behavior.
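    After loading the driver with num_vfs, the created virtual functions can be checked with standard tools; the interface name eth0 below is a placeholder.
        # VF PCI devices appear as additional Ethernet functions.
        lspci | grep -i "virtual function"
        # The physical function lists its VFs (index, MAC, VLAN) in ip link output.
        ip link show eth0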
  • Page 70: Use_Random_Vf_Mac

    6–Linux Driver Software Setting Values for Optional Properties use_random_vf_mac When the use_random_vf_mac parameter is set to 1, all created VFs have a random forced MAC. This MAC can be changed through the hypervisor with the command ip link set dev <pf device> vf <index> mac <hw mac>. This MAC cannot be changed locally, that is, by issuing the command ifconfig <vf device>...
  • Page 71: Qs_Per_Cos

    6–Linux Driver Software Setting Values for Optional Properties For example, set the pri_map parameter to 0x22221100 to map priority 0 and 1 to CoS 0, map priority 2 and 3 to CoS 1, and map priority 4–7 to CoS 2. In another example, set the pri_map parameter to 0x11110000 to map priority 0 to 3 to CoS 0, and map priority 4 to 7 to CoS 1.
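    For example, the first mapping above is applied when loading the driver as follows (a sketch; combine pri_map with any other options you normally use):
        # Map priorities 0-1 to CoS 0, 2-3 to CoS 1, and 4-7 to CoS 2.
        modprobe bnx2x pri_map=0x22221100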
  • Page 72: Bnx2I Driver Parameters

    CAUTION Do not use error_mask if you are not sure about the consequences. These values are to be discussed with the Cavium development team on a case-by-case basis. This parameter is just a mechanism to work around iSCSI implementation issues on the target side. Without proper knowledge of iSCSI protocol details, users are advised not to experiment with these parameters.
  • Page 73: Sq_Size

    QP size increases, the quantity of connections supported decreases. With the default values, the adapters can offload 28 connections. Note that Cavium validation is limited to a power of 2. rq_size Use the...
  • Page 74: Event_Coal_Div

    The tcp_buf_size parameter controls the size of the transmit and receive buffers for offload connections. Default: 64 CAUTION Cavium does not recommend changing this parameter. debug_logging The debug_logging bit mask enables and disables driver debug logging. Default: None. For example: insmod bnx2fc.ko debug_logging=0xff
  • Page 75: Ooo_Enable

    6–Linux Driver Software Setting Values for Optional Properties ooo_enable The ooo_enable (enable TCP out-of-order) parameter enables and disables the TCP out-of-order RX handling feature on offloaded iSCSI connections. Default: TCP out-of-order feature is ENABLED. For example: insmod bnx2i.ko ooo_enable=1 modprobe bnx2i ooo_enable=1 bnx2fc Driver Parameters Supply the optional parameter debug_logging as a command line argument to the insmod or modprobe command for bnx2fc.
  • Page 76: Cnic_Dump_Kwqe_Enable

    6–Linux Driver Software Driver Defaults cnic_dump_kwqe_enable The cnic_dump_kwqe_enable parameter enables and disables single work-queue element message (kwqe) logging. By default, this parameter is disabled. Driver Defaults This section defines the defaults for the bnx2x driver. bnx2x Driver Defaults Defaults for the bnx2x Linux driver are listed in Table 6-2.
  • Page 77: Statistics

    6–Linux Driver Software Statistics Statistics Detailed statistics and configuration information can be viewed using the ethtool utility. See the ethtool man page for more information. 83840-546-00 N...
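    For example (the interface name eth0 is a placeholder):
        ethtool -S eth0    # per-port and per-queue driver statistics
        ethtool -i eth0    # driver name, version, and firmware version
        ethtool eth0       # link settings (speed, duplex, autonegotiation)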
  • Page 78: Vmware Driver Software

    (for L2 networking) and on behalf of the bnx2fc (FCoE protocol) and C-NIC drivers. cnic This driver provides the interface between Cavium’s upper layer protocol (storage) drivers and the FastLinQ 8400/3400 Series 10Gb network adapters. The converged network interface controller (C-NIC) module works with the bnx2 and bnx2x network drivers in the downstream, and the bnx2fc (FCoE) and bnx2i (iSCSI) drivers in the upstream.
  • Page 79: Packaging

    Series 10Gb network adapters. bnx2fc This Cavium VMware FCoE driver is a kernel mode driver that provides a translation layer between the VMware SCSI stack and the Cavium FCoE firmware and hardware. In addition, the bnx2fc driver interfaces with the networking layer to transmit and receive encapsulated FCoE frames on behalf of the Open-FCoE libfc/libfcoe for FIP and device discovery.
  • Page 80 7–VMware Driver Software Download, Install, and Update Drivers Under Keyword, type the adapter name in quotes (for example, "QLE3442"), and then click Update and View Results (Figure 7-1). Figure 7-1. Selecting an Adapter Figure 7-2 shows the available QLE3442 driver versions. Figure 7-2.
  • Page 81 7–VMware Driver Software Download, Install, and Update Drivers Click the model link to show a listing of all of the driver packages (Figure 7-4). Click the ESXi version that you want, and then click the link to go to the VMware driver download Web page. Figure 7-4.
  • Page 82 7–VMware Driver Software Download, Install, and Update Drivers Log in to the VMware driver download page, and then click Download to download the driver package that you want (Figure 7-5). Figure 7-5. Download the VMware Driver Package This package is double zipped; you must unzip the package once before copying the offline bundle zip file to the ESXi host.
  • Page 83: Driver Parameters

    7–VMware Driver Software Driver Parameters Driver Parameters The following sections describe the parameters for these VMware ESXi drivers:  bnx2x Driver Parameters  cnic Driver Parameters  qfle3 Driver Parameters  qfle3i Driver Parameters  qfle3f Driver Parameters bnx2x Driver Parameters Several optional bnx2x driver parameters can be supplied as a command line argument to the vmkload_mod command.
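    On ESXi, module options are typically made persistent with esxcfg-module rather than passed to vmkload_mod directly; a minimal sketch (the parameter values are illustrative only) is:
        # Set bnx2x options; they take effect after a reboot or driver reload.
        esxcfg-module -s "multi_mode=1 dropless_fc=1" bnx2x
        # Display the option string currently configured for the module.
        esxcfg-module -g bnx2x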
  • Page 84: Dropless_Fc

    7–VMware Driver Software Driver Parameters dropless_fc The dropless_fc parameter is set to (by default) to enable a complementary flow control mechanism on 8400/3400 Series Adapters. The normal flow control mechanism is to send pause frames when the on-chip buffer (BRB) is reaching a specific level of occupancy, which is a performance-targeted flow control mechanism.
  • Page 85: Qs_Per_Cos

    7–VMware Driver Software Driver Parameters For example, set the pri_map parameter to 0x22221100 to map priority 0 and 1 to CoS 0, map priority 2 and 3 to CoS 1, and map priority 4 to 7 to CoS 2. In another example, set the pri_map parameter to 0x11110000 to map priority 0 to 3 to CoS 0, and map priority 4 to 7 to CoS 1.
  • Page 86: Cnic Driver Parameters

    #esxcfg-module -s Param=Value qfle3 debug_mask Set the debug_mask module parameter only for debug purposes, as the additional logging will flood the log with numerous messages. Cavium does not recommend setting this parameter for regular driver use. The valid values for debug_mask are: 0x00000001
  • Page 87: Enable_Fwdump

    7–VMware Driver Software Driver Parameters 0x00000400 /* lro processing 0x00000800 /* uplink debug 0x00001000 /* queue debug 0x00002000 /* hw debug 0x00004000 /* cmp debug 0x00008000 /* start process debug 0x00010000 /* debug assert 0x00020000 /* debug poll 0x00040000 /* debug TXSG 0x00080000 /* debug crash 0x00100000
  • Page 88: Intr_Mode

    7–VMware Driver Software Driver Parameters intr_mode The intr_mode parameter sets the interrupt mode: Value Mode Auto (default) MSI-X mtu The mtu parameter specifies the MTU when the driver is loaded. Valid values are in the range of 0–9000. (default: 1500) offload_flags The offload_flags parameter specifies the offload flags: Value Flag VXLAN offload GENEVE offload
  • Page 89: Txqueue_Nr

    7–VMware Driver Software Driver Parameters txqueue_nr The txqueue_nr parameter sets the number of transmit queues. Set to 0 for Auto. Set to a value in the range of 1–8 for the number of fixed queues. The default is 4 queues. txring_bd_nr The txring_bd_nr parameter sets the number of transmit BDs.
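    A combined qfle3 example using the parameters described above (values are illustrative; reload the driver or reboot for them to take effect):
        # 9000-byte MTU and 8 fixed transmit queues for the qfle3 driver.
        esxcfg-module -s "mtu=9000 txqueue_nr=8" qfle3
        esxcfg-module -g qfle3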
  • Page 90: Dropless_Fc

    7–VMware Driver Software Driver Parameters dropless_fc The dropless_fc parameter is set to (by default) to enable a complementary flow control mechanism on 8400/3400 Series Adapters. The normal flow control mechanism is to send pause frames when the on-chip buffer (BRB) is reaching a specific level of occupancy, which is a performance-targeted flow control mechanism.
  • Page 91: Qfle3I_Max_Task_Pgs

    CAUTION Do not use error_mask if you are not sure about the consequences. These values are to be discussed with the Cavium development team on a case-by-case basis. This parameter is just a mechanism to work around iSCSI implementation issues on the target side. Without proper knowledge of iSCSI protocol details, users are advised not to experiment with these parameters.
  • Page 92: Event_Coal_Div

    7–VMware Driver Software Driver Parameters event_coal_div The event_coal_div parameter sets the event coalescing divide factor. The default value is 1. event_coal_min The event_coal_min parameter sets the minimum number of event-coalescing commands. The default is 24. ooo_enable The ooo_enable (enable TCP out-of-order) parameter enables and disables the TCP out-of-order RX handling feature on offloaded iSCSI connections.
  • Page 93: Rq_Size

    ASYNC/NOP/REJECT messages and SCSI sense data. Default: 16 Range: 16 to 32 Note that Cavium validation is limited to a power of 2; for example, 16 or 32. sq_size Use the sq_size parameter to choose the send queue size for offloaded...
  • Page 94: Qfle3F_Debug_Level

    7–VMware Driver Software Driver Parameters qfle3f_debug_level The qfle3f_debug_level parameter enables additional messaging from the driver. Set to 0 (default) to disable additional messaging. Set to 1 to enable additional messaging. qfle3f_devloss_tmo The qfle3f_devloss_tmo parameter sets the remote LUN device loss time-out value (in seconds).
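    A sketch for the qfle3f parameters described here (the values shown are illustrative only):
        # Enable additional FCoE driver messaging and set a 60-second device-loss timeout.
        esxcfg-module -s "qfle3f_debug_level=1 qfle3f_devloss_tmo=60" qfle3f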
  • Page 95: Driver Defaults

    7–VMware Driver Software Driver Defaults Driver Defaults The following sections list the defaults for the 8400/3400 Series Adapters drivers. bnx2x Driver Defaults Defaults for the bnx2x VMware ESXi driver are listed in Table 7-2. Table 7-2. bnx2x Driver Defaults Parameter Default Speed Control Autonegotiation with all speeds advertised...
  • Page 96: Unloading And Removing Driver

    The following are the most common sample messages that may be logged in the file /var/log/messages. Driver Sign On Cavium 8400/3400 Series 10Gigabit Ethernet Driver bnx2x 0.40.15 ($DateTime: 2007/11/22 05:32:40 $) NIC Detected eth0: Cavium 8400/3400 Series XGb (A1) PCI-E x8 2.5GHz found at mem e8800000, IRQ 16, node addr...
  • Page 97: Msi-X Enabled Successfully

    7–VMware Driver Software Driver Defaults MSI-X Enabled Successfully bnx2x: eth0: using MSI-X Link Up and Speed Indication bnx2x: eth0 NIC Link is Up, 10000 Mbps full duplex, receive & transmit flow control ON Link Down Indication bnx2x: eth0 NIC Link is Down Memory Limitation If you see messages in the log file that look like the following, it indicates that the ESXi host is severely strained:...
  • Page 98: Fcoe Support

    CPUs on the machine. FCoE Support This section describes the contents and procedures associated with installation of the VMware software package for supporting Cavium FCoE C-NICs. To enumerate the FCoE-Offload instance of the required ports, you must complete the following procedures: ...
  • Page 99 The label Software FCoE is a VMware term used to describe initiators that depend on the inbox FCoE libraries and utilities. Cavium's FCoE solution is a fully state connection-based hardware offload solution designed to significantly reduce the CPU burden encumbered by a non-offload software initiator.
  • Page 100: Verifying Installation

    NPIV is not currently supported with this release on ESXi, due to lack of supporting inbox components.  Non-offload FCoE is not supported with offload-capable Cavium devices. Only the full hardware offload path is supported. Supported Distributions The FCoE and DCB feature set is supported on VMware ESXi 5.5 and later.
  • Page 101: Upgrading The Firmware

    Upgrading the Firmware Cavium provides a Windows and Linux utility for upgrading adapter firmware and boot code. Each utility executes as a console application that can be run from a command prompt. Upgrade VMware firmware with the QCC VMware vSphere plug-in;...
  • Page 102 8–Upgrading the Firmware Upgrading Firmware for Windows Upgrading SWIM3B image: to version SWIM3 7.12.31 Upgrading SWIM4B image: to version SWIM4 7.12.31 Upgrading SWIM5B image: to version SWIM5 7.12.31 Upgrading SWIM6B image: to version SWIM6 7.12.31 Upgrading SWIM7B image: to version SWIM7 7.12.31 Upgrading SWIM8B image: to version SWIM8 7.12.31 Forced upgrading E3_EC_V2 image: from ver N/A to ver N/A Forced upgrading E3_PCIE_V2 image: from ver N/A to ver N/A...
  • Page 103 8–Upgrading the Firmware Upgrading Firmware for Windows Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1 Forced upgrading MBA image: from ver PCI30 MBA 7.11.3 ;EFI x64 7.10.54 to ver PCI30 7.12.4 C Brd Name - ---- ------------ --- ------------------------------------------------------ 0 16A1 000E1E508E20 Yes [0061] QLogic 57840 10 Gigabit Ethernet #61 1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62 Upgrading MBA...
  • Page 104: Upgrading Firmware For Linux

    8–Upgrading the Firmware Upgrading Firmware for Linux The System Reboot is required in order for the upgrade to take effect. Quitting program ... Program Exit Code: (95) Upgrading Firmware for Linux To upgrade firmware for Linux: Go to the Download and Documentation page at www.marvell.com, as described in “Downloading Updates and Documentation”...
  • Page 105 8–Upgrading the Firmware Upgrading Firmware for Linux C Brd Name - ---- ------------ --- ------------------------------------------------------ 0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1) 1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2) 2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3) 3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4) 4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1) 5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2)
  • Page 106 8–Upgrading the Firmware Upgrading Firmware for Linux Forced upgrading MBA image: from ver PCI30_CLP MBA 7.10.33;EFI x64 7.10.50 to ver PCI30_CLP MBA 7.12.4 C Brd Name - ---- ------------ --- ------------------------------------------------------ 0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1) 1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2) 2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3) 3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
  • Page 107 8–Upgrading the Firmware Upgrading Firmware for Linux NIC is not supported. C Brd Name - ---- ------------ --- ------------------------------------------------------ 0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1) 1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2) 2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3) 3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4) 4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1)
  • Page 108 8–Upgrading the Firmware Upgrading Firmware for Linux Quitting program ... Program Exit Code: (95) Successfully upgraded ibootv712.01 ****************************************************************************** QLogic Firmware Upgrade Utility for Linux v2.7.13 ****************************************************************************** C Brd Name - ---- ------------ --- ------------------------------------------------------ 0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1) 1 1639 0026B942B540 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em2) 2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3) 3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4)
  • Page 109 8–Upgrading the Firmware Upgrading Firmware for Linux 2 1639 0026B942B542 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em3) 3 1639 0026B942B544 Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em4) 4 16A1 000E1E503150 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p1) 5 16A1 000E1E503152 Yes BCM57840 NetXtreme II 10 Gigabit Ethernet rev 11 (p1p2) Forced upgrading L2T image: from ver L2T 7.10.31 to ver L2T 7.10.31 Forced upgrading L2C image: from ver L2C 7.10.31 to ver L2C 7.10.31...
  • Page 110: Configuring Iscsi Protocol

    For both Windows and Linux operating systems, iSCSI boot can be configured to boot with two distinct paths: non-offload (also known as Microsoft/Open-iSCSI initiator) and offload (Cavium’s offload iSCSI driver or HBA). Configure the path in the iSCSI Configuration utility, General Parameters window, by setting the HBA Boot Mode option.
  • Page 111: Iscsi Boot Setup

    9–Configuring iSCSI Protocol iSCSI Boot iSCSI Boot Setup The iSCSI boot setup includes:  Configuring the iSCSI Target  Configuring iSCSI Boot Parameters  Configuring iSCSI Boot Parameters on VMware  MBA Boot Protocol Configuration  iSCSI Boot Configuration  Enabling CHAP Authentication ...
  • Page 112: Configuring Iscsi Boot Parameters

    9–Configuring iSCSI Protocol iSCSI Boot Configuring iSCSI Boot Parameters Configure the Cavium iSCSI boot software for either static or dynamic configuration. For configuration options available from the General Parameters window, see Table 9-1. Table 9-1 lists parameters for both IPv4 and IPv6. Parameters specific to either IPv4 or IPv6 are noted.
  • Page 113: Configuring Iscsi Boot Parameters On Vmware

    9–Configuring iSCSI Protocol iSCSI Boot Table 9-1. Configuration Options (Continued) Option Description Target as First HDD Allows specifying that the iSCSI target drive will appear as the first hard drive in the system. LUN Busy Retry Count Controls the quantity of connection retries the iSCSI Boot initiator will attempt if the iSCSI target LUN is busy.
  • Page 114: Mba Boot Protocol Configuration

    9–Configuring iSCSI Protocol iSCSI Boot Set the target parameters. Configure the target system's port IP address, target name, and login information. If authentication is required, configure the CHAP ID and CHAP secret parameters. On the storage array, configure the Boot LUN ID (the LUN on the target that is used for the vSphere host installation and subsequent boots).
  • Page 115 9–Configuring iSCSI Protocol iSCSI Boot In the CCM Device List (Figure 9-2), press the UP ARROW or DOWN ARROW keys to select a device, and then press ENTER. Figure 9-2. CCM Device List On the Main Menu, select MBA Configuration (Figure 9-3), and then press ENTER.
  • Page 116: Iscsi Boot Configuration

    9–Configuring iSCSI Protocol iSCSI Boot On the MBA Configuration Menu (Figure 9-4), press the UP ARROW or DOWN ARROW keys to select Boot Protocol. Press the LEFT ARROW or RIGHT ARROW keys to change the boot protocol option to iSCSI. Press ENTER.
  • Page 117 9–Configuring iSCSI Protocol iSCSI Boot To configure the iSCSI boot parameters using static configuration: On the Main Menu, select iSCSI Boot Configuration (Figure 9-5), and then press ENTER. Figure 9-5. Selecting iSCSI Boot Configuration On the iSCSI Boot Main Menu, select General Parameters (Figure 9-6), and then press ENTER.
  • Page 118 9–Configuring iSCSI Protocol iSCSI Boot  HBA Boot Mode: As required NOTE For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM or mounted bootable OS installation image, set Boot to iSCSI Target to One Time Disabled. This setting causes the system not to boot from the configured iSCSI target after establishing a successful login and connection.
  • Page 119 9–Configuring iSCSI Protocol iSCSI Boot On the 1st Target Parameters Menu, enable Connect to connect to the iSCSI target. Type values for the following parameters for the iSCSI target, and then press ENTER:  IP Address  TCP Port  Boot LUN ...
  • Page 120 9–Configuring iSCSI Protocol iSCSI Boot Dynamic iSCSI Boot Configuration In a dynamic configuration, you only need to specify that the system’s IP address and target/initiator information are provided by a DHCP server (see IPv4 and IPv6 configurations in “Configuring the DHCP Server to Support iSCSI Boot” on page 92).
  • Page 121 9–Configuring iSCSI Protocol iSCSI Boot  DHCP Vendor ID: As required Link Up Delay Time: As required   Use TCP Timestamp: As required  Target as First HDD: As required  LUN Busy Retry Count: As required  IP Version: As required ...
  • Page 122: Enabling Chap Authentication

    DHCP iSCSI Boot Configuration for IPv6 DHCP iSCSI Boot Configurations for IPv4 The DHCP protocol includes several options that provide configuration information to the DHCP client. For iSCSI boot, Cavium adapters support the following DHCP configurations:  DHCP Option 17, Root Path ...
  • Page 123 9–Configuring iSCSI Protocol iSCSI Boot DHCP Option 17, Root Path Option 17 is used to pass the iSCSI target information to the iSCSI client. The format of the root path as defined in IETC RFC 4173 is: "iscsi:"<servername>":"<protocol>":"<port>":"<LUN>":"<targetname>" Table 9-2 lists the DHCP option 17 parameters.
  • Page 124: Dhcp Iscsi Boot Configuration For Ipv6

     DHCPv6 Option 16, Vendor Class Option  DHCPv6 Option 17, Vendor-specific Information NOTE The DHCPv6 standard Root Path option is not yet available. Cavium suggests using Option 16 or Option 17 for dynamic iSCSI boot IPv6 support. 83840-546-00 N...
  • Page 125: Configuring The Dhcp Server

    9–Configuring iSCSI Protocol iSCSI Boot DHCPv6 Option 16, Vendor Class Option DHCPv6 Option 16 (vendor class option) must be present and must contain a string that matches your configured DHCP Vendor ID parameter. The DHCP Vendor ID value is QLGC ISAN, as shown in General Parameters of the iSCSI Boot Configuration menu.
  • Page 126: Preparing The Iscsi Boot Image

    Bindview.exe (Windows Server 2008 R2 only; see KB976042)  To set up Windows Server 2008 iSCSI boot: Remove any local hard drives on the system to be booted (the “remote system”). Load the latest Cavium MBA and iSCSI boot images onto NVRAM of the adapter. 83840-546-00 N...
  • Page 127 9–Configuring iSCSI Protocol iSCSI Boot Configure the BIOS on the remote system to have the Cavium MBA as the first bootable device, and the CDROM as the second device. Configure the iSCSI target to allow a connection from the remote device.
  • Page 128 Remove any local hard drives on the system to be booted (the “remote system”). Load the latest Cavium MBA and iSCSI boot images into the NVRAM of the adapter. Configure the BIOS on the remote system to have the Cavium MBA as the first bootable device and the CDROM as the second device.
  • Page 129 Following another system restart, check and verify that the remote system is able to boot to the desktop. After Windows Server 2012 boots to the OS, Cavium recommends running the driver installer to complete the Cavium drivers and application installation.
  • Page 130 Note that SLES 10.x and SLES 11 have support only for the non-offload path. To set up Linux iSCSI boot: For driver update, obtain the latest Cavium Linux driver CD. Configure the iSCSI Boot Parameters for DVD direct install to target by disabling the boot-from-target option on the network adapter.
  • Page 131 9–Configuring iSCSI Protocol iSCSI Boot Reboot the system. The system connects to the iSCSI target, and then boots from the CD/DVD drive. Follow the corresponding OS instructions. RHEL 5.5—At the "boot:" prompt, type linux dd, and then press ENTER. SUSE 11.x—Choose installation, and then at the boot option, type withiscsi=1 netsetup=1.
  • Page 132 9–Configuring iSCSI Protocol iSCSI Boot Make sure 2, 3, and 5 run levels of iSCSI service are on as follows: chkconfig -level 235 iscsi on For Red Hat 6.0, make sure Network Manager service is stopped and disabled. Install iscsiuio if needed (not required for SUSE 10). Install linux-nx2 package if needed.
  • Page 133 9–Configuring iSCSI Protocol iSCSI Boot ### BEGIN INIT INFO # Provides: iscsiboot # Required-Start: # Should-Start: boot.multipath # Required-Stop: # Should-Stop: $null # Default-Start: # Default-Stop: # Short-Description: iSCSI initiator daemon root-fs support # Description: Starts the iSCSI initiator daemon if the root-filesystem is on an iSCSI device ### END INIT INFO ISCSIADM=/sbin/iscsiadm...
  • Page 134 9–Configuring iSCSI Protocol iSCSI Boot ip=${i%%:*} STARTUP=`$ISCSIADM -m node -p $ip -T $target 2> /dev/null | grep "node.conn\[0\].startup" | cut -d' ' -f3` if [ "$STARTUP" -a "$STARTUP" != "onboot" ] ; then $ISCSIADM -m node -p $ip -T $target -o update -n node.conn[0].startup -v onboot done # Reset status of this service...
  • Page 135 9–Configuring iSCSI Protocol iSCSI Boot echo "Usage: $0 {start|stop|status|restart|reload}" exit 1 esac rc_exit VMware iSCSI Boot from SAN The 8400/3400 Series adapters are VMware-dependent hardware iSCSI-Offload adapters. The iSCSI-Offload functionality partially depends on the VMware Open-iSCSI library and networking stack for iSCSI configuration and the management interfaces provided by VMware.
  • Page 136: Booting

    9–Configuring iSCSI Protocol iSCSI Boot Installation begins, in which the following occurs: As part of the installation process, a memory-only stateless VMkernel is loaded. The VMkernel discovers suitable LUNs for installation, one of which is the remote iSCSI LUN. For the VMkernel iSCSI driver to communicate with the target, the TCP/IP protocol must be set up (as part of the startup init script).
  • Page 137: Configuring Vlans For Iscsi Boot

    9–Configuring iSCSI Protocol iSCSI Boot Booting from iSCSI LUN on VMware After installing the boot image onto the remote LUN, you may need to change the iSCSI configuration. If the One Time Disable option was not used, then the Boot to iSCSI Target setting must be changed from Disabled to Enabled.
  • Page 138 9–Configuring iSCSI Protocol iSCSI Boot On the Main Menu, select MBA Configuration (Figure 9-10), and then press ENTER. Figure 9-10. Configuring VLANs—Multiboot Agent Configuration On the MBA Configuration Menu (Figure 9-11), press the UP ARROW or DOWN ARROW key to select each of following parameters. ...
  • Page 139: Other Iscsi Boot Considerations

    Marvell Web site. Install the bibt package on your Linux system. You can get this package from the Cavium CD. Delete all ifcfg-eth* files. Configure one port of the network adapter to connect to the iSCSI target (for instructions, see “Configuring the iSCSI Target”...
  • Page 140: Troubleshooting Iscsi Boot

    9–Configuring iSCSI Protocol iSCSI Boot Use the dd command to copy from the local hard drive to the iSCSI target. When dd is done, issue the sync command twice, log out, and then log in to the iSCSI target again. On all partitions created on the iSCSI target, issue the fsck command. Change to the /OPT/bcm/bibt folder and run the iscsi_setup.sh script to create the initrd images.
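    A minimal sketch of the copy-and-verify steps above, assuming /dev/sda is the local disk and /dev/sdc is the attached iSCSI LUN; verify the device names on your system before running anything destructive:
        dd if=/dev/sda of=/dev/sdc bs=4k conv=noerror   # copy the local disk to the iSCSI LUN
        sync; sync                                      # flush buffers twice, as the procedure requires
        fsck /dev/sdc1                                  # repeat for each partition created on the target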
  • Page 141 9–Configuring iSCSI Protocol iSCSI Boot Problem: The Cavium iSCSI Crash Dump utility will not work properly to capture a memory dump when the link speed for iSCSI boot is configured for 10Mbps or 100Mbps. Solution: The iSCSI Crash Dump utility is supported when the link speed for iSCSI boot is configured for 1Gbps or 10Gbps.
  • Page 142: Iscsi Crash Dump

    Solution: Disable the Console Redirect setting in BIOS, and then reboot. iSCSI Crash Dump If you intend to use the Cavium iSCSI Crash Dump utility, it is important that you follow the iSCSI Crash Dump driver installation procedure. For more information, see “Using the Installer”...
  • Page 143: Installing Cavium Drivers

    9–Configuring iSCSI Protocol iSCSI Offload in Windows Server  Installing the Microsoft iSCSI Initiator  Configuring Microsoft Initiator to Use Cavium’s iSCSI Offload Installing Cavium Drivers Install the Windows drivers as described in Chapter 4 Windows Driver Software. Enabling and Disabling iSCSI-Offload...
  • Page 144: Installing The Microsoft Iscsi Initiator

    9–Configuring iSCSI Protocol iSCSI Offload in Windows Server Click the Apply button. Figure 9-13. Enabling or Disabling iSCSI-Offload on Windows The iSCSI-Offload instance appears in QCC GUI when the bxOIS driver attaches (loads). NOTE  To enable or disable iSCSI-Offload in Single Function or NPAR mode on Windows or Linux using the QLogic Control Suite CLI, see the User’s Guide: QLogic Control Suite CLI (part number BC0054511-00).
  • Page 145: Configuring Microsoft Initiator To Use Cavium's Iscsi Offload

    After the IP address is configured for the iSCSI adapter, you must use Microsoft Initiator to configure and add a connection to the iSCSI target using the Cavium iSCSI adapter. For more details on Microsoft Initiator, see Microsoft’s user guide.
  • Page 146 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server In the Initiator Node Name Change dialog box (Figure 9-15), type the initiator IQN name, and then click OK. Figure 9-15. iSCSI Initiator Node Name Change In the iSCSI Initiator Properties, click the Discovery tab (Figure 9-16), and then on the Discovery page under Target Portals, click Add.
  • Page 147 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server In the Add Target Portal dialog box, type the IP address of the target, and then click Advanced (Figure 9-17). Figure 9-17. Target Portal IP Address 83840-546-00 N...
  • Page 148 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server Complete the Advanced Settings dialog box as follows: On the General page under Connect using, select QLogic 10 Gigabit Ethernet iSCSI Adapter as the Local adapter (Figure 9-18). Figure 9-18. Selecting the Local Adapter 83840-546-00 N...
  • Page 149 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server For the Initiator IP, select the adapter IP address, and then click OK to save your changes (Figure 9-19). Figure 9-19. Selecting the Initiator IP Address 83840-546-00 N...
  • Page 150 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server Complete the iSCSI Initiator Properties dialog box as follows: Click the Discovery tab, and then on the Discovery page (Figure 9-20), click OK to add the target portal. Figure 9-20. Adding the Target Portal 83840-546-00 N...
  • Page 151 9-21), and then on the Targets page, select the target. Click Log On to log into your iSCSI target using the Cavium iSCSI adapter. Figure 9-21. Logging on to the iSCSI Target In the Log On to Target dialog box (Figure 9-22), click Advanced.
  • Page 152: Iscsi Offload Faqs

    9–Configuring iSCSI Protocol iSCSI Offload in Windows Server Click OK to close the Microsoft Initiator. To format your iSCSI partition, use Disk Manager. NOTE  Teaming does not support iSCSI adapters.  Teaming does not support NDIS adapters that are in the boot path. ...
  • Page 153 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server Table 9-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Severity Message Number Error Maximum command sequence number is not serially greater than expected command sequence number in login response. Dump data contains Expected Command Sequence number fol- lowed by Maximum Command Sequence number.
  • Page 154 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server Table 9-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Severity Message Number Error Header Digest is required by the initiator, but target did not offer it. Error Data Digest is required by the initiator, but target did not offer it.
  • Page 155 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server Table 9-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Severity Message Number Information A connection to the target was lost, but Initiator suc- cessfully reconnected to the target. Dump data contains the target name.
  • Page 156 9–Configuring iSCSI Protocol iSCSI Offload in Windows Server Table 9-5. Offload iSCSI (OIS) Driver Event Log Messages (Continued) Message Severity Message Number Error Target failed to respond in time to a Text Command sent to renegotiate iSCSI parameters. Error Target failed to respond in time to a logout request sent in response to an asynchronous message from the target.
  • Page 157: Iscsi Offload In Linux Server

    This section provides iSCSI offload information in the Linux Server:  Open iSCSI User Applications  User Application, qlgc_iscsiuio  Bind iSCSI Target to Cavium iSCSI Transport Name  VLAN Configuration for iSCSI Offload (Linux)  Making Connections to iSCSI Targets ...
  • Page 158: User Application, Qlgc_Iscsiuio

    # qlgc_iscsiuio -v Start brcm_iscsiuio: # qlgc_iscsiuio Bind iSCSI Target to Cavium iSCSI Transport Name In Linux, each iSCSI port is an interface known as iface. By default, the open-iscsi daemon connects to discovered targets using a software initiator (transport name = tcp) with the iface name default.
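    A hedged example of creating an iface bound to the bnx2i offload transport and using it for discovery and login; the iface name, MAC address, target IP, and port are placeholders:
        # Create an iface record and bind it to the bnx2i transport.
        iscsiadm -m iface -I bnx2i.00:0e:1e:50:8e:20 -o new
        iscsiadm -m iface -I bnx2i.00:0e:1e:50:8e:20 -o update -n iface.transport_name -v bnx2i
        # Discover targets through the offload iface, then log in.
        iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260 -I bnx2i.00:0e:1e:50:8e:20
        iscsiadm -m node -p 192.168.1.100:3260 -I bnx2i.00:0e:1e:50:8e:20 --login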
  • Page 159: Vlan Configuration For Iscsi Offload (Linux)

    Iface.mtu = 0 Iface.port = 0 #END Record NOTE Although not strictly required, Cavium recommends configuring the same VLAN ID on the iface.iface_num field for iface file identification purposes. Setting the VLAN ID on the Ethernet Interface If using RHEL 5.x versions of Linux, you should configure the iSCSI VLAN on the Ethernet interface.
  • Page 160: Making Connections To Iscsi Targets

    9–Configuring iSCSI Protocol iSCSI Offload in Linux Server To get detailed information about the VLAN interface, issue the following command: # cat /proc/net/vlan/ethx.<vlanid> Preserve the VLAN configuration across reboots by adding it to configuration files. Configure the VLAN interface in /etc/sysconfig/network-scripts.
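    On Red Hat-based systems, a persistent VLAN interface definition typically looks like the sketch below; the device name, VLAN ID, and addressing are placeholders:
        # /etc/sysconfig/network-scripts/ifcfg-eth0.100
        DEVICE=eth0.100
        VLAN=yes
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=192.168.100.10
        NETMASK=255.255.255.0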
  • Page 161: List All Sessions

     In the scenario where multiple C-NIC devices are in the system and the system is booted with Cavium’s iSCSI boot solution, ensure that the iSCSI node under /etc/iscsi/nodes for the boot target is bound to the NIC that is used for booting.
  • Page 162 9–Configuring iSCSI Protocol iSCSI Offload in VMware Server Similar to bnx2fc, bnx2i is a kernel mode driver used to provide a translation layer between the VMware SCSI stack and the QLogic iSCSI firmware/hardware. Bnx2i functions under the open-iscsi framework. iSCSI traffic on the network may be isolated in a VLAN to segregate it from other traffic.
  • Page 163 9–Configuring iSCSI Protocol iSCSI Offload in VMware Server Configure the VLAN on VMkernel (Figure 9-24). Figure 9-24. Configuring the VLAN on VMkernel 83840-546-00 N...
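As an alternative to the vSphere Client dialog shown in Figure 9-24, the same VLAN assignment can usually be made from the ESXi shell for a standard vSwitch port group; the port group name and VLAN ID below are placeholders:
# esxcli network vswitch standard portgroup set --portgroup-name="iSCSI VMkernel" --vlan-id=100
This applies to standard vSwitch port groups; distributed vSwitches are configured through vCenter instead.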
  • Page 164: Configuring Fibre Channel Over Ethernet

(NAS), management, IPC, and storage, are used to achieve the necessary performance and versatility. In addition to iSCSI for storage solutions, FCoE can now be used with capable Cavium C-NICs. FCoE is a standard that allows the Fibre Channel protocol to be transported over Ethernet, preserving existing Fibre Channel infrastructures and capital investments, by classifying received FCoE and FIP frames.
  • Page 165: Fcoe Boot From San

DCB consolidates storage, management, computing, and communications fabrics onto a single physical fabric that is simpler to deploy, upgrade, and maintain than standard Ethernet networks. DCB technology allows capable Cavium C-NICs to provide lossless data delivery, lower latency, and standards-based bandwidth sharing of data center physical links.
  • Page 166: Provisioning Storage Access In The San

    WWPNs. Configure at least one boot target through CCM as described in “Preparing Cavium Multi-Boot Agent for FCoE Boot” on page 139. Allow the system to attempt to boot through the selected initiator.
  • Page 167: One-Time Disabled

137. One-time Disabled Cavium’s FCoE ROM is implemented as a boot entry vector (BEV), where the Option ROM connects to the target only after it has been selected by the BIOS as the boot device. This behavior differs from other implementations that connect to the boot device even if another device has been selected by the system BIOS.
  • Page 168: Fcoe Boot Configuration In Uefi Boot Mode

Figure 10-1. One-time Disabled FCoE Boot Configuration in UEFI Boot Mode The following subsections describe FCoE boot configuration procedures in UEFI boot mode prior to OS installation. • Preparing Cavium Multi-Boot Agent for FCoE Boot • UEFI Boot LUN Scanning 83840-546-00 N...
  • Page 169: Preparing Cavium Multi-Boot Agent For Fcoe Boot

    10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Preparing Cavium Multi-Boot Agent for FCoE Boot To prepare the Cavium multiple boot agent for FCoE boot: During POST, press Ctrl+S at the Ethernet Boot Agent banner to open the CCM utility.
  • Page 170 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Under Device Hardware Configuration, ensure that DCB/DCBX is enabled on the device (Figure 10-3). FCoE boot is only supported on DCBX capable configurations. Therefore, DCB/DCBX must be enabled, and the directly attached link peer must also be DCBX-capable with parameters that allow for full DCBX synchronization.
  • Page 171 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN To configure the boot target and LUN, access the Target Information menu (Figure 10-5), and select the first available path. Figure 10-5. FCoE Boot—Target Information On the No. 1 Target Parameters window (Figure 10-6): Enable the Connect option.
  • Page 172 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Press ENTER. Figure 10-6. FCoE Boot—Specify Target WWPN and Boot LUN The Target Information menu (Figure 10-7) now shows the parameters set in Step 6 for the first target. Figure 10-7. FCoE Boot—Target Information Press the ESC key until prompted to exit and save changes.
  • Page 173: Uefi Boot Lun Scanning

    10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN UEFI Boot LUN Scanning UEFI boot LUN scanning eases the task of configuring FCoE boot from SAN by allowing you to choose from a list of targets and selecting a WWPN instead of typing the WWPN.
  • Page 174: Windows Server 2008 Sp2 Fcoe Boot Installation

    10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Windows Server 2008 SP2 FCoE Boot Installation Ensure that no USB flash drive is attached before starting the OS installer. The 10GbE Virtual Bus Driver (EVBD) and Optical Fiber Communication (OFC) bxfcoe drivers must be loaded during installation.
  • Page 175 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Load the bxfcoe (OFC) driver (Figure 10-10). Figure 10-10. Load bxfcoe Driver Select the boot LUN to be installed (Figure 10-11). Figure 10-11. Selecting the FCoE Boot LUN 83840-546-00 N...
  • Page 176: Boot Installation

    Windows Server 2012, 2012 R2, and 2016 FCoE Boot Installation For Windows Server 2012, 2012 R2, and 2016 Boot from SAN installation, Cavium requires the use of a “slipstream” DVD or ISO image with the latest Cavium drivers injected. See “Injecting (Slipstreaming) Adapter Drivers into...
  • Page 177: Linux Fcoe Boot Installation

    10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Linux FCoE Boot Installation Configure the adapter boot parameters and Target Information (press CTRL+S and enter the CCM utility) as detailed in “Preparing System BIOS for FCoE Build and Boot” on page 135.
  • Page 178 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Follow the on-screen instructions to choose the Driver Update medium and load the drivers (Figure 10-13). Figure 10-13. SLES 11 and 12 Installation: Driver Update Medium After the driver update is complete, click Next to continue with OS installation.
  • Page 179 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Ensure FCoE Enable is set to yes on the 10GbE Cavium initiator ports you want to use as the SAN boot path or paths. Figure 10-15 shows an example for SLES11 SP2; SLES11 SP3/SP4 and SLES12 might be different.
  • Page 180 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN For each interface to be enabled for FCoE boot: Click Create FCoE VLAN Interface. On the VLAN Interface Creation dialog box, click Yes to confirm. This confirmation triggers automatic FIP VLAN discovery. If successful, the VLAN is displayed under FCoE VLAN Interface.
  • Page 181 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Click Next to continue installation. YaST2 will prompt to activate multipath (Figure 10-18). Respond by clicking either Yes or No as appropriate. Figure 10-18. SLES 11 and 12 Installation: Disk Activation Continue installation as usual.
  • Page 182 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Click the Boot Loader Installation tab, and then select Boot Loader Installation Details. On the Boot Loader Device Map window (Figure 10-20), make sure you have one boot loader entry and delete all redundant entries. Figure 10-20.
  • Page 183 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN RHEL 6 Installation To install Linux FCoE boot on RHEL 6: Boot from the installation medium. For RHEL 6.3, an updated Anaconda image is required for FCoE boot from SAN. That updated image is provided by Red Hat at the following URL: http://rvykydal.fedorapeople.org/updates.823086-fcoe.img For details about installing the Anaconda update image, refer to the Red Hat Enterprise Linux 6 Installation Guide, Section 28.1.3, located here:...
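One way to supply that updates image, assuming the installer host has network access, is to append the updates= option at the installation boot prompt; the exact kernel line varies by media:
> vmlinuz initrd=initrd.img updates=http://rvykydal.fedorapeople.org/updates.823086-fcoe.img
The Red Hat installation guide section referenced above also describes loading the updates image from local media if network access is not available.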
  • Page 184 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN When prompted Do you have a driver disk, click Yes. Figure 10-22 shows an example. NOTE RHEL does not allow driver update media to be loaded over the network when installing driver updates for network devices. Use local media.
  • Page 185 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN When prompted for a device type, click Specialized Storage Devices, and then click Next. Figure 10-23 shows an example. Figure 10-23. RHEL 6 Installation: Select Specialized Storage Devices When prompted to select the drivers to install, click Add Advanced Target. Figure 10-24 shows an example.
  • Page 186 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN On the Advanced Storage Options dialog box, select Add FCoE SAN, and then click Add drive. Figure 10-25 shows an example. Figure 10-25. RHEL 6 Installation: Add FCoE Drive On the Configure FCoE Parameters dialog box, for each interface intended for FCoE boot, select the interface, clear the Use DCB check box, select Use auto vlan, and then click Add FCoE Disk(s).
  • Page 187 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Confirm all FCoE visible disks are visible on either the Multipath Devices page or the Other SAN Devices page. Figure 10-27 shows an example. Figure 10-27. RHEL 6 Installation: Confirm FCoE Disks Click Next to proceed.
  • Page 188 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN RHEL 7 Installation To install Linux FCoE on RHEL 7: Boot from the installation medium. On the installation splash screen, press the TAB key and add the option inst.dd to the boot command line, as shown. Press ENTER to proceed. >...
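The edited line typically resembles the following hypothetical example (labels and defaults vary with the installation media):
> vmlinuz initrd=initrd.img inst.dd quiet
When the installer starts with inst.dd and no argument, it prompts for the driver update disk before continuing.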
  • Page 189 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN On the Installation Destination page, click Add a disk..., and then click Add FCoE SAN. Figure 10-29 shows an example. Figure 10-29. Installation Destination Page In the dialog box that prompts you to indicate the network interface that is connected to the FCoE switch: In the NIC drop-down menu, select a NIC.
  • Page 190 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Select the appropriate disk, and then click Done. Select the appropriate partition configuration, and then click Done. 83840-546-00 N...
• Page 191 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN On the Installation Summary page, click Begin Installation. RHEL 7.3 and 7.4 dracut FCoE Wait Time Sometimes the bnx2fc FCoE boot process needs more than the default 3 seconds for all of the link-level protocols to converge before the FCoE initialization protocol (FIP) and FCoE traffic can flow correctly.
  • Page 192 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Change the line sleep 3 to sleep 15 so that part of the script looks like this: elif [ "$netdriver" = "bnx2x" ]; then # If driver is bnx2x, do not use /sys/module/fcoe/parameters/create but fipvlan modprobe 8021q udevadm settle --timeout=30...
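On a typical RHEL 7 system, the sleep value shown above lives in the fcoe dracut module; the script path below is an assumption and should be verified on your system before editing, after which the initramfs must be rebuilt:
# sed -i 's/sleep 3/sleep 15/' /usr/lib/dracut/modules.d/95fcoe/fcoe-up.sh
# dracut --force
Keep a copy of the original script so the change can be reapplied or reverted if a dracut or fcoe-utils package update overwrites it.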
  • Page 193: Adding Additional Linux Boot Paths

    10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN To create a configuration file for an additional FCoE interface: Change the directory to /etc/fcoe. Copy the sample configuration file to a new configuration file for the new FCoE interface. # cp cfg-ethx cfg-<interface_name> Edit /etc/fcoe/cfg-<interface_name>...
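A minimal sketch of the resulting file, assuming the fcoe-utils defaults and hardware-offload DCBX (keys and values should be checked against the shipped cfg-ethx template), is:
FCOE_ENABLE="yes"
DCB_REQUIRED="no"
AUTO_VLAN="yes"
Setting DCB_REQUIRED to no is typically appropriate for these adapters because DCBX is handled in adapter firmware rather than by the host lldpad service.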
  • Page 194 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Add ifname=<INTERFACE>:<MAC_ADDRESS> to the line kernel /vmlinuz … for each new interface. The MAC address must be all lower case and separated by a colon. For example: ifname=em1:00:00:00:00:00:00 Create a /etc/fcoe/cfg-<INTERFACE> file for each new FCoE initiator by duplicating the /etc/fcoe/cfg-<INTERFACE>...
  • Page 195 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN For each new interface, create a /etc/sysconfig/network/ifcfg-<INTERFACE> file by duplicating the /etc/sysconfig/network/ifcfg-<INTERFACE> file that was already configured during initial installation. Create a new ramdisk to update changes: # mkinitrd RHEL 7.2 Boot from SAN There is an issue with RHEL 7.2 general availability (GA).
  • Page 196 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Select the disks for multipath installation. Select the minimum package. Reboot the system after the installation is complete. The OS boots to text mode. 83840-546-00 N...
  • Page 197: Vmware Esxi Fcoe Boot Installation

FCoE Boot from SAN VMware ESXi FCoE Boot Installation FCoE boot from SAN requires that the latest Cavium 8400 Series asynchronous drivers be included in the ESXi (5.1, 5.5, 6.0, and 6.5) install image. For information on how to slipstream drivers, refer to the Image_builder_doc.pdf from VMware.
  • Page 198 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Select the boot LUN for installation (Figure 10-32) and press ENTER to continue. Figure 10-32. ESXi Installation: Select Disk Select an installation method (Figure 10-33). Figure 10-33. ESXi Installation: Select Install Method 83840-546-00 N...
  • Page 199 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN Select the keyboard layout (Figure 10-34). Figure 10-34. ESXi Installation: Select Keyboard Layout (Optional but recommended) Enter and confirm a Root password (Figure 10-35), and then press ENTER to continue. Figure 10-35. ESXi Installation: Enter Password To confirm installation configuration, press F11 (Figure 10-36).
  • Page 200 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN To reboot after installation, press ENTER (Figure 10-37). Figure 10-37. ESXi Installation: Installation Complete The management network is not vmnic0. After booting, open the GUI console and access the Configure Management Network. On the Network Adapters window (Figure 10-38), select the NIC to be used as the...
  • Page 201 10–Configuring Fibre Channel Over Ethernet FCoE Boot from SAN For 8400 Series Adapters, the FCoE boot devices must have a separate vSwitch other than vSwitch0. This switch allows DHCP to assign the IP address to the management network rather than to the FCoE boot device. To create a vSwitch for the FCoE boot devices, add the boot device vmnics in vSphere Client under Networking (Figure...
  • Page 202: Booting From San After Installation

10–Configuring Fibre Channel Over Ethernet Booting from SAN After Installation Booting from SAN After Installation After boot configuration and OS installation are complete, you can reboot and test the installation. On this and all future reboots, no further user intervention is required.
  • Page 203: Errors During Windows Fcoe Boot From San Installation

10–Configuring Fibre Channel Over Ethernet Booting from SAN After Installation Install the binary RPM containing the new driver version. Refer to the linux-nx2 package README for instructions on how to prepare a binary driver RPM. Enter the following command to update the ramdisk: On RHEL 6.x systems: dracut --force ...
  • Page 204: Configuring Fcoe

    SAN disk or disks, detach or disconnect the USB flash drive immediately before selecting the disk for further installation. Configuring FCoE By default, DCB is enabled on Cavium 8400 Series FCoE- and DCB-compatible C-NICs. Cavium 8400 Series FCoE requires a DCB-enabled interface: ...
• Page 205 10–Configuring Fibre Channel Over Ethernet Configuring FCoE Click the Apply button. Figure 10-42. Enabling or Disabling FCoE-Offload on Windows The FCoE-Offload instance appears in QCC GUI after the bxFCoE driver attaches (loads). NOTE • To enable or disable FCoE-Offload in Single Function or NPAR mode on Windows or Linux using the QCS CLI, see the User’s Guide: QLogic Control Suite CLI (part number BC0054511-00).
  • Page 206: Configuring Nic Partitioning And Managing Bandwidth

    “Configuration Parameters” on page 177 Overview NPAR divides a Cavium 8400/3400 Series 10GbE NIC into multiple virtual NICs by having multiple PCI physical functions per port. Each PCI function is associated with a different virtual NIC. To the OS and the network, each physical function appears as a separate NIC port.
  • Page 207: Supported Operating Systems For Npar

11–Configuring NIC Partitioning and Managing Bandwidth Configuring for NPAR Supported Operating Systems for NPAR The Cavium 8400/3400 Series 10GbE Adapters support NPAR on the following operating systems: • Windows Server 2008 family • Windows Server 2012 family • Windows Server 2016 family •...
  • Page 208: Number Of Partitions

11–Configuring NIC Partitioning and Managing Bandwidth Configuration Parameters • Flow Control • Physical Link Speed • Relative Bandwidth Weight (%) • Maximum Bandwidth (%) Number of Partitions Each port can have from one to four partitions, with each partition behaving as if it is an independent NIC port.
  • Page 209: Network Mac Address

11–Configuring NIC Partitioning and Managing Bandwidth Configuration Parameters • For iSCSI offloaded boot from SAN (Windows and some Linux versions), you must enable iSCSI offload on the first partition of the port from which you want to boot. Software (non-offloaded) iSCSI boot from SAN (Windows, Linux, and VMware) does not require iSCSI offload to be enabled;...
  • Page 210: Maximum Bandwidth (%)

11–Configuring NIC Partitioning and Managing Bandwidth Configuration Parameters Maximum Bandwidth (%) Maximum bandwidth is defined as follows: • The maximum bandwidth setting defines an upper threshold value, ensuring that this limit is not exceeded during transmission. The valid range for this value is 1 to 100.
  • Page 211: Using Microsoft Virtualization With Hyper-V

    Using Microsoft Virtualization with Hyper-V Microsoft Virtualization is a hypervisor virtualization system for Windows Server 2008, 2012, 2016, and Nano Server 2016. This chapter is intended for those who are familiar with Hyper-V, and it addresses issues that affect the configuration of 8400/3400 Series network adapters and teamed network adapters when Hyper-V is used.
  • Page 212: Configuring A Single Network Adapter

    12–Using Microsoft Virtualization with Hyper-V Configuring a Single Network Adapter Table 12-1. Configurable Network Adapter Hyper-V Features (Continued) Supported in Windows Server Feature 2016 2008 2008 R2 2012 2016 Nano IPv6 LSO (parent and child partition) IPv6 CO (parent and child partition) Jumbo frames SR-IOV When bound to a virtual network;...
  • Page 213: Windows Server 2008 R2 And 2012

12–Using Microsoft Virtualization with Hyper-V Teamed Network Adapters • In an IPv6 network, a team that supports CO or LSO and is bound to a Hyper-V virtual network will report CO and LSO as an offload capability in QCC GUI; however, CO and LSO will not work. This is a limitation of Hyper-V because Hyper-V does not support CO and LSO in an IPv6 network.
  • Page 214 12–Using Microsoft Virtualization with Hyper-V Teamed Network Adapters Table 12-2. Configurable Teamed Network Adapter Hyper-V Features (Continued) Supported in Windows Server Version Feature Comment or Limitation 2012/ 2008 2008 R2 2012R2 Generic Trunking (FEC/GEC) — 802.3ad Draft Static team type Failover —...
  • Page 215: Windows Server 2008

    12–Using Microsoft Virtualization with Hyper-V Teamed Network Adapters Windows Server 2008 When configuring a team of 8400/3400 Series network adapters on a Hyper-V system, be aware of the following:  Create the team prior to binding the team to the Hyper-V virtual network. ...
  • Page 216: Configuring Vmq With Slb Teaming

From Windows Server 2008 to Windows Server 2008 R2 • From Windows Server 2008 R2 to Windows Server 2012 Prior to performing an OS upgrade when a Cavium 8400/3400 Series Adapter is installed on your system, Cavium recommends the following steps: Save all team and adapter IP information.
  • Page 217: Using Virtual Lans In Windows

    Each defined VLAN behaves as its own separate network with its traffic and broadcasts isolated from the others, increasing bandwidth efficiency within each logical group. Multiple VLANs can be defined for each Cavium adapter on your server, depending on the amount of memory available in your system.
  • Page 218 Although VLANs are commonly used to create individual broadcast domains and/or separate IP subnets, it is useful for a server to have a presence on more than one VLAN simultaneously. Cavium adapters support a single VLAN per function or multiple VLANs per team, allowing very flexible network configurations.
• Page 219 PC #4 switch port. PC #5 A member of both VLANs #1 and #2, PC #5 has a Cavium adapter installed. It is connected to switch port #10. Both the adapter and the switch port are configured for VLANs #1 and #2 and have tagging enabled.
  • Page 220: Adding Vlans To Teams

    VLAN tagging is only required to be enabled on switch ports that create trunk links to other switches, or on ports connected to tag-capable end-stations, such as servers or workstations with Cavium adapters. For Hyper-V, create VLANs in the vSwitch-to-VM connection instead of in a team or in the adapter driver, to allow VM live migrations to occur without having to ensure the future host system has a matching team VLAN setup.
  • Page 221: Enabling Sr-Iov Overview

    (VF), a lightweight PCIe function that can be directly assigned to a virtual machine (VM), bypassing the hypervisor layer for the main data movement. Not all Cavium adapters support SR-IOV; refer to your product documentation for details. Enabling SR-IOV Before attempting to enable SR-IOV, ensure that: ...
• Page 222 Enable SR-IOV. SR-IOV must be enabled at this point; it cannot be enabled after the vSwitch is created. Install the Cavium drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for the host OS (do not use the inbox drivers).
  • Page 223: Sr-Iov And Storage

    14–Enabling SR-IOV Enabling SR-IOV To verify that SR-IOV is operational: Start the VM. In Hyper-V Manager, select the adapter and select the VM in the Virtual Machines list. Select the Networking tab at the bottom of the window and view the adapter status.
  • Page 224: Configuring Data Center Bridging

Configuring Data Center Bridging This chapter provides the following information about data center bridging (DCB): • Overview • “Configuring DCB” on page 196 • “DCB Conditions” on page 196 • “DCB in Windows Server 2012” on page 197 Overview DCB is a collection of IEEE-specified standard extensions to Ethernet to provide lossless data delivery, low latency, and standards-based bandwidth sharing of data center physical links.
  • Page 225: Priority-Based Flow Control

    15–Configuring Data Center Bridging Overview The transmission scheduler in the peer is responsible for maintaining the allocated bandwidth for each PG. For example, a user can configure FCoE traffic to be in PG 0 and iSCSI traffic in PG 1. The user can then allocate each group a specific bandwidth.
  • Page 226: Configuring Dcb

    15–Configuring Data Center Bridging Configuring DCB Configuring DCB By default, DCB is enabled on Cavium 8400/3400 Series DCB-compatible C-NICs. DCB configuration is rarely required, because the default configuration should satisfy most scenarios. You can configure DCB parameters using QCC GUI.
  • Page 227: Dcb In Windows Server 2012

DCB Windows PowerShell User Scripting Guide in the Microsoft Technet Library. To revert to standard QCC control over the Cavium DCB feature set, uninstall the Microsoft QoS feature or disable QoS in QCC GUI or the Device Manager NDIS Advanced Properties.
  • Page 228 15–Configuring Data Center Bridging DCB in Windows Server 2012 The 8400/3400 Series Adapters support up to two traffic classes (in addition to the default traffic class) that can be used by the Windows QoS service. On 8400 Series Adapters, disable iSCSI-offload or FCoE-offload (or both) to free one or two traffic classes for use by the Windows QoS service.
  • Page 229: Using Cavium Teaming Services

    This section describes the technology and implementation considerations when working with the network teaming services offered by the Cavium software shipped with servers and storage products. The goal of Cavium teaming services is to provide fault tolerance and link aggregation across a team of two or more adapters.
  • Page 230: Glossary Of Teaming Terms

16–Using Cavium Teaming Services Executive Summary • Supported Features by Team Type • Selecting a Team Type Glossary of Teaming Terms Table 16-1. Teaming Terminology — ARP: Address Resolution Protocol; QCC: QConvergeConsole; QLASP: QLogic Advanced Server Program (intermediate NIC teaming driver)
  • Page 231: Teaming Concepts

16–Using Cavium Teaming Services Executive Summary Table 16-1. Teaming Terminology (Continued) — LOM: LAN on motherboard; MAC: media access control; NDIS: Network Driver Interface Specification; NLB: Network Load Balancing (Microsoft); PXE: Preboot execution environment; QinQ: An extension of the IEEE 802.1Q VLAN standard pro-
  • Page 232: Network Addressing

    16–Using Cavium Teaming Services Executive Summary This section provides information on the following teaming concepts:  Network Addressing  Teaming and Network Addresses  Description of Teaming Types Network Addressing To understand how teaming works, it is important to understand how node communications work in an Ethernet network.
  • Page 233: Teaming And Network Addresses

16–Using Cavium Teaming Services Executive Summary Teaming and Network Addresses A team of adapters functions as a single virtual network interface and appears no different to other network devices than a non-teamed adapter. A virtual network adapter advertises a single Layer 2 address and one or more Layer 3 addresses.
  • Page 234 16–Using Cavium Teaming Services Executive Summary Table 16-2 shows a summary of the teaming types and their classification. Table 16-2. Available Teaming Types Switch-Dependent Link Aggregation (Switch must Control Protocol Load Teaming Type Failover support specific Support Required Balancing type of team)
  • Page 235 16–Using Cavium Teaming Services Executive Summary Smart Load Balancing enables both transmit and receive load balancing based on the Layer 3 and Layer 4 IP address and TCP/UDP port number. In other words, the load balancing is not done at a byte or frame level but on a TCP/UDP session basis.
  • Page 236 16–Using Cavium Teaming Services Executive Summary When the clients and the system are on different subnets, and incoming traffic has to traverse a router, the received traffic destined for the system is not load balanced. The physical adapter that the intermediate driver has selected to carry the IP flow carries all of the traffic.
  • Page 237 16–Using Cavium Teaming Services Executive Summary In this teaming mode, the intermediate driver controls load balancing and failover for outgoing traffic only, while incoming traffic is controlled by the switch firmware and hardware. As is the case for Smart Load Balancing, the QLASP intermediate driver uses the IP/TCP/UDP source and destination addresses to load balance the transmit traffic from the server.
  • Page 238: Software Components

    16–Using Cavium Teaming Services Executive Summary SLB (Auto-Fallback Disable) This type of team is identical to the Smart Load Balance and Failover type of team, with the following exception: when the standby member is active, if a primary member comes back on line, the team continues using the standby member rather than switching back to the primary member.
  • Page 239: Repeater Hub

    16–Using Cavium Teaming Services Executive Summary Repeater Hub A repeater hub allows a network administrator to extend an Ethernet network beyond the limits of an individual segment. The repeater regenerates the input signal received on one port onto all other connected ports, forming a single collision domain.
  • Page 240: Configuring Teaming

    16–Using Cavium Teaming Services Executive Summary Configuring Teaming QCC GUI is used to configure teaming in the supported operating system environments and is designed to run on 32-bit and 64-bit Windows family of operating systems. QCC GUI is used to configure load balancing and fault tolerance teaming, and VLANs.
  • Page 241 Same IP address for all team mem- bers Load balancing by IP address Load balancing by Yes (used for MAC address no-IP/IPX) SLB with one primary and one standby member. Requires at least one Cavium adapter in the team. 83840-546-00 N...
  • Page 242: Selecting A Team Type

    16–Using Cavium Teaming Services Executive Summary Selecting a Team Type The following flow chart provides the decision flow when planning for Layer 2 teaming. The primary rationale for teaming is the need for additional network bandwidth and fault tolerance. Teaming offers link aggregation and fault tolerance to meet both of these requirements.
  • Page 243: Teaming Mechanisms

16–Using Cavium Teaming Services Teaming Mechanisms Teaming Mechanisms Teaming mechanisms include the following: • Architecture • Types of Teams • Attributes of the Features Associated with Each Type of Team • Speeds Supported for Each Type of Team 83840-546-00 N
  • Page 244: Architecture

    16–Using Cavium Teaming Services Teaming Mechanisms Architecture The QLASP is implemented as an NDIS intermediate driver (see Figure 16-2). It operates below protocol stacks such as TCP/IP and IPX and appears as a virtual adapter. This virtual adapter inherits the MAC address of the first port initialized in the team.
  • Page 245: Outbound Traffic Flow

    Teaming Mechanisms Outbound Traffic Flow The Cavium intermediate driver manages the outbound traffic flow for all teaming modes. For outbound traffic, every packet is first classified into a flow, and then distributed to the selected physical adapter for transmission. The flow classification involves an efficient hash computation over known protocol fields.
  • Page 246: Protocol Support

    16–Using Cavium Teaming Services Teaming Mechanisms When an inbound IP Datagram arrives, the appropriate Inbound Flow Head Entry is located by hashing the source IP address of the IP Datagram. Two statistics counters stored in the selected entry are also updated. These counters are used in the same fashion as the outbound counters by the load-balancing engine periodically to reassign the flows to the physical adapter.
  • Page 247: Performance

    LiveLink. Switch-independent The Cavium Smart Load Balancing type of team allows two to eight physical adapters to operate as a single virtual adapter. The greatest benefit of the SLB type of team is that it operates on any IEEE compliant switch and requires no special configuration.
  • Page 248: Switch-Dependent-Generic Static Trunking

The following are the key attributes of SLB: • Failover mechanism—Link loss detection. • Load Balancing Algorithm—Inbound and outbound traffic are balanced through a Cavium proprietary mechanism based on L4 flows. • Outbound Load Balancing using MAC address—No • Outbound Load Balancing using IP address—Yes •...
• Page 249 Network Communications The following are the key attributes of Generic Static Trunking: • Failover mechanism—Link loss detection • Load Balancing Algorithm—Outbound traffic is balanced through a Cavium proprietary mechanism based on L4 flows. Inbound traffic is balanced according to a switch-specific mechanism.
  • Page 250: Switch-Dependent-Dynamic Trunking

The following are the key attributes of Dynamic Trunking: • Failover mechanism—Link loss detection • Load Balancing Algorithm—Outbound traffic is balanced through a Cavium proprietary mechanism based on L4 flows. Inbound traffic is balanced according to a switch-specific mechanism.
  • Page 251: Livelink

    16–Using Cavium Teaming Services Teaming Mechanisms LiveLink LiveLink is a feature of QLASP that is available for the Smart Load Balancing (SLB) and SLB (Auto-Fallback Disable) types of teaming. The purpose of LiveLink is to detect link loss beyond the switch and to route traffic only through team members that have a live link.
  • Page 252 16–Using Cavium Teaming Services Teaming Mechanisms Table 16-4. Team Type Attributes (Continued) Feature Attribute Failover event Loss of link Failover time <500 ms Fallback time 1.5 s (approximate) MAC address Different Multivendor teaming Generic (Static) Trunking User interface QCC GUI...
  • Page 253: Speeds Supported For Each Type Of Team

    16–Using Cavium Teaming Services Teaming Mechanisms Table 16-4. Team Type Attributes (Continued) Feature Attribute Hot remove Link speed support Different speeds Frame protocol Incoming packet management Switch Outgoing packet management QLASP Failover event Loss of link only Failover time <500 ms Fallback time 1.5 s (approximate)
  • Page 254: Teaming And Other Advanced Networking Properties

16–Using Cavium Teaming Services Teaming and Other Advanced Networking Properties Advanced networking properties for teaming support include: • Checksum Offload • IEEE 802.1p QoS Tagging • Large Send Offload • Jumbo Frames • IEEE 802.1Q VLANs • IEEE 802.1ad Provider Bridges (QinQ)
  • Page 255: Checksum Offload

    Checksum Offload Checksum Offload is a property of the Cavium network adapters that allows the TCP/IP/UDP checksums for send and receive traffic to be calculated by the adapter hardware rather than by the host CPU. In high-traffic situations, this can allow a system to handle more connections more efficiently than if the host CPU were forced to calculate the checksums.
  • Page 256: Jumbo Frames

    Ethernet frame to a maximum size of 9600 bytes. Though never formally adopted by the IEEE 802.3 Working Group, support for jumbo frames has been implemented in Cavium 8400/3400 Series Adapters. The QLASP intermediate driver supports jumbo frames, provided that all of the physical adapters in the team also support jumbo frames and the same size is set on all adapters in the team.
• Page 257 16–Using Cavium Teaming Services Teaming and Other Advanced Networking Properties VLAN filtering is supported for Windows, Linux, and VMware operating systems. There are three modes associated with VLAN filtering: • Normal—Disables VLAN filtering. Packets are transmitted and received regardless of VLAN ID. •...
  • Page 258 16–Using Cavium Teaming Services Teaming and Other Advanced Networking Properties In the network device configuration menu, select the adapter port, and then press ENTER. In the Main Configuration page, do one of the following: If the value for Multi-Function Mode is <SF>, select Device ...
  • Page 259: Preboot Execution Environment

    The only supported QLASP team configuration when using Microsoft Virtual Server 2005 is with a QLASP Smart Load Balancing team-type consisting of a single primary Cavium adapter and a standby Cavium adapter. Make sure to unbind or deselect “Virtual Machine Network Services” from each team member prior to creating a team and prior to creating virtual networks with Microsoft Virtual Server.
  • Page 260: Teaming Across Switches

    This behavior is true for all types of teaming supported by Cavium. Therefore, an interconnect link must be provided between the switches that connect to ports in the same team.
  • Page 261 16–Using Cavium Teaming Services General Network Considerations Furthermore, a failover event would cause additional loss of connectivity. Consider a cable disconnect on the Top Switch port 4. In this case, Gray would send the ICMP Request to Red 49:C9, but because the Bottom switch has no...
  • Page 262 16–Using Cavium Teaming Services General Network Considerations The addition of a link between the switches allows traffic to and from Blue and Gray to reach each other without any problems. Note the additional entries in the CAM table for both switches. The link interconnect is critical for the proper operation of the team.
  • Page 263 16–Using Cavium Teaming Services General Network Considerations Figure 16-5 represents a failover event in which the cable is unplugged on the Top Switch port 4. This failover is successful, with all stations pinging each other without loss of connectivity. Figure 16-5. Failover Event...
  • Page 264: Spanning Tree Algorithm

    16–Using Cavium Teaming Services General Network Considerations Spanning Tree Algorithm In Ethernet networks, only one active path may exist between any two bridges or switches. Multiple active paths between switches can cause loops in the network. When loops occur, some switches recognize stations on both sides of the switch.
  • Page 265: Topology Change Notice (Tcn)

    16–Using Cavium Teaming Services General Network Considerations Topology Change Notice (TCN) A bridge or switch creates a forwarding table of MAC addresses and port numbers by learning the source MAC address received on a specific port. The table is used to forward frames to a specific port rather than flooding the frame to all ports.
  • Page 266: Teaming With Hubs (For Troubleshooting Purposes Only)

16–Using Cavium Teaming Services General Network Considerations Teaming with Hubs (for troubleshooting purposes only) Information on teaming with hubs includes: • Hub Usage in Teaming Network Configurations • QLASP SLB Teams • QLASP SLB Team Connected to a Single Hub •...
  • Page 267: Qlasp Slb Team Connected To A Single Hub

    16–Using Cavium Teaming Services Application Considerations QLASP SLB Team Connected to a Single Hub QLASP SLB teams configured as shown in Figure 16-6 maintain their fault tolerance properties. Either server connection could fail without affecting the network. Clients could be connected directly to the hub, and fault tolerance would still be maintained;...
  • Page 268: Teaming And Clustering

    Multiple adapters may be used for each of these purposes: private, intracluster communications and public, external client communications. All Cavium teaming modes are supported with Microsoft Cluster Software for the public adapter only. Private network adapter teaming is not supported. Microsoft indicates that the use...
  • Page 269 16–Using Cavium Teaming Services Application Considerations Figure 16-7 shows a 2-node Fibre-Channel cluster with three network interfaces per cluster node: one private and two public. On each node, the two public adapters are teamed, and the private adapter is not. Teaming is supported across the same switch or across two switches.
  • Page 270: High-Performance Computing Cluster

    (OMSA) management of the nodes in the cluster. It can also be used for job scheduling and monitoring. In Cavium’s current HPCC offerings, only one of the on-board adapters is used. If Myrinet or IB is present, this adapter serves I/O and administration purposes;...
  • Page 271: Oracle

    16–Using Cavium Teaming Services Application Considerations Oracle In the Oracle Solution Stacks, Cavium supports adapter teaming in both the private network (interconnect between RAC nodes) and public network with clients or the application layer above the database layer (Figure 16-8).
  • Page 272: Teaming And Network Backup

    16–Using Cavium Teaming Services Application Considerations Teaming and Network Backup When you perform network backups in a non-teamed environment, overall throughput on a backup server adapter can be easily impacted due to excessive traffic and adapter overloading. Depending on the quantity of backup servers,...
  • Page 273: Load Balancing And Failover

    Figure 16-10 on page 245 shows a network topology that demonstrates tape backup in a Cavium teamed environment and how smart load balancing can load balance tape backup data across teamed adapters. There are four paths that the client-server can use to send data to the backup server, but only one of these paths will be designated during data transfer.
• Page 274 The designated path is determined by two factors: • Client-Server ARP cache, which points to the backup server MAC address. This is determined by the Cavium intermediate driver inbound load balancing algorithm. • The physical adapter interface on Client-Server Red will be used to transmit ...
  • Page 275: Fault Tolerance

If a network link fails during tape backup operations, all traffic between the backup server and client stops and backup jobs fail. If, however, the network topology is configured for both Cavium SLB and switch fault tolerance, tape backup operations can continue without interruption during the link failure. All failover processes within the network are transparent to tape backup software applications.
  • Page 276: Troubleshooting Teaming Problems

QLASP and shows the MAC address of the team and not the MAC address of the interface transmitting the frame. Cavium recommends using the following process to monitor a team: • Mirror all uplink ports from the team at the switch. •...
  • Page 277: Troubleshooting Guidelines

    Network teaming is not supported when running iSCSI traffic through Microsoft iSCSI initiator or iSCSI offload. MPIO should be used instead of Cavium network teaming for these ports. For information on iSCSI boot and iSCSI offload restrictions, see Chapter 9 Configuring iSCSI Protocol.
  • Page 278: Frequently Asked Questions

    Question: What network protocols are load balanced when in a team? Answer: Cavium’s teaming software only supports IP/TCP/UDP traffic. All other traffic is forwarded to the primary adapter. Question: Which protocols are load balanced with SLB and which ones are not?
  • Page 279 16–Using Cavium Teaming Services Frequently Asked Questions Question: What is the difference between adapter load balancing and Microsoft’s Network Load Balancing (NLB)? Answer: Adapter load balancing is done at a network session level, whereas NLB is done at the server application level.
  • Page 280 16–Using Cavium Teaming Services Frequently Asked Questions Question: Can I connect a team across multiple switches? Answer: Smart Load Balancing can be used with multiple switches because each physical adapter in the system uses a unique Ethernet MAC address. Link Aggregation and Generic Trunking cannot operate across switches because they require all physical adapters to share the same Ethernet MAC address.
  • Page 281: Event Log Messages

    Table 16-7 on page 252 Table 16-8 on page 255. As a Cavium adapter driver loads, Windows places a status code in the system event viewer. There may be up to two classes of entries for these event codes depending on whether both drivers are loaded (one set for the base or miniport driver and one set for the intermediate or teaming driver).
  • Page 282: Base Driver (Physical Adapter Or Miniport) Messages

    16–Using Cavium Teaming Services Event Log Messages Base Driver (Physical Adapter or Miniport) Messages The base driver is identified by source L2ND. Table 16-7 lists the event log messages supported by the base driver, explains the cause for the message, and provides the recommended action.
  • Page 283 16–Using Cavium Teaming Services Event Log Messages Table 16-7. Base Driver Event Log Messages (Continued) Message Severity Message Cause Corrective Action Number Informational Network controller The adapter has been No action is required. configured for 10Mb manually configured for half-duplex link.
  • Page 284 16–Using Cavium Teaming Services Event Log Messages Table 16-7. Base Driver Event Log Messages (Continued) Message Severity Message Cause Corrective Action Number Error Unable to map IO The device driver cannot Remove other adapt- space. allocate mem- ers from the system,...
  • Page 285: Intermediate Driver (Virtual Adapter Or Team) Messages

    16–Using Cavium Teaming Services Event Log Messages Table 16-7. Base Driver Event Log Messages (Continued) Message Severity Message Cause Corrective Action Number Error Network controller The driver and the bus Update to the latest failed to exchange the driver are not compati-...
  • Page 286 16–Using Cavium Teaming Services Event Log Messages Table 16-8. Intermediate Driver Event Log Messages (Continued) System Event Severity Message Cause Corrective Action Message Number Error Could not allocate The driver cannot allo- Close running applica- memory for internal cate memory from the tions to free memory.
  • Page 287 16–Using Cavium Teaming Services Event Log Messages Table 16-8. Intermediate Driver Event Log Messages (Continued) System Event Severity Message Cause Corrective Action Message Number Informational Network adapter does The physical adapter Replace the adapter not support Advanced does not support the with one that does sup- Failover.
  • Page 288: Virtual Bus Driver Messages

    16–Using Cavium Teaming Services Event Log Messages Virtual Bus Driver Messages Table 16-9. VBD Event Log Messages Message Severity Message Cause Corrective Action Number Error Failed to allocate The driver cannot allo- Close running applica- memory for the cate memory from the tions to free memory.
  • Page 289 16–Using Cavium Teaming Services Event Log Messages Table 16-9. VBD Event Log Messages (Continued) Message Severity Message Cause Corrective Action Number Informational Network controller The adapter has been No action is required. configured for 1Gb manually configured for full-duplex link.
  • Page 290: Configuring Teaming In Windows Server

(called “Channel Bonding”), refer to your operating system documentation. QLASP Overview QLASP is the Cavium teaming software for the Windows family of operating systems. QLASP settings are configured in QCC GUI. QLASP provides heterogeneous support for adapter teaming that includes Cavium 8400/3400 Series Adapters and Cavium-shipped Intel NIC adapters and LOMs.
  • Page 291: Load Balancing And Fault Tolerance

    SLB (with Auto-Fallback Disable) Smart Load Balancing and Failover Smart Load Balancing and Failover is the Cavium implementation of load balancing based on IP flow. This feature supports balancing IP traffic across multiple adapters (team members) in a bidirectional manner. In this type of team, all adapters in the team have separate MAC addresses.
  • Page 292: Link Aggregation (802.3Ad)

17–Configuring Teaming in Windows Server Load Balancing and Fault Tolerance NOTE • If you do not enable LiveLink when configuring SLB teams, disabling STP or enabling Port Fast at the switch or port is recommended. This minimizes the downtime due to spanning tree loop determination when failing over.
  • Page 293: Slb (Auto-Fallback Disable)

    (Auto-Fallback Disable) Types of Teams Smart Load Balancing is a protocol-specific scheme. Table 17-1 lists the level of support for IP, IPX, and NetBEUI protocols. Table 17-1. Smart Load Balancing Failover/Fallback—All Cavium Failover/Fallback—Multivendor Operating System Protocol NetBEUI NetBEUI Windows Server 2008...
  • Page 294: Teaming With Large Send Offload And Checksum Offload Support

    Other protocol packets are sent and received through one primary interface only. Failover for non-IP traffic is supported only for Cavium network adapters. The Generic Trunking type of team requires the Ethernet switch to support some form of port trunking mode (for example, Cisco's Gigabit EtherChannel or other switch vendor's Link Aggregation mode).
  • Page 295: Running User Diagnostics In Dos

    “Performing Diagnostics” on page 265  “Diagnostic Test Descriptions” on page 268 Introduction Cavium 8400/3400 Series User Diagnostics is an MS-DOS based application that runs a series of diagnostic tests (see Table 18-2) on the Cavium 8400/3400 Series network adapters in your system. Cavium 8400/3400 Series User Diagnostics also allows you to update device firmware and to view and change settings for available adapter properties.
  • Page 296 Table 18-1. uediag Command Options Command Options Description uediag Performs all tests on all Cavium 8400/3400 Series Adapters in the system uediag -c <device#> Specifies the adapter (device#) to test. Similar to -dev (for back- ward compatibility)
  • Page 297 1 = Enable 0 = Disable uediag -t <groups/tests> Disables specific groups or tests uediag -T <groups/tests> Enables specific groups or tests uediag -ver Displays the version of Cavium 8400/3400 Series User Diagnostics (uediag) and all installed adapters 83840-546-00 N...
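As a hedged usage sketch built only from the options listed in Table 18-1 (the device number and test identifier are examples; the valid test identifiers are listed in Table 18-2):
uediag -ver
uediag -c 1
uediag -c 1 -T B2
The first command reports the uediag version and the installed adapters, the second runs all tests on adapter 1, and the third restricts the run on adapter 1 to the specified test group or test.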
  • Page 298: Diagnostic Test Descriptions

    18–Running User Diagnostics in DOS Diagnostic Test Descriptions Diagnostic Test Descriptions The diagnostic tests are divided into four groups: Basic Functional Tests (Group A), Memory Tests (Group B), Block Tests (Group C), and Ethernet Traffic Tests (Group D). The diagnostic tests are listed and described in Table 18-2.
  • Page 299 Description Number Group B: Memory Tests TXP Scratchpad The Group B tests verify all memory blocks of the Cavium 8400/3400 Series Adapters by writing various data patterns TPAT Scratchpad (0x55aa55aa, 0xaa55aa55, walking zeroed, walking ones, address, and so on) to each memory location, reading back the data, and RXP Scratchpad then comparing it to the value written.
  • Page 300 (identifying the TCP, IP, and UDP header data structures) and calculates the checksum or CRC. The TPAT block results are com- pared with the values expected by Cavium 8400/3400 Series User Diagnostics and any errors are displayed. FIO Register The fast IO (FIO) verifies the register interface that is exposed to the internal CPUs.
  • Page 301 Verifies the adapter’s large send offload (LSO) support by enabling MAC loopback mode and transmitting large TCP packets. As the packets are received back by Cavium 8400/3400 Series User Diag- nostics, they are checked for proper segmentation (according to the selected MSS size) and any other errors.
  • Page 302: Troubleshooting

Troubleshooting Troubleshooting information for FastLinQ 8400/3400 Series Adapters includes: • Hardware Diagnostics • “Checking Port LEDs” on page 273 • “Troubleshooting Checklist” on page 273 • “Checking if Current Drivers are Loaded” on page 274 • “Possible Problems and Solutions” on page 276 Hardware Diagnostics Loopback diagnostic tests are available for testing the adapter hardware.
  • Page 303: Qcc Network Test Failures

    A–Troubleshooting Checking Port LEDs Troubleshooting steps that may help correct the failure: Remove the failing device and reseat it in the slot, ensuring that the card is firmly seated in the slot from front to back. Rerun the test. If the card still fails, replace it with a different card of the same model and run the test.
  • Page 304: Checking If Current Drivers Are Loaded

A–Troubleshooting Checking if Current Drivers are Loaded The following checklist provides recommended actions to take to resolve problems installing the Cavium 8400/3400 Series Adapters or running them in your system. • Inspect all cables and connections. Verify that the cable connections at the network adapter and the switch are attached properly.
  • Page 305 A–Troubleshooting Checking if Current Drivers are Loaded bnx2fc 133775 libfcoe 39764 2 bnx2fc,fcoe libfc 108727 3 bnx2fc,fcoe,libfcoe scsi_transport_fc 55235 3 bnx2fc,fcoe,libfc bnx2i 53488 cnic 86401 6 bnx2fc,bnx2i libiscsi 47617 8 be2iscsi,bnx2i,cxgb4i,cxgb3i,libcxgbi,ib_iser, iscsi_tcp,libiscsi_tcp scsi_transport_iscsi 53047 8 be2iscsi,bnx2i,libcxgbi,ib_iser,iscsi_tcp, libiscsi bnx2x 1417947 libcrc32c 1246 1 bnx2x mdio...
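In addition to lsmod, the loaded driver and firmware versions for a given interface can be checked as follows; the interface and module names are examples:
# ethtool -i eth0
# modinfo bnx2x | grep -i version
ethtool -i reports the driver name, driver version, and firmware version bound to that port, which is useful for confirming that the newly installed driver, rather than the inbox driver, is actually in use.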
  • Page 306: Possible Problems And Solutions

A–Troubleshooting Possible Problems and Solutions Possible Problems and Solutions This section presents a list of possible problems and solutions for these components and categories: • Multi-Boot Agent Issues • QLASP Issues • Linux Issues • NPAR Issues • Miscellaneous Issues Multi-Boot Agent Issues Problem: Unable to obtain network settings through DHCP using PXE.
  • Page 307 A–Troubleshooting Possible Problems and Solutions Problem: A system containing an 802.3ad team causes a Netlogon service failure in the system event log and prevents it from communicating with the domain controller during boot up. Solution: Microsoft Knowledge Base Article 326152 (http://support.microsoft.com/kb/326152/en-us) indicates that Gigabit Ethernet adapters may experience problems with connectivity to a domain controller due to link fluctuation while the driver initializes and...
  • Page 308: Linux Issues

    A–Troubleshooting Possible Problems and Solutions Linux Issues Problem: 8400/3400 Series devices with SFP+ flow control default to Off rather than RX/TX Enable. Solution: The Flow Control default setting for revision 1.6.x and later has been changed to RX Off and TX Off because SFP+ devices do not support auto-negotiation for flow control.
  • Page 309 A–Troubleshooting Possible Problems and Solutions Problem: Errors appear when compiling driver source code. Solution: Some installations of Linux distributions do not install the development tools by default. Ensure the development tools for the Linux distribution you are using are installed before compiling driver source code.
  • Page 310: Npar Issues

Solution: In the QCC GUI Configuration page’s Advanced section, enable iSCSI Crash Dump. Problem: The Cavium 8400/3400 Series Adapter may not perform at an optimal level on some systems if it is added after the system has booted. Solution: The system BIOS in some systems does not set the cache line size and the latency timer if the adapter is added after the system has booted.
  • Page 311 Possible Problems and Solutions Problem: A DCOM error message (event ID 10016) appears in the System Event Log during the installation of the Cavium adapter drivers. Solution: This is a Microsoft issue. For more information, see Microsoft knowledge base KB913119 at: http://support.microsoft.com/kb/913119...
  • Page 312: Adapter Leds

    Adapter LEDS For copper-wire Ethernet connections, the state of the network link and activity is indicated by the LEDs on the RJ45 connector, as described in Table B-1. Table B-1. Network Link and Activity Indicated by the RJ45 Port LEDs Port LED LED Appearance Network State...
  • Page 313: Glossary

Glossary ACPI — The Advanced Configuration and Power Interface (ACPI) specification provides an open standard for unified operating system-centric device configuration and power management. API — Application programming interface. A set of routines, protocols, and tools for building software applications. API simplifies development by providing the building blocks.
• Page 314 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series bandwidth — A measure of the volume of data that can be transmitted at a specific transmission rate. boot code — Boot code for Fibre Channel Adapters is required if the computer system is booting from a storage device (disk drive) attached to the adapter.
  • Page 315 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series CAT-6 Category 6 cable. A cable standard for Converged Network Adapter. gigabit Ethernet and other network proto- command line interface cols that are backward-compatible with the Category 5/5e and Category 3 cable See CLI.
• Page 316 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series DCBX — Data center bridging exchange. A protocol used by devices to exchange configuration information with directly connected peers. Ethernet — The most widely used LAN technology that transmits information between computers, typically at speeds of 10 and 100 million...
  • Page 317 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series frame Fibre Channel. Data unit consisting of a start-of-frame (SOF) delimiter, header, data payload, FCoE CRC, and an end-of-frame (EOF) delim- Fibre Channel over Ethernet. A new iter.
  • Page 318 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series IEEE IPv4 Institute of Electrical and Electronics Internet protocol version 4. A data-oriented Engineers. An international nonprofit protocol used on a packet switched inter- organization for the advancement of network (Ethernet, for example).
  • Page 319 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series Layer 2 Technically, a LUN can be a single physical disk drive, multiple physical disk Refers to the data link layer of the multilay- drives, or a portion (volume) of a single ered communication model, Open physical disk drive.
  • Page 320 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series N_Port Management information base. A set of Node port. A port that connects by a guidelines and definitions for SNMP point-to-point link to either a single N_Port functions.
  • Page 321 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series NL_Port NPIV Node loop port. A port capable of N_Port ID virtualization. The ability for a arbitrated loop functions and protocols. An single physical Fibre Channel end point NL_Port connects through an arbitrated (N_Port) to support multiple, uniquely loop to other NL_Port and at most a single...
  • Page 322 User’s Guide—Converged Network Adapters and Intelligent Ethernet Adapters FastLinQ 3400 and 8400 Series Because a path is a combination of an ping adapter and a target port, it is distinct from A computer network administration utility another path if it is accessed through a used to test whether a specified host is different adapter and/or it is accessing a reachable across an IP network, and to...
  • Page 323 port instance: The number of the port in the system. Each adapter may have one or multiple ports, identified with regard to the adapter as port 0, port 1, and so forth. SCSI: Small computer system interface. A high-speed interface used to connect devices, such as hard drives, CD drives,...
  • Page 324 SNMP: Simple network management protocol. SNMP is a networking protocol that enables you to monitor the router using... TLV: Type-length-value. Optional information that may be encoded as an element inside of the protocol (see the TLV sketch below).
  • Page 325 UEFI: Unified extensible firmware interface. A specification detailing an interface that helps hand off control of the system for the pre-boot environment (that is, after the... wake on LAN: See WoL. Windows Management Instrumentation: See WMI.
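To make the type-length-value idea from the Page 324 entry concrete, here is a minimal Python sketch of how a generic TLV element could be packed and unpacked. It is an illustration only, not code from the adapter firmware, drivers, or any Cavium/Marvell tool; the one-byte type and length fields and the 0x01 type code are assumptions chosen for the example.

    import struct

    def encode_tlv(tlv_type: int, value: bytes) -> bytes:
        # Assumed layout: one-byte type, one-byte length, then the value bytes.
        return struct.pack("!BB", tlv_type, len(value)) + value

    def decode_tlv(buffer: bytes):
        # Parse the first TLV element and return (type, value, remaining bytes).
        tlv_type, length = struct.unpack("!BB", buffer[:2])
        return tlv_type, buffer[2:2 + length], buffer[2 + length:]

    element = encode_tlv(0x01, b"example")   # 0x01 is a hypothetical type code
    parsed_type, parsed_value, rest = decode_tlv(element)
    print(parsed_type, parsed_value, rest)   # prints: 1 b'example' b''

Real protocols such as DCBX and LLDP define their own type codes and field widths, so the exact packing differs from this sketch.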
  • Pages 326–343 Index (alphabetical index of topics covered in the guide, including adapter specifications, boot configuration, drivers, diagnostics, iSCSI and FCoE offload, teaming, and VLANs)
  • Page 344 Copyright © 2014–2018 Marvell. All rights reserved. Cavium, LLC and QLogic LLC are subsidiaries of Marvell. Cavium, QLogic, QConvergeConsole, FastLinQ, LiquidIO, Marvell and the Marvell logo are registered trademarks of Marvell. For a more complete listing of Marvell trademarks, visit www.marvell.com. Patent(s) Pending—Products identified in this document may be covered by one or more Marvell patents and/or patent applications.
