User’s Guide—Converged Network Adapters QLogic 41xxx Series
Document Revision History
Revision A, April 28, 2017: Initial release of new guide for Dell.
AH0054602-00 A
Preface What Is in This Guide Chapter 3 Driver Installation describes the installation of the adapter drivers on Windows, Linux, and VMware. Chapter 4 Upgrading the Firmware describes how to use the Dell Update Package (DUP) to upgrade the adapter firmware. Chapter 5 Adapter Preboot Configuration describes the preboot adapter ...
Preface Documentation Conventions Documentation Conventions This guide uses the following documentation conventions: NOTE provides additional information. CAUTION without an alert symbol indicates the presence of a hazard that could cause damage to equipment or loss of data. CAUTION with an alert symbol indicates the presence of a hazard that ...
Preface Documentation Conventions What are shortcut keys? To enter the date, type mm/dd/yyyy (where mm is the month, dd is the day, and yyyy is the year). Topic titles between quotation marks identify related topics either within this ...
Preface License Agreements License Agreements Refer to the QLogic Software End User License Agreement for a complete listing of all license agreements affecting this product. Legal Notices Legal notices covered in this section include warranty, laser safety (FDA notice), agency certification, and product safety compliance. Warranty For warranty details, please check the QLogic Web site: www.qlogic.com/Support/Pages/Warranty.aspx...
Preface Legal Notices ICES-003 Compliance: Class A This Class A digital apparatus complies with Canadian ICES-003. Cet appareil numérique de la classe A est conforme à la norme NMB-003 du Canada. CE Mark 2004/108/EC EMC Directive Compliance: EN55022:2010 Class A1:2007/CISPR22:2006: Class A EN55024:2010 EN61000-3-2: Harmonic Current Emission EN61000-3-3: Voltage Fluctuation and Flicker...
Product Overview This chapter provides the following information for the 41xxx Series Adapters: Functional Description Features Adapter Specifications Functional Description The QLogic FastLinQ 41000 Series Adapters include 10 and 25Gb Converged Network Adapters and Intelligent Ethernet Adapters that are designed to perform accelerated data networking for server systems.
1–Product Overview Features Generic segment offload (GSO) Large receive offload (LRO) Receive segment coalescing (RSC) Microsoft® dynamic virtual machine queue (VMQ), and Linux multiqueue Adaptive interrupts Transmit/receive side scaling (TSS/RSS) Stateless offloads for Network Virtualization using Generic Routing Encapsulation (NVGRE) and virtual extensible LAN (VXLAN) L2/L3 GRE tunneled traffic ...
1–Product Overview Adapter Specifications Adapter Specifications 41xxx Series Adapter specifications include physical characteristics and standards specifications. Physical Characteristics The 41xxx Series Adapter is a standard PCI Express® card and ships with either a full-height or a low-profile bracket for use in a standard PCIe®...
Hardware Installation This chapter provides the following hardware installation information: System Requirements Safety Precautions Preinstallation Checklist Installing the Adapter System Requirements Before you install a QLogic 41xxx Series Adapter, verify that your system meets the hardware and operating system requirements shown in Table 2-1 and Table 2-2.
2–Hardware Installation Installing the Adapter NOTE If you acquired the adapter software on a disk or from the QLogic Web site (driverdownloads.qlogic.com), verify the path to the adapter driver files. If your system is active, shut it down. When system shutdown is complete, turn off the power and unplug the power cord.
2–Hardware Installation Installing the Adapter CAUTION Do not use excessive force when seating the card, as this may damage the system or the adapter. If you have difficulty seating the adapter, remove it, realign it, and try again. Secure the adapter with the adapter clip or screw. Close the system case and disconnect any personal anti-static devices.
Driver Installation This chapter provides the following information about driver installation: Installing Linux Driver Software Installing Windows Driver Software Installing VMware Driver Software Installing Linux Driver Software This section describes how to install Linux drivers with or without RoCE. It also describes the Linux driver optional parameters, default values, messages, and statistics.
3–Driver Installation Installing Linux Driver Software Table 3-1. QLogic 41xxx Series Adapters Linux Drivers (Continued) Linux Driver Description qede Linux Ethernet driver for the 41xxx Series Adapter. This driver directly controls the hardware and is responsible for sending and receiving Ethernet packets on behalf of the Linux host networking stack.
3–Driver Installation Installing Linux Driver Software NOTE For network installations through NFS, FTP, or HTTP (using a network boot disk), a driver disk that contains the qede driver may be needed. Linux boot drivers can be compiled by modifying the makefile and the make environment.
3–Driver Installation Installing Linux Driver Software If the Linux drivers were installed using a TAR file, issue the following commands: rmmod qede rmmod qed depmod -a For RHEL: cd /lib/modules/<version>/extra/qlgc-fastlinq rm -rf qed.ko qede.ko qedr.ko For SLES: cd /lib/modules/<version>/updates/qlgc-fastlinq rm -rf qed.ko qede.ko qedr.ko To remove Linux drivers in a non-RoCE environment:...
3–Driver Installation Installing Linux Driver Software Delete the qed.ko, qede.ko, and qedr.ko files from the directory in which they reside. For example, in SLES, issue the following commands: cd /lib/modules/<version>/updates/qlgc-fastlinq rm -rf qed.ko rm -rf qede.ko rm -rf qedr.ko depmod -a To remove Linux drivers in a RoCE environment: To get the path to the currently installed drivers, issue the following...
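The remainder of this removal procedure is cut off at the page break. As a minimal sketch (not the guide's complete procedure), the usual unload order in a RoCE environment removes the RDMA module before the Ethernet and core modules, mirroring the non-RoCE steps above:
rmmod qedr
rmmod qede
rmmod qed
depmod -a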
3–Driver Installation Installing Linux Driver Software Change the directory to the RPM path and build the binary RPM for the kernel. For RHEL: cd /root/rpmbuild rpmbuild -bb SPECS/fastlinq-<version>.spec For SLES: cd /usr/src/packages rpmbuild -bb SPECS/fastlinq-<version>.spec Install the newly compiled RPM: rpm -ivh RPMS/<arch>/qlgc-fastlinq-<version>.<arch>.rpm NOTE option may be needed on some Linux distributions if...
3–Driver Installation Installing Linux Driver Software Installing Linux Drivers Using the TAR File To install Linux drivers using the TAR file: Create a directory and extract the TAR files to the directory: tar xjvf fastlinq-<version>.tar.bz2 Change to the recently created directory, and then install the drivers: cd fastlinq-<version>...
3–Driver Installation Installing Linux Driver Software Install libqedr libraries to work with RDMA user space applications. The libqedr RPM is available only for inbox OFED. Issue the following command: rpm -ivh qlgc-libqedr-<version>.<arch>.rpm Test the drivers by loading them as follows: modprobe qedr make install_libqedr Linux Driver Optional Parameters
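The parameter table for this section falls on a truncated page. As a general illustration only, an optional parameter is passed when the module is loaded; the parameter name debug used here is a placeholder to show the syntax, not a value taken from this guide:
# modprobe qede debug=0x1
# modinfo -p qede
Here modinfo -p lists the parameters that the installed qede driver actually accepts, which is the authoritative way to confirm a parameter name before using it.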
3–Driver Installation Installing Linux Driver Software Linux Driver Parameter Defaults Table 3-3 lists the qed and qede Linux driver parameter defaults. Table 3-3. Linux Driver Parameter Defaults Parameter qed Driver Default qede Driver Default Auto-negotiation with Auto-negotiation with Speed speed advertised speed advertised Enabled Enabled...
3–Driver Installation Installing Windows Driver Software Installing Windows Driver Software NOTE Currently, Windows supports only 25G 41xxx Series Adapters. Installing the Windows Drivers Removing the Windows Drivers Managing Adapter Properties Setting Power Management Options Installing the Windows Drivers Install Windows driver software using the Dell Update Package (DUP): Running the DUP in the GUI ...
3–Driver Installation Installing Windows Driver Software Complete the following in the wizard’s License Agreement window (Figure 3-3): Read the QLogic End User Software License Agreement. To continue, select I accept the terms in the license agreement. Click Next. Figure 3-3. QLogic InstallShield Wizard: License Agreement Window Complete the wizard’s Setup Type window (Figure 3-4) as follows:...
3–Driver Installation Installing Windows Driver Software If you clicked Complete, proceed directly to Step Figure 3-4. InstallShield Wizard: Setup Type Window If you selected Custom in Step 5, complete the Custom Setup window (Figure 3-5) as follows: Select the features to install. By default, all features are selected. To change a feature’s install setting, click the icon next to it, and then select one of the following options: ...
3–Driver Installation Installing Windows Driver Software Click Next to continue. Figure 3-5. InstallShield Wizard: Custom Setup Window In the InstallShield Wizard’s Ready To Install window (Figure 3-6), click Install. The InstallShield Wizard installs the QLogic Adapter drivers and Management Software Installer. Figure 3-6.
3–Driver Installation Installing Windows Driver Software When the installation is complete, the InstallShield Wizard Completed window appears (Figure 3-7). Click Finish to dismiss the installer. Figure 3-7. InstallShield Wizard: Completed Window In the Dell Update Package window (Figure 3-8), “Update installer operation was successful”...
3–Driver Installation Installing Windows Driver Software To close the Update Package window, click CLOSE. Figure 3-8. Dell Update Package Window DUP Installation Options To customize the DUP installation behavior, use the following command line options. To extract only the driver components to a directory: /drivers=<path>...
3–Driver Installation Installing Windows Driver Software NOTE This command requires the /s option. DUP Installation Examples The following examples show how to use the installation options. To update the system silently: <DUP_file_name>.exe /s To extract the update contents to the C:\mydir\ directory: <DUP_file_name>.exe /s /e=C:\mydir To extract the driver components to the C:\mydir\ directory: <DUP_file_name>.exe /s /drivers=C:\mydir...
3–Driver Installation Installing Windows Driver Software On the Advanced page (Figure 3-9), select an item under Property and then change the Value for that item as needed. Figure 3-9. Setting Advanced Adapter Properties AH0054602-00 A...
3–Driver Installation Installing VMware Driver Software Setting Power Management Options You can set power management options to allow the operating system to turn off the controller to save power or to allow the controller to wake up the computer. If the device is busy (servicing a call, for example), the operating system will not shut down the device.
3–Driver Installation Installing VMware Driver Software Procedures in the following VMware KB article: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US& cmd=displayKC&externalId=2137853 QLogic recommends that you install the NIC driver first, followed by the storage drivers. Installing the VMware Driver You can use the driver ZIP file to install a new driver or update an existing driver. Be sure to install the entire driver set from the same driver ZIP file.
3–Driver Installation Installing VMware Driver Software To install the .vib file using the command line interface, issue the following command. Be sure to specify the full .vib file path. # esxcli software vib install -v /tmp/qedentv-1.0.3.11-1OEM.550.0.0.1331820.x86_64.vib Option 2: To install all of the individual VIBs at one time, issue the following command: # esxcli software vib install -d /tmp/qedentv-bundle-2.0.3.zip To upgrade an existing driver:
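The upgrade command itself is lost to the page break. A reasonable sketch, assuming the same offline bundle path used above, is the standard esxcli update form:
# esxcli software vib update -d /tmp/qedentv-bundle-2.0.3.zip
Unlike install, update replaces only VIBs that are already present on the host, which is why it is the usual choice when upgrading an existing driver.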
3–Driver Installation Installing VMware Driver Software Table 3-6. VMware Driver Optional Parameters (Continued) Parameter Description Specifies the number of virtual functions (VFs) per physical function (PF). max_vfs max_vfs can be 0 (disabled) or 64 VFs on a single port (enabled). The 64 VF maximum support for ESXi is an OS resource allocation constraint.
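Because max_vfs is a module parameter, it is applied per port from the ESXi shell. A brief sketch, reusing the esxcfg-module form shown later in the SR-IOV chapter (the value 8,8 is only an example), is:
# esxcfg-module -s "max_vfs=8,8" qedentv
# esxcfg-module -g qedentv
A host reboot is required before the new VF count takes effect.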
3–Driver Installation Installing VMware Driver Software VMware Driver Parameter Defaults Table 3-7 lists the VMware driver parameter default values. Table 3-7. VMware Driver Parameter Defaults Parameter Default Autonegotiation with all speeds advertised. The speed Speed parameter must be the same on all ports. If auto- negotiation is enabled on the device, all of the device ports will use autonegotiation.
3–Driver Installation Installing VMware Driver Software FCoE Support Table 3-8 describes the driver included in the VMware software package to support QLogic FCoE converged network interface controllers (C-NICs). Table 3-8. QLogic 41xxx Series Adapter VMware FCoE Driver Driver Description qedf The QLogic VMware FCoE driver is a kernel-mode driver that pro- vides a translation layer between the VMware SCSI stack and the QLogic FCoE firmware and hardware.
Upgrading the Firmware This chapter provides information about upgrading the firmware using the Dell Update Package (DUP). The firmware DUP is a Flash update utility only; it is not used for adapter configuration. You can run the firmware DUP by double-clicking the executable file.
4–Upgrading the Firmware Running the DUP by Double-Clicking Follow the on-screen instructions. In the Warning dialog box, click Yes to continue the installation. The installer indicates that it is loading the new firmware, as shown in Figure 4-2. Figure 4-2. Dell Update Package: Loading New Firmware When complete, the installer indicates the result of the installation, as shown in Figure 4-3.
4–Upgrading the Firmware Running the DUP from a Command Line Click Yes to reboot the system. Click Finish to complete the installation, as shown in Figure 4-4. Figure 4-4. Dell Update Package: Finish Installation Running the DUP from a Command Line Running the firmware DUP from the command line, with no options specified, results in the same behavior as double-clicking the DUP icon.
4–Upgrading the Firmware Running the DUP Using the .bin File Figure 4-5 shows the options that you can use to customize the Dell Update Package installation. Figure 4-5. DUP Command Line Options Running the DUP Using the .bin File The following procedure is supported only on Linux OS. To update the DUP using the .bin file: Copy the Network_Firmware_NJCX1_LN_X.Y.Z.BIN file to the system under test (SUT).
4–Upgrading the Firmware Running the DUP Using the .bin File Example output from SUT during the DUP update: ./Network_Firmware_NJCX1_LN_08.07.26.BIN Collecting inventory... Running validation... BCM57810 10 Gigabit Ethernet rev 10 (p2p1) The version of this Update Package is the same as the currently installed version.
Adapter Preboot Configuration During the host boot process, you have the opportunity to pause and perform adapter management tasks using the Human Infrastructure Interface (HII) application. These tasks include the following: Displaying Firmware Image Properties Configuring Device-level Parameters ...
5–Adapter Preboot Configuration Getting Started Figure 5-1. System Setup In the Device Settings window (Figure 5-2), select the 41xxx Series Adapter port that you want to configure, and then press ENTER. Figure 5-2. Dell System Setup: Device Settings The Main Configuration Page presents the adapter management options. ...
5–Adapter Preboot Configuration Getting Started Figure 5-3. Dell System Setup: Main Configuration Page, Default Mode Setting Partitioning Mode to NPAR adds the Partitions Configuration option to the Main Configuration Page, as shown in Figure 5-4. Figure 5-4. Dell System Setup: Main Configuration Page, NPAR Mode AH0054602-00 A...
5–Adapter Preboot Configuration Displaying Firmware Image Properties In addition to the management options, the Main Configuration Page presents the adapter properties shown in Table 5-1. Table 5-1. Adapter Properties Adapter Property Description Partitioning Mode Default or NPAR. Device Name Factory-assigned device name Chip Type ASIC version PCI Device ID...
5–Adapter Preboot Configuration Configuring Device-level Parameters Configuring Device-level Parameters NOTE The iSCSI physical function (PF) is enumerated when the iSCSI Offload feature is enabled. Not all adapters support iSCSI Offload. Device-level configuration includes the following parameters: Link speed Forward error correction (FEC) Mode ...
5–Adapter Preboot Configuration Configuring Device-level Parameters Figure 5-6. Dell System Setup: NIC Configuration Click Back. When prompted, click Yes to save the changes. Changes will take effect after a system reset. Table 5-2 describes the adapter device-level parameters. Table 5-2. Device-level Parameters Parameter Description Link Speed...
5–Adapter Preboot Configuration Configuring Port-level Parameters Table 5-2. Device-level Parameters (Continued) Parameter Description FEC Mode FEC options are available when the Link Speed is set explicitly to 25Gbps on 25Gb adapters: None—Disables FEC Fire Code—Enables FEC (BaseR-FEC) on 25Gb adapters Boot Mode Enables (UNDI) UEFI PXE boot, FCoE boot from SAN, iSCSI boot, or None.
5–Adapter Preboot Configuration Configuring FCoE Boot Figure 5-7. Dell System Setup: NIC Configuration, Boot Mode Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset. Configuring FCoE Boot FCoE general parameters include the following: ...
5–Adapter Preboot Configuration Configuring iSCSI Boot When prompted, click Yes to save the changes. Changes take effect after a system reset. Configuring iSCSI Boot To configure the iSCSI boot configuration parameters: On the Main Configuration Page, select iSCSI Boot Configuration Menu, and then select one of the following options: iSCSI General Configuration ...
5–Adapter Preboot Configuration Configuring iSCSI Boot iSCSI Second Target Parameters (Figure 5-13 on page IPv4 Address TCP Port Boot LUN iSCSI Name CHAP ID CHAP Secret Click Back. When prompted, click Yes to save the changes. Changes take effect after a system reset.
5–Adapter Preboot Configuration Configuring Partitions Configuring Partitions You can configure bandwidth ranges for each partition on a 25Gb adapter. To configure the maximum and minimum bandwidth allocations: On the Main Configuration Page, select Partitions Configuration, and then press ENTER. On the Partitions Configuration page (Figure 5-14), select Global Bandwidth Allocation.
5–Adapter Preboot Configuration Configuring Partitions Figure 5-15. Global Bandwidth Allocation Page Partition n Minimum TX Bandwidth is the minimum transmit bandwidth of the selected partition expressed as a percentage of the maximum physical port link speed. Values can be 0–100. When DCBX ETS mode is enabled, the per-traffic class DCBX ETS minimum bandwidth value overrides the per-partition minimum TX bandwidth value. The total of the minimum TX bandwidth values of all partitions on...
5–Adapter Preboot Configuration Configuring Partitions To examine a specific partition configuration, on the Partitions Configuration page (Figure 5-14 on page 51), select Partition n Configuration. For example, selecting Partition 1 Configuration opens the Partition 1 Configuration page (Figure 5-16), which shows the Partition 1 parameters: Personality, PCI Device ID, PCI (bus) Address, Permanent MAC Address, and Virtual MAC Address.
RoCE Configuration This chapter describes RDMA over converged Ethernet (RoCE v1 and v2) configuration on the 41xxx Series Adapter, the Ethernet switch, and the Windows or Linux host, including: Supported Operating Systems and OFED Planning for RoCE Preparing the Adapter ...
6–RoCE Configuration Planning for RoCE Table 6-1. Operating System Support for RoCE and OFED (Continued) Out-of-Box Operating System RoCE v1 RoCE v2 Inbox OFED OFED VMware ESXi 6.0 u3 VMware ESXi 6.5: The certified RoCE driver is not included in this release. The uncertified driver is available as an early preview. Planning for RoCE As you prepare to implement RoCE, consider the following limitations: ...
6–RoCE Configuration Preparing the Adapter Preparing the Adapter Follow these steps to enable DCBX and specify the RoCE priority using the HII management application. For information about the HII application, see Chapter 5 Adapter Preboot Configuration. In the Main Configuration Page, select Data Center Bridging (DCB) Settings, and then click Finish.
6–RoCE Configuration Preparing the Ethernet Switch Configure quality of service (QoS) class map and set the RoCE priority to match the adapter (5). switch(config)# class-map type qos class-roce switch(config)# match cos 5 Configure queuing class maps. switch(config)# class-map type queuing class-roce switch(config)# match qos-group 3 Configure network QoS class maps.
6–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Configuring the Dell Z9100 Ethernet Switch To configure the Dell Z9100 Ethernet Switch for RoCE, see the procedure in Appendix C Dell Z9100 Switch Configuration. Configuring RoCE on the Adapter for Windows Server Configuring RoCE on the adapter for Windows Server comprises enabling RoCE on the adapter and verifying the Network Direct MTU size.
6–RoCE Configuration Configuring RoCE on the Adapter for Windows Server Figure 6-1 shows an example of configuring a property value. Figure 6-1. Configuring RoCE Properties ® Verify that RoCE is enabled on the adapter using Windows PowerShell The Get-NetAdapterRdma command lists the adapters that support RDMA—both ports are enabled.
6–RoCE Configuration Configuring RoCE on the Adapter for Linux Verify that RoCE is enabled on the host operating system using PowerShell. The Get-NetOffloadGlobalSetting command shows NetworkDirect is enabled. PS C:\Users\Administrators> Get-NetOffloadGlobalSetting ReceiveSideScaling : Enabled ReceiveSegmentCoalescing : Enabled Chimney : Disabled TaskOffload : Enabled NetworkDirect...
6–RoCE Configuration Configuring RoCE on the Adapter for Linux RoCE v2 Configuration for Linux To verify RoCE v2 functionality, you must use RoCE v2 supported kernels. To configure RoCE v2 for Linux: Ensure that you are using one of the following supported kernels: SLES 12 SP2 GA ...
6–RoCE Configuration Configuring RoCE on the Adapter for Linux Verifying RoCE v1 or v2 GID Index and Address from sys and class Parameters Use one of the following options to verify the RoCE v1 or v2 GID Index and address from the sys and class parameters: ...
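The option listings for this check are truncated at the page break. A minimal sketch of the sysfs queries typically used, in which the device name qedr0, port 1, and GID index 0 are assumptions for illustration, is:
# cat /sys/class/infiniband/qedr0/ports/1/gids/0
# cat /sys/class/infiniband/qedr0/ports/1/gid_attrs/types/0
The first command prints the GID address at that index; the second reports whether that index corresponds to IB/RoCE v1 or RoCE v2.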
6–RoCE Configuration Configuring RoCE on the Adapter for Linux Verifying RoCE v1 or v2 Functionality Through perftest Applications This section shows how to verify RoCE v1 or v2 functionality through perftest applications. In this example, the following server IP and client IP are used: ...
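The addresses and commands for this example fall on a truncated page. A hedged sketch using one of the perftest tools follows; the device name, GID index, and server address are assumptions chosen to be consistent with the VLAN example that follows, not values prescribed by this guide:
On the server: ib_write_bw -d qedr0 -F -x 0
On the client: ib_write_bw -d qedr0 -F -x 0 192.168.100.3
The -x option selects the GID index, which determines whether the test runs over RoCE v1 or RoCE v2.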
6–RoCE Configuration Configuring RoCE on the Adapter for Linux Client Configuration: #/sbin/ip link add link p4p1 name p4p1.101 type vlan id 101 #ifconfig p4p1.101 192.168.101.3/24 up #ip route add 192.168.100.0/24 via 192.168.101.1 dev p4p1.101 Set the switch settings using the following procedure. ...
6–RoCE Configuration Configuring RoCE on the Adapter for Linux Client Switch Settings: Figure 6-3. Switch Settings, Client Configuring RoCE v1 or v2 Settings for RDMA_CM Applications Use the following scripts from the FastLinQ source package to configure RoCE: # ./show_rdma_cm_roce_ver.sh qedr0 is configured to IB/RoCE v1 qedr1 is configured to IB/RoCE v1 # ./config_rdma_cm_roce_ver.sh v2...
6–RoCE Configuration Configuring RoCE on the Adapter for Linux Client Settings: Figure 6-5. Configuring RDMA_CM Applications: Client RoCE Configuration for RHEL To configure RoCE on the adapter, the Open Fabrics Enterprise Distribution (OFED) must be installed and configured on the RHEL host. To prepare inbox OFED for RHEL: ®...
6–RoCE Configuration Configuring RoCE on the Adapter for Linux RoCE Configuration for SLES To configure RoCE on the adapter for an SLES host, OFED must be installed and configured on the SLES host. To install inbox OFED for SLES Linux: While installing or upgrading the operating system, select the InfiniBand support packages.
6–RoCE Configuration Configuring RoCE on the Adapter for Linux Verify that the RoCE devices were detected by examining the dmesg logs. # dmesg|grep qedr [87910.988411] qedr: discovered and registered 2 RoCE funcs Verify that all of the modules have been loaded. For example: # lsmod|grep qedr qedr 89871...
6–RoCE Configuration Configuring RoCE on the Adapter for Linux Verify the RoCE connection by performing an RDMA ping: On the server, issue the following command: ibv_rc_pingpong -d <ib-dev> -g 0 On the client, issue the following command: ibv_rc_pingpong -d <ib-dev> -g 0 <server L2 IP address> The following are examples of successful ping pong tests on the server and the client.
6–RoCE Configuration Configuring RoCE on the Adapter for Linux NOTE The default GID value is zero (0) for back-to-back or Pause settings. For server/switch configurations, you must identify the proper GID value. If you are using a switch, refer to the corresponding switch configuration documents for the proper settings.
iSCSI Configuration This chapter provides the following iSCSI configuration information: iSCSI Boot Configuring iSCSI Boot Configuring the DHCP Server to Support iSCSI Boot iSCSI Offload in Windows Server Software Components Differences from bnx2i Configuring qedi.ko ...
7–iSCSI Configuration iSCSI Boot iSCSI Boot Setup The iSCSI boot setup includes: Selecting the Preferred iSCSI Boot Mode Configuring the iSCSI Target Configuring iSCSI Boot Parameters Selecting the Preferred iSCSI Boot Mode The boot mode option is listed under the adapter’s iSCSI Configuration (Figure 7-1), and the setting is port specific.
7–iSCSI Configuration iSCSI Boot Configuring the iSCSI Target Configuring the iSCSI target varies by target vendor. For information on configuring the iSCSI target, refer to the documentation provided by the vendor. To configure the iSCSI target: Select the appropriate procedure based on your iSCSI target, either: ®...
7–iSCSI Configuration iSCSI Boot Table 7-1. Configuration Options Option Description This option is specific to IPv4. Controls whether the iSCSI boot TCP/IP parameters via DHCP host software acquires the IP address information using DHCP (Enabled) or use a static IP configuration (Disabled). Controls whether the iSCSI boot host software acquires its iSCSI parameters via DHCP iSCSI target parameters using DHCP (Enabled) or through a...
7–iSCSI Configuration iSCSI Boot Adapter UEFI Boot Mode Configuration To configure the boot mode: Restart the system. Access the System Utilities menu. NOTE SAN boot is supported in UEFI environment only. Make sure the system boot option is UEFI, and not legacy. Figure 7-2.
7–iSCSI Configuration iSCSI Boot In System Setup, Device Settings, select the QLogic device (Figure 7-3). Refer to the OEM user guide on accessing the PCI device configuration menu. Figure 7-3. Dell System Setup: Device Settings Configuration Utility AH0054602-00 A...
7–iSCSI Configuration iSCSI Boot On the Main Configuration Page, select NIC Configuration (Figure 7-4), and then press ENTER. Figure 7-4. Selecting NIC Configuration On the NIC Configuration page (Figure 7-5), select Boot Mode, and press ENTER to select one of the following iSCSI boot modes: ...
7–iSCSI Configuration Configuring iSCSI Boot iSCSI (HW) Figure 7-5. Dell System Setup: NIC Configuration, Boot Mode NOTE The iSCSI (HW) option is not listed if the iSCSI Offload feature is disabled at port level. If the preferred boot mode is iSCSI (HW), make sure the iSCSI offload feature is enabled.
7–iSCSI Configuration Configuring iSCSI Boot Static iSCSI Boot Configuration In a static configuration, you must enter data for the following: System’s IP address System’s initiator IQN Target parameters (obtained in “Configuring the iSCSI Target” on page For information on configuration options, see Table 7-1 on page To configure the iSCSI boot parameters using static configuration: In the Device HII Main Configuration Page, select iSCSI Configuration...
7–iSCSI Configuration Configuring iSCSI Boot On the iSCSI Configuration page, select iSCSI General Parameters (Figure 7-7), and then press ENTER. Figure 7-7. Dell System Setup: Selecting General Parameters On the iSCSI General Parameters page (Figure 7-8), press the UP ARROW and DOWN ARROW keys to select a parameter, and then press the ENTER key to select or input the following values: TCP/IP Parameters via DHCP: Disabled...
7–iSCSI Configuration Configuring iSCSI Boot Figure 7-8. Dell System Setup: iSCSI General Parameters Return to the iSCSI Configuration page, and then press the ESC key. Select iSCSI Initiator Parameters (Figure 7-9), and then press ENTER. Figure 7-9. Dell System Setup: iSCSI Initiator Parameters AH0054602-00 A...
7–iSCSI Configuration Configuring iSCSI Boot On the iSCSI Initiator Configurations page, select the following parameters, and then type a value for each: IPv4* Address Subnet Mask IPv4* Default Gateway IPv4* Primary DNS IPv4* Secondary DNS ...
7–iSCSI Configuration Configuring iSCSI Boot NOTE Note the following for the preceding items with asterisks (*): The label will change to IPv6 or IPv4 (default) based on the IP Version set on the iSCSI General Parameters page (Figure 7-10). Carefully enter the IP address. There is no error checking performed against the IP address for duplicates, incorrect segment, or network assignment.
7–iSCSI Configuration Configuring iSCSI Boot Select iSCSI First Target Parameters (Figure 7-11), and then press ENTER. Figure 7-11. Dell System Setup: Selecting iSCSI First Target Parameters On the iSCSI First Target Parameters page, set the Connect option to Enabled to the iSCSI target. Type values for the following parameters for the iSCSI target, and then press ENTER: ...
7–iSCSI Configuration Configuring iSCSI Boot Figure 7-12. Dell System Setup: iSCSI First Target Parameters Return to the iSCSI Boot Configuration page, and then press ESC. If you want to configure a second iSCSI target device, select iSCSI Second Target Parameters (Figure 7-13), and enter the parameter values as you did in Step
7–iSCSI Configuration Configuring iSCSI Boot Press ESC once, and a second time to exit. Click Yes to save changes, or follow the OEM guidelines to save the device-level configuration. For example, click Yes to confirm setting change (Figure 7-14). Figure 7-14. Dell System Setup: Saving iSCSI Changes After all changes have been made, reboot the system to apply the changes to the adapter’s running configuration.
7–iSCSI Configuration Configuring iSCSI Boot For information on configuration options, see Table 7-1 on page NOTE When using a DHCP server, the DNS server entries are overwritten by the values provided by the DHCP server. This override occurs even if the locally provided values are valid and the DHCP server provides no DNS server information.
7–iSCSI Configuration Configuring iSCSI Boot Figure 7-15. Dell System Setup: iSCSI General Parameters Enabling CHAP Authentication Ensure that the CHAP authentication is enabled on the target. To enable CHAP authentication: Go to the iSCSI General Parameters page. Set CHAP Authentication to Enabled. In the Initiator Parameters window, type values for the following: ...
7–iSCSI Configuration Configuring the DHCP Server to Support iSCSI Boot Press ESC to return to the iSCSI Boot configuration page. Press ESC, and then select confirm Save Configuration. Configuring the DHCP Server to Support iSCSI Boot The DHCP server is an optional component, and is only necessary if you will be doing a dynamic iSCSI boot configuration setup (see “Dynamic iSCSI Boot Configuration”...
7–iSCSI Configuration Configuring the DHCP Server to Support iSCSI Boot Table 7-2. DHCP Option 17 Parameter Definitions (Continued) Parameter <LUN> Definition The logical unit number to use on the iSCSI target. The value of the LUN must be represented in hexadecimal format. A LUN with an ID of 64 would have to be configured as 40 within the Option 17 parameter on the DHCP server.
7–iSCSI Configuration Configuring the DHCP Server to Support iSCSI Boot Configuring the DHCP Server Configure the DHCP server to support Option 16, Option 17, or Option 43. NOTE The format of DHCPv6 Option 16 and Option 17 is fully defined in RFC 3315. If you use Option 43, you must also configure Option 60.
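As an illustration only, a DHCPv4 Option 17 (root path) entry in ISC dhcpd syntax that follows the Table 7-2 format might look like the following; the server address, LUN, and target IQN are placeholders rather than values prescribed by this guide:
option root-path "iscsi:192.168.25.100:6:3260:0:iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000007";
Here 6 is the protocol (TCP), 3260 is the iSCSI TCP port, and 0 is the boot LUN expressed in hexadecimal.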
7–iSCSI Configuration Configuring the DHCP Server to Support iSCSI Boot Table 7-4 lists the DHCP Option 17 sub-options. Table 7-4. DHCP Option 17 Sub-option Definitions Sub-option Definition First iSCSI target information in the standard root path format: "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN> ": "<targetname>" Second iSCSI target information in the standard root path format: "iscsi:"[<servername>]":"<protocol>":"<port>":"<LUN>...
7–iSCSI Configuration iSCSI Offload in Windows Server Select VLAN ID to enter and set the VLAN value, as shown in Figure 7-16. Figure 7-16. Dell System Setup: iSCSI General Parameters, VLAN ID iSCSI Offload in Windows Server iSCSI offload is a technology that offloads iSCSI protocol processing overhead from host processors to the iSCSI HBA.
7–iSCSI Configuration iSCSI Offload in Windows Server Installing QLogic Drivers Install the Windows drivers as described in “Installing Windows Driver Software” on page Installing the Microsoft iSCSI Initiator Launch the Microsoft iSCSI initiator applet. At the first launch, the system prompts for an automatic service start.
7–iSCSI Configuration iSCSI Offload in Windows Server To configure the Microsoft Initiator: Open Microsoft Initiator. To configure the initiator IQN name according to your setup, follow these steps: On the iSCSI Initiator Properties, click the Configuration tab. On the Configuration page (Figure 7-17), click Change to modify the initiator name.
7–iSCSI Configuration iSCSI Offload in Windows Server In the iSCSI Initiator Name dialog box, type the new initiator IQN name, and then click OK. (Figure 7-18) Figure 7-18. iSCSI Initiator Node Name Change On the iSCSI Initiator Properties, click the Discovery tab. AH0054602-00 A...
7–iSCSI Configuration iSCSI Offload in Windows Server On the Discovery page (Figure 7-19) under Target portals, click Discover Portal. Figure 7-19. iSCSI Initiator—Discover Target Portal AH0054602-00 A...
7–iSCSI Configuration iSCSI Offload in Windows Server In the Discover Target Portal dialog box (Figure 7-20): In the IP address or DNS name box, type the IP address of the target. Click Advanced. Figure 7-20. Target Portal IP Address In the Advanced Settings dialog box (Figure 7-21), complete the following under Connect using:...
7–iSCSI Configuration iSCSI Offload in Windows Server Figure 7-21. Selecting the Initiator IP Address On the iSCSI Initiator Properties, Discovery page, click OK. AH0054602-00 A...
7–iSCSI Configuration iSCSI Offload in Windows Server Click the Targets tab, and then on the Targets page (Figure 7-22), click Connect. Figure 7-22. Connecting to the iSCSI Target AH0054602-00 A...
7–iSCSI Configuration iSCSI Offload in Windows Server On the Connect To Target dialog box (Figure 7-23), click Advanced. Figure 7-23. Connect To Target Dialog Box In the Local Adapter dialog box, select the QLogic <name or model> Adapter, and then click OK. Click OK again to close the Microsoft Initiator.
7–iSCSI Configuration iSCSI Offload in Windows Server Question: What configurations should be avoided? Answer: The IP address should not be the same as the LAN. Windows Server 2012, 2012 R2, and 2016 iSCSI Boot Installation Windows Server 2012, 2012 R2, and 2016 support booting and installing in either the offload or non-offload paths.
7–iSCSI Configuration Software Components iSCSI Crash Dump Crash dump functionality is currently supported only for offload iSCSI boot for the FastLinQ 41xxx Series Adapters. No additional configurations are required to configure iSCSI crash dump generation when in offload iSCSI boot mode. Software Components The QLogic FastLinQ 41xxx iSCSI software consists of a single kernel module called qedi.ko (qedi).
7–iSCSI Configuration Configuring qedi.ko Configuring qedi.ko The qedi driver automatically binds to the exposed iSCSI functions of the CNA, and the target configuration is done through the open-iscsi tools. This functionality and operation is similar to that of the bnx2i driver. NOTE For more information on how to install FastLinQ drivers, see Chapter 3...
7–iSCSI Configuration Verifying iSCSI Devices in Linux ..[0000:42:00.4]:[qedi_link_update:928]:59: Link Up event..[0000:42:00.5]:[__qedi_probe:3563]:60: QLogic FastLinQ iSCSI Module qedi 8.15.6.0, FW 8.15.3.0 ..[0000:42:00.5]:[qedi_link_update:928]:59: Link Up event Use open-iscsi tools to verify that IP is configured properly. Issue the following command: # iscsiadm -m iface | grep qedi qedi.00:0e:1e:c4:e1:6d qedi,00:0e:1e:c4:e1:6d,192.168.101.227,<empty>,iqn.1994-05.com.redhat:534ca9b6...
7–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations 192.168.25.100:3260,1 iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000002 To log into the iSCSI target, issue the following command: #iscsiadm -m node -p 192.168.25.100 -T iqn.2003- 04.com.sanblaze:virtualun.virtualun.target-05000007 -l Logging in to [iface: qedi.00:0e:1e:c4:e1:6c, target:iqn.2003-04.com.sanblaze:virtualun.virtualun.target-05000007, portal:192.168.25.100,3260] (multiple) Login to [iface: qedi.00:0e:1e:c4:e1:6c, target:iqn.2003- 04.com.sanblaze:virtualun.virtualun.target-05000007, portal:192.168.25.100,3260] successful.
7–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations To overcome this limitation, perform the initial boot from SAN with the pure L2 interface (do not use hardware offloaded iSCSI) using one of the following procedures during the boot from SAN. To boot from SAN using a software initiator: Complete the following in the adapter's preboot device configuration: On the iSCSI Boot Configuration Menu, set iSCSI Offload to Disable.
7–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations Enable iscsid and iscsiuio sockets and services as follows: #systemctl enable iscsid.socket #systemctl enable iscsid #systemctl enable iscsiuio.socket #systemctl enable iscsiuio Issue the following command: cat /proc/cmdline Check if the OS has preserved any boot options, such as ip=ibft rd.iscsi.ibft ...
7–iSCSI Configuration Open-iSCSI and Boot from SAN Considerations Replace the original grub.cfg file with the new grub.cfg file. Create a new initramfs image by issuing the following command: dracut --force On the adapter’s preboot iSCSI Boot Configuration Menu, change the value of iSCSI Offload: On the iSCSI Boot Configuration Menu, set iSCSI Offload to Enable.
FCoE Configuration This chapter provides the following FCoE configuration information: FCoE Boot from SAN Injecting (Slipstreaming) Adapter Drivers into Windows Image Files Configuring Linux FCoE Offload Differences Between qedf and bnx2fc Configuring qedf.ko Verifying FCoE Devices in Linux ...
8–FCoE Configuration FCoE Boot from SAN Preparing System BIOS for FCoE Build and Boot To prepare the system BIOS, modify the system boot order and specify the BIOS boot protocol, if required. Specifying the BIOS Boot Protocol FCoE boot from SAN is supported in UEFI mode only. Set the platform in boot mode (protocol) using the system BIOS configuration to UEFI.
8–FCoE Configuration FCoE Boot from SAN On the Device Settings page, select the QLogic device (Figure 8-2). Figure 8-2. Dell System Setup: Device Settings, Port Selection AH0054602-00 A...
8–FCoE Configuration FCoE Boot from SAN On the Main Configuration Page, select NIC Configuration (Figure 8-3), and then press ENTER. Figure 8-3. Dell System Setup: NIC Configuration On the NIC Configuration page, select Boot Mode, and then press ENTER to select FCoE as a preferred boot mode. AH0054602-00 A...
8–FCoE Configuration FCoE Boot from SAN NOTE FCoE is not listed as a boot option if the FCoE Mode feature is disabled at the port level. If the preferred boot mode is FCoE, make sure the FCoE Mode feature is enabled as shown in Figure 8-4.
8–FCoE Configuration FCoE Boot from SAN LUN Busy Retry Count: Default value or as required Figure 8-5. Dell System Setup: FCoE General Parameters Return to the FCoE Configuration page. Press ESC, and then select FCoE Target Parameters. Press ENTER. In the FCoE Target Parameters Menu, enable Connect to the preferred FCoE target.
8–FCoE Configuration FCoE Boot from SAN Figure 8-6. Dell System Setup: FCoE General Parameters Windows FCoE Boot from SAN FCoE boot from SAN information for Windows includes: Windows Server 2012, 2012 R2, and 2016 FCoE Boot Installation Configuring FCoE FCoE Crash Dump ...
8–FCoE Configuration FCoE Boot from SAN The following procedure prepares the image for installation and booting in FCoE mode. To set up Windows Server 2012/2012R2/2016 FCoE boot: Remove any local hard drives on the system to be booted (remote system). Prepare the Windows OS installation media by following the slipstreaming steps in “Injecting (Slipstreaming) Adapter Drivers into Windows Image...
8–FCoE Configuration Injecting (Slipstreaming) Adapter Drivers into Windows Image Files Injecting (Slipstreaming) Adapter Drivers into Windows Image Files To inject adapter drivers into the Windows image files: Obtain the latest driver package for the applicable Windows Server version (2012, 2012 R2, or 2016). Extract the driver package to a working directory: Open a command line session and navigate to the folder that contains the driver package.
8–FCoE Configuration Configuring Linux FCoE Offload NOTE Note the following regarding the operating system installation media: Operating system installation media is expected to be a local drive. Network paths for operating system installation media are not supported. The slipstream.bat script injects the driver components in all the SKUs that are supported by the operating system installation media.
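If you prefer to inject the drivers manually rather than running slipstream.bat, the standard DISM sequence is roughly as follows; the media path, mount directory, and image index are placeholders, and this sketch does not reproduce the script's exact behavior:
Dism /Mount-Image /ImageFile:C:\media\sources\boot.wim /Index:1 /MountDir:C:\mount
Dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
Dism /Unmount-Image /MountDir:C:\mount /Commit
The same Add-Driver step is repeated for install.wim (once per SKU index) so that both the boot image and the installed image contain the adapter driver.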
8–FCoE Configuration Differences Between qedf and bnx2fc Differences Between qedf and bnx2fc Significant differences exist between qedf—the driver for QLogic FastLinQ 41xxx 10/25GbE Controller (FCoE)—and the previous QLogic FCoE offload driver, bnx2fc. Differences include: qedf directly binds to a PCI function exposed by the CNA. ...
8–FCoE Configuration Verifying FCoE Devices in Linux Verifying FCoE Devices in Linux Follow these steps to verify that the FCoE devices were detected correctly after installing and loading the qedf kernel module. To verify FCoE devices in Linux: Check lsmod to verify that the qedf and associated kernel modules were loaded: # lsmod | grep qedf libfcoe 69632...
8–FCoE Configuration Boot from SAN Considerations Check for discovered FCoE devices using lsblk -S: # lsblk -S NAME HCTL TYPE VENDOR MODEL REV TRAN 5:0:0:0 disk SANBlaze VLUN P2T1L0 V7.3 fc 5:0:0:1 disk SANBlaze VLUN P2T1L1 V7.3 fc 5:0:0:2 disk SANBlaze VLUN P2T1L2 V7.3 fc...
iWARP Configuration Internet wide area RDMA protocol (iWARP) is a computer networking protocol that implements RDMA for efficient data transfer over IP networks. iWARP is designed for multiple environments, including LANs, storage networks, data center networks, and WANs. This chapter provides instructions for: ...
9–iWARP Configuration Configuring iWARP on Windows Click Back. Figure 9-1. Dell System Setup for iWARP: NIC Configuration On the Main Configuration Page, click Finish. In the Warning - Saving Changes message box, click Yes to save the configuration. In the Success - Saving Changes message box, click OK. Repeat Step 2 through...
9–iWARP Configuration Configuring iWARP on Windows Under Property, select Network Direct Functionality, and then select Enabled for the Value. Under Property, select RDMA Mode, and then select iWARP under Value. Click OK to save your changes and close the adapter properties. To verify that Network Direct Functionality is enabled, launch PowerShell, and then issue the Get-NetAdapterRdma command.
9–iWARP Configuration Configuring iWARP on Windows Figure 9-4 shows an example. Figure 9-4. Perfmon: Add Counters If iWARP traffic is running, counters appear as shown in the Figure 9-5 example. Figure 9-5. Perfmon: Verifying iWARP Traffic To verify the SMB connection: At a command prompt, issue the net use command as follows: C:\Users\Administrator>...
9–iWARP Configuration Configuring iWARP on Linux Status Local Remote Network --------------------------------------------------------- \\192.168.10.10\Share1 Microsoft Windows Network The command completed successfully. Issue the netstat -xan command as follows, where Share1 is mapped as an SMB share: C:\Users\Administrator> netstat -xan Active NetworkDirect Connections, Listeners, ShareEndpoints Mode IfIndex Type Local Address
9–iWARP Configuration Configuring iWARP on Linux iWARP configuration on a Linux system includes the following: Installing the Driver Detecting the Device Supported iWARP Applications Running Perftest for iWARP Using iSER with iWARP Configuring NFS-RDMA Installing the Driver To install the driver: Complete installation of the OS and OFED.
9–iWARP Configuration Configuring iWARP on Linux On one client, issue the following command: [root@localhost ~]# ib_send_bw -d qedr1 -F -R 192.168.11.3 ---------------------------------------------------------------------------- Send BW Test Dual-port : OFF Device : qedr1 Number of qps Transport type : IW Connection type : RC Using SRQ : OFF TX depth...
9–iWARP Configuration Configuring iWARP on Linux To configure a target for LIO: Create an LIO target using the targetcli utility. Issue the following command: # targetcli targetcli shell version 2.1.fb41 Copyright 2011-2013 by Datera, Inc and others. For help on commands, type 'help'. Issue the following commands: />...
9–iWARP Configuration Configuring iWARP on Linux Change the transport mode to iser as follows: # iscsiadm -m node -o update -T iqn.2017-04.com.org.iserport1.target1 -n iface.transport_name -v iser Log into the target using port 3261: # iscsiadm -m node -l -p 192.168.21.4:3261 -T iqn.2017-04.com.org.iserport1.target1 Logging in to [iface: iser, target: iqn.2017-04.com.org.iserport1.target1, portal: 192.168.21.4,3261] (multiple) Login to [iface: iser, target: iqn.2017-04.com.org.iserport1.target1, portal:...
9–iWARP Configuration Configuring iWARP on Linux To configure the NFS client: Load the xprtrdma module as follows: # modprobe xprtrdma Mount the NFS file system as appropriate for your version: For NFS Version 3: #mount -o rdma,port=20049 192.168.2.4:/tmp/nfs-server /tmp/nfs-client For NFS Version 4: #mount -t nfs4 -o rdma,port=20049 192.168.2.4:/ /tmp/nfs-client Verify that the file system is mounted by issuing the mount command.
SR-IOV Configuration Single root input/output virtualization (SR-IOV) is a specification by the PCI SIG that enables a single PCI Express (PCIe) device to appear as multiple, separate physical PCIe devices. SR-IOV permits isolation of PCIe resources for performance, interoperability, and manageability. NOTE Some SR-IOV features may not be fully enabled in the current release.
10–SR-IOV Configuration Configuring SR-IOV on Windows On the Integrated Devices page (Figure 10-1): Set the SR-IOV Global Enable option to Enabled. Click Back. Figure 10-1. Dell System Setup for SR-IOV: Integrated Devices On the Main Configuration Page for the selected adapter, click Device Level Configuration.
10–SR-IOV Configuration Configuring SR-IOV on Windows In the Warning - Saving Changes message box, click Yes to save the configuration. In the Success - Saving Changes message box, click OK. To enable SR-IOV on the miniport adapter: Access Device Manager. Open the miniport adapter properties, and then click the Advanced tab.
10–SR-IOV Configuration Configuring SR-IOV on Windows Select the Enable single-root I/O virtualization (SR-IOV) check box, and then click Apply. Figure 10-4. Virtual Switch Manager: Enabling SR-IOV The Apply Networking Changes message box advises you that Pending changes may disrupt network connectivity. To save your changes and continue, click Yes.
10–SR-IOV Configuration Configuring SR-IOV on Windows Output of the Get-VMSwitch command will include the following SR-IOV capabilities: IovVirtualFunctionCount : 96 IovVirtualFunctionsInUse To create a virtual machine (VM) and export the virtual function (VF) in the Create a virtual machine. Add the VMNetworkadapter to the virtual machine. Assign a virtual switch to the VMNetworkadapter.
10–SR-IOV Configuration Configuring SR-IOV on Windows In the Settings for VM <VM_Name> dialog box (Figure 10-5), Hardware Acceleration page, under Single-root I/O virtualization, select the Enable SR-IOV check box, and then click OK. Figure 10-5. Settings for VM: Enabling SR-IOV Install the QLogic drivers for VF in the VM.
10–SR-IOV Configuration Configuring SR-IOV on Linux After installing the drivers, the QLogic adapter is listed in the VM. Figure 10-6 shows an example. Figure 10-6. Device Manager: VM with QLogic Adapter To view the SR-IOV VF details, issue the following PowerShell command: PS C:\Users\Administrator>...
10–SR-IOV Configuration Configuring SR-IOV on Linux On the System BIOS Settings page, click Processor Settings. On the Processor Settings (Figure 10-8) page: Set the Virtualization Technology option to Enabled. Click Back. Figure 10-8. Dell System Setup: Processor Settings for SR-IOV On the System Setup page, select Device Settings.
10–SR-IOV Configuration Configuring SR-IOV on Linux On the Device Level Configuration page (Figure 10-9): Set the Virtualization Mode to SR-IOV. Click Back. Figure 10-9. Dell System Setup for SR-IOV: Integrated Devices On the Main Configuration Page, click Finish, save your settings, and then reboot the system.
10–SR-IOV Configuration Configuring SR-IOV on Linux To enable and verify virtualization: Open the grub.conf file and configure the intel_iommu parameter as shown in Figure 10-10. Figure 10-10. Editing the grub.conf File for SR-IOV Save the grub.conf file and then reboot the system. To verify that the changes are in effect, issue the following command: dmesg | grep iommu...
10–SR-IOV Configuration Configuring SR-IOV on Linux For a specific port, enable a quantity (n or 8) of VFs; Issue the following command: [root@ah-rh68 ~]# echo 8 > /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/sriov_numvfs Review the command output (Figure 10-11) to confirm that actual VFs were created. Figure 10-11.
10–SR-IOV Configuration Configuring SR-IOV on Linux Assign and verify MAC addresses: To assign a MAC address to the VF, issue the following command: ip link set <pf device> vf <vf index> mac <mac address> Ensure that the VF interface is up and running with the assigned MAC address.
10–SR-IOV Configuration Configuring SR-IOV on VMware Click Finish. Figure 10-14. Add New Virtual Hardware Power on the VM and then issue the following command: check lspci -vv|grep -I ether If no inbox driver is available, install the driver. As needed, add more VFs in the VM. Configuring SR-IOV on VMware To configure SR-IOV on VMware: Access the Dell System Setup, and then click System BIOS Settings.
10–SR-IOV Configuration Configuring SR-IOV on VMware Save the configuration settings and reboot the system. To enable the quantity of VFs per port (16 in this example), issue the following command: esxcfg-module -s "max_vfs=16,16" qedentv Reboot the host. To verify that the changes are complete at the module level, issue the following command: esxcfg-module -g qedentv
10–SR-IOV Configuration Configuring SR-IOV on VMware 0000:05:0f.6 Network controller: QLogic Corp. QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (SR-IOV VF) [PF_0.5.1_VF_14] 0000:05:0f.7 Network controller: QLogic Corp. QLogic FastLinQ QL41xxx Series 10/25 GbE Controller (SR-IOV VF) [PF_0.5.1_VF_15] To validate the VFs per port, issue the esxcli command as follows: [root@localhost:~] esxcli network sriovnic vf list -n vmnic6 VF ID Active...
10–SR-IOV Configuration Configuring SR-IOV on VMware To save your configuration changes and close this dialog box, click OK. Figure 10-15. VMware Host Edit Settings Power on the VM, and then issue the ifconfig -a command to verify that the added network interface is listed. If no inbox driver is available, install the driver.
iSCSI Extensions for RDMA This chapter provides procedures for configuring iSCSI Extensions for RDMA (iSER) for Linux (RHEL and SLES), including: Before You Begin Configuring iSER for RHEL Configuring iSER for SLES 12 Optimizing Linux Performance Before You Begin As you prepare to configure iSER, consider the following: ...
11–iSCSI Extensions for RDMA Configuring iSER for RHEL Load the RDMA services. systemctl start rdma modprobe ib_iser modprobe ib_isert Verify that all RDMA and iSER modules loaded on the initiator and target devices by issuing the lsmod | grep qed and lsmod | grep iser commands.
11–iSCSI Extensions for RDMA Configuring iSER for RHEL You can use a Linux TCM-LIO target to test iSER. The setup is the same for any iSCSI target, except that you issue the command enable_iser Boolean=true on the applicable portals. The portal instances are identified as iser in Figure 11-2.
11–iSCSI Extensions for RDMA Configuring iSER for RHEL Confirm that the Iface Transport is iser in the target connection, as shown Figure 11-3. Issue the iscsiadm command; for example: iscsiadm -m session -P2 Figure 11-3. Iface Transport Confirmed To check for a new iSCSI device, as shown Figure 11-4, issue the lsscsi command.
11–iSCSI Extensions for RDMA Configuring iSER for SLES 12 Configuring iSER for SLES 12 Because the targetcli is not inbox on SLES 12.x, you must complete the following procedure. To configure iSER for SLES 12: To install targetcli, copy and install the following RPMs from the ISO image (x86_64 and noarch location): lio-utils-4.1-14.6.x86_64.rpm python-configobj-4.7.2-18.10.noarch.rpm...
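The remaining SLES steps are cut off at the page break. Once targetcli is installed, enabling iSER on a portal follows the same pattern shown earlier for RHEL; a brief sketch, in which the target IQN and portal address are placeholders, is:
# targetcli
/> cd /iscsi/iqn.2017-04.com.org.iserport1.target1/tpg1/portals/192.168.21.4:3260
/> enable_iser Boolean=true
/> cd /
/> saveconfig
/> exit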
11–iSCSI Extensions for RDMA Optimizing Linux Performance Optimizing Linux Performance Consider the following Linux performance configuration enhancements described in this section. Configuring CPUs to Maximum Performance Mode Configuring Kernel sysctl Settings Configuring IRQ Affinity Settings Configuring Block Device Staging Configuring CPUs to Maximum Performance Mode Configure the CPU scaling governor to performance by using the following script to set all CPUs to maximum performance mode:...
11–iSCSI Extensions for RDMA Optimizing Linux Performance Configuring IRQ Affinity Settings The following example sets CPU cores 0, 1, 2, and 3 to IRQ XX, YY, ZZ, and XYZ respectively. Perform these steps for each IRQ assigned to a port (default is eight queues per port).
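The command listing itself falls on a truncated page. A sketch of the typical assignment, using the placeholder IRQ numbers from the example above (the masks 1, 2, 4, and 8 select cores 0 through 3), is:
systemctl disable irqbalance
systemctl stop irqbalance
echo 1 > /proc/irq/XX/smp_affinity
echo 2 > /proc/irq/YY/smp_affinity
echo 4 > /proc/irq/ZZ/smp_affinity
echo 8 > /proc/irq/XYZ/smp_affinity
Stopping irqbalance first is an assumption of this sketch; it prevents the manual assignments from being rebalanced afterward.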
Windows Server 2016 This chapter provides the following information for Windows Server 2016: Configuring RoCE Interfaces with Hyper-V (NDKPI Mode-2) RoCE over Switch Embedded Teaming Configuring QoS for RoCE Configuring VMMQ Configuring VXLAN Configuring Storage Spaces Direct ...
12–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V (NDKPI Mode-2) Creating a Hyper-V Virtual Switch with an RDMA Virtual NIC Follow the procedures in this section to create a Hyper-V virtual switch and then enable RDMA in the host VNIC. To create a Hyper-V virtual switch with an RDMA virtual NIC: Launch Hyper-V Manager.
12–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V (NDKPI Mode-2) Click OK. Figure 12-2. Hyper-V Virtual Ethernet Adapter Properties To enable RDMA, issue the following PowerShell command: PS C:\Users\Administrator> Enable-NetAdapterRdma "vEthernet (New Virtual Switch)" PS C:\Users\Administrator> Adding a VLAN ID to Host Virtual NIC To add VLAN ID to a host virtual NIC: To find the host virtual NIC name, issue the following PowerShell command: PS C:\Users\Administrator>...
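The listing command is not shown here; a common way to find the host virtual NIC name (an assumption, not necessarily the exact command in the original procedure) is:
PS C:\Users\Administrator> Get-VMNetworkAdapter -ManagementOS
The Name value in the output is what you pass to Set-VMNetworkAdapterVlan in the next step.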
12–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V (NDKPI Mode-2) PS C:\Users\Administrator> Set-VMNetworkAdapterVlan -VMNetworkAdapterName "New Virtual Switch" -VlanId 5 -Access -ManagementOS NOTE Note the following about adding a VLAN ID to a host virtual NIC: A VLAN ID must be assigned to a host virtual NIC. The same VLAN ID must be assigned to all the interfaces and on the switch.
12–Windows Server 2016 Configuring RoCE Interfaces with Hyper-V (NDKPI Mode-2) Mapping the SMB Drive and Running RoCE Traffic To map the SMB drive and run the RoCE traffic: Launch the Performance Monitor (Perfmon). Complete the Add Counters dialog box (Figure 12-5) as follows: Under Available counters, select RDMA Activity.
12–Windows Server 2016 RoCE over Switch Embedded Teaming If the RoCE traffic is running, counters appear as shown in Figure 12-6. Figure 12-6. Performance Monitor Shows RoCE Traffic RoCE over Switch Embedded Teaming Switch Embedded Teaming (SET) is Microsoft’s alternative NIC teaming solution, available for use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016 Technical Preview.
12–Windows Server 2016 RoCE over Switch Embedded Teaming Creating a Hyper-V Virtual Switch with SET and RDMA Virtual NICs To create a Hyper-V virtual switch with SET and RDMA virtual NICs: To create a SET, issue the following PowerShell command: PS C:\Users\Administrator>...
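A typical form of that command, sketched here with a placeholder switch name and two placeholder adapter names (your adapter names will differ), is:
PS C:\Users\Administrator> New-VMSwitch -Name SET_vSwitch -NetAdapterName "Ethernet 1","Ethernet 2" -EnableEmbeddedTeaming $true -AllowManagementOS $true
The -EnableEmbeddedTeaming parameter is what distinguishes a SET virtual switch from a standard virtual switch bound to a single adapter.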
12–Windows Server 2016 Configuring QoS for RoCE NOTE Note the following when adding a VLAN ID to a host virtual NIC: Make sure that the VLAN ID is not assigned to the physical interface when using a host virtual NIC for RoCE. 
12–Windows Server 2016 Configuring QoS for RoCE Click OK. Figure 12-9. Advanced Properties: Enable QoS Assign the VLAN ID to the interface as follows: Open the miniport window, and then click the Advanced tab. On the adapter’s Advanced Properties page (Figure 12-10) under Property, select VLAN ID, and then set the value.
12–Windows Server 2016 Configuring QoS for RoCE Figure 12-10. Advanced Properties: Setting VLAN ID To enable priority flow control for RoCE on a specific priority, issue the following command: PS C:\Users\Administrators> Enable-NetQoSFlowControl -Priority 4 NOTE If configuring RoCE over Hyper-V, do not assign a VLAN ID to the physical interface.
12–Windows Server 2016 Configuring QoS for RoCE To disable priority flow control on any other priority, issue the following commands: PS C:\Users\Administrators> Disable-NetQosFlowControl 0,1,2,3,5,6,7 PS C:\Users\Administrator> Get-NetQosFlowControl Priority Enabled PolicySet IfIndex IfAlias -------- ------- --------- ------- ------- False Global False Global False Global...
12–Windows Server 2016 Configuring QoS for RoCE NetDirectPort : 445 PriorityValue To configure ETS for all traffic classes defined in the previous step, issue the following commands: PS C:\Users\Administrators> New-NetQosTrafficClass -name "RDMA class" -priority 4 -bandwidthPercentage 50 -Algorithm ETS PS C:\Users\Administrators> New-NetQosTrafficClass -name "TCP class" -priority 0 -bandwidthPercentage 30 -Algorithm ETS PS C:\Users\Administrator>...
12–Windows Server 2016 Configuring QoS for RoCE Create a startup script to make the settings persistent across system reboots. Run RDMA traffic and verify as described in “RoCE Configuration” on page Configuring QoS by Enabling DCBX on the Adapter All configuration must be completed on all of the systems in use.
12–Windows Server 2016 Configuring QoS for RoCE Enable QoS in the miniport as follows: On the adapter’s Advanced Properties page (Figure 12-11) under Property, select Quality of Service, and then set the value to Enabled. Click OK. Figure 12-11. Advanced Properties: Enabling QoS Assign the VLAN ID to the interface (required for PFC) as follows: Open the miniport window, and then click the Advanced tab.
12–Windows Server 2016 Configuring QoS for RoCE Figure 12-12. Advanced Properties: Setting VLAN ID After configuring the switch, verify the QoS parameters that the adapter received from the switch by issuing the following PowerShell command: PS C:\Users\Administrators> Get-NetAdapterQoS Name : Ethernet 5 Enabled : True Capabilities Hardware Current -------- ------- MacSecBypass : NotSupported NotSupported DcbxSupport : CEE NumTCs(Max/ETS/PFC) : 4/4/4
12–Windows Server 2016 Configuring VMMQ NetDirect 445 RemoteTrafficClasses : TC TSA Bandwidth Priorities -- --- --------- ---------- 0 ETS 0-3,5-7 1 ETS RemoteFlowControl : Priority 4 Enabled RemoteClassifications : Protocol Port/Type Priority -------- --------- -------- NetDirect 445 NOTE The preceding example is taken when the adapter port is connected to an Arista 7060X switch.
12–Windows Server 2016 Configuring VMMQ Enabling VMMQ on the Adapter To enable VMMQ on the adapter: Open the miniport window, and then click the Advanced tab. On the Advanced Properties page under Property, select Virtual Switch RSS, and then set the value to Enabled. Click OK.
12–Windows Server 2016 Configuring VMMQ Click OK. Figure 12-14. Advanced Properties: Setting VMMQ Creating a Virtual Machine Switch with or Without SRIOV To create a virtual machine switch with or without SRIOV: Launch the Hyper-V Manager. Select Virtual Switch Manager (see Figure 12-15).
12–Windows Server 2016 Configuring VMMQ Click OK. Figure 12-15. Virtual Switch Manager Enabling VMMQ on the Virtual Machine Switch To enable VMMQ on the virtual machine switch: Issue the following PowerShell command: PS C:\Users\Administrators> Set-VMSwitch -name q1 -defaultqueuevmmqenabled $true -defaultqueuevmmqqueuepairs 4 AH0054602-00 A...
12–Windows Server 2016 Configuring VMMQ Getting the Virtual Machine Switch Capability To get the virtual machine switch capability: Issue the following PowerShell command: PS C:\Users\Administrator> Get-VMSwitch -Name ql | fl Figure 12-16 shows example output. Figure 12-16. PowerShell Command: Get-VMSwitch Creating a Virtual Machine and Enabling VMMQ on VMNetworkAdapters in the Virtual Machine To create a virtual machine and enable VMMQ on VMNetworkAdapters in...
12–Windows Server 2016 Configuring VMMQ To enable VMMQ on the virtual machine, issue the following PowerShell command: PS C:\Users\Administrators> Set-VMNetworkAdapter -VMName vm1 -VMNetworkAdapterName "network adapter" -vmmqenabled $true -vmmqqueuepairs 4 NOTE For an SR-IOV-capable virtual switch: If the virtual machine switch and hardware acceleration are SR-IOV enabled, you must create 10 virtual machines with 8 virtual NICs each to utilize VMMQ.
12–Windows Server 2016 Configuring VXLAN Ethernet 3 00-15-5D-36-0A-FA Activated Adaptive PS C:\Users\Administrator> get-netadaptervmq Name InterfaceDescription Enabled BaseVmqProcessor MaxProcessors NumberOfReceive Queues ---- -------------------- ------- ---------------- ------------- --------------- Ethernet 4 QLogic FastLinQ QL45212-DE...#238 False Default and Maximum VMMQ Virtual NIC According to the current implementation, a maximum quantity of 4 VMMQs is available per virtual NIC;...
12–Windows Server 2016 Configuring VXLAN Enabling VXLAN Offload on the Adapter To enable VXLAN offload on the adapter: Open the miniport window, and then click the Advanced tab. On the Advanced Properties page (Figure 12-17) under Property, select VXLAN Encapsulated Task Offload. Figure 12-17.
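If you prefer to script this setting rather than use the device property pages, the advanced property can typically be set with the following cmdlet; the adapter name is a placeholder, and the display string is assumed to match the property name shown in the UI above:
PS C:\Users\Administrator> Set-NetAdapterAdvancedProperty -Name "Ethernet 5" -DisplayName "VXLAN Encapsulated Task Offload" -DisplayValue "Enabled"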
12–Windows Server 2016 Configuring Storage Spaces Direct Configuring Storage Spaces Direct Windows Server 2016 introduces Storage Spaces Direct, which allows you to build highly available and scalable storage systems with local storage. For more information, refer to the following Microsoft TechNet link: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-windows-server-2016 Configuring the Hardware...
12–Windows Server 2016 Configuring Storage Spaces Direct Deploying a Hyper-Converged System This section includes instructions to install and configure the components of a Hyper-Converged system using Windows Server 2016. Deploying a Hyper-Converged system can be divided into the following three high-level phases: 
12–Windows Server 2016 Configuring Storage Spaces Direct Example Dell switch configuration:
no ip address
mtu 9416
portmode hybrid
switchport
dcb-map roce_S2D
protocol lldp
dcbx version cee
no shutdown
Enable Network Quality of Service. NOTE Network Quality of Service is used to ensure that the Software Defined Storage system has enough bandwidth to communicate between the nodes to ensure resiliency and performance.
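One commonly used sketch for reserving bandwidth for SMB Direct traffic is shown below. The priority value (3), the 50 percent bandwidth reservation, and the adapter name are assumptions for illustration, not values taken from this guide:
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "Ethernet 5"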
12–Windows Server 2016 Configuring Storage Spaces Direct To configure the host virtual NIC to use a VLAN, issue the following commands:
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_1" -VlanId 5 -Access -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_2" -VlanId 5 -Access -ManagementOS
NOTE These commands can be on the same or different VLANs.
To verify that the VLAN ID is set, issue the following command:
Get-VMNetworkAdapterVlan -ManagementOS
Disable and then enable each host virtual NIC adapter so that the VLAN ID takes effect, using commands such as the ones sketched below.
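A minimal sketch, assuming the SMB_1 and SMB_2 host virtual NICs configured above and the default Hyper-V vEthernet naming:
Disable-NetAdapter "vEthernet (SMB_1)"
Enable-NetAdapter "vEthernet (SMB_1)"
Disable-NetAdapter "vEthernet (SMB_2)"
Enable-NetAdapter "vEthernet (SMB_2)"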
12–Windows Server 2016 Configuring Storage Spaces Direct Step 1. Running Cluster Validation Tool Run the cluster validation tool to make sure the server nodes are configured correctly to create a cluster using Storage Spaces Direct. Issue the following PowerShell command to validate a set of servers for use as a Storage Spaces Direct cluster:
Test-Cluster -Node <MachineName1, MachineName2, MachineName3, MachineName4>
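The steps that create the cluster and enable Storage Spaces Direct are summarized here only as a hedged sketch; the cluster name, node names, and the choice to skip eligible storage (-NoStorage) are placeholders and assumptions rather than text from this guide:
New-Cluster -Name <ClusterName> -Node <MachineName1, MachineName2, MachineName3, MachineName4> -NoStorage
Enable-ClusterStorageSpacesDirect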
12–Windows Server 2016 Deploying and Managing a Nano Server The following PowerShell command creates a virtual disk with both mirror and parity resiliency on the storage pool: New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName <VirtualDiskName> -FileSystem CSVFS_ReFS -StorageTierfriendlyNames Capacity,Performance -StorageTierSizes <Size of capacity tier in size units, example: 800GB>, <Size of Performance tier in size units, example: 80GB>...
12–Windows Server 2016 Deploying and Managing a Nano Server Table 12-1. Roles and Features of Nano Server (Continued)
Role or Feature                                                          Options
File Server role and other storage components                            -Storage
Windows Defender Antimalware, including a default signature file         -Defender
Reverse forwarders for application compatibility; for example, common
application frameworks such as Ruby, Node.js, and others.                -ReverseForwarders
12–Windows Server 2016 Deploying and Managing a Nano Server Deploying a Nano Server on a Physical Server Follow these steps to create a Nano Server virtual hard disk (VHD) that will run on a physical server using the preinstalled device drivers. To deploy the Nano Server: Download the Windows Server 2016 OS image.
12–Windows Server 2016 Deploying and Managing a Nano Server "C:\Nano\Drivers" In the preceding example, C:\Nano\Drivers is the path for QLogic drivers. This command takes about 10 to 15 minutes to create a VHD file. A sample output for this command is shown here: Windows(R) Image to Virtual Hard Disk Converter for Windows(R) 10 Copyright (C) Microsoft Corporation.
12–Windows Server 2016 Deploying and Managing a Nano Server NOTE In this example, the VHD is attached under D:\. Right-click Disk Management and select Detach VHD. Reboot the physical server into the Nano Server VHD. Log in to the Recovery Console using the administrator account and the password that you supplied while running the script in the earlier step. Obtain the IP address of the Nano Server computer.
12–Windows Server 2016 Deploying and Managing a Nano Server .\Base -TargetPath .\NanoServerPhysical\NanoServer.vhd -ComputerName <computer name> –GuestDrivers Example: New-NanoServerImage –DeploymentType Guest –Edition Datacenter -MediaPath C:\tmp\TP4_iso\Bld_10586_iso -BasePath .\Base -TargetPath .\Nano1\VM_NanoServer.vhd -ComputerName Nano-VM1 –GuestDrivers The preceding command takes about 10 to 15 minutes to create a VHD file. A sample output for this command follows: PS C:\Nano>...
12–Windows Server 2016 Deploying and Managing a Nano Server Done. The log is at: C:\Users\ADMINI~1\AppData\Local\Temp\2\NanoServerImageGenerator.log Create a new virtual machine in Hyper-V Manager, and use the VHD created in the preceding step. Boot the virtual machine. Connect to the virtual machine in Hyper-V Manager. Log in to the Recovery Console using the administrator account and the password that you supplied while running the script in the earlier step.
12–Windows Server 2016 Deploying and Managing a Nano Server NOTE The preceding command sets all host servers as trusted hosts. Starting the Remote Windows PowerShell Session At an elevated local Windows PowerShell session, start the remote Windows PowerShell session by issuing the following commands: $ip = "<IP address of Nano Server>"...
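A minimal sketch of the remaining session setup follows; the credential handling shown here is an assumption and may differ from the original procedure:
PS C:\Users\Administrator> $user = "$ip\Administrator"
PS C:\Users\Administrator> Enter-PSSession -ComputerName $ip -Credential $user
When prompted, enter the administrator password that was supplied while creating the Nano Server image.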
12–Windows Server 2016 Deploying and Managing a Nano Server If the Nano Server connects successfully, the following is returned: [172.28.41.152]: PS C:\Users\Administrator\Documents> To determine if the drivers are installed and the link is up, issue the following PowerShell command: [172.28.41.152]: PS C:\Users\Administrator\Documents> Get-NetAdapter Figure 12-19 shows example output.
12–Windows Server 2016 Deploying and Managing a Nano Server Figure 12-21 shows example output. Figure 12-21. PowerShell Command: New-Item [172.28.41.152]: PS C:\> New-SMBShare -Name "smbshare" -Path c:\smbshare -FullAccess Everyone Figure 12-22 shows example output. Figure 12-22. PowerShell Command: New-SMBShare To map the SMBShare as a network drive in the client machine, issue the following PowerShell command: NOTE The IP address of an interface on the Nano Server is 192.168.10.10.
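The mapping command itself is not reproduced here; one hedged way to map the share from the client, using the 192.168.10.10 address mentioned in the note, is:
PS C:\Users\Administrator> net use z: \\192.168.10.10\smbshare
Get-NetAdapterStatistics, whose output appears in Figure 12-23, can then be issued in the remote session to confirm that traffic is flowing over the adapter.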
12–Windows Server 2016 Deploying and Managing a Nano Server Figure 12-23 shows the command output. Figure 12-23. PowerShell Command: Get-NetAdapterStatistics AH0054602-00 A...
Troubleshooting This chapter provides the following troubleshooting information: Troubleshooting Checklist Verifying that Current Drivers Are Loaded Testing Network Connectivity Microsoft Virtualization with Hyper-V Linux-specific Issues Miscellaneous Issues Troubleshooting Checklist CAUTION Before you open the server cabinet to add or remove the adapter, review the “Safety Precautions”...
13–Troubleshooting Verifying that Current Drivers Are Loaded Replace the failed adapter with one that is known to work properly. If the second adapter works in the slot where the first one failed, the original adapter is probably defective. Install the adapter in another functioning system, and then run the tests again.
13–Troubleshooting Testing Network Connectivity If you loaded a new driver, but have not yet rebooted, the modinfo command will not show the updated driver information. Instead, issue the following dmesg command to view the logs. In this example, the last entry identifies the driver that will be active upon reboot.
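A hedged sketch of that check, filtering the kernel log for entries from the qede Ethernet driver used by these adapters:
dmesg | grep -i qede
The version reported in the most recent entry is the driver that will load after the next reboot.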
13–Troubleshooting Microsoft Virtualization with Hyper-V Testing Network Connectivity for Linux To verify that the Ethernet interface is up and running: To check the status of the Ethernet interface, issue the ifconfig command. To check the statistics on the Ethernet interface, issue the netstat -i command.
13–Troubleshooting Miscellaneous Issues Miscellaneous Issues Problem: The 41xxx Series Adapter has shut down, and an error message appears indicating that the fan on the adapter has failed. Solution: The 41xxx Series Adapter shut down to prevent permanent damage. Contact QLogic Technical Support for assistance. AH0054602-00 A...
Adapter LEDs
Table A-1 lists the LED indicators for the state of the adapter port link and activity.
Table A-1. Adapter Port Link and Activity LEDs
Port LED        LED Appearance              Network State
Link LED        Off                         No link (cable disconnected)
                Continuously illuminated    Link
Activity LED    Off                         No port activity
                Blinking                    Port activity
Cables and Optical Modules This appendix provides the following information for the supported cables and optical modules: Supported Specifications Tested Cables and Optical Modules Supported Specifications The 41xxx Series Adapters support a variety of cables and optical modules that comply with SFF8024.
B–Cables and Optical Modules Tested Cables and Optical Modules Tested Cables and Optical Modules QLogic does not guarantee that every cable or optical module that satisfies the compliance requirements will operate with the 41xxx Series Adapters. QLogic has tested the components listed in Table B-1 and presents this list for your convenience.
Dell Z9100 Switch Configuration The 41xxx Series Adapters support connections with the Dell Z9100 Ethernet Switch. However, until the auto-negotiation process is standardized, the switch must be explicitly configured to connect to the adapter at 25Gbps. To configure a Dell Z9100 switch port to connect to the 41xxx Series Adapter at 25Gbps: Establish a serial port connection between your management workstation and the switch.
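The configuration commands are entered from the switch CLI; the following is a hedged sketch that uses the same port-5 syntax shown in the verification output later in this appendix (your stack unit and port number may differ):
Dell> enable
Dell# configure
Dell(conf)# stack-unit 1 port 5 portmode quad speed 25G
Dell(conf)# end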
C–Dell Z9100 Switch Configuration For information about changing the adapter link speed, see "Testing Network Connectivity" on page 199. Verify that the port is operating at 25Gbps:
Dell# show running-config | grep "port 5"
stack-unit 1 port 5 portmode quad speed 25G
To disable auto-negotiation on switch port 5, follow these steps: Identify the switch port interface (module 1, port 5, interface 1) and confirm the auto-negotiation status:
Feature Constraints This appendix provides information about feature constraints implemented in the current release. These feature coexistence constraints will be removed in the next release; customers will then be able to use the feature combinations without any additional configuration steps beyond those usually required to enable the features. Concurrent FCoE and iSCSI Is Not Supported on the Same Port The current release does not support configuration of both FCoE and iSCSI on PFs belonging to the same physical port.
D–Feature Constraints RoCE and iWARP Configuration Is Not Supported if NPAR Is Already Configured If NPAR is already configured on the adapter, you cannot configure RoCE or iWARP. Currently, RDMA can be enabled on all PFs and the RDMA transport type (RoCE or iWARP) can be configured on a per-port basis.
Glossary
ACPI
The Advanced Configuration and Power Interface (ACPI) specification provides an open standard for unified operating system-centric device configuration and power management.
bandwidth
A measure of the volume of data that can be transmitted at a specific transmission rate. A 1Gbps or 2Gbps Fibre Channel port can transmit or receive at nominal
User’s Guide—Converged Network Adapters QLogic 41xxx Series
data center bridging exchange
See DCBX.
DCB
Data center bridging. Provides enhancements to existing 802.1 bridge specifications to satisfy the requirements of
dynamic host configuration protocol
See DHCP.
eCore
A layer between the OS and the hardware and firmware. It is device-specific and OS-agnostic.
User’s Guide—Converged Network Adapters QLogic 41xxx Series
ETS
Enhanced transmission selection. A standard that specifies the enhancement of transmission selection to support the allocation of bandwidth among traffic classes. When the offered load in a traffic class does not use its allocated bandwidth,
human interface infrastructure
See HII.
HII
Human interface infrastructure. A specification (part of UEFI 2.1) for managing user input, localized strings, fonts, and forms,
User’s Guide—Converged Network Adapters QLogic 41xxx Series
iWARP
Internet wide area RDMA protocol. A networking protocol that implements RDMA for efficient data transfer over IP networks.
LSO
Large send offload. LSO Ethernet adapter feature that allows the TCP\IP network stack to build a large (up to 64KB) TCP
User’s Guide—Converged Network Adapters QLogic 41xxx Series
network interface card
See NIC.
NIC
Network interface card. Computer card installed to enable a dedicated network
QoS
Quality of service. Refers to the methods used to prevent bottlenecks and ensure business continuity when transmitting data over virtual ports by setting priorities and allocating bandwidth.
User’s Guide—Converged Network Adapters QLogic 41xxx Series
SCSI
Small computer system interface. A high-speed interface used to connect devices, such as hard drives, CD drives,
target
A target is a device that responds to a request by an initiator (the host system). Peripherals are targets, but for some commands (for example, a SCSI COPY command), the peripheral may act as an...
User’s Guide—Converged Network Adapters QLogic 41xxx Series
UDP
User datagram protocol. A connectionless transport protocol without any guarantee of packet sequence or delivery. It functions directly on top of IP.
VLAN
Virtual logical area network (LAN). A group of hosts with a common set of requirements that communicate as if they were
Cavium, Inc. All other brand and product names are trademarks or registered trademarks of their respective owners. This document is provided for informational purposes only and may contain errors. Cavium reserves the right, without notice, to make changes to this document or in product design or specifications.