This document is provided for informational purposes only and may contain errors. QLogic reserves the right, without notice, to make changes to this document or in product design or specifications. QLogic disclaims any warranty of any kind, expressed or implied, and does not guarantee that any results or performance described in the document will be achieved by you.
PFs on that port can support SR-IOV VF connections. Updated all instances of Advanced Server Program and ASP to QLogic Advanced Server Program and QLASP, respectively (throughout). Added a paragraph at the end of the section “Data Center Bridging in Windows Server 2012” on...
User’s Guide: Converged Network Adapters and Intelligent Ethernet Adapters, QLogic FastLinQ 3400, 8400 Series (83840-546-00 E)
Agents Installer, it will uninstall the QCS GUI (if installed on the system) and any related components from your system. To obtain the new GUI, download QCC GUI for your adapter from the QLogic Downloads Web page: driverdownloads.qlogic.com

Intended Audience

This guide is intended for personnel responsible for installing and maintaining computer networking equipment.
Troubleshooting describes a variety of troubleshooting methods and resources. Appendix A, Adapter LEDs, describes the adapter LEDs and their significance.

Related Materials

For information about downloading documentation from the QLogic Web site, see “Downloading Updates” on page xxv.
“Installation Checklist” on page For more information, visit www.qlogic.com. Text in bold font indicates user interface elements such as menu items, buttons, check boxes, or column headings. For example: ...
License Agreements

Refer to the QLogic Software End User License Agreement for a complete listing of all license agreements affecting this product.
Service Program Web page at http://www.qlogic.com/Support/Pages/ServicePrograms.aspx. Downloading Updates The QLogic Web site provides periodic updates to product firmware, software, and documentation. To download firmware, software, and documentation: Go to the QLogic Downloads and Documentation page: driverdownloads.qlogic.com.
Training QLogic Global Training maintains a Web site at www.qlogictraining.com offering online and instructor-led training for all QLogic products. In addition, sales and technical professionals may obtain Associate and Specialist-level certifications to qualify for additional benefits from QLogic. Contact Information QLogic Technical Support for products under warranty is available during local standard working hours excluding QLogic Observed Holidays.
Agency Certification

The following sections contain a summary of EMC and EMI test specifications performed on the QLogic adapters to comply with emission and product safety standards.

EMI and EMC Requirements

FCC Rules, CFR Title 47, Part 15, Subpart B:2013, Class A. This device complies with Part 15 of the FCC Rules.
Legal Notices

CE Mark 2004/108/EC EMC Directive Compliance
EN55022:2010 Class A / CISPR22:2009+A1:2010 Class A
EN55024:2010
EN61000-3-2:2006+A1+A2:2009: Harmonic Current Emission
EN61000-3-3:2008: Voltage Fluctuation and Flicker
VCCI:2012-04, Class A
AS/NZS CISPR 22:2009+A1:2010 Class A
KC-RRA KN22, KN24 (2013), Class A

Product Safety Compliance
UL, cUL product safety: UL60950-1 (2nd Edition), 2007
Adapter Specifications Functional Description The QLogic 8400/3400 Series adapters are based on a new class of Gigabit Ethernet (GbE) and 10GbE converged network interface controller (C-NIC) that can simultaneously perform accelerated data networking and storage networking on a standard Ethernet network. The C-NIC offers acceleration for popular protocols used in the data center, such as: ...
1–Product Overview Features

Using the QLogic teaming software, you can split your network into virtual LANs (VLANs) and group multiple network adapters together into teams to provide network load balancing and fault tolerance. See Chapter 15, QLogic Teaming Services, and Chapter 16, Configuring Teaming in Windows Server, for detailed information about teaming.
Manageability

QConvergeConsole GUI. See the QConvergeConsole GUI Installation Guide, the QConvergeConsole GUI online help, and the QLogic Control Suite Command Line Interface User’s Guide for more information.

QConvergeConsole Plug-ins for vSphere through VMware vCenter Server software.
TCP layer. iSCSI processing can also be offloaded, thereby reducing CPU use even further. The QLogic 8400/3400 Series adapters target best system performance, maintain system flexibility for changes, and support current and future OS convergence and integration. Therefore, the adapter's iSCSI offload architecture is unique because of the split between hardware and host processing.
CPU cycles. ASIC with Embedded RISC Processor The core control for QLogic 8400/3400 Series adapters resides in a tightly integrated, high-performance ASIC. The ASIC includes a RISC processor that provides the flexibility to add new features to the card and adapt to future network requirements through software downloads.
8400/3400 Series Adapters or any QLogic adapter based on 57xx/57xxx controllers on both local and remote computer systems. For information about installing and using the QCS CLI, see the QLogic Control Suite CLI User’s Guide. QLogic QConvergeConsole Graphical User Interface The QCC GUI is a Web-based management tool for configuring and managing QLogic Fibre Channel adapters and Intelligent Ethernet adapters.
Adapter Specifications Physical Characteristics The QLogic 8400/3400 Series Adapters are implemented as low-profile PCIe cards. The adapters ship with a full-height bracket for use in a standard PCIe slot or an optional spare low-profile bracket for use in a low-profile PCIe slot.
Installation of the Network Adapter Connecting the Network Cables System Requirements Before you install a QLogic 8400/3400 Series adapter, verify that your system meets the following hardware and operating system requirements: Hardware Requirements IA32- or EMT64-based computer that meets operating system requirements ...
Requirements. Verify that your system is using the latest BIOS.

NOTE: If you acquired the adapter software on a disk or from the QLogic Web site (driverdownloads.qlogic.com), verify the path to the adapter driver files.

If your system is active, shut it down.
Installation of the Network Adapter Installation of the Network Adapter The following instructions apply to installing the QLogic 8400/3400 Series adapters in most systems. Refer to the manuals that were supplied with your system for details about performing these tasks on your particular system.
Connecting the Network Cables Connecting the Network Cables The QLogic 8400/3400 Series adapters have either an RJ-45 connector used for attaching the system to an Ethernet copper-wire segment, or a fiber optic connector for attaching the system to an Ethernet fiber optic segment.
Multi-Boot Agent (MBA) is a software module that allows your network computer to boot with the images provided by remote servers across the network. The QLogic MBA driver complies with the PXE 2.1 specification and is released with split binary images. This provides flexibility to users in different environments where the motherboard may or may not have built-in base code.
QLogic network adapter using the Comprehensive Configuration Management (CCM) utility. To configure the MBA driver on LOM models of the QLogic network adapter, check your system documentation. Both the MBA driver and the CCM utility reside on the adapter Flash memory.
3–Multi-boot Agent (MBA) Driver Software Setting Up MBA in a Client Environment You can use the CCM utility to configure the MBA driver one adapter at a time as described in this section. To simultaneously configure the MBA driver for multiple adapters, use the MS-DOS-based user diagnostics application described in “Performing Diagnostics”...
Linux. The Initrd.img file distributed with Red Hat Enterprise Linux, however, does not have a Linux network driver for the QLogic 8400/3400 Series adapters. This version requires a driver disk for drivers that are not part of the standard distribution.
3–Multi-boot Agent (MBA) Driver Software Setting Up MBA in a Server Environment MS-DOS UNDI/Intel APITEST To boot in MS-DOS mode and connect to a network for the MS-DOS environment, download the Intel PXE PDK from the Intel website. This PXE PDK comes with a TFTP/ProxyDHCP/Boot server.
NOTE QLogic now supports QConvergeConsole GUI as the only GUI management tool across all QLogic adapters. The QLogic Control Suite (QCS) GUI is no longer supported for the 8400/3400 Series Adapters and adapters based on 57xx/57xxx controllers, and has been replaced by the QCC GUI management tool.
8400/3400 Series adapter can be used with your Windows operating system. Drivers are located on the installation CD. Using the Installer If supported and if you will use the QLogic iSCSI Crash Dump utility, it is important to follow the installation sequence: ...
4–Windows Driver Software Installing the Driver Software Install Microsoft iSCSI Software Initiator along with the patch (MS KB939875) NOTE If performing an upgrade of the device drivers from the installer, re-enable iSCSI Crash Dump from the Advanced section of the QCC Configuration tab.
4–Windows Driver Software
Manually Extracting the Device Drivers

To perform a silent install from within the installer source folder, type:
setup /s /v/qn

To perform a silent upgrade from within the installer source folder, type:
setup /s /v/qn

To perform a silent reinstall of the same installer, type:
setup /s /v"/qn REINSTALL=ALL"
Removing the Device Drivers Removing the Device Drivers Uninstall the QLogic 8400/3400 Series device drivers from your system only through the InstallShield wizard. Uninstalling the device drivers with Device Manager or any other means may not provide a clean uninstall and may cause the system to become unstable.
4–Windows Driver Software
Setting Power Management Options

You can set power management options to allow the operating system to turn off the controller to save power. If the device is busy (servicing a call, for example), however, the operating system will not shut down the device. The operating system attempts to shut down every possible device only when the computer attempts to go into hibernation.
CAUTION: Do not select Allow the computer to turn off the device to save power for any adapter that is a member of a team.
5–Linux Driver Software Introduction Introduction This section discusses the Linux drivers for the QLogic 8400/3400 Series network adapters. Table 5-1 lists the 8400/3400 Series Linux drivers. For information about iSCSI offload in Linux server, see “iSCSI Offload in Linux Server” on page 110.
netxtreme2-kmp-[kernel]-version.x86_64.rpm

Red Hat:
kmod-kmp-netxtreme2-[kernel]-version.i686.rpm
kmod-kmp-netxtreme2-[kernel]-version.x86_64.rpm

The QCS CLI management utility is also distributed as an RPM package (QCS-{version}.{arch}.rpm). For information about installing the Linux QCS CLI, see the QLogic Control Suite CLI User’s Guide.
5–Linux Driver Software Installing Linux Driver Software Source Packages Identical source files to build the driver are included in both RPM and TAR source packages. The supplemental tar file contains additional utilities such as patches and driver diskette images for network installation. The following is a list of included files: ...
5–Linux Driver Software Installing Linux Driver Software Installing the Source RPM Package The following are guidelines for installing the driver source RPM Package. Prerequisites: Linux kernel source C compiler Procedure: Install the source RPM package: rpm -ivh netxtreme2-<version>.src.rpm Change the directory to the RPM path and build the binary RPM for your kernel: For RHEL:...
5–Linux Driver Software Installing Linux Driver Software

For RHEL 6.4, SLES11 SP2, and legacy versions, the version of fcoe-utils/open-fcoe included in your distribution is sufficient, and no out-of-box upgrades are provided. Where available, installation with yum will automatically resolve dependencies.
5–Linux Driver Software Installing Linux Driver Software

For FCoE offload and iSCSI-offload-TLV, disable lldpad on QLogic converged network adapter interfaces. This is required because QLogic uses an offloaded DCBX client.

lldptool set-lldp -i <ethX> adminStatus=disabled

For FCoE offload and iSCSI-offload-TLV, make sure /var/lib/lldpad/lldpad.conf is created and each <ethX>...
5–Linux Driver Software Installing Linux Driver Software

For FCoE offload, restart the fcoe service to apply new settings.

For SLES11 SP1, RHEL 6.4, and legacy versions: service fcoe restart
For SLES11 SP2: rcfcoe restart
For SLES12: systemctl restart fcoe

Installing the KMP Package

NOTE: The examples in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers.
Load and Run Necessary iSCSI Software Components The QLogic iSCSI Offload software suite consists of three kernel modules and a user daemon. Required software components can be loaded either manually or through system services.
5–Linux Driver Software Unloading/Removing the Linux Driver Unloading/Removing the Linux Driver Unloading/Removing the Driver from an RPM Installation Removing the Driver from a TAR Installation Unloading/Removing the Driver from an RPM Installation NOTE The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers.
5–Linux Driver Software Patching PCI Files (Optional) Uninstalling the QCC GUI For information about removing the QCC GUI, see QConvergeConsole GUI Installation Guide. Patching PCI Files (Optional) NOTE The examples used in this procedure refer to the bnx2x driver, but also apply to the bnx2fc and bnx2i drivers.
5–Linux Driver Software Setting Values for Optional Properties Setting Values for Optional Properties Optional properties exist for the different drivers: bnx2x Driver bnx2i Driver bnx2fc Driver bnx2x Driver disable_tpa The disable_tpa parameter can be supplied as a command line argument to disable the Transparent Packet Aggregation (TPA) feature.
5–Linux Driver Software Setting Values for Optional Properties dropless_fc The dropless_fc parameter can be used to enable a complementary flow control mechanism on 8400/3400 Series adapters. The default flow control mechanism is to send pause frames when the on-chip buffer (BRB) is reaching a certain level of occupancy.
5–Linux Driver Software Setting Values for Optional Properties

pri_map

The optional parameter pri_map is used to map the VLAN PRI value or the IP DSCP value to the same or a different CoS in the hardware. This 32-bit parameter is evaluated by the driver as 8 values of 4 bits each. Each nibble sets the desired hardware queue number for that priority.
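The nibble layout of pri_map can be sketched in shell. The loop below decodes a sample 32-bit value into its eight 4-bit CoS entries; the value 0x22221100 is an illustration only, not a recommended setting:

```shell
# Decode a 32-bit pri_map value: nibble N (counting from the least
# significant end) selects the hardware CoS queue for priority N.
pri_map=0x22221100
for pri in 0 1 2 3 4 5 6 7; do
    cos=$(( (pri_map >> (pri * 4)) & 0xF ))
    echo "priority $pri -> CoS queue $cos"
done
```

With this sample value, priorities 0-1 map to queue 0, priorities 2-3 to queue 1, and priorities 4-7 to queue 2.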
CAUTION: Do not use error_mask if you are not sure about the consequences. These values are to be discussed with the QLogic development team on a case-by-case basis. This is just a mechanism to work around iSCSI implementation issues on the target side. Without proper knowledge of iSCSI protocol details, users are advised not to experiment with these parameters.
128 connections. Default: 128. Range: 32 to 128. Note that QLogic validation is limited to powers of 2; for example, 32, 64, 128.

rq_size
“Configure RQ size”, used to choose the size of the asynchronous buffer queue per offloaded connection.
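Because validated values are limited to powers of 2 within the 32 to 128 range, a quick shell check like the following sketch (not part of the driver package) can vet a candidate value before passing it as a module parameter:

```shell
# Verify that a candidate connection count is in range and a power of 2.
# n & (n - 1) is zero only when n is a power of 2.
n=64
if [ "$n" -ge 32 ] && [ "$n" -le 128 ] && [ $(( n & (n - 1) )) -eq 0 ]; then
    echo "$n is a validated value"
else
    echo "$n is not validated; use 32, 64, or 128"
fi
```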
5–Linux Driver Software Driver Defaults

bnx2fc Driver

The optional parameter debug_logging can be supplied as a command line argument to the insmod or modprobe command for bnx2fc.

debug_logging
“Bit mask to enable debug logging”, enables/disables driver debug logging. Default: None.

For example:
insmod bnx2fc.ko debug_logging=0xff
modprobe bnx2fc debug_logging=0xff

IO level debugging = 0x1...
NIC Link is Up, 10000 Mbps full duplex

Link Down Indication
bnx2x: eth# NIC Link is Down

MSI-X Enabled Successfully
bnx2x: eth0: using MSI-X

bnx2i Driver

BNX2I Driver Signon
QLogic 8400/3400 Series iSCSI Driver bnx2i v2.1.1D (May 12, 20xx)
5–Linux Driver Software Driver Messages Network Port to iSCSI Transport Name Binding bnx2i: netif=eth2, iscsi=bcm570x-050000 bnx2i: netif=eth1, iscsi=bcm570x-030c00 Driver Completes handshake with iSCSI Offload-enabled CNIC Device bnx2i [05:00.00]: ISCSI_INIT passed NOTE This message is displayed only when the user attempts to make an iSCSI connection.
5–Linux Driver Software Driver Messages No Valid License to Start FCoE bnx2fc: FCoE function not enabled <ethX> bnx2fC: FCoE not supported on <ethX> Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits bnx2fc: Failed to allocate conn id for port_id <remote port id> bnx2fc: exceeded max sessions..logoff this tgt bnx2fc: Failed to allocate resources Session Offload Failures...
5–Linux Driver Software Teaming with Channel Bonding Teaming with Channel Bonding With the Linux drivers, you can team adapters together using the bonding kernel module and a channel bonding interface. For more information, see the Channel Bonding information in your operating system documentation. Statistics Detailed statistics and configuration information can be viewed using the ethtool utility.
VMware Driver Software

Packaging
Download, Install, and Update Drivers
Networking Support
FCoE Support

Packaging

The VMware driver is released in the packaging formats shown in Table 6-1. For information about iSCSI offload in VMware server, see “iSCSI Offload on VMware Server”...
6–VMware Driver Software Download, Install, and Update Drivers Download, Install, and Update Drivers To download, install, or update the VMware ESXi driver for 8400/3400 Series 10 GbE network adapters, go to http://www.vmware.com/resources/compatibility/search.php?deviceCategory=io and do the following: Type the adapter name (in quotes) in the Keyword window (such as "QLE3442"), and then click Update and View Results (Figure 6-1).
6–VMware Driver Software Download, Install, and Update Drivers Mouse over the QLE3442 link in the results section to show the PCI identifiers (Figure 6-3). Figure 6-3. PCI Identifiers Click the model link to show a listing of all of the driver packages (Figure 6-4).
6–VMware Driver Software Download, Install, and Update Drivers

Log in to the VMware driver download page and click Download to download the desired driver package (Figure 6-5). Figure 6-5. Download Driver Package

This package is double zipped; unzip the package once before copying the offline bundle zip file to the ESXi host.
6–VMware Driver Software Networking Support Networking Support This section describes the bnx2x VMware ESXi driver for the QLogic 8400/3400 Series PCIe 10 GbE network adapters. Driver Parameters Several optional parameters can be supplied as a command line argument to the vmkload_mod command.
6–VMware Driver Software Networking Support num_tx_queues The optional parameter num_tx_queues may be used to set the number of Tx queues on kernels starting from 2.6.27 when multi_mode is set to 1 and interrupt mode is MSI-X. The number of Rx queues must be equal to or greater than the number of Tx queues (see num_rx_queues parameter).
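As a sketch (not part of the driver package), the documented constraint between the two parameters can be checked before loading the module; the queue counts below are placeholders:

```shell
# The driver requires num_rx_queues to be >= num_tx_queues when both
# are supplied; validate the pair before passing them to vmkload_mod.
num_rx_queues=4
num_tx_queues=4
if [ "$num_rx_queues" -ge "$num_tx_queues" ]; then
    echo "queue configuration OK"
else
    echo "invalid: num_rx_queues must be >= num_tx_queues"
fi
```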
QLogic network adapters. The default flow control mechanism is to send pause frames when the BRB is reaching a certain level of occupancy. This is a performance targeted flow control mechanism. On QLogic network adapters, you can enable another flow control mechanism to send pause frames if one of the host buffers (when in RSS mode) is exhausted.
Most systems are set to level 6 by default. To see all messages, set the level higher. Driver Sign On QLogic 8400/3400 Series 10Gigabit Ethernet Driver bnx2x 0.40.15 ($DateTime: 2007/11/22 05:32:40 $) NIC Detected eth0: QLogic 8400/3400 Series XGb (A1) PCI-E x8 2.5GHz found at mem e8800000, IRQ 16, node addr...
CPUs on the machine. FCoE Support This section describes the contents and procedures associated with installation of the VMware software package for supporting QLogic FCoE C-NICs. Enabling FCoE To enable FCoE hardware offload on the C-NIC Determine the ports that are FCoE-capable:...
NOTE: The label Software FCoE is a VMware term used to describe initiators that depend on the inbox FCoE libraries and utilities. QLogic's FCoE solution is a fully stateful, connection-based hardware offload solution designed to significantly reduce the CPU burden incurred by a non-offload software initiator.
(for L2 networking) and on behalf of the bnx2fc (FCoE protocol) and CNIC drivers. bnx2fc The QLogic VMware FCoE driver is a kernel mode driver used to provide a translation layer between the VMware SCSI stack and the QLogic FCoE firmware/hardware. In addition, the driver interfaces...
Firmware Upgrade QLogic provides a Windows and Linux utility for upgrading adapter firmware and bootcode. Each utility executes as a console application that can be run from a command prompt. Upgrade VMware firmware with the VMware vSphere plug-in. Upgrading Firmware for Windows...
C Brd Name - ---- ------------ --- ------------------------------------------------------ 0 16A1 000E1E508E20 Yes [0061] QLogic 57840 10 Gigabit Ethernet #61 1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62 Upgrading MFW Forced upgrading MFW1 image: from ver MFW1 7.10.39 to ver MFW1 7.12.31 Upgrading MFW2 image to version MFW2 7.12.31...
7–Firmware Upgrade Upgrading Firmware for Windows 1 16A1 000E1E508E22 Yes [0062] QLogic 57840 10 Gigabit Ethernet #62 Upgrading MBA Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1 Updating PCI ROM header with Vendor ID = 0x14e4 Device ID = 0x16a1 Forced upgrading MBA image: from ver PCI30 MBA 7.11.3 ;EFI x64 7.10.54 to ver...
Linux firmware upgrade utility for your adapter. In a Linux command line window, type the following command: # ./LnxQlgcUpg.sh ****************************************************************************** QLogic Firmware Upgrade Utility for Linux v2.7.13 ****************************************************************************** C Brd Name - ---- ------------ --- ------------------------------------------------------ 0 1639 0026B942B53E Yes PowerEdge R710 BCM5709 Gigabit Ethernet rev 20 (em1)
The System Reboot is required in order for the upgrade to take effect. Quitting program ... Program Exit Code: (95) Successfully upgraded mf800v7c.31 ****************************************************************************** QLogic Firmware Upgrade Utility for Linux v2.7.13 ****************************************************************************** C Brd Name - ---- ------------ --- ------------------------------------------------------...
The System Reboot is required in order for the upgrade to take effect. Quitting program ... Program Exit Code: (95) Successfully upgraded evpxe.nic ****************************************************************************** QLogic Firmware Upgrade Utility for Linux v2.7.13 ****************************************************************************** C Brd Name - ---- ------------ --- ------------------------------------------------------...
The System Reboot is required in order for the upgrade to take effect. The System Reboot is required in order for the upgrade to take effect. Quitting program ... Program Exit Code: (95) Successfully upgraded ibootv712.01
******************************************************************************
QLogic Firmware Upgrade Utility for Linux v2.7.13
******************************************************************************
The System Reboot is required in order for the upgrade to take effect. Quitting program ... Program Exit Code: (95) Successfully upgraded fcbv712.04 ****************************************************************************** QLogic Firmware Upgrade Utility for Linux v2.7.13 ****************************************************************************** C Brd Name - ---- ------------ --- ------------------------------------------------------...
7–Firmware Upgrade Upgrading Firmware for Linux Forced upgrading L2C image: from ver L2C 7.10.31 to ver L2C 7.10.31 Forced upgrading L2X image: from ver L2X 7.10.31 to ver L2X 7.10.31 Forced upgrading L2U image: from ver L2U 7.10.31 to ver L2U 7.10.31 C Brd Name - ---- ------------ --- ------------------------------------------------------...
For both Windows and Linux operating systems, iSCSI boot can be configured to boot with two distinct paths: non-offload (also known as the Microsoft/Open-iSCSI initiator) and offload (QLogic’s offload iSCSI driver or HBA). Configuration of the path is set with the HBA Boot Mode option located on the General Parameters screen of the iSCSI Configuration utility.
Target LUN
Initiator IQN
CHAP ID and secret

Configuring iSCSI Boot Parameters

Configure the QLogic iSCSI boot software for either static or dynamic configuration. Refer to Table 8-1 for configuration options available from the General Parameters screen.
8–iSCSI Protocol iSCSI Boot Table 8-1 lists parameters for both IPv4 and IPv6. Parameters specific to either IPv4 or IPv6 are noted. NOTE Availability of IPv6 iSCSI boot is platform/device dependent. Table 8-1. Configuration Options Option Description TCP/IP parameters via This option is specific to IPv4.
Multi-Function mode. MBA Boot Protocol Configuration To configure the boot protocol Restart your system. In the QLogic 577xx/578xx Ethernet Boot Agent banner (Figure 8-1), press CTRL+S. Figure 8-1. QLogic 577xx/578xx Ethernet Boot Agent...
8–iSCSI Protocol iSCSI Boot In the CCM device list (Figure 8-2), use the up or down arrow keys to select a device, and then press ENTER. Figure 8-2. CCM Device List In the Main menu, select MBA Configuration (Figure 8-3), and then press ENTER.
8–iSCSI Protocol iSCSI Boot In the MBA Configuration menu (Figure 8-4), use the up or down arrow keys to select Boot Protocol. Use the left or right arrow keys to change the boot protocol option to iSCSI. Press ENTER. Figure 8-4. Selecting the iSCSI Boot Protocol NOTE If iSCSI boot firmware is not programmed in the 8400/3400 Series network adapter, the iSCSI Boot Configuration option will not be...
8–iSCSI Protocol iSCSI Boot To configure the iSCSI boot parameters using static configuration: In the Main menu, select iSCSI Boot Configuration (Figure 8-5), and then press ENTER. Figure 8-5. Selecting iSCSI Boot Configuration In the iSCSI Boot Main menu, select General Parameters (Figure 8-6), and then press ENTER.
8–iSCSI Protocol iSCSI Boot IP Version: As Required (IPv6, non-offload) HBA Boot Mode: As required NOTE For initial OS installation to a blank iSCSI target LUN from a CD/DVD-ROM or mounted bootable OS installation image, set Boot to iSCSI Target to One Time Disabled.
8–iSCSI Protocol iSCSI Boot In the 1st Target Parameters menu, enable Connect to connect to the iSCSI target. Type values for the following parameters for the iSCSI target, and then press ENTER: IP Address TCP Port Boot LUN ...
Initiator Parameters screen. If no value was selected, then the controller defaults to the name: iqn.1995-05.com.qlogic.<11.22.33.44.55.66>.iscsiboot where the string 11.22.33.44.55.66 corresponds to the controller’s MAC address. If DHCP option 43 (IPv4 only) is used, then any settings on the Initiator Parameters, 1st Target Parameters, or 2nd Target Parameters screens are ignored and do not need to be cleared.
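The default initiator-name construction described above can be sketched in shell as follows; the MAC address shown is a placeholder:

```shell
# Build the default initiator IQN from the adapter MAC address, as the
# controller does when no initiator name was configured: the colons in
# the MAC become dots inside the IQN string.
mac="11:22:33:44:55:66"
iqn="iqn.1995-05.com.qlogic.$(echo "$mac" | tr ':' '.').iscsiboot"
echo "$iqn"
```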
8–iSCSI Protocol iSCSI Boot Boot to iSCSI Target: As Required DHCP Vendor ID: As Required Link Up Delay Time: As Required Use TCP Timestamp: As Required Target as First HDD: As Required LUN Busy Retry Count: As Required ...
DHCP iSCSI Boot Configuration for IPv6 DHCP iSCSI Boot Configurations for IPv4 The DHCP protocol includes a number of options that provide configuration information to the DHCP client. For iSCSI boot, QLogic adapters support the following DHCP configurations: DHCP Option 17, Root Path ...
The target name in either IQN or EUI format (refer to RFC 3720 for details on both IQN and EUI formats). An example IQN name would be “iqn.1995-05.com.QLogic:iscsi-target”. DHCP Option 43, Vendor-Specific Information DHCP option 43 (vendor-specific information) provides more configuration options to the iSCSI client than DHCP option 17.
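The iSCSI root-path string carried in DHCP option 17 follows the RFC 4173 layout iscsi:<server>:<protocol>:<port>:<LUN>:<targetname>. The sketch below splits a sample string into its fields; the server address and port are placeholders, and the target name is the example IQN from the text above (note the IQN itself contains one colon):

```shell
# Split a DHCP option 17 iSCSI root path (RFC 4173 layout) into fields.
# The example IQN contains one colon, so its two pieces are rejoined.
root_path="iscsi:192.168.1.100:6:3260:0:iqn.1995-05.com.QLogic:iscsi-target"
IFS=':' read -r prefix server protocol port lun tgt1 tgt2 <<EOF
$root_path
EOF
target="$tgt1:$tgt2"
echo "server=$server port=$port lun=$lun target=$target"
```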
DHCPv6 Option 16, Vendor Class Option
DHCPv6 Option 17, Vendor-Specific Information

NOTE: The DHCPv6 standard Root Path option is not yet available. QLogic suggests using Option 16 or Option 17 for dynamic iSCSI Boot IPv6 support.
8–iSCSI Protocol iSCSI Boot DHCPv6 Option 16, Vendor Class Option DHCPv6 Option 16 (vendor class option) must be present and must contain a string that matches your configured DHCP Vendor ID parameter. The DHCP Vendor ID value is QLGC ISAN, as shown in General Parameters of the iSCSI Boot Configuration menu.
Load the latest QLogic MBA and iSCSI boot images onto NVRAM of the adapter. Configure the BIOS on the remote system to have the QLogic MBA as the first bootable device, and the CDROM as the second device. Configure the iSCSI target to allow a connection from the remote device.
8–iSCSI Protocol iSCSI Boot Boot up the remote system. When the PXE banner appears, press CTRL+S to enter the PXE menu. At the PXE menu, set Boot Protocol to iSCSI. Enter the iSCSI target parameters. Set HBA Boot Mode to Enabled or Disabled. (Note: This parameter cannot be changed when the adapter is in Multi-Function mode.) Save the settings and reboot the system.
Remove any local hard drives on the system to be booted (the “remote system”). Load the latest QLogic MBA and iSCSI boot images into the NVRAM of the adapter. Configure the BIOS on the remote system to have the QLogic MBA as the first bootable device and the CDROM as the second device.
Following another system restart, check and verify that the remote system is able to boot to the desktop. After Windows Server 2012 boots to the OS, QLogic recommends running the driver installer to complete the QLogic drivers and application installation.
8–iSCSI Protocol iSCSI Boot

In some network configurations, if additional time is required for the network adapters to become active (for example, with the use of “netsetup=dhcp,all”), add “netwait=8”. This allows the network adapters additional time to complete the driver load and re-initialization of all interfaces.
8–iSCSI Protocol iSCSI Boot Change the grub menu to point to the new initrd image. To enable CHAP, you need to modify iscsid.conf (Red Hat only). Reboot and change CHAP parameters if desired. Continue booting into the iSCSI Boot image and select one of the images you created (non-offload or offload).
8–iSCSI Protocol iSCSI Boot

Content of the new boot.open-iscsi file:

#!/bin/bash
#
# /etc/init.d/iscsi
#
### BEGIN INIT INFO
# Provides:          iscsiboot
# Required-Start:
# Should-Start:      boot.multipath
# Required-Stop:
# Should-Stop:       $null
# Default-Start:
# Default-Stop:
# Short-Description: iSCSI initiator daemon root-fs support
# Description:       Starts the iSCSI initiator daemon if the
#                    root-filesystem is on an iSCSI device
### END INIT INFO
iscsi_mark_root_nodes()
{
    $ISCSIADM -m session 2> /dev/null | while read t num i target ; do
        ip=${i%%:*}
        STARTUP=`$ISCSIADM -m node -p $ip -T $target 2> /dev/null | grep "node.conn\[0\].startup" | cut -d' ' -f3`
        if [ "$STARTUP" -a "$STARTUP" != "onboot" ] ; then
            $ISCSIADM -m node -p $ip -T $target -o update -n node.conn[0].startup -v onboot
        fi
    done
}
For example, type c:\temp. Follow the driver installer instructions to install the drivers in the specified folder. In this example, the driver files are installed in c:\temp\Program Files 64\QLogic Corporation\QDrivers. Download the Windows Assessment and Deployment Kit (ADK) version 8.1 from http://www.microsoft.com/en-in/download/details.aspx?id=39982.
<path> is the drive and subfolder that you specified in Step 2. For example:
slipstream.bat “c:\temp\Program Files 64\QLogic Corporation\QDrivers”
NOTE
Operating system installation media is expected to be a local drive. Network paths for operating system installation media are not supported.
Layer-2 VLAN to segregate it from general traffic. When this is the case, make the iSCSI interface on the adapter a member of that VLAN. During a boot of the Initiator system, press CTRL+S to open the QLogic CCM pre-boot utility (Figure 8-8).
In the MBA Configuration menu (Figure 8-11), use the up or down arrow keys to select each of the following parameters.
VLAN Mode: Press ENTER to change the value to Enabled.
VLAN ID: Press ENTER to open the VLAN ID dialog, type the target VLAN ID (1–4096), and then press ENTER.
Ensure that all runlevels of the network service are on. Ensure that runlevels 2, 3, and 5 of the iSCSI service are on. Update iscsiuio. You can get the iscsiuio package from the QLogic CD. This step is not needed for SuSE 10.
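On SysV-init distributions, the runlevel checks above can be sketched with chkconfig; the exact service names vary by distribution, so treat these commands as illustrative:

```shell
# Enable the network service for its default runlevels
chkconfig network on

# Enable the iSCSI service in runlevels 2, 3, and 5
chkconfig --level 235 iscsi on

# Verify that runlevels 2, 3, and 5 report "on"
chkconfig --list iscsi
```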
IPv6 address and the target configured using a router-configured IPv6 address. Solution: This is a known Windows TCP/IP stack issue. Problem: The QLogic iSCSI Crash Dump utility will not work properly to capture a memory dump when the link speed for iSCSI boot is configured for 10Mbps or 100Mbps.
SFP+ defaults to 10Gbps operation and does not support autonegotiation. iSCSI Crash Dump If you will use the QLogic iSCSI Crash Dump utility, it is important to follow the installation procedure to install the iSCSI Crash Dump driver. See “Using the Installer”...
Now that the IP address has been configured for the iSCSI adapter, use Microsoft Initiator to configure and add a connection to the iSCSI target using the QLogic iSCSI adapter. See Microsoft’s user guide for more details on Microsoft Initiator.
8–iSCSI Protocol iSCSI Offload in Windows Server Type the initiator IQN name, and then click OK. Figure 8-14. iSCSI Initiator Node Name Change Select the Discovery tab (Figure 8-15), and click Add to add a target portal. Figure 8-15. iSCSI Initiator—Add a Target Portal
Enter the IP address of the target and click Advanced (Figure 8-16). Figure 8-16. Target Portal IP Address From the General tab, select QLogic 10 Gigabit Ethernet iSCSI Adapter for the local adapter (Figure 8-17). Figure 8-17. Selecting the Local Adapter...
Select the adapter IP address for the Initiator IP, and then click OK (Figure 8-18). Figure 8-18. Selecting the Initiator IP Address
In the iSCSI Initiator Properties dialog box (Figure 8-19), click OK to add the target portal. Figure 8-19. Adding the Target Portal
8-21), click Advanced. Figure 8-21. Log On to Target Dialog Box On the General tab, select QLogic 10 Gigabit Ethernet iSCSI Adapter for the local adapter, and then click OK to close the Advanced settings. Click OK to close the Microsoft Initiator.
To format your iSCSI partition, use Disk Manager. NOTE Teaming does not support iSCSI adapters. Teaming does not support NDIS adapters that are in the boot path. Teaming supports NDIS adapters that are not in the iSCSI boot path, but only for the SLB team type.
Table 8-5. Offload iSCSI (OIS) Driver Event Log Messages Message Severity Message Number Error Maximum command sequence number is not serially greater than expected command sequence number in login response. Dump data contains Expected Command Sequence number followed by Maximum Command Sequence number.
Error CHAP Response given by the target did not match the expected one. Dump data contains the CHAP response. Error Header Digest is required by the initiator, but target did not offer it.
Information A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name.
Error Target failed to respond in time to a logout request sent in response to an asynchronous message from the target. Error Initiator Service failed to respond in time to a request to configure IPSec resources for an iSCSI connection.
Offload in Linux Server Open iSCSI User Applications User Application - qlgc_iscsiuio Bind iSCSI Target to QLogic iSCSI Transport Name VLAN Configuration for iSCSI Offload (Linux) Making Connections to iSCSI Targets Maximum Offload iSCSI Connections ...
# qlgc_iscsiuio -v
Start qlgc_iscsiuio:
# qlgc_iscsiuio
Bind iSCSI Target to QLogic iSCSI Transport Name
In Linux, each iSCSI port is an interface known as an iface. By default, the open-iscsi daemon connects to discovered targets using a software initiator (transport name = tcp) with the iface name default.
iface.port = 0
#END Record
NOTE
Although not strictly required, QLogic recommends configuring the same VLAN ID in the iface.iface_num field for iface file identification purposes.
Setting the VLAN ID on the Ethernet Interface
If using RHEL 5.x versions of Linux, it is recommended that you configure the iSCSI VLAN on the Ethernet interface.
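For illustration only, a complete iface file for a bnx2i offload interface carrying a VLAN might look like the following; the iface name, MAC address, and VLAN ID are placeholders, and field support (such as iface.vlan_id) varies by open-iscsi version:

```
# Illustrative iface file -- all values are placeholders
iface.iscsi_ifacename = bnx2i.00:0e:1e:aa:bb:cc.vlan100
iface.transport_name = bnx2i
iface.hwaddress = 00:0e:1e:aa:bb:cc
iface.vlan_id = 100
iface.iface_num = 100
iface.port = 0
#END Record
```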
This is a sample list of commands to discover targets and to create iSCSI connections to a target.
Add Static Entry
iscsiadm -m node -p <ipaddr[:port]> -T iqn.2007-05.com.qlogic:target1 -o new -I <iface_file_name>
iSCSI Target Discovery Using 'SendTargets'
iscsiadm -m discovery --type sendtargets -p <ipaddr[:port]> -I <iface_file_name>
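After discovery, a session is established (and later torn down) through the same iface. This is a sketch using the example target IQN from above, with the same placeholder notation:

```shell
# Log in to the target through the offload iface
iscsiadm -m node -p <ipaddr[:port]> -T iqn.2007-05.com.qlogic:target1 \
    --login -I <iface_file_name>

# Log out when the session is no longer needed
iscsiadm -m node -p <ipaddr[:port]> -T iqn.2007-05.com.qlogic:target1 \
    --logout -I <iface_file_name>
```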
In the scenario where multiple CNIC devices are in the system and the system is booted with QLogic’s iSCSI boot solution, ensure that the iSCSI node under /etc/iscsi/nodes for the boot target is bound to the NIC that is used for booting.
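One way to check that binding is to inspect the node records themselves; this sketch assumes the default /etc/iscsi/nodes layout, where each record under a target's directory is named for the iface it is bound to:

```shell
# List the iface bindings recorded for the boot target
ls /etc/iscsi/nodes/<target_iqn>/<ipaddr>,<port>,<tpgt>/

# Remove a record bound to the wrong iface (iface name is hypothetical)
iscsiadm -m node -T <target_iqn> -p <ipaddr[:port]> -I <wrong_iface> -o delete
```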
Offload on VMware Server Similar to bnx2fc, bnx2i is a kernel mode driver used to provide a translation layer between the VMware SCSI stack and the QLogic iSCSI firmware/hardware. Bnx2i functions under the open-iscsi framework. iSCSI traffic on the network may be isolated in a VLAN to segregate it from other traffic.
Configure the VLAN on VMKernel (Figure 8-23). Figure 8-23. Configuring the VLAN on VMKernel
(NAS), management, IPC, and storage, are used to achieve the desired performance and versatility. In addition to iSCSI for storage solutions, FCoE can now be used with capable QLogic C-NICs. FCoE is a standard that allows the Fibre Channel protocol to be transferred over Ethernet, preserving existing Fibre Channel infrastructures and capital investments by classifying received FCoE and FIP frames.
Preparing System BIOS for FCoE Build and Boot
Modify System Boot Order
The QLogic initiator must be the first entry in the system boot order. The second entry must be the OS installation media. It is important that the boot order be set correctly; otherwise, the installation will not proceed correctly.
9–Fibre Channel Over Ethernet FCoE Boot from SAN
Prepare QLogic Multiple Boot Agent for FCoE Boot
During POST, press CTRL+S at the QLogic Ethernet Boot Agent banner to invoke the CCM utility. Select the device through which boot is to be configured.
Ensure DCB/DCBX is enabled on the device (Figure 9-2). FCoE boot is only supported on DCBX-capable configurations. As such, DCB/DCBX must be enabled, and the directly attached link peer must also be DCBX-capable with parameters that allow for full DCBX synchronization.
Configure the desired boot target and LUN. From the Target Information Menu (Figure 9-4), select the first available path. Figure 9-4. FCoE Boot—Target Information
Enable the Connect field. Enter the target WWPN and Boot LUN information for the target to be used for boot (Figure 9-5). Figure 9-5. FCoE Boot—Specify Target WWPN and Boot LUN Figure 9-6. FCoE Boot Target Information Press ESC until prompted to exit and save changes.
UEFI Boot LUN Scanning
UEFI boot LUN scanning eases the task of configuring FCoE boot from SAN by allowing you to choose from a list of targets and select a WWPN instead of typing the WWPN.
In the FCoE Target Parameters window, there are eight target entries in which you can enable the target (Connect n), select or type a WWPN (WWPN n), and type a boot LUN number (Boot LUN n) (Figure 9-8).
Provisioning Storage Access in the SAN
Storage access consists of zone provisioning and storage selective LUN presentation, each of which is commonly provisioned per initiator WWPN. Two main paths are available for approaching storage access: 
126.
One-Time Disabled
QLogic's FCoE ROM is implemented as a Boot Entry Vector (BEV). In this implementation, the Option ROM only connects to the target once it has been selected by the BIOS as the chosen boot device. This differs from other implementations, which connect to the boot device even if another device has been selected by the system BIOS.
Wait through all option ROM banners. Once FCoE boot is invoked, it connects to the target and provides a four-second window to press CTRL+D to invoke the bypass. Press CTRL+D to proceed to installation. Figure 9-10.
Windows Server 2008 R2 and Windows Server 2008 SP2 FCoE Boot Installation
Ensure that no USB flash drive is attached before starting the OS installer. The EVBD and OFC/BXFOE drivers need to be loaded during installation. Go through the normal procedures for OS installation.
Then load the bxfcoe (OFC) driver (Figure 9-12). Figure 9-12. Load bxfcoe Driver Select the boot LUN to be installed (Figure 9-13). Figure 9-13. Selecting the FCoE Boot LUN
Windows Server 2012/2012 R2 FCoE Boot Installation
For Windows Server 2012/2012 R2 Boot from SAN installation, QLogic requires the use of a “slipstream” DVD or ISO image with the latest QLogic drivers injected. See “Injecting (Slipstreaming) Adapter Drivers into Windows Image Files” on page 91 in the iSCSI chapter.
Linux FCoE Boot Installation
Configure the adapter boot parameters and Target Information (press CTRL+S and enter the CCM utility) as detailed in “Preparing System BIOS for FCoE Build and Boot” on page 118.
Follow the on-screen instructions to choose the Driver Update medium and load drivers (Figure 9-15). Figure 9-15. Choosing Driver Update Medium Once the driver update is complete, select Next to continue with OS installation.
When requested, click Configure FCoE Interfaces. Ensure FCoE Enable is set to yes on the 10GbE QLogic initiator ports you wish to use as the SAN boot path(s).
For each interface to be enabled for FCoE boot, click Change Settings and ensure FCoE Enable and AUTO_VLAN are set to yes and DCB required is set to no. For each interface to be enabled for FCoE boot, click Create FCoE VLAN Interface.
Once configuration of all interfaces is complete, click OK to proceed. Click Next to continue installation. YaST2 will prompt to activate multipath. Answer as appropriate.
Continue installation as usual. Under the Expert tab on the Installation Settings screen, select Booting. Select the Boot Loader Installation tab, and then select Boot Loader Installation Details. Make sure you have one boot loader entry here; delete all redundant entries.
Click OK to proceed and complete installation.
RHEL6 Installation
Boot from the installation medium. For RHEL6.3, an updated Anaconda image is required for FCoE BFS. That updated image is provided by Red Hat at the following URL: http://rvykydal.fedorapeople.org/updates.823086-fcoe.img.
When prompted Do you have a driver disk, enter Yes. NOTE RHEL does not allow driver update media to be loaded over the network when installing driver updates for network devices. Use local media.
Select Specialized Storage Devices when prompted. Click Add Advanced Target.
Select Add FCoE SAN, and then select Add drive. For each interface intended for FCoE boot, select the interface, deselect Use DCB, select Use auto vlan, and then click Add FCoE Disk(s).
Repeat steps 8 through 10 for all initiator ports. Confirm all FCoE disks are visible under Multipath Devices and/or Other SAN Devices. Click Next to proceed. Click Next and complete installation as usual. Upon completion of installation, the system will reboot.
Linux: Adding Additional Boot Paths
Both RHEL and SLES require updates to the network configuration when adding a new boot path through an FCoE initiator that was not configured during installation. The following sections describe this procedure for each supported operating system.
SLES 11 SP2 and Above
On SLES11 SP2, if the system boots through an initiator that has not been configured as an FCoE interface during installation, the system will fail to boot. To add new boot paths, the system must boot up through the configured FCoE interface.
VMware ESXi FCoE Boot Installation
FCoE Boot from SAN requires that the latest QLogic 8400 Series asynchronous drivers be included in the ESXi (5.1, 5.5, 6.0) install image. Refer to Image_builder_doc.pdf from VMware on how to slipstream drivers.
Press F11 to accept the agreement and continue. Select the boot LUN for installation and press ENTER to continue.
Select the desired installation method. Select the keyboard layout. Enter a password.
Press F11 to confirm the install. Press ENTER to reboot after installation. On 57800 and 57810 boards, the management network is not vmnic0. After booting, open the GUI console and display the configure management network >
For BCM57800 and BCM57810 boards, the FCoE boot devices need to have a separate vSwitch other than vSwitch0. This allows DHCP to assign the IP address to the management network rather than to the FCoE boot device.
9–Fibre Channel Over Ethernet Booting from SAN After Installation Booting from SAN After Installation Now that boot configuration and OS installation are complete, you can reboot and test the installation. On this and all future reboots, no other user interactivity is required.
Install the binary RPM containing the new driver version. Refer to the linux-nx2 package README for instructions on how to prepare a binary driver RPM. Use the following command to update the ramdisk: 
Configuring FCoE
By default, DCB is enabled on QLogic 8400 Series FCoE- and DCB-compatible C-NICs. QLogic 8400 Series FCoE requires a DCB-enabled interface. For Windows operating systems, use the QCC GUI or QLogic’s Comprehensive Configuration Management (CCM) utility to configure the DCB parameters.
Configuration Parameters Overview NIC partitioning divides a QLogic 8400/3400 Series 10 Gigabit Ethernet NIC into multiple virtual NICs by having multiple PCI physical functions per port. Each PCI function is associated with a different virtual NIC. To the OS and the network, each physical function appears as a separate NIC port.
10–NIC Partitioning and Bandwidth Management Configuring for NIC Partitioning Linux 64-bit, RHEL 5.5 and later, SLES11 SP1 and later VMware ESXi 5.0, 5.1, 5.5, and 6.0 NOTE 32-bit Linux operating systems have a limited amount of memory space available for Kernel data structures.
10–NIC Partitioning and Bandwidth Management Configuration Parameters Flow Control The flow control setting of the port. Physical Link Speed The physical link speed of the port, either 1G or 10G. Relative Bandwidth Weight (%) The relative bandwidth setting represents a weight or importance of a particular function.
Function 0: Ethernet, FCoE
Function 1: Ethernet
Function 2: Ethernet
Function 3: Ethernet, iSCSI
If Relative Bandwidth Weight is configured as “0” for all four PFs, then all six offloads will share the bandwidth equally.
Up to 64 VLANs (63 tagged and 1 untagged) can be defined for each QLogic adapter on your server, depending on the amount of memory available in your system. VLANs can be added to a team to allow multiple VLANs with different VLAN IDs.
VLAN is an accounting group. Main Server: A high-use server that needs to be accessed from all VLANs and IP subnets. The Main Server has a QLogic adapter installed. All three IP subnets are accessed through the single physical adapter interface.
VLAN tagging is only required to be enabled on switch ports that create trunk links to other switches, or on ports connected to tag-capable end-stations, such as servers or workstations with QLogic adapters. For Hyper-V, create VLANs in the vSwitch-to-VM connection instead of in a team, to allow VM live migrations to occur without having to ensure the future host system has a matching team VLAN setup.
(VF), a lightweight PCIe function that can be directly assigned to a virtual machine (VM), bypassing the hypervisor layer for the main data movement. Not all QLogic adapters support SR-IOV; refer to your product documentation for details. Enabling SR-IOV Before attempting to enable SR-IOV, ensure that: ...
Enable SR-IOV. SR-IOV must be enabled now; it cannot be enabled after the vSwitch is created. Install the QLogic drivers for the adapters detected in the VM. Use the latest drivers available from your vendor for the host OS (do not use the inbox drivers).
12–SR-IOV Enabling SR-IOV SR-IOV and Storage Storage (FCoE or iSCSI) can be enabled on an SR-IOV-enabled adapter. However, if storage is used on an NPAR-enabled physical function (PF), then the number of virtual functions for that PF is set to zero; therefore, SR-IOV is disabled on that PF, and the other PFs on that port can support SR-IOV VF connections.
Microsoft Virtualization with Hyper-V Microsoft Virtualization is a hypervisor virtualization system for Windows Server 2008 and 2012. This section is intended for those who are familiar with Hyper-V, and it addresses issues that affect the configuration of 8400/3400 Series network adapters and teamed network adapters when Hyper-V is used.
13–Microsoft Virtualization with Hyper-V Supported Features Supported Features Table 13-1 identifies Hyper-V supported features that are configurable for 8400/3400 Series network adapters. This table is not an all-inclusive list of Hyper-V features. Table 13-1. Configurable Network Adapter Hyper-V Features Supported in Windows Server Feature Comments/Limitation...
Single Network Adapter
Windows Server 2008
When configuring an 8400/3400 Series network adapter on a Hyper-V system, be aware of the following: An adapter that is to be bound to a virtual network should not be configured for VLAN tagging through the driver’s advanced properties.
Feature Comments/Limitation 2008 2008 R2 2012
Smart Load Balancing and Failover (SLB) team type: Multi-member SLB team allowed with latest QLogic Advanced Server Program (QLASP) version. Note: VM MAC is not presented to external switches.
Link Aggregation – (IEEE 802.3ad
Teamed Network Adapters
Table 13-2. Configurable Teamed Network Adapter Hyper-V Features Supported in Windows Server Version
Feature Comments/Limitation 2008 2008 R2 2012
Hyper-V virtual switch over a teamed adapter: –
Hyper-V virtual switch over a VLAN: –
iSCSI boot: *Remote boot to SAN is supported.
Windows Server 2008 R2
When configuring a team of 8400/3400 Series network adapters on a Hyper-V system, be aware of the following: Create the team prior to binding the team to the Hyper-V virtual network. 
From Windows Server 2008 to Windows Server 2008 R2 From Windows Server 2008 R2 to Windows Server 2012 Prior to performing an OS upgrade when a QLogic 8400/3400 Series adapter is installed on your system, QLogic recommends the procedure below. Save all team and adapter IP information.
Data Center Bridging (DCB) Overview DCB Capabilities Configuring DCB DCB Conditions Data Center Bridging in Windows Server 2012 Overview Data Center Bridging (DCB) is a collection of IEEE specified standard extensions to Ethernet to provide lossless data delivery, low latency, and standards-based bandwidth sharing of data center physical links.
14–Data Center Bridging (DCB) DCB Capabilities DCB Capabilities Enhanced Transmission Selection (ETS) Enhanced Transmission Selection (ETS) provides a common management framework for assignment of bandwidth to traffic classes. Each traffic class or priority can be grouped in a Priority Group (PG), and it can be considered as a virtual link or virtual interface queue.
14–Data Center Bridging (DCB) Configuring DCB Configuring DCB By default, DCB is enabled on QLogic 8400/3400 Series DCB-compatible C-NICs. DCB configuration is rarely required, as the default configuration should satisfy most scenarios. DCB parameters can be configured using the QCC GUI.
“PowerShell User Scripting Guide” in the Microsoft TechNet Library. To revert to standard QCC control over the QLogic DCB feature set, uninstall the Microsoft QoS feature or disable Quality of Service in the QCC GUI or Device Manager NDIS Advanced Properties page.
The 8400/3400 Series Adapters support up to two traffic classes (in addition to the default traffic class) that can be used by the Windows QoS service. On 8400 Series Adapters, disable iSCSI-offload or FCoE-offload (or both) to free one or two traffic classes for use by the Windows QoS service.
This section describes the technology and implementation considerations when working with the network teaming services offered by the QLogic software shipped with servers and storage products. The goal of QLogic teaming services is to provide fault tolerance and link aggregation across a team of two or more adapters.
15–QLogic Teaming Services Executive Summary Table 15-1. Glossary (Continued)
Item Definition
PXE Preboot Execution Environment
RAID redundant array of inexpensive disks
Smart Load Balancing™ and Failover Switch-independent failover type of team in which the primary team members handle all incoming and outgoing traffic while the standby team member (if present) is idle until a failover event (for example, loss of link) occurs.
Network Addressing
To understand how teaming works, it is important to understand how node communications work in an Ethernet network. This document is based on the assumption that the reader is familiar with the basics of IP and Ethernet network communications.
For switch-independent teaming modes, all physical adapters that make up a virtual adapter must use the unique MAC address assigned to them when transmitting data. That is, the frames that are sent by each of the physical adapters in the team must use a unique MAC address to be IEEE compliant.
Table 15-2. Available Teaming Types (Continued)
Teaming Type: Generic Trunking (FEC/GEC)/802.3ad-Draft Static
Link Aggregation Control Protocol Support: No
Load Balancing: ✔
Failover: ✔
Switch-Dependent (switch must support specific type of team): ✔ Required on the Switch
Smart Load Balancing enables both transmit and receive load balancing based on the Layer 3/Layer 4 IP address and TCP/UDP port number. In other words, the load balancing is not done at a byte or frame level but on a TCP/UDP session basis.
When the clients and the system are on different subnets, and incoming traffic has to traverse a router, the received traffic destined for the system is not load balanced. The physical adapter that the intermediate driver has selected to carry the IP flow carries all of the traffic.
In this teaming mode, the intermediate driver controls load balancing and failover for outgoing traffic only, while incoming traffic is controlled by the switch firmware and hardware. As is the case for Smart Load Balancing, the QLASP intermediate driver uses the IP/TCP/UDP source and destination addresses to load balance the transmit traffic from the server.
SLB (Auto-Fallback Disable)
This type of team is identical to the Smart Load Balancing and Failover type of team, with the following exception: when the standby member is active, if a primary member comes back online, the team continues using the standby member rather than switching back to the primary member.
Repeater Hub
A Repeater Hub allows a network administrator to extend an Ethernet network beyond the limits of an individual segment. The repeater regenerates the input signal received on one port onto all other connected ports, forming a single collision domain.
Configuring Teaming
The QCC GUI is used to configure teaming in the supported operating system environments and is designed to run on 32-bit and 64-bit Windows family of operating systems. The QCC GUI is used to configure load balancing and fault tolerance teaming, and VLANs.
Same IP address for all team members
Load balancing by IP address
Load balancing by MAC address: Yes (used for no-IP/IPX)
SLB with one primary and one standby member. Requires at least one QLogic adapter in the team.
Selecting a Team Type
The following flow chart provides the decision flow when planning for Layer 2 teaming. The primary rationale for teaming is the need for additional network bandwidth and fault tolerance. Teaming offers link aggregation and fault tolerance to meet both of these requirements.
Teaming Mechanisms
Architecture
Types of Teams
Attributes of the Features Associated with Each Type of Team
Speeds Supported for Each Type of Team
Architecture
The QLASP is implemented as an NDIS intermediate driver (see Figure 15-2). It operates below protocol stacks such as TCP/IP and IPX and appears as a virtual adapter. This virtual adapter inherits the MAC Address of the first port initialized in the team.
Outbound Traffic Flow
The QLogic Intermediate Driver manages the outbound traffic flow for all teaming modes. For outbound traffic, every packet is first classified into a flow, and then distributed to the selected physical adapter for transmission. The flow classification involves an efficient hash computation over known protocol fields.
When an inbound IP Datagram arrives, the appropriate Inbound Flow Head Entry is located by hashing the source IP address of the IP Datagram. Two statistics counters stored in the selected entry are also updated. These counters are used in the same fashion as the outbound counters by the load-balancing engine periodically to reassign the flows to the physical adapter.
Types of Teams
Switch-Independent
The QLogic Smart Load Balancing type of team allows two to eight physical adapters to operate as a single virtual adapter. The greatest benefit of the SLB type of team is that it operates on any IEEE compliant switch and requires no special configuration.
Outbound Load Balancing using MAC Address: No
Outbound Load Balancing using IP Address: Yes
Multivendor Teaming: Supported (must include at least one QLogic Ethernet adapter as a team member).
Applications
The SLB algorithm is most appropriate in home and small business environments where cost is a concern or with commodity switching equipment.
The following are the key attributes of Generic Static Trunking:
Failover mechanism – Link loss detection
Load Balancing Algorithm – Outbound traffic is balanced through a QLogic proprietary mechanism based on L4 flows. Inbound traffic is balanced according to a switch-specific mechanism.
The following are the key attributes of Dynamic Trunking:
Failover mechanism – Link loss detection
Load Balancing Algorithm – Outbound traffic is balanced through a QLogic proprietary mechanism based on L4 flows. Inbound traffic is balanced according to a switch-specific mechanism.
LiveLink
LiveLink is a feature of QLASP that is available for the Smart Load Balancing (SLB) and SLB (Auto-Fallback Disable) types of teaming. The purpose of LiveLink is to detect link loss beyond the switch and to route traffic only through team members that have a live link.
Table 15-4. Attributes (Continued)
Feature Attribute
Failover event Loss of link
Failover time <500 ms
Fallback time 1.5 s (approximate)
MAC address Different
Multivendor teaming
Generic (Static) Trunking
User interface QConvergeConsole GUI
Number of teams
Table 15-4. Attributes (Continued)
Dynamic LACP
User interface QConvergeConsole GUI
Number of teams Maximum 16
Number of adapters per team Maximum 16
Hot replace
Hot add
Hot remove
Link speed support Different speeds
Speeds Supported for Each Type of Team

The various link speeds that are supported for each type of team are listed in Table 15-5. Mixed speed refers to the capability of teaming adapters that are running at different link speeds.
Checksum Offload

Checksum Offload is a property of the QLogic network adapters that allows the TCP/IP/UDP checksums for send and receive traffic to be calculated by the adapter hardware rather than by the host CPU. In high-traffic situations, this can allow a system to handle more connections more efficiently than if the host CPU were forced to calculate the checksums.
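The computation the hardware offloads is the standard 16-bit one's-complement Internet checksum (RFC 1071) used by IP, TCP, and UDP. A minimal software version, for illustration only:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

packet = bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7])
cksum = internet_checksum(packet)                  # 0x220D for this input
```

Appending the computed checksum to the data makes the whole buffer checksum to zero, which is how a receiver (or the adapter's receive offload) validates an incoming packet.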
Large Send Offload

Large Send Offload (LSO) is a feature provided by QLogic network adapters that prevents an upper-level protocol such as TCP from breaking a large data packet into a series of smaller packets with headers appended to them. The protocol...
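The splitting that LSO delegates to the adapter can be sketched in software. This is a simplified payload-only model, not the actual NIC implementation, which also replicates and fixes up the headers:

```python
def segment_payload(payload: bytes, mss: int) -> list:
    """Split one large send into MSS-sized segments, as LSO hardware
    would.  Returns (relative_offset, segment) pairs; a real adapter
    also replicates the TCP/IP headers and adjusts sequence numbers,
    lengths, and checksums for each segment.
    """
    return [(off, payload[off:off + mss])
            for off in range(0, len(payload), mss)]

segments = segment_payload(b"x" * 4000, 1460)
# 4000 bytes at MSS 1460 -> three segments of 1460, 1460, and 1080 bytes
```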
The only supported QLASP team configuration when using Microsoft Virtual Server 2005 is with a Smart Load Balancing team-type consisting of a single primary QLogic adapter and a standby QLogic adapter. Make sure to unbind or deselect “Virtual Machine Network Services” from each team member prior to creating a team and prior to creating Virtual networks with Microsoft Virtual Server.
This is true for all types of teaming supported by QLogic. Therefore, an interconnect link must be provided between the switches that connect to ports in the same team.
Furthermore, a failover event would cause additional loss of connectivity. Consider a cable disconnect on the Top Switch port 4. In this case, Gray would send the ICMP request to Red 49:C9, but because the Bottom Switch has no entry for 49:C9 in its CAM table, the frame is flooded to all its ports but cannot find a way to get to 49:C9.
The addition of a link between the switches allows traffic from/to Blue and Gray to reach each other without any problems. Note the additional entries in the CAM table for both switches. The link interconnect is critical for the proper operation of the team.
Figure 15-5 represents a failover event in which the cable is unplugged on the Top Switch port 4. This is a successful failover with all stations pinging each other without loss of connectivity.

Figure 15-5. Failover Event...
Spanning Tree Algorithm
Topology Change Notice (TCN)
Port Fast/Edge Port

In Ethernet networks, only one active path may exist between any two bridges or switches. Multiple active paths between switches can cause loops in the network.
Topology Change Notice (TCN)

A bridge/switch creates a forwarding table of MAC addresses and port numbers by learning the source MAC addresses of frames received on a particular port. The table is used to forward frames to a specific port rather than flooding the frame to all ports.
Teaming with Hubs (for troubleshooting purposes only)
Hub Usage in Teaming Network Configurations
SLB Teams
SLB Team Connected to a Single Hub
Generic and Dynamic Trunking (FEC/GEC/IEEE 802.3ad)

SLB teaming can be used with 10/100 hubs, but it is only recommended for troubleshooting purposes, such as connecting a network analyzer when switch port mirroring is not an option.
SLB Team Connected to a Single Hub

SLB teams configured as shown in Figure 15-6 maintain their fault tolerance properties. Either server connection could fail without affecting the network. Clients could be connected directly to the hub, and fault tolerance would still be maintained;...
Multiple adapters may be used for each of these purposes: private, intracluster communications and public, external client communications. All QLogic teaming modes are supported with Microsoft Cluster Software for the public adapter only.
Figure 15-7 shows a two-node Fibre Channel cluster with three network interfaces per cluster node: one private and two public. On each node, the two public adapters are teamed, and the private adapter is not. Teaming is supported across the same switch or across two switches.
High-Performance Computing Cluster

Gigabit Ethernet is typically used for the following three purposes in high-performance computing cluster (HPCC) applications:
Inter-Process Communications (IPC): For applications that do not require low-latency, high-bandwidth interconnects (such as Myrinet or InfiniBand), Gigabit Ethernet can be used for communication between the compute nodes.
Oracle

In our Oracle Solution Stacks, we support adapter teaming in both the private network (interconnect between RAC nodes) and the public network with clients or the application layer above the database layer.

Figure 15-8. Clustering With Teaming Across Two Switches...
Teaming and Network Backup
Load Balancing and Failover
Fault Tolerance

When you perform network backups in a nonteamed environment, overall throughput on a backup server adapter can be easily impacted due to excessive traffic and adapter overloading.
Figure 15-10 shows a network topology that demonstrates tape backup in a QLogic teamed environment and how smart load balancing can load balance tape backup data across teamed adapters. There are four paths that the client-server can use to send data to the backup server, but only one of these paths will be designated during data transfer.
If a network link fails during tape backup operations, all traffic between the backup server and client stops and backup jobs fail. If, however, the network topology is configured for both QLogic SLB and switch fault tolerance, tape backup operations can continue without interruption during the link failure. All failover processes within the network are transparent to tape backup software applications.
To understand how backup data streams are directed during the network failover process, consider the topology in Figure 15-10. Client-Server Red is transmitting data to the backup server through Path 1, but a link failure occurs between the backup server and the switch.
Troubleshooting Teaming Problems
Teaming Configuration Tips
Troubleshooting Guidelines

When running a protocol analyzer over a virtual adapter teamed interface, the MAC address shown in the transmitted frames may not be correct. The analyzer does not show the frames as constructed by QLASP; it shows the MAC address of the team, not the MAC address of the interface transmitting the frame.
Disabling the device driver of a network adapter participating in an LACP or GEC/FEC team may have adverse effects on network connectivity. QLogic recommends that the adapter first be physically disconnected from the switch before disabling the device driver to avoid a network connection loss.
Question: What network protocols are load balanced when in a team? Answer: QLogic’s teaming software only supports IP/TCP/UDP traffic. All other traffic is forwarded to the primary adapter.
Question: How do I upgrade the intermediate driver (QLASP)? Answer: The intermediate driver cannot be upgraded through the Local Area Connection Properties. It must be upgraded using the QLogic Setup installer. Question: How can I determine the performance statistics on a virtual adapter (team)? Answer: In the QCC GUI, click the Statistics tab for the virtual adapter.
Question: Should both the backup server and client servers that are backed up be teamed?
Answer: Because the backup server is under the most data load, it should always be teamed for link aggregation and failover. A fully redundant network, however, requires that both the switches and the backup clients be teamed for fault tolerance and link aggregation.
As a QLogic adapter driver loads, Windows places a status code in the system event viewer. There may be up to two classes of entries for these event codes, depending on whether both drivers are loaded (one set for the base or miniport driver and one set for the intermediate or teaming driver).
Table 15-7. Base Driver Event Log Messages (Continued)
Message Number: ...
Severity: Error
Message: Failed to access configuration information.
Cause: The driver cannot access PCI configuration...
Corrective Action: For add-in adapters: reseat the adapter in...
Table 15-7. Base Driver Event Log Messages (Continued)
Message Number: ...
Severity: Informational
Message: Network controller configured for 1Gb full-duplex link.
Cause: The adapter has been manually configured for the selected line speed and duplex settings.
Corrective Action: No action is required.
Table 15-7. Base Driver Event Log Messages (Continued)
Message Number: ...
Severity: Error
Message: This driver does not support this device.
Cause: The driver does not recognize the installed...
Corrective Action: Upgrade to a driver version...
Intermediate Driver (Virtual Adapter/Team)

The intermediate driver is identified by source BLFM, regardless of the base driver revision. Table 15-8 lists the event log messages supported by the intermediate driver, explains the cause for the message, and provides the recommended action.
Table 15-8. Intermediate Driver Event Log Messages (Continued)
System Event Message Number: ...
Severity: Warning
Message: Network adapter is disconnected.
Cause: The physical adapter is not connected to the...
Corrective Action: Check that the network...
Table 15-8. Intermediate Driver Event Log Messages (Continued)
System Event Message Number: ...
Severity: Informational
Message: Network adapter is activated and is participating in network traffic.
Cause: A physical adapter has been added to or activated in a team.
Corrective Action: No action is required.
Table 15-9. VBD Event Log Messages (Continued)
Message Number: ...
Severity: Informational
Message: The network link is down. Check to make...
Cause: The adapter has lost its connection with its...
Corrective Action: Check that the network...
Table 15-9. VBD Event Log Messages (Continued)
Message Number: ...
Severity: Informational
Message: Network controller configured for 1Gb full-duplex link.
Cause: The adapter has been manually configured for the selected line speed and duplex settings.
Corrective Action: No action is required.
(called “Channel Bonding”), refer to your operating system documentation.

QLASP Overview

QLASP is the QLogic teaming software for the Windows family of operating systems. QLASP settings are configured through the QCC GUI. QLASP provides heterogeneous support for adapter teaming to include QLogic 8400/3400 Series adapters and QLogic-shipping Intel NIC adapters/LOMs.
16–Configuring Teaming in Windows Server
Load Balancing and Fault Tolerance

Teaming provides traffic load balancing and fault tolerance (redundant adapter operation when a network connection fails). When multiple Ethernet network adapters are installed in the same system, they can be grouped into teams, creating a virtual adapter.
Smart Load Balancing and Failover

Smart Load Balancing and Failover is the QLogic implementation of load balancing based on IP flow. This feature supports balancing IP traffic across multiple adapters (team members) in a bidirectional manner. In this type of team, all adapters in the team have separate MAC addresses.
Generic Trunking (FEC/GEC)/802.3ad-Draft Static

The Generic Trunking (FEC/GEC)/802.3ad-Draft Static type of team is very similar to the Link Aggregation (802.3ad) type of team in that all adapters in the team are configured to receive packets for the same MAC address.
Smart Load Balancing is a protocol-specific scheme. The level of support for IP, IPX, and NetBEUI protocols is listed in Table 16-1.

Table 16-1. Smart Load Balancing
Columns: Operating System; Protocol (IP, IPX, NetBEUI); Failover/Fallback — All QLogic; Failover/Fallback — Multivendor
Windows Server 2008...
Other protocol packets are sent and received through one primary interface only. Failover for non-IP traffic is supported only for QLogic network adapters. The Generic Trunking type of team requires the Ethernet switch to support some form of port trunking mode (for example, Cisco's Gigabit EtherChannel or another switch vendor's Link Aggregation mode).
Performing Diagnostics
Diagnostic Test Descriptions
Introduction

QLogic 8400/3400 Series User Diagnostics is an MS-DOS-based application that runs a series of diagnostic tests (see Table 17-2) on the QLogic 8400/3400 Series network adapters in your system. QLogic 8400/3400 Series User Diagnostics also allows you to update device firmware and to view and change settings for available adapter properties.
Table 17-1. uediag Command Options
uediag – Performs all tests on all QLogic 8400/3400 Series adapters in your system.
uediag -c <device#> – Specifies the adapter (device#) to test. Similar to -dev (for backward compatibility).
uediag -fnvm <raw_image> – ...
uediag -fump <ump_image> – Specifies the bin file to update UMP firmware.
uediag -help – Displays the QLogic 8400/3400 Series User Diagnostics (uediag) command options.
uediag -I <iteration#> – Specifies the number of iterations to run on the selected tests.
uediag -idmatch – Enables matching of VID, DID, SVID, and SSID from the image file with device IDs; used only with -fnvm <raw_image>.
uediag -T <groups/tests> – Enables certain groups/tests.
uediag -ver – Displays the version of QLogic 8400/3400 Series User Diagnostics (uediag) and all installed adapters.

Diagnostic Test Descriptions

The diagnostic tests are divided into four groups: Basic Functional Tests (Group A), Memory Tests (Group B), Block Tests (Group C), and Ethernet Traffic Tests (Group D).
17–User Diagnostics in DOS

Table 17-2. Diagnostic Tests (Continued)
(Columns: Test Number, Name, Description)
Verifies that a Message Signaled Interrupt (MSI) causes an MSI message to be DMA'd to host memory. A negative test is also performed to verify that when an MSI is masked, it does not write an MSI message to host memory.
Group B: Memory Tests (TXP Scratchpad, TPAT Scratchpad, ...)
The Group B tests verify all memory blocks of the QLogic 8400/3400 Series adapters by writing various data patterns (0x55aa55aa, 0xaa55aa55, walking zeros, walking ones, address, and so on) to each...
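The patterns named above behave as in this sketch, a software model of a pattern-based memory test rather than the adapter's actual firmware:

```python
def walking_ones(width: int = 32):
    """Yield patterns with a single 1 bit marching across the word."""
    for bit in range(width):
        yield 1 << bit

def walking_zeros(width: int = 32):
    """Yield patterns with a single 0 bit marching across the word."""
    all_ones = (1 << width) - 1
    for bit in range(width):
        yield all_ones ^ (1 << bit)

def check_block(memory: list, patterns) -> bool:
    """Write each pattern to every cell and read it back; a mismatch
    indicates a stuck, shorted, or coupled bit at that address."""
    for pattern in patterns:
        for addr in range(len(memory)):
            memory[addr] = pattern
            if memory[addr] != pattern:
                return False
    return True

scratchpad = [0] * 64   # stand-in for one adapter scratchpad block
ok = check_block(scratchpad, [0x55AA55AA, 0xAA55AA55])
ok = ok and check_block(scratchpad, walking_ones())
```

Fixed patterns such as 0x55aa55aa/0xaa55aa55 toggle adjacent bits, while the walking patterns isolate individual stuck bits; the address pattern (writing each cell's own address) additionally catches addressing faults.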
(identifying the TCP, IP, and UDP header data structures) and calculates the checksum/CRC. The TPAT block results are compared with the values expected by QLogic 8400/3400 Series User Diagnostics and any errors are displayed.
FIO Register – The Fast IO (FIO) test verifies the register interface that is exposed to the internal CPUs.
Enables MAC loopback mode in the adapter and transmits 5000 Layer 2 packets of various sizes. As the packets are received back by QLogic 8400/3400 Series User Diagnostics, they are checked for errors. Packets are returned through the MAC receive path and never reach the PHY.
MAC loopback mode and transmitting large TCP packets. As the packets are received back by QLogic 8400/3400 Series User Diagnostics, they are checked for proper segmentation (according to the selected MSS size) and any other errors. The adapter should not be connected to a network.
Troubleshooting
Hardware Diagnostics
Checking Port LEDs
Troubleshooting Checklist
Checking if Current Drivers are Loaded
Possible Problems and Solutions

Hardware Diagnostics

Loopback diagnostic tests are available for testing the adapter hardware. These tests provide access to the adapter internal/external diagnostics, where packet information is transmitted across the physical link (for instructions and information on running tests in an MS-DOS environment, see Chapter 17, User Diagnostics in DOS,...
18–Troubleshooting

Checking Port LEDs

Below are troubleshooting steps that may help correct the failure. Remove the failing device and reseat it in the slot, ensuring the card is firmly seated in the slot from front to back. Rerun the test. If the card still fails, replace it with a different card of the same model and run the test.
“Safety Precautions” on page

The following checklist provides recommended actions to take to resolve problems installing the QLogic 8400/3400 Series adapters or running them in your system.
Inspect all cables and connections. Verify that the cable connections at the network adapter and the switch are attached properly.
Linux

To verify that the bnx2.o driver is loaded properly, run:

lsmod | grep -i <module name>

If the driver is loaded, the output of this command shows the size of the driver in bytes and the number of adapters configured and their names.
Instead, you can view the logs to verify that the proper driver is loaded and will be active upon reboot:

dmesg | grep -i "QLogic" | grep -i "bnx2"

Possible Problems and Solutions

This section presents a list of possible problems and solutions for the components and categories:
Problem: A system containing an 802.3ad team causes a Netlogon service failure in the system event log and prevents the system from communicating with the domain controller during boot up.
Solution: Microsoft Knowledge Base Article 326152 (http://support.microsoft.com/kb/326152/en-us) indicates that Gigabit Ethernet adapters may experience problems with connectivity to a domain controller due to link fluctuation while the driver initializes and negotiates link with the network...
Problem: Routing does not work for 8400/3400 Series 10 GbE network adapters installed in Linux systems.
Solution: For 8400/3400 Series 10 GbE network adapters installed in systems with Linux kernels older than 2.6.26, disable TPA with either ethtool (if available) or with the driver parameter (see “disable_tpa”...
Solution: Enable iSCSI Crash Dump from the Advanced section of the QCC GUI Configuration tab.
Problem: The QLogic 8400/3400 Series adapters may not perform at an optimal level on some systems if they are added after the system has booted.
Solution: The system BIOS in some systems does not set the cache line size and the latency timer if the adapter is added after the system has booted.
Problem: The network adapter has shut down and an error message appears indicating that the fan on the network adapter has failed.
Solution: The network adapter was shut down to prevent permanent damage. Contact QLogic Support for assistance.
Adapter LEDs

For copper-wire Ethernet connections, the state of the network link and activity is indicated by the LEDs on the RJ-45 connector, as described in Table A-1. For fiber optic Ethernet connections and SFP+, the state of the network link and activity is indicated by a single LED located adjacent to the port connector, as described in Table A-2.