H3C ETH682i User Manual

Mezz network adapter



ETH682i Mezz Network Adapter
User Guide
New H3C Technologies Co., Ltd.
https://www.h3c.com/en/
Document version: 6W100-20230218



Summary of Contents for H3C ETH682i

  • Page 2 The information in this document is subject to change without notice. All contents in this document, including statements, information, and recommendations, are believed to be accurate, but they are presented without warranty of any kind, express or implied. H3C shall not be liable for technical or editorial errors or omissions contained herein.
  • Page 3 Preface This user guide describes the specifications, supported features, and configuration methods for ETH682i Mezz network adapters. This preface includes the following topics about the documentation: • Audience. • Conventions. • Documentation feedback. Audience This documentation is intended for: •...
  • Page 4 Symbols Convention Description An alert that calls attention to important information that if not understood or followed WARNING! can result in personal injury. An alert that calls attention to important information that if not understood or followed CAUTION: can result in data loss, data corruption, or damage to hardware or software. An alert that calls attention to essential information.
  • Page 5 Documentation feedback You can e-mail your comments about product documentation to info@h3c.com. We appreciate your comments.
  • Page 6: Table Of Contents

    Contents
    Safety information
      General operating safety
      Electrical safety
      ESD prevention
    Configuring the network adapter
      Viewing mapping relations between network adapter ports and ICM internal ports
      Viewing network adapter port information in the operating system
        Linux operating systems ...
  • Page 7: Safety Information

    To avoid bodily injury or damage to the device, follow these guidelines when you operate the network adapter: • Only H3C authorized or professional engineers are allowed to install or replace the network adapter. • Before installing or replacing the network adapter, stop all services, power off the blade server, and then remove the blade server.
  • Page 8: Configuring The Network Adapter

    PCI devices can be recognized. Figure 1 Viewing PCI device information Some network adapters (for example, ETH681i and ETH682i) use chips of the same model. If such network adapters are used, first execute the lspci -vvvnn -s BUS | grep Product command to identify the exact model of the target network adapter.
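The model check above can be reproduced as a short shell sketch. The lspci invocation needs real hardware, so the snippet below runs the same extraction over a captured sample line; the product string is an illustrative assumption, not output from a real ETH682i.

```shell
# Hedged sketch of the adapter model check described above. On a live system
# you would run (BUS is the PCI address reported by lspci, e.g. 3b:00.0):
#   lspci -vvvnn -s BUS | grep Product
# Here the same grep/sed extraction runs over a captured sample line so the
# logic is reproducible anywhere; the product string is illustrative.
sample='Product Name: H3C ETH682i Mezz Card'
model=$(printf '%s\n' "$sample" | grep 'Product' | sed 's/.*Product Name: //')
printf '%s\n' "$model"
```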
  • Page 9: Windows Operating Systems

    For more information, see "Installing and removing a network adapter driver in the operating system." Figure 4 Viewing information about network adapter ports Windows operating systems Open Network Connections and verify that the ETH682i network adapters can be displayed correctly.
  • Page 10: Installing And Removing A Network Adapter Driver In The Operating System

    Figure 5 Viewing network adapters If the network adapter is not displayed, open Device Manager, and check whether an Ethernet controller exists in the Network adapters > Other devices window. If an Ethernet controller exists, an error has occurred on the driver. Install the most recent ...
  • Page 11 Figure 7 Viewing the driver version If the driver is an .rpm file, you can install the driver directly. a. Copy the RPM driver file (for example, kmod-qlgc-fastlinq-8.38.2.0-1.rhel7u5.x86_64.rpm) to the operating system. b. Execute the rpm -ivh file_name.rpm command to install the driver. Figure 8 Installing the driver c.
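The RPM install flow in steps a and b can be sketched as follows. The file name is the example given in the guide; the rpm -ivh step needs root and the real package, so it is shown commented out, and the snippet instead performs a quick sanity check on the driver version embedded in the file name.

```shell
# Hedged sketch of the RPM driver install flow described above.
rpm_file=kmod-qlgc-fastlinq-8.38.2.0-1.rhel7u5.x86_64.rpm

# Step b from the guide (requires root and the actual .rpm file):
#   rpm -ivh "$rpm_file"
# Afterwards, "rpm -qa | grep fastlinq" should list the installed package.

# Extract the driver version embedded in the file name as a sanity check.
version=$(printf '%s' "$rpm_file" | sed 's/^kmod-qlgc-fastlinq-\([0-9.]*\)-.*/\1/')
printf 'driver version: %s\n' "$version"
```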
  • Page 12: Windows Operating Systems

    c. Execute the make install command to compile the file and install the driver. Figure 10 Compiling the file and installing the driver d. After the installation finishes, restart the operating system or execute the rmmod qede and modprobe qede commands to have the driver take effect. To remove the .rpm file, execute the command.
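The reload step above can be expressed as a dry run. RUN=echo prints each command instead of executing it; clear RUN and run as root on a real server. The rpm -e removal hint is an assumption based on standard RPM usage, since the guide elides the exact removal command.

```shell
# Hedged dry run of the qede driver reload step from the guide.
RUN=echo

$RUN rmmod qede       # unload the old driver module
$RUN modprobe qede    # load the newly installed driver

# Removing an .rpm-installed driver is typically done with "rpm -e <package>"
# (an assumption; the guide elides the exact removal command).
```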
  • Page 13 Figure 12 Device Manager Install the driver. a. Obtain the driver from the H3C official website. b. Double click the driver and then click Next >. Figure 13 Installing the driver c. After the installation finishes, restart the operating system to have the driver take effect.
  • Page 14 Figure 14 Verifying the driver version Remove the driver. As a best practice, remove the driver from the Control Panel > Programs and Features window. If no drivers are displayed in the window, remove the driver as follows: a. Click the Start icon to enter the menu page. b.
  • Page 15: Configuring Pxe

    Configuring PXE This section describes how to enable PXE on a network adapter in the BIOS. To use the PXE feature, you must set up a PXE server. You can obtain the setup method for a PXE server from the Internet. PXE boot is supported only in UEFI boot mode.
  • Page 16: Configuring Iscsi

    b. Set Boot Mode to PXE. Figure 19 Setting Boot Mode to PXE Press F4 to save the configuration. The server restarts automatically. During startup, press F12 at the POST phase to boot the server from PXE. Configuring iSCSI The iSCSI feature must cooperate with a remote network storage device. The configuration methods for network storage devices vary by device.
  • Page 17 Figure 20 Selecting the network adapter port to be configured Select Port Level Configuration. Figure 21 Selecting Port Level Configuration Set Boot Mode to iSCSI (SW) and iSCSI Offload to Enabled. Save the configuration and restart the server.
  • Page 18 Figure 22 Configuring iSCSI Enter the BIOS. Click the Advanced tab and select iSCSI Configuration. Figure 23 Selecting iSCSI Configuration Configure the name of the iSCSI initiator. Select Add an Attempt, and then select the MAC address of the network adapter port. For how to identify the network adapter port, see "Viewing network adapter port information in the operating...
  • Page 19 Figure 24 Selecting Add an Attempt Figure 25 Selecting the MAC address of the network adapter port Set iSCSI Mode to Enabled. Configure iSCSI parameters and then select Save.
  • Page 20 Figure 26 Configuring iSCSI Select Save Changes and Reset. Figure 27 Saving the configuration and restarting the server Install the operating system (for example, RHEL 7.5). Specify the network disk as the system disk. a. Press e to edit the setup parameters.
  • Page 21 Figure 28 Pressing e to edit the setup parameters b. Enter the ip=ibft string after quiet, and then press Ctrl-x. Figure 29 Adding the ip=ibft string c. Click INSTALLATION DESTINATION.
  • Page 22 Figure 30 Clicking INSTALLATION DESTINATION d. On the page that opens, click Add a disk… to add a network disk. Figure 31 Adding a network disk e. Select the target network disk, and click Done at the upper left corner. The network disk is now specified as the system disk.
  • Page 23: Configuring Iscsi San

    iSCSI boot configuration has finished and you can continue to install the operating system. Configuring iSCSI SAN This document uses Windows Server 2016 and RHEL 7.5 as examples to describe how to configure iSCSI SAN for the network adapter. Windows operating systems Assign an IP address to the network interface on the network adapter that connects to the iSCSI network storage device.
  • Page 24 Figure 35 Configuring the name of the iSCSI initiator c. Click the Discovery tab and click Discover Portals to add the address information about the peer device (network storage device). Figure 36 Adding the address information about the peer device d.
  • Page 25 Figure 37 Connecting the target Add the network disk. Before adding the network disk, make sure the related configuration has been completed on the network storage device. a. Open Control Panel, and then select Hardware > Device Manager > Network adapters. Right click the network adapter port, and then select Scan for hardware changes.
  • Page 26 Figure 39 Disk Management c. Right click the disk name, and then select Online. Figure 40 Bringing the disk online d. Right click the disk name, and then select Initialize Disk.
  • Page 27 Figure 41 Initializing the disk e. Right click the Unallocated area to assign a volume to the disk as prompted. Figure 42 Assigning a volume to the disk...
  • Page 28 Figure 43 Volume assignment completed Verify that the new volume has been added. Figure 44 Verifying the new volume Red Hat systems Before configuring iSCSI SAN, make sure the iSCSI client software package has been installed on the server. To configure iSCSI SAN in RHEL 7.5: Assign an IP address to the network interface which connects to the iSCSI network storage device.
  • Page 29 Figure 45 Configuring the local IP address Execute the cat initiatorname.iscsi command in the /etc/iscsi directory to view the IQN of the local iSCSI initiator. If no IQN is specified, use the command to specify one manually. Figure 46 Configuring the name of the local iSCSI initiator Execute the iscsiadm -m discovery -t st -p target-ip command to probe the...
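The discovery and login sequence can be sketched as a dry run; the portal address 192.168.10.20 is an illustrative assumption, and RUN=echo prints the commands instead of executing them (clear RUN on a real server).

```shell
# Hedged dry run of the iSCSI initiator steps described above.
RUN=echo
TARGET_IP=192.168.10.20   # illustrative portal address

$RUN cat /etc/iscsi/initiatorname.iscsi            # view the local IQN
$RUN iscsiadm -m discovery -t st -p "$TARGET_IP"   # probe targets on the portal
$RUN iscsiadm -m node -p "$TARGET_IP" --login      # log in to a discovered target
```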
  • Page 30: Configuring Fcoe

    Figure 49 Viewing the newly-added network disks NOTE: In this example, two volumes have been created on the storage server so that two network disks are added. Execute the mkfs command to format the newly-added disks. Figure 50 Formatting a newly-added disk Execute the command to mount the disk.
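The format-and-mount step can be sketched as a dry run. The device name /dev/sdb, the ext4 file system, and the mount point are illustrative assumptions; check the real device name with lsblk first, because mkfs destroys the data on the target disk.

```shell
# Hedged dry run of formatting and mounting a newly added iSCSI disk.
RUN=echo
DISK=/dev/sdb            # illustrative; confirm with lsblk
MOUNTPOINT=/mnt/iscsi0   # illustrative mount point

$RUN mkfs.ext4 "$DISK"             # format the new disk (destroys its data)
$RUN mkdir -p "$MOUNTPOINT"
$RUN mount "$DISK" "$MOUNTPOINT"   # add an /etc/fstab entry to persist
```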
  • Page 31 Figure 52 Selecting the network adapter port to be configured Select Port Level Configuration. Figure 53 Selecting Port Level Configuration Set Boot Mode to FCoE and FCoE Offload to Enabled.
  • Page 32 Figure 54 Configuring FCoE Save the configuration and restart the server. Enter the BIOS Setup utility and select the network adapter port. Select FCoE Configuration. If the external FCoE link is normal, you can view the scanned peer WWPN number on the screen as shown in Figure 56. Figure 55 Selecting FCoE Configuration...
  • Page 33 Figure 56 Scanned peer WWPN number Save the configuration and restart the server. Enter the operating system setup page. Install the operating system (for example, RHEL 7.5) and specify the network disk as the system disk. a. Select Install Red Hat Enterprise Linux 7.5. Figure 57 Entering the operating system setup page b.
  • Page 34 Figure 58 Clicking INSTALLATION DESTINATION On the page that opens, you can view the scanned storage disks. Figure 59 The scanned storage disks d. If no storage disks are scanned, click Add a disk to add a network disk. On the page that opens, select the target network disk, and then click Done in the upper left corner.
  • Page 35: Configuring Fcoe San

    Configuring FCoE SAN This document uses Windows Server 2016, RHEL 7.5, CAS E0706, and VMware ESXi 6.7 as examples to describe how to configure FCoE SAN for the network adapter. Windows operating systems Configure FCoE on the FCoE storage device and ICMs and make sure the FCoE link is unblocked.
  • Page 36 Right click the disk name and select Online. Figure 63 Making the disk online Right click the disk name and select Initialize Disk. Figure 64 Initializing the disk Right click the Unallocated area and assign a volume to the disk as prompted.
  • Page 37 Figure 65 Assigning a volume to the disk Volume assignment has finished. Figure 66 Volume assignment completed Verify that the new volume has been added.
  • Page 38 Configure FCoE on the FCoE storage device and ICMs and make sure the FCoE link is unblocked. For more information about configuring FCoE on ICMs, see the command references and configuration guides for ICMs and H3C UniServer B16000 Blade Server Configuration Examples.
  • Page 39 Figure 70 Creating and copying a configuration file for the FCoE port Execute the vi cfg-ethM command to edit and save the configuration file. Make sure the value of the FCOE_ENABLE field is yes and the value of the DCB_REQUIRED field is no. Figure 71 Editing the configuration file Execute the command to set...
  • Page 40 Configure FCoE on the FCoE storage device and ICMs and make sure the FCoE link is unblocked. For more information about configuring FCoE on ICMs, see the command references and configuration guides for ICMs and H3C UniServer B16000 Blade Server Configuration Examples.
  • Page 41 Figure 76 Selecting Local Command Shell If you access the operating system through remote login (for example, SSH), connect to the CLI of the operating system. Execute the service fcoe start and service lldpad start commands to enable the FCoE and LLDP services, respectively. Figure 77 Enabling the FCoE and LLDP services Execute the commands to verify...
  • Page 42 Figure 79 Creating and copying a configuration file for the FCoE port Execute the vi cfg-ethM command to edit and save the configuration file. Make sure the value of the FCOE_ENABLE field is yes and the value of the DCB_REQUIRED field is no. Figure 80 Editing the configuration file Execute the command to set...
  • Page 43 Figure 83 Verifying that a subinterface for ethM has been created 10. Execute the lsblk command to view the newly-added network disk. Before viewing the newly-added network disk, make sure the related configuration has been finished on the network storage device. Figure 84 Viewing the newly-added network disk 11.
  • Page 44: Configuring Npar

    Execute the esxcli fcoe nic enable -n vmnicX command. The vmnicX argument represents the port name in a VMware system. Add storage devices on the VMware Web interface. Before adding storage devices on the VMware Web interface, make sure the related configuration has been finished on the peer network storage device. a.
  • Page 45 Figure 88 Setting iSCSI Offload to Disabled NOTE: Each physical port on the network adapter supports either iSCSI Offload or FCoE Offload in NPAR mode. To enable the NPAR mode, disable a minimum of one offload feature. Repeat the procedure to configure the other port on the network adapter. Save the configuration and restart the server.
  • Page 46 Figure 90 Network adapter configuration page Configure PF parameters. Figure 91 Configuring PF parameters Save the configuration and restart the server.
  • Page 47: Configuring Sr-Iov

    Configuring SR-IOV Enter the BIOS Setup utility. Select Advanced > PCI Subsystem Settings, and then press Enter. Figure 92 Advanced screen Select SR-IOV Support and set it to Enabled. Press ESC until you return to the BIOS Setup main screen. Figure 93 Setting SR-IOV Support to Enabled Select Socket Configuration >...
  • Page 48 Figure 96 Enabling SR-IOV During startup, press E. Press the arrow keys to turn pages. Add intel_iommu=on to the specified position to enable IOMMU. Press Ctrl-x to continue to start the server. Figure 97 Enabling IOMMU After you enter the operating system, execute the dmesg | grep IOMMU command to verify that IOMMU is enabled.
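Adding intel_iommu=on at boot time, as described above, lasts only for that startup. A common way to make it persistent on RHEL-family systems is to add it to GRUB_CMDLINE_LINUX; the sketch below rehearses that edit on a scratch copy of the file, so the real file path and the grub2-mkconfig step appear only as comments.

```shell
# Hedged sketch: persistently enabling IOMMU on a RHEL-family system.
# This writes to a scratch copy so the sketch is safe to run anywhere;
# the sample kernel command line is an illustrative assumption.
GRUB_CFG=/tmp/grub.default.sample    # stand-in for /etc/default/grub

cat > "$GRUB_CFG" <<'EOF'
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"
EOF

# Append intel_iommu=on inside the quoted kernel command line.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 intel_iommu=on"/' "$GRUB_CFG"
grep GRUB_CMDLINE_LINUX "$GRUB_CFG"

# On a real server, edit /etc/default/grub the same way, then regenerate the
# boot configuration and reboot:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```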
  • Page 49 Figure 99 Assigning VFs to a PF port 10. Execute the virt-manager command to run the VM manager. Select File > New Virtual Machine to create a VM. Figure 100 Creating a VM 11. On the New Virtual Machine page, add a virtual NIC as instructed by the callouts in Figure 101.
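The guide does not spell out the VF-creation command behind Figure 99; the standard Linux mechanism is the sriov_numvfs sysfs attribute, sketched below as a dry run. The port name eth0 and the VF count are illustrative assumptions.

```shell
# Hedged dry run: creating VFs on a PF port through the generic Linux sysfs
# interface (an assumption; the guide elides the exact command).
RUN=echo
PF=eth0       # illustrative PF port name
NUM_VFS=4     # illustrative VF count

$RUN sh -c "echo $NUM_VFS > /sys/class/net/$PF/device/sriov_numvfs"
# Verify afterwards with: lspci | grep -i "virtual function"
```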
  • Page 50: Configuring Advanced Features

    Figure 101 Adding a virtual NIC 12. Install the vNIC driver and execute the ifconfig ethVF hw ether xx:xx:xx:xx:xx:xx command to configure a MAC address for the vNIC. The ethVF argument represents the virtual NIC name. The xx:xx:xx:xx:xx:xx argument represents the MAC address. Configuring advanced features Configuring VLAN (802.1Q VLAN) This section uses RHEL 7.5 as an example.
  • Page 51: Configuring Bonding (Linux)

    VLAN interface state to UP, respectively. The ipaddr/mask argument represents the IP address and mask of the VLAN interface. The brdaddr argument represents the broadcast address. The id in ethX.id represents the VLAN ID. To delete a VLAN interface, execute the ip link set dev ethX.id down and ip link commands.
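The VLAN commands described above can be collected into one dry-run sketch; eth0, VLAN ID 100, and the addresses are illustrative assumptions, and RUN=echo prints each command instead of executing it (clear RUN and run as root on a real server).

```shell
# Hedged dry run of the 802.1Q VLAN configuration this section describes.
RUN=echo
ETH=eth0   # illustrative port name
VID=100    # illustrative VLAN ID

$RUN ip link add link $ETH name $ETH.$VID type vlan id $VID     # create VLAN interface
$RUN ip addr add 192.168.100.10/24 brd 192.168.100.255 dev $ETH.$VID
$RUN ip link set dev $ETH.$VID up

# To delete the VLAN interface again:
$RUN ip link set dev $ETH.$VID down
$RUN ip link delete $ETH.$VID
```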
  • Page 52 For other slave interfaces to be added to bond0, repeat this step. Figure 105 Editing the configuration file for a slave interface Execute the service network restart command to restart the network service and have bond0 take effect. Figure 106 Restarting the network service Execute the cat /proc/net/bonding/bond0 command to view information about bond0...
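A minimal pair of configuration files matching the bonding steps above might look like the sketch below, which writes them under /tmp so it is safe to run anywhere. On a real RHEL 7.5 server the files belong in /etc/sysconfig/network-scripts/; bond0, eth0, mode 6, and the addresses are illustrative assumptions.

```shell
# Hedged sketch of RHEL-style bonding configuration files.
DIR=/tmp/network-scripts-sample   # stand-in for /etc/sysconfig/network-scripts
mkdir -p "$DIR"

# Master interface: bond0 in adaptive load balancing (mode 6).
cat > "$DIR/ifcfg-bond0" <<'EOF'
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=6 miimon=100"
EOF

# One slave interface; repeat for each port to be added to bond0.
cat > "$DIR/ifcfg-eth0" <<'EOF'
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
EOF

ls "$DIR"
# Apply with: service network restart, then check /proc/net/bonding/bond0
```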
  • Page 53: Configuring Teaming (Windows)

    Figure 109 Viewing information about the network adapter (2) Configuring teaming (Windows) Open Server Manager, and then select Local Server > NIC Teaming > Disabled to enter the NIC Teaming page. Figure 110 Entering the NIC Teaming page Select TASKS > New Team to create a team.
  • Page 54 Figure 111 Creating a team Configure the team name and select the network adapters to be added to the team. Select Additional properties, configure the properties, and then click OK. Team creation in Switch Independent mode takes a long time. Figure 112 Configuring a new team After team creation finishes, you can view the network adapter 111 on the Network Connections page.
  • Page 55: Configuring Tcp Offloading

    Figure 113 Viewing the new network adapter Configuring TCP offloading This section uses RHEL 7.5 as an example. To configure TCP offloading in RHEL 7.5: Execute the ethtool -k ethX command to view the support and enabling state for the offload features.
  • Page 56 Execute the ethtool -K ethX feature on/off command to enable or disable an offload feature. The ethX argument represents the port name of the network adapter. The feature argument represents the offload feature name. The values for the feature argument include tso, lso, lro, gso, and gro.
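The ethtool usage above can be sketched as a dry run, followed by a reproducible parse of a captured ethtool -k sample; eth0 and the sample feature states are illustrative assumptions.

```shell
# Hedged dry run of the offload configuration commands this section describes.
RUN=echo
ETH=eth0   # illustrative port name

$RUN ethtool -k $ETH          # list offload feature support and state
$RUN ethtool -K $ETH gro on   # enable generic receive offload
$RUN ethtool -K $ETH tso off  # disable TCP segmentation offload

# Parsing a captured "ethtool -k" sample keeps the check reproducible here.
sample='tcp-segmentation-offload: on
generic-receive-offload: off'
printf '%s\n' "$sample" | awk -F': ' '$2=="off" {print $1 " is disabled"}'
```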
  • Page 57: Appendix A Specifications And Features

    Appendix A Specifications and features The ETH682i Mezz network adapter (product model: NIC-ETH682i-Mb-2*25G) is a CNA module that provides two 25-GE ports, each of which supports FCoE and FCoE boot. It can be applied to the B16000 blade server chassis to provide network interfaces connecting blade servers to ICMs. The network adapter exchanges data with blade servers by using PCIe 3.0 x8 channels and uses the two...
  • Page 58: Technical Specifications

    Full duplex Standards: 802.1Qbb, 802.1Qaz, 802.1Qau, 802.1Qbg, 802.1Qbh, 802.3ad, 802.1BR, 802.1AS, 802.1p/Q Technical specifications Table 2 ETH682i Mezz network adapter technical specifications Category Item Specifications Physical Dimensions (H × W × D) 25.05 × 61.60 × 95.00 mm (0.99 × 2.43 × 3.74 in) ...
  • Page 59: Feature Description

    Feature: Supported
    TCP/IP Stateless Offloading: √
    TCP/IP Offload Engine (TOE): √
    Wake-on-LAN: ×
    RDMA: √
    NPAR: √
    NCSI: ×
    NIC bonding: √
    LLDP: √
    Remote boot:
      PXE boot: √
      FCoE boot: √
      iSCSI boot: √
    NOTE: • √ indicates that the feature is not available for VMware ESXi. • ...
  • Page 60 Each PCIe device can contain multiple PFs. A PF is a PCIe partition which has a complete configuration space and can be found, managed, and operated like a PCIe device. By using a PF, you can configure or control the PCIe device and move data in or out of the device. The software regards a PF as an independent PCIe device so that multiple devices can be integrated in the same chip.
  • Page 61 • mode=6, adaptive load balancing (balance-alb)—Does not require switches. This mode integrates the balance-tlb mode and load balancing of IPv4 packet receiving. It is realized by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local device and changes the source MAC address into a unique MAC address of a backup device in bonding, allowing different peers to communicate with different MAC addresses.
  • Page 62 • Generic segmentation offload (GSO) and generic receive offload (GRO)—Detects features supported by the NIC automatically. If the NIC supports fragmentation, the system sends TCP fragments to the NIC directly. If the network adapter does not support fragmentation, the system fragments the packets first, and then sends the fragments to the NIC. RDMA RDMA is a remote direct memory access technology, aiming to deal with the data processing delay on the server during network transmission.
  • Page 63: Appendix B Hardware And Software Compatibility

    Compatible blade servers
    Table 4 Compatible blade servers
    Blade server model | Blade server type | Applicable network adapter slots | Installation positions
    H3C UniServer B5700 G3 | 2-processor half-width | Mezz1, Mezz2, Mezz3 | Figure 117
    H3C UniServer B5800 G3 | 2-processor ... | Mezz1, Mezz2, ... | Figure 118
    ...
  • Page 64: Compatible Icms

    Figure 118 Network adapter installation positions on a 2-processor full-width blade server Figure 119 Network adapter installation positions on a 4-processor full-width blade server Compatible ICMs Network adapters and ICM compatibility The network adapter supports the following ICMs:...
  • Page 65: Network Adapter And Icm Interconnection

    • H3C UniServer BT616E • H3C UniServer BT1004E • H3C UniServer BX1010E • H3C UniServer BX1020EF Network adapter and ICM interconnection Network adapters connect to ICMs through the mid plane. The mapping relations between a network adapter and ICMs depend on the blade server on which the network adapter resides. For installation...
  • Page 66 Figure 121 Network adapter and ICM mapping relations (4-processor full-width blade server) LOM P1 LOM P2 Embedded Mezz1 Mid-plane Mezz2 Mezz3 Mezz4 Mezz5 Mezz6 Blade Figure 122 ICM slots...
  • Page 67: Networking Applications

    Networking applications As shown in Figure 123, the network adapters are connected to the ICMs. Each internal port of the ICMs supports 25GE service applications, and the external ports are connected to the Internet to provide Internet access for the blade server on which the network adapter resides. Figure 123 Mezzanine network and ICM interconnection
  • Page 68: Appendix C Acronyms

    Appendix C Acronyms
    Acronym: Full name
    ARP: Address Resolution Protocol
    CNA: Converged Network Adapters
    FC: Fiber Channel
    FCoE: Fiber Channel Over Ethernet
    iSCSI: Internet Small Computer System Interface
    LACP: Link Aggregation Control Protocol
    NCSI: Network Controller Sideband Interface
    NPAR: NIC Partitioning
    PCIe: Peripheral Component Interconnect Express
    PF: Physical Function
    PXE: Preboot Execute Environment
    RDMA: ...
