Intel XL710-Q2 User Manual

Distributed Multi-Root PCI/Memory Architecture
Example 4: The number of available NUMA node CPUs is not sufficient for queue allocation. If your platform
has a processor that does not support an even power of 2 CPUs (for example, it supports 6 cores), then during
queue allocation, if the software runs out of CPUs on one socket, it will by default reduce the number of queues
to a power of 2 until allocation is achieved. For example, with a 6-core processor and only a single NUMA
node, the software will allocate only 4 FCoE queues. If there are multiple NUMA nodes, the NUMA node
count can be changed to a value greater than or equal to 2 in order to have all 8 queues created.
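
The reduction described in Example 4 can be illustrated with a short sketch. The function below is only a hypothetical model of the behavior, not the driver's actual allocation code; the names and the halving loop are assumptions used to show the arithmetic.

# Hypothetical illustration of the power-of-2 reduction described in Example 4.
# This is not the driver's allocation code; names and logic are assumptions.

def allocate_fcoe_queues(requested_queues, cores_per_node, numa_nodes):
    """Return the number of FCoE queues the allocator would settle on.

    requested_queues -- queues asked for (8 in the example)
    cores_per_node   -- physical cores available per NUMA node (6 in the example)
    numa_nodes       -- NUMA nodes the allocation may span (1 or more)
    """
    available_cpus = cores_per_node * numa_nodes
    queues = requested_queues
    # If the available CPUs cannot host the requested queues, halve the queue
    # count (staying at a power of 2 when starting from one) until it fits.
    while queues > available_cpus:
        queues //= 2
    return queues

# 6-core processor, single NUMA node: only 4 of the 8 requested queues fit.
print(allocate_fcoe_queues(8, 6, 1))   # -> 4
# Raising the NUMA node count to 2 makes 12 CPUs available, so all 8 queues fit.
print(allocate_fcoe_queues(8, 6, 2))   # -> 8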
Determining Active Queue Location
The user of these performance options will want to determine the affinity of FCoE queues to CPUs in order to
verify their actual effect on queue allocation. This is easily done by running a small-packet workload with an I/O
application such as IoMeter, which monitors the utilization of each CPU using the performance monitor built into
the operating system. The CPUs supporting the queue activity should stand out. They should be the first
non-hyperthreaded CPUs available on the processor unless the allocation has been specifically shifted via the
performance options discussed above.
To make the locality of the FCoE queues even more obvious, the application affinity can be assigned to an
isolated set of CPUs on the same or another processor socket. For example, the IoMeter application can be
set to run only on a limited number of hyperthreaded CPUs on any processor. If the performance options have
been set to direct queue allocation to a specific NUMA node, the application affinity can be set to a different
NUMA node. The FCoE queues should not move, and their activity should remain on those CPUs even though
the application's CPU activity moves to the other selected processor CPUs.
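
As a rough alternative to IoMeter's built-in monitoring, the same observation can be scripted. The sketch below assumes the third-party psutil package is installed, and the CPU numbers are hypothetical; it pins the calling workload process to an isolated CPU set and then samples per-CPU utilization so that the CPUs carrying the FCoE queue activity stand out.

# Minimal sketch: pin the workload to an isolated CPU set, then watch per-CPU
# utilization. Assumes the third-party psutil package; CPU indices are examples.
import os
import time

import psutil

# Pin this process (the I/O workload driver) to an isolated set of CPUs, for
# example hyperthreaded siblings on another socket, so it cannot mask the
# CPUs servicing the FCoE queues.
workload_cpus = [12, 13, 14, 15]           # hypothetical CPU numbers
psutil.Process(os.getpid()).cpu_affinity(workload_cpus)

# Sample per-CPU utilization once a second; the CPUs servicing the FCoE queues
# should stand out even though the application itself is confined elsewhere.
for _ in range(10):
    per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = [(cpu, load) for cpu, load in enumerate(per_cpu) if load > 20.0]
    print("busy CPUs:", busy)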
SR-IOV (Single Root I/O Virtualization)
SR-IOV lets a single network port appear as several virtual functions in a virtualized environment. If you
have an SR-IOV capable NIC, each port on that NIC can assign a virtual function to several guest partitions.
The virtual functions bypass the Virtual Machine Manager (VMM), allowing packet data to move directly to a
guest partition's memory, resulting in higher throughput and lower CPU utilization. SR-IOV support was added
in Microsoft Windows Server 2012. See your operating system documentation for system requirements.
For devices that support it, SR-IOV is enabled in the host partition on the adapter's Device Manager property
sheet, under Virtualization on the Advanced tab. Some devices may need to have SR-IOV enabled in a
preboot environment.
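
Beyond the Device Manager property sheet, the SR-IOV state of a port can also be inspected from the host using the Windows PowerShell NetAdapter cmdlets that shipped alongside SR-IOV support in Windows Server 2012. The sketch below is a minimal illustration only; the adapter name is a placeholder, and enabling SR-IOV requires an elevated prompt.

# Minimal sketch: query (and optionally enable) SR-IOV on a port by invoking the
# Windows PowerShell NetAdapter cmdlets. The adapter name is a placeholder.
import subprocess

ADAPTER = "Ethernet 2"   # hypothetical adapter name; use the name from Device Manager

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its textual output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Report whether the port supports SR-IOV and whether it is currently enabled.
print(run_ps(f"Get-NetAdapterSriov -Name '{ADAPTER}'"))

# Enabling SR-IOV here corresponds to the Virtualization setting on the Advanced
# tab of the adapter's property sheet (must be run as Administrator).
# run_ps(f"Enable-NetAdapterSriov -Name '{ADAPTER}'")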
