Intel Ethernet X520 10GbE Dual Port KX4 Mezz User Manual page 93

Parameter Name                   Range/Settings    Default
LLIPort                          0 - 65535         0 (disabled)
LLIPush                          0 - 1             0 (disabled)
LLISize                          0 - 1500          0 (disabled)
LLIEType                         0 - x8FFF         0 (disabled)
LLIVLANP                         0 - 7             0 (disabled)
Flow Control                     (see description)
Intel® Ethernet Flow Director    (see description)
Low Latency Interrupts allow for immediate generation of an interrupt upon processing receive packets that match certain criteria, as set by the parameters described below. LLI parameters are not enabled when Legacy interrupts are used. You must be using MSI or MSI-X (see cat /proc/interrupts) to successfully use LLI.
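To confirm that the interface is using MSI or MSI-X rather than Legacy interrupts, you can inspect /proc/interrupts as noted above. A minimal check (the interface name eth0 is an assumed placeholder):

```shell
# List the interrupt lines assigned to the interface; MSI/MSI-X
# vectors appear as "PCI-MSI" or "PCI-MSI-X" in the type column,
# while Legacy interrupts show as "IO-APIC". Substitute your
# actual interface name for eth0.
grep eth0 /proc/interrupts
```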
LLI is configured with the LLIPort command-line parameter, which specifies which TCP port should generate Low Latency Interrupts. For example, using LLIPort=80 would cause the board to generate an immediate interrupt upon receipt of any packet sent to TCP port 80 on the local machine.
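As a concrete illustration, the LLIPort parameter is passed when the ixgbe module is loaded. A sketch, assuming root privileges and that the driver can be safely reloaded (port 80 is the example from the text):

```shell
# Unload the driver, then reload it with LLI enabled for TCP port 80.
# Out-of-tree ixgbe releases typically accept one comma-separated
# value per port for multi-port adapters (e.g. LLIPort=80,80).
modprobe -r ixgbe
modprobe ixgbe LLIPort=80
```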
WARNING: Enabling LLI can result in an excessive number of interrupts/second that may cause problems with the system and in some cases may cause a kernel panic.
LLIPush can be set to be enabled or disabled (default). It is
most effective in an environment with many small transactions.
NOTE: Enabling LLIPush may allow a denial of service
attack.
LLISize causes an immediate interrupt if the board receives a
packet smaller than the specified size.
LLIEType: Low Latency Interrupt Ethernet Protocol Type.
LLIVLANP: Low Latency Interrupt on VLAN Priority Threshold.
Flow Control is enabled by default. If you want to disable flow control on a flow control capable link partner, use ethtool:
ethtool -A eth? autoneg off rx off tx off
NOTE: For 82598 backplane cards entering 1 Gbps mode, the flow control default behavior is changed to off. Flow control in 1 Gbps mode on these devices can lead to transmit hangs.
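The current pause settings can be inspected before and after the change with ethtool's -a option. A brief sketch (eth0 is an assumed interface name):

```shell
# Show the current flow control (pause) settings for the interface.
ethtool -a eth0

# Disable flow control, as described above.
ethtool -A eth0 autoneg off rx off tx off
```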
NOTE: Flow Director parameters are only supported on kernel versions 2.6.30 or later.
Intel® Ethernet Flow Director supports advanced filters that direct receive packets, by their flows, to different queues, and enables tight control over routing a flow in the platform. It matches flows to CPU cores for flow affinity and supports multiple parameters for flexible flow classification and load balancing.
Flow Director is enabled only if the kernel is multiple-TX-queue capable. An included script (set_irq_affinity.sh) automates setting the IRQ-to-CPU affinity. To verify that the driver is using Flow Director, look at the counters in ethtool: fdir_miss and fdir_match.
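One way to check those counters is through ethtool's statistics output. A sketch, assuming the interface is named eth0 and the script is in the current directory (its installed location may vary by driver package):

```shell
# Pin the driver's IRQs to CPU cores using the bundled script.
./set_irq_affinity.sh eth0

# Watch the Flow Director counters; increasing fdir_match values
# indicate packets are being steered by Flow Director.
ethtool -S eth0 | grep fdir
```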
