DCB Support; FCoE Connectivity and FIP Snooping; iSCSI Operation - Dell PowerEdge M IO Aggregator Command Reference Manual

MXL 10/40GbE Switch IO Module FTOS Command Reference Guide, FTOS 8.3.16.1

DCB Support

The Aggregator supports data center bridging (DCB) enhancements for data center networks to eliminate packet loss and to provision links with the required bandwidth.
The Aggregator provides zero-touch configuration for DCB and auto-configures DCBX port roles as follows:
• Server-facing ports are configured as auto-downstream interfaces.
• Uplink ports are configured as auto-upstream interfaces.
In operation, DCBX auto-configures uplink ports to match the DCB configuration in the top-of-rack (ToR) switches to which they connect.
The Aggregator supports DCB only in standalone mode; DCB is not supported in stacking
mode.
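As a quick check of zero-touch DCB operation, show commands along the following lines can typically be used to display the DCBX port role and the peer configuration learned on an uplink port. The exact syntax and the interface name 0/33 below are illustrative assumptions for this release; verify them against the DCB command chapter.
! Display the DCBX port role (auto-upstream or auto-downstream) and the peer DCB configuration
FTOS#show interfaces tengigabitethernet 0/33 dcbx detail
! Confirm that the ToR switch is visible as an LLDP/DCBX peer
FTOS#show lldp neighbors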

FCoE Connectivity and FIP Snooping

Many data centers use Fibre Channel (FC) in storage area networks (SANs). Fibre Channel
over Ethernet (FCoE) encapsulates Fibre Channel frames over Ethernet networks.
On an Aggregator, the internal ports support FCoE connectivity and connect to the converged network adapters (CNAs) in blade servers. FCoE allows Fibre Channel to use 10-Gigabit
Ethernet networks while preserving the Fibre Channel protocol.
The Aggregator also provides zero-touch configuration for FCoE. The Aggregator auto-configures to match the FCoE settings used in the ToR switches to which it connects through its uplink ports.
FCoE Initialization Protocol (FIP) snooping is configured automatically on an Aggregator. The auto-configured port channel (LAG 128) operates in FCoE forwarder (FCF) port mode.
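Because FIP snooping and LAG 128 are configured automatically, a verification sketch assuming the standard FTOS FIP snooping show commands are available on this release might look like the following.
! Display the FIP snooping status and per-VLAN configuration
FTOS#show fip-snooping config
! List the FCoE forwarders (FCFs) learned through the uplink port channel
FTOS#show fip-snooping fcf
! List the active FCoE sessions established between server CNAs and FCFs
FTOS#show fip-snooping sessions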

iSCSI Operation

Support for iSCSI traffic is turned on by default when the Aggregator powers up. No
configuration is required.
When the Aggregator powers up, it monitors known TCP ports for iSCSI storage devices on
all interfaces. When a session is detected, an entry is created and monitored as long as the
session is active.
The Aggregator also detects iSCSI storage devices on all interfaces and auto-configures to optimize performance. Performance optimizations, such as jumbo frame support, STP port-state fast, and disabling of storm control on interfaces connected to an iSCSI storage device, are applied automatically.
CLI configuration is necessary only when the configuration includes iSCSI storage devices that cannot be detected automatically or when non-default QoS handling is required.
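To see what iSCSI auto-configuration has detected, show commands similar to the following are typically available; treat the exact syntax as an assumption and confirm it in the iSCSI command chapter.
! Display the global iSCSI optimization status and the monitored TCP ports
FTOS#show iscsi
! List the iSCSI sessions currently being monitored
FTOS#show iscsi session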