Post-CFHP Queuing and Scheduling - Alcatel-Lucent 7950 Quality of Service Manual

Post-CFHP Queuing and Scheduling

Although CFHP enforces aggregate rate limiting while maintaining sensitivity to strict priority and
fair access to bandwidth within a priority, CFHP output packets still require queuing and
scheduling to provide access to the switch fabric or to an egress port.
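The scheduling behavior described here (strict priority between priority levels, with fair access among queues that share a level) can be sketched as follows. This is an illustrative model only, not SR OS code; the queue structure and round-robin fairness mechanism are assumptions for the sketch.

```python
from collections import deque

class PriorityScheduler:
    """Illustrative sketch: serve queues in strict priority order,
    round-robin among queues that share a priority level."""

    def __init__(self, num_priorities):
        # One list of FIFO queues per priority level (0 = highest).
        self.levels = [[] for _ in range(num_priorities)]
        self.next_q = [0] * num_priorities  # round-robin pointer per level

    def add_queue(self, priority):
        q = deque()
        self.levels[priority].append(q)
        return q

    def dequeue(self):
        # Strict priority: always drain the highest non-empty level first.
        for prio, queues in enumerate(self.levels):
            n = len(queues)
            for i in range(n):
                idx = (self.next_q[prio] + i) % n
                if queues[idx]:
                    # Advance the pointer so same-level queues share fairly.
                    self.next_q[prio] = (idx + 1) % n
                    return queues[idx].popleft()
        return None  # all queues empty
```

A higher-priority packet is always served before any lower-priority packet, regardless of arrival order, which mirrors the strict-priority sensitivity the text attributes to CFHP.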
Ingress CFHP Queuing
At ingress, CFHP output traffic is automatically mapped to a unicast or multipoint queue in order
to reach the proper switch fabric destinations. This automatic queuing function is managed
through a shared-queue policy named policer-output-queues. For modifying parameters in this
shared-queue policy, refer to Shared-Queue QoS Policy Command Reference on page 429.
The unicast queues in the policy are automatically created on each destination switch fabric tap
and ingress CFHP unicast packets automatically map to one of the queues based on forwarding
class and destination tap. The multipoint queues within the policy are created on the multicast
paths; 16 multicast paths are supported by default, with 28 on 7950 XRS systems and on 7750
SR-12e systems configured with "tools perform system set-fabric-speed fabric-speed-b".
The multicast paths represent an available multicast switch fabric path - the number of each being
controlled using the command:
CLI Syntax: configure mcast-management bandwidth-policy policy-name t2-paths
secondary-path number-paths number-of-paths [dual-sfm number-of-paths]
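The automatic unicast mapping described above (one queue per forwarding class on each destination switch fabric tap) can be sketched as below. The forwarding class names are the standard SR OS classes; the (tap, index) queue identifier scheme is invented for illustration.

```python
# Standard SR OS forwarding classes, lowest to highest priority.
FORWARDING_CLASSES = ["be", "l2", "af", "l1", "h2", "ef", "h1", "nc"]

def ingress_unicast_queue(fc, dest_tap):
    """Return the (tap, queue-index) pair a unicast packet maps to.

    One queue exists per forwarding class on each destination switch
    fabric tap, so the pair identifies the queue uniquely.
    """
    if fc not in FORWARDING_CLASSES:
        raise ValueError(f"unknown forwarding class: {fc}")
    return (dest_tap, FORWARDING_CLASSES.index(fc))
```

Because the mapping is a pure function of forwarding class and destination tap, no per-service configuration is needed, which is why the text calls this queuing automatic.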
For ingress CFHP multicast packets (Broadcast, Unknown unicast or Multicast—referred to as
BUM traffic), the system maintains a conversation hash table per forwarding class and populates
each forwarding class hash result entry in the table with one of the multicast paths. Best-effort
traffic uses the secondary paths, and expedited traffic uses the primary paths. When a BUM
packet is output by ingress CFHP, a conversation hash is performed and used along with the
packet's forwarding class to pick a hash table entry in order to derive the multicast path to be used. Each
table entry maintains a bandwidth counter that is used to monitor the aggregate traffic per
multicast path. This can be optimized by enabling IMPM on any forwarding complex which
allows the system to redistribute this traffic across the IMPM paths on all forwarding complexes to
achieve a more even capacity distribution. Be aware that enabling IMPM will cause routed and
VPLS (IGMP and PIM) snooped IP multicast groups to be managed by IMPM.
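A minimal sketch of the conversation-hash selection described above, with per-entry bandwidth accounting. The 8/8 primary/secondary split, the CRC32 hash, and the 1024-entry table size are assumptions for illustration, not SR OS internals.

```python
import zlib

PRIMARY_PATHS = list(range(8))        # used by expedited BUM traffic (assumed split)
SECONDARY_PATHS = list(range(8, 16))  # used by best-effort BUM traffic

# Per-(forwarding class, hash bucket) entry: chosen path + bandwidth counter.
path_table = {}

def multicast_path_entry(fc, expedited, conv_key, bucket_count=1024):
    """Look up (or create) the hash table entry for this conversation."""
    bucket = zlib.crc32(conv_key) % bucket_count
    pool = PRIMARY_PATHS if expedited else SECONDARY_PATHS
    return path_table.setdefault(
        (fc, bucket),
        {"path": pool[bucket % len(pool)], "bytes": 0},
    )

def forward_bum_packet(fc, expedited, conv_key, pkt_len):
    """Pick the multicast path for a BUM packet and account its bytes.

    The per-entry byte counter models the aggregate bandwidth
    monitoring per multicast path mentioned in the text.
    """
    entry = multicast_path_entry(fc, expedited, conv_key)
    entry["bytes"] += pkt_len
    return entry["path"]
```

Because the path is derived from a hash of the conversation key, all packets of one conversation stay on one multicast path, while different conversations spread across the pool.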
Any discards performed in the ingress shared queues are reflected in the ingress child policer's
discard counters and reported statistics, assuming a discard-counter-capable stat-mode is
configured for the child policer.
