PES48H12G2 User Manual

Chapter 3
Switch Core

Introduction

This chapter provides an overview of the PES48H12G2's Switch Core. As shown in Figure 2.1 in the Architectural Overview chapter, the Switch Core interconnects switch ports. The Switch Core's main function is to transfer TLPs among these ports efficiently and reliably. In order to do so, the Switch Core provides buffering, ordering, arbitration, and error detection services.

Switch Core Architecture

The Switch Core is based on a non-blocking crossbar design optimized for system interconnect (i.e., peer-to-peer) as well as fanout (i.e., root-to-endpoint) applications. At a high level, the Switch Core is composed of ingress buffers, a crossbar fabric interconnect, and egress buffers. These blocks are complemented with ordering, arbitration, and error handling logic (not shown in the figure).
Each port has a dedicated ingress buffer and a dedicated egress buffer. The ingress buffer stores data received or generated by the port. The egress buffer stores data that will be transmitted by the port. The crossbar interconnect is a matrix of pathways capable of concurrently transferring data among all possible port pairs (e.g., port 0 can transfer data to port 1 at the same time port 2 transfers data to port 3).
As packets are received from the link, they are stored in the corresponding ingress buffer. After undergoing ordering and arbitration, they are transferred to the corresponding egress buffer via the crossbar interconnect. The presence of egress buffers provides head-of-line blocking (HOLB) relief when an egress port is congested. For example, a packet received on port 0 that is destined to port 1 may be transferred from port 0's ingress buffer to port 1's egress buffer even if port 1 does not have sufficient egress link credits. This transfer allows subsequent packets received on port 0 to be transmitted to their destination.
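The C sketch below is an illustrative software model of this data path, not the device's implementation; the type and function names (tlp_t, port_t, crossbar_transfer), the queue depth, and the ring-buffer bookkeeping are assumptions made for the example. It shows the property described above: a TLP moves from an ingress buffer to the destination port's egress buffer as soon as that egress buffer has space, independent of egress link credits, so later packets in the same ingress buffer are not stuck behind it. Ordering and arbitration between competing ingress buffers are omitted.

#include <stdbool.h>

#define NUM_PORTS   12   /* the PES48H12G2 provides 12 ports            */
#define QUEUE_DEPTH 64   /* illustrative; matches the 64-TLP queue limit */

typedef struct {
    unsigned dest_port;     /* result of the route lookup at ingress */
    unsigned length_bytes;  /* TLP payload length                    */
} tlp_t;

typedef struct {
    tlp_t    slots[QUEUE_DEPTH];
    unsigned head, tail, count;
} tlp_queue_t;

typedef struct {
    tlp_queue_t ingress;      /* packets received from the link       */
    tlp_queue_t egress;       /* packets waiting to be transmitted    */
    unsigned    link_credits; /* flow-control credits on egress link  */
} port_t;

/* One crossbar step: move the head TLP of an ingress buffer into the
 * egress buffer of its destination port.  Only egress *buffer* space is
 * required, not egress *link* credits -- this is the HOLB relief: later
 * packets in the same ingress buffer that target other, uncongested
 * ports are no longer blocked behind this one. */
static bool crossbar_transfer(port_t *in, port_t ports[NUM_PORTS])
{
    if (in->ingress.count == 0)
        return false;

    tlp_t  *tlp = &in->ingress.slots[in->ingress.head];
    port_t *out = &ports[tlp->dest_port];

    if (out->egress.count == QUEUE_DEPTH)  /* egress buffer full */
        return false;

    out->egress.slots[out->egress.tail] = *tlp;
    out->egress.tail = (out->egress.tail + 1) % QUEUE_DEPTH;
    out->egress.count++;

    in->ingress.head = (in->ingress.head + 1) % QUEUE_DEPTH;
    in->ingress.count--;
    return true;
}

int main(void)
{
    static port_t ports[NUM_PORTS];

    /* Enqueue one TLP on port 0 destined for port 1, then run one step. */
    ports[0].ingress.slots[0] = (tlp_t){ .dest_port = 1, .length_bytes = 256 };
    ports[0].ingress.tail = ports[0].ingress.count = 1;

    return crossbar_transfer(&ports[0], ports) ? 0 : 1;
}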

Ingress Buffer

When a packet is received from the link, the ingress port's Application Layer determines the packet's
route and subjects it to TC/VC mapping. The packet is then stored in the appropriate IFB, together with its
routing and handling information (i.e., the packet's descriptor). The IFB consists of three queues. These
queues are the posted transaction queue (PT queue), the non-posted transaction queue (NP queue), and
the completion transaction queue (CP queue).
– The queues for the IFB are implemented using a descriptor memory and a data memory.
– When two x4 ports are merged to create a x8 port, the descriptor and data memories for both x4 ports are merged.
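As a rough model of this organization, the sketch below represents an IFB as three queues, each backed by a descriptor memory and a data memory. The struct and field names are assumptions chosen for illustration, not register- or RTL-level definitions; per-queue capacities follow the defaults listed in Table 3.1.

#include <stddef.h>
#include <stdint.h>

enum ifb_queue_id { IFB_POSTED, IFB_NONPOSTED, IFB_COMPLETION, IFB_NUM_QUEUES };

/* Routing and handling information stored with each packet. */
typedef struct {
    uint8_t  dest_port;    /* route determined by the Application Layer */
    uint8_t  vc;           /* virtual channel after TC/VC mapping       */
    uint16_t data_offset;  /* location of the payload in data memory    */
    uint16_t data_length;  /* payload length in bytes                   */
} ifb_descriptor_t;

typedef struct {
    ifb_descriptor_t descriptors[64]; /* descriptor memory: up to 64 TLPs */
    uint8_t         *data;            /* data memory; default per-queue   */
    size_t           data_size;       /*   sizes are listed in Table 3.1  */
    unsigned         num_tlps;
    size_t           bytes_used;
} ifb_queue_t;

typedef struct {
    ifb_queue_t queues[IFB_NUM_QUEUES]; /* PT, NP, and CP queues */
} ifb_t;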
The default size of each of these queues is shown in Table 3.1.
Port Mode        IFB Queue     Total Size and Limitations      Advertised      Advertised
                               (per-port)                      Data Credits    Header Credits
x4 Bifurcated    Posted        6176 Bytes and up to 64 TLPs    386             64
                 Non Posted    1024 Bytes and up to 64 TLPs    64              64
                 Completion    6176 Bytes and up to 64 TLPs    386             64

Table 3.1 IFB Buffer Sizes (Part 1 of 2)
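The advertised credit values in Table 3.1 are consistent with the queue sizes: a PCI Express flow-control data credit accounts for 16 bytes, so the 6176-byte queues correspond to 386 data credits and the 1024-byte non-posted queue to 64, while the 64 header credits match each queue's 64-TLP descriptor limit (one header credit per TLP). A minimal sketch that re-derives the data credit counts from the queue sizes, assuming the standard 16-byte data credit unit:

#include <stdio.h>

int main(void)
{
    /* One PCI Express flow-control data credit covers 16 bytes (4 DWs). */
    const unsigned bytes_per_data_credit = 16;
    const unsigned queue_bytes[] = { 6176, 1024, 6176 };       /* PT, NP, CP */
    const char    *names[]       = { "Posted", "Non Posted", "Completion" };

    for (int i = 0; i < 3; i++)
        printf("%-11s %u Bytes -> %u data credits\n",
               names[i], queue_bytes[i],
               queue_bytes[i] / bytes_per_data_credit);
    return 0;
}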
