
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.
Note - You can also connect just one port in each IPMP group to the 10-GbE network, rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.
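On Oracle Solaris 11, an IPMP group of this kind is typically assembled with the ipadm
command. The following is a minimal sketch only, assuming Solaris 11 ipadm syntax; the
datalink names (net4, net5) and the data address are hypothetical placeholders, not the values
configured on your SuperCluster.

  # Sketch: active/active IPMP group over two 10-GbE ports
  # (net4, net5, and the address are illustrative placeholders)
  ipadm create-ip net4
  ipadm create-ip net5
  ipadm create-ipmp -i net4 -i net5 ipmp0                  # group the two ports
  ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4    # single data address

With both ports active, outbound traffic can spread across the two NICs, and the shared data
address fails over within the group if one NIC fails.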
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain (an
illustrative configuration sketch follows this list):
- Database Domain:
  - Storage private network: Connections through P1 (active) on the InfiniBand HCA
    associated with the first CPU in the first processor module (PM0) in the domain, and
    P0 (standby) on the InfiniBand HCA associated with the last CPU in the last processor
    module (PM3) in the domain.
    So, for a Giant Domain in a Full Rack, these connections would be through P1 on the
    InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
    slot 16 (standby).
  - Exadata private network: Connections through P0 (active) and P1 (standby) on all
    InfiniBand HCAs associated with the domain.
    So, for a Giant Domain in a Full Rack, connections are made through all eight
    InfiniBand HCAs, with P0 on each as the active connection and P1 on each as the
    standby connection.
- Application Domain:
  - Storage private network: Connections through P1 (active) on the InfiniBand HCA
    associated with the first CPU in the first processor module (PM0) in the domain, and
    P0 (standby) on the InfiniBand HCA associated with the last CPU in the last processor
    module (PM3) in the domain.
    So, for a Giant Domain in a Full Rack, these connections would be through P1 on the
    InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
    slot 16 (standby).
  - Oracle Solaris Cluster private network: Connections through P0 (active) on the
    InfiniBand HCA associated with the first CPU in the second processor module (PM1)
    in the domain, and P1 (standby) on the InfiniBand HCA associated with the first CPU in
    the third processor module (PM2) in the domain.
    So, for a Giant Domain in a Full Rack, these connections would be through P0 on the
    InfiniBand HCA installed in slot 4 (active) and P1 on the InfiniBand HCA installed in
    slot 7 (standby).
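Each of these private networks follows the same pattern: an IPMP group pairing an active port
on one InfiniBand HCA with a standby port on another. As a hedged sketch only (assuming
Solaris 11 ipadm syntax; the datalink names stor_ib0 and stor_ib1 and the address are
hypothetical placeholders, not the names used on your system), the storage private network
pair might be assembled as follows:

  # Sketch: active/standby IPMP pair for the storage private network
  # (stor_ib0, stor_ib1, and the address are illustrative placeholders)
  ipadm create-ip stor_ib0                         # P1 on the PM0 HCA (active)
  ipadm create-ip stor_ib1                         # P0 on the PM3 HCA (standby)
  ipadm set-ifprop -p standby=on -m ip stor_ib1    # mark the standby interface
  ipadm create-ipmp -i stor_ib0 -i stor_ib1 stor_ipmp0
  ipadm create-addr -T static -a 192.168.10.5/24 stor_ipmp0/v4

Marking one interface as standby keeps traffic on the active port; the group's data address
moves to the standby port only if the active port fails.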
Understanding the Software Configurations
