Oracle SuperCluster T5-8 Owner's Guide
Part No: E40167-17, May 2016
Contents (excerpt)

Understanding Full Rack Configurations 69
Understanding Clustering Software 84
Cluster Software for the Database Domain 85
Cluster Software for the Oracle Solaris Application Domains 85
Understanding the Network Requirements 86
Network Requirements Overview 86
Network Connection Requirements for Oracle SuperCluster T5-8 89
Understanding Default IP Addresses
▼ Install a Ground Cable (Optional) 124
▼ Adjust the Leveling Feet 125
Powering on the System for the First Time 126
▼ Connect Power Cords to the Rack 126
▼ Power On the System 130
▼ Power On the System 145
▼ Identify the Version of SuperCluster Software 145
SuperCluster Tools 146
Managing Oracle Solaris 11 Boot Environments 147
Advantages to Maintaining Multiple Boot Environments 147
▼ Create a Boot Environment 148
▼ Mount to a Different Boot Environment 150
▼ Configure ASR on the ZFS Storage Appliance 210
▼ Configure ASR on SPARC T5-8 Servers (Oracle ILOM) 213
Configuring ASR on the SPARC T5-8 Servers (Oracle Solaris 11) 215
▼ Approve and Verify ASR Asset Activation 219
Monitoring the System Using OCM 221
OCM Overview
▼ Configure the Database to Support InfiniBand 238
▼ Enable SDP Support for JDBC 239
▼ Monitor SDP Sockets Using netstat on Oracle Solaris 240
Configuring SDP InfiniBand Listener for Exalogic Connections 240
▼ Create an SDP Listener on the InfiniBand Network 241
Understanding Internal Cabling 245
Connecting an Oracle Exadata Storage Expansion Quarter Rack to Oracle SuperCluster T5-8 316
Connecting an Oracle Exadata Storage Expansion Half Rack or Oracle Exadata Storage Expansion Full Rack to Oracle SuperCluster T5-8 320
Two-Rack Cabling 322
Three-Rack Cabling 324
Four-Rack Cabling
Index 353
Using This Documentation

This document provides an overview of Oracle SuperCluster T5-8, and describes configuration options, site preparation specifications, installation information, and administration tools.
■ Overview – Describes how to configure, install, tune, and monitor the system.
■ Audience – Technicians, system administrators, and authorized service providers.
Oracle SuperCluster T5-8 is well-suited for consolidating multiple databases into a single grid. Delivered as a complete, pre-optimized, and pre-configured package of software, servers, and storage, Oracle SuperCluster T5-8 is fast to implement and ready to tackle your large-scale business applications.
Understanding Oracle SuperCluster T5-8

Oracle SuperCluster T5-8 does not include any Oracle software licenses. Appropriate licensing of the following software is required when Oracle SuperCluster T5-8 is used as a database server:
■ Oracle Database software
■ Oracle Exadata Storage Server software
■ One 3 TB High Capacity SAS disk as a spare for the Sun ZFS Storage 7320 storage appliance, or one 4 TB High Capacity SAS disk as a spare for the Oracle ZFS Storage ZS3-ES storage appliance
■ Exadata Smart Flash Cache card
Oracle will not support questions or issues with the non-standard modules. If a server crashes, and Oracle suspects the crash may have been caused by a non-standard module, then Oracle support may refer the customer to the vendor of the non-standard module or ask that the issue be reproduced without the non-standard module.
Identifying Hardware Components

Full Rack Components

FIGURE 1  Oracle SuperCluster T5-8 Full Rack Layout, Front View

■ SPARC T5-8 servers (2, with four processor modules apiece)
■ Sun Datacenter InfiniBand Switch 36 spine switch

You can expand the amount of disk storage for your system using the Oracle Exadata Storage Expansion Rack. See "Oracle Exadata Storage Expansion Rack Components" on page 287 for more information.
■ SPARC T5-8 servers (2, with two processor modules apiece)
■ Exadata Storage Servers (4)
■ Sun Datacenter InfiniBand Switch 36 spine switch

You can expand the amount of disk storage for your system using the Oracle Exadata Storage Expansion Rack. See "Oracle Exadata Storage Expansion Rack Components" on page 287 for more information.
SPARC T5-8 Servers

The Full Rack version of Oracle SuperCluster T5-8 contains two SPARC T5-8 servers, each with four processor modules. The Half Rack version also contains two SPARC T5-8 servers, but each with two processor modules.
"Cluster Software for the Database Domain" on page 85

ZFS Storage Appliance

Each Oracle SuperCluster T5-8, in either the Full Rack or Half Rack version, contains one ZFS storage appliance. The ZFS storage appliance consists of the following:
■ Two ZFS storage controllers
Power Distribution Units Each Oracle SuperCluster T5-8 contains two power distribution units, in either the Full Rack or Half Rack version. The components within Oracle SuperCluster T5-8 connect to both power distribution units, so that power continues to be supplied to those components should one of the two power distribution units fail.
Each Oracle SuperCluster T5-8 contains two SPARC T5-8 servers, regardless of whether it is a Full Rack or a Half Rack. The distinguishing factor between the Full Rack and Half Rack versions of Oracle SuperCluster T5-8 is not the number of SPARC T5-8 servers, but the number of processor modules in each SPARC T5-8 server: the Full Rack has four processor modules and the Half Rack has two processor modules.
Understanding the Hardware Components and Connections

FIGURE 3  Topology for the Full Rack Version of Oracle SuperCluster T5-8
FIGURE 4  Card Locations (SPARC T5-8 Servers)

The following figures show the cards used for the physical connections for the SPARC T5-8 servers in the Full Rack and Half Rack versions of Oracle SuperCluster T5-8.
Note - For the Half Rack version of Oracle SuperCluster T5-8, eight of the 16 PCIe slots are occupied with either InfiniBand HCAs or 10-GbE NICs. However, all 16 PCIe slots are accessible, so the remaining eight PCIe slots are available for optional Fibre Channel PCIe cards.
FIGURE 6  Card Locations (Half Rack)

Figure Legend
■ Dual-port 10-GbE network interface cards, for connection to the 10-GbE client access network (see "10-GbE Client Access Network Physical Connections (SPARC T5-8 Servers)")
■ Dual-port InfiniBand host channel adapters, for connection to the InfiniBand network (see "InfiniBand Private Network Physical Connections (SPARC T5-8 Servers)")
Each SPARC T5-8 server contains several dual-ported Sun QDR InfiniBand PCIe Low Profile host channel adapters (HCAs). The number of InfiniBand HCAs and their locations in the SPARC T5-8 servers vary, depending on the configuration of Oracle SuperCluster T5-8:
■ Full Rack: Eight InfiniBand HCAs, installed in these PCIe slots:
See "Card Locations (SPARC T5-8 Servers)" on page 29 for more information on the location of the InfiniBand HCAs. The two ports in each InfiniBand HCA (ports 1 and 2) connect to different leaf switches to provide redundancy between the SPARC T5-8 servers and the leaf switches.
SPARC T5-8 servers. The number of IP addresses needed for the InfiniBand network will also vary, depending on the type of domains created on each SPARC T5-8 server. For more information, see "Understanding the Software Configurations" on page 46.
Each SPARC T5-8 server connects to the Oracle Integrated Lights Out Manager (ILOM) management network through a single Oracle ILOM network port (NET MGT port) at the rear of the server. One IP address is required for Oracle ILOM management for each SPARC T5-8 server.
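For illustration only, reaching one of these ILOMs from an administration host on the management network might look like the following (the host name is hypothetical; show /SP/network is the standard Oracle ILOM CLI command for viewing the management network settings):

$ ssh root@ssccn1-ilom
Password:
-> show /SP/network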
Exadata Storage Server. The two ports in the InfiniBand HCA connect to different leaf switches to provide redundancy between the Exadata Storage Servers and the leaf switches. The following figures show how redundancy is achieved with the InfiniBand connections between the Exadata Storage Servers and the leaf switches in the Full Rack and Half Rack configurations.

FIGURE 10  InfiniBand Connections for Exadata Storage Servers, Full Rack
Each Exadata Storage Server connects to the Oracle ILOM management network through a single Oracle ILOM network port (NET MGT port) at the rear of each Exadata Storage Server. One IP address is required for Oracle ILOM management for each Exadata Storage Server.
The ZFS storage appliance has five sets of physical connections:
■ "InfiniBand Private Network Physical Connections (ZFS Storage Appliance)" on page 39
■ "Oracle ILOM Management Network Physical Connections (ZFS Storage Appliance)" on page 41
■ "1-GbE Host Management Network Physical Connections (ZFS Storage Appliance)"
FIGURE 12  InfiniBand Connections for ZFS Storage Controllers, Full Rack
Oracle ILOM Management Network Physical Connections (ZFS Storage Appliance)

The ZFS storage appliance connects to the Oracle ILOM management network through the two ZFS storage controllers. Each storage controller connects to the Oracle ILOM management network through the NET0 port at the rear of each storage controller using sideband management.
■ Storage controller 2 – Both ports from the SAS-2 HBA card to the SIM Link In ports on the Sun Disk Shelf.

The following figures show the SAS connections between the two storage controllers and the Sun Disk Shelf.
FIGURE 14  SAS Connections for the Sun ZFS Storage 7320 Storage Appliance

Figure Legend
Storage controller 1
Storage controller 2
Sun Disk Shelf
FIGURE 15  SAS Connections for the Oracle ZFS Storage ZS3-ES Storage Appliance

Cluster Physical Connections (ZFS Storage Appliance)

Each ZFS storage controller contains a single cluster card. The cluster cards in the storage controllers are cabled together as shown in the following figure. This allows a heartbeat signal to pass between the storage controllers to determine if both storage controllers are up and running.
Power Distribution Units Physical Connections

Oracle SuperCluster T5-8 contains two power distribution units. Each component in Oracle SuperCluster T5-8 has redundant connections to the two power distribution units:
■ SPARC T5-8 servers – Each SPARC T5-8 server has four AC power connectors. Two
Understanding the Software Configurations

Oracle SuperCluster T5-8 is set up with logical domains (LDoms), which provide users with the flexibility to create different specialized virtual systems within a single hardware platform. The following topics provide more information on the configurations available to you:
■ "Understanding Domains"
With dedicated domains, the domain configuration for a SuperCluster (the number of domains and the SuperCluster-specific types assigned to each) is set at the time of the initial installation, and can only be changed by an Oracle representative.
Root Domains essentially exist at the same level as dedicated domains. With the introduction of Root Domains, the following parts of the domain configuration for a SuperCluster are set at the time of the initial installation and can only be changed by an Oracle representative:
■ Type of domain
Note - Even though a domain with two IB HCAs is valid for a Root Domain, domains with only one IB HCA should be used as Root Domains. When a Root Domain has a single IB HCA, fewer I/O Domains have dependencies on the I/O devices provided by that Root Domain.
■ Two cores and 32 GB of memory reserved for the last Root Domain in this configuration.
■ 14 cores and 224 GB of memory available from this Root Domain for the CPU and memory repositories.
■ One core and 16 GB of memory reserved for the second and third Root Domains in this configuration.
■ 15 cores and 240 GB of memory available from each of these Root Domains for the CPU and memory repositories.
The CPU cores and memory resources owned by an I/O Domain are assigned from the CPU and memory repositories (the cores and memory released from Root Domains on the system) when an I/O Domain is created, as shown in the following graphic.
You use the I/O Domain Creation tool to assign the CPU core and memory resources to the I/O Domains, based on the amount of CPU core and memory resources that you want to assign to each I/O Domain and the total amount of CPU core and memory resources available in the CPU and memory repositories.
I/O Domains that you can create for your system. In addition, you should not create an I/O Domain that uses more than one socket's worth of resources.
For example, assume that you have 44 cores parked in the CPU repository and 704 GB of memory parked in the memory repository. You could therefore create I/O Domains in any of the following ways:
■ One or more large I/O Domains, with each large I/O Domain using one socket's worth of resources
CPU core and memory resources available for I/O Domains to pull from the repositories.
Root Domains after the initial installation. Whatever resources you asked to have assigned to the Root Domains at the time of initial installation are set and cannot be changed unless you have the Oracle installer come back out to your site to reconfigure your system.
Note - If you have the Full Rack version of Oracle SuperCluster T5-8, you cannot install a Fibre Channel PCIe card in a slot that is associated with a Small Domain. See “Understanding Small Domains (Full Rack)” on page 80 for more information.
Storage Servers using the InfiniBand switches on the rack. This non-routable network is fully contained in Oracle SuperCluster T5-8, and does not connect to your existing network. When Oracle SuperCluster T5-8 is configured with the appropriate types of domains, the InfiniBand network is partitioned to define the data paths between the SPARC T5-8 servers, and between the SPARC T5-8 servers and the storage appliances.
■ Storage private network: One InfiniBand private network for the Database Domains to communicate with each other, with the Application Domains running Oracle Solaris 10, and with the ZFS storage appliance
■ Exadata private network: One InfiniBand private network for the Oracle Database 11g
Understanding Half Rack Configurations

In the Half Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server includes two processor modules (PM0 and PM3), with two sockets or PCIe root complex pairs on each processor module, for a total of four sockets or PCIe root complex pairs for each SPARC T5-8 server.
FIGURE 16  Logical Domain Configurations and PCIe Slot Mapping (Half Rack)

Related Information
■ "Understanding Large Domains (Half Rack)" on page 63
■ "Understanding Medium Domains (Half Rack)" on page 65
■ "Understanding Small Domains (Half Rack)" on page 67

Understanding Large Domains (Half Rack)

These topics provide information on the Large Domain configuration for the Half Rack:
■ "Percentage of CPU and Memory Resource Allocation" on page 63
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 16 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the InfiniBand HCA associated with the third CPU in the domain.
■ Four cores for the first Medium Domain, the remaining cores for the second Medium Domain (the first Medium Domain must be either a Database Domain or an Application Domain running Oracle Solaris 11 in this case)
■ Config H3-1 (One Medium Domain and two Small Domains): The following options are
P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 11 (standby).
■ Exadata private network: Connections through P0 (active) and P1 (standby) on all InfiniBand HCAs associated with the domain.
P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 11 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the first CPU in the domain and P1 (standby) on the InfiniBand HCA associated with the second CPU in the domain.
A single data address is used to access these two physical ports. That data address allows traffic to continue flowing to the ports in the IPMP group, even if the connection to one of the two ports on the 10-GbE NIC fails.
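As an illustration of this active-standby IPMP arrangement (not the configuration created by the SuperCluster installer; the interface names, group name, and data address are hypothetical), such a group could be built on Oracle Solaris 11 as follows:

# ipadm create-ip net4
# ipadm create-ip net5
# ipadm create-ipmp -i net4 -i net5 ipmp0
# ipadm set-ifprop -p standby=on -m ip net5
# ipadm create-addr -T static -a 192.0.2.50/24 ipmp0/data

The data address lives on the ipmp0 group rather than on either physical interface, which is what lets traffic keep flowing if one underlying port fails.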
(standby) on that InfiniBand HCA.

Understanding Full Rack Configurations

In the Full Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server has four processor modules (PM0 through PM3), with two sockets or PCIe root complex pairs on each processor module, for a total of eight sockets or PCIe root complex pairs for each SPARC T5-8 server.
It also provides information on the PCIe slots and the InfiniBand (IB) HCAs or 10-GbE NICs installed in each PCIe slot, and which logical domain those cards would be mapped to, for the Full Rack.
FIGURE 17  Logical Domain Configurations and PCIe Slot Mapping (Full Rack)

Related Information
■ "Understanding Giant Domains (Full Rack)" on page 72
■ "Understanding Large Domains (Full Rack)" on page 74
■ "Understanding Medium Domains (Full Rack)" on page 77
The following 10-GbE NICs and ports are used for connection to the client access network for this configuration:
■ PCIe slot 1, port 0 (active)
■ PCIe slot 14, port 1 (standby)
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 16 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the first CPU in the second processor module (PM1) in the domain and P1 (standby) on the InfiniBand HCA associated with the first CPU in the third processor module (PM2) in the domain.
■ Four sockets for the Large Domain, one socket apiece for the two Small Domains, two sockets for the Medium Domain
■ Three sockets for the Large Domain, one socket apiece for the two Small Domains, three sockets for the Medium Domain
■ Two sockets for the Large Domain, one socket apiece for the two Small Domains, four sockets for the Medium Domain
■ Five sockets for the Large Domain, one socket apiece for the two Small Domains and
P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 12 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the InfiniBand HCA associated with the third CPU in the domain.
Understanding Medium Domains (Full Rack)

These topics provide information on the Medium Domain configuration for the Full Rack:
■ "Percentage of CPU and Memory Resource Allocation" on page 77
■ "Management Network" on page 78
■ "10-GbE Client Access Network" on page 79
The number and type of 1-GbE host management ports that are assigned to each Medium Domain vary, depending on the CPUs that the Medium Domain is associated with:
■ CPU0/CPU1: NET0-1
■ CPU2/CPU3: NET1, NET3 (through virtual network devices)
■ CPU4/CPU5: NET0, NET2 (through virtual network devices)
■ CPU6/CPU7: NET2-3

10-GbE Client Access Network

Two PCI root complex pairs, and therefore two 10-GbE NICs, are associated with the Medium Domain on the SPARC T5-8 server in the Full Rack. One port is used on each dual-ported 10-GbE NIC.
P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in slot 11 (standby).
■ Oracle Solaris Cluster private network: Connections through P0 (active) on the InfiniBand HCA associated with the first CPU in the domain and P1 (standby) on the InfiniBand HCA associated with the second CPU in the domain.
■ "Management Network" on page 82
■ "10-GbE Client Access Network" on page 82
■ "InfiniBand Network" on page 83

Percentage of CPU and Memory Resource Allocation

The amount of CPU and memory resources that you allocate to the logical domain varies, depending on the size of the other domains that are also on the SPARC T5-8 server:
■ Config F4-2 (One Large Domain, two Small Domains, one Medium Domain): The
10-GbE NIC is connected to the 10-GbE network for the Small Domains. The following 10-GbE NICs and ports are used for connection to the client access network for the Small Domains, depending on the CPU that the Small Domain is associated with:
■ CPU0:
  ■ PCIe slot 1, port 0 (active)
  ■ PCIe slot 1, port 1 (standby)
■ CPU1:
  ■ PCIe slot 9, port 0 (active)
  ■ PCIe slot 9, port 1 (standby)
■ CPU2:
  ■ PCIe slot 2, port 0 (active)
Clustering software is typically used on multiple interconnected servers so that they appear as if they are one server to end users and applications. For Oracle SuperCluster T5-8, clustering software is used to cluster certain logical domains on the SPARC T5-8 servers together with the same type of domain on other SPARC T5-8 servers.
Oracle Clusterware is a portable cluster management solution that is integrated with the Oracle database. Oracle Clusterware is also a required component for using Oracle RAC, and it enables you to create a clustered pool of storage to be used by any combination of single-instance and Oracle RAC databases.
Network Requirements Overview

Oracle SuperCluster T5-8 includes SPARC T5-8 servers, Exadata Storage Servers, and the ZFS storage appliance, as well as equipment to connect the SPARC T5-8 servers to your network. The network connections enable the servers to be administered remotely and enable clients to connect to the SPARC T5-8 servers.
To deploy Oracle SuperCluster T5-8, ensure that you meet the minimum network requirements. There are three networks for Oracle SuperCluster T5-8. Each network must be on a distinct and separate subnet from the others. The network descriptions are as follows:
■ Management network –
Application Domain, Oracle Solaris Cluster uses this network for cluster interconnect traffic and to access data on the ZFS storage appliance. This non-routable network is fully contained in Oracle SuperCluster T5-8, and does not connect to your existing network. This network is automatically configured during installation.
FIGURE 18  Network Diagram for Oracle SuperCluster T5-8

Network Connection Requirements for Oracle SuperCluster T5-8

The following connections are required for Oracle SuperCluster T5-8 installation:

TABLE 1  New Network Connections Required for Installation
(Columns: Connection Type; Number of Connections)
Default Host Names and IP Addresses

Refer to the following topics for the default IP addresses used in Oracle SuperCluster T5-8:
■ "Default Host Names and IP Addresses for the Oracle ILOM and Host Management Networks"
Default Host Names and IP Addresses for the Oracle ILOM and Host Management Networks

TABLE 2  Default Host Names and IP Addresses for the Oracle ILOM and Host Management Networks
(Columns: Unit Number; Rack Component (Front View); information assigned at manufacturing)
PDU-A (left from rear view)
PDU-B (right from rear view)
Exadata Storage Server 8 (Full Rack only): ssces8-stor, 192.168.10.108
Exadata Storage Server 7 (Full Rack only): ssces7-stor, 192.168.10.107
Exadata Storage Server 6 (Full Rack only): ssces6-stor, 192.168.10.106
(Columns: Unit Number; Rack Component (Front View); InfiniBand Host Names; InfiniBand IP Addresses; 10-GbE Client Access Host Names; 10-GbE Client Access IP Addresses)
Exadata Storage Server 5 (Full Rack only): ssces5-stor, 192.168.10.105
ZFS Storage Controller 2
ZFS Storage Controller 1: sscsn1-stor1
192.168.40.1
Exadata Storage Server 4: ssces4-stor, 192.168.10.104
Exadata Storage Server 3: ssces3-stor, 192.168.10.103
Exadata Storage Server 2: ssces2-stor, 192.168.10.102
Exadata Storage Server 1: ssces1-stor, 192.168.10.101
Sun Datacenter InfiniBand Switch 36 (Spine)
Preparing the Site

This section describes the steps you should take to prepare the site for your system.
■ "Cautions and Considerations" on page 95
■ "Reviewing System Specifications" on page 96
■ "Reviewing Power Requirements" on page 99
■ "Preparing for Cooling" on page 106
■ "Rack and Floor Cutout Dimensions" on page 97
■ "Shipping Package Dimensions" on page 111

Installation and Service Area

Select an installation site that provides enough space to install and service the system.
Location          | Maintenance Access
Rear maintenance  | 914 mm (36 in.)
Front maintenance | 914 mm (36 in.)
Top maintenance   | 914 mm (36 in.)

Related Information
■ "Physical Specifications" on page 96
■ "Rack and Floor Cutout Dimensions" on page 97
Width of floor cutout is 280 mm (11 inches).

Related Information
■ "Perforated Floor Tiles" on page 110
■ "Physical Specifications" on page 96
■ "Installation and Service Area" on page 96
■ "PDU Thresholds" on page 104

Power Consumption

These tables describe the power consumption of SuperCluster T5-8 and expansion racks. These are measured values, not the rated power for the rack. For rated power specifications, see "PDU Power Requirements" on page 101.
Ensure that the facility administrator or a qualified electrical engineer verifies the grounding method for the building, and performs the grounding work.

Related Information
■ "Facility Power Requirements" on page 100
■ Low or high voltage
■ Single- or three-phase power

Refer to the following tables for Oracle marketing and manufacturing part numbers. Each system has two power distribution units (PDUs).

Note - Both PDUs in a rack must be the same type.
Total number of data center receptacles required: 6 x Hubbell CS8264C
Total Amps per PDU: 110.4A 2-phase 208V
Total Amps for entire Oracle SuperCluster T5-8: 220.8A 2-phase 208V (2 PDUs @ 110.4A 2-phase 208V each)

Low Voltage 3-Phase PDUs
Low Voltage Three Phase (3W+GND) comments:
Total Amps for entire Oracle SuperCluster T5-8: 240A 3-phase 208V (2 PDUs @ 120A 3-phase 208V each)

High Voltage 1-Phase PDUs

TABLE 9  High Voltage Single Phase (2W+GND)
(Columns: kVA Size; Comments)
Total number of data center receptacles required: 4 x IEC 309-4P5W-IP44
Total Amps per PDU: 109A 3-phase 230/400V
Total Amps for entire Oracle SuperCluster T5-8: 218A 3-phase 230/400V (2 PDUs @ 109A 3-phase 230/400V each)

Related Information
■ "Facility Power Requirements" on page 100
TABLE 11  PDU Thresholds (Quarter-Populated Racks): 22kVA Single-Phase PDUs
(Columns: PDU A and PDU B module pairs M1-3/M1-1, M1-2/M1-2, M1-1/M1-3, with Warning (Amps) and Alarm (Amps) for Low Voltage and High Voltage; numeric values omitted)

TABLE 12  24kVA 3-Phase PDUs
(Columns: Low Voltage; High Voltage; PDU A; PDU B)
■ "Facility Power Requirements" on page 100
■ "Grounding Requirements" on page 100
■ "Power Consumption" on page 99

Preparing for Cooling

■ "Environmental Requirements" on page 107
■ "Heat Dissipation and Airflow Requirements" on page 107
Heat Dissipation and Airflow Requirements

The following table lists the maximum rate of heat released from a system. To cool the system properly, ensure that adequate airflow travels through the system.

TABLE 17  SuperCluster T5-8 Specifications
(Columns: Comments; Full Rack)
■ There is no airflow requirement for the left and right sides, or the top of the rack.
■ If the rack is not completely filled with components, cover the empty sections with filler panels.
FIGURE 20  Direction of Airflow Is Front to Back

TABLE 19  Airflow (Listed Specifications Are Approximate)
Full rack: Maximum 2,523 CFM; Typical 2,103 CFM
Half rack: Maximum 1,436 CFM; Typical 1,185 CFM
(Table: Number of Tiles, for Full rack and Half rack; numeric values omitted)

Related Information
■ "Heat Dissipation and Airflow Requirements" on page 107
■ "Environmental Requirements" on page 107
■ "Rack and Floor Cutout Dimensions" on page 97
■ "Loading Dock and Receiving Area Requirements" on page 111
■ "Access Route Guidelines" on page 112
■ "Unpacking Area" on page 113

Shipping Package Dimensions

The following package dimensions apply to Oracle SuperCluster T5-8.
Parameter | Metric  | English
Height    | 2159 mm | 85 in.
Weight: 2500 lbs / 2500 lbs (for shipping weights, see "Shipping Package Dimensions" on page 111)

Related Information
■ "Shipping Package Dimensions" on page 111
■ "Loading Dock and Receiving Area Requirements" on page 111
■ "Loading Dock and Receiving Area Requirements" on page 111
■ "Physical Specifications" on page 96

Preparing the Network

Prepare your network for Oracle SuperCluster T5-8.
■ "Network Connection Requirements" on page 113
■ "Network IP Address Requirements" on page 114
Network IP Address Requirements

For Oracle SuperCluster T5-8, the number of IP addresses that you will need for each network varies depending on the type of configuration you choose for your system. For more information on the number of IP addresses required for your configuration, refer to the appropriate configuration worksheet.
Related Information
■ The configuration worksheets document, for additional information about the worksheets
■ Oracle Grid Infrastructure Installation Guide for Linux, for additional information about SCAN addresses
■ Your DNS vendor documentation, for additional information about configuring round robin
Installing the System

This chapter explains how to install Oracle SuperCluster T5-8.
■ "Installation Overview" on page 117
■ "Oracle Safety Information" on page 118
■ "Unpacking the System" on page 119
■ "Moving the Rack Into Place" on page 121
■ "Using an Optional Fibre Channel PCIe Card" on page 139

Oracle Safety Information

Become familiar with Oracle's safety information before installing any Oracle server or equipment:
■ Read the safety notices printed on the product packaging.
■ Read the Important Safety Information for Sun Hardware Systems (821-1590) document.
■ Read all safety notices in the Sun Rack II Power Distribution Units Users Guide (820-4760). This guide is also available at http://docs.oracle.com/en.
■ Read the safety labels that are on the equipment.

Unpacking the System

■ "Tools for Installation"
Find the Unpacking Instructions

FIGURE 21  Unpacking the Rack
See "Perforated Floor Tiles" on page 110.

Check the data center airflow around the installation site. See "Heat Dissipation and Airflow Requirements" on page 107 for more information.

Moving the Rack Into Place

■ "Move Oracle SuperCluster T5-8" on page 122
Note - The front casters do not swivel; you must steer the cabinet by turning the rear casters.

Caution - Never push on the side panels to move the rack. Pushing on the side panels can tip the rack over.
Move Oracle SuperCluster T5-8

Caution - Never tip or rock the rack. It can fall over.
The ground cable attachment area might have a painted or coated surface that must be removed to ensure solid contact. Attach the ground cable to one of the attachment points located at the bottom rear of the system frame. Oracle SuperCluster T5-8 Owner's Guide • May 2016...
The attachment point is an adjustable bolt that is inside the rear of the rack on the right side.

Adjust the Leveling Feet

There are leveling feet at the four corners of the rack.
Locate the 12-mm wrench inside the rack.
Use the wrench to lower the leveling feet to the floor.
Caution - If the circuit breakers are in the On position, destructive sparking might occur when you attach the AC cables to the rack.

Open the rear cabinet door.
Verify that the switches on the PDUs are in the Off position.
Connect Power Cords to the Rack

Ensure that both PDUs are turned completely off. PDU-A is at the left side of the cabinet; PDU-B is at the right. Each PDU has six switches (circuit breakers), one for each socket group.

FIGURE 22  Power Switches on PDU
Ensure that the correct power connectors have been supplied with the power cords.

Unfasten the power cord cable ties. The ties are for shipping only and are no longer needed.
Route the power cords to the facility receptacles either above the rack or below the flooring.
Note - Do not turn on PDU A at this time.

PDU B is located on the right side of the rear of the rack. See below. Press the ON (|) side of the toggle switches on PDU B.
Powering on PDU B powers on only half of the power supplies in the rack. The remaining power supplies are powered on in a later step.

Note - For the location of each of the components, see "Identifying Hardware Components".
The LEDs for the components should be in the following states when all of the PDU A circuit breakers have been turned on.

Check the SPARC T5-8 servers:
■ Power OK green LED – Blinking
■ Service Action Required amber LED – Off
■ Right power supply – Green

Check the Exadata Storage Server:
■ Power OK LED – Off while Oracle ILOM is booting (about 3 minutes), then blinking
■ Service Action Required amber LED – Off

Check the fronts of the Sun Datacenter InfiniBand Switch 36 switches:
■ Left power supply LED (PS0 LED) –
USB port of your laptop to the Cisco switch. A USB-to-Serial adapter is installed in the rack on all of the gateway switches (Sun Network QDR InfiniBand Gateway Switches). An extra adapter is included in the shipping kit in Oracle SuperCluster T5-8 configurations. If you have not booted your laptop, start the operating system now.
Cisco Ethernet switch resides. You can use the default NET0 IP addresses of SPARC T5-8 servers assigned at the time of manufacturing or the custom IP address that you reconfigured using the Oracle SuperCluster T5-8 Configuration Utility tool. For the list of default NET0 IP addresses, see “Default IP Addresses”...
Connect to a 10-GbE Client Access Network

Note - If you or the Oracle installer have not run the Oracle SuperCluster T5-8 Configuration Utility set of tools and scripts to reconfigure IP addresses for the system, you can use a set of default IP addresses.
10 Gb to 1 Gb on the other side of the 10-GbE network switch. Oracle SuperCluster T5-8 cannot be installed at the customer site without the 10-GbE client access network infrastructure in place.
FIGURE 23  Example Connection to the 10-GbE Client Access Network

Figure Legend
10-GbE switch with QSFP connections (Sun Network 10GbE Switch 72p shown)
QSFP connector ends of SFP-QSFP cables, connecting to QSFP ports on the 10-GbE switch
Contact Oracle to install Fibre Channel PCIe cards in the Full Rack version of Oracle SuperCluster T5-8 to ensure that the domains are configured correctly after the 10-GbE NIC has been replaced with a Fibre Channel card.
■ When installed in slots associated with Application Domains running either Oracle Solaris 10 or Oracle Solaris 11, the Fibre Channel PCIe cards can be used for any purpose, including database file storage for supported databases other than Oracle Database 11gR2.
■ "Configuring CPU and Memory Resources (osc-setcoremem)" on page 170

Cautions and Warnings

The following cautions and warnings apply to Oracle SuperCluster T5-8 systems.

Caution - Do not touch the parts of this product that use high-voltage power. Touching them might result in serious injury.
Log in to the browser interface and click the power icon on the left side of the top pane.
9. Power off the switches, and the entire rack, by turning off the circuit breakers.

To power the system back on, see "Power On the System" on page 145.
Oracle ASM disk group redundancy will not be maintained. Taking an Exadata Storage Server offline when one or more grid disks are in this state causes Oracle ASM to dismount the affected disk group, causing the databases to shut down abruptly.
# ldm stop activedomainname
LDom activedomainname stopped
# ldm unbind-domain activedomainname

Halt the primary domain.

# shutdown -i5 -g0 -y

Because no other domains are bound, the firmware automatically powers off the system.
If there is an emergency, such as earthquake or flood, an abnormal smell or smoke coming from the machine, or a threat to human safety, then power to Oracle SuperCluster T5-8 should be halted immediately. In that case, use one of the following ways to power off the system.
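The specific emergency methods are not preserved in this excerpt. One generally available option for removing power from a single server quickly (a sketch, assuming Oracle ILOM access; this is an immediate power-off, not a graceful shutdown) is the standard ILOM command:

-> stop -force /SYS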
The osc-setcoremem tool (see "Configuring CPU and Memory Resources (osc-setcoremem)" on page 170) automatically assigns the appropriate amount of memory to each domain based on how you allocated CPU resources, ensuring optimal performance by minimizing NUMA effects.
Solaris 11 global zones to monitor and tune various parameters.

Managing Oracle Solaris 11 Boot Environments

When the Oracle Solaris OS is first installed on a system, a boot environment is created. You can use the beadm(1M) utility to create and administer additional boot environments on your system.
-e option in the beadm create command. Then you can use the beadm activate command to specify that this boot environment will become the default boot environment on the next reboot. For more information about the advantages of multiple Oracle Solaris 11 boot environments, see: http://docs.oracle.com/cd/E23824_01/html/E21801/snap3.html#scrolltoc...
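A minimal sketch of that create-and-activate sequence (the boot environment names are hypothetical):

# beadm create -e solaris solaris-backup
# beadm activate solaris-backup
# reboot

After the reboot, beadm list shows solaris-backup as the active boot environment.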
Create a Boot Environment

localsys% ssh systemname -l root
Password:
Last login: Wed Nov 13 20:27:29 2011 from dhcp-vpn-r
Oracle Corporation  SunOS 5.11  solaris  April 2011
root@sup46:~#

Manage ZFS boot environments with beadm.

root@sup46:~# beadm list
BE      Active Mountpoint Space Policy Created
----------------------------------------------------------
solaris NR ...
Connection to systemname closed.
localsys% ssh systemname -l root
Password:
Last login: Thu Jul 14 14:37:34 2011 from dhcp-vpn-
Oracle Corporation  SunOS 5.11  solaris  April 2011
root@sup46:~#

Remove Unwanted Boot Environments

Use the following commands to remove boot environments.

root@sup46:~# beadm list
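The removal command itself is truncated in this excerpt; the standard beadm subcommand (with a hypothetical boot environment name; -F forces removal without confirmation) is:

root@sup46:~# beadm destroy -F solaris-backup
root@sup46:~# beadm list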
See "Disable DISM" on page 152. To decide if DISM is appropriate for your environment, and for more information about using DISM with an Oracle database, refer to the Oracle white paper "Dynamic SGA Tuning of Oracle Database on Oracle Solaris with DISM":
http://www.oracle.com/technetwork/articles/systems-hardware-architecture/using-dynamic-intimate-memory-sparc-168402.pdf
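The disable step that follows zeroes MEMORY_TARGET in the spfile. A fuller hedged sketch of the session (assuming SQL*Plus with SYSDBA privileges, and an instance restart for the change to take effect):

$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET MEMORY_TARGET=0 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP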
SQL> ALTER SYSTEM SET MEMORY_TARGET=0 scope=spfile;

Component-Specific Service Procedures

If you have a service contract for your Oracle SuperCluster T5-8, contact your service provider for maintenance. If you do not have a service contract, refer to each component's documentation for general maintenance information.
Storage Server loses power or fails. For Exadata Storage Server releases earlier than release 11.2.1.3, the operation occurs every month. For Oracle Exadata Storage Server Software release 11.2.1.3 and later, the operation occurs every three months, for example, at 01:00 on the 17th day of January, April, July and October.
Monitor Write-through Caching Mode

...the learn cycle. Disk write throughput might be temporarily lower during this time. The message is informational only; no action is required.

Use the following command to view the status of the battery:

# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -a0

The following is an example of the output of the command:

BBU status for Adapter: 0
Oracle ASM disk group redundancy will not be maintained. Taking an Exadata Storage Server offline when one or more grid disks are in this state causes Oracle ASM to dismount the affected disk group, causing the databases to shut down abruptly.
Bring all grid disks online using the following command:

CellCLI> ALTER GRIDDISK ALL ACTIVE

When the grid disks become active, Oracle ASM automatically synchronizes the grid disks to bring them back into the disk group.

Verify that all grid disks have been successfully put online using the following command:

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus
From Oracle ASM, drop the Oracle ASM disks on the physical disk using the following command:

ALTER DISKGROUP diskgroup-name DROP DISK asm-disk-name

To ensure the correct redundancy level in Oracle ASM, wait for the rebalance to complete before proceeding.

Remove the IP address entry from the cellip.ora file on each database server that accesses the Exadata Storage Server.
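One common way to confirm that the rebalance has finished (a sketch, run in SQL*Plus against the Oracle ASM instance; completion is indicated when the query returns no rows):

SQL> SELECT * FROM V$ASM_OPERATION;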
Related Information
■ For more information about ssctuner, refer to MOS note 1903388.1 at: https://support.oracle.com
■ For more information about SMF services on the Oracle Solaris OS, see the Oracle Solaris System Administration Guide: Common System Management Tasks at: http://docs.oracle.com/cd/E23824_01/html/821-1451/hbrunlevels-25516
See "Change ssctuner Properties and Disable Features" on page 163.

Note - Oracle Solaris 11 must be at SRU 11.4 or later, or ssd.conf/sd.conf settings might cause panics.

Note - Do not set ndd parameters through another SMF service or init script. ssctuner must manage the ndd parameters.
If you plan to change any other ssctuner properties, do so before you perform the remaining steps in this task. See "Change ssctuner Properties and Disable Features" on page 163.

Restart the SMF service for changes to take effect.

# svcadm restart ssctuner
■ "Install ssctuner" on page 168

Change ssctuner Properties and Disable Features

Caution - Do not perform this procedure without Oracle Support approval. Changing properties or disabling ssctuner features can have unpredictable consequences.

Changing certain ssctuner properties, such as EMAIL_ADDRESS and the disk or memory usage warning levels, might be advantageous in some environments.
■ Change the disk (/ and zone roots) usage warning level to 80%:
~# svccfg -s ssctuner setprop ssctuner_vars/DISK_USAGE_WARN=80
■ Enable thread priority changing for non-exa Oracle DB domains:
~# svccfg -s ssctuner setprop ssctuner_vars/CRIT_THREADS_NONEXA=true
■ Enable zpool check and repair of vdisk zpools that are not generated by the SuperCluster
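To review the current values before restarting the service, the standard svccfg listprop subcommand can be used (shown as a hedged example):

~# svccfg -s ssctuner listprop ssctuner_vars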
Configure ssctuner to Run compliance(1M) Benchmarks

The NFS_EXCLUDE, NFS_INCLUDE, and ZPOOL_NAME_CUST properties must be simple strings, but you can use simple regular expressions. If you need the flexibility of regular expressions, be extremely careful to double-quote the expressions. Also verify that the ssctuner service comes back after restarting and that no errors are in the SMF log file.
■ "Monitor and View the Compliance Benchmark" on page 166
■ "Install ssctuner" on page 168

Monitor and View the Compliance Benchmark

(Optional) View the SMF log as the benchmark runs:

# tail -f /var/svc/log/site-application-sysadmin-ssctuner\:default.log
[ Nov 16 11:47:55 CURRENT ISSUES : Please change ssctuner email address from root@localhost ]
[ Nov 16 11:47:55 notice: Checking Oracle log writer and LMS thread priority. ]
[ Nov 16 11:47:56 notice: completed initialization. ]
[ Nov 16 11:47:56 Method "start" exited with status 0. ]
[ Nov 16 11:49:55 notice: Checking Oracle log writer and LMS thread priority. ]
By default, ssctuner is installed and running. If for some reason ssctuner is not installed, use this procedure to install it.

Install the ssctuner package. Use the Oracle Solaris package command and package name based on the version of the OS.
■ Oracle Solaris 10 OS:
# pkginfo ORCLssctuner
■ Oracle Solaris 11 OS:
# pkg info ssctuner

Verify that the ssctuner service is automatically started after the package installation.

# svcs ssctuner

If the service does not transition to an online state after a minute or two, check the service log file.
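If the service remains disabled, enabling it explicitly with the standard SMF commands (a sketch) should bring it online:

# svcadm enable ssctuner
# svcs ssctuner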
This section describes how to configure Oracle SuperCluster CPU and memory resources using osc-setcoremem. Prior to the Oracle SuperCluster July 2015 quarterly update, you configured CPU and memory resources using the setcoremem tool (in some cases called setcoremem-t4, setcoremem-t5, or setcoremem-m6 based on the SuperCluster model).
Configuring CPU and Memory Resources (osc-setcoremem)

The osc-setcoremem tool is supported on all SuperCluster systems that run SuperCluster 1.x software with the July 2015 quarterly update, and on SuperCluster 2.x software. Use these topics to change CPU and memory allocations for domains using the CPU/Memory tool, osc-setcoremem.
■ "Display the Current Domain Configuration (ldm)" on page 180
■ "Change CPU/Memory Allocations (Socket Granularity)" on page 182
■ "Change CPU/Memory Allocations (Core Granularity)" on page 186
■ "Park Cores and Memory" on page 190
The tool tracks resource allocation and ensures that the selections you make are valid. This section describes how the minimum and maximum resources are determined.

This table summarizes the minimum resource requirements for dedicated domains on SuperCluster T5-8:
(Columns: Configuration; Control Domain)
■ Remove a CPU/memory configuration: "Remove a CPU/Memory Configuration" on page 202

Mixed domains (some are dedicated, some are Root Domains): Activities you can only perform at initial installation, before any I/O Domains are created:
(Columns: Domain Configuration; Supported Resource Allocation Activities; Links)
■ Plan how the CPU and memory resources are allocated to the domains: "Plan CPU and Memory Allocations" on page 175
■ Reallocate all of the resources across domains at the socket or core level (a reboot is required if any ...): "Change CPU/Memory Allocations (Socket Granularity)" on page
■ "Display the Current Domain Configuration (osc-setcoremem)" on page 178
■ "Display the Current Domain Configuration (ldm)" on page 180

In this example, one compute node on a SuperCluster T5-8 Full Rack has five dedicated domains and one Root Domain.
(Columns: Domain; Domain Type)
Unallocated resources – These resources are placed in the CPU and memory repositories when Root Domains are created, or by leaving some resources unallocated when you use the osc-setcoremem command.

In this example, the resources for the dedicated domains and the unallocated resources are summed to provide the total resources.
Use the osc-setcoremem command to view domains and resources.

Note - If you don't want to continue to use the osc-setcoremem command to change resource allocations, enter CTRL-C at the first prompt.

Example:
# /opt/oracle.supercluster/bin/osc-setcoremem

osc-setcoremem v1.0 built on Oct 29 2014 10:21:05

Current Configuration: Full-Rack T5-8 SuperCluster

+-------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN                  | CORES | MEM_GB | TYPE      | CORES | MEM_GB |
The resources listed for Root Domains represent only the resources that are reserved for the Root Domain itself. Parked resources are not displayed.

# ldm list
NAME  STATE  FLAGS  CONS  VCPU  MEMORY  UTIL  NORM  UPTIME
-n--v-  5005  0.0%  0.0%  1d 10h 15m

Based on your SuperCluster model, use one of these specifications to convert socket, core, and VCPU values. Use these specifications for SuperCluster T5-8:
■ 1 socket = 16 cores
■ 1 core = 8 VCPUs
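For example, under these T5-8 conversions, a domain that ldm list reports with 128 VCPUs has 128 / 8 = 16 cores, which is 16 / 16 = one socket's worth of CPU. (The 128-VCPU figure is illustrative only, not taken from the output above.)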
■ (If needed) Brings up nonprimary domains with new resources.

The examples in this procedure show a SuperCluster T5-8 Full-Rack compute node that has six domains. The concepts in this procedure also apply to other SuperCluster models. In this example, one socket and 256 GB of memory are removed from the primary domain and allocated to ssccn2-dom2.
Activate any inactive domains using the ldm bind command. The tool does not continue if any inactive domains are present.

Run osc-setcoremem to reconfigure the resources. Respond when prompted. Press Enter to select the default value.

# /opt/oracle.supercluster/bin/osc-setcoremem

osc-setcoremem v1.0 built on Oct 29 2014 10:21:05
The following domains will be stopped and restarted:
ssccn2-dom2

This configuration requires rebooting the control domain.
Do you want to proceed? Y/N : y

IMPORTANT NOTE:
+----------------------------------------------------------------------------+
| eg., dmesg | grep osc-setcoremem ; ldm list | grep -v active ; date |
+----------------------------------------------------------------------------+

All activity is being recorded in log file:
/opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_10-29-2014_16:15:44.log

Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
It may take a while. Please be patient and do not interrupt.
■ (If needed) Brings up nonprimary domains with new resources.

The examples in this procedure show a SuperCluster T5-8 compute node. The concepts in this procedure also apply to other SuperCluster models. This table shows the allocation plan for this example (see "Plan CPU and Memory Allocations" on page 175).
Activate any inactive domains using the ldm bind command. The tool does not continue if any inactive domains are present.

Run osc-setcoremem to reconfigure the resources. Respond when prompted. Press Enter to select the default value.

# /opt/oracle.supercluster/bin/osc-setcoremem

osc-setcoremem v1.0 built on Oct 29 2014 10:21:05
Step 2 of 2: Memory selection

primary : desired memory capacity in GB (must be 16 GB aligned)
[min: 64, max: 1200. default: 256] : <Enter>
you chose [256 GB] memory for primary domain
Do you want to proceed? Y/N : y

All activity is being recorded in log file:
/opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_10-29-2014_16:58:54.log

Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
It may take a while. Please be patient and do not interrupt.
Note - To find out if you can perform this procedure, see "Supported Domain Configurations" on page 174.

The examples in this procedure show a SuperCluster T5-8 Full Rack. The concepts in this procedure also apply to other SuperCluster models.
The tool does not continue if any inactive domains are present.

Run osc-setcoremem to change resource allocations. In this example, some resources are left unallocated, which parks them. Respond when prompted. Press Enter to select the default value.

# /opt/oracle.supercluster/bin/osc-setcoremem

osc-setcoremem v1.0 built on Oct 29 2014 10:21:05
Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
It may take a while. Please be patient and do not interrupt.

Executing ldm commands ..

100% |-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
*=====*=====*=====*=====*=====*=====*=====*=====*=====*=====*
Task complete with no errors.

This concludes socket/core, memory reconfiguration. You can continue using the system.

If the tool indicated that a reboot was needed, after the system reboots, log in as root on the compute node's control domain.

Verify the new resource allocation.
Use a text reader of your choice to view the contents of a log file.

# more log_file_name

Example:

# cat /opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_10-29-2014_16\:58\:54.log
# ./osc-setcoremem

osc-setcoremem v1.0 built on Oct 29 2014 10:21:05

Current Configuration: Full-Rack T5-8 SuperCluster

+-------------------------+-------+--------+-----------+--- MINIMUM ----+
+-------------------------+-------+--------+-----------+-------+--------+

The following domains will be stopped and restarted:
ssccn2-dom4
ssccn2-dom1
ssccn2-dom3

This configuration does not require rebooting the control domain.
Do you want to proceed? Y/N : user input: 'y'

Please wait while osc-setcoremem is setting up the new CPU, memory configuration.
The file called V_B4_4_1_20140804141204 is the initial resource configuration that was created when the system was installed.

# ldm list-config
factory-default
V_B4_4_1_20140804141204
after_install_backup [next poweron]

■ Output indicating three additional CPU/memory configurations:

# ldm list-config
factory-default
V_B4_4_1_20140804141204
after_install_backup
CM_2S1T_1S512G_3S1536G_082020141354
CM_2S1T_1S512G_3S1536G_082120140256
CM_1S512G_1S512G_4S2T_082120141521 [next poweron]
View the corresponding log file.
# more /opt/oracle.supercluster/osc-setcoremem/log/osc-setcoremem_activity_08-21-2014_15:21*.log
Related Information
■ “Access osc-setcoremem Log Files” on page 196
■ “Revert to a Previous CPU/Memory Configuration” on page 201
■ “Remove a CPU/Memory Configuration” on page 202
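The revert procedure referenced above amounts to selecting a previously saved SP configuration so that it is used at the next power cycle. A minimal sketch, assuming the standard Oracle VM Server for SPARC command, with an illustrative configuration name taken from the listing above:

# ldm set-config CM_2S1T_1S512G_3S1536G_082020141354

The selected configuration takes effect the next time the system is power cycled.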
The compute node's service processor has a limited amount of memory. If you are unable to create a new configuration because the service processor ran out of memory, delete unused configurations using this procedure.
List all current configurations.
# ldm list-config
factory-default
V_B4_4_1_20140804141204
after_install_backup
CM_2S1T_1S512G_3S1536G_082020141354
CM_2S1T_1S512G_3S1536G_082120140256
CM_1S512G_1S512G_4S2T_082120140321 [next poweron]
Determine which configurations are safe to remove. It is safe to remove any configuration that contains the string CM_ or _ML, as long as it is not marked [current] or [next poweron].
Remove a configuration.
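The removal itself is done per configuration name. A minimal sketch, assuming the standard Oracle VM Server for SPARC command, with an illustrative configuration name from the listing above:

# ldm remove-config CM_2S1T_1S512G_3S1536G_082020141354

Repeat the command for each configuration you identified as safe to remove.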
■ “System Requirements” on page 227
Monitoring the System Using Auto Service Request
These topics describe how to monitor the system using Oracle Auto Service Request (ASR).
Note - Oracle personnel might have configured ASR during the installation of SuperCluster.
“ASR Overview”...
SNMP protocol or loss of connectivity to the ASR Manager. You must continue to monitor the systems for faults and call Oracle Support Services if you do not receive notice that a service request has been automatically filed.
2. Designate a standalone system to serve as the ASR Manager and install the ASR Manager software. The server must run either Oracle Solaris or Linux, and Java. You must have superuser access to the ASR Manager system. To download ASR Manager software, go to: http://www.oracle.com/technetwork/...
Note - An active ASR Manager must be installed and running before you configure ASR assets.
Note - To monitor Oracle Solaris 10 assets, you must install the latest STB bundle on SuperCluster. Refer to Doc ID 1153444.1 to download the latest STB bundle from MOS: https://support.oracle.com
On the ASR Manager, activate the Oracle ILOMs of the Exadata Storage Servers:
asr activate_asset -i ILOM-IP-address
asr activate_asset -h ILOM-hostname
Note - If the last step fails, verify that port 6481 on the Oracle ILOM is open. If port 6481 is open and the step still fails, contact ASR Support.
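A quick way to check that port 6481 is reachable from the ASR Manager is a simple TCP connection test; the tool choice here is an assumption, and any equivalent port probe works:

# telnet ILOM-IP-address 6481

If the connection is refused or times out, investigate the network path and the Oracle ILOM settings before retrying the activation.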
You should see both the Oracle ILOM and the host referenced in the list, with the same serial number, as shown in the following example output:
IP_ADDRESS    HOST_NAME       SERIAL_NUMBER  ASR      PROTOCOL   SOURCE
----------    ---------       -------------  ---      --------   ------
10.60.40.105  ssc1cel01       1234FMM0CA     Enabled  SNMP       ILOM
10.60.40.106  ssc1cel02       1235FMM0CA     Enabled  SNMP       ILOM
10.60.40.107  ssc1cel03       1236FMM0CA     Enabled  SNMP       ILOM
10.60.40.117  ssc1cel01-ilom  1234FMM0CA     Enabled  SNMP,HTTP  EXADATA-SW
Configure ASR on the ZFS Storage Appliance
https://storage-controller-hostname:215
The login screen appears. Type root in the Username field, type the root password, and press Enter.
Click the Configuration tab, click SERVICES, and then, in the left navigation pane, click Services to display the list of services.
■ Click the pencil icon in the registration section. A Privacy Statement is displayed. Click OK, complete the My Oracle Support account name and password fields, and click OK. When the account is verified, select the Sun Inventory and Enable Phone Home options.
Manually type commands that span multiple lines to ensure that the commands are typed properly.
To configure the Oracle ILOM for the SPARC T5-8 servers, complete the following steps on each SPARC T5-8 server:
Log in to the SPARC T5-8 server Oracle ILOM.
Log in to the ASR Manager server.
Activate Oracle ILOM for the SPARC T5-8 server:
asr> activate_asset -i ILOM-IP-address
Repeat these instructions on Oracle ILOM for both SPARC T5-8 servers in your Oracle SuperCluster T5-8.
Manually type commands that span multiple lines to ensure that the commands are typed properly.
Oracle Solaris 11 includes the ability to send ASR fault events and telemetry to Oracle using XML over HTTP to the ASR Manager. To enable this capability, use the asr enable_http_receiver command on the ASR Manager.
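A minimal sketch of enabling the HTTP receiver; the -p option and the port number 7777 are assumptions for illustration, so confirm the options your ASR Manager release accepts:

asr> enable_http_receiver -p 7777

Choose a port that is not already in use on the ASR Manager system.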
Passwords above can be plain text or obfuscated as follows:
java -classpath lib/jetty-6.1.7.jar:lib/jetty-util-6.1.7.jar org.mortbay.jetty.security.Password plaintext-password
Then copy and paste the output line starting with OBF: (including the OBF: part) into this jetty.xml config file.
Register SPARC T5-8 Servers With Oracle Solaris 11 or Database Domains to ASR Manager
Follow this procedure to register the SPARC T5-8 server with Oracle Solaris 11 or Database Domains to the ASR Manager.
Log in to the SPARC T5-8 server as root.
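The service check whose output appears below is typically performed with the svcs command. A minimal sketch, assuming the standard Oracle Solaris 11 SMF service name:

# svcs asr-notify

If you see output similar to the following,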
online 16:06:05 svc:/system/fm/asr-notify:default
then the asr-notify service is installed and is working properly.
To register the ASR Manager, run:
asradm register -e http://asr-manager-host:port-number/asr
For example:
asradm register -e http://asrmanager1.mycompany.com:8777/asr
You should see screens asking for your Oracle Support account name and password.
On the standalone system where ASR Manager is running, run the following command to verify the status of your system assets:
list_asset
This command should list ASR assets in your Oracle SuperCluster T5-8, including SPARC T5-8 servers, Exadata Storage Servers, and ZFS storage controllers.
Log in to My Oracle Support (https://support.oracle.com).
Note - For each component in the system, you should see two host names associated with each serial number. If you see only the Oracle ILOM host name, that means that you did not activate ASR for that component. If you see more than two host names associated with each serial number, you might need to request help for ASR.
This is a useful feature if your organization has a team that should be informed about Service Requests created by ASR.
Click the “Approve” button to complete the ASR activation.
Note - A system asset must be in an active ASR state in My Oracle Support in order for Service Request autocreate to work.
For clustered databases, only one instance is configured for Oracle Configuration Manager. A configuration script is run on every database on the host. Oracle Configuration Manager collects the data and then sends it to a centralized Oracle repository.
■ If you see the emCCR file in this directory, then Oracle Configuration Manager has already been installed on your SPARC T5-8 server. Do not proceed with these instructions in this case.
■ If you do not see the emCCR file in this directory, then Oracle Configuration Manager has not been installed on your SPARC T5-8 server. Proceed to the next step.
Log in to My Oracle Support.
On the home page, select the Customize page link at the top right of the Dashboard page.
Drag the Targets button to the left and on to your dashboard.
Search for your system in the targets search window at the top right of the screen.
Double-click your system to display its information.
Monitoring the System Using EM Exadata Plug-in
Starting with Oracle SuperCluster 1.1, you can monitor all Exadata-related software and hardware components in the cluster using the Oracle Enterprise Manager Exadata 12.1.0.3 plug-in, only in the supported configuration described in the note below.
Note - With the Oracle SuperCluster software version 2.x, the compmon command name changed to osc-compmon. Use the new name if SuperCluster is installed with the Oracle SuperCluster v2.x release bundle. See “Identify the Version of SuperCluster Software” on page 145.
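To check which command name is present on your system, you can list the SuperCluster tools directory. A minimal sketch, assuming the tools live in the same /opt/oracle.supercluster/bin directory used by osc-setcoremem elsewhere in this guide:

# ls /opt/oracle.supercluster/bin | grep compmon

The name reported (compmon or osc-compmon) indicates which release bundle convention applies.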
Configuring Exalogic Software
This section provides information about using Exalogic software on Oracle SuperCluster T5-8.
■ “Exalogic Software Overview” on page 229
■ “Exalogic Software Prerequisites” on page 230
■ “Enable Domain-Level Enhancements” on page 230
■ “Enable Cluster-Level Session Replication Enhancements” on page 231
The following are the prerequisites for configuring the Exalogic software products for the system:
■ Preconfiguring the environment, including database, storage, and network, as described in Chapter 3, “Network, Storage, and Database Preconfiguration” of the Oracle Exalogic Enterprise Deployment Guide, located at: http://docs.oracle.com/cd/E18476_01/doc.220/e18479/toc.htm
■ Your Oracle Exalogic Domain is configured, as described in Chapter 5, “Configuration...
Note - If you are using Coherence*Web, these session replication enhancements do not apply. Skip these steps if you use the dizzyworld.ear application as described in Chapter 8, “Deploying a Sample Web Application to an Oracle WebLogic Cluster” in the Oracle® Fusion Middleware Exalogic Enterprise Deployment Guide at http://docs.oracle.com/cd/E18476_01/doc.220/e18479/deploy.htm.
Create a custom network channel for each Managed Server in the cluster (for example, WLS1) as follows:
Log in to the Oracle WebLogic Server Administration Console.
If you have not already done so, click Lock & Edit in the Change Center.
Click Finish.
Under the Network Channels table, select ReplicationChannel, the network channel you created for the WLS1 Managed Server.
Expand Advanced, and select Enable SDP Protocol. Click Save.
To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
You must create a Grid Link Data Source for JDBC connectivity between Oracle WebLogic Server and a service targeted to an Oracle RAC cluster. It uses the Oracle Notification Service (ONS) to adaptively respond to state changes in an Oracle RAC instance. These topics describe: “Grid Link Data Source”...
■ Adapt to changes in topology, such as adding a new node.
■ Distribute runtime work requests to all active Oracle RAC instances.
See Fast Connection Failover in the Oracle Database JDBC Developer's Guide and Reference at: http://docs.oracle.com/cd/B19306_01/java.102/b14355/fstconfo.htm.
Runtime Connection Load Balancing
Runtime Connection Load Balancing allows WebLogic Server to:
■ Adjust the distribution of work based on back-end node capacities such as CPU, availability,
TNS listener and the ONS listener in the WebLogic console. A Grid Link data source containing SCAN addresses does not need to change if you add or remove Oracle RAC nodes. Contact your network administrator for appropriately configured SCAN URLs for your environment.
■ Service Name - Enter the service name, for example: myService.
Note - The Oracle RAC Service name is defined on the database, and it is not a fixed name.
■ Host Name - Enter the DNS name or IP address of the server that hosts the database. For an ...
■ “Enable SDP Support for JDBC” on page 239
■ “Monitor SDP Sockets Using netstat on Oracle Solaris” on page 240
Configure the Database to Support InfiniBand
Before enabling SDP support for JDBC, you must configure the database to support InfiniBand, as described in the "...
Ensure that you have created the Grid Link Data Sources for the JDBC connectivity on ComputeNode1 and ComputeNode2, as described in Section 7.6 “Configuring Grid Link Data Source for Dept1_Cluster1” of the Oracle® Fusion Middleware Exalogic Enterprise Deployment Guide at http://docs.oracle.com/cd/...
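Enabling SDP for JDBC generally involves pointing the connection descriptor at the SDP protocol and passing SDP-related options to the JVM. A minimal sketch, assuming an illustrative InfiniBand VIP host name, port, and service name (not values from this guide):

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=sdp)(HOST=ssc01db01-ibvip.example.com)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=myService)))

-Doracle.net.SDP=true -Djava.net.preferIPv4Stack=true

The first line is a data source URL using PROTOCOL=sdp; the second line shows JVM options commonly added to the WebLogic Server start arguments when SDP is used.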
Oracle Solaris 11 containing the Oracle Exalogic Elastic Cloud Software in the system. Run the netstat command on these Application Domains running Oracle Solaris 11 and on the Database Domains to monitor SDP traffic between the Application Domains running Oracle Solaris 11 and the Database Domains.
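A minimal sketch of such a check; the sdp address-family argument is an assumption to verify against the netstat options available on your Oracle Solaris 11 update:

# netstat -f sdp -s

Run the command on both the Application Domain and the Database Domain ends of a connection to confirm that SDP sockets are in use.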
Create an SDP Listener on the InfiniBand Network
Oracle RAC 11g Release 2 supports client connections across multiple networks, and it provides load balancing and failover of client connections within the network to which they are connecting. To add a listener for the Oracle Exalogic Elastic Cloud Software connections coming in on the InfiniBand network, first add a network resource for the InfiniBand network with Virtual IP addresses.
srvctl add vip -n ssc01db01 -A ssc01db01-ibvip/255.255.255.0/bondib0 -k 2
srvctl add vip -n ssc01db02 -A ssc01db02-ibvip/255.255.255.0/bondib0 -k 2
As the "oracle" user (who owns the Grid Infrastructure Home), add a listener that listens on the VIP addresses created in the previous step.
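A listener is then added for the new network. A minimal sketch, assuming standard srvctl syntax, with the listener name and port as illustrative values only:

srvctl add listener -l LISTENER_IB -k 2 -p TCP:1522

The -k 2 argument ties the listener to network number 2, the InfiniBand network resource added with the srvctl add vip commands above.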
Understanding Internal Cabling
These topics show the cable layouts for Oracle SuperCluster T5-8.
■ “Connector Locations” on page 245
■ “Identifying InfiniBand Fabric Connections” on page 253
■ “Ethernet Management Switch Connections” on page 256
■ “ZFS Storage Appliance Connections” on page 258
PDU,” on page 252)
FIGURE 24 Sun Datacenter InfiniBand Switch 36
Figure legend: NET MGT 0 and NET MGT 1 ports; InfiniBand ports 0A-17A (upper ports); InfiniBand ports 0B-17B (lower ports)
FIGURE 25 SPARC T5-8 Server Card Locations (Full Rack)
Figure legend: Dual-port 10 GbE network interface cards, for connection to the 10 GbE client access network; Dual-port InfiniBand host channel adapters, for connection to the InfiniBand network
FIGURE 26 SPARC T5-8 Server Card Locations (Half Rack)
Figure legend: Dual-port 10 GbE network interface cards, for connection to the 10 GbE client access network; Dual-port InfiniBand host channel adapters, for connection to the InfiniBand network
FIGURE 27 NET MGT and NET0-3 Port Locations on the SPARC T5-8 Server
Figure legend: NET MGT port, for connection to Oracle ILOM management network; NET0-NET3 ports, for connection to 1 GbE host management network
FIGURE 28 ZFS Storage Controller
Figure legend (continued): Gigabit Ethernet ports NET 0, 1, 2, 3; USB ports 0, 1; HD15 video connector
FIGURE 29 Exadata Storage Server
Figure legend: Power supply 1; Power supply 2; System status LEDs; Serial management port
Figure legend (continued): Service processor network management port; Gigabit Ethernet ports NET 0, 1, 2, 3; USB ports 0, 1; HD15 video connector
FIGURE 30 Ethernet Management Switch (Cisco Catalyst 4948 Ethernet Switch)
Figure legend: Indicators and reset switch; Ports 1-16, 10/100/1000BASE-T Ethernet; Ports 17-32, 10/100/1000BASE-T Ethernet; Ports 33-48, 10/100/1000BASE-T Ethernet; CON (upper), MGT (lower)
FIGURE 31 Circuit Breakers and AC Sockets on the PDU
Figure legend: PDU A; PDU B
Identifying InfiniBand Fabric Connections
These topics show the following InfiniBand (IB) connections:
Note - For 1 Gb/s Ethernet, see “Ethernet Management Switch Connections” on page 256.
■ “IB Spine Switch” on page 253
■ “IB Leaf Switch No. 1” on page 253
The Ethernet management switch (see Figure 30, “Ethernet Management Switch (Cisco Catalyst 4948 Ethernet Switch),” on page 251) is located in Oracle SuperCluster T5-8 at location U27. The Ethernet management switch connects to the SPARC T5-8 servers, ZFS storage controllers, Exadata Storage Servers, and PDUs through the ports listed in the following tables.
Ethernet Management Switch Connections
Ethernet Switch Port / To Device / Device Location / Device Port / Cable
T5-8 No. 2 NET-1 Black, 10 ft
T5-8 No. 2 NET-0 Black, 10 ft
T5-8 No. 2 NET MGT Red, 10 ft
PDU-B PDU-B NET MGT White, 1 m
T5-8 No.
Three-Phase PDU Cabling
Slot / PDU-A/PSU-0 (T5-8 AC0) / PDU-B/PSU-0 (T5-8 AC2) / PDU-A/PSU-1 (T5-8 AC1) / PDU-B/PSU-1 (T5-8 AC3) / To Device
Group 5-3 / Group 0-3 / — / — / Exadata Storage Server No. 5 (Full Rack)
Group 5-2 / Group 0-4 / — / — / ZFS storage controller No. 2
Group 5-1 / Group 0-5 / — ...
Group 3-4 / — / — / Exadata Storage Server No. 2
Group 0-1 / Group 3-5 / — / — / Exadata Storage Server No. 1
Group 0-0 / Group 3-6 / — / — / Sun Datacenter InfiniBand Switch 36 Spine Switch
Connecting Multiple Oracle SuperCluster T5-8 Systems
These topics provide instructions for connecting one Oracle SuperCluster T5-8 to one or more Oracle SuperCluster T5-8 systems.
■ “Multi-Rack Cabling Overview” on page 261
■ “Two-Rack Cabling” on page 263
■ “Three-Rack Cabling” on page 265
From each leaf switch, distribute eight connections over the spine switches in all racks. In multi-rack environments, the leaf switches inside a rack are no longer directly interconnected, as shown in the following graphic.
■ Eight connections to both internal leaf switches
■ Eight connections to both leaf switches in rack 2
In Oracle SuperCluster T5-8, the spine and leaf switches are installed in the following locations:
■ Spine switch in U1
■ Two leaf switches in U26 and U32
R2-U32-P9A to R2-U1-P5A R2-U32-P9B to R2-U1-P6A R2-U32 to Rack 1 R2-U32-P10A to R1-U1-P7A 5 meters R2-U32-P10B to R1-U1-P8A R2-U32-P11A to R1-U1-P9A R2-U32-P11B to R1-U1-P10A R2-U26 within Rack 2 R2-U26-P8A to R2-U1-P3B 5 meters
5 meters R1-U26-P8B to R1-U1-P4B R1-U26-P9A to R1-U1-P5B R1-U26 to Rack 2 R1-U26-P9B to R2-U1-P6B 5 meters R1-U26-P10A to R2-U1-P7B R1-U26-P10B to R2-U1-P8B R1-U26 to Rack 3 R1-U26-P11A to R3-U1-P9B 5 meters R1-U26-P11B to R3-U1-P10B
5 meters R3-U32-P8B to R3-U1-P4A R3-U32-P9A to R3-U1-P5A R3-U32 to Rack 1 R3-U32-P9B to R1-U1-P6A 5 meters R3-U32-P10A to R1-U1-P7A R3-U32-P10B to R1-U1-P8A R3-U32 to Rack 2 R3-U32-P11A to R2-U1-P9A 5 meters
R1-U26-P8B to R1-U1-P4B R1-U26 to Rack 2 R1-U26-P9A to R2-U1-P5B 5 meters R1-U26-P9B to R2-U1-P6B R1-U26 to Rack 3 R1-U26-P10A to R3-U1-P7B 5 meters R1-U26-P10B to R3-U1-P8B R1-U26 to Rack 4 R1-U26-P11A to R4-U1-P9B 10 meters
TABLE 29 Leaf Switch Connections for the Third Rack in a Four-Rack System
Leaf Switch Connection Cable Length R3-U32 within Rack 3 R3-U32-P8A to R3-U1-P3A 5 meters R3-U32-P8B to R3-U1-P4A R3-U32 to Rack 1 R3-U32-P10A to R1-U1-P7A 5 meters R3-U32-P10B to R1-U1-P8A
R4-U26-P8B to R4-U1-P4B R4-U26 to Rack 1 R4-U26-P9A to R1-U1-P5B 10 meters R4-U26-P9B to R1-U1-P6B R4-U26 to Rack 2 R4-U26-P10A to R2-U1-P7B 5 meters R4-U26-P10B to R2-U1-P8B R4-U26 to Rack 3 R4-U26-P11A to R3-U1-P9B 5 meters
TABLE 32 Leaf Switch Connections for the Second Rack in a Five-Rack System Leaf Switch Connection Cable Length R2 U32 within Rack 2 R2-U32-P8A to R2-U1-P3A 3 meters R2-U32-P8B to R2-U1-P4A Oracle SuperCluster T5-8 Owner's Guide • May 2016...
R3-U26-P8B to R3-U1-P4B R3 U26 to Rack 1 R3-U26-P11A to R1-U1-P9B 5 meters R3 U26 to Rack 2 R3-U26-P11B to R2-U1-P10B 5 meters R3 U26 to Rack 4 R3-U26-P9A to R4-U1-P5B 5 meters R3-U26-P9B to R4-U1-P6B
Leaf Switch Connections for the Fifth Rack in a Five-Rack System
Leaf Switch Connection Cable Length R5 U32 within Rack 5 R5-U32-P8A to R5-U1-P3A 3 meters R5-U32-P8B to R5-U1-P4A R5 U32 to Rack 1 R5-U32-P9A to R1-U1-P5A 10 meters
R1 U32 to Rack 6 R1-U32-P11B to R6-U1-P10A 10 meters R1 U26 within Rack 1 R1-U26-P8A to R1-U1-P3B 3 meters R1-U26-P8B to R1-U1-P4B R1 U26 to Rack 2 R1-U26-P9A to R2-U1-P5B 5 meters R1-U26-P9B to R2-U1-P6B
TABLE 38 Leaf Switch Connections for the Third Rack in a Six-Rack System
Leaf Switch Connection Cable Length R3 U32 within Rack 3 R3-U32-P8A to R3-U1-P3A 3 meters R3-U32-P8B to R3-U1-P4A
10 meters R4 U26 to Rack 2 R4-U26-P11A to R2-U1-P9B 5 meters R4 U26 to Rack 3 R4-U26-P11B to R3-U1-P10B 5 meters R4 U26 to Rack 5 R4-U26-P9A to R5-U1-P5B 5 meters R4-U26-P9B to R5-U1-P6B
R6 U32 within Rack 6 R6-U32-P8A to R6-U1-P3A 3 meters R6-U32-P8B to R6-U1-P4A R6 U32 to Rack 1 R6-U32-P9A to R1-U1-P5A 10 meters R6-U32-P9B to R1-U1-P6A R6 U32 to Rack 2 R6-U32-P10A to R2-U1-P7A 10 meters
R1-U26-P10A to R4-U1-P7B 10 meters R1 U26 to Rack 5 R1-U26-P10B to R5-U1-P8B 10 meters R1 U26 to Rack 6 R1-U26-P11A to R6-U1-P9B 10 meters R1 U26 to Rack 7 R1-U26-P11B to R7-U1-P10B 10 meters
R3 U32 to Rack 6 R3-U32-P10A to R6-U1-P7A 10 meters R3 U32 to Rack 7 R3-U32-P10B to R7-U1-P8A 10 meters R3 U26 within Rack 3 R3-U26-P8A to R3-U1-P3B 3 meters R3-U26-P8B to R3-U1-P4B
The following table shows the cable connections for the fifth spine switch (R5-U1) when cabling seven full racks together.
TABLE 46 Leaf Switch Connections for the Fifth Rack in a Seven-Rack System
Leaf Switch Connection Cable Length R5 U32 within Rack 5 R5-U32-P8A to R5-U1-P3A 3 meters
10 meters R6 U26 to Rack 3 R6-U26-P10B to R3-U1-P8B 5 meters R6 U26 to Rack 4 R6-U26-P11A to R4-U1-P9B 5 meters R6 U26 to Rack 5 R6-U26-P11B to R5-U1-P10B 5 meters
R1-U32-P8A to R1-U1-P3A 3 meters R1 U32 to Rack 2 R1-U32-P8B to R2-U1-P4A 5 meters R1 U32 to Rack 3 R1-U32-P9A to R3-U1-P5A 5 meters R1 U32 to Rack 4 R1-U32-P9B to R4-U1-P6A 10 meters
R2-U26-P10B to R7-U1-P8B 10 meters R2 U26 to Rack 8 R2-U26-P11A to R8-U1-P9B 10 meters
The following table shows the cable connections for the third spine switch (R3-U1) when cabling eight full racks together.
R4-U26-P10B to R1-U1-P8B 10 meters R4 U26 to Rack 2 R4-U26-P11A to R2-U1-P9B 5 meters R4 U26 to Rack 3 R4-U26-P11B to R3-U1-P10B 5 meters R4 U26 to Rack 5 R4-U26-P8B to R5-U1-P4B 5 meters
10 meters R6 U32 to Rack 2 R6-U32-P10A to R2-U1-P7A 10 meters R6 U32 to Rack 3 R6-U32-P10B to R3-U1-P8A 5 meters R6 U32 to Rack 4 R6-U32-P11A to R4-U1-P9A 5 meters
R7-U26-P11B to R6-U1-P10B 5 meters R7 U26 to Rack 8 R7-U26-P8B to R8-U1-P4B 5 meters
The following table shows the cable connections for the eighth spine switch (R8-U1) when cabling eight full racks together.
10 meters R8 U26 to Rack 5 R8-U26-P10B to R5-U1-P8B 5 meters R8 U26 to Rack 6 R8-U26-P11A to R6-U1-P9B 5 meters R8 U26 to Rack 7 R8-U26-P11B to R7-U1-P10B 5 meters
Oracle Exadata Storage Expansion Rack provides additional storage for Oracle SuperCluster T5-8. The additional storage can be used for backups, historical data, and unstructured data. Oracle Exadata Storage Expansion Racks can be used to add space to Oracle SuperCluster T5-8 as follows:
■ Add new Exadata Storage Servers and grid disks to a new Oracle Automatic Storage Management (ASM)...
Preparing for Installation These topics provide information to prepare your site for the installation of the Oracle Exadata Storage Expansion Rack. Planning for the expansion rack is similar to planning for Oracle SuperCluster T5-8. This section contains information specific to the expansion rack, and also refers to “Preparing the Site”...
Reviewing System Specifications
■ “Physical Specifications” on page 289
■ “Installation and Service Area” on page 96
■ “Rack and Floor Cutout Dimensions” on page 97
Physical Specifications
TABLE 58 Exadata Expansion Rack Specifications
Parameter / Metric / English
Height / 1998 mm / 78.66 in.
PDU Choices
TABLE 60 PDU Choices
Voltage / Phases / Reference
Low / 1 / Table 61, “Low Voltage 1 Phase PDUs for Oracle Exadata Storage Expansion Rack,” on page 291
Low / 3 / Table 62, “Low Voltage 3 Phase PDUs for Oracle Exadata Storage Expansion Rack,” on page 291
High / 1 / Table 63, “High Voltage 1 Phase PDUs for Oracle Exadata Storage Expansion Rack,”
High / 3 / Table 64, “High Voltage 3 Phase PDUs for Oracle Exadata Storage Expansion Rack,” on page 292
TABLE 61 Low Voltage 1 Phase PDUs for Oracle Exadata Storage Expansion Rack
High Voltage One Phase
Usable PDU power cord length: 2 m (6.6 feet). PDU power cords are 4 m long (13 feet), but sections are used for internal routing in the rack.
TABLE 63 High Voltage 1 Phase PDUs for Oracle Exadata Storage Expansion Rack
High Voltage Single Phase
High Voltage Single Phase / Comments
Usable PDU power cord length / 2 m (6.6 feet). PDU power cords are 4 m long (13 feet), but sections are used for internal routing in the rack.
Related Information
■ “Facility Power Requirements” on page 100
■ There is no airflow requirement for the left and right sides, or the top of the rack.
■ If the rack is not completely filled with components, cover the empty sections with filler panels.
FIGURE 32 Direction of Airflow Is Front to Back
TABLE 67 Airflow (listed quantities are approximate)
Rack / Maximum / Typical
Full rack / 2,000 / 1,390
Half rack / 1,090
Quarter rack
Preparing the Unloading Route and Unpacking Area
■ “Shipping Package Dimensions” on page 297
■ “Loading Dock and Receiving Area Requirements” on page 111
■ “Access Route Guidelines” on page 112
■ “Unpacking Area” on page 113
■ “Network Connection and IP Address Requirements” on page 299
Network Requirements Overview
The Oracle Exadata Storage Expansion Rack includes Exadata Storage Servers, as well as equipment to connect the servers to your network. The network connections enable the servers to be administered remotely.
Exadata Storage Servers. It connects the servers, Oracle ILOM, and switches connected to the Ethernet switch in the rack. There is one uplink from the Ethernet switch in the rack to your existing management network.
■ “Understanding Expansion Rack Internal Cabling” on page 300
Installing the Oracle Exadata Storage Expansion Rack
The procedures for installing the Oracle Exadata Storage Expansion Rack are the same as those for installing Oracle SuperCluster T5-8. See “Installing the System” on page 117 for those procedures, then return here.
network and using the optional Fibre Channel PCIe cards do not apply for the Oracle Exadata Storage Expansion Rack.
Expansion Rack Default IP Addresses
Component / NET0 IP Addresses / Oracle ILOM IP / InfiniBand Bonded IP Addresses
Understanding Expansion Rack Internal Cabling
■ “Front and Rear Expansion Rack Layout” on page 301
■ “Oracle ILOM Cabling” on page 304
■ “Administrative Gigabit Ethernet Port Cabling Tables” on page 306
■ “Single-Phase PDU Cabling” on page 308
■ “Three-Phase Power Distribution Unit Cabling” on page 309
FIGURE 35
Oracle ILOM Cabling
This topic contains tables that list the Oracle ILOM network cabling. The Oracle ILOM port on the servers is labeled NET MGT, and connects to the Gigabit Ethernet port located in rack unit 21 on the expansion racks.
■ Table 71, “Oracle ILOM Cabling for the Expansion Rack (Full Rack),” on page 305
■ Table 72, “Oracle ILOM Cabling for the Expansion Rack (Half Rack),” on page 305
■ Table 73, “Oracle ILOM Cabling for the Expansion Rack (Quarter Rack),” on page 306
TABLE 73 Oracle ILOM Cabling for the Expansion Rack (Quarter Rack)
From Rack Unit / Type of Equipment / Gigabit Ethernet Port
Exadata Storage Server
Exadata Storage Server
Exadata Storage Server
Exadata Storage Server
Administrative Gigabit Ethernet Port Cabling Tables
This topic contains tables that list the administrative Gigabit Ethernet network cabling.
From Rack Unit / Type of Equipment / Gigabit Ethernet Port
Exadata Storage Server
Exadata Storage Server
Exadata Storage Server
Exadata Storage Server
Sun Datacenter InfiniBand Switch 36 switch
PDU-A
PDU-B
TABLE 75 Gigabit Ethernet Cabling for the Expansion Rack (Half Rack)
From Rack Unit / Type of Equipment / Gigabit Ethernet Port
Exadata Storage Server / PCIe 3, P1 / 3 meter QDR InfiniBand cable
Exadata Storage Server / PCIe 3, P1 / 3 meter QDR InfiniBand cable
Exadata Storage Server / PCIe 3, P1 / 3 meter QDR InfiniBand cable
From InfiniBand Switch Rack Unit / Port / To Rack Unit / Type of Equipment / Port / Cable Description
Exadata Storage Server / PCIe 3, P1 / 3 meter QDR InfiniBand cable
Sun Datacenter InfiniBand Switch 36 switch / 2 meter QDR InfiniBand cable
Sun Datacenter InfiniBand Switch 36 switch / 2 meter QDR InfiniBand cable
Sun Datacenter InfiniBand Switch 36 switch / 2 meter QDR InfiniBand cable
Sun Datacenter InfiniBand Switch 36 switch / 2 meter QDR InfiniBand cable
Sun Datacenter InfiniBand Switch 36 switch / 2 meter QDR InfiniBand cable
InfiniBand Switch Information for Oracle SuperCluster T5-8
In Oracle SuperCluster T5-8, the switch at rack unit 1 (U1) is referred to as the spine switch. The switches at rack unit 26 (U26) and rack unit 32 (U32) are referred to as leaf switches. In a single rack, the two leaf switches are interconnected using seven connections.
InfiniBand Switch Information for the Oracle Exadata Storage Expansion Rack
In the Oracle Exadata Storage Expansion Rack, the switch at rack unit 1 (U1) is referred to as the spine switch, except for the Oracle Exadata Storage Expansion Quarter Rack, which has no spine switch.
Oracle Exadata Storage Expansion Quarter Rack to the leaf switches in the Oracle SuperCluster T5-8.
■ In the Full Rack version of Oracle SuperCluster T5-8, only two ports are open on both leaf
The following graphic shows the cable connections from an Oracle Exadata Storage Expansion Quarter Rack to two or more racks. The following racks can connect to a standard Oracle Exadata Storage Expansion Quarter Rack (with no spine switch):
■ Half Rack version of Oracle SuperCluster T5-8
In addition, because the InfiniBand switches are physically located in different rack units in the two racks, the following terms are used when referring to the InfiniBand switches:
■ InfiniBand 1 (IB1) refers to the spine switch, located in U1 in Oracle SuperCluster T5-8
Connecting an Oracle Exadata Storage Expansion Half Rack or Oracle Exadata Storage Expansion Full Rack to Oracle SuperCluster T5-8
These topics describe how to connect an Oracle Exadata Storage Expansion Half Rack or Oracle Exadata Storage Expansion Full Rack to your Oracle SuperCluster T5-8.
The following terms are used when referring to the two racks:
■ Rack 1 (R1) refers to Oracle SuperCluster T5-8
■ Rack 2 (R2) refers to the Oracle Exadata Storage Expansion Rack
In addition, because the InfiniBand switches are physically located in different rack units in the...
■ U24 in the Oracle Exadata Storage Expansion Rack
The following sections provide information for connecting Oracle Exadata Storage Expansion Half Rack or Oracle Exadata Storage Expansion Full Rack to your Oracle SuperCluster T5-8:
■ “Two-Rack Cabling” on page 322
Leaf Switch Connection Cable Length R1-IB3-P10B to R2-IB1-P8A R1-IB3-P11A to R2-IB1-P9A R1-IB3-P11B to R2-IB1-P10A R1-IB2 within Rack 1 R1-IB2-P8A to R1-IB1-P3B 5 meters R1-IB2-P8B to R1-IB1-P4B R1-IB2-P9A to R1-IB1-P5B R1-IB2-P9B to R1-IB1-P6B
Leaf Switch Connection Cable Length R2-IB2-P11B to R1-IB1-P10B
Three-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when cabling three full racks together.
TABLE 89 Leaf Switch Connections for the First Rack in a Three-Rack System
Leaf Switch Connection Cable Length R2-IB3-P8B to R2-IB1-P4A R2-IB3-P9A to R2-IB1-P5A R2-IB3 to Rack 1 R2-IB3-P11A to R1-IB1-P9A 5 meters R2-IB3-P11B to R1-IB1-P10A R2-IB3 to Rack 3 R2-IB3-P9B to R3-IB1-P6A 5 meters
Leaf Switch Connection Cable Length R3-IB2 to Rack 1 R3-IB2-P9B to R1-IB1-P6B 5 meters R3-IB2-P10A to R1-IB1-P7B R3-IB2-P10B to R1-IB1-P8B R3-IB2 to Rack 2 R3-IB2-P11A to R2-IB1-P9B 5 meters R3-IB2-P11B to R2-IB1-P10B
TABLE 93 Leaf Switch Connections for the Second Rack in a Four-Rack System
Leaf Switch Connection Cable Length R2-IB3 within Rack 2 R2-IB3-P8A to R2-IB1-P3A 5 meters R2-IB3-P8B to R2-IB1-P4A R2-IB3 to Rack 1
Leaf Switch Connection Cable Length R3-IB2 to Rack 1 R3-IB2-P10A to R1-IB1-P7B 5 meters R3-IB2-P10B to R1-IB1-P8B R3-IB2 to Rack 2 R3-IB2-P11A to R2-IB1-P9B 5 meters R3-IB2-P11B to R2-IB1-P10B R3-IB2 to Rack 4
TABLE 96 Leaf Switch Connections for the First Rack in a Five-Rack System
Leaf Switch Connection Cable Length R1 IB3 within Rack 1 R1-IB3-P8A to R1-IB1-P3A 3 meters R1-IB3-P8B to R1-IB1-P4A R1 IB3 to Rack 2
Leaf Switch Connection Cable Length R2 IB2 to Rack 3 R2-IB2-P9A to R3-IB1-P5B 5 meters R2-IB2-P9B to R3-IB1-P6B R2 IB2 to Rack 4 R2-IB2-P10A to R4-IB1-P7B 5 meters R2-IB2-P10B to R4-IB1-P8B R2 IB2 to Rack 5
Leaf Switch Connection Cable Length R4-IB3-P8B to R4-IB1-P4A R4 IB3 to Rack 1 R4-IB3-P10A to R1-IB1-P7A 10 meters R4-IB3-P10B to R1-IB1-P8A R4 IB3 to Rack 2 R4-IB3-P11A to R2-IB1-P9A 5 meters R4 IB3 to Rack 3
Leaf Switch Connection Cable Length R5-IB2-P10B to R2-IB1-P8B R5 IB2 to Rack 3 R5-IB2-P11A to R3-IB1-P9B 5 meters R5 IB2 to Rack 4 R5-IB2-P11B to R4-IB1-P10B 5 meters
Six-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when cabling six full racks together.
Leaf Switch Connection Cable Length R2-IB3-P8B to R2-IB1-P4A R2 IB3 to Rack 1 R2-IB3-P11B to R1-IB1-P10A 5 meters R2 IB3 to Rack 3 R2-IB3-P9A to R3-IB1-P5A 5 meters R2-IB3-P9B to R3-IB1-P6A R2 IB3 to Rack 4
Leaf Switch Connection Cable Length R3 IB2 to Rack 5 R3-IB2-P10A to R5-IB1-P7B 5 meters R3 IB2 to Rack 6 R3-IB2-P10B to R6-IB1-P8B 5 meters
The following table shows the cable connections for the fourth spine switch (R4-IB1) when cabling six full racks together.
Leaf Switch Connection Cable Length R5 IB3 to Rack 3 R5-IB3-P11A to R3-IB1-P9A 5 meters R5 IB3 to Rack 4 R5-IB3-P11B to R4-IB1-P10A 5 meters R5 IB3 to Rack 6 R5-IB3-P9A to R6-IB1-P5A
Seven-Rack Cabling
The following table shows the cable connections for the first spine switch (R1-IB1) when cabling seven full racks together.
Leaf Switch Connections for the First Rack in a Seven-Rack System
Leaf Switch Connection Cable Length R2 IB3 to Rack 7 R2-IB3-P11A to R7-IB1-P9A 10 meters R2 IB2 within Rack 2 R2-IB2-P8A to R2-IB1-P3B 3 meters R2-IB2-P8B to R2-IB1-P4B R2 IB2 to Rack 1
TABLE 110 Leaf Switch Connections for the Fourth Rack in a Seven-Rack System
Leaf Switch Connection Cable Length R4 IB3 within Rack 4 R4-IB3-P8A to R4-IB1-P3A 3 meters R4-IB3-P8B to R4-IB1-P4A R4 IB3 to Rack 1
Leaf Switch Connection Cable Length R5 IB2 to Rack 4 R5-IB2-P11B to R4-IB1-P10B 5 meters R5 IB2 to Rack 6 R5-IB2-P9A to R6-IB1-P5B 5 meters R5 IB2 to Rack 7 R5-IB2-P9B to R7-IB1-P6B
Leaf Switch Connection Cable Length R7 IB3 to Rack 3 R7-IB3-P10A to R3-IB1-P7A 10 meters R7 IB3 to Rack 4 R7-IB3-P10B to R4-IB1-P8A 10 meters R7 IB3 to Rack 5 R7-IB3-P11A to R5-IB1-P9A
The following table shows the cable connections for the second spine switch (R2-IB1) when cabling eight full racks together.
TABLE 115 Leaf Switch Connections for the Second Rack in an Eight-Rack System
Leaf Switch Connection Cable Length R3 IB2 to Rack 4 R3-IB2-P8B to R4-IB1-P4B 5 meters R3 IB2 to Rack 5 R3-IB2-P9A to R5-IB1-P5B 5 meters R3 IB2 to Rack 6 R3-IB2-P9B to R6-IB1-P6B
Leaf Switch Connection Cable Length R5 IB3 to Rack 4 R5-IB3-P11B to R4-IB1-P10A 5 meters R5 IB3 to Rack 6 R5-IB3-P8B to R6-IB1-P4A 5 meters R5 IB3 to Rack 7 R5-IB3-P9A to R7-IB1-P5A
TABLE 120 Leaf Switch Connections for the Seventh Rack in an Eight-Rack System
Leaf Switch Connection Cable Length R7 IB3 within Rack 7 R7-IB3-P8A to R7-IB1-P3A 3 meters R7 IB3 to Rack 1
Leaf Switch Connection Cable Length R8 IB2 to Rack 6 R8-IB2-P11A to R6-IB1-P9B 5 meters R8 IB2 to Rack 7 R8-IB2-P11B to R7-IB1-P10B 5 meters
Nine-Rack Cabling
The following table shows the cable connections for the switches when cabling nine racks together.
Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to 100 meters is supported.
Leaf Switch Connections for the Tenth Rack in the System
Leaf Switch Connection R11 IB3 to Rack 5 R11-IB3-P10B to R5-IB1-P13A R11 IB3 to Rack 6 R11-IB3-P13A to R6-IB1-P13A R11 IB3 to Rack 7 R11-IB3-P13B to R7-IB1-P13A R11 IB3 to Rack 8
Leaf Switch Connection R12 IB2 to Rack 6 R12-IB2-P14A to R6-IB1-P14B R12 IB2 to Rack 7 R12-IB2-P14B to R7-IB1-P14B R12 IB2 to Rack 8 R12-IB2-P8A to R8-IB1-P14B
Thirteen-Rack Cabling
The following table shows the cable connections for the switches when cabling thirteen racks together.
Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to 100 meters is supported.
Leaf Switch Connections for the Fourteenth Rack in the System
Leaf Switch Connection R15 IB3 to Rack 5 R15-IB3-P10B to R5-IB1-P17A R15 IB3 to Rack 6 R15-IB3-P17A to R6-IB1-P17A R15 IB3 to Rack 7 R15-IB3-P17B to R7-IB1-P17A R15 IB3 to Rack 8
Leaf Switch Connection R16 IB2 to Rack 6 R16-IB2-P0A to R6-IB1-P0B R16 IB2 to Rack 7 R16-IB2-P0B to R7-IB1-P0B R16 IB2 to Rack 8 R16-IB2-P8A to R8-IB1-P0B
Seventeen-Rack Cabling
The following table shows the cable connections for the switches when cabling seventeen racks together.
Note - Cable lengths for racks 9 through 18 vary depending on the layout of the racks. Up to 100 meters is supported.
TABLE 131 Leaf Switch Connections for the Eighteenth Rack in the System