Bull ESCALA EPC Series EPC Connecting Guide ORDER REFERENCE 86 A1 65JX 03...
Bull ESCALA EPC Series EPC Connecting Guide Hardware October 1999 BULL ELECTRONICS ANGERS CEDOC 34 Rue du Nid de Pie – BP 428 49004 ANGERS CEDEX 01 FRANCE ORDER REFERENCE 86 A1 65JX 03...
The product documented in this manual is Year 2000 Ready. The information in this document is subject to change without notice. Groupe Bull will not be liable for errors contained herein, or for incidental or consequential damages in connection with the use of this material.
About This Book Typical Powercluster configurations are illustrated, together with the associated sub-systems. Cabling details for each configuration are tabulated, showing cross-references to the Marketing Identifiers (MI) and the Catalogue. Reference numbers associated with the configuration figure titles correspond to those in the Catalogue.
Document Overview
This manual is structured as follows:
Chapter 1, Introducing the Escala Powercluster Series: introduces the Powercluster family of Escala racks.
Chapter 2, EPC400: describes the Escala RT Series rack with an Escala RT drawer.
Chapter 3, EPC800: describes the Escala RM Series rack with a CPU rack drawer.
Chapter 4, EPC1200: describes the Escala RL470 Basic System which consists of two racks (a...
• Escala AMDAS JBOD Storage Subsystem – User’s Guide Reference: 86 A1 79GX. • General Guide to Data Processing Site Preparation Reference: URL http://bbs.bull.net/aise • Escala RT Series Setting Up the System Reference: 86 A1 18PX • Escala RT Series Rack Service Guide...
• Escala S Series System Service Guide Reference: 86 A1 91JX • Escala Mxxx Installation & Service Guide Reference: 86 A1 25PN • Escala Rxxx Installation & Service Guide Reference: 86 A1 29PN • Escala RL470 Installation Procedures for Drawers Reference: 86 A1 29PX •...
Chapter 1. Introducing the Escala Powercluster Series Introducing the Powercluster family of Escala racks. Introducing Powercluster Servers (Cluster Nodes) The Powercluster offer is made up of Escala rack-mountable servers. Three uni-node server models are available: • EPC400, an Escala RT Series rack with an Escala RT drawer, see page 2-1. •...
Chapter 2. EPC400 Series Describing the Escala RT Series rack with an Escala RT drawer. EPC400 Series – Profile These models, contained in a 19” RACK (36U), are RT Nodes, including: Configuration EPC400 EPC430 EPC440 CPXG210–0000 CPXG225–0000 CPXG226–0000 Power Supply Redundant Redundant Hot Swapping...
List of Drawers for EPC400 Series Rack The drawers, with their rack-mount kits, that can be mounted into the EPC400 rack are as follows. Legend The following conventions are used: – not applicable. Yes fitted at manufacture. Customer Fitted at customer’s site by Customer services. No Equipment is not fitted in this rack.
Chapter 3. EPC800 Describing the Escala RM Series rack with a CPU rack drawer. EPC800 – Profile M.I. (CPXG211–0000) This model, contained in a 19” RACK (36U), is one RM Node, including: • 1 dual CPU module PowerPC 604e @ 200 MHz – 2MB L2 cache / CPU •...
List of Drawers for EPC800 Rack The drawers, with their rack-mount kits, that can be mounted into the EPC800 rack are as follows. Legend The following conventions are used: – not applicable. Yes fitted at manufacture. Customer Fitted at customer’s site by Customer services. No Equipment is not fitted in this rack.
Chapter 4. EPC1200/1200A and 2400 Describing the Escala EPC1200/1200A and 2400 Systems which consist of two racks (a computing rack with a CPU drawer and an expansion rack with an I/O drawer). EPC1200 – Profile These models, contained in two 19” racks (36U), are two RL Nodes, including: Computing Unit (CEC–Rack): •...
Standard Adapters/Cables One CBLG105–1800 serial cable is automatically generated for every drawer. Two CBL1912 adapter cables (9 pin, 25 pin) are systematically provided and mounted with any CPU drawer. An 8-port asynchronous board is generated by default for every CPU drawer: M.I.
List of Drawers for EPC1200 Rack The drawers, with their rack-mount kits, that can be mounted into the EPC1200 rack are as follows. Legend The following conventions are used: – not applicable. Yes fitted at manufacture. Customer Fitted at customer’s site by Customer services. No Equipment is not fitted in this rack.
Chapter 5. Subsystems Introduces the different types of subsystems. Subsystems – Summary There are several types of subsystems: • User consoles, on page 5-2. • Serial Networks, on page 5-4. • Interconnect, on page 5-5. • HA Library, on page 5-6. •...
User Consoles There are 4 terminal types, see page 6-1. • System Console, an ASCII terminal (BQ306) • Graphics Display (on all models except the EPC800 model), which replaces the ASCII system console and adds graphical capabilities • Cluster Console, a self–bootable X Terminal •...
The table below summarizes the user console configurations (console type, number of nodes, console concentrator, dedicated administration network) and gives their figure cross-references:
System Console, Graphics Display – PWCCF07 on page 6-5
System Console, Graphics Display – PWCCF08 on page 6-6
System Console – PWCCF01 on page 6-6 & on page 6-9
ClusterConsole – PWCCF02...
Serial Networks There are two types of serial networks. The first one is used by HACMP to monitor the nodes: the nodes periodically exchange keep-alive messages (heartbeat), in particular through this network. The second one is used to wire the nodes to a console concentrator, if any. It enables a single terminal connected to the console concentrator to be the system console of every node.
Interconnect There are two interconnect types: • FDDI interconnect, on page 7-2 • Fast Ethernet interconnect, on page 9-2. For a 2-node configuration of the same node type, there is a Fast Ethernet Full Kit (2 Fast Ethernet adapters plus a crossed Ethernet cable), as well as an FDDI Full Kit (2 FDDI adapters and two FDDI cables).
HA Library For details, see page 10-69. The table below summarizes the HA Library configurations (number of nodes, number of drives, number of SSA adapters per node) and gives their picture cross-references:
LIBCF01 on page 10-73
LIBCF02 on page 10-73
LIBCF03 on page 10-74
EPC Connecting Guide...
DAS SCSI For details, see page 10-23. The table below summarizes the DAS SCSI configurations (configuration type, number of attached nodes, daisy-chained node-DAS cables, etc.) and gives their figure cross-references:
Single SP / Single SCSI – DASCF01 on page 10-27
Single SP / Single SCSI – DASCF02 on page 10-28
Dual SP / Dual ... – DASCF03 on page 10-28...
DAS FC-AL Not on EPC800. For details, see page 10-36. The table below summarizes the DAS FC-AL configurations (configuration type, number of attached nodes, number of adapters per node, FC-AL hub, etc.) and gives their figure cross-references:
Single SP / Single Loop – SLOOP01 on page 10-45
Single SP / Single Loop – SLOOP02 on page 10-46...
System Console and Graphics Display Details in: • List of MIs, on page 6-2. • Hardware Components, on page 6-3. • Examples of Use, on page 6-5. • General Cabling Diagrams, on page 6-5. • Cabling Diagrams, on page 6-8. •...
Hardware Components System Console (France) CSKU101–1000 (AZERTY) Identificator Description Length Quantity DTUK016–01F0 BQ306 Screen and logic – Europe Power cord KBU3033 BQ306 AZERTY French Keyboard CBLG104–2000 Cable, local RS232 (25F/25M) CBLG106–2000 Cable, remote RS232 (25M/25F) MB323 Interposer (25M/25M) – BE System Console (Europe) CSKU101–2000 (QWERTY) Identificator...
System Console (Germany) CSKU101–000G (QWERTY) Identificator Description Length Quantity DTUK016–01F0 BQ306 Screen and logic – Europe Power cord KBU3034 BQ306 QWERTY German Keyboard CBLG104–2000 Cable, local RS232 (25F/25M) CBLG106–2000 Cable, remote RS232 (25M/25F) MB323 Interposer (25M/25M) – BE EPC Connecting Guide...
Examples of Use System Console The System Console (ASCII terminal) is offered in the following cluster configurations: • Uni-node Escala EPC: the system console is attached through serial port S1 of the node. • Two–node Escala EPC: the System Console can be used alone. In this case the System Console is connected to a node’s S1 port, as shown on Figure 10.
Figure 6. PWCCF01: 2–node Escala EPC – (one System Console). Note: The 8 async ports board is not mandatory. In this case, use S2 or S3 port to link the two nodes. Figure 7. PWCCF08: 2–node Escala EPC – (two System Consoles). EPC Connecting Guide...
Figure 8. PWCCF08: 2–node EPC – (1 System Console, 1 Graphic Display). Note: The Ethernet cable is not mandatory. The LAN network can be used to reach Node 2 from the Graphics Display. Console Cabling Requirements
Cabling of the System Console to the Console Concentrator
Figure 11. Cabling of the System Console to the Console Concentrator
The graphics display is connected to the node of the ordered ESCALA EPC model (EPC400/430/440 or EPC1200 / EPC1200A, EPC2400). There is no graphics display for an ESCALA EPC800 model.
–l ttyx pwcons –c ttyx
The pwcons utility comes with the Bull Cluster software package. It is a shell script built on top of the cu command, installed in the /usr/sbin directory. For more details, refer to the EPC & HA Solutions – Setup Guide.
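The following is a minimal illustrative sketch of what a wrapper of this kind over cu could look like. It is not the pwcons script actually delivered with the Bull Cluster package (whose options and behaviour may differ); the script name and argument handling here are assumptions for illustration only.

    #!/bin/ksh
    # Illustrative console wrapper (hypothetical; not the delivered pwcons script).
    # Usage: console_sketch ttyx  -- opens the console attached to the given tty.
    TTY=$1
    if [ -z "$TTY" ]; then
        echo "usage: $0 ttyname" >&2
        exit 1
    fi
    # cu opens an interactive session on the serial line; type ~. to disconnect.
    exec /usr/bin/cu -l "$TTY"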
Examples of Use The Cluster Administration Hub is used to set up a dedicated administration network (10Base-T Ethernet network). The Cluster Administration Hub is used for Escala EPC configurations with a Cluster Console, or a Cluster PowerConsole. The administration network utilizes the LSA adapter of an EPC800 node, an ethernet board on an EPC1200/EPC1200A node, the integrated ethernet card on an EPC400 node or on the Powerconsole.
Figure 13. Cluster Administration Hub Ethernet Connections To ClusterConsole or PowerConsole Figure 14. Cluster Administration Hub Connections to Nodes and Console 6-13 Console Cabling Requirements...
Usage cases DCKU115–2000
The Console Concentrator is used with:
• a Powerconsole, whatever the number of nodes in the Escala EPC configuration. See Figure .
• a Cluster Console, if the number of nodes is more than two. See Figure 17.
If there is a cluster Hub (case of a dedicated administration network), the Console Concentrator is connected to it.
Usage cases DCKU119–2000
The Console Concentrator is used with:
• a Powerconsole, whatever the number of nodes in the Escala EPC configuration. See Figure .
• a Cluster Console, if the number of nodes is more than two. See Figure 17.
If there is a cluster Hub (case of a dedicated administration network), the Console Concentrator is connected to it.
Console Concentrator Configuration
The configuration of the console concentrator is undertaken by Customer Services. This configuration procedure is provided as a reference only.
Initial Conditions
The configuration of the console concentrator (CS/2600) is done through the ASCII console (BQ306). The ASCII console must be connected to the J0 port of the CS/2600 server, with the console baud rate set to 9600.
Example 1 of Label Format (Admin Network)
For each item, note the example value and example label, and fill in your own value and label:
Network Mask – example value: 255.0.0.0 – example label: N/A
Powerconsole IP@ – example value: 1.0.0.20
Console Concentrator IP@ – example value: 1.0.0.10 – example label: CS/2600
IP@ of Node #1 – example value: 1.0.0.1 – example label: Node1_admin
CS/2600 Port #1 (J1) – example value: 1.0.0.11 – example label: Node1_cons
IP@ of Node #2...
Console Concentrator Configuration Procedure Before you start, note that: • The Console Concentrator (CS/2600) is configured through the ASCII console (BQ306). The ASCII console has to be connected to the J0 port of the CS/2600 server, to set the console baud rate to 9600. However, the Console Concentrator (CS/2600) can also be configured through the PowerConsole provided that it is connected to the J0 port instead of the ASCII console.
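If the PowerConsole is used, the serial connection to the J0 port is opened with the cu command. The command below is given as an illustration only: the tty device name depends on which serial port of the PowerConsole the concentrator is actually wired to, and is therefore an assumption.

    # Open the serial line to the CS/2600 J0 port (tty0 is an example device name)
    cu -s 9600 -l /dev/tty0
    # When the configuration is finished, type ~. to close the cu session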
3. (Only if a PowerConsole is used) Establish the connection between the workstation and the CS/2600 on serial port J0, using the cu command on the PowerConsole side (see step 1).
4. Switch the CS/2600 to monitor mode:
– make a hardware reset (button on the left side of the CS/2600), as described in the CS/2600 Installation Guide.
– and key in <Enter> a few times to obtain the following prompt:
Welcome to the 3Com Communication
[1] CS>
15. Add the necessary privileges for network management
– with the command (after the CS>):
[1] CS> set pri = nm <Enter>
–...
[10] cs# sh !2 dp <ENTER>
– etc.
22. Check the CS/2600 network connection:
– For example, ping the PowerConsole station, using the ping command (after cs#):
[10] cs# ping @IP_PWC <ENTER>
– The ping command must respond with the message:
–...
PowerConsole Configuration Procedure
Update the file /etc/hosts with the different addresses configured on the CS/2600 server: @IP0 (CS/2600 server address), @IP1 (J1/Node1 S1 address), @IP2 (J1/Node2 S1 address), etc. For example:
120.184.33.10 CS–2600 # @IP0
120.184.33.11 Node1_cons # @IP1
120.184.33.12 Node2_cons # @IP2
Examples of Use...
Cluster Console Details in: • Hardware Components, on page 6-26. • Examples of Use, on page 6-26. • Cabling Diagrams, on page 6-27. • Cabling Legend, on page 6-27. • Cabling Diagrams for a 2–node Configuration, on page 6-29. • Cabling Diagrams For Configuration With More Than 2 Nodes, on page 6-31. •...
If there is no Cluster Administration Hub, that is to say no dedicated administration network, the Console Concentrator and the Cluster Console are connected to the customer’s LAN network (an Ethernet network) on the customer’s premises. If the customer’s network is thick or thin coaxial (COAXIAL THICK or COAXIAL THIN), the customer is responsible for connecting the Console Concentrator and the Cluster Console to the network with his own cables (as is usual for all the Escala platforms).
Figure 18. Cluster Console with Console Concentrator Figure 19. Cluster Console with Connection to Node’s S1 Plug 6-28 EPC Connecting Guide...
Cabling Diagrams for a 2–node Configuration There are two CBLG105–1800 cables. The first one is generated automatically and systematically in any Escala EPC order. The second one is included in the Cluster Console component. For connecting the nodes to the cluster hub, use the native integrated Ethernet board.
Figure 21. Alternative Cabling of Cluster Console and System Console – Common administration graphical interface Figure 22. Alternative Cabling of Cluster Console and System Console – Both node console and common administration graphical terminal 6-30 EPC Connecting Guide...
Cabling Diagrams For Configuration With More Than 2 Nodes With a dedicated–administration network, use the integrated Ethernet board for connecting the nodes to the cluster administration hub. Figure 23. PWCCF03: Cluster Console with a Cluster Administration Hub 6-31 Console Cabling Requirements...
With no dedicated–administration network, the Console Concentrator and the Cluster Console (X Terminal) must be connected to the customer’s Ethernet–based public network (a single Ethernet LAN @10Mbps). Figure 24. PWCCF05: Cluster Console without Cluster Administration Hub 6-32 EPC Connecting Guide...
Cabling Instructions
Documentation References
Installing Your Explora Family System
17” Professional Color Monitor – User’s Guide
Workstations BQX 4.0. Installation
Warning: Do not plug in the power cords of the X Terminal box and of the monitor (front side) until you are asked to do so:
1.
16. Once the prompt > is displayed, if the boot is not automatic, type >BL and press ENTER.
17. Two or three windows appear after startup completes: a Setup and Configuration window (upper left), a telnet window, and a system console window corresponding to the RS232 serial line (S1 plug of an EPC node), provided that the X terminal is directly wired to a node’s S1 port.
Cluster PowerConsole Cluster PowerConsole is provided by an AIX workstation from the following: – Escala S Series Details in: • Hardware Components (Escala S Series), on page 6-36. • Examples of Use, on page 6-38. • Cabling Legend, on page 6-40. •...
CBLG179–1900 Cable, RJ45 Ethernet for HUB connection VCW3630 Cable, Ethernet ”Thin” (15M, 15F) to Transceiver
Examples of Use
The PowerConsole with the Cluster Assistant GUI is a cluster administration graphics workstation which is available to set up, install, manage, and service the EPC nodes and the EPC cluster.
Figure 25. PowerConsole With a Dedicated Administration Network Figure 26. PowerConsole Without Dedicated Administration Network Within an Escala EPC, a node pertains to an HACMP cluster or it is a standalone node (without HACMP). There can be zero, one or more HACMP clusters, as there can be zero, one or more standalone nodes.
Figure 28. PowerConsole to Console Concentrator and Administration Hub Figure 29. PowerConsole with Remote Access (LAN or Modem) 6-41 Console Cabling Requirements...
Cabling Pattern (without Modems) Cabling to be used if there is a dedicated–administration network. Customer’s LAN (Ethernet 10) Optional Figure 30. PWCCF04: PowerConsole with Administration Hub 6-42 EPC Connecting Guide...
Cabling to be used when there is no dedicated–administration network. The Console Concentrator and the PowerConsole will be connected to the customer’s LAN network (an ethernet network). Customer’s LAN (Ethernet 10) Figure 31. PWCCF06: PowerConsole without Administration Hub 6-43 Console Cabling Requirements...
Example of Cable Usage (for a 2–node Powercluster) Type Function Cabling From – To Description CBLG 106–2000 Link CS2600/J1 to Node 1 CS2600/J1–>CBL1912 RS232 Direct 25M/25F Link CS2600/J2 to Node 2 CS2600/J2–>CBL1912 Link CS2600/J0 to ASCII CS2600/J0–>Interposer console CBL 1912 Link CS2600/J1 to Node 1 CBLG106–>S1 Node 1 RS232 Direct 25M/9F...
Configuration Rules for PowerConsole 2 Extensions Additional internal disk drives and media drives must be placed in this PowerConsole according to the following rules: Five bays are available in Escala S100: – three of them are already used by: floppy, one CD–ROM 20X and one system disk of 4.5GB.
Hardware Components Fast Ethernet Interconnect Full Kit (2 EPC400-N) DCKG009–0000 DCKG009–0000 component is only used to link two EPC400 nodes with a single Ethernet link without switch. Identificator Description Quantity DCCG085–0000 PCI Ethernet 10&100 Mb/s Adapter (2986) CBLG161–1900 10m Ethernet Cross Cable – RJ45 / RJ45 Fast Ethernet Interconnect Base Kit (EPC400-N) DCKG010–0000 Identificator...
Advanced switch usage
When two Fast Ethernet interconnects are ordered between the same group of nodes, two cross-over Ethernet RJ45/RJ45 cables (CBLG161–1900), if provided, can be used to establish a resilient link pair between the two switches, thereby setting up a redundant interconnect.
• Full Duplex must be used only for connections through a SWITCH or for POINT-to-POINT connections. In particular, you can use Full Duplex for a 2-node interconnect using the crossed RJ45/RJ45 cable (MI CBLG161). When using Full Duplex, collision detection is disabled. Switch ports should be configured with Auto–negotiation Enabled, except if you have HACMP problems such as ”network_down”...
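As a hedged illustration, on AIX the duplex and speed of a 10/100 Ethernet adapter are normally controlled through the media_speed attribute of the adapter. The adapter instance name and the exact attribute values below are examples and depend on the adapter model; check them first with lsattr.

    # List the attributes supported by the adapter (ent1 is an example instance)
    lsattr -El ent1
    # Force 100 Mbps Full Duplex; -P records the change for the next reboot
    chdev -l ent1 -a media_speed=100_Full_Duplex -P
    # Or restore auto-negotiation
    chdev -l ent1 -a media_speed=Auto_Negotiation -P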
Figure 36. INTCF10: Ethernet Switch Single Interconnect for 3 to 8 Nodes with Single Adapter Per Node Fast Ethernet Interconnect Requirements...
Figure 37. INTCF10: Ethernet Switch Single Interconnect for 3 to 8 Nodes with Dual Adapters Per Node EPC Connecting Guide...
Figure 38. Redundant Fast Ethernet Interconnect for 3 to 8 Nodes Fast Ethernet Interconnect Requirements...
Cabling Instructions Between 2 Nodes (node #1 and node #2) Connect one end of the cross–over cable (CBLG161–1900) to the RJ45 port on the Ethernet 10/100 adapter on node #1, and the other end to the RJ45 port on the Ethernet 10/100 adapter on node #2. With a Hub First of all, a SuperStack II Hub 10 Management Module (3C16630A) has to be fitted to each Hub 10 12 Port TP unit (3C16670A) to provide SNMP management.
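Once the cross-over cable or the hub cabling is in place, the link can be checked from each node. The adapter instance name below is an example only and must match your configuration.

    # Display adapter statistics, including the negotiated media speed (ent1 is an example)
    entstat -d ent1
    # A healthy link shows a consistent media speed and no steadily increasing error counters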
General Configuration Procedure
The following steps describe the network configuration phase of an interconnect. Note: This procedure is the same whatever the interconnect type (Ethernet switch or FDDI hub). Configure the IP addresses, then check ping and rlogin between the nodes.
Configuring Network Interfaces
Configure the network interfaces (en1, en0 or fi1, fi0) used by the interconnect on each node. For node #1, go to the following smit menu:
In order to configure the other adapters on a node, use the SMIT Further Configuration menus; otherwise the HOSTNAME would be changed. To do so, type:
# smit tcpip
Then, through the sequence of displayed menus, select Further Configuration, Network Interface, Network Interface Selection, Change/Show Characteristics of a Network Interface, and choose the network interface (en1, fi0) to be configured.
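Equivalently, an additional interface can be configured from the command line without changing the HOSTNAME. The interface name, address and netmask below are examples consistent with a dedicated interconnect network and must be adapted to your own addressing plan.

    # Configure the interconnect interface on node #1 (example address and netmask)
    chdev -l en1 -a netaddr=1.0.1.1 -a netmask=255.255.255.0 -a state=up
    # Verify the resulting configuration
    ifconfig en1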
On node #2, ping every node and check reachability with every node:
# ping node1_X
# rsh node1_X uname –a     (which returns: AIX node1_X ...)
# ping node3_X
# rsh node3_X uname –a     (which returns: AIX node3_X ...)
and so on; proceed in the same way on all the other nodes.
Setting Network Parameters for Testing Ethernet TCP/IP Configuration
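The detailed parameter settings of this subsection are not reproduced in this excerpt. As a hedged illustration, AIX network options are typically inspected and adjusted with the no command; the option and value shown below are examples only, not the values prescribed by the full procedure.

    # Display all current network option values
    no -a
    # Example of changing a single option (illustrative value only)
    no -o tcp_sendspace=65536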
Hardware Components Gigabit Ethernet Interconnect Full Kit (2 EPC400-N) DCKG029–0000 DCKG029–0000 component is only used to link two EPC400 nodes with a single Ethernet link without switch. Identificator Description Quantity DCCG144–0000 Gigabit Ethernet SX-PCI Adapter (2969) CBLU170–1800 6m SC-SC Optical Fibre Cable Gigabit Ethernet Interconnect Base Kit DCKG010–0000 Identificator...
Local management can be performed via an RS–232 line (DB–9 port), and out-of-band management via an RJ45 port. For the latter, the Gigabit Ethernet switch 9300 can be connected either to the Cluster Administration Hub, if any (using a CBL179 cable provided with the Cluster Hub), or to the customer’s 10Base–T Ethernet LAN.
Figure 43 depicts an interconnect where each node has a single attachment. For nodes having dual gigabit ethernet adapters for HACMP purpose, there are two SC–SC links between a node and the switch. Figure 43. Gigabit Ethernet Interconnect for >2 Nodes Gigabit Ethernet Interconnect Requirements...
Quick Installation Guide
Audience: The following provides quick procedures for installing the SuperStack 9300. It is intended only for trained technical personnel who have experience installing communications equipment. For more information on each setup task, see the related sections in this guide or the complete details in the indicated documents.
Warning: Hazardous energy exists within the SuperStack II Switch 9300 system. Always be careful to avoid electric shock or equipment damage. Many installation and troubleshooting procedures should be performed only by trained technical personnel. Install optional power supply The system operates using a single power supply assembly and is shipped with one power supply installed.
Figure 46. System and Port LEDs
Name / Type | Color | Description and Indications
Power – System Power | Green | Lit: the system is powered on. No light: the system is powered off.
Fault – System Fault | Yellow | Lit: the system has failed diagnostics or another operational error has occurred. No light: the system is operational.
Pckt...
Administer and Operate the system
See the Administration Guide for information on solving any problems. See also Appendix D: Technical Support in the Getting Started Guide for your system. For information on how to administer and operate the SuperStack II Switch 9300, see the Administration Guide on the Documentation CD and the Software Installation and Release Notes.
Hardware Components FDDI Interconnect Full Kit (2 EPC400-N) DCKG013–0000 DCKG013–0000 component is only used to link two EPC400 nodes with a double FDDI link without hub. Identificator Description Quantity DCCG103–0000 PCI FDDI Fibre Dual Ring Adapter (SysK) CBLU170–1800 FDDI Fibre SC–SC cable (6m) FDDI Interconnect Base Kit (EPC400-N) DCKG014–0000 Identificator...
The ”B” card must be connected to the other hub. Each cable coming from a node is plugged into an M port. For FDDI adapter installation, refer to the Bull documentation: FDDI Adapter – Installation and Configuration Guide, Cedoc reference 86 A1 53GX.
General Configuration Procedure
The network configuration phase of an interconnect is standard. Note: This procedure is the same whatever the interconnect type (Ethernet hub single or double, Ethernet switch single or double, FDDI hub, FDDI switch, or FCS). See General Configuration Procedure, on page 7-11. The part of the configuration that is specific to FDDI differs and is given below.
Chapter 10. Disk Subsystems Cabling Requirements Describing particular cabling for Disk Drive applications. Disk Subsystems Cabling Requirements – Overview More details in: • SSA Disk Subsystem, on page 10-2. • Disk Array Subsystems (DAS), on page 10-23. • JDA Subsystems, on page 10-54. •...
SSA Disk Subsystem You will find: • MI List, on page 10-2. • General Information, on page 10-2. • Cabling Diagrams, on page 10-4. • Cabling Instructions, on page 10-16. • Optic Fibre Extender, on page 10-17. MI List Identificator Description SSAG007–0000 SSA DISK SUBSYSTEM RACK w/ four 4.5 GB Disk Drives...
FRONT where Pi designates a connector Figure 53. Example of 1 SSA cabinet and 1 adapter per node (one loop). More information can be found in Bull DPX/20 Escala 7133 SSA Disk Subsystems – Service Guide. Mixed Configurations The following table shows the possible mixing of SSA adapters and number of initiators in an SSA loop, according to the SSA adapter type.
2. For PCI nodes (EPC400 and EPC1200) and for mixed configurations, sharing of a SSA loop is limited to 2 nodes with PCI adapters (6215) and MCA adapters (6219). Cabling Diagrams SSACF01: Cabling For 1 to 4 Nodes, With 1 SSA Cabinet and 1 to 4 Segments Figure 54.
Figure 55. SSACF01: Loop diagram: 1 to 4 nodes, 1 SSA cabinet, 1 to 4 segments. 10-5 Disk Subsystems Cabling Requirements...
Figure 56. SSACF01: Cabling example for 4 nodes, 1 SSA cabinet and 16 disks. Parts List Cabling example for 4 nodes, 1 SSA cabinet and 16 disks. Item M.I. Designation Length CBLG162-1900 SSA Subsystem cable IBM32H1466 CBLG162-2100 SSA Subsystem cable IBM88G6406 CBLG162-1700 SSA Subsystem cable...
SSACF02: Cabling For 1 to 6 Nodes, With 2 SSA Cabinets Figure 57. SSACF02: Base mounting diagram (1 to 6 nodes, two SSA cabinets, 1 to 8 segments). Figure 58. SSACF02: Loop diagram: 1 to 6 nodes, 2 SSA cabinets, 5 to 8 segments. 10-7 Disk Subsystems Cabling Requirements...
Figure 59. SSACF02: Cabling example for 6 nodes, 2 SSA cabinets and 32 disks. At least 8 disk drives are mandatory. Parts List Cabling example for 6 nodes, 2 SSA cabinets and 32 disks. Item M.I. Designation Length CBLG162-1900 SSA Subsystem cable IBM32H1466 CBLG162-2100 SSA Subsystem cable...
SSACF03: Cabling For 5 to 8 Nodes with 1 SSA Cabinet Figure 60. SSACF03: Base mounting diagram (5 to 8 nodes, 1 SSA cabinet, 1 to 4 segments). Figure 61. SSACF03: Loop diagram: 5 to 8 nodes, 1 SSA cabinet, 1 to 4 segments. 10-9 Disk Subsystems Cabling Requirements...
Figure 62. SSACF03: Cabling example for 8 nodes, 1 SSA cabinet and 16 disks. Parts List Cabling example for 8 nodes, 1 SSA cabinet and 16 disks. Item M.I. Designation Length CBLG162-1900 SSA Subsystem cable IBM32H1466 CBLG162-2100 SSA Subsystem cable IBM88G6406 CBLG162-1700 SSA Subsystem cable...
As soon as more than one node is connected to a single port on the SSA cabinet, the internal bypass must be suppressed. The bypass switch has to be changed manually. Do not forget to unplug the cabinet before intervening. For an 8-node configuration there is no bypass at all.
SSACF04: Cabling For 7 to 8 Nodes With 2 SSA Cabinets Figure 63. SSACF04: Base mounting diagram (7 to 8 nodes, 2 SSA cabinets, up to 8 segments). Figure 64. SSACF04: Loop diagram: (7 to 8 nodes, 5 to 8 segments). 10-12 EPC Connecting Guide...
Figure 65. SSACF04: Cabling example for 8 nodes, 2 SSA cabinets and 32 disks. Parts List Cabling example for 8 nodes, 2 SSA cabinets and 32 disks. Item M.I. Designation Length CBLG162-1900 SSA Subsystem cable IBM32H1466 CBLG162-2100 SSA Subsystem cable IBM88G6406 CBLG162-1700 SSA Subsystem cable...
SSACF05: Cabling 1 to 8 Nodes With 3 SSA Cabinets Figure 66. SSACF05: Base mounting diagram (1 to 8 nodes, 3 SSA cabinets, up to 12 segments). Figure 67. SSACF05: Loop diagram: (1 to 8 nodes, 9 to 12 segments). 10-14 EPC Connecting Guide...
Figure 68. SSACF05: Cabling example for 8 nodes, 3 SSA cabinets and 48 disks. At least 12 disk drives are required. Parts List Cabling example for 8 nodes, 3 SSA cabinets and 48 disks. Item M.I. Designation Length CBLG162-1900 SSA Subsystem cable IBM32H1466 CBLG162-2100 SSA Subsystem cable...
Cabling Instructions The cabling instruction lines are generated by the ordering document. • The nodes are named by N1, N2, .. N8. • U1, U2, U3 designate the SSA units. • P1, P4, P5, P8, P9, P12, P13, P16 and so on designate the ports on an SSA cabinet. •...
Optic Fibre Extender
Usage Cases
CAUTION: The solutions suggested here are not offered as standard. With the introduction of the Optic Fibre Extender, an SSA loop can be extended, making it possible to build a disaster recovery architecture in which the Powercluster configuration is spread over two buildings within a campus.
Figures 69 and 70 illustrate disaster recovery solutions which differ in terms of number of nodes and shared SSA cabinets. They are extensions of configurations SSACF01 and SSACF02. In these extended configurations two physical loops are implemented. Figure 69 shows an implementation with one SSA cabinet per loop, Figure 70 with two cabinets per loop.
Figure 70. Optic Fibre Extender: Global diagram (1 pair of 2 nodes, 2 cabinets). Cabling Diagram With 1 or 2 Nodes, 1 SSA Cabinet on Each Side Figures 71 and 72 show configurations with two loops and one adapter per node. For higher availability it is better to have two adapters, one per loop.
Figure 72. Cabling schema with Fibre Optical Extenders (1 or 2 nodes, 1 SSA cabinet on each side). 10-20 EPC Connecting Guide...
Cabling Diagram With 1, 2 or 3 Nodes, 2 SSA Cabinets on Each Side Figures 73 and 74 show configurations with two loops and one adapter per node. For higher availability it is better to have two adapters, one per loop. Figure 73.
Figure 74. Cabling diagram with Fibre Optical Extenders (1, 2 or 3 nodes, 2 SSA cabinets on each side). 10-22 EPC Connecting Guide...
Disk Array Subsystems (DAS) You will find: • MI List on page 10-23 • Usage Cases for SCSI Technology on page 10-26 • Cabling Diagrams for SCSI Technology on page 10-27 • Cabling for Configuration & Management on page 10-34 •...
IDENTIFICATOR DESCRIPTION MSUG100–0D00 17.8GB HI Speed SCSI-2 Disk for DAS MSUG101–0D00 17.8GB HI Speed SCSI-2 Disk for DAS (OVER 10*8.8GB) MSUG102–0D00 17.8GB HI Speed SCSI-2 Disk for DAS (OVER 20*8.8GB) MSPG003–0100 Add’nal Wide Storage Processor (DAS 1300) MSPG005–0000 Add’nal Wide Storage Processor (DAS 2900) MSPG006–0000 Add’nal Wide Storage Processor (DAS 3200) MSPG007–0000...
Cabling Diagrams for SCSI Technology Parts List Item M.I. Designation Length FRU CKTG070–0000 Y SCSI cable (68MD/68MD) 909920001–001 CKTG049–0000 16 Bit Y-cable – IBM52G4234 CBLG137–1200 SCSI-2 F/W adapter to DAS – 3 DGC005–041274–00 CBLG137–1800 SCSI-2 F/W adapter to DAS – 6 DGC005–041275–00 CBLG097–1000 Wide SP cable DAS to DAS...
DASCF02: Cabling for: Single SP / Single SCSI with 1 node – Daisy chained DAS Figure 76. DASCF02: Single SP / Single SCSI with 1 node – Daisy chained DAS. DASCF03: Cabling for: Dual SP / Dual SCSI with 1 node – 1 DAS Figure 77.
DASCF04: Cabling for: Dual SP / Dual SCSI with 1 node – Daisy chained DAS Figure 78. DASCF04: Dual SP / Dual SCSI with 1 node – Daisy chained DAS. DASCF05: Cabling for: Single SP / Single SCSI with up to 4 nodes – one DAS (1) Figure 79.
DASCF06: Example of Single SP / Single SCSI with up to 4 nodes – one DAS (2) Figure 80. DASCF06: Example of Single SP / Single SCSI with up to 4 nodes – one DAS (2). See also Figure 79. DASCF07: Cabling for: Single SP / Single SCSI with up to 4 nodes –...
DASCF08: Cabling for: Single SP / Single SCSI with up to 4 nodes – Daisy chained DAS (2) Figure 82. DASCF08: Single SP / Single SCSI with up to 4 nodes – Daisy chained DAS (2). DASCF9: Cabling for: Dual SP / Dual SCSI with up to 4 nodes – 1 DAS (1) Figure 83.
DASCF10: Cabling for Dual SP / Dual SCSI with up to 4 nodes – 1 DAS (2) Figure 84. DASCF10: Dual SP / Dual SCSI with up to 4 nodes – 1 DAS (2). DASCF11: Cabling Dual SP / Dual SCSI with up to 4 nodes – Daisy chained DAS (1) Figure 85.
DASCF12: Cabling Dual SP / Dual SCSI with up to 4 nodes – Daisy chained DAS (2) Figure 86. DASCF12: Dual SP / Dual SCSI with up to 4 nodes – Daisy chained DAS (2). 10-33 Disk Subsystems Cabling Requirements...
Cabling for Configuration & Management
EPC800, EPC1200, EPC1200A, EPC2400, EPC430 and EPC440 Nodes
The following cabling configuration requires a serial multi-port asynchronous card. Connect the RS232 cable to a free port on the multi-port asynchronous board of a node that shares the DAS.
EPC400 Node
A multi-port asynchronous board is not appropriate. A serial port is suitable for DAS management through a serial line.
DAS Management Through SCSI Links
In any case, DAS management can be performed through SCSI links from the nodes to which the DAS subsystem is attached, by using the Navisphere application from a graphical terminal.
Examples of Use for Fibre Channel
The following only applies to PCI nodes (EPC400/430/440, EPC1200, EPC1200A and EPC2400) with the Clariion DAS fibre models. This includes the DAS 3500, DAS 57xx, and DAS 5300 (DPE) with its associated DAE. There are four types of Clariion storage systems available in a rackmount version. •...
Figure 91.Disk Array Enclosure – DAE DAS management software is the Navisphere application. The communication bridge between the Navisphere application and the DPE array is the Navisphere agent. The Navisphere agent resides on every DPE array’s node and communicates directly with the storage system firmware. It requires a graphical interface for setting up configuration parameters.
Page 148
The following table describes the intended uses of the different configurations. Its columns are: diagram number, number of loops, number of nodes, number of adapters on each node, HACMP on each node, ATF on each node, number of SPs, number of hubs, and notes. SLOOP00...
2. DAS 5300 model with a RAID subsystem, including: • Either a single–SP DPE or a dual–SP including a DAE disk drawer • One or two chained DAE disk drawers inside an EPC1200/A/2400/400/430 I/O rack, or in a rack containing an EPC440 drawer) •...
Page 150
Rack 400: CKTG109-0000 – Rackmount option (DAE 5000) Rack 1200: CKTG110-0000 – Rackmount option (DAE 5000) Disk Drives: MSUG110-0F00 – 8.8GB Fibre DAE Disk (10 000rpm) MSUG111-0F00 – 17.8GB Fibre DAE Disk (10 000rpm) Attachment: 1 x DCCG141-0000 – PCI Fibre Channel Adapter 1 x DCCG147-0000 –...
Page 151
The micro-modem referenced ME62AF (known as a mini-driver) in Blackbox catalogues is an example of what you can purchase to extend RS232 lines. The physical characteristics are: • Protocol: asynchronous • Speed: 9.6 kbps • Transmission Line: 2 twisted pairs (wire gauge: 24-AWG, i.e. 0.5 mm) •...
• Size: 1.3 cm x 5.3 cm x 10.9 cm • Weight: <0.1 kg
Installation of the micro-modem
Although the micro-modems are not delivered with the EPC product, the following indicates the simple steps to install a micro-modem model ”Mini Driver ME762A-F”.
1. Connect the 4-wire telephone line to the unit’s 5-screw terminal block.
2.
Figure 97. DAS Fibre Channel – Configuration for Micro-modem
Multi-Function Line Drivers
For setting up an extended serial line between a Console Concentrator and the S1 port of a distant node, you can use a pair of micro-modems: a micro-modem ME762A-M or ME657A-M on the Console Concentrator side and a micro-modem ME762A-F or ME657A-F on the node side.
Cabling Diagrams for Fibre Channel Parts List Item M.I. Designation Length FRU FCCQ002–1000 Cord 2CU/DB9 0,5m 91060001-001 FCCQ002–1500 Cord 2CU/DB9 91060002-001 FCCQ002–2000 Cord 2CU/DB9 91060010-001 FCCQ001–1800 Cord 2FO/M5/DSC 91061005-001 FCCQ001–2100 Cord 2FO/M5/DSC 91061015-001 DCOQ001–0000 FC MIA 1/M5/DSC – 91071001-001 DCCG147–0000 PCI 64-bit Copper Fibre Channel Adapter –...
SLOOP00: Single Loop, 1 Node, 1 or 2 DAE Figure 99.SLOOP00: Single Loop, 1 Node, 1 or 2 DAE. SLOOP01: Single Loop, 1 Node, 1 DAS with 1 SP) Figure 100.SLOOP01: Single Loop, 1 Node, 1 DAS with 1 SP). 10-45 Disk Subsystems Cabling Requirements...
Page 156
SLOOP02: Single Loop, 2 Nodes, 1 DAS (1 SP) Figure 101.SLOOP02: Single Loop, 2 Nodes, 1 DAS (1 SP). SLOOP03: Single Loop, 1 Hub, N Nodes, D DAS with 1 SP 2 < n + D < 10 Figure 102.SLOOP03: Single Loop, 1 Hub, N Nodes, D DAS (1 SP). 10-46 EPC Connecting Guide...
SLOOP04: Two Loops, 2 Nodes, 2 DAEs (1 LCC) The following applies to EPC400/430/440 and EPC1200A/2400 HA packages. It is to be used with HACMP/ES 4.3 cluster software. Figure 103.SLOOP04: Two Loops, 2 Nodes, 2 DAEs (1 LCC). 10-47 Disk Subsystems Cabling Requirements...
DLOOP01: Dual Loop, 1 Node with 2 Adapters, 1 DAS with 2 SPs Figure 104.DLOOP01: Dual Loop, 1 Node with 2 Adapters, 1 DAS with 2 SPs. DLOOP04: Two Loops, 2 Nodes, 1 DAS with 2 SPs Figure 105.DLOOP04: Two Loops, 2 Nodes, 1 DAS with 2 SPs. 10-48 EPC Connecting Guide...
DLOOP02: Dual Loop, 2 Nodes, 1 DAS with 2 SPs Figure 106.DLOOP02: Dual Loop, 2 Nodes, 1 DAS with 2 SPs. 10-49 Disk Subsystems Cabling Requirements...
DLOOP03: Dual Loop, Two Hubs, N Nodes, D DAS with 2 SPs Figure 107.DLOOP03: Dual Loop, Two Hubs, N Nodes, D DAS with 2 SPs. 10-50 EPC Connecting Guide...
XLOOP01: 1 Node, Single or Dual Loop, 1 Deported DAS Figure 108.XLOOP01: 1 Node, Single or Dual Loop, 1 Deported DAS. XLOOP02: 2 Nodes, Dual Loop, 2 Hubs, 2 DAS (one Deported) Figure 109.XLOOP02: 2 Nodes, Dual Loop, 2 Hubs, 2 DAS (one Deported). 10-51 Disk Subsystems Cabling Requirements...
DSWITCH01: Dual Switch, N Nodes, D DAS with 2 SPs Figure 111.DSWITCH01: Dual Switch, N Nodes, D DAS with 2 SPs. 10-53 Disk Subsystems Cabling Requirements...
JDA Subsystems AMDAS JDA disk subsystems (End Of Life) are only available on EPC800 nodes. You will find: • MI List, on page 10-54 • Examples of Use, on page 10-54 • Cabling Diagrams, on page 10-55 • Configuration Procedure, on page 10-60 •...
When it is shared between two nodes, the disk cabinet can be used as a system disk extension or as a shared disk subsystem. In the former case the disks are not shared: each node possesses its own SCSI bus. In the latter case, the configuration allows a node failure to be tolerated: there are two SCSI adapters per node, with each adapter connected to a distinct SP.
8. Set the configuration parameters of these two TTY ports under AIX as follows: 4800 baud, 8 bits, 1 stop bit, no parity, ssmmgr as terminal type. Refer to Bull DPX/20 Escala AMDAS JBOD Storage Subsystem User’s Guide, pages 2 – 14.
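As an illustration of step 8, the two TTY ports could be set from the command line as follows. The tty device names are assumptions, and the attribute names should be checked on your system with lsattr -El tty0 before use.

    # 4800 baud, 8 bits per character, 1 stop bit, no parity, terminal type ssmmgr
    chdev -l tty1 -a speed=4800 -a bpc=8 -a stops=1 -a parity=none -a term=ssmmgr
    chdev -l tty2 -a speed=4800 -a bpc=8 -a stops=1 -a parity=none -a term=ssmmgr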
4. Set the disk drives (2 to 12) in the trays A and D and (13 to 24 disks) in the trays B and E of the AMDAS in compliance with the implementation order. Refer to Bull DPX/20 Escala AMDAS JBOD Storage Subsystem User Guide, Pages 2 – 5.
Using AMDAS JBOD Disks as a System Disk Extension
Building a System Disk
1. Stop HACMP gracefully:
   smit clstop
2. Stop all the applications.
3. Make a system backup of the currently running hdisk0.
4. Reboot the node from the AIX installation CD-ROM in service mode.
5.
4. Build a copy of each of the other logical volumes of hdisk0 on hdisk1:
mklvcopy hd1 2 hdisk1   # Filesystem /home
mklvcopy hd2 2 hdisk1   # Filesystem /usr
mklvcopy hd3 2 hdisk1   # Filesystem /tmp
mklvcopy hd4 2 hdisk1   # Filesystem / (root)
mklvcopy hd6 2 hdisk1   # paging space
mklvcopy hd8 2 hdisk1 ...
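The excerpt stops within the mklvcopy list. As a hedged reminder (these commands are not part of the text reproduced above), mirroring of the root volume group on AIX is typically completed by synchronizing the copies, rebuilding the boot image on the second disk and updating the boot list:

    # Synchronize the newly created logical volume copies
    syncvg -v rootvg
    # Rebuild the boot image on the mirror disk and update the boot list
    bosboot -ad /dev/hdisk1
    bootlist -m normal hdisk0 hdisk1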
Symmetrics Disk Subsystem MI List M.I. Designation Length DRWF006–0000 Just a Bunch of Disk Array Drawer CDAF333–1800 CDA3330–18 up to 32DRV–8SLT CDAF343–9000 CDA3430–9 up to 96DRV–12SLT CDAF370–2300 CDAF3700–23 up to 128DRV–20SLT MSUF303–1802 DRV3030–182 18X2GB 3,5” MSUF303–2302 DRV3030–232 23X2GB 5,25” CMMF001–0000 512MB Cache Mem. Init. Order CMMF002–0000 768MB Cache Mem.
Examples of Use
Point-to-point Connection
One port of a Symmetrix box is connected to an Escala server through a single adapter (either Ultra Wide Differential SCSI or Fibre Channel). As there is no redundancy of any component on the link, a single failure (cable, adapter, Channel Director) may cause the loss of access to all data.
Figure 123. Multiple connection of an EMC Symmetrix subsystem Base Configuration with HACMP The usual HA configuration with Symmetrix subsystems is to duplicate the point to point connection and to configure the Symmetrix in order to make the data volumes available to both servers through the two separate host ports.
Page 178
Configuration with HACMP and Powerpath (multiple paths)
Figure 125. Configuration of an EMC Symmetrix subsystem with Powerpath
Powerpath is a software driver which allows multiple paths between a node and a Symmetrix subsystem, to provide path redundancy and improve performance and availability.
HA DLT7000 (DE – 68mD) Y cable / adapter [CKTG070–0000 CKTG049–0000 [CKTG070–0000 6m Cable / adapter CBLG157–1700 CBLG157–1700 CBLG157–1700 DLT not shared Cable for DLT7000 CBLG157–1700 CBLG102–1700 CBLG157–1700 Cable for DLT4000 CBLG158–1700 CBLG152–1900 CBLG158–1700
Case of the Shared Library
1. In addition to the Y-cable, there is a terminator feed-through included in CKTF003 that allows the 68mD cable (CBLG157–1700) to be plugged into the DLT4000 (50mD).
Case of a Shared Library The following depicts a configuration example of an EPC400 with 2 nodes sharing an LXB for high availability only. EPC400 with 2 nodes sharing an LXB Figure 127. Overall Diagram – Cabling Legend Item M.I. Designation Length FRU CKTG070–0000...
Cabling Examples for Non-Shared Libraries
No Y cables are used. An external terminator is used to terminate a SCSI chain. One external terminator is included in the library as standard. A second external terminator (90054001-001) should also be provided in a library with two drives. For performance considerations, it is not recommended to chain the drives in an LXB7000 library.
Chapter 11. Tape Subsystems Cabling Requirements Summarizing tape drive applications. Tape Subsystems – Overview Two tape subsystems are available for shelf mounting with the Escala Powercluster series: • DLT 4000 (MI MTSG014) • VDAT Mammoth (MI MTSG015). The DLT 4000 drive can be connected to EPC400 only. The VDAT Mammoth can be connected to EPC400 and EPC800 only.
RSF (Remote Services Facilities) performs system error monitoring and handles communications for remote maintenance operations. The modem, together with RSF, provides a link, via a phone line, between the system at the customer site and a Bull Customer Service center. The table below shows the number of modems and their type, according to the Powercluster configuration.
For configuration RMCF02, the internal modem of the S100 is prepared and configured at manufacture. In other configurations, the integrated modem of any EPC400 is also prepared at manufacture (configuration of the modem and RSF dial-in). The external modem is provided, installed and configured on the client site by the Customer Service.
Modem on PowerConsole Cabling Diagram with Console Concentrator Diagram with Escala S100 Figure 133 shows an example which is relevant for any Powercluster configuration with an Escala S100 based PowerConsole, though this figure shows a configuration with a dedicated–administration network. In that case the modem is prepared and configured (RSF callscarf module on S100, and RSF cluster module on every node).
Diagram with Escala S100 and one modem per node
Figure 134 shows an example which is relevant for any Powercluster configuration with an Escala S100 based PowerConsole 2, though this figure shows a configuration with a dedicated administration network. In that case you may have one modem on the PowerConsole and/or one modem per node.
Modem on a Node’s S2 Plug
Basic Cabling for a Uni-node Configuration
• On an EPC800 node the modem is external.
• On an EPC400 node the modem is integrated (ISA board) inside the drawer.
• On an EPC1200 or EPC1200A system, the modem is external.
For the EPC800, the modem support is installed in the rack.
Figure 136. RMCF03: Remote maintenance: Modem on a Node’s S2 plug w/ Console Concentrator Example of Use This solution is recommended: • when there is a local ClusterConsole (as depicted in the figure) • or when the Powerconsole is not wired to the Console Concentrator. In a multiple-node EPC400 configuration there should be one node with an integrated modem.
Using Two Modems Two modems are provided with every 2-node configuration which does not include any console concentrator. When extending a uni-node configuration with an additional node, an external modem is added. An original uni-node EPC RT model is provided with a modem integrated in the CPU drawer.
Appendix A. Marketing Identifier Cross-References Provides a way to trace the use, in this document, of Marketing Identifiers (M.I.) associated with EPC cabling. M.I.s to page numbers. Numbers 3C16670A-UK, 7-10 3C16670A-ME, 7-10 3C166942A-XX, 8-3 3C1681-0, 7-14, 7-43 3C5411-ME, 7-14 3C5440D, 7-14 3C759, 7-14 3C780-ME, 9-3 3C781, 9-3...
• history of changes to Part Numbers • complete spare parts catalogue (provided as a down-loadable compressed file).
On-Line Support
URL Address is: http://bbs.bull.net/bcs/bult.htm
Source is: ”Bulletins & How to Use Them”. Access to most technical information is restricted to Customer Support Personnel with a user_id and password, however some information is freely available with the ”Guest”...
Appendix C. PCI/ISA/MCA Adapter List Lists of adapters (controllers) and their identification labels. Adapter Card Identification Adapter cards are identified by a label visible on the external side of the metallic plate guide. For further details, about controllers description, configuration upgrading and removal procedures, refer to Controllers in the Upgrading the System manual.
ISA Bus Label Description B5-2 ISDN Controller B5-A Internal Modem ISA FRANCE B5-B Internal Modem ISA UK B5-C Internal Modem ISA BELGIUM B5-D Internal Modem ISA NETHERLAND B5-E Internal Modem ISA ITALY MCA Bus Label Description SSA 4 Port Adapter Enhanced SSa 4 Port Adapter SSA Multi-Initiator/RAID EL Adapter EPC Connecting Guide...
Appendix D. Cable and Connector Identification Codes Details in: • Cable Identification Markings • Connector Identification Codes Cable Identification Markings Each end of any cable connecting two items has a FROM–TO label conforming to a specific format and object identification rules. Figure shows the format of a FROM–TO label and an example of labeled cable between a DAS and a CPU.
Object Identification for FROM–TO Labels CPU Drawer PCI Expansion drawer (EPC400) Computing Rack (EPC1200) I/O (EPC1200) CONS System Console PWCONS Power Console SSA Disk Sub-system DAS Disk Sub-system JBOD AMDAS/JBOD Tape Drive Sub-system CS2600 CS2600 Concentrator CSCONS CS2600 Concentrator Administration Console Ethernet or FDDI Hub FC–AL Fibre Channel Hub...
DAS 3500 Disk Sub-system SPA/1 Fibre channel connector of Service processor A SPB/1 Fibre channel connector of Service processor B SPA/RS232 RS232 of Service processor A SPB/RS232 RS232 of Service processor B JBOD Disk Sub-system J21,J22,J31 Asynchronous Console J01 à J08 SCSI Bus SSA Disk Sub-system A1,A2,B1,B2...
Glossary
This glossary contains abbreviations, key-words and phrases that can be found in this document.
ATF: Application-Transparent Failover.
CPU: Central Processing Unit.
DAS: Disk Array Subsystem.
EPC: Escala Power Cluster.
FC–AL: Fibre Channel Arbitrated Loop.
MDI: Media Dependent Interface.
MI: Marketing Identifier.
MIA: Media Interface Adapter.
PCI: Peripheral Component Interconnect (Bus).
PDB: Power Distribution Board.