Contents

Introduction
    Cluster Solution
    Cluster Hardware Requirements
        Cluster Nodes
        Cluster Storage
    Supported Cluster Configurations
        Direct-Attached Cluster
        SAN-Attached Cluster
    Other Documents You May Need
Cabling Your Cluster Hardware
    Cabling the Mouse, Keyboard, and Monitor
    Cabling the Power Supplies
    Cabling Your Cluster for Public and Private Networks
        Cabling the Public Network
        Cabling the Private Network...
Preparing Your Systems for Clustering
    Cluster Configuration Overview
    Installation Overview
    Installing the Fibre Channel HBAs
    Installing the Fibre Channel HBA Drivers
    Installing EMC PowerPath
    Implementing Zoning on a Fibre Channel Switched Fabric
    Using Worldwide Port Name Zoning
    Installing and Configuring the Shared Storage System
    Installing the Navisphere Storage System Initialization Utility...
For cluster configuration details, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com/manuals. For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.
Cluster Solution

Your cluster supports a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008) and provides the following features:

• 8-Gbps and 4-Gbps Fibre Channel technologies
•...
NOTE: For more information about supported systems, HBAs and operating system variants, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha. It is recommended that the NICs on each public network...
The SPS is connected to the disk processor enclosure.

One to four supported Dell/EMC storage systems are required. For specific storage system requirements, see Table 1-3. All nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.
NOTE: Ensure that the core software version running on the storage system is supported by Dell. For specific version requirements, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Cluster website at dell.com/ha. Supported Cluster Configurations...
EMC PowerPath provides failover capabilities and multiple path detection as well as dynamic load balancing between multiple ports on the same storage processor. However, direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.
• The Dell Cluster Configuration Support Matrices provide a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster.
• The HBA documentation provides installation instructions for the HBAs.
• Systems management software documentation describes the features, requirements, installation, and basic operation of the software.
Cabling Your Cluster Hardware NOTE: To configure Dell blade server modules in a Dell™ PowerEdge™ cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals. Cabling the Mouse, Keyboard, and Monitor When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems and One Standby Power Supply (SPS) in an AX4-5 Storage System

[Figure callout: primary power supplies on one AC power strip (or on one AC PDU, not shown)]

NOTE: This illustration is intended only to demonstrate the power distribution of the components.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.
Table 2-1. Network Connections

Network Connection:
• Public network
• Private network

Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.

Figure 2-3.
Cabling the Public Network Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port. Cabling the Private Network The private network connection to the nodes is provided by a different network adapter in each node.
A direct-attached cluster configuration consists of redundant Fibre Channel host bus adapter (HBA) ports cabled directly to a Dell/EMC storage system. Direct-attached configurations are self-contained and do not share any physical resources with other server or storage systems outside of the cluster.
Each cluster node attaches to the storage system using two multi-mode optical cables with LC connectors that attach to the HBA ports in the cluster nodes and the storage processor (SP) ports in the Dell/EMC storage system. These connectors consist of two individual fibre-optic connectors with indexed tabs that must be aligned properly in the HBA ports and SP ports.
Install a cable from cluster node 2 HBA port 0 to SP-A Fibre port 1 (second fibre port).
Install a cable from cluster node 2 HBA port 1 to SP-B Fibre port 1 (second fibre port).

Figure 2-5 and Figure 2-6 illustrate the method of cabling a two-node direct-attached cluster to an AX4-5F and an AX4-5FX storage system, respectively.
Figure 2-6. Cabling a Two-Node Cluster to an AX4-5FX Storage System

Cabling a Four-Node Cluster to a Dell/EMC AX4-5FX Storage System

You can configure a four-node cluster in a direct-attached configuration using a Dell/EMC AX4-5FX storage system:...
Install a cable from cluster node 4 HBA port 1 to the fourth front-end Fibre Channel port on SP-B.

Cabling Two Two-Node Clusters to a Dell/EMC AX4-5FX Storage System

The following steps are an example of how to cable two two-node clusters to a...
Cabling Storage for Your SAN-Attached Cluster A SAN-attached cluster is a cluster configuration where all cluster nodes are attached to a single storage system or to multiple storage systems through a SAN using a redundant switch fabric. SAN-attached cluster configurations provide more flexibility, expandability, and performance than direct-attached configurations.
Each HBA port is cabled to a port on a Fibre Channel switch. One or more cables connect from the outgoing ports on a switch to a storage processor on a Dell/EMC storage system.
Cabling a SAN-Attached Cluster to an AX4-5F Storage System 1 Connect cluster node 1 to the SAN. Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0). Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1). 2 Repeat step 1 for each cluster node.
Figure 2-9. Cabling a SAN-Attached Cluster to an AX4-5F Storage System

Cabling a SAN-Attached Cluster to an AX4-5FX Storage System

1 Connect cluster node 1 to the SAN. Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0). Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
Connect a cable from Fibre Channel switch 0 (sw0) to SP-B Fibre port 3 (fourth fibre port).
Connect a cable from Fibre Channel switch 1 (sw1) to SP-A Fibre port 2 (third fibre port).
Connect a cable from Fibre Channel switch 1 (sw1) to SP-A Fibre port 3 (fourth fibre port).
Fibre Channel switches and then connect the Fibre Channel switches to the appropriate storage processors on the processor enclosure. See the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha for rules and guidelines for SAN-attached clusters.
22 drives per cluster. For more information, see the "Assigning Drive Letters and Mount Points" section of the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com/manuals.
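If the cluster runs out of drive letters, shared volumes can instead be attached to NTFS mount points. As a sketch only (the volume number and mount path below are hypothetical examples, not values from your configuration), this can be scripted with the standard Windows diskpart utility:

```shell
rem Hedged example; volume 3 and C:\ClusterMounts\Data are hypothetical.
rem The mount folder must already exist on an NTFS volume.
echo select volume 3> dp.txt
echo assign mount=C:\ClusterMounts\Data>> dp.txt
diskpart /s dp.txt

rem List volume mount points to verify the assignment.
mountvol
```

Running diskpart with /s against a script file keeps the step repeatable on each node that needs the same mapping.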
Connecting a PowerEdge Cluster to a Tape Library To provide additional backup for your cluster, you can add tape backup devices to your cluster configuration. The Dell PowerVault™ tape libraries may contain an integrated Fibre Channel bridge, or Storage Network Controller (SNC), that connects directly to your Dell/EMC Fibre Channel switch.
Configuring Your Cluster With SAN Backup You can provide centralized backup for your clusters by sharing your SAN with multiple clusters, storage systems, and a tape library. Figure 2-13 provides an example of cabling the cluster nodes to your storage systems and SAN backup with a tape library.
NOTE: For more information on step 3 to step 7 and step 10 to step 13, see the "Preparing Your Systems for Clustering" section of the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com/manuals.
NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see the "Selecting a Domain Model" section of the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com/manuals.
Installation Overview Each node in your Dell Windows Server failover cluster must have the same release, edition, service pack, and processor architecture of the Windows Server operating system installed. For example, all nodes in your cluster may be configured with Windows Server 2003 R2, Enterprise x64 Edition.
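One quick way to confirm that every node is running the same release and service pack is to compare the OS fields reported by the built-in systeminfo command. This is only a convenience check, run from a command prompt on each node:

```shell
rem Run on each cluster node and compare the output line for line.
systeminfo | findstr /B /C:"OS Name" /C:"OS Version"
```

Any node whose output differs must be brought to the same release, edition, and service pack before it joins the cluster.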
See the Emulex support website located at emulex.com or the Dell Support website at support.dell.com for information about installing and configuring Emulex HBAs and EMC-approved drivers. See the QLogic support website at qlogic.com or the Dell Support website at support.dell.com for information about installing and configuring QLogic HBAs and EMC-approved drivers.
Zoning automatically and transparently restricts access so that only devices within the same zone can communicate. More than one PowerEdge cluster configuration can share Dell/EMC storage system(s) in a switched fabric using Fibre Channel switch zoning. By using Fibre Channel switches to implement zoning, you can segment the SANs to isolate heterogeneous servers and storage systems from each other.
HBAs and their target storage systems do not affect each other. When you create your single-initiator zones, follow these guidelines:

[Table residue: the original table lists a description for each of the following zone members: Dell/EMC or Brocade switch, McData switch, Dell/EMC storage processor, Emulex HBA ports, and QLogic HBA ports (non-embedded).]
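On a Brocade fabric, for example, a single-initiator zone containing one HBA port and its target SP ports can be sketched with the standard Fabric OS zoning commands. The alias names and WWPNs below are hypothetical placeholders, not values from your fabric:

```shell
# Hedged sketch using Brocade Fabric OS zoning commands; all names and
# WWPNs are hypothetical examples.
alicreate "node1_hba0", "10:00:00:00:c9:00:00:01"
alicreate "spa_port0", "50:06:01:60:00:00:00:01"
alicreate "spb_port0", "50:06:01:68:00:00:00:01"

# One initiator (HBA port) per zone, together with its target SP ports.
zonecreate "node1_hba0_zone", "node1_hba0; spa_port0; spb_port0"

# Add the zone to a configuration, save it, and activate it on the fabric.
cfgcreate "cluster_cfg", "node1_hba0_zone"
cfgsave
cfgenable "cluster_cfg"
```

Repeat the zonecreate step for each HBA port in each node so that every zone contains exactly one initiator.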
You must configure the network settings and create a user account to manage the AX4-5 storage system from the network. To install and configure the Dell/EMC storage system in your cluster: 1 Install and run the Navisphere Storage System Initialization Utility from a cluster node or management station to initialize your AX4-5 storage system.
Each storage system in the cluster is centrally managed by one host system (also called a management station) running EMC Navisphere Express—a centralized storage management application used to configure Dell/EMC storage systems. If you have an expansion pack option for the storage system and it has not been installed, install it now by following the steps listed below: 1 From the management host, open an Internet browser.
Installing Navisphere Server Utility The Navisphere Server Utility registers the cluster node HBAs with the storage systems, allowing the nodes to access the cluster storage data. The tool is also used for the following cluster node maintenance procedures: • Updating the cluster node host name and/or IP address on the storage array •...
8 Verify that the PowerPath on the cluster nodes can access all paths to the virtual disks. Advanced or Optional Storage Features Your Dell/EMC AX4-5 storage array may be configured to provide optional features that can be used with your cluster. These features include Snapshot Management, SANCopy, Navisphere Manager and MirrorView™.
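One way to perform the path check in step 8 from a node is with the PowerPath command-line utility that is installed with EMC PowerPath; the exact output depends on your configuration:

```shell
# List every path PowerPath sees to the storage system; each path
# should be reported as alive.
powermt display dev=all

# Re-test all paths and restore any that were marked dead after a
# cabling or zoning fix.
powermt restore
```

If any path remains dead after powermt restore, recheck the cabling and zoning for that HBA port before proceeding.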
Optionally, you can also upgrade Navisphere Express to EMC® Navisphere Manager—a centralized storage management application used to configure Dell/EMC storage systems. EMC Navisphere Manager adds support for EMC MirrorView, optional software that enables synchronous or asynchronous mirroring between two storage systems.
Installing and Configuring a Failover Cluster After you have established the private and public networks and assigned the shared disks from the storage array to the cluster nodes, you can configure the operating system services on your Dell Windows Server failover cluster.
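As an illustration only: on Windows Server 2008 R2 and later, cluster validation and creation can be driven from PowerShell with the FailoverClusters module. The node names, cluster name, and IP address below are hypothetical; on Windows Server 2003 the equivalent steps are performed with Cluster Administrator.

```powershell
# Hedged sketch; node names, cluster name, and static address are examples.
Import-Module FailoverClusters

# Run the cluster validation tests against both nodes first.
Test-Cluster -Node node1, node2

# Create the failover cluster with a static cluster IP address.
New-Cluster -Name PECLUSTER -Node node1, node2 -StaticAddress 192.168.1.50
```

Review the validation report from Test-Cluster and resolve any errors before running New-Cluster.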
Troubleshooting

This section provides troubleshooting information for your cluster configuration. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Table A-1. General Cluster Troubleshooting

Problem: The nodes cannot access the storage system, or the cluster...
Probable cause: The storage system is not cabled properly to the nodes...
Table A-1. General Cluster Troubleshooting (continued)

Problem: One of the nodes takes a long time to join the cluster, or one of the nodes fails to join the cluster.
Probable cause: The node-to-node network has failed due to a cabling or...
Corrective action: Check the network cabling. Ensure that the node-to-node interconnection and the public...
Table A-1. General Cluster Troubleshooting (continued)

Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable cause: The Cluster Service has not been started.
Corrective action: Verify that the Cluster Service is running and that a cluster has been formed.
IPs with a specific variant of the Windows Server operating system (for example, Windows Server 2003 or Windows Server 2008), see the Dell Failover Clusters with Microsoft Windows Server Installation and Troubleshooting Guide.

Probable cause: The private (point-to-point)...
Corrective action: Ensure that all systems are...
Table A-1. General Cluster Troubleshooting (continued)

Problem: Unable to add a node to the cluster; the disks on the shared cluster storage appear unreadable or uninitialized in Windows Disk Administration.
Probable cause: The new node cannot access the shared cluster disks.
Corrective action: Ensure that the new cluster node can enumerate the cluster disks.
Table A-1. General Cluster Troubleshooting (continued)

Problem: Cluster Services does not operate correctly on a cluster running Windows Server 2003 with the Internet Connection Firewall enabled.
Probable cause: The Windows Internet Connection Firewall is enabled, which may conflict with Cluster Services.
Corrective action: Perform the following steps:...
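If the firewall must remain enabled, one commonly documented workaround is to open the Cluster Service heartbeat port. As a sketch for the Windows Server 2003 SP1 firewall (the rule name is arbitrary; verify the port against your environment), UDP port 3343 can be opened with netsh:

```shell
rem Hedged example for the Windows Server 2003 SP1 firewall; the rule
rem name is arbitrary. The Cluster Service heartbeat uses UDP port 3343.
netsh firewall add portopening protocol=UDP port=3343 name="Cluster Service"

rem Confirm that the port opening was added.
netsh firewall show portopening
```

Apply the same opening on every cluster node so that heartbeat traffic is allowed in both directions.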
Table A-1. General Cluster Troubleshooting (continued)

Problem: Public network clients cannot access the applications or services that are provided by the cluster.
Probable cause: One or more nodes may have the Internet Connection Firewall enabled, blocking...
Corrective action: Configure the Internet Connection Firewall to allow communications that are...
Cluster Data Form

You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support.

Table B-1. Cluster Information
Cluster solution:
Cluster name and IP address:
Server type:
...
Additional Networks

Table B-3.
Array | Array xPE Type | Array Service Tag Number or World Wide Name Seed | Number of Attached DAEs
Zoning Configuration Form

Node | HBA WWPNs or Alias Names | Storage WWPNs or Alias Names | Zone Name | Zone Set for Configuration Name