Dell PowerEdge Cluster FE655Wi Platform Guide

Rules and Guidelines

When connecting two or more PowerEdge 1855/1955 server enclosures in a cluster, consider the guidelines in this section.
All cluster nodes must contain identical versions of the following (a verification sketch follows this list):
- Operating systems and service packs
- Hardware, drivers, firmware, or BIOS for the embedded NICs, Ethernet daughter cards, and any other peripheral hardware components
- Systems management software, such as Dell OpenManage™ systems management software and EMC® Navisphere® storage management software
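As one way to verify the identical-versions requirement, the sketch below gathers version strings from each node over SSH and reports any mismatch. The node names and the commands it runs are hypothetical placeholders, not part of this guide; substitute the hosts and version queries that apply to your cluster.

    #!/usr/bin/env python3
    # Minimal sketch: compare OS/driver/firmware version strings across
    # cluster nodes so mismatches are easy to spot. Node names and the
    # commands run on each node are hypothetical examples.
    import subprocess

    NODES = ["node1", "node2"]           # hypothetical cluster node hostnames
    CHECKS = {
        "os": "uname -r",                # kernel/OS level (example query)
        "nic_driver": "ethtool -i eth0", # NIC driver/firmware (example query)
    }

    def run_on(node, command):
        """Run a command on a node over SSH and return its output."""
        result = subprocess.run(
            ["ssh", node, command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    def main():
        for name, command in CHECKS.items():
            outputs = {node: run_on(node, command) for node in NODES}
            if len(set(outputs.values())) == 1:
                print(f"{name}: identical on all nodes")
            else:
                print(f"{name}: MISMATCH {outputs}")

    if __name__ == "__main__":
        main()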
Maximum Distance Between Cluster Nodes
The maximum distance allowed between two PowerEdge 1855/1955 server enclosures with embedded switches, from a PowerEdge 1855/1955 server enclosure directly connected to a storage system or to an external switch, or from a switch to a storage system is 300 meters (984 feet) using multimode fiber at 2 Gbps. When using multimode fiber at 4 Gbps, the maximum distance is 100 meters (328 feet).
The total distance between a PowerEdge 1855/1955 server enclosure and a storage system, or between two PowerEdge 1855/1955 server enclosures with Fibre Channel pass-through modules, may be increased through the use of single-mode or multimode switch Inter-Switch Links (ISLs). The maximum cable length is 100 meters (328 feet) for Fast Ethernet and copper Gigabit Ethernet, and 550 meters (1804 feet) for optical Gigabit Ethernet. These distances may be extended using switches and virtual local area network (VLAN) technology.
The maximum latency for a round-trip network packet between nodes is 500 milliseconds.
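To check this ceiling in practice, a sketch such as the following measures the average round-trip time to each peer node with the system ping utility and flags anything over 500 milliseconds. The peer addresses are hypothetical, and the output parsing assumes the summary line printed by Linux or BSD ping.

    #!/usr/bin/env python3
    # Minimal sketch: measure round-trip latency to each peer node with the
    # system ping utility and flag anything over the 500 ms ceiling noted
    # above. Peer addresses are hypothetical placeholders.
    import re
    import subprocess

    PEERS = ["192.168.0.11", "192.168.0.12"]  # hypothetical node addresses
    MAX_RTT_MS = 500.0                        # limit from the platform guide

    def average_rtt_ms(host, count=5):
        """Ping a host and parse the average round-trip time in ms."""
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, check=True,
        ).stdout
        # Summary line, e.g.: "rtt min/avg/max/mdev = 0.1/0.2/0.3/0.0 ms"
        match = re.search(r"= [\d.]+/([\d.]+)/", out)
        return float(match.group(1))

    for peer in PEERS:
        rtt = average_rtt_ms(peer)
        status = "OK" if rtt <= MAX_RTT_MS else "EXCEEDS LIMIT"
        print(f"{peer}: {rtt:.1f} ms average round trip ({status})")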
Obtaining More Information
For information about hardware and software installation and configuration for high-availability clusters, see the Dell PowerEdge Cluster FE655Wi Systems Installation and Troubleshooting Guide.
Attaching Your Cluster to a Shared Storage System Through
Direct-Attached Configuration
This section provides the rules and guidelines for attaching your cluster nodes to the shared storage system using a direct connection (without switches for iSCSI access).
A direct-attached configuration supports up to two cluster nodes with AX150i and CX3-10c storage
systems and up to four nodes with CX3-20c and CX3-40c storage systems.
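These limits can be captured in a small planning check, sketched below; the storage-system names mirror the models listed in this section, while the helper itself is purely illustrative and not part of any Dell software.

    # Minimal sketch: validate a planned direct-attached configuration
    # against the node limits stated in this section. The storage-system
    # names come from the text above; the planning inputs are hypothetical.
    MAX_DIRECT_ATTACHED_NODES = {
        "AX150i": 2,
        "CX3-10c": 2,
        "CX3-20c": 4,
        "CX3-40c": 4,
    }

    def nodes_supported(storage_system: str, node_count: int) -> bool:
        """Return True if node_count is within the direct-attached limit."""
        return node_count <= MAX_DIRECT_ATTACHED_NODES[storage_system]

    print(nodes_supported("CX3-20c", 4))  # True: up to four nodes
    print(nodes_supported("AX150i", 3))   # False: AX150i supports two nodes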
