xStack Storage DSN-Series SAN Arrays
VMware ESX 3.0.2 & 3.5

Executive Summary

The xStack Storage Series iSCSI storage arrays from D-Link provide cost-effective, easy-to-
deploy shared storage solutions for applications such as the VMware Infrastructure 3 server
virtualization software. This document describes the features and performance of the D-Link
xStack Storage™ Series iSCSI storage arrays configured with typical server systems, provides
instructions for using them with VMware, and offers recommendations.
Overview
Server virtualization programs such as VMware run best when the datacenter or enterprise is
organized into "farms" of servers that are connected to shared storage. By placing the virtual
machines' virtual disks on storage area networks accessible to all the virtualized servers, the
virtual machines can be migrated from one server to another within the farm for purposes of load
balancing or failover. VMware Infrastructure 3 uses the VMotion live migration facility in its
Distributed Resource Scheduling feature and provides a High Availability component that uses
shared storage to quickly restart a virtual machine on a different ESX server if the original ESX
server fails. Shared storage is key to enabling VMotion because, when a
virtual machine is migrated from one physical server to another, the virtual machine's virtual disk
doesn't actually move. Only the virtual disk's ownership is changed while it continues to reside in
the same place.
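
To make this concrete, the following sketch models migration against shared storage. It is a
conceptual illustration only, not VMware code; the VirtualMachine class and the migrate function
are hypothetical, and the point is simply that migration reassigns the owning host while the virtual
disk's path on shared storage never changes.

```python
# Conceptual illustration only -- not VMware code. The class and function names
# are hypothetical; the point is that migration changes ownership of the VM,
# while its virtual disk stays in the same place on the shared iSCSI array.
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    vmdk_path: str      # resides on shared storage visible to every ESX host
    owner_host: str     # the ESX server currently running this VM

def migrate(vm: VirtualMachine, target_host: str) -> None:
    """VMotion-style migration: only the owning host changes; no disk data moves."""
    vm.owner_host = target_host

vm = VirtualMachine("web01", "/vmfs/volumes/dlink-san/web01/web01.vmdk", "esx-a")
migrate(vm, "esx-b")
print(vm.owner_host, vm.vmdk_path)   # new host, same virtual disk location
```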
The xStack Storage iSCSI storage arrays provide excellent performance, reliability and
functionality, and they do not require the specialized hardware and skills that a traditional Fibre
Channel (FC) storage network demands. An FC storage network starts with the fabric, which
involves the use of FC host bus adapters
(HBAs) in each server, connected by fiber cables to one or more FC switches, which in turn can
network multiple storage arrays, each supporting a scalable number of high speed disk
enclosures. An application's request for an input or output (IO) to storage originates as a SCSI
(Small Computer System Interface) request, is packed into an FC frame by the FC HBA, and is
sent down the fiber cable to the FC switch for dispatch to the storage array that contains the
requested data, similar to the way Internet Protocol (IP) packets are sent over Ethernet. For
smaller IT shops or for those that are just starting out in the virtualization arena, an alternate
shared storage paradigm is emerging that employs iSCSI (Internet SCSI) to connect the servers to
the storage. In this case the communication between the server and the data storage uses standard
Ethernet network interface cards (NICs), switches and cables. SCSI IO requests are packed into
standard Internet protocol packets and routed to the iSCSI storage through Ethernet switches and
routers. With iSCSI, customers can leverage existing networking expertise and equipment to
simplify their implementation of a storage network. Like Fibre Channel, iSCSI supports block-
level data transmission for all applications, not just VMware. For security, iSCSI provides CHAP
authentication as well as IPsec encryption.
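
As a concrete illustration of the CHAP mechanism mentioned above, the sketch below computes
a CHAP response the way RFC 1994 defines it: an MD5 digest over the identifier, the shared
secret, and the challenge issued by the target. The identifier, secret, and challenge values are
invented for the example; in practice the iSCSI initiator and target perform this exchange during
login negotiation.

```python
# Hedged sketch of the CHAP calculation defined in RFC 1994:
# response = MD5(identifier || secret || challenge).
# The values below are invented for illustration; a real initiator receives the
# identifier and challenge from the iSCSI target during the login phase.
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

ident = 0x27                                   # example CHAP identifier
secret = b"example-chap-secret"                # shared secret known to both sides
challenge = bytes.fromhex("a1b2c3d4e5f60718")  # example challenge from the target

print(chap_response(ident, secret, challenge).hex())
```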
Servers can communicate with iSCSI storage using two methods. The first involves the use of an
add-in card called an iSCSI hardware initiator or host bus adapter, analogous to the Fibre Channel
HBA, which connects directly to the datacenter's Ethernet infrastructure. The second performs
the iSCSI conversion in software and sends the resulting packets through a standard Ethernet NIC.
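
Because a software initiator rides on the ordinary TCP/IP stack, confirming that a server can
reach an iSCSI target portal is just a standard socket test. The short sketch below assumes a
hypothetical portal address; it demonstrates only that the transport is plain TCP on the well-known
iSCSI port 3260, not how an initiator actually logs in to the array.

```python
# Minimal reachability check: a software iSCSI initiator needs nothing more
# exotic than a TCP connection to the target portal on port 3260 (the
# IANA-registered iSCSI port). The address below is a placeholder, not a real array.
import socket

def portal_reachable(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(portal_reachable("192.168.1.50"))   # placeholder portal IP
```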