
WHITE PAPER
VMware vSphere™ 4 Fault Tolerance:
Architecture and Performance


Summary of Contents for VMware vSphere 4

  • Page 1: VMware vSphere™ 4 Fault Tolerance: Architecture and Performance
  • Page 2: Table Of Contents

    3.8. Microsoft Exchange Server 2007 .......... 13
    4. VMware Fault Tolerance Performance Summary .......... 14
    5. Conclusion
  • Page 3: Vmware Fault Tolerance Architecture

    VMware® Fault Tolerance (FT) provides continuous availability to virtual machines, eliminating downtime and disruption even in the event of a complete host failure. This white paper gives a brief description of the VMware FT architecture and discusses the performance implications of this feature with data from a wide variety of workloads.
  • Page 4: Vmware Vlockstep Interval

    VMware white paper
    Figure 2. High-Level Architecture of VMware Fault Tolerance (diagram: primary and secondary hosts exchanging FT logging traffic and ACKs via record/replay, with a client and shared storage)
    The communication channel between the primary and the secondary host is established by the hypervisor using a standard TCP/IP socket connection, and the traffic flowing between them is called FT logging traffic.
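The FT logging traffic described above must carry every non-deterministic input the primary receives, which in practice is dominated by incoming network packets and disk reads. A minimal sizing sketch follows; the formula, the 20% overhead factor, and the function name are assumptions drawn from VMware's general FT sizing guidance, not figures stated in this paper:

```python
def ft_logging_bandwidth_mbps(disk_read_mbps, net_rx_mbps, overhead=1.2):
    """Estimate required FT logging bandwidth in Mbits/sec.

    Assumption: logging traffic is dominated by disk reads and received
    network packets (both must be forwarded to the secondary), plus
    roughly 20% overhead for other non-deterministic events and headers.
    """
    return (disk_read_mbps + net_rx_mbps) * overhead

# Example: a VM averaging 40 Mbits/sec of disk reads and
# 100 Mbits/sec of received network traffic.
print(ft_logging_bandwidth_mbps(40, 100))  # -> 168.0
```

A workload like netperf Rx, discussed later in the paper, sits at the high end of this estimate because nearly all of its input is received network traffic.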
  • Page 5: Performance Aspects And Best Practice Recommendations

    2. Performance Aspects and Best Practice Recommendations
    This section describes the performance aspects of Fault Tolerance along with best practice recommendations to maximize performance. For operational best practices, please refer to the VMware Fault Tolerance Recommendations and Considerations on VMware vSphere 4 white paper.
  • Page 6: I/O Latencies

    To ensure that the secondary virtual machine runs as fast as the primary, it is recommended that:
    • The hosts in the FT cluster are homogeneous, with similar CPU make, model, and frequency. The CPU frequency difference should not exceed 400 MHz.
    • Both the primary and secondary hosts use the same power management policy.
    • CPU reservation is set to full for cases where the secondary host could be overloaded. The CPU reservation setting on the primary applies to the secondary as well, so setting a full CPU reservation ensures that the secondary gets CPU cycles even when there is CPU contention.
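The homogeneity guideline above (CPU frequency difference within 400 MHz) reduces to a trivial admission check. This sketch is illustrative only; the function name and signature are made up, not part of any VMware API:

```python
def ft_hosts_compatible(freq_a_mhz, freq_b_mhz, max_delta_mhz=400):
    """Check the whitepaper's homogeneity guideline: the CPU frequency
    difference between two FT hosts should not exceed 400 MHz."""
    return abs(freq_a_mhz - freq_b_mhz) <= max_delta_mhz

print(ft_hosts_compatible(2800, 3000))  # -> True  (200 MHz apart)
print(ft_hosts_compatible(2400, 3000))  # -> False (600 MHz apart)
```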
  • Page 7: Drs And Vmotion

    It is recommended that FT primary virtual machines be distributed across multiple hosts and, as a general rule of thumb, that the number of FT virtual machines be limited to four per host. In addition to avoiding the possibility of saturating the network link, this also reduces the number of simultaneous live migrations required to create new secondary virtual machines in the event of a host failure.
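A hypothetical placement check for the four-FT-VMs-per-host rule of thumb might look like the following; the data structure and function are illustrative assumptions, not a VMware interface:

```python
from collections import Counter

def validate_ft_placement(vm_to_host, limit=4):
    """Return the hosts that exceed the recommended number of
    FT primary virtual machines (default limit: 4 per host)."""
    counts = Counter(vm_to_host.values())
    return {host: n for host, n in counts.items() if n > limit}

# Hypothetical placement: five FT primaries on esx1, one on esx2.
placement = {"vm1": "esx1", "vm2": "esx1", "vm3": "esx1",
             "vm4": "esx1", "vm5": "esx1", "vm6": "esx2"}
print(validate_ft_placement(placement))  # -> {'esx1': 5}
```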
  • Page 8: Kernel Compile

    Figure 4. SPECjbb2005 Performance (FT disabled vs. FT enabled; RHEL 5 64-bit, 4GB; FT traffic: 1.4 Mbits/sec)
    3.2. Kernel Compile
    This experiment measures the time taken to do a kernel compile, a workload that is both CPU and MMU intensive due to the forking of many parallel processes.
  • Page 9: Netperf Throughput

    3.3. Netperf Throughput
    Netperf is a micro-benchmark that measures the throughput of sending and receiving network packets. In this experiment netperf was configured so packets could be sent continuously without having to wait for acknowledgements. Since all the receive traffic needs to be recorded and then transmitted to the secondary, netperf Rx represents a workload with significant FT logging traffic.
  • Page 10: Filebench Random Disk Read/Write

    Figure 7. Netperf Latency Comparison (latency-sensitive case; receives and transmits; FT traffic: Rx 500 Mbits/sec, Tx 36 Mbits/sec)
    3.5. Filebench Random Disk Read/Write
    Filebench is a benchmark designed to simulate different I/O workload profiles. In this experiment, Filebench was used to generate random I/Os using 200 worker threads. This workload saturates available disk bandwidth for the given block size. Enabling FT did not...
  • Page 11: Oracle 11G

    3.6. Oracle 11g
    In this experiment, an Oracle 11g database was driven using the Swingbench Order Entry OLTP (online transaction processing) workload. This workload has a mixture of CPU, memory, disk, and network resource requirements. Eighty simultaneous database sessions were used in this experiment. Enabling FT had negligible impact on throughput as well as latency of transactions.
    Figure 9. Oracle 11g Database Performance (throughput) (Oracle Swingbench throughput; FT traffic: 11 ...)
  • Page 12: Microsoft Sql Server 2005

    3.7. Microsoft SQL Server 2005
    In this experiment, the DVD Store benchmark was used to drive the Microsoft SQL Server® 2005 database. This benchmark simulates online transaction processing of a DVD store. Sixteen simultaneous user sessions were used to drive the workload. As with the previous benchmark, this workload has a mixture of CPU, memory, disk, and networking resource requirements.
  • Page 13: Microsoft Exchange Server 2007

    3.8. Microsoft Exchange Server 2007
    In this experiment, the Loadgen workload was used to generate load against Microsoft Exchange Server 2007. A heavy user profile with 1600 users was used. This benchmark measures the latency of operations as seen from the client machine. The performance charts below report both average latency and 95th percentile latency for various Exchange operations. The generally accepted threshold for acceptable latency is 500 ms for the SendMail operation. While FT caused a slight increase, the observed SendMail latency was well under 500 ms both with and without FT.
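As a sketch of how the 95th percentile latency reported above is compared against the 500 ms SendMail threshold, the nearest-rank percentile can be computed as follows. The latency samples here are invented for illustration; they are not measurements from this paper:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the ceil(pct/100 * N)-th smallest sample."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical SendMail latencies in milliseconds.
latencies_ms = [120, 150, 180, 200, 220, 250, 300, 340, 380, 450]
p95 = percentile(latencies_ms, 95)
print(p95, p95 < 500)  # -> 450 True
```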
  • Page 14: Vmware Fault Tolerance Performance Summary

    ... Fault Tolerance enabled.
    5. Conclusion
    VMware Fault Tolerance is a revolutionary new technology that VMware is introducing with vSphere. The architecture and design of VMware vLockstep technology allows hardware-style Fault Tolerance on single-CPU virtual machines with minimal impact to performance. Experiments with a wide variety of synthetic and real-life workloads show that the performance impact on throughput...
  • Page 15: Appendix A: Benchmark Setup

    Appendix A: Benchmark Setup
    Primary and secondary hosts (connected by a cross cable), each with:
    Intel Xeon E5440, 2.8GHz, 8 CPUs
    8GB of RAM
    Intel Oplin XF SR 10Gb NIC...
  • Page 16: Appendix B: Workload Details

    Appendix B: Workload Details
    SPECjbb2005
    Virtual machine configuration: 1 vCPU, 4GB RAM, Enhanced VMXNET virtual NIC, LSI Logic virtual SCSI adapter
    OS version: RHEL 5.1, x64
    Java version: JRockit R27.4.0, Java 1.6.0_22
    Benchmark parameters:
    No. of warehouses: Two
    JVM parameters: -XXaggressive -Xgc:parallel -XXcompactratio8 -XXminblocksize32k -XXlargeObjectLimit=4k -Xmx1024m -Xms1024m
    Note: Scores for the first warehouse run were ignored.
    Kernel Compile
    Virtual machine configuration: 1 vCPU, 1GB RAM, LSI Logic virtual SCSI adapter
    OS version: SLES 10 SP2 x86_64
    Kernel version: 2.6.16.60-0.21-default
  • Page 17: Mssql 2005 - Dvd Store Benchmark

    Swingbench configuration:
    Swingbench version: 2.2, Calling Circle database
    No. of orders: 23550492
    No. of customers: 864967
    Runtime: 30 mins
    JDBC driver: ojdbc6.jar
    Driver type: Thin
    No. of users: 80
    Pooled: 1
    LogonDelay: 0
    Transaction MinDelay: 50
    Transaction MaxDelay: 250
    QueryTimeout: 60
    Workload weightage: NewCustomerProcess – 20, BrowseProducts – 50, ProcessOrders – 10, BrowseAndUpdateOrders – 50
    Note: Database was restored from backup before every run.
    MSSQL 2005 —...
  • Page 18: Exchange 2007 - Loadgen

    Total number of tasks: 107192 (1.24 tasks per second)
    Notes:
    • Exchange mailbox database was restored from backup before every run
    • Microsoft Exchange Search Indexer service was disabled when the benchmark was run
    VMware vSphere 4 Fault Tolerance: Architecture and Performance
    Source: Technical Marketing, SD Revision: 20090811
  • Page 19 VMware, Inc. 3401 Hillview Ave Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com Copyright © 2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents.
