Appendix B – Benchmarking Comments
NComputing does not offer terminal benchmarks but can provide some direction if you would
like to try it yourself.
Benchmarking a multi-user system is a challenging task. Traditional PC benchmarks often focus
on processor-intensive applications and attempt to max out the system's CPU utilization to
deliver a "score" or a "time" taken to perform tasks such as 3-D graphics rendering or audio/video
compression. Such benchmarks may have relevance to high-end users (such as gamers or video
editors) but are not representative of typical office or school workloads. Other
"standard" benchmarks that try to simulate office workloads generate scores that represent
maximum utilization of CPU performance and of I/O performance for peripherals such as disk drives,
but again, a higher score on these types of benchmarks may not translate into a
noticeably different user experience with normal day-to-day computing tasks. The typical PC
spends most of its time waiting for the user to type or to read on-screen text and basic graphics,
rather than rushing through tasks as quickly as the "benchmark" programs simulate. (To
understand this, you only need to watch one of the "office mark" type benchmarks executing: the
screens flash by so quickly that you cannot actually see what is going on. Nobody types that fast
or switches screens that quickly in real life.)
Benchmarking a shared computing or desktop virtualization environment has never been easy.
You cannot take a single-user, CPU-intensive benchmark and run multiple simultaneous copies to
get meaningful multi-user results. (And such benchmarks will not run in multiple instances on
terminals anyway.)
A better metric is to observe the end-user experience when running a workload that is "typical"
of what a user does in normal day-to-day computing. The reason CPU and PC vendors do not
promote such benchmarks is that they would show little difference between today's
PC and last year's model, because the system would spend most of its time waiting for the user
to type the next keystroke or to read a page just downloaded from the Internet.
One methodology for evaluating the performance of a multi-user environment is to measure
system utilization during a simulated set of office tasks that includes realistic delays
between tasks; this approach shows that even a basic PC of today can easily support 11 users running
common applications. The focus here is user-experience centric rather than CPU-cycle centric, and we
believe this is much more relevant to most actual user environments.
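For illustration only, the following Python sketch shows one way such a test could be approximated on the host PC: several simulated users perform light office-style tasks separated by realistic think-time pauses while CPU utilization is sampled. The task mix, the delay range, the 60-second run length, and the use of the third-party psutil library are assumptions made for this example; it is not an NComputing-supplied benchmark.

    # Sketch of a user-experience-centric load test: simulated users do light
    # office-style work with realistic think time while CPU utilization is sampled.
    import random
    import threading
    import time

    import psutil  # third-party package: pip install psutil

    RUN_SECONDS = 60          # assumed measurement window
    NUM_USERS = 11            # matches the user count discussed above
    THINK_TIME = (2.0, 8.0)   # assumed pause, in seconds, between a user's tasks

    def office_task():
        # Stand-in for a light task: build and reformat a small block of text.
        text = " ".join(str(random.randint(0, 9999)) for _ in range(2000))
        return text.upper().split()

    def simulated_user(stop_event):
        # Each simulated user alternates between a short task and a realistic pause.
        while not stop_event.is_set():
            office_task()
            time.sleep(random.uniform(*THINK_TIME))

    def main():
        stop = threading.Event()
        users = [threading.Thread(target=simulated_user, args=(stop,), daemon=True)
                 for _ in range(NUM_USERS)]
        for user in users:
            user.start()

        # Sample overall CPU utilization once per second for the run window.
        samples = []
        end = time.time() + RUN_SECONDS
        while time.time() < end:
            samples.append(psutil.cpu_percent(interval=1.0))

        stop.set()
        print("avg CPU: %.1f%%  peak CPU: %.1f%%" % (sum(samples) / len(samples), max(samples)))

    if __name__ == "__main__":
        main()

On a typical office workload the average utilization reported by a script like this stays low, which is the point of the user-experience-centric argument above: the idle time between keystrokes and page reads, not raw CPU throughput, dominates the experience.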
Performance Metrics
There are a number of other metrics that most users find relevant in real-world office and
classroom environments.
i) System boot time. How long does it take to get to a Windows logon prompt and
access a usable desktop after powering up the system?