Figure 14.15 Graph displaying the three possible scenarios while monitoring the signal strength
This raises the question: what percentage of end stations must be experiencing –63dB or better at any given
time? Depending on the situation, the requirement might be that 80% must have –63dB or better (which the
red and green distributions achieve), or it might be that 98% must have better than –63dB (which only the
red distribution achieves).
SLA 2 — 80% of end stations will experience –63dB or better
In any case, both the mean and the standard deviation must be monitored. If success is defined as having
80% of the end stations at –63dB or better, the mean of the measured signal strengths needs to be at least
one standard deviation better than –63dB: since +/– one standard deviation accounts for 68% of a normally
distributed population, the single 'tail' beyond one standard deviation on the worse side holds 32% / 2 = 16%
of the stations, which is just slightly better than the 20% we permit to be worse than –63dB.
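
As a sanity check of that arithmetic, the short Python sketch below uses the standard library's NormalDist to confirm that when the mean sits exactly one standard deviation better than –63dB, roughly 84% of a normally distributed population measures –63dB or better; the standard deviation value used here is purely illustrative.

    from statistics import NormalDist

    threshold_db = -63.0                 # SLA threshold from the example
    sigma_db = 4.0                       # illustrative standard deviation
    mean_db = threshold_db + sigma_db    # mean exactly one sigma better than the threshold

    # Fraction of a normal population at or above the threshold:
    # 1 - CDF(-63) = 1 - 0.1587 ~= 0.84, comfortably above the 80% target.
    fraction_better = 1.0 - NormalDist(mu=mean_db, sigma=sigma_db).cdf(threshold_db)
    print(f"{fraction_better:.1%} of stations at {threshold_db} dB or better")
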
So, to ensure that our threshold is met, we routinely fetch the mean and standard deviation from the wireless
infrastructure and check that the mean is at least one standard deviation better than –63dB; that is, the mean
minus one standard deviation must still be –63dB or better.
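
A minimal sketch of that routine check, assuming the mean and standard deviation have already been retrieved from the infrastructure; the function name and example values are illustrative and not part of the WS5000 interface.

    def sla_met(mean_db: float, std_dev_db: float, threshold_db: float = -63.0) -> bool:
        """True when the mean is at least one standard deviation better than the
        threshold, i.e. roughly 84% of stations experience -63dB or better."""
        return (mean_db - std_dev_db) >= threshold_db

    # A -58dB mean with a 4dB spread satisfies the 80% SLA;
    # the same mean with an 8dB spread does not.
    assert sla_met(-58.0, 4.0) is True
    assert sla_met(-58.0, 8.0) is False
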

14.6.1.2 Who calculates Standard Deviation?

There are two possibilities: the network infrastructure devices themselves, or an external, server-based
network management application.
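
If the server-based application does the work, it would typically poll the infrastructure for raw per-station readings and compute both statistics itself. A hedged sketch of that approach, assuming a hypothetical collect_station_signal_strengths() poll that returns one recent reading per end station:

    from statistics import mean, stdev

    def station_stats(samples_db: list[float]) -> tuple[float, float]:
        """Mean and sample standard deviation of per-station signal strengths."""
        return mean(samples_db), stdev(samples_db)

    # samples_db = collect_station_signal_strengths()   # hypothetical infrastructure poll
    samples_db = [-55.0, -61.0, -59.0, -66.0, -58.0]     # illustrative readings
    mean_db, std_dev_db = station_stats(samples_db)
    print(f"mean = {mean_db:.1f} dB, standard deviation = {std_dev_db:.1f} dB")
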
When an SLA is negotiated, it must specify not just the threshold (–63dB in our example) and the percentage
of the population in compliance (80% in our example), but also the monitoring interval. An environment that

Advertisement

Table of Contents
loading

Table of Contents