Figure 5. Software mockup of a portable music player
ing performed by the right hand. The system continuously
highlighted in red the directional arrow corresponding to
the system's current finger gesture recognition result. This
visual feedback told participants what action the system
would take if they squeezed their left hand at that moment.
Participants completed three different tasks with the porta-
ble music player. They (a) navigated from the top of the
menu structure to the list of songs and selected a specified
song, (b) navigated from a random starting point in the
songs list to a particular song, and (c) advanced to the next
song, starting at a random song in the song list. Above the
music player participants were given task instructions such
as "Select Even Flow." They would then do a series of di-
rection gestures to navigate the menu and select the song.
Participants completed five blocks of these three tasks for
each object (mug and heavy bag), for 30 tasks in total.
Data Processing Technique
To classify gestures from an EMG signal, we used an approach
similar to that of Saponas et al. [18]: performing basic signal
processing, computing a set of features, using those features
to train a support vector machine (SVM) [1], and then using
that SVM to classify finger gestures. While Saponas et al.
did not test this in real time, we show here that the approach
supports a real-time system. We outline the procedure here, but more
details on the approach can be found in their paper [18].
Basic Signal Processing
Our first step is to convert the raw EMG data into a form
suitable for our machine learning algorithm. We divide the
signal into 32 segments per second (about 31 ms per segment).
By dividing the data into segments, we transform it into a
time-independent dataset and can then treat each segment as a
single sample of data.
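This windowing step can be sketched as follows; the function and parameter names are ours, not the authors', and the sample rate is left as a parameter since it is not restated in this excerpt:

```python
import numpy as np

def segment_signal(emg, sample_rate, segments_per_second=32):
    """Split a multi-channel EMG recording into fixed-length windows.

    emg: array of shape (n_channels, n_samples).
    Returns an array of shape (n_windows, n_channels, window_len),
    so each ~31 ms window can be treated as one sample of data.
    """
    window_len = sample_rate // segments_per_second   # samples per ~31 ms window
    n_windows = emg.shape[1] // window_len
    trimmed = emg[:, :n_windows * window_len]         # drop any trailing partial window
    windows = trimmed.reshape(emg.shape[0], n_windows, window_len)
    return windows.transpose(1, 0, 2)                 # (window, channel, time)
```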
Feature Generation
For each 31 ms sample, we generate three classes of features,
which we use for training and testing the classifier.
The first set of features is the Root Mean Square (RMS)
amplitude in each channel, which correlates with the magnitude
of muscle activity. From the six base RMS features generated
by sensors on the right arm, we create another fifteen
features by taking the ratio of the base RMS values between
each pair of channels. These ratios make the feature space
more expressive by representing relationships between
channels, rather than treating each as being independent.
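The 6 base RMS values plus the 15 pairwise ratios (one per unordered channel pair) give 21 features. A minimal sketch, with names of our own choosing:

```python
import numpy as np
from itertools import combinations

def rms_features(window, eps=1e-12):
    """window: (n_channels, window_len) EMG segment.

    Returns the 6 per-channel RMS amplitudes plus the 15 pairwise
    RMS ratios = 21 features (for 6 channels)."""
    rms = np.sqrt(np.mean(window ** 2, axis=1))       # one RMS value per channel
    ratios = [rms[i] / (rms[j] + eps)                 # ratio for each channel pair
              for i, j in combinations(range(len(rms)), 2)]
    return np.concatenate([rms, ratios])
```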
The second set of features is Frequency Energy, indicative
of the temporal patterns of muscle activity. To derive these
features, we compute the fast Fourier transform (FFT) for
each sample and square the FFT amplitude, which gives the
energy at each frequency. We create 13 bins over the 32 Hz
sampling range for each of the six channels on the right
arm. This yields 78 frequency energy features per sample.
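A sketch of this binning, assuming the FFT frequencies are split evenly into 13 bins per channel (the paper does not specify the exact bin boundaries):

```python
import numpy as np

def frequency_energy_features(window, n_bins=13):
    """window: (n_channels, window_len) EMG segment.

    Squares the FFT amplitudes to get energy per frequency, then pools
    frequencies into n_bins per channel: 6 channels * 13 bins = 78 features."""
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2   # energy at each frequency
    feats = []
    for channel_energy in spectrum:
        bins = np.array_split(channel_energy, n_bins)     # group frequencies into bins
        feats.extend(b.sum() for b in bins)               # total energy per bin
    return np.asarray(feats)
```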
The third set of features is Phase Coherence, which loosely
measures the relationships among EMG channels. We
create fifteen such features by taking the ratios of the aver-
age phase between all channel pairs on the right arm.
These calculations result in 114 features per sample for
right-hand gesture classification. The only feature we use
for left-hand "squeeze" recognition is a single RMS feature
computed over the difference of the two channels available
on the left arm.
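The paper does not define "average phase" precisely; one plausible reading, sketched below, takes the mean FFT phase angle per channel and then the ratio for each channel pair, giving the 15 phase-coherence features:

```python
import numpy as np
from itertools import combinations

def phase_coherence_features(window, eps=1e-12):
    """window: (n_channels, window_len) EMG segment.

    Average FFT phase per channel, then one ratio per channel
    pair: 15 features for 6 channels. The exact definition of
    'average phase' is our assumption, not the paper's."""
    phases = np.angle(np.fft.rfft(window, axis=1)).mean(axis=1)  # mean phase per channel
    return np.array([phases[i] / (phases[j] + eps)
                     for i, j in combinations(range(window.shape[0]), 2)])
```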
Classification of Right-Hand Finger Gestures
Support vector machines (SVMs) are a set of supervised
machine learning methods that take a set of labeled training
data and create a function that can be used to predict the
labels of unlabeled data. For our experiment, we used the
Sequential Minimal Optimization version of SVMs [16].
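As an illustration of this training step, the sketch below uses scikit-learn's `SVC` (whose libsvm backend trains with an SMO-style solver) as a stand-in for the cited SMO implementation; the data here is random and purely hypothetical, one 114-feature row per ~31 ms sample:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in data: each row is the 114-feature vector for
# one ~31 ms sample, labeled with the gesture performed at that time.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 114))
y_train = rng.integers(0, 5, size=200)    # e.g. five finger-gesture classes

clf = SVC(kernel="linear")                # libsvm fits via an SMO-style solver
clf.fit(X_train, y_train)
predictions = clf.predict(rng.normal(size=(10, 114)))
```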
In supervised machine learning, training data inherently
needs to be labeled with a 'ground truth'. In our case, this is
the gesture being performed by a participant at a given time
when the muscle-sensing data segment was gathered. Be-
cause people respond to a stimulus with varying delay,
there is some amount of mislabeled information early with-
in each stimulus presentation. We combat this issue by dis-
carding all samples from the first half of presentation and
saving only the latter half as training data for our system.
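This filtering of reaction-time-lagged labels can be sketched as follows (function name and data layout are ours):

```python
def select_training_samples(samples, presentation_ids):
    """Keep only the latter half of the samples within each stimulus
    presentation, discarding early windows likely mislabeled because
    of the participant's reaction delay.

    samples: list of feature vectors, in recording order.
    presentation_ids: parallel list giving each sample's stimulus presentation.
    """
    kept = []
    for pid in sorted(set(presentation_ids)):
        idx = [i for i, p in enumerate(presentation_ids) if p == pid]
        kept.extend(idx[len(idx) // 2:])   # latter half of this presentation only
    return [samples[i] for i in kept]
```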
While classification results were generated 32 times a
second, the system determined the currently recognized
gesture at any given time as the last gesture classified three
times in a row. For example, if the previous four samples
were classified as "index, index, index, middle", the system
would use "index" as the currently recognized gesture. We
chose this approach to reduce sensitivity to momentary
fluctuations in classification. Throughout this paper, our
classifiers were trained and tested independently on data
from each participant during a single participant session.
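The three-in-a-row rule behaves like a debouncer over the 32 Hz classification stream; a minimal sketch (class name ours):

```python
class GestureDebouncer:
    """Report a gesture only once it has been classified three
    consecutive times, smoothing momentary misclassifications."""

    def __init__(self, required_run=3):
        self.required_run = required_run
        self.current = None        # last gesture seen required_run times in a row
        self.run_label = None      # label of the current run
        self.run_length = 0        # length of the current run

    def update(self, label):
        if label == self.run_label:
            self.run_length += 1
        else:
            self.run_label = label
            self.run_length = 1
        if self.run_length >= self.required_run:
            self.current = label
        return self.current        # recognized gesture (may lag the raw stream)
```

Fed the paper's example sequence "index, index, index, middle", this reports "index" after the third sample and keeps reporting "index" on the lone "middle".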
Classification of Left-Hand Squeeze
Detecting the squeezing gesture performed by the left hand
is much simpler. We take the RMS feature of the difference
of the two channels on the left arm. This process removes
noise such as a person's cardiac electrical activity,
giving a good estimate of the total muscle activity in the
upper forearm. The system took any value above 40% of
the maximum value seen during calibration to mean that the
left hand had been squeezed. We selected the 40% threshold
empirically, based on pilot studies. The system would then "sleep"
for a quarter-second before attempting to detect another
left-hand squeeze. We enforced this "silent" period to pre-
vent unintentional rapid sequences of selections.
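The threshold-plus-refractory logic described above can be sketched as follows (names and the per-sample framing are ours):

```python
def detect_squeezes(rms_values, calibration_max, sample_rate=32,
                    threshold_frac=0.40, refractory_s=0.25):
    """Return indices of samples treated as left-hand squeezes.

    rms_values: per-sample RMS of the differenced left-arm channels.
    A squeeze fires when RMS exceeds threshold_frac (40%) of the
    maximum seen during calibration; detection then sleeps for
    refractory_s (a quarter second) to avoid rapid repeat selections."""
    threshold = threshold_frac * calibration_max
    refractory = int(refractory_s * sample_rate)   # samples to skip after a hit
    squeezes, sleep_until = [], 0
    for i, v in enumerate(rms_values):
        if i >= sleep_until and v > threshold:
            squeezes.append(i)
            sleep_until = i + refractory           # enforce the "silent" period
    return squeezes
```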
