Figure 7. Mean classification accuracies of hands-busy gestures
(three- and four-finger gesture sets, with and without visual
feedback, for the mug and bags conditions). Error bars represent
the standard deviation.
the system's current recognition result. A two-way
ANOVA (finger × presence/absence of visual feedback) on
completion time showed that the difference in feedback
conditions was significant (F(1,10) = 19.77, p = 0.001).
These results suggest that there is a time-accuracy tradeoff
for visual feedback. Participants were probably spending
time inspecting the feedback and making corrections to
increase overall accuracy. In future work, we would like to
explore less intrusive methods of providing feedback.
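To make the analysis above concrete, the following sketch (synthetic data; the participant count, condition names, and timing values are our placeholders, not the study's data) shows how a two-way repeated-measures ANOVA of finger set × feedback on completion time could be run with statsmodels:

# Sketch of a finger-set x feedback repeated-measures ANOVA on
# completion time. All values below are synthetic placeholders; in the
# study each participant contributed a completion time per condition.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for participant in range(11):                          # hypothetical participant count
    for finger_set in ("3-finger", "4-finger"):
        for feedback in ("none", "visual"):
            base = 4.0 if feedback == "none" else 5.0  # feedback assumed slower here
            rows.append({"participant": participant,
                         "finger_set": finger_set,
                         "feedback": feedback,
                         "time_s": base + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# AnovaRM expects exactly one observation per participant per cell.
result = AnovaRM(df, depvar="time_s", subject="participant",
                 within=["finger_set", "feedback"]).fit()
print(result.summary())  # F and p values for each factor and their interaction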
Part C: Portable Music Player Application Recognition
In the portable music player application, participants com-
pleted five blocks of three tasks with both the mug and
bags. For each of these tasks, we recorded whether they
selected the correct song, how many navigation steps they
used above the minimum steps required to select the correct
song, and how long it took them to complete each task.
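A minimal sketch of how per-task logs of this kind could be summarized (the record fields, step counts, and times below are hypothetical placeholders, not data from the study):

# Summarize hypothetical per-task logs: success rate, navigation steps
# used beyond the minimum, and completion time per task.
from statistics import mean, median

# Each record: (selected_correct_song, steps_used, minimum_steps, seconds)
tasks = [
    (True, 17, 12, 39.0),
    (True, 14, 12, 35.5),
    (False, 21, 12, 58.2),
]

accuracy = sum(ok for ok, *_ in tasks) / len(tasks)
extra_steps = [steps - minimum for _, steps, minimum, _ in tasks]
times = [seconds for *_, seconds in tasks]

print(f"correct selections: {accuracy:.0%}")
print(f"mean extra navigation steps: {mean(extra_steps):.1f}")
print(f"time: mean {mean(times):.1f}s, median {median(times):.1f}s")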
In the travel mug scenario, two of the participants found
that the system's classification of their pinky finger did not
work well enough for them to complete the portable music
player tasks effectively. We removed their data from our analysis.
When navigating the three-level hierarchical menu to select
a song, participants on average selected the correct song
85% of the time with bags in their hands and 87% of the
time while holding a travel mug. A task counted as a failure
if the participant selected any song other than the one
specified. On average participants
spent 45 seconds (median 39 seconds) navigating the menus
through an average of 15 gestures per task with bags, and
58 seconds (median 40 seconds) through an average of 14
gestures with the mug. The goal of this phase of the
experiment was to demonstrate that our real-time recognition
system functioned well enough to be used in an interactive
system. Some participants found it somewhat difficult to
control the music player, while several stated that it worked
very well for them and asked when it might be released as a
commercial product.
DISCUSSION
We have explored the feasibility of building forearm mus-
cle-sensing based finger gesture recognizers that are inde-
pendent of posture and shown that these recognizers per-
formed well even when participants' hands were already
holding objects. In this section, we discuss the implications
of these results for application design.
Posture Independence
The results from Part A suggest that while training data
from one arm posture is most useful in recognizing gestures
in the same posture, it is also possible to use our techniques
to train a single gesture recognizer that works reasonably
well in multiple arm positions. This suggests that electro-
myography based interactions could be deployed without
constraining wrist and hand positions. We feel that this is a
major step toward enabling real-world applications, particu-
larly applications in mobile settings. Users interact with
mobile devices in a variety of body postures (seated, stand-
ing, walking, etc.), and we would therefore expect a similar
variety of postures in the gesturing hand. Requiring a user
to train a separate classifier for multiple hand positions
would be costly, hence we are encouraged by our results
demonstrating the feasibility of cross-posture training.
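As a rough sketch of this cross-posture training idea (the features, posture names, and classifier below are placeholders we chose for illustration; the study used its own EMG feature set and classification pipeline), a single recognizer can be trained on data pooled across postures and then evaluated on each posture separately:

# Sketch: train one gesture classifier on EMG-style features pooled
# across several arm postures, then test it per posture. Synthetic data
# stand in for windowed, per-channel muscle-sensing features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
postures = ["hand_at_side", "hand_on_table", "arm_raised"]
n_samples, n_features, n_gestures = 60, 8, 4

def synth_block(posture_shift):
    # Placeholder for real feature extraction (e.g. RMS amplitude per channel).
    X = rng.normal(posture_shift, 1.0, size=(n_samples, n_features))
    y = rng.integers(0, n_gestures, size=n_samples)
    return X + y[:, None] * 0.8, y            # make gesture classes separable

train = {p: synth_block(i * 0.4) for i, p in enumerate(postures)}
test = {p: synth_block(i * 0.4) for i, p in enumerate(postures)}

# Pool training data from all postures into one model.
X_train = np.vstack([X for X, _ in train.values()])
y_train = np.concatenate([y for _, y in train.values()])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_train, y_train)

for posture, (X, y) in test.items():
    print(f"{posture}: accuracy {clf.score(X, y):.2f}")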
Hands-Busy Interaction
Traditional input modalities take advantage of our dexterity,
motor ability, and hand-eye coordination. However, in
many scenarios we have to choose between our everyday
behavior and manipulating a physical input device. In these
scenarios, muscle-computer interfaces leveraging gestures
that can be performed while our hands are already gripping
an object provide an opportunity for computing environ-
ments to better support hands-busy activities such as when
using a mobile phone while walking with a briefcase in
hand or operating a music player while jogging. The results
of Part B of our experiment demonstrate the possibility of
classifying gestures involving individual fingers even when
the whole hand is already engaged in a task, and even when
the arm is supporting a heavy load.
Quantity of Training Data and Classification Accuracy
Figure 8 shows that even with limited training data (10
blocks or approximately 70 seconds), average accuracies
exceed 80% for four-finger classification, suggesting that
the required amount of training for a muscle-computer in-
terface would be on par with that typically required to train
a speech recognition system. Future work will explore
building cross-user models that would allow instantaneous
use of our system without per-user training, leveraging per-
user training only to enhance performance.
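A sketch of how accuracy as a function of the amount of training data could be estimated (block counts, features, and the classifier are our placeholders; Figure 8 was produced with the study's own pipeline):

# Sketch: classification accuracy versus the number of training blocks,
# in the spirit of the curve in Figure 8.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
samples_per_block, n_features, n_gestures = 20, 8, 4

def synth_block():
    X = rng.normal(0.0, 1.0, size=(samples_per_block, n_features))
    y = rng.integers(0, n_gestures, size=samples_per_block)
    return X + y[:, None] * 0.6, y

blocks = [synth_block() for _ in range(25)]
X_test, y_test = synth_block()                 # held-out block for testing

for k in (5, 10, 15, 20):                      # vary the amount of training data
    X_train = np.vstack([X for X, _ in blocks[:k]])
    y_train = np.concatenate([y for _, y in blocks[:k]])
    clf = make_pipeline(StandardScaler(), SVC()).fit(X_train, y_train)
    print(f"{k} training blocks -> accuracy {clf.score(X_test, y_test):.2f}")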
Cross-User and Cross-Session Models
We trained and tested our classifier for a single participant
in a single session as is common with similar technologies
such as brain-computer interfaces [10, 19]. Future work will
evaluate the degree to which classifiers can be re-used
across sessions, and will focus on automatically configuring
a classification system without careful sensor placement.
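The kind of cross-session evaluation proposed here could look like the following sketch (the two sessions, the simulated placement drift, and the classifier are assumptions for illustration, not part of the study):

# Sketch: compare within-session accuracy against cross-session accuracy.
# A feature shift between sessions stands in for day-to-day sensor
# placement differences; real data would come from two recording sessions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def session(shift, n=120, n_features=8, n_gestures=4):
    X = rng.normal(shift, 1.0, size=(n, n_features))
    y = rng.integers(0, n_gestures, size=n)
    return X + y[:, None] * 0.7, y

(X1, y1), (X2, y2) = session(0.0), session(0.5)   # session 1 vs. session 2

clf = make_pipeline(StandardScaler(), SVC()).fit(X1, y1)
print(f"within-session accuracy (on training data, optimistic): {clf.score(X1, y1):.2f}")
print(f"cross-session accuracy: {clf.score(X2, y2):.2f}")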
Interaction Design Issues
Even if a system can recognize individual gestures with
reasonable accuracy, deployment in real-world scenarios
