
In the testing phase, participants received their cues via a highlighted finger on the display. However, rather than timing their responses to the stimuli, participants were asked to perform the gesture with their right hand and "lock it in" by clenching their left fist. To aid participants in this, we provided a small ball that they could squeeze with their left hand. The gesture could just as easily have been performed without the prop, as we demonstrate in Part B of the experiment. When the system recognized a squeezing movement with the left hand, it classified the gesture being performed with the right hand using the muscle-sensing data immediately preceding the squeeze.
Locking in a gesture by squeezing made the finger highlighting disappear for half a second, after which the system advanced to the next gesture. Since detecting the activation gesture is quicker and more robust than detecting individual finger gestures, the bimanual paradigm allows for rapid selection of the same gesture multiple times in a row, as well as a reliable way to avoid false positives.
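The lock-in mechanism lends itself to a compact sketch. The following is a minimal illustration under our own assumptions, not the authors' implementation: the buffer length, frame format, and the `classifier` and `squeeze_detector` objects are hypothetical.

```python
from collections import deque

WINDOW = 64  # assumed number of recent right-arm frames to retain

# Ring buffer of the most recent right-arm muscle-sensing frames.
right_arm_frames = deque(maxlen=WINDOW)

def on_emg_frame(right_frame, left_frame, classifier, squeeze_detector):
    """Process one synchronized pair of EMG frames.

    The right-hand gesture is classified only when the left-hand
    squeeze is detected, using the right-arm data that immediately
    preceded the squeeze.
    """
    right_arm_frames.append(right_frame)
    if squeeze_detector.detect(left_frame):      # robust activation gesture
        window = list(right_arm_frames)          # data preceding the squeeze
        return classifier.classify(window)       # committed recognition result
    return None                                  # otherwise keep buffering
```

Because only the squeeze commits a result, the same right-hand gesture can be held and locked in repeatedly, and stray right-hand activity between squeezes is simply ignored.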
Part B: Hands-Busy Finger Gestures
The second part of our experiment explored performing
finger gestures when the hands are already busy holding an
object. We looked at two different classes of objects. First,
we used a travel mug to represent small tool-sized objects
held in the hand. For this task, participants sat in a chair and
held the mug in the air as one might naturally hold a beverage (see Figure 3b). Second, we tested larger and heavier
objects being carried. Participants stood in front of the desk
and carried a laptop bag in each hand (see Figure 3c). Each
bag held a book weighing approximately one kilogram.
As in Part A, for both object types, we conducted a training
phase and a testing phase. These were done one object type
at a time and the order of the two object types was counter-
balanced across users.
Hands-Busy Training Phase
As before, participants performed 25 blocks of finger ges-
tures in response to stimuli. The same stimuli highlighting
fingers in the outline of a hand were used. Participants were asked to exert a little more pressure with the highlighted finger than with the other fingers. With the mug, this meant pressing on it a little more firmly with that finger; with the bag, it meant pulling on the handle a little harder. At the conclusion of the
training phase for each object, the collected data was used
to train the gesture recognition system for use in the subse-
quent phases. Once training data is collected, training the
system requires only a few seconds of computation.
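Because the per-object data set is small (25 blocks), fitting a classifier is quick. As a minimal sketch, assuming per-trial feature vectors and an SVM-style classifier via scikit-learn (the concrete features and model here are our assumptions, and `extract_features` is a hypothetical helper):

```python
import numpy as np
from sklearn.svm import SVC

def train_gesture_classifier(trials, labels, extract_features):
    """Fit a finger-gesture classifier from the training blocks.

    trials -- raw muscle-sensing windows, one per stimulus
    labels -- the finger that was highlighted for each trial
    extract_features -- assumed mapping from a window to a feature vector
    """
    X = np.array([extract_features(t) for t in trials])
    y = np.array(labels)
    model = SVC(kernel="linear")  # small data set: training takes seconds
    model.fit(X, y)
    return model
```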
Hands-Busy Testing Phase
In the testing phase of this part of the experiment, partici-
pants used the two-handed technique to perform gestures as
they did in Part A. However, unlike in Part A, participants
completed the stimulus-response task twice: once with vis-
ual feedback about the real-time classification, and once
without visual feedback. The order was counterbalanced
across participants and objects to avoid an ordering effect.
The "no visual feedback" condition was in the same style as
Part A's testing phase; a finger was highlighted and a par-
ticipant would perform that gesture then squeeze with their
left hand. When holding the travel mug, participants
squeezed an empty left hand with their fingers against the
lower pad of their thumb to "lock in" the current right-hand
gesture. When holding a bag in each hand, participants
squeezed the handle of the left-hand bag to "lock in" the
current right-hand gesture.
The "with visual feedback" condition added a second com-
ponent to the display of the hand. In addition to the red hig-
hlighting of the finger that should be used in the gesture, the
system also continuously highlighted its current gesture
recognition result in a semi-transparent blue (see Figure 4b-
c). We explained to participants that this was the system's
best guess at their current gesture. Users were asked to per-
form the red gesture and activate their response only when
they were confident it was correctly detected. As a side
effect, visual feedback also allowed participants to under-
stand the system's recognition behavior and to tailor their
gestures accordingly. The goal of this manipulation was to
explore the importance and tradeoffs of having visual feed-
back while using a muscle-computer interface.
Participants completed 25 blocks of gestures for each object
both with and without visual feedback. The order of the feedback conditions was counterbalanced across participants and objects.
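The two conditions differ only in whether the classifier's running output is rendered. A schematic of the per-frame display update, with `draw_target` and `draw_prediction` as hypothetical drawing helpers:

```python
def update_display(target_finger, predicted_finger, show_feedback,
                   draw_target, draw_prediction):
    """Render one frame of the stimulus display.

    The cued finger is always drawn in red; in the "with visual
    feedback" condition the classifier's current best guess is also
    drawn in semi-transparent blue, letting participants wait to lock
    in until the prediction matches the cue.
    """
    draw_target(target_finger, color="red")
    if show_feedback:
        draw_prediction(predicted_finger, color="blue", alpha=0.5)
```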
Part C: Controlling a Portable Music Player Application
In addition to testing the accuracy with which our system was able to classify gestures performed by participants, we also applied these gestures in a more ecologically valid application: a portable music player interface.
Our simulated portable music player (see Figure 5) is con-
trolled through a hierarchical menu interface similar to
those found in many mobile computing devices. Our player contained eight songs, and only the songs menu was populated. The menu system can be navigated using four directional arrows: the "up" and "down" arrows move a
selection cursor up and down in the current menu, while the
"left" and "right" arrows navigate backward or forward in
the menu structure. Forward navigation is also used to indi-
cate a final selection at the end of a series of navigations. In
music players, this corresponds to selecting a song.
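This four-way scheme maps naturally onto a small menu-tree data structure. A minimal sketch (class and method names are ours, not from the paper):

```python
class MenuNode:
    """One level of the hierarchical menu; a node with no children is a song."""
    def __init__(self, title, children=None):
        self.title = title
        self.children = children or []

class MenuNavigator:
    """Cursor-based four-way navigation over the menu tree."""
    def __init__(self, root):
        self.path = [root]   # stack of menus entered so far
        self.cursor = 0      # highlighted item in the current menu

    @property
    def current(self):
        return self.path[-1]

    def move(self, direction):
        items = self.current.children
        if direction == "up":
            self.cursor = max(0, self.cursor - 1)
        elif direction == "down":
            self.cursor = min(len(items) - 1, self.cursor + 1)
        elif direction == "left" and len(self.path) > 1:
            self.path.pop()          # backward: leave the current menu
            self.cursor = 0
        elif direction == "right" and items:
            chosen = items[self.cursor]
            if chosen.children:      # forward: descend into a submenu
                self.path.append(chosen)
                self.cursor = 0
            else:                    # forward on a leaf: final selection
                return chosen
        return None
```

A root menu whose songs submenu holds eight leaf nodes reproduces the player described above.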
We asked participants to control the portable music player
menu interface and complete a series of tasks using our
real-time muscle-computer interface. The training data from
Part B was used, since the hands were similarly loaded with
either the mug or the heavy bag. The user's inputs were
mapped to the directional controller of the portable music
player by assigning the index finger of the right hand to left,
the pinky finger to right, the middle finger to up, and the
ring finger to down. As in the other experiments, the left-hand grasping gesture was used to activate the gesture being performed with the right hand.
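Combining that mapping with the navigator sketched above gives the complete control path; the finger labels here are the hypothetical class names emitted by our classifier sketch:

```python
# Right-hand finger gestures mapped to the player's directional controller.
FINGER_TO_DIRECTION = {
    "index":  "left",
    "middle": "up",
    "ring":   "down",
    "pinky":  "right",
}

def on_locked_in_gesture(finger, navigator):
    """Dispatch a gesture committed by the left-hand squeeze."""
    song = navigator.move(FINGER_TO_DIRECTION[finger])
    if song is not None:
        print("Playing:", song.title)  # placeholder for starting playback
```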
