User-defined Motion Gestures for Mobile Interaction
Jaime Ruiz, Yang Li, Edward Lank
Presented at CHI 2011, May 7-12, 2011, Vancouver, British Columbia, Canada
Author Bios
- Jaime Ruiz is a doctoral student in the Human Computer Interaction Lab at the University of Waterloo. His research centers on alternative interaction techniques.
- Yang Li is a senior research scientist at Google and was a research associate at the University of Washington. He holds a PhD in Computer Science from the Chinese Academy of Sciences. He is primarily interested in gesture-based interaction, with many of his projects implemented on Android.
- Edward Lank is an Assistant Professor of Computer Science at the University of Waterloo with a PhD from Queen's University. Some of his research is on sketch recognition.
Summary
Hypothesis
What sort of motion gestures do users naturally develop? How can we produce a taxonomy for motion gestures?
Methods
The authors conducted a guessability study that directed users to perform what they thought would be appropriate gestures for a given task. Participants were given 19 tasks and were instructed to think aloud as well as provide a subjective preference rating for each produced gesture. No recognizer feedback was provided, and, to avoid the gulf of execution, users were told to consider the phone a "magic brick" that automatically understood their gesture. The sessions were video recorded and transcribed. The Android phone logged accelerometer data and kept its screen locked.
Tasks were divided into action and navigation-based tasks. Each task acted either on the phone itself or on an application, and each of these subdivisions was represented by at least one task while minimizing duplication. Similar tasks were grouped together; one set included answering, muting, and ending a call. Gestures were not finalized until the user completed all tasks in a set. The user then performed each gesture multiple times and reported on its effectiveness.
The authors evaluated their user-defined gesture set using an agreement score, which quantified the degree of consensus among the users.
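To make the agreement score concrete, below is a minimal Python sketch following the agreement formulation from Wobbrock et al.'s earlier surface-gesture elicitation work, which this paper builds on: for each task, the score sums the squared proportion of participants whose gestures fall into each group of identical gestures. The task names, gesture labels, and counts are invented for illustration.

```python
# Minimal sketch of the agreement-score computation used in guessability
# studies (Wobbrock et al.'s formulation). All labels below are made up.
from collections import Counter

def agreement(proposals):
    """Agreement for one task: sum over identical-gesture groups of
    (group size / total proposals) squared."""
    total = len(proposals)
    groups = Counter(proposals)
    return sum((size / total) ** 2 for size in groups.values())

# Hypothetical example: 20 participants proposing gestures for "answer call".
answer_call = ["phone_to_ear"] * 12 + ["shake"] * 5 + ["flip"] * 3
print(agreement(answer_call))     # (12/20)^2 + (5/20)^2 + (3/20)^2 = 0.445

# Overall agreement is the mean across all tasks.
tasks = {"answer_call": answer_call, "ignore_call": ["flip_face_down"] * 20}
print(sum(agreement(p) for p in tasks.values()) / len(tasks))
```

A higher score means more participants proposed the same gesture for that task; a task where everyone agrees scores 1.0.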
Results
Users produced the same gesture as their peers for many of the tasks. The transcripts suggested several themes. The first of these, mimicking normal use, imitated motions performed when normally using the phone. Users produced remarkably similar actions in this theme and found their gestures to be natural. Another theme was real-world metaphors, where users treated the device as though it were a physical object, such as ending a call by turning the phone over as though hanging up an older-style telephone. To clear the screen, users tended to shake the phone, which is reminiscent of an Etch-A-Sketch. The third theme, natural and consistent mappings, considered the users' mental model of how something should behave. The scrolling tasks were designed to test the users' mental model of navigation. In the XY plane, panning left involved moving the phone left. Zooming in and out involved moving the phone closer to and further from the user, as though the phone were a magnifying glass. Discrete navigation was preferred over navigation that varied with the force of the motion.
Users wanted feedback and designed their gestures so they could see the screen while performing the gesture. Most participants indicated that they would use motion gestures at least occasionally.
Based on the authors' taxonomy of the produced gestures, users opted for simple, discrete gestures along a single axis with low kinematic impulse.
The agreement scores for the user-defined gestures are similar to a prior study's scores for surface gestures. A consensus could not be reached on switching to another application or acting on a selection. Subjective ratings for goodness of fit were higher for gestures in the user-defined set than for gestures not in the set, while ease of use and frequency of use showed no significant difference between the two groups. Users stated that motion gestures could be reused in similar contexts, so the user-defined set can cover many of the tasks within an application.
Contents
Smartphones have two common input channels: a touchscreen and motion sensors. Gestures on the former are surface gestures, while gestures sensed by the latter are motion gestures. The authors focused on motion gestures, for which many design questions remain unanswered. They developed a taxonomy of parameters that can differentiate between types of motion gestures. Such a taxonomy can be used to create more natural motion gestures and to aid the design of sensors and toolkits for motion gesture interaction at both the application and system levels.
Most of the prior work on classifying gestures focused on human discourse, but the authors focused on the interaction between the human and the device. One study produced a taxonomy for surface gestures based on user elicitation, which is a foundation of participatory design. Little research has been done on classifying motion gestures, though plenty describes how to develop them.
The authors' taxonomy classified the user-produced motion gestures from their study into two categories. The first, gesture mapping, describes how a motion gesture maps to the device along three dimensions: nature, temporal, and context. The nature dimension considers the mapping to a physical object and labels a gesture as metaphorical, physical, symbolic, or abstract. The temporal dimension describes when the action occurs relative to when the gesture is made: discrete gestures have the action occur after the gesture, while continuous gestures have the action occur during the gesture, as in map navigation. The context dimension considers whether the gesture requires a particular context: answering a phone call is in-context, while going to the Home screen is out-of-context. The other category, physical characteristics, covers kinematic impulse, dimensionality, and complexity. Kinematic impulse considers the range of jerk produced, grouped as low, moderate, or high. Dimension is the number of axes involved in performing the gesture. Complexity states whether a gesture is simple or compound, i.e., composed of multiple simple gestures.
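To make the kinematic-impulse dimension concrete, here is a rough Python sketch, not from the paper, of how the range of jerk (the derivative of acceleration) could be estimated from logged accelerometer samples and binned into low/moderate/high groups; the sample rate and thresholds are arbitrary assumptions.

```python
# Rough sketch (not from the paper) of estimating jerk -- the rate of change
# of acceleration -- from logged accelerometer samples, and binning its range
# into the taxonomy's low/moderate/high kinematic-impulse groups.
# Sample rate and thresholds are arbitrary, made-up values.
import numpy as np

def jerk_magnitude(accel, dt):
    """accel: (N, 3) array of readings in m/s^2; dt: seconds per sample."""
    jerk = np.diff(accel, axis=0) / dt          # finite-difference derivative
    return np.linalg.norm(jerk, axis=1)         # magnitude per sample

def impulse_class(accel, dt, low=50.0, high=200.0):
    """Bin a gesture by the range of jerk it produces."""
    j = jerk_magnitude(accel, dt)
    jerk_range = j.max() - j.min()
    if jerk_range < low:
        return "low"
    return "moderate" if jerk_range < high else "high"

# Hypothetical 1-second recording at 100 Hz of a gentle tilt gesture.
t = np.linspace(0, 1, 100)
accel = np.stack([np.sin(t), np.zeros_like(t), 9.8 + 0.1 * t], axis=1)
print(impulse_class(accel, dt=0.01))            # -> "low"
```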
The authors took their taxonomy and user gestures to create a user-defined gesture set, based on groups of identical gestures from the study.
[Figure: some gestures in the consensus set]
The authors suggested that motion gesture toolkits should provide easy access to the user-defined set. End users, not just designers, should be allowed to add gestures. The low kinematic-impulse gestures could result in false positives, so a button or a delimiting gesture to signal a motion gesture could be useful, and additional sensors might be necessary. Gestures should also be socially acceptable.
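One way to read the false-positive suggestion is as a simple clutch: motion samples are only passed to a recognizer while the user holds an explicit delimiter such as a button. The sketch below is my own illustration under that assumption, not the paper's toolkit; the class names and callbacks are hypothetical.

```python
# Hypothetical "clutch" that gates motion-gesture recognition to reduce false
# positives from low-kinematic-impulse gestures: samples are only buffered and
# classified while an explicit delimiter (e.g., a held button) is engaged.

class GestureClutch:
    def __init__(self, recognizer):
        self.recognizer = recognizer   # any object with classify(samples)
        self.engaged = False
        self.samples = []

    def on_button(self, pressed):
        if pressed:                    # delimiter held: start buffering motion
            self.engaged, self.samples = True, []
        elif self.engaged:             # delimiter released: classify the motion
            self.engaged = False
            return self.recognizer.classify(self.samples)

    def on_accelerometer(self, sample):
        if self.engaged:               # ignore motion outside the delimiter
            self.samples.append(sample)

# Usage with a stand-in recognizer that just counts samples.
class DummyRecognizer:
    def classify(self, samples):
        return f"gesture over {len(samples)} samples"

clutch = GestureClutch(DummyRecognizer())
clutch.on_button(True)
for s in [(0.1, 0.0, 9.8), (0.3, 0.1, 9.7)]:
    clutch.on_accelerometer(s)
print(clutch.on_button(False))         # -> "gesture over 2 samples"
```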
Discussion
The authors wanted to see how users define motion gestures and how to classify those gestures. Their study was driven by user elicitation, and their taxonomy relied heavily on its results. Since the study was well-founded, I concluded that the taxonomy is valid, so I am convinced of the correctness of the results.
The lack of feedback, while necessary for the study, made me wonder whether users might produce different gestures when they are able to see what they are doing on-screen. Perhaps future work could test this idea with an implemented consensus set.
While I tend not to use motion gestures, this work was interesting because, once refined, motion gestures allow for greater ease of access in varying contexts.