Tuesday, November 8, 2011

Paper Reading #22: Mid-air Pan-and-Zoom on Wall-sized Displays

Reference Information
Mid-air Pan-and-Zoom on Wall-sized Displays
Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, Wendy Mackay
Presented at CHI 2011, May 7-12, 2011, Vancouver, British Columbia, Canada

Author Bios

  • Mathieu Nancel is a PhD student in Human-Computer Interaction at INRIA. His research focuses on navigating large datasets.
  • Julie Wagner is a PhD student in the insitu lab at INRIA. Before that, she was a Postgraduate Research Assistant in the Media Computing Group at Aachen.
  • Emmanuel Pietriga is the Interim Leader for insitu at INRIA. He worked for the World Wide Web Consortium in the past.
  • Olivier Chapuis is a Research Scientist at Université Paris-Sud. He is a member of insitu.
  • Wendy Mackay is a Research Director for insitu at INRIA. She is currently on sabbatical at Stanford.


Summary
Hypothesis
What forms of indirect interaction (i.e., not touching the wall itself) are best for wall-sized displays? Will bimanual gestures be faster, more accurate, and easier to use than unimanual ones? Will linear gestures slow down over time, and will they be preferred over circular gestures? Will tasks involving the fingers be faster than those involving whole limbs? Will 1D gestures be faster? Will 3D gestures be more tiring?

Methods
The authors tested the 12 conditions under consideration, as enumerated in the Contents section below. They measured performance time and the number of overshoots, i.e., instances where users zoomed past the target. Users navigated a space of concentric circles, panning and zooming to reach the correct level and centering the view for each set of circles. Each user performed all 12 tasks and then answered questions about the experience.
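To make the overshoot measure concrete, here is a minimal Python sketch of how one might count overshoots from a recorded trace of zoom levels. The target band and trace representation are my own assumptions for illustration, not the paper's exact definitions.

    def count_overshoots(scale_trace, lo, hi):
        """Count how often the zoom level shoots past the target band
        [lo, hi], forcing the user to reverse direction. A hedged
        reconstruction of an overshoot measure, not the paper's own."""
        overshoots = 0
        side = None  # 'below', 'above', or None while inside the band
        for s in scale_trace:
            new_side = 'below' if s < lo else 'above' if s > hi else None
            if new_side is not None and side is not None and new_side != side:
                overshoots += 1  # crossed from one side of the target to the other
            if new_side is not None:
                side = new_side
        return overshoots

    # Example: the user zooms in past the target once, then settles.
    print(count_overshoots([0.2, 0.6, 1.4, 1.05], lo=0.9, hi=1.1))  # -> 1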

Results
Movement time and the number of overshoots were correlated. No fatigue effect appeared, though users learned over time. Two-handed techniques won out over unimanual gestures. Tasks using the fingers were faster, and 1D gestures were faster, though the advantage was smaller for bimanual techniques. Linear gestures were faster than circular ones and were preferred by users. The users' responses to the Likert-scale questions confirmed these findings. 3D gestures were the most fatiguing.

Contents
Increasingly, high-resolution wall-sized displays can show enormous numbers of pixels. These are inconvenient or impossible to manage with touchscreens alone. Interaction techniques should give the user the freedom to move while working with the display. Other papers have found that large displays are useful and have discussed mid-air interaction techniques; one involved a circular gesture technique called CycloStar. A higher number of degrees of freedom allows users to parallelize their tasks. Pan-and-zoom is a 3-DOF task, since the user controls the 2-D Cartesian position and the scale.
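To make the 3-DOF framing concrete, here is a minimal Python sketch of a pan-and-zoom viewport; the class and its mappings are my own illustration, not code from the paper.

    class Viewport:
        """View over a large 2-D space: a center (x, y) plus a zoom scale."""

        def __init__(self, x=0.0, y=0.0, scale=1.0):
            self.x, self.y, self.scale = x, y, scale  # the three DOF

        def pan(self, dx, dy):
            # Pan controls the 2-D Cartesian position; screen-space deltas
            # cover less world distance as the scale increases.
            self.x += dx / self.scale
            self.y += dy / self.scale

        def zoom(self, factor, fx, fy):
            # Zoom controls the third DOF, magnifying about the world-space
            # focus point (fx, fy) so that it stays fixed on screen.
            self.scale *= factor
            self.x = fx + (self.x - fx) / factor
            self.y = fy + (self.y - fy) / factor

    vp = Viewport()
    vp.zoom(2.0, fx=10.0, fy=5.0)   # zoom in, keeping world point (10, 5) fixed
    vp.pan(100.0, 0.0)              # pan right by 100 screen pixels
    print(vp.x, vp.y, vp.scale)     # -> 55.0 2.5 2.0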

For their system, the authors discarded techniques that are not intuitive for mid-air interaction or are not precise enough. Their final considerations were unimanual versus bimanual input, linear versus circular gestures, and three types of guidance through passive haptic feedback. Unimanual techniques involve one hand, while bimanual ones use two. Linear gestures move in a straight line, while circular ones involve rotation. Following a path in space with a constrained device is 1D guidance, using a touch-screen is 2D, and free-hand gestures are 3D. The bimanual techniques assign zoom to the non-dominant hand and the other controls to the dominant hand. The limb segments under consideration are the wrist, forearm, and upper arm. The 3D circular gestures resembled CycloStar. The linear zoom gestures push inward toward the display (zoom in) and pull outward from it (zoom out); circular ones involve making circles with the hand.
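As a rough illustration of the linear-versus-circular distinction for zooming, the Python sketch below maps each gesture type to a zoom factor. The gain constants and function names are assumptions of mine for illustration, not values from the paper.

    import math

    # Gains are assumed for illustration, not taken from the paper.
    LINEAR_GAIN = 0.005    # ~one zoom doubling per 140 px of hand travel
    CIRCULAR_GAIN = 0.5    # ~one zoom doubling per 1.4 rad of rotation

    def zoom_from_linear(displacement_px):
        """Push toward the display (positive) zooms in; pull back zooms out.
        The gesture is bounded by arm reach, so long zooms need clutching."""
        return math.exp(LINEAR_GAIN * displacement_px)

    def zoom_from_circular(delta_angle_rad):
        """Circling the hand accumulates angle indefinitely, so no clutching
        is needed, at the cost of a less direct mapping."""
        return math.exp(CIRCULAR_GAIN * delta_angle_rad)

    print(zoom_from_linear(140.0))           # ~2x: zoom in
    print(zoom_from_circular(-2 * math.pi))  # <1: zoom out over one full circle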

Discussion
The authors wanted to determine which forms of interaction are best for a large interactive display. Their tests were thorough, so I have no reason to doubt the validity of their claims.

This might be useful for major disaster emergency response teams. A large display combined with the available information could be vastly useful, albeit expensive to implement. Dr. Caverlee is starting a project that gleans disaster information from social media sites. These technologies would certainly work well together.

Building on the initial steps taken in this paper, I would like to see future work that more finely hones the details of the preferred types of interaction.
