Multitoe: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input
Thomas Augsten, Konstantin Kaefer, René Meusel, Caroline Fetzer, Dorian Kanitz, Thomas Stoff, Torsten Becker, Christian Holz, Patrick Baudisch
Presented at UIST'10, October 3-6, 2010, New York, New York, USA
Author Bios
- Thomas Augsten is a Master's student in IT Systems Engineering at the Hasso Plattner Institute of the University of Potsdam. This is his second paper and first presented at UIST.
- Konstantin Kaefer is a Master's student in IT Systems Engineering at the Hasso Plattner Institute of the University of Potsdam and also works on mapping software for Development Seed. He is the co-author of a book on Drupal.
- René Meusel is a student at the Hasso Plattner Institute of the University of Potsdam. This is his first paper.
- Caroline Fetzer is a student at the Hasso Plattner Institute of the University of Potsdam. This is her first paper.
- Dorian Kanitz is a student at the Hasso Plattner Institute of the University of Potsdam. This is his first paper.
- Thomas Stoff is a student at the Hasso Plattner Institute of the University of Potsdam. This is his first paper.
- Torsten Becker is a graduate student at the Hasso Plattner Institute of the University of Potsdam, specializing in Human-Computer Interaction and mobile and embedded systems. He has two peer-reviewed papers.
- Christian Holz is a PhD student at the Hasso Plattner Institute of the University of Potsdam. He has six publications.
- Patrick Baudisch is a professor of Computer Science at the Hasso Plattner Institute of the University of Potsdam. He worked at PARC and Microsoft Research.
Summary
Hypothesis
How effective is a back-projected floor-based computer that reads input from users' shoe soles?
Methods
The first study examined how users avoid activating a button: participants walked over four buttons (labelled pieces of paper), activating two and leaving the other two untouched. The authors observed the strategies used, categorized them, and conducted individual interviews.
A second study determined which areas of the sole users expected to be detected. Users stepped onto the multi-touch floor, which displayed a honeycomb grid over the contact area; the grid was adjusted to reflect where, in each user's judgment, foot contact should be detected.
The third study examined whether users share a consistent expected hotspot for foot contact. Users positioned their hotspot over a system-generated crosshair and confirmed the selection. In the first trial, users could target the crosshair with whatever part of the foot they preferred; the remaining trials prescribed specific parts.
A fourth study measured precision by having users type on projected keyboards of three different sizes. Tracking inaccuracy was compensated for, since the test focused on user capability rather than sensor accuracy. Users typed a set sentence and were timed.
Results
Users showed no consistent strategy for activating only the designated paper buttons, and some of the strategies used were ergonomically unsound. Most participants settled on tapping buttons with a foot.
The second study found that most users expected the foot's arch to count as contact, though two excluded it. Some users shrank the actual contact area, but most agreed that detection should be based on the foot's projection.
The third study showed substantial disagreement between users' hotspot placements; no single hotspot location was chosen by a majority.
The fourth study found that errors and typing time increased as key size decreased. Participants were split between preferring the large and the medium keyboard.
Contents
Tabletop computers are constrained in size by the user's arm reach. The authors instead developed a system projected under a floor, so users can walk to reach items. It is based on frustrated total internal reflection (FTIR), which enables resolution comparable to a tabletop computer. Proper foot posture is required for input, pop-up menus are location-independent and activated by jumping, and the hotspot that determines foot placement is user-customizable. The system can approximate head position, interpret body posture, and recognize users by the pressure their soles exert on the floor.
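To make the jump-activated menus concrete, here is a minimal sketch of how a jump might be detected from per-frame pressure totals. This is my own illustration, not the authors' implementation; the threshold and frame-count parameters are assumptions, not values from the paper.

```python
def detect_jump(total_pressure, airborne_thresh=0.05, min_air=2, max_air=10):
    """Scan per-frame total floor pressure for a jump: a short run of
    near-zero frames (user airborne) that begins after solid contact.
    All thresholds here are illustrative assumptions."""
    airborne = [p < airborne_thresh for p in total_pressure]
    run = 0
    for i, air in enumerate(airborne):
        if air:
            run += 1
        else:
            # Contact resumed: a valid jump needs a short airborne run
            # that did not start at the very first frame.
            if min_air <= run <= max_air and i - run > 0:
                return True
            run = 0
    return False
```

An upper bound on the airborne run distinguishes a jump from the user simply stepping off the tracked region.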
Soles are detected with front diffuse illumination, which tracks shadows. The floor surface is specialized, comprising a screen, glass, acrylic, and silicone layers; because of the expense of building it, the prototype covers only a small subregion of the floor. Most menus are location-independent and are activated by jumping, which rarely happened unintentionally. Based on the second user study, front diffuse illumination was chosen over FTIR for tracking, though FTIR still contributes to the general detection algorithm and is the predominant source for sensing user pressure. Hotspots reduce foot contact to a single point and are requested whenever a new pair of soles is detected; the user-recognition system matches new soles against a database of previously recorded soles. Actions are classified from sequences of FTIR pressure-pattern frames, and head position is approximated from how balance shifts the pressure distribution. Subdividing the soles further even allowed the authors to play a game on the surface.
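The hotspot and user-recognition steps can be sketched roughly as follows. This is an illustrative approximation, not the paper's algorithm: the centroid-plus-offset hotspot, the nearest-neighbour sole matching, and all names and feature choices are my own assumptions.

```python
import math

def sole_centroid(contact_pixels):
    """Mean (x, y) of all pixels in a sole's contact area."""
    n = len(contact_pixels)
    return (sum(x for x, _ in contact_pixels) / n,
            sum(y for _, y in contact_pixels) / n)

def apply_hotspot(contact_pixels, offset):
    """Collapse a contact area to one point: the centroid shifted by a
    per-user calibrated offset (e.g. toward the toes)."""
    cx, cy = sole_centroid(contact_pixels)
    return (cx + offset[0], cy + offset[1])

def identify_user(sole_features, database):
    """Nearest-neighbour match of a sole feature vector (hypothetical
    features such as sole length and width) against known users."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(database, key=lambda name: dist(sole_features, database[name]))
```

For example, a square contact patch `[(0, 0), (2, 0), (0, 2), (2, 2)]` with offset `(0.0, 3.0)` yields the single point `(1.0, 4.0)`: the centroid pushed forward by the calibrated amount.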
Discussion
The authors set out to show that a floor-based system in the same vein as a tabletop computer is usable. Given that the design of each part of the system relied heavily on user feedback, their work has certainly convinced me of the effectiveness of such a system.
When I first heard about Microsoft Surface, I couldn't help but think it was an amazing feat of engineering. This paper raised the bar for me, as it addresses my main usability concern with Surface (arm reach). I imagine a commercial model would be just as prohibitively expensive as the Surface, but the ability to compute while getting some exercise is very interesting.