Wednesday, August 31, 2011

Paper Reading #1: Imaginary Interfaces: Spatial Interaction with Empty Hands and without Visual Feedback


Reference Information
Imaginary Interfaces: Spatial Interaction with Empty Hands and without Visual Feedback
By Sean Gustafson, Daniel Bierwirth, and Patrick Baudisch
Presented at UIST'10, October 3-6, New York, New York, USA

Author Bios
  • Sean Gustafson is a PhD student working in the Hasso Plattner Institute's Human-Computer Interaction Lab. He focuses on new forms of interacting with computers and holds bachelor's and master's degrees in Computer Science from the University of Manitoba.
  • Daniel Bierwirth received a master's degree in IT-Systems Engineering from Hasso-Plattner Institute and now works for Matt Hatting & Company UG and Agentur Richard GbR. He focuses on mobile application development and design thinking.
  • Dr. Patrick Baudisch is the head of the Human Computer Interaction group at the Hasso-Plattner Institute. His research focuses on interaction techniques, especially with small and large devices.
Summary
Hypothesis
To what extent can users successfully interact with an imaginary user interface that provides no visual feedback?

Methods
In three user studies, participants (all young adults) created simple drawings, edited existing ones, were deliberately interrupted mid-task, and pointed to locations in space relative to a user-defined origin. An optical tracking installation followed markers on participants' gloves so that technical limitations of the system would not confound the results.

Results
Users were partially successful in using the system. The test of simple sketches and single-stroke characters had a 95% success rate, exceeding the target rate, and users became more accurate as shapes were repeated. Multi-segment drawings did not fare as well: faulty starting points led to misalignment and condensed whitespace. The test of users' ability to remember where objects were after a brief interruption was a partial success, with accuracy highest for participants who did not rotate and who kept a reference point. The test of pointing at specified coordinates found that error increased as the target moved farther from the user's non-dominant hand.

Contents
Users worked with objects in a 2D space. The non-dominant hand, held in an L shape, defined the origin of the plane, and all spatial references were derived from that hand's position. The system requires no devices to be held in-hand, extending work in wearable computing, gestural input, mobile computing, and spatial interaction. The 3D coordinates captured by the tracker were projected into 2D from the user's perspective. In the first test, users drew a page of simple shapes, using their left hand as the origin and finishing each drawing at their own discretion. Some of the shapes were letters that had posed problems with connecting strokes in prior studies; others were repeated simple shapes and multi-stroke drawings. The multi-stroke gestures were not formally analyzed. This test suggested that users built up visuospatial memory over its course. The second test examined how long that visuospatial memory persists after an interruption: participants initially rotated 90 degrees after drawing a shape and then indicated a requested corner of that shape. Notably, keeping the left hand as a reference point increased accuracy even after an interruption. The third test measured pointing accuracy on a coordinate plane, with the left hand defining the x and y unit vectors.
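To make the hand-relative mapping concrete, below is a minimal sketch of how a tracked 3D fingertip could be expressed in the 2D plane defined by the L-shaped non-dominant hand. The function name, the marker choices, and the assumption that the thumb and index finger form roughly perpendicular axes are my own for illustration; the paper's actual processing pipeline may differ.

```python
import numpy as np

def hand_plane_coords(origin, index_tip, thumb_tip, fingertip):
    """Express a tracked 3D point in the 2D plane defined by the
    L-shaped non-dominant hand (hypothetical helper, not from the paper).

    origin     -- 3D position where the thumb and index finger meet
    index_tip  -- 3D position of the index fingertip (defines the x axis)
    thumb_tip  -- 3D position of the thumb tip (defines the y axis)
    fingertip  -- 3D position of the dominant hand's drawing finger

    Returns (x, y) measured in multiples of the two finger lengths.
    """
    origin = np.asarray(origin, dtype=float)
    x_axis = np.asarray(index_tip, dtype=float) - origin
    y_axis = np.asarray(thumb_tip, dtype=float) - origin
    x_len = np.linalg.norm(x_axis)
    y_len = np.linalg.norm(y_axis)

    # Project the offset onto each hand axis. This assumes the thumb and
    # index finger are roughly perpendicular (an L shape), so the component
    # perpendicular to the hand's plane is simply discarded.
    offset = np.asarray(fingertip, dtype=float) - origin
    x = np.dot(offset, x_axis / x_len) / x_len
    y = np.dot(offset, y_axis / y_len) / y_len
    return float(x), float(y)

# A point one index-finger length along the x axis, slightly off-plane:
print(hand_plane_coords(origin=[0, 0, 0],
                        index_tip=[8, 0, 0],   # 8 cm index finger
                        thumb_tip=[0, 6, 0],   # 6 cm thumb
                        fingertip=[8, 0, 1]))
# -> (1.0, 0.0)
```

Measuring in finger lengths rather than centimeters keeps the coordinate space anchored to the user's own hand, which fits the spirit of the paper's user-defined origin.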

Discussion
This paper managed to measure users' ability to use a screen-less, gesture-based interface, and the results suggest that such an interface works at least reasonably well for basic tasks. I am convinced that the authors had more success with their approach than the prior work against which they compared their success rates. This work is interesting because most computing today assumes the use of a screen; taking away visual feedback poses the unusual question of whether we are capable of using the device at all. While this paper suggests that screen-less interfaces can work, I am not convinced that they are a viable solution for anything more than basic tasks. Although this technology seeks to eliminate the need to memorize many cumbersome gestures, the fact remains that a large body of gestures, some relatively unintuitive, must still be committed to memory for any task more complex than simply drawing a shape.

Paper Reading #0: On Computers

Does the Mechanical Turk think for itself too?
In 1980, Dr. John Searle made a case against strong AI, the claim that a program can produce consciousness in a system. He argued that even passing the Turing test does not ensure that a machine can think, as there is no way to ascertain that the program understood the semantics of the text it read. I agree with his claim because the evidence against thinking computers is so strong.

After all, a computer cannot be self-aware unless it was explicitly told to be self-aware. That is, a computer cannot examine itself while running and say, "Hey, I can understand X" unless someone programmed that statement to appear as output in the first place. The machine never grasps the programmer's intent that it understand what it is doing. It is just an echoing automaton, running yet another job.

However, we could argue (and several of the replies to Searle do so) that the human brain and a computer are simply two different Turing machines. These replies ask: if one gradually replaces a human brain with circuits that are functionally equivalent to individual neurons, at what point does consciousness end for that person? I find that the key is not whether the brain is gone, since under strong AI the presence of a mind should be separable from the brain itself. Nor is it that the organic material in our brains generates a mind, since an organic computer would still suffer from the absence of one. It is that the spark of consciousness is a distinctly unique element of those that can think. Any computer, whether digital, organic, or otherwise, cannot escape the fact that its function is simulated. Someone wrote the code that breaks every task into a matter of manipulating ones and zeros, and that code is nothing more than a simulation, with all the limitations that word implies.

Searle proposes that there are different tiers of machines, from calculators to human brains, with each tier stepping closer to passing the Turing test and thus appearing to think, or actually thinking. This is hardly a preposterous claim, as we accept that only certain kinds of programs can pass the Turing test and that all others are unthinking machines. What separates the machines from understanding humans, then, is that the machines have been told what to do in a form they can process. Humans who understand may still be run by a sort of mental code, but the important detail is that the human brain built this code itself and understands the meaning of what it is doing. It is inherently self-aware.

Even with computing at its best, however, we cannot create a program that actually understands a concept. We cannot create emotions in a system. We cannot create a system that makes a decision based on how it feels about a situation. All we can do is provide axioms and data broken down into a form the computer can handle. The computer doesn't understand any concept, doesn't feel any emotion, and doesn't judge based on intuition. It is just a system running blindly. Such a system simply cannot think, now or ever.

Introduction Blog Assignment #-1

E-mail: rfloresmeath@gmail.com
3rd year Junior

I am taking this class because I want to learn how we can minimize the learning curve of computing to make it more accessible and intuitive.

I am proficient in Java, C, C++, and PHP.

In ten years, I hope to either be researching computer security or machine learning.

I think that a system capable of producing original thoughts (i.e., not a chatterbot) will be the next big technological advancement in computer science.

If I could travel back in time, I would like to meet Grace Hopper because she was a radical innovator in Computer Science.

My favorite shoes are my flip-flops because they're very portable and cheap to replace.

I would like to be fluent in Chinese because that opens up so many more technical documents that cover obscure but interesting technology.

An interesting fact about myself is that I dabble in digital audio synthesis, specifically computer-generated compositions.