Wednesday, August 31, 2011

Paper Reading #0: On Computers

Does the Mechanical Turk think for itself too?
In 1980, Dr. John Searle made a case against strong AI, the claim that a program can produce consciousness in a system. He argued that even passing the Turing test would not ensure that a machine could think, since there is no way to ascertain that the program understood the inherent semantics of the text it processed. I agree with his claim, as the evidence against thinking computers is strong.
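Searle's point can be sketched as a lookup table: a program that produces fluent replies by matching symbol shapes alone. This is a minimal illustration of syntax without semantics, not anything from Searle's paper; the rule book and phrases here are invented.

```python
# A toy "Chinese Room": replies come from pure symbol lookup.
# The rule book below is invented for illustration only.
RULE_BOOK = {
    "How are you?": "I am fine, thanks.",
    "Can you think?": "Of course I can.",
}

def room_reply(symbols: str) -> str:
    """Return the scripted response for an input string.

    The function never parses meaning; it only matches the shape of
    the input against a table, much as the room's occupant follows
    rules without understanding the language.
    """
    return RULE_BOOK.get(symbols, "Please say that again.")

print(room_reply("Can you think?"))  # prints "Of course I can."
```

The output is indistinguishable from a thoughtful answer for any input the table covers, yet at no point does anything in the system grasp what the symbols mean.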

After all, a computer cannot be self-aware unless it was explicitly told to be self-aware. That is, a computer cannot examine itself while running and say, "Hey, I can understand X" unless someone programmed it to produce that statement as output in the first place. The machine never grasps the programmer's intent that it inherently understand what it is doing. It is just an echoing automaton, running yet another job.
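The "echoing automaton" point can be made concrete with a sketch, assuming a made-up function: the machine's claim of understanding is a fixed string chosen by its author, not a conclusion it reached about itself.

```python
# Illustrative sketch: a program that "claims" self-awareness only
# because its author wrote that claim into it. The function name and
# output are invented for this example.
def introspect(task: str) -> str:
    # The string below is fixed output chosen by the programmer,
    # not something the machine discovered by examining itself.
    return f"Hey, I can understand {task}"

print(introspect("X"))  # echoes the programmer's words back
```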

However, we could argue (and several of the replies to Searle do so) that the human brain and a computer are simply two different Turing machines. They ask: if one gradually replaces a human brain with circuits that are functionally equivalent to individual neurons, at what point does consciousness end for that person? I find that the key is not that the brain is gone, since in the case of strong AI the presence of mind should be separated from the physical substrate. Nor is it that the organic material in our brains generates a mind, since an organic computer would still suffer from the absence of mind. It is that the spark of consciousness is a distinctly unique element in those that can think. Any computer, whether digital, organic, or otherwise, cannot escape the fact that its function is simulated. Someone wrote the code that reduces every task to manipulating ones and zeros, and this code is nothing more than a simulation, with all the limitations that word implies.

Searle proposes that there are different tiers of machines, from calculators to human brains, with each tier stepping closer and closer to passing the Turing test and thus appearing to think, or actually thinking. This is hardly a preposterous claim, as we accept that only certain kinds of programs can pass the Turing test, and that all others are unthinking machines. What separates the machines from understanding humans, then, is that the machines have been told what to do in a form that they can process. Humans that understand may still be run by a sort of mental code, but the important detail is that the human brain built this code itself and understands the meaning of what it is doing. It is inherently self-aware.

Even with computing at its best, however, we cannot create a program that actually understands a concept. We cannot create emotions in a system. We cannot create a system that makes a decision based on how it feels about a situation. All we can do is provide axioms and data broken down into elements the computer can handle. The computer doesn't understand any concept, doesn't feel any emotion, and doesn't judge based on intuition. It is just a system running blindly. Such a system simply cannot think, now or ever.
