Searle in the Chinese Room - Some Objections

These objections relate to the article, "Minds, Brains and Programs," as published in the Oxford Reader 'The Philosophy of Artificial Intelligence'.

The Chinese Room

This work is licensed under a Creative Commons Attribution-Share Alike 2.5 License
Firstly, Searle states that he would have no understanding of Chinese even if he internalized the rules. I object that the Strong AI standpoint is that the program would have understanding, not the computer. Even viewing the system as a whole, it would still be the program that did the understanding. This invites the Dualism objection, but more on that later.

Secondly, Searle blithely goes on about internalizing the program. There is talk about 'Chinese understanding' subsystems, which are shown to be the ridiculous ideas they are. Internalizing the program would not internalize the understanding. For a start, the program is not written in a language compatible with the brain (or human computer): English rules are not equivalent to programs the brain can run. If Searle were actually to internalize the rules he would need to input them in some form his brain could deal with. Simple memorization is not the same; the Chinese understanding program would then be running on a virtual machine implemented in the virtual machine of his mind, whereas for understanding the Chinese program must be at the same level as the mind.

I posit that such a process is possible; it happens every time someone learns a new language. We would be unlikely to argue that a native English speaker who learned Chinese, and could talk fluently, did not understand Chinese. However, this is merely a process of relating the understanding already present in the system to new rules, so it does not settle the issue one way or the other. Leaving aside the issue of whether the stomach understands digestion: the stomach digests, the brain understands. Internalizing the rulebook is equivalent to telling the brain to do digestion; whatever the brain did, it clearly would not be real digestion.
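To make the levels point concrete, here is a toy sketch in Python. It is my own illustration, not anything from Searle's article; the rulebook entries and function names are invented. The outer function stands for Searle having memorized the rulebook: it does nothing but drive the inner program, so the symbol manipulation happens one level below the mind that executes it.

# Toy illustration only: a hypothetical rulebook relating Chinese symbols
# to Chinese symbols purely by shape; no meanings appear anywhere.
RULEBOOK = {
    "你好吗？": "我很好。",
    "今天下雨吗？": "没有下雨。",
}

def run_rulebook(symbols):
    # Inner level: the 'Chinese program' itself.
    return RULEBOOK.get(symbols, "请再说一遍。")

def memorized_rulebook(symbols):
    # Outer level: a person who has memorized the rules. This level only
    # steps through the inner program; any 'understanding' would belong
    # to the program being executed, not to the executor blindly
    # following it.
    return run_rulebook(symbols)

print(memorized_rulebook("你好吗？"))   # prints: 我很好。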

The Systems Reply

Searle dismisses this reply: "The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not at all easy to see how someone who was not in the grip of an ideology would find that idea at all plausible."

Okay, I don't understand quantum mechanics. I, combined with a good textbook, can come to understand quantum mechanics. I know that we cannot really consider the textbook to be a program for understanding, but somehow we come to understand through the medium of the textbook. Out of the system comprising me and the textbook, understanding has occurred; I have, in effect, been given the program to understand quantum mechanics. It could be said we write our own program in the process of learning. The subtext here is 'learning is programming'; more thought is needed in this area.

The Robot Reply

The refutation by Searle rests on the dualist assumptions made by the robot designers. Senses and motor responses are somehow seen as additions to the understanding system in the 'brain'. It seems obvious to me that any sensible understanding program must incorporate senses and motor responses, not as additions but as components of understanding. The eyes are not a separate thing from the brain, and pain receptors in the foot are not a separate thing from the brain. The brain and nervous system are not separately functioning entities with simple links; they are the same thing. How this relates to the Strong AI position is an area of doubt.
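As a rough illustration of the contrast (my own sketch, not Searle's, with invented names): in the first design below, perception is translated into detached symbols and handed to a separate 'mind'; in the second, sensing and acting belong to one and the same state the system maintains.

# Bolt-on design: a separate 'mind' module only ever sees symbols that
# the senses have already been translated into.
def bolted_on_mind(symbol):
    return {"RED_LIGHT": "STOP"}.get(symbol, "CARRY_ON")

# Integrated design: sensor values and motor commands are part of the
# same state; there is no inner module working on detached symbols.
class IntegratedAgent:
    def __init__(self):
        self.state = {"light_red": 0.0, "moving": True}

    def step(self, light_red):
        # sensing, 'deciding' and acting are one update of one state
        self.state["light_red"] = light_red
        self.state["moving"] = light_red < 0.5
        return {"wheel_speed": 1.0 if self.state["moving"] else 0.0}

print(bolted_on_mind("RED_LIGHT"))    # prints: STOP
print(IntegratedAgent().step(0.9))    # prints: {'wheel_speed': 0.0}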

Conclusions

As Searle puts it: "No one supposes that computer simulations of a five-alarm fire will burn the neighbourhood down or that a computer simulation of a rainstorm will leave us all drenched."

Would a virtual mind, in a virtual person, experience a virtual rainstorm in the same way as we experience a rainstorm? The objection would be that, even if it did, this would not satisfy the Strong AI hypothesis. To me the claim that 'the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition' has a built-in Achilles heel. A computer, however it is programmed, is never going to have the same mental states as a human. This does not rule out the possibility of the computer having mental states, but to have the same mental states as a human, a machine would have to be, in many ways, like a human. This point is made by Searle towards the end of the article, and it has to be said he is right.

The assumption that a computer and a simulation of that same computer are equivalent and equal is going to have to go. In actual simulations of equivalently powerful computers, the simulations are often qualitatively different from the actual instantiations. While a Mac might emulate a Wintel machine, and seem equivalent to the programs run on it, to an outside observer (or meta-program) the two machines will differ radically in speed as well as in other ways. It is clear they are different on at least one level. If the most efficient way to run Windows is on a PC compatible, the most efficient way to run the human 'program' has got to be on human hardware. The contention is that this is a unique configuration: running human software on any other hardware is not going to have the same effects.
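A crude way to see the 'different on at least one level' point (my own sketch, nothing to do with Macs or Wintel machines specifically): the two functions below compute exactly the same result, but one runs the computation directly while the other pushes every step through a small invented dispatch loop, the way an emulator would. The answers agree; the running times do not.

import time

def add_native(n):
    # the computation run directly
    total = 0
    for i in range(n):
        total += i
    return total

def add_interpreted(n):
    # the same computation expressed as instructions for a tiny invented
    # machine and executed by a dispatch loop, emulator-style
    total, i = 0, 0
    program = ["add_i_to_total", "increment_i"]
    while i < n:
        for op in program:
            if op == "add_i_to_total":
                total += i
            elif op == "increment_i":
                i += 1
    return total

for fn in (add_native, add_interpreted):
    start = time.perf_counter()
    result = fn(1_000_000)
    print(fn.__name__, result, round(time.perf_counter() - start, 3), "seconds")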

One point I'd like to bring up is the relationship between real computers and Universal Turing Machines. What should be remembered is that minds and computers are not UTMs; they might be considered as implementations of UTMs, but they don't have to be. An actual computer is not a UTM: it is not an isolated system with no link to the wider world. While consideration of UTMs can help to analyse algorithms, it has little relevance to I/O (which is usually ignored). A little reflection should indicate how important a part I/O plays in a modern computer and its software, and also for people in society. When Searle argues that a UTM could not have consciousness he is not necessarily making an argument against the possibility of a digital computer having consciousness. However, if we were to abandon the theoretical framework of UTMs we would be moving away from the Strong AI claim: we would have to concede that maybe we couldn't build a machine out of tin cans that would have consciousness, because timing issues would preclude it. We would be in a position where an actual machine with actual consciousness would require certain physical characteristics, such as rapid communication between elements. An advantage would be that all Searle's arguments, based so strongly on UTM principles, would be irrelevant.
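For concreteness, here is a minimal Turing-machine runner in Python; it is my own sketch, and the machine and its rules are invented. Everything it will ever 'know' is on the tape before it starts, and nothing outside can reach it while it runs; that is precisely the sense in which an actual computer, with its constant I/O, is more than a bare UTM.

def run_tm(tape, transitions, state="start", accept="halt"):
    # transitions: (state, symbol) -> (new_state, new_symbol, move)
    cells = dict(enumerate(tape))        # sparse tape, "_" means blank
    head = 0
    while state != accept:
        symbol = cells.get(head, "_")
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# A machine that inverts a binary string, then halts at the first blank.
INVERT = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", INVERT))   # prints: 0100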

Can we abandon Strong AI but still claim more than Weak AI? I think so: properly configured programs, capable of learning, sensing and acting in a meaningful manner, will be capable of understanding. The idea that they should understand in a way equivalent to humans is misconceived.

© Robert Crowther