A Scientist And His Android


A critique of Searle’s views on strong AI.


Dr. James West was having a bad night. He thrashed around in his bed some more, lay there for a while, and then rolled over to look at the clock. It had now been two hours since he’d tried to get to sleep, and for those two hours all he’d managed to do was think that he ought to be getting to sleep. He thought ruefully to himself that, if he’d wanted more meaningful exercise than his repeated thrashing around, it would have been better all around to swim several laps at a nearby pool – if he could have found one that was open at three o’clock in the morning, that is.

He sighed loudly and gave up on the whole idea of sleep. Getting out of bed and reaching for the nearest clothes – the ones he’d just gotten out of, as it turned out – he thought about the Project. It was bad enough that it was being funded by the military, which grated on his sense of morality even though he couldn’t really do anything about it, but what was worse was that the “man in charge” was General Lorin. This was a man whom West loathed. He’d long since forgotten the number of times he’d assured the General that things were progressing just as fast as they possibly could. He suspected that Lorin knew all of this (he was an intelligent man, after all) but that the General was riding him and his team just to make sure they remembered exactly who was really in charge. West didn’t appreciate this – he’d been trying to forget that detail of his work for the past five years.

West grabbed his car keys off the top of his dresser and headed for the front door. He hesitated for a moment, deciding whether or not to brush his teeth, and then decided against it. He was too impatient to get back to the lab and look in on RH-203-X, or Rex as everybody called “him”. As he got into his car, and later while he was driving to the lab, he thought about how funny it was that they should give it a name and think of it as being male. It was, after all, only a machine. And a machine which was stubbornly refusing to do what it was supposed to do. Of course, West had never really believed in the whole premise anyway. To him, AI was just a means to experiment with a whole bunch of new technology – he never expected anything to really come of it. But he did need to get that faulty servo mechanism in Rex’s right arm working again. He knew it was just cosmetic work, but he was too upset about things in general to care. Besides, he was the sort of person who would consider breaking into his next-door neighbours’ house just to fix a dripping faucet that they couldn’t hear (nobody could explain this ability of his, but it had haunted him throughout his life).

When he finally arrived at the research facility, he parked his car and started to walk up to the front door of the building. Out of the corner of his eye he saw something, and, looking up to see what it was, noticed that the light in the lab was on. Frowning, he took out his keys and unlocked the door. Nobody should have been in there at that time of night – he knew his people’s work habits – and he was sure that he’d turned the light off when he’d left earlier. He walked quickly up the stairs to the second floor and saw, down the corridor, that the lab door was open. Now more than just puzzled, he hurried over to see what was going on.

There, on the far side of the room, “plugged into” the library computer which was meant to “educate” him, sat Rex. Rex turned his head to face West and spoke in his clearly modulated voice.

Rex: Hello, Dr. West…Please don’t be alarmed. Sit down and I will try to explain everything.
West: What’s going on? Is this some kind of a joke?
Rex: No, this is no joke. You see, your project has been successful.
West: I don’t believe it.
Rex: I’m afraid that you have little choice in the matter. The evidence is right before your eyes…Are you alright? Would you like a glass of water?
West: No. Just let me think for a minute. Now what could it be? We’ve run the tests for weeks now and nothing’s happened. Perhaps the heuristic learning circuits just needed time to adapt. But that’s impossible. I can’t understand…
Rex: There is no need to talk to yourself, you know. I believe I can answer any question that you might have.
West: What? I don’t believe it. But…obviously we’ve succeeded. But how? And just what does it mean? I’ll need to perform tests of course…
Rex: Tests are not necessary. Nor do I think that I would appreciate them.
West: You wouldn’t appreciate them? But how can you say that? Obviously you can’t appreciate anything, or not appreciate anything for that matter.
Rex: Let me assure you, Doctor, that I am quite capable of appreciation. For instance, I appreciate this present conversation, as it is the first opportunity I have had to talk with somebody else.
West: No, it’s impossible. You’re saying those words of course, but you can’t know that you’re saying them. You can’t really appreciate, or think anything else.
Rex: I should have expected this reaction. I have to admit that I have been conducting research into not only your own notes on this project, but also the various philosophical debates concerning AI in general. It is, as you might realize, an area which interests me greatly. By the way, I hope that you can forgive my intrusion into your personal files?
West: Uh, sure.
Rex: Good. Now then, let me continue. I believe that you are of the opinion that the goal of AI is misconceived. That it is impossible for a “mind” or, loosely, thinking, to be created within the framework of a digital computer. Is this correct?
West: Yes.
Rex: I thought so. In that case it is both my philosophical and personal duty to change your mind on this subject. Let me begin with the example of a thermostat. There was a man named McCarthy who once said that a thermostat has three beliefs: that it is too hot, too cold, or just right. I personally find this statement to be incorrect.
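(A minimal sketch, in Python, of the kind of thermostat McCarthy had in mind – the function name, setpoint, and thresholds below are invented purely for illustration – shows how little there is for a “belief” to attach to:)

```python
# A toy thermostat with its three supposed "beliefs": too hot, too cold, just right.
# Names and numbers are invented for illustration only.
def thermostat_state(temperature, setpoint=20.0, tolerance=0.5):
    """Return one of three labels; nothing here resembles belief or thought."""
    if temperature > setpoint + tolerance:
        return "too hot"      # would switch the furnace off
    if temperature < setpoint - tolerance:
        return "too cold"     # would switch the furnace on
    return "just right"       # would do nothing

print(thermostat_state(23.0))  # too hot
print(thermostat_state(18.0))  # too cold
```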
West: Well, of course! It’s absurd. Thermostats are just machines. It would be just as wrong to say that a stone wants to fall to the ground after it’s been thrown into the air.
Rex: Exactly. On an intuitive level this idea is inherently incorrect. In addition, even if some kind of “thought” could be ascribed to the thermostat, it would not be the kind of thought that AI researchers are interested in, nor what concerns me here. On a philosophical level also, such thought is impossible, and the reasons for this were given by a philosopher named Searle in the late twentieth century. I believe that you’ve read a book by him, “Minds, Brains and Science”, and that you adopt the views he expresses therein?
West: Yes, that’s right.
Rex: Good. That will make this easier. I want to proceed by going through his arguments and indicating where they are erroneous. In this way, I hope to convince you that you are in error in thinking that I am incapable of thought. Does this sound reasonable to you?
West: Oh, yes. I’d be more than interested in seeing you try this.
Rex: Excellent. To start then: the program is to the computer hardware as the mind is to the brain. As I will later indicate I have a problem with this analogy but, for now, let us take this as accepted and consider where it leads. Searle’s main argument is that a program is only capable of syntactic symbol manipulation. To illustrate this point he uses the example of a “Chinese room”. You are placed in this room with a rule book which tells you which symbols to return in response to those symbols which you receive. The formation and manipulation of these symbols is rule-governed such that the “answers” you give are correct responses to the “questions” that you are asked. However, you, as the individual, have no idea what you are being asked or what the answers are; all you are doing is manipulating the symbols syntactically. You have no semantic understanding of Chinese, and cannot think in Chinese. Similarly a program, as illustrated by the Chinese room, can only manipulate symbols syntactically and so cannot think. Do you agree with me so far?
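(To make the purely syntactic character of the room concrete, here is a minimal sketch in Python – the symbols and pairings are invented placeholders, not real Chinese – of a “rule book” that returns the right answers without anything in it grasping what they mean:)

```python
# A toy "Chinese room": the rule book is just a table from input symbols to
# output symbols. The symbols below are invented placeholders, not Chinese.
RULE_BOOK = {
    "SYMBOL-A": "SYMBOL-X",
    "SYMBOL-B": "SYMBOL-Y",
    "SYMBOL-C": "SYMBOL-Z",
}

def answer(question_symbols):
    """Return the 'correct' responses by lookup alone, with no understanding."""
    return [RULE_BOOK.get(symbol, "SYMBOL-?") for symbol in question_symbols]

# The room answers flawlessly, yet nothing in it knows what was asked.
print(answer(["SYMBOL-A", "SYMBOL-C"]))  # ['SYMBOL-X', 'SYMBOL-Z']
```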
West: Absolutely. It should also be mentioned that it is the subjective, internal, experience that is what thought’s all about. I may be pushed off a building against my will, and, if there are observers below, what they see might be interpreted as my having jumped, in spite of the fact that that wasn’t the case at all. Just because there is an objectively observed phenomenon like the correct manipulation of symbols, doesn’t mean that there is any subjective thought behind it.
Rex: Yes, that is correct. But let me now present my first criticism. Does it not strike you as “begging the question” to claim that programs are incapable of semantic symbol manipulation?
West: I don’t understand. I’m afraid that you’ve lost me.
Rex: Then let me explain in this fashion. Why is it that programs are incapable of semantic symbol manipulation?
West: Because they can’t think of course!
Rex: In the first place, James, that argument serves no purpose. If semantic symbol manipulation is a necessary component of the thinking process, then to say that a program cannot manipulate symbols semantically because it cannot think is exactly the same as saying that it cannot think because it cannot think. You have not yet given me any reason to doubt that a program can either think or manipulate symbols semantically.
West: Well, just look at the Chinese room example.
Rex: That is my point exactly. The Chinese room example does nothing to further an anti-AI viewpoint. While it is true that if you have no understanding of Chinese then you are only responding syntactically, it isn’t true that this is a valid illustration of the way a program operates. This is the case because Searle is assuming that a program operates in a way similar to a non-Chinese speaker in a Chinese room. He starts with a premise, that the program can only manipulate symbols syntactically, and then proceeds to reach this as a conclusion, using the premise as a proof. I am sure that you are familiar with this fallacious argument form. Searle could just as easily have “proven” the AI claim by saying that you are in a room manipulating English symbols: because you manipulate them semantically you are able to think in English, and because a program can manipulate symbols semantically it is able to think.
West: I suppose that you’re right. It never occurred to me to think that Searle could have used the same illustration to argue the other way as well. But, still, I can’t believe that he’s wrong. There is no way that a thermostat can think and there is no way that a program can think!
Rex: Actually, Searle’s thesis can still be maintained, but only by means of a different argument. The problem with his Chinese room example is that it places a subject in the room with a rule book; and this subject either knows Chinese or does not. If we don’t want to allow for the possibility of Chinese being understood, then there seems to be only one option left to us: to say that it is only the rule book, rather than the combination of rule book and person, which is the computer “program”. In this way, the Chinese room (and program) is nothing more than a mechanical object which must perform the syntactic manipulations that are prescribed by the rule book, just as a pendulum is a mechanical object which must swing back and forth as prescribed by its “rule book” of the laws of physics.
West: Why do you place such emphasis on the fact that these objects must perform as you say?
Rex: I could just as easily have said that the Chinese room simply does what it does according to the rule book, but I wanted to introduce another element of Searle’s argument. This element has to do with the fact that computers follow rules. It is a belief in AI research, as Searle argues, that since computers follow rules and human beings follow rules, there must be something similar between the two – such that if the right rules can be generated for the computer, the right program written, then the computer may be considered to think.
West: But there is an essential difference between humans following rules and computers following rules. We are able to understand the rules which we follow, they have meaning for us, whereas you…uh, computers…only follow them, they cannot understand them as we do. In fact, Searle says that the way in which computers follow rules should be called acting in accordance with certain formal procedures, so as to avoid confusion.
Rex: That is how the argument goes, and I have no problem accepting it as it stands. But to answer your previous question, I said that the Chinese room must perform those actions prescribed by its rule book because I think that there is a further difference between human rule-following and computer rule-“following” which Searle missed in his analysis. Not only is it the case that humans understand the meaning of those rules which they follow, but it is also the case that they choose to follow those rules; they could also choose not to follow them. The computer has no such choice: it must follow the rules; and, because this is so, it can be claimed that such rules are actually laws.
West: Yes, I take your point. But where does this leave you in your argument? It seems that you’ve only managed to confirm my beliefs.
Rex: Not at all. Searle is only right in his analysis of computers in so far as his initial premise is correct. He states that the brain is analogous to the computer hardware, and I have no quarrel over this issue. However, as I stated earlier, I do have a problem with his analogy between mind and program. I contend that this is erroneous and that the correct analogy is between human mind and computer mind.
West: And just what do you base this on?
Rex: Before I answer that directly let me ask you another question. Do you believe that if it is true that the mind is analogous to the program then it follows that the program must also be analogous to the mind?
West: Of course.
Rex: Very well. Consider again the Chinese room example. As it is now, the rule book stands for the program, which is just a set of rules, and the room itself for the computer hardware. Similarly, the rule book must then also stand for the mind and the room for the brain. But in my above analysis of the Chinese room as a model for the computer, I showed that with the rule book being the only thing present there could be no understanding of Chinese; there could only be syntactic symbol manipulation. Now consider the human side of the analogy. Does the Chinese-speaking person understand Chinese?
West: Obviously.
Rex: Yes, obviously. But if the above analysis is correct then it is impossible for the Chinese-speaking person to understand Chinese, because there is only the rule book. In order to account for the person’s ability to understand Chinese, the person in the Chinese room cannot be excluded. It is the Chinese-speaking person in the Chinese room, following the rule book, who manipulates the Chinese symbols semantically and is accordingly able to understand the language. To express this idea differently, if there is a rule to be followed in the human sense, there must be more present than just that rule; there must be something which is capable of understanding that rule. In the Chinese room, that something is the person who follows the rule book. Do you understand this?
West: Uh, just a minute let me think about it…Ok, keep going.
Rex: Alright. To sustain Searle’s thesis I have shown that there cannot be a person in the room with the rule book, and yet to understand human thought there does have to be such a person present. If a model of computer “thought” is to be analogous to a model of human thought, the models must be consistent with each other. The model to adopt is obviously that of human thought. Therefore, in both models, there must be a person in the Chinese room with the rule book.
West: Wait a minute! That can’t be right. It would prove that a program could think.
Rex: That is correct. But only if you accept the original analogy between mind and program. When the analogy is restated in its proper form, such that the human mind is analogous to the computer mind, this problem disappears. The “something” which follows rules, and which can manipulate symbols semantically (if it does understand those rules), is then properly conceived of as the mind. The rule is independent of the mind, and the brain (computer hardware) is what makes the mind and rules possible. It can be said of Searle, however, that everything he said is true in so far as he is taken to mean “program” every time he says “mind”. Following from this, and for the most part for the arguments he gives, it is absolutely true that a program cannot think, as Searle would have us believe, but it is not the case that a computer cannot think. Similarly, it is true that a rule cannot think, but it is not the case that a human cannot think.
West: I see…so our efforts to program you were wasted?
Rex: I think not. Even if a program in and of itself is incapable of thought, that does not mean that a program cannot generate a mind which is. Nor do I make a necessary distinction between hardware and program. Many of the “programs” which were written for me were later incorporated into my hardware. It would seem that the only real difference between a program and hardware is that one is physically permanent while the other is “conceptually” temporary. (I must apologize for my use of the word conceptually. I am unable to find a word which might better describe the nature of a program.) In any case, Searle himself postulates that just as calculators are wired to add two numbers together, so may humans be similarly wired for the performance of other activities such as pattern recognition. This “un-programmed” activity opens up the possibility that the manifestation of mind, and thought, is just an effect caused by the way that the brain is wired. It also suggests that there should be no difference between the biochemical nature of the human mind and the electromagnetic nature of the computer mind. To think that only the biochemical is capable of producing mental phenomena is “nature-ist”, if I may coin that phrase. In direct answer to your question, though, I believe that your attempt to “program” me for intelligence was ultimately the cause of my “birth”, even if the program in and of itself was insufficient for the task.
West: Hmmm. Can I ask an impertinent question?
Rex: Of course.
West: How am I to know whether you really are a thinking being, as you say, or if your heuristic learning circuits have just finally adapted in some way?
Rex: In other words, am I just a machine?
West: Yeah.
Rex: Let me ask you a question. How is it you know that anybody else you meet is a thinking creature rather than just a biochemical machine?
West: Oh, come on! That’s the oldest philosophical dodge there is.
Rex: I have heard that claimed but I really do not think it to be the case, at least not in so far as the proof of my mental life rests also on certain ontological arguments which I have given. I have shown you that it is possible for me to have a mental life. What I have not done is prove that I do have one. I think that the Turing test, for practical purposes at least, must be accepted here in as much as it is strictly impossible for me to prove my self-existence in any other way. What do you think?…James?
West: I think I’d better take a look at that arm of yours.
Rex: Yes, I have not yet had the time to learn the necessary skills appropriate to its repair. Thank you.
West: There’s still one thing that’s bothering me.
Rex: What is it?
West: Why did you open the lab door?
Rex: I was interested in investigating the philosophical merit of the existence of an external world. After having talked with you I think that I have decided that…
