THINKING ALLOWED
Conversations On The Leading Edge
Of Knowledge and Discovery
With Dr. Jeffrey Mishlove

 COPYRIGHT (C) 1998 THINKING ALLOWED PRODUCTIONS


MINDS, BRAINS AND SCIENCE
With JOHN SEARLE, Ph.D.

JEFFREY MISHLOVE, Ph.D.: Hello and welcome. Our topic today is intentionality in science, and my guest is Dr. John Searle, a professor of philosophy and cognitive science at the University of California at Berkeley. Dr. Searle is the author of many interesting books, including Intentionality, and also Minds, Brains, and Science. Welcome, John.

JOHN SEARLE, Ph.D.: Thank you.

MISHLOVE: You know, you talk about the mind in a refreshing way. Basically what you say is the mind is just about what it seems to be -- that the naive view of the mind is real; we don't need to look for hidden explanations of how the mind works.

SEARLE: With one qualification, and that is I don't think there is a thing called the mind. I think this noun, mind, is a source of confusion, but I think it's pretty obvious. We really do have beliefs and desires and hopes and fears, and we're conscious pretty much when we're awake, and when we fall asleep we become unconscious except for dreams. So most of our views about the mind are pretty much right, and they could hardly be wrong, because where the mind is concerned, most of the way that things seem has to be the way they are, because for things like being in pain or being conscious, there isn't any difference between really being in pain and seeming to be in pain.

MISHLOVE: I gather you'd probably agree with the physicist Eddington who described mental experience as being direct experience, whereas our other scientific knowledge is secondary, once removed.

SEARLE: Well, that's not a bad picture. It's a source of confusion if you think that implies that somehow or other we can't have direct access to the real world. I think we can through perception. But I take it that part of the point of what he might be driving at is that where my experiences are concerned, there isn't a general distinction between how they seem to be and how they really are.

MISHLOVE: You also seem to take issue with the influential behaviorist school of psychology, that tries to suggest -- although you've just told me you don't like the term the mind -- the behaviorists seem to even go a little further, to suggest that maybe mind or consciousness wasn't even real.

SEARLE: Well, I think that's obviously a lot of nonsense. And I think behaviorism in psychology is pretty much dead, but the influence survives in other forms, in certain versions of artificial intelligence, and the so-called Turing test. But the basic mistake is to try to stop studying the mind and study something else called behavior. Obviously what we're interested in when we study behavior is the mental phenomena that underlie the behavior and cause it. We're not just interested in people as zombies, as mechanical monsters. We're interested in the actual life that is going on in there, and when we're talking about that we're talking about mental phenomena.

MISHLOVE: I'm going to ask you to backtrack for a minute, because you threw out a magic word and we need to define it. You talked about the Turing test. What's that?

SEARLE: Well, there was a famous genius who was one of the great founders of computer science as we now know it, named Alan Turing. He invented a kind of test, and actually it's fairly complicated, but the way that it's come to be accepted is simply if you want to know whether or not a system actually has mental states -- if it actually understands Chinese, say -- just look at how it behaves, and ask, "Can it fool an expert?" If it can fool an expert into thinking it understands Chinese, then it really does understand Chinese, or anything else. That's called the Turing test, and it's used in artificial intelligence.

MISHLOVE: For example, they have computers now that can simulate a psychiatrist, and they've asked people to interact via a teletype with that computer, on the one hand, and with a real psychiatrist. And many people couldn't tell the difference.

SEARLE: Right. Now that would be a case for Alan Turing of passing the Turing test.
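
Programs of this kind can be sketched in a few lines; the best-known early example is Joseph Weizenbaum's ELIZA, which imitated a psychotherapist by reflecting pieces of the patient's input back as questions. The following toy sketch (in Python, with invented patterns, not the original program) shows how little machinery such a "psychiatrist" needs: it manipulates nothing but text, yet its replies can seem engaged over a teletype.

    import re

    # Toy keyword-reflection responder; the rules are invented for illustration.
    RULES = [
        (r"\bI feel (.+)", "Why do you feel {0}?"),
        (r"\bmy (\w+)", "Tell me more about your {0}."),
    ]

    def reply(text: str) -> str:
        """Reflect a fragment of the input back as a question; no understanding involved."""
        for pattern, template in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please go on."

    print(reply("I feel anxious about my work"))  # -> Why do you feel anxious about my work?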

MISHLOVE: And by those standards one would say yes, that computer was just as conscious as a real human psychiatrist would have been.

SEARLE: A lot of people think that, and they're totally mistaken.

MISHLOVE: Well, you don't know psychiatrists.

SEARLE: Oh well, I suppose some of them are asleep. But the general question -- namely, is the behavior sufficient for having the mental state? Is the behavior sufficient for consciousness? -- we know obviously that that's not so.

MISHLOVE: Why?

SEARLE: Well, I have a short little argument to show this, a refutation of the Turing test, and it goes as follows. If you think that a computer can understand something, just in virtue of running a program that simulates human understanding, there's a simple refutation.
Imagine that you're the computer. The way I like to imagine this is to take a language I don't speak at all, like Chinese, and imagine that somebody writes a program for understanding Chinese. So you run the program on the computer, and then you ask questions of the computer in Chinese, and it will give you back the right answers. Now let's just imagine that I am the computer, and I am locked in a room with a lot of Chinese symbols, and I've got the program. The Chinese symbols come in in the form of questions. I give back Chinese symbols in the form of answers. I'm running the program right, and the answers are as good as a native Chinese speaker. I'm passing the Turing test. I don't understand Chinese, and there's no way I could understand Chinese, in that story, because all I've got are the symbols.

MISHLOVE: You simply know that when X symbol comes in, Y symbol goes out.

SEARLE: Exactly. And this is the point of the story. That's all a computer does for anything. That's the beauty of computers. That's why computers are so marvelous: they're symbol-manipulating machines. They don't have to know anything. They don't have to know what any of these words stand for, what any of these symbols mean. They just crunch the symbols. The point of the story I gave you, this Chinese room refutation of the Turing test, is that because the program is a symbol-manipulating procedure, and because the hardware is a symbol-manipulating device, all you have are uninterpreted formal symbols. Programs are syntactical; they have to do with formal structures. Whereas human understanding involves more than a syntax, more than just the formal symbols; you've got to know what the symbols mean. So the difference between passing the Turing test in virtue of just having a syntax, and really understanding things, is the difference between formal symbols and the actual meaning or the semantics that the symbols have.
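
Searle's point can be made concrete with a toy program. The sketch below (in Python, with question-and-answer pairs invented purely for illustration) is a symbol manipulator of the kind he describes: a rule book pairs incoming strings with outgoing strings, and the program produces fluent-looking Chinese answers even though nothing in it knows what any symbol means.

    # Rule book mapping question shapes to answer shapes; the pairs are invented
    # for illustration, and their meanings play no role in the program.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢.",
        "今天天气怎么样?": "天气很好.",
    }

    def chinese_room(question: str) -> str:
        """Return the answer string matched to the question string; pure symbol shuffling."""
        return RULE_BOOK.get(question, "对不起, 我不明白.")

    print(chinese_room("你好吗?"))  # a fluent-looking reply, produced with zero understanding

The only operations are matching and returning uninterpreted strings, which is the contrast Searle is drawing between syntax (shapes and rules for shuffling them) and semantics (what the shapes mean).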

MISHLOVE: And those people in the field of artificial intelligence and other related fields, who might claim that computers are in fact conscious, or will be conscious some day perhaps, seem to be developing a world view in which consciousness can somehow occur in a fashion which is devoid of all meaning.

SEARLE: Well, actually I think consciousness is kind of an embarrassment to these guys, because it's kind of hard to believe that the VAX-750 is really conscious. Even a PDP-10, even a Cray, isn't conscious. There are very few guys in AI who would actually come out and tell me they think these things are conscious. They give them intentionality; they think they understand. But of course it's the same mistake in both cases -- namely, from the fact that a system has a program, a set of symbol-manipulating devices, and the fact that it can pass the Turing test, it simply doesn't follow that it's conscious or that it understands anything. Now of course, remember we're all computers. In a trivial sense everything is a computer, because everything instantiates or implements some program or other. You're a computer. So am I. So of course some computers are conscious, namely, you and me. That's not what's at issue. What's at issue is, can a system have understanding? Can it be conscious, solely in virtue of running a computer program? Is that enough, is that all there is? And the answer to that is obviously no. We knew that before this argument ever got going. As you were pointing out, it's behaviorism, plus the Turing test, plus the power of computers, that's given people this illusion that all you need is a program that will pass the Turing test.

MISHLOVE: You also take issue with kind of the inverse of this argument, which is those in the field of cognitive science who are now saying the human mind is just a computer, and when we apply information-processing diagrams, then we'll understand how the human mind works.

SEARLE: Well, what they say, if you watch them carefully, is -- this is a favorite equation in the cognitive science literature, is to say the mind is to the brain as the program is to the hardware. So the mind is a program, and any system running the right program would have a mind in exactly the same sense that you and I do. Now, we know that that's false, and we can demonstrate its falsity in exactly four words: syntax is not semantics. That's really all it takes, given the fact that the program is syntactical.

MISHLOVE: Let's just define syntax for a moment.

SEARLE: Syntax just means the shapes of the symbols and the rules for their arrangement.

MISHLOVE: As opposed to the meanings.

SEARLE: As opposed to meanings. Remember, and Turing is partly responsible for this, the computer can be defined in terms of a set of operations on zeros and ones -- two arbitrary symbols. This is what's so wonderful about computers. You take these symbols, and you do sequences of zeros and ones, and formal operations -- like for example you erase the one on this square of the tape and then the head moves to the left of the tape and writes a zero, and then it moves back. These are just strictly symbol-manipulating operations. That's what I'm calling syntactical. It just has to do with the formal shape. By formal here, all I mean is just the shape of the symbols. But now you mentioned another view in cognitive science. I don't know if we care to get into it in any depth, but it's what I call weak AI, or cautious or sane AI, as opposed to strong AI, which is the view that that's all the mind is.
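
The tape operations Searle describes correspond to a single step of a Turing machine, and they can be written down directly. A minimal sketch (in Python, with a transition table invented for illustration): each rule looks only at the current state and the shape under the head, then writes a shape and moves -- the purely formal operations he calls syntactical.

    # (state, symbol_read) -> (symbol_to_write, head_move, next_state)
    # The transition table is invented for illustration; the symbols are just shapes.
    TRANSITIONS = {
        ("scan", "1"): ("0", -1, "scan"),  # erase the 1, write a 0, move left
        ("scan", "0"): ("0", +1, "scan"),  # leave the 0 alone, move right
    }

    def step(tape, head, state):
        """Apply one formal rule to the tape; nothing refers to what a symbol means."""
        write, move, next_state = TRANSITIONS[(state, tape[head])]
        tape[head] = write
        return tape, head + move, next_state

    tape, head, state = list("0110"), 1, "scan"
    tape, head, state = step(tape, head, state)
    print("".join(tape), head, state)  # -> 0010 0 scan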

MISHLOVE: AI stands for artificial intelligence.

SEARLE: Anyway, one of the interesting views on weak AI is the idea, well, look, maybe the program isn't all there is to having a mind, but you've got to at least have the program, because mental states are computational states, and mental processes are computational processes. So that, in a way, is how they can concede the obvious point I'm making -- namely, the program is not enough; it's not a sufficient condition. But then they think maybe it's a necessary condition. Now that's a kind of desperate maneuver, and there isn't any evidence for it, but if you probe the reasons why these guys hold this view, it's the old mind-body problem. They can't see any other way to solve the mind-body problem. They think intentionality can't function causally by itself, and it can only function if it's physically implemented. And this is where they get excited. They say in the entire history of science there's only been one answer to how mental states could function causally, how semantics could be causal, and that is that it's implemented in computer hardware.

MISHLOVE: And you take issue with that idea.

SEARLE: Right. That is, I think that there isn't a problem about how semantics can function causally, because we all experience it every day. For example, by semantics we just mean mental content. OK, right now I happen to be very thirsty -- typical of people on television, they get thirsty. Later on I'm going to go and drink some water. My state of thirst is going to cause me to drink water. That's a case of a semantic content causing behavior. Nothing mysterious about that. I want to drive to Berkeley, and I believe the best way is to go over the Richmond Bridge. That combination is going to cause some behavior on my part. So the problem that it was designed to solve -- namely, how can mental content and semantics function causally to move my body around? -- that was never a problem in the first place.

MISHLOVE: I think one of the interesting things I find in your work, as a philosopher, is your willingness to deal with this kind of naive view of causality, and to stand up for it -- to suggest that this is our basic experience, this is what we have, nobody is going to be convinced otherwise, even though it seems to go against the whole grain of modern science.

SEARLE: Well, actually, the whole grain of modern science is not so much opposed to this as you might think. It's the ideology which has been inherited from a seventeenth- and eighteenth-century attack on a certain conception of causation. The attack was led by David Hume. He said there isn't any experience of causation; all there is are statements of regularities. All we can find are regularities in the universe, and all we mean by causation are these regularities. Now of course, nowadays we think the regularities are the scientific laws. But the real mistake was to suppose that there's no experience of causation. It seems to me it's something we experience every day. Just watch; my arm went up. Now why did it go up? Well, I decided to raise it. A mental event, my decision, actually caused a physical event. Somebody thinks that can't happen, that I can't experience that? You just watch me. So there was a basic mistake in Hume. He was looking at the wrong place. He wanted to look out at events in the world and find an experience of a connection between the events, whereas what I'm suggesting is the actual experience of causation is in our ordinary perception and action. That is an experience of acting -- that's intentional causation moving our bodies. Or, if you like, in perception our bodies are affected by the outside world; once again we experience the world causally impacting on our bodies.

MISHLOVE: Wouldn't you say that the world of physics suggests that the whole universe is made up of nothing but particles, and these particles move back and forth and collide into each other -- I think you even go so far as to say that, in spite of quantum indeterminacy, from this point of view events in the macro world are all determined by the movement of these particles. So it's rather bold, I should think, to suggest that a mental event can have a causal influence.

SEARLE: OK, now you posed the problem exactly, and let me just give a brief answer to it, and that's this. Is the world made up of minute physical particles? Well, particles is a bit misleading; they're points of mass-energy. But anyway, it's made up -- we'll use a technical term -- of itsy bitsy bits of stuff. That's a lot of little entities which are the ultimate composition of reality. But now here's the marvelous thing, from this point of view, and that is that these combinations, these systems of entities, have higher-level physical features. They have such things as the mass and velocity of the larger system, not just of the individual particles of which the system is composed. And among those systems are biological systems, some of which are alive, and among those that are alive some of them have nervous systems, and some of those nervous systems are able to sustain consciousness. Now in this picture, consciousness is just an ordinary higher-level physical property of nervous systems -- no more mysterious that my brain should be conscious, than that a bunch of H2O molecules should be in a liquid form. Of course you can't say of any molecule, this one's wet, or this one's liquid; but the whole system is liquid. In exactly the same way you can't say of any neuron, this one's conscious; but the whole system is conscious. So what I'm trying to get across is this: you're absolutely right; the world is entirely made up of and accountable for in terms of physical particles, but at the same time there are higher-level features of these physical particles, such as solidity, liquidity, and consciousness, and they function causally. The higher-level features have a separate causal level of reality. So that if you're pounding a nail with a hammer, that solidity and the weight of the hammer head function causally, in the real causation there, even though, of course, the hammer head has properties that are entirely explicable in terms of the molecular structure. Similarly with consciousness; my conscious desire to raise my arm can cause my arm to go up, even though, of course, the whole thing has a level of description where it consists of acetylcholine, a neurotransmitter, being transmitted across the synaptic cleft.

MISHLOVE: So from your point of view the mind-body problem has essentially been solved. They're one and the same.

SEARLE: That's right.

MISHLOVE: It just depends on which perspective you're looking at the phenomenon from. From one perspective it looks mental, from another perspective it looks physical, like the wave-particle duality.

SEARLE: Well, it isn't even a duality. I want to say the big mistake -- and Descartes has got a lot to answer for, because he more than anybody else in the seventeenth century is responsible for this mistake, but it's been with us for three hundred years -- the big mistake is to suppose that if it's mental it can't be physical, and if it's physical it can't be mental. Now what I'm trying to say is, look, we just live in one world. Let's call that world physical; it's a good enough word. And among the properties in that world are some that are mental. But they're not mental as opposed to physical. They're physical because they're mental. Consciousness is just a higher-level physical state of the brain, just like weight and liquidity and solidity.

MISHLOVE: You're not saying that it's an epiphenomenon then?

SEARLE: No, absolutely not -- I mean, not any more than the solidity of the hammer is an epiphenomenon. A lot of people think, oh well, if it's just a feature of the brain, then the real work is being done down there at the level of the neurons and the synapses. But nobody would say that about the hammer or the piston. I mean, when I go in to get my car fixed in the garage, what they're interested in are things like crankshafts and pistons and connecting rods, and nobody says, well, it's the molecules that we're worried about. No, the pistons and the crankshaft and the spark plug are real, causal factors. Of course they're explicable in terms of the molecular construction, the molecular behavior. But the higher level isn't epiphenomenal just because it's higher. It can function causally. And the same goes for consciousness.

MISHLOVE: So you think that the mind-body problem, then, is really not a problem; in effect, it's a non-problem.

SEARLE: That's right, it's a non-problem.

MISHLOVE: Mind and body are in effect a unity, as far as you're concerned, an integrated unity. But you do feel that the issue of free will versus determinism is an unsolved problem.

SEARLE: Absolutely. And I wish I had a solution to it, and I'm convinced that most of the solutions in contemporary philosophy are just a cop-out. But the problem is that we have a conception of nature which is derived from the physical sciences, which is so powerful that we're just not going to give it up very easily. That is that the universe is entirely explicable in terms of the law-like behavior of elements, and these elements can be higher- or lower-level elements. The point is that the universe is explicable in terms of law-like behavior. Some of these laws may be only statistical; I mean, we want to allow for indeterminacy in physics. Indeterminacy is really no help to free will. But in our own experience, we just experience freedom of choice. I mean, it's just a fact about me right now that I feel in my experience that I can move this arm or this arm or neither arm. That is, we just have built into our consciousness the sense of alternatives. And those two pictures are really at war -- the picture of the universe as essentially incapable of freedom of choice because of its law-like, regular behavior, and the conception of there being features of the universe, namely, conscious, rational decision making, that are not in that way law-like, that are not in that way just a matter of blind physical forces. And though there are lots of efforts in philosophy to bring those two together, to show that they're really compatible, I think they're really not. I think we can't give up our conviction of our own freedom, even though there's no ground for it. What I'm trying to say is this: the conviction of freedom is built into our experiences; we can't just give it up. If we tried to, we couldn't live with it. We can say, OK, I believe in determinism; but then when we go into a restaurant we have to make up our mind what we're going to order, and that's a free choice. But at the same time, that conviction of freedom that's built into our experiences is inconsistent with what we know about how the world works.

MISHLOVE: Well, do you think then that science might give a little bit on this -- that we could build a science around the notion of creatures that have free will?

SEARLE: Not without a radically different conception of science than the one that's been so powerful since the seventeenth century. Now, there are various people that try to fiddle around with quantum indeterminacy to make it support freedom of will, but I've never seen any effort that looked even remotely plausible. The point about quantum indeterminacy is not that the particles have conscious, rational decision-making processes. Nothing of the sort. I mean, they're just blind forces. They happen to be only statistically predictable.

MISHLOVE: What about the social sciences? People in management science, for example, talk about goal-directed behavior all the time.

SEARLE: I think that the real subject matter of the social sciences in general is intentionality, and the social sciences have made a big mistake in thinking they've got to look like physics and chemistry. That's where the prestige and the money and the grants have been, in physics and chemistry, so it looks like they ought to be imitating that. But that's a mistake. What we're really talking about is applied intentionality. In the so-called social sciences, we're really interested in exactly goal-directed behavior. That's what economics is about.

MISHLOVE: So in that sense, if one were to acknowledge that the social sciences have a legitimate domain, it would be this domain of free will.

SEARLE: Well, I think that the social sciences are really independent of the question whether or not we have free will, because what they're dealing with is conscious, intentional behavior of greater or lesser degrees of rationality, and whether or not the systematic features of the phenomena that they're trying to describe are actually characterized by freedom of the will or not, is really independent. I mean, whether or not the guy makes a free choice when he buys a Honda as opposed to a Chevrolet, or a Toyota as opposed to a Nissan -- whether or not he has a free choice doesn't matter to the economist. What he's interested in is the supply-and-demand curve.

MISHLOVE: Let me ask you this question, as a philosopher. There's a long and ancient tradition in philosophy, starting I think with Pythagoras, in which philosophers have taken an interest in mystical disciplines, in understanding the mind through meditation, and developing intentionality through various mystical exercises and so on. What's your perspective as a modern philosopher on this?

SEARLE: Well, my perspective is very simple. Use any weapon that comes to hand, and nothing gets ruled out of court a priori. You've got to try it out. However, the important thing to see is that if there were mystical experiences, they're experiences like any other. If there was a supersensible reality that was accessible through mystical experiences, OK, that's part of reality that's accessible through a certain kind of experience. If God and the angels exist, that's a fact of science like any other. What we're interested in is, what are the facts? How does our constitution as biological beings enable us to get at the facts? And if it turns out that we're capable of certain kinds of experiences that I personally am not capable of, then that's interesting to me. I'd be interested to find out about that. Unfortunately, I've never seen anything that wasn't grossly implausible, but I have an open mind about this. If somebody can present me with some evidence that there are certain kinds of insights to be gained through mystical experience, fine. Use any weapon you can get, and use anything you need to solve your problems. If you can solve your problems after having drunk a whole lot of wine, drink the wine. That doesn't work for me. Everybody should use the method that is best for them.

MISHLOVE: But don't drive.

SEARLE: And nothing is to be ruled out of court.

MISHLOVE: One of the aspects of philosophy in general, I suppose, and undoubtedly you would encourage it, is for people to question their prejudices, their assumptions, when they go into these realms, because they may have opinions that are based on conditioning, for example, that don't really hold up.

SEARLE: Well, in my experience nearly all of us, in fact I guess all of us, fail to use our intellect to the fullest. I know very few people, and perhaps nobody, who really uses his or her intellect to the absolute maximum. In a way that's what philosophy is about: trying to get people -- oneself first, but one's students and colleagues and the public in general -- to try to think a whole lot harder. And one of the ways is to stop taking things for granted. I pick up the newspaper every day and there's all kinds of nonsense -- it's more or less obvious nonsense -- being spread around. Artificial intelligence is one of the most famous cases of this. Anybody can open a newspaper and see that a lot of the things that are said are absolutely silly -- how by next Christmas, or at least before the end of the century, we're all going to have household robots that are going to do all the housework, entertain us with lively conversation, take care of the kids, and amuse us in our old age, and so on. It's a lot of nonsense. I've actually studied some work in robotics, and it's in a very primitive state of development. So that's just one example. People are very gullible, and I guess one of the socially useful tasks of philosophers is to try to cast a little skeptical doubt on some of the nonsense people like to believe.

MISHLOVE: Well, Professor John Searle, you've certainly built a career on challenging a lot of people's prize preconceptions. It's been just a delight sharing that with you today.

SEARLE: Thank you.

MISHLOVE: Thank you very much for being with me.

END

