Empirically understanding understanding can make problems go away: the case of the Chinese room.
You can know the name of a bird in all the languages of the world, but when you're finished, you'll know absolutely nothing whatever about the bird ... So let's look at the bird and see what it's doing--that's what counts. I learned very early the difference between knowing the name of something and knowing something. -- Richard Feynman

The existence of a popular term does create some presumption in favor of the existence of a corresponding experimentally real concept, but this does not free us from the necessity of defining the class and of demonstrating the reality if the term is to be used for scientific purposes. -- B. F. Skinner
Does it matter if computers understand? Computers do what they do and do not seem to care what we call it. Still, at least since Linnaeus, and to an even greater extent since Mendeleyev's development of the periodic table, we know the value of classification--not just to order existing knowledge, but more importantly, to serve as a source of new hypotheses (e.g., Emsley, 2001). Furthermore, it seems the phenomenon of understanding must be tackled by anyone wanting to conduct a consistent analysis of problems related to knowledge. Psychologists whose starting point and primary interest has been behavior as such appear to have shared this view (e.g., Kantor, 1926; Parrott, 1984; Skinner, 1957, 1974). The problem of deciding whether certain computer operations should be classified as understanding is important because solving it can help us understand what understanding is.
It is surprising, then, that more often than not, the many authors debating whether computers can understand have failed to make clear what "understanding" means. Instead, discussions have often focused on a certain task, and asked if performing it is a demonstration of understanding. It seems the debate would be more fruitful if things were done in the opposite order. In other words, we should first try to reach agreement concerning what it is we call understanding, and then move on to whether or not an event or a process belongs in that category. As long as clear-cut criteria are lacking, discussions such as the one regarding a computer's possible ability to understand cannot be settled, and seem destined to go on indefinitely. A pivotal contribution to the literature on computer understanding is Searle (1980).
Searle's famous argument is based on the fact that he does not understand Chinese. He asks his readers to suppose that he is locked in a room and given two batches of Chinese writing. With the Chinese script, he is also given a set of rules in English. Searle (1980, p. 418) continues:
Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program."
According to Searle, proponents of "strong AI" (AI being artificial intelligence) would claim that by answering questions in Chinese the way he does, Searle in the Chinese Room is demonstrating that he understands the language. Searle himself argues that he obviously does not. Searle's (1980) treatment of understanding in humans and machines has become a classic, rated among the most influential works in cognitive science--not, perhaps, because the article is unanimously regarded as a stroke of genius, but because it has generated an almost unparalleled amount of discussion.
Searle uses a large part of the article to establish what understanding surely cannot be. He (1980, p. 417) makes it very clear that a machine lacking what he terms the "causal powers" of the human brain can never understand a thing. He then goes on to describe the problems he sees this creating for what he dubs "strong AI." He says little, however, regarding exactly what it is people do when they understand. Had Searle been clearer about what he means by "understanding," evaluating his claim--and the wide-ranging discussion instigated by his article (e.g., Boden, 1988; Churchland & Churchland, 1990; Newton, 1996; Preston & Bishop, 2002; Sloman, 1985; Teng, 2000)--might have been more focused.
The purpose of the present paper, then, is to critically discuss the relevance and quality of Searle's arguments regarding machine understanding. In the process, I examine some of Searle's central assumptions, such as his claim that understanding not only presupposes mental or intentional states, but also the causal powers of the human brain.
Intentionality and the Chinese Room
One of the things that makes Searle's (1980) argument appealing is the fact that on reading about him answering questions in Chinese simply by following a set of rules in English, and knowing that he does not know a word of Chinese, we feel we must agree that the man does not understand Chinese in any ordinary sense of the word "understand." Why is this?
The following is unambiguous in Searle's article: Computers cannot understand because "the formal symbol manipulations" they perform "by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics" (Searle, 1980, p. 422).
In referring to syntax without semantics, I think Searle is on to something. As we shall see, understanding X can be said to mean not experiencing lack of knowledge about X as an obstacle to reaching a certain type of goal. If one would like to interpret a symbol but does not know its meaning, lack of knowledge hinders one from reaching the goal of interpreting it, and it can hence be said that one does not understand the symbol. To feel that one does not understand is to experience lack of relevant knowledge thwarting one's attempts at reaching a goal.
I can remember being a little boy and not understanding the symbol "&," which I came across from time to time. The symbol did not bring forth many thoughts and ideas apart from "what is this?" It was not part of a network of associations, and had no semantic value. Hence I felt I lacked the knowledge needed to interpret the symbol correctly. I did not understand it.
To delve more deeply into this, it is important to keep in mind that Searle in the Chinese room would not have felt he did not understand, had he not known or suspected that Chinese writing was something that might be understood. To use an analogy, let us say two women, A and B, see an intricate pattern on the stone wall of an ancient temple. A takes the pattern to be signs in an unknown language. B believes the pattern to be cracks in the stone not resulting from human action. If A starts wondering what the signs might mean, she will feel she does not understand them. B, on the other hand, will not feel she does not understand the cracks, because she does not think there is anything about them to be understood, at least no kind of meaning or semantics. Likewise, I may take a person's gesticulations to be sign language or spasms. If I believe them to be spasms I will not feel I do not understand.
Reading about Searle in the Chinese room makes us assume, then, that if we were in Searle's shoes, we, too, would feel we did not understand. This, of course, has to do with the fact that at some point in the Chinese room, most people would ask themselves what the Chinese signs might mean. Having done so, and finding that lack of knowledge stops them from reaching their goal of finding out, they, too, would feel they do not understand.
Though it seems that people's normal state is one of understanding (i.e., most of the time, we have no feeling of not understanding, and if asked, we would confirm that we understand what we are doing and what is going on), in general we do not go around contemplating how much we understand. We do not think about it because it is our normal state. By the same token, as long as I do not feel pain in my mouth, I seldom reflect on the fact that things seem to be all right in there. After getting rid of a toothache, however, I may feel very strongly the blessed sensation that the pain is gone. Likewise, gaining insight after not having understood may be a strongly positive experience, and something one is very conscious of.
We have said that to understand one must not feel that lack of knowledge hinders one from reaching a goal. Hence it might seem we should agree with Searle that to understand, possession of an intentional state is necessary. This is because having knowledge or goals, that is, sets of beliefs, are regarded as intentional states.
It is also clear, however, that the knowledge and goals shaping our thoughts and actions may not be consciously available, yet may still strongly affect our behavior. Searle (1980) states unequivocally that computers cannot understand because they have no intentionality. I fail to understand, however, why intentionality is important for understanding to take place. This is especially so because intentionality is intimately connected to consciousness, while most of the time, people's understanding is of an unreflective sort. We are not consciously aware that we understand, we just do it--as when something one hears on the radio evokes private images and emotions related to the message, but not the thought "I understand what I am hearing." When, as is normally the case, the process of understanding takes place without any conscious effort, it goes a long way towards satisfying the criteria for automaticity (cf. Brown, Gore, & Carr, 2002).
Let us look for a moment at a person's goals--what he or she is striving to achieve. If, as the empirical literature strongly indicates, very few of our actions are consciously initiated, it is difficult to claim that conscious goals play much of a role in the initiation of behavior. Indeed, it is doubtful whether a goal or a set of goals can ever be completely conscious. On balance: Not only is it clear beyond reasonable doubt that we may not be conscious of our goals (e.g., Bargh, Lee-Chai, Barndollar, Gollwitzer, & Trötschel, 2001; Moskowitz, Gollwitzer, Wasel, & Schaal, 1999), there is also evidence that the entire sequence of goal pursuit can occur outside of awareness (Fitzsimons & Bargh, 2003).
Even when we have a clear sense of initiating an act to achieve a certain goal, that sense may be the result of our consciousness working overtime to achieve meaning and coherence (cf. Wegner & Wheatley, 1999). There is ample evidence to show that what we consciously believe to be the goals we work to attain may be very different from those that actually motivate our behavior (Ach, 1905; Lieberman, Sunnucks, & Kirk, 1998; Matute, 1995).
Searle (1992) says that a nonconscious belief may still be intentional, as long as it is potentially conscious. Therefore, Searle would probably say a person might understand even if his or her understanding is based on unconscious beliefs. If a robot running a program has belief A, however (that is, it behaves and processes information as if A were the case), the robot cannot understand, says Searle, even if the robot can at any time explicate the beliefs on which its "understanding" is based. In other words, the robot may be able to go a lot further in the direction of demonstrating understanding based on conscious beliefs than an ordinary human is often able to do. Still, insists Searle, we are not allowed to call the robot's state understanding.
We must ask, then, what it is about understanding that makes this state unthinkable in a computer running a program? Searle says it is lack of intentionality, but I think his introduction of nonconscious intentionality creates serious difficulties for this assertion. Let me show why this is so.
When a nonconscious belief influences a person's behavior, the belief exists as a certain pattern of activation in the brain, but not in a way or in those areas that bring about consciousness (see e.g., Seinelä, Hämäläinen, Koivisto, & Ruutiainen, 2002). The belief in question is potentially conscious, so by being represented in another way, by other or additional neurons, the belief can become conscious. Because it is potentially conscious, Searle (1992) states, the belief is intentional.
Let us look now at a computer that always behaves as if X were the case, that is, it may seem to believe that X is the case. When the computer's "belief" influences its behavior, the "belief" exists as a certain pattern of activation, but not one, presumably, that gives rise to consciousness. According to Searle, however, if a computer is given the causal powers of the human brain, it, too, can have intentional beliefs. Therefore, the computer's nonconscious belief is, like that of the person, also potentially conscious.
The only difference, then, between person and machine is that for the machine to have a conscious belief we need to plug in a module having the relevant "causal powers" of the human brain. Building the module is a practical problem irrelevant to our discussion. The fact remains: If a nonconscious belief is potentially conscious and therefore intentional in a person, this must also be the case in a computer. For a belief to become conscious in a human, brain areas must be engaged other than those active when the belief is not conscious. In a computer, the module will need to be plugged in to achieve consciousness.
One cannot object that a computer's belief is not potentially conscious because we do not know whether the critical module will ever be built. "Potential" means only "possible, given the right circumstances." We also do not know whether the person will ever bring his or her unconscious belief to consciousness.
A further consideration points to the conclusion that intentionality is irrelevant to understanding: As we saw above, a machine as well as a person can have intentional states. But what one means by the word "understanding" is of course also important in this regard.
Searle (1980, p. 421) points out that, in daily life, "we find it completely natural to ascribe intentionality to ... domestic animals." Searle continues:
We would certainly make similar assumptions about the robot unless we had some reason not to, but as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality.
Searle's assertion about the assumptions "we" would make about a robot is of course an empirical claim, which to my knowledge has not been tested. It is not self-evident that a person saying a computer understands X would retract that statement once he or she gets to know that the machine's behavior is "the result of a formal program." My guess is that a person reminded that a computer does not work quite like a human brain might say something like "I know it doesn't understand in exactly the same way as a human does, but it still seems to understand." But of course we do not know this until the question has been properly investigated. Still, it is probably true that if we were to see for the first time a robot carrying out a complicated oral command given in a foreign language (Connell, 2000), at least some of us might say "Wow--the robot understood!" There is reason to believe, in other words, that many people, perhaps most (including psychologists), would say that animals (e.g., Suddendorf, 1998) and machines understand under certain circumstances.
If it is true, then, as Searle himself says, that people find it natural to ascribe intentionality to animals (e.g., in the form of understanding), and that many of us might say a robot following oral commands understands, then these facts alone are indications that the phenomenon Searle is discussing is not understanding. Searle's "understanding" seems not to be understanding because he describes a state with important characteristics that, according to Searle, "understanding" must possess, but which, given the way the word is normally used, understanding does not necessarily have. What Searle supplies is the opinion that "understanding" should refer to something other than it seems to do to most people. We shall go deeper into this issue below. But if we disagree with Searle's use of "understanding," we ought to have both an alternative, and a good reason to prefer our own choice. I believe we have both, and we shall start examining them below.
Understanding, Knowledge, and Goals
"Understanding" is often used as a word for attainment of knowledge, or sometimes for knowledge itself (e.g., Dawes, 1995; Føllesdal, 1981; Hunter, 1998; Wittgenstein, 1922; but see de Gelder, 1981, for an alternative). Descriptions and definitions of understanding have been based on the ability to explain (e.g., Schank, 1986), to predict (e.g., Dawes, 1995), to respond appropriately (Kantor, 1926; Skinner, 1957), and several other criteria (Greeno & Riley, 1987; Scriven, 1972). What the various proposals have had in common is the author's insistence that his or her way of talking about understanding is the most sensible, for an array of good reasons. What they have also had in common is that none have won general acceptance.
The present article is based on a definition that differs in one important way from other general descriptions and definitions of understanding: As well as being a definition, it is also a theoretical statement leading to testable hypotheses regarding the state of people who understand. The definition can thus be called an empirical definition.
Our discussion will be based on the following definition: Understanding X means not believing lack of knowledge about X to be an obstacle to reaching a relevant, currently active goal. The "X" of the definition is anything that can be said to be understood in English, such as a language, a work of art, a scientific problem, a person, or a narrative.
As has often been pointed out, we probably cannot achieve perfect understanding of any phenomenon (in the sense of having certain and complete knowledge about it, see e.g., Nickerson, 1985). Furthermore, even if we did, we could not know; there is always a risk that our thinking may be faulty or that our methods of information-gathering are defective. It seems clear, in other words, that in discussing understanding, our subject is in fact sufficient understanding--in other words, understanding that may never be perfect or complete, but sufficient for us to feel we understand (cf. Allwood, 1986; Ferreira, Bailey, & Ferraro, 2002). Hence, when I speak of understanding, I will always be referring to sufficient understanding.
As can be seen from the definition, "understanding" is a name for a relation between some amount of knowledge and a goal. I assume that understanding is a name for a set of states in the nervous system, states that can to some extent be felt and reported. To understand understanding we should aim to describe the states that people identify as "understanding," as well as conditions that do or do not give rise to these states. Such descriptions should be based on empirical data.
Below, I discuss three hypotheses regarding understanding that have particular relevance for the Chinese room problem. As we shall see, considerable experimental support already exists for the hypotheses, indicating that the present empirical definition of understanding may indeed describe the state speakers of English refer to when they say they understand. Arguably, this fact increases the weight that can be ascribed to the definition. I go on to consider the consequences of the present definition and findings that support it for Searle's view of understanding.
Hypothesis 1: Understanding and Knowledge
The present definition of understanding speaks only of a certain way of experiencing lack of knowledge. To understand, one must not deem that lack of knowledge hinders one from reaching a certain goal. It thus follows from the definition that much knowledge may not be needed to avoid feeling that there is not enough of it to reach the goal. Indeed, it follows that having relevant knowledge may not be necessary at all.
It is clear that even if we feel we have understood, our understanding may not be based on true knowledge. Indeed, when external feedback is available during a process of trying to understand, people repeatedly reach points at which they feel they have understood--only to be proven wrong (Miyake, 1981). Moreover, people often feel there is deeper knowledge behind their understanding than what is really the case (Commander & Stanwyck, 1997; Rozenblit & Keil, 2002). Indeed, we may often feel we understand in spite of having little relevant knowledge (e.g., Glenberg & Epstein, 1985; Sanford & Sturt, 2002). Furthermore, understanding by means of erroneous information may, upon consideration of new facts and arguments, lead one further away from true knowledge through biased assimilation and polarization of opinion (Lord, Ross, & Lepper, 1979). All this is quite uncontroversial.
It seems clear, then, that we may often understand because we think we have true knowledge, though we do not. There is also evidence, however, that people may frequently be aware that they do not know much about some person, thing, or process, but still feel they understand her, him, or it. For instance, in some situations, anxious persons may try to avoid information related to their fears ("I understand only that X is dangerous, I do not want to know any more") (e.g., Lutgendorf et al., 1997).
Furthermore, people who feel they understand something are sometimes able to describe possible reasons why their understanding might be incorrect, and yet lack the motivation to acquire new knowledge necessary to fully evaluate and possibly change their current understanding (e.g., Perkins, Farady, & Bushey, 1991; Tishman, Jay, & Perkins, 1993). Strong involvement with an issue often results in a strong tendency to avoid facts that might change the cognitive basis of one's understanding of that issue (cf. Johnson & Eagly, 1989). Again, people are aware that they lack pertinent knowledge, but still feel they understand the matter in question. To some extent, this is what we all do when, for instance, we skip reading an article by someone who uses new findings to advance views with which we know we will disagree.
Discussing nonprocedural knowledge, Tyler (1994) maintains that horses can understand people without knowing relevant facts about the persons they understand, and that this phenomenon can be exploited therapeutically. This hypothesis may not be correct, but the fact that it has been formulated indicates that some researchers feel understanding does not presuppose much knowledge.
Some feminist authors (e.g., Belenky, Clinchy, Goldberger, & Tarule, 1986) claim that women generally acquire wisdom by seeking understanding, but not facts. Indeed, if their claim is correct, women tend to regard what we normally call facts, and hence much of knowledge itself, as irrelevant to understanding. This, according to Belenky et al., is because women want to connect with ideas, not master them. Their claim may be fallacious. Still, Belenky et al. strongly argue that understanding that is not fact-based must be acknowledged as legitimate. Hence, even if the hypothesis of Belenky et al. turns out to be erroneous, their book is in itself a strong indication that some people (such as these authors themselves) may indeed feel they understand X even if they have little of what we normally call knowledge regarding X. I should emphasize that Belenky et al. mainly discuss factual, and not procedural, knowledge.
Finally, research in fields as disparate as eyewitness psychology and opinion research shows that, not infrequently, people's way of understanding a chain of events or a political or religious issue may survive the total discrediting of the evidence on which their understanding was built, even when understanders acknowledge that the relevant evidence has been discredited (Anderson & Kellam, 1992; Batson, 1975).
The present article's empirical definition (a) shows that understanding may exist in spite of little or no knowledge and (b) opens up the possibility that people may acknowledge that they have little or no knowledge of X and still feel they understand X. The findings and arguments cited above indicate that both may indeed be the case.
Self and others. The present definition can be applied to understanding both from a first person and a third person perspective. When I evaluate the understanding of others, the goal to be reached is my criterion of understanding. If I deem that a person or a machine has reached that criterion, I say he, she, or it understands (see also Hauser, 1997).
Evaluating understanding in oneself and in others are thus two rather different tasks. In ourselves the criteria we use are private, and, I shall argue, ultimately tied to feelings. As regards the understanding of others, our assessment must be based on observable behavior: Can she predict the weather? Can he explain Freud's theory of psychosexual development? Does she respond adequately to instructions given in Spanish? We normally ascribe some degree of understanding of meteorology, psychoanalysis, or Spanish to persons fulfilling criteria of this sort. If the criteria are hard to satisfy, we ascribe deep understanding to those who can do it. This is of course the type of reasoning on which Turing's (1950) "imitation game" criteria are based.
Now Searle, on balance, wants a machine to satisfy a stricter criterion than the one we normally use in assessing the understanding of humans. For another person to deem that I understand Chinese, it will usually suffice that I translate a text or take part in a conversation. Searle insists that to understand, a machine or a person must have a subjective experience--one he gives names such as intentionality, semantics, or meaning (see also Searle, 2001). Based on demonstrations of competence, most of us routinely attribute understanding to other people, domestic animals, and perhaps machines, without trying to probe the understander's subjective state. As we have seen, this will not do, in Searle's view, at least with regard to the two latter groups. To understand, a device must "duplicate the causal powers of the human brain" (Searle, 1980, p. 417).
We have seen above, however, that the human brain often does not demand much knowledge to feel it understands. Hence if a computer were to understand the way humans do, evidence like that reviewed above indicates that there would often be very little knowledge to back up that feeling. Thus it seems that Searle can't have it both ways: If a machine were to be constructed that could duplicate the causal powers of the human brain, the machine, because it would work like a human brain, could have a feeling of understanding resembling that of a human (though probably not identical to a human's, if it didn't have a human body, too [cf. Damasio, 1999]). Often, however, as in humans, the beliefs on which the machine's feeling of understanding were based would be wide of the mark or in other ways quite insufficient to justify what the machine subjectively understood.
This, Searle would probably object, would not be understanding at all, only a case of false understanding. "'Understanding'," he says (1980, p. 424), "implies both the possession of mental (intentional) states and the truth (validity, success) of these states."
Searle is of course free to speak of understanding in such terms. This is probably what he thinks "understanding" should mean. It seems clear, however, that what Searle calls "understanding" is not what most speakers of English refer to when they say that someone understands, be it themselves or somebody else. The evidence discussed above, and more evidence to be considered later, supports the assumption that when using the word "understanding" to describe a certain state as they experience it in a private event, speakers of English refer to the state described by the present empirical definition of understanding.
To speak of understanding in others, we demand clear indications of agreement between our own and the other person's beliefs regarding the phenomenon to be understood. However, to ascribe understanding to another person, we do not normally try to evaluate the person's subjective state. He or she could indeed be totally void of intentionality and we would still say the person understood, because we wouldn't know, anyhow.
Searle will not accept that when people evaluate the understanding of others, the subjective state of the other person (or animal or thing) is irrelevant. When he speaks of himself in the Chinese room, he makes the reader do something one does not normally do in assessing the understanding of others: switch from a third-person to a first-person perspective. As long as a machine or a man in a room can communicate in Chinese, most of us would be satisfied that it or he does indeed understand the language. However, when Searle shows us what goes on inside the room, and asks if the man can really be said to understand, he makes the reader ask himself or herself, "Would I understand if I were the man in the room?" And the answer is clear: "Of course not." It is only by tricking the reader into applying criteria that are not normally used in the evaluation of understanding in others that Searle achieves his effect. The criteria we normally use are based on publicly observable behavior. Searle tries to make us judge understanding in another person by a standard we would otherwise reserve for ourselves, by demanding that the other person not only demonstrate knowledge, but that he or she also be in a certain subjective state.
Trying to change the way we speak of understanding in others is thus not a trivial problem. A relevant analogy: A fish's experience of qualia is probably different from mine. Can I therefore claim that the fish cannot really see, even if it shows every sign of normal visual perception? Or somewhat closer to home: Let us say that in an examination, a young woman has demonstrated deep understanding of a certain subject, and has thus received an A. The student is currently in a state of depression, however, and does not feel she has understood anything at all. (The fact that one can have much relevant knowledge, and yet no understanding, is discussed below.) If we find this to be the case, should we then revoke the student's good grade, her clear and thoughtful analysis notwithstanding? Or should we conclude that we can never really know much about the subjective state of others, and that in the current context, it does not matter, anyway?
Summing up. To accept that a person understands X, Searle wants evidence of true knowledge regarding X, as well as the presence of a certain subjective state in the understander. We have seen, however, that when in a state they call "understanding X," people often do not have much relevant knowledge. Furthermore, people may be aware that they have very little relevant knowledge, and still feel they understand. Searle may say, of course, that most people are using the word "understanding" incorrectly. The evidence indicates, as we have seen, that it is he who is using the word in a way that does not correctly describe the state normally referred to as "understanding." We saw this point further underscored by the fact that it may sometimes seem unreasonable not to accept that another person understands, even when the person (like Searle in the Chinese room) does not feel that he or she understands.
In conclusion, then: For me to judge that I understand, a subjective feeling of understanding is always necessary, but much relevant knowledge is not. The opposite is the case when evaluating understanding in others: To accept that another person understands, we want demonstrations of relevant knowledge, but we do not normally care much about his or her private states. It seems, then, that what Searle is discussing is not understanding in its normal sense. For this reason alone, the Chinese room may be irrelevant to the problem of understanding understanding.
The Advantage and Relevance of an Empirical Approach
But, one might object to what has been said thus far, Searle and the present author obviously do not mean the same thing by "understanding." How can a discussion then be possible?
As we have seen, even after much discussion, scholars and researchers do not agree on what understanding is. This has mainly to do with the fact that arguments have been based on what different authors believe to be reasonable criteria and characteristics. Beliefs differ, however, and it has been difficult to show why some arguments should be given more weight than others.
History shows that philosophers have spent time discussing the nature of many phenomena that ultimately turned out to belong in the realm of empirical research. The following seems not to be in doubt: Under certain circumstances people will be in a state they label "understanding." This is an empirical fact, and by mapping the circumstances that occasion the state or states that people call understanding, we can gain more knowledge about that state. This approach has been very fruitful in casting light on other psychological phenomena. I see no reason why the state of understanding should not be amenable to the same approach.
The strength of assumptions regarding psychological and other natural phenomena depends on the support these assumptions have from empirical findings. I believe, in other words, that robust arguments exist for the approach I have taken. The present description of "understanding" has empirical support, which Searle's way of using the word seems not to have. It should thus be of some interest to see where, and for what reasons, the present empirical definition of understanding leads to other conclusions than does Searle's analysis.
Hypothesis 2: Knowing But Not Understanding
Building on the present empirically based definition of understanding, we see that not understanding X means believing lack of knowledge about X to be an obstacle to reaching a relevant, currently active goal.
To exemplify: If Ms. Y would like to be able to communicate with someone in Hindi, but cannot do so for a subjectively felt lack of knowledge, she does not understand the language. The subjective feeling of lack of knowledge may or may not correspond to the judgment of others. When I answer the question "Do you understand Hindi?" in the negative, I assume that should I wish to communicate with someone in Hindi, my lack of knowledge about that language would be an obstacle to reaching my goal.
Not understanding is to have a problem or a potential problem. Coming to understand is to find a solution to that problem. To move from a state of nonunderstanding to one of understanding, a criterion must be satisfied, and this fact must result in a signal being generated. In a machine, this signal could take many forms. In humans, the existing data indicate that the signal is a feeling resulting in the termination of the state of nonunderstanding (cf. Overskeid, 2000, for a review of relevant findings, and Scriven, 1972, for an interesting discussion). We shall discuss the subject in more detail below.
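The criterion-and-signal account just described can be sketched in a few lines of code. Everything here (the `Understander` class, the criterion predicate, the string signals) is an illustrative assumption, a minimal sketch rather than a model proposed in the text:

```python
# Minimal sketch of coming to understand: a criterion over available
# knowledge is checked, and when it is satisfied, a signal terminates
# the aversive state of nonunderstanding. All names are hypothetical.

class Understander:
    def __init__(self, goal, criterion):
        self.goal = goal                # currently active goal
        self.criterion = criterion      # predicate over available knowledge
        self.not_understanding = True   # aversive state to be terminated

    def consider(self, knowledge):
        """Check the criterion; if satisfied, generate the signal
        (in humans, a feeling) that ends nonunderstanding."""
        if self.criterion(knowledge):
            self.not_understanding = False
            return "criterion satisfied: signal generated"
        return "criterion not satisfied: still not understanding"

agent = Understander(goal="communicate in Hindi",
                     criterion=lambda k: "hindi grammar" in k)
agent.consider({"english grammar"})  # obstacle remains; state persists
agent.consider({"hindi grammar"})    # signal ends nonunderstanding
```

Note that nothing in the sketch requires the signal to feel like anything; it only has to terminate the state, which is the point the machine case turns on.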
Hypothesis 2 follows from our definition of nonunderstanding: People may know all there is to know about X, and yet not feel they understand X. This is because, according to the definition above, not understanding does not necessarily result from lack of relevant knowledge. Rather, not understanding means believing that a perceived lack of knowledge will impede the ability to achieve a certain goal.
A man may be very rich, but still feel he does not have enough money to attain his goal. By the same token, a person possessing all necessary goal-relevant knowledge may feel he or she has too little such knowledge, and that this is thwarting attempts at reaching his or her goal. If this is the case, it must necessarily result from the person applying a criterion that cannot be satisfied. Two different mechanisms may be responsible for the criterion's being unattainable, one cognitive and one emotional.
On the cognitive side, we shall look at three possible reasons why people's criteria of understanding may end up being impossible to satisfy--in other words, why a person may know all there is to know about X, but fail to acknowledge this fact.
(1) Understanding is often about seeing causes. The tendency to overcomplicate one's personal theories of causation is prevalent and well known. In other words, people may already have the information needed to understand, and be aware that they have it, they just do not believe things can be that simple. There's got to be more to it. Wason's (1960) 2-4-6 experiment is a classic illustration.
(2) The person may fail to see that a certain conclusion follows from premises already known to him or her.
(3) A person may suffer from an exaggerated tendency to doubt new knowledge and conclusions. It is well documented that this tendency, too, is prevalent (e.g., Nickerson, 1998). In particular, people presented with facts and arguments that go against what they already believe will routinely subject this information to a doubt so strong as to be dysfunctional, as in the many well-documented cases of eminent scientists rejecting new scientific insights that go against their existing beliefs (Barber, 1961).
Feeling and understanding. Though drawing a clear line between cognition and emotion may sometimes be difficult, some of the reasons why a criterion of understanding may become unattainable are primarily emotional. Let us see how this may be the case in a sad or anxious person.
All definitions of "problem" point out that having a problem means being in a state one is motivated to change, that is, an aversive state. For instance, according to Mayer (1990, p. 284): "A 'problem' exists when a situation is in a given state, the problem solver wants the situation to be a goal state, and there is no obvious way of transforming the given state into the goal state."
To want to change the present state, we need a motive. A motive exists when the present state feels aversive in itself. A motive also exists when a mentally represented state becomes a goal by eliciting more pleasant feelings than those elicited by attending to the present state. Seeing a solution to a problem will normally elicit a positive feeling (e.g., Devine, Tauer, Barron, Elliot, & Vance, 1999), though there is evidence that preexisting sadness may stop this from happening (Cervone, Kopp, Schaumann, & Scott, 1994). Having a problem, then, generates feelings that are negative compared to those elicited by representations of a goal state. Finding a solution, however, tends to alleviate the negative feeling and replace it with something experienced as more positive. Indeed, there is evidence that this mechanism is the motivation driving problem-solving behavior (Overskeid, 2000).
A feeling elicited by a subliminal prime can unconsciously influence the emotional evaluation of a target stimulus (Greenwald, Draine, & Abrams, 1996). Even a relatively weak and diffuse feeling may be displaced onto stimuli different from those that caused the affect (Murphy & Zajonc, 1993; Winkielman, Berridge, & Wilbarger, 2005). There are indications that a closely related mechanism is behind the fact that preexisting sadness may contaminate the feeling elicited by possible solutions (see Overskeid, 2000).
Imperfect knowledge does indeed appear to be more of an obstacle to people who are sad than to people in a neutral or happy mood (Marton, Connolly, Kutcher, & Korenblum, 1993). Several findings indicate that this has to do with people's performance standards being heightened by sadness, relative to neutral or positive mood (e.g., Cervone et al., 1994; Golin & Terrell, 1977).
If a person is sufficiently sad, then, he or she will be unable to feel better on considering any possible solution to a problem, including those that most people would consider adequate. Considering the possible solution does not allow the person to leave the aversive state of having a problem. If this is the case, the person's negative mood has also made it impossible to satisfy any internal criterion of understanding. A primary clinical manifestation of depression is a general lack of hope (Cheavens, 2000). Hence it should come as no surprise that people who are sufficiently sad tend not to feel they can hope to understand new things (Cole, Martin, Peeke, Seroczynski, & Fier, 1999).
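How a mood-heightened performance standard can make any internal criterion of understanding unsatisfiable may be made concrete with a small sketch. The `criterion_satisfied` function and the numeric standards are hypothetical, chosen only to illustrate the mechanism, not values reported in the studies cited:

```python
# Illustrative sketch: sadness raises the standard a candidate solution
# must meet (cf. Cervone et al., 1994). If solution quality is bounded
# in [0, 1] and the sad standard exceeds 1.0, no solution can satisfy
# the criterion, so the aversive problem state never terminates.
# The numbers are assumptions made only for the illustration.

def criterion_satisfied(solution_quality, mood):
    standard = 0.7 if mood == "neutral" else 1.1  # sad: unattainable
    return solution_quality >= standard

best_possible = 1.0                       # ceiling on solution quality
criterion_satisfied(best_possible, "neutral")  # satisfiable
criterion_satisfied(best_possible, "sad")      # never satisfiable
```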
Anxiety may also make understanding very difficult. A person with a fear of flying may know full well that flying entails a very low accident risk and yet fail to understand that flying is less dangerous than many activities, such as driving, in which the person engages without feeling anxious.
Finally, there are indications that both anxiety and depression may make people doubt their own cognitive competence, leading to an underestimation of their own ability to reach any criterion of understanding (Cole et al., 1999). This, too, may result in people having thorough knowledge about a phenomenon, and yet not feel they will be able to understand it (Blankstein, Flett, & Johnston, 1992).
Searle proposes, as we have seen, that an understanding thing must have the causal powers of the human brain. The evidence discussed above indicates that some features of the human brain make us think and feel in ways that are not conducive to understanding. It seems, in other words, that not constructing overly complex theories of simple events, instinctively reasoning in a Bayesian manner instead of seeking only to confirm what one already believes, not getting sad or scared or suffering from other biases and imperfections might under some circumstances help a thing understand. Such a thing would be quite different from a human brain, however.
Hypothesis 3: Understanding and Motivation
In his article on the Chinese room, Searle hardly touches on the subject of feelings and motivation. This is somewhat surprising, as several major philosophers have pointed out how feelings and ways of understanding are intimately intertwined (e.g., Hume, 1739-40/1969; Spinoza, 1677/1994). This notion is, of course, supported by latter-day research in psychology and neuroscience (e.g., Damasio, 2003). We have already seen some examples above.
The most important aspect of understanding may be the fact that it tends to make us stop searching for information, or, expressed in somewhat more stringent terms as Hypothesis 3: In the absence of extrinsic motivation, a person who understands X will not seek to falsify or weaken the set of beliefs that is the cognitive basis of the person's understanding. This follows from the present definition of understanding. The definition tells us that understanding always exists relative to a goal. When X is understood, the goal has been attained, or is believed to be attainable without further need for knowledge. Hence, apart from extrinsic motivation, there is no incentive for continued information search.
In addition, again in the absence of extrinsic motivation, there is little reason to assume that people will work very hard to confirm that they do indeed understand X. As long as we do not experience a problem, we tend to go about our lives and feel we understand what is going on without thinking much about the assumptions our understanding is based on.
This is good, as feeling we understand thus prevents us from engaging in the endless rumination typical of depressives, who can never find solutions, and hence never understand. It is also sometimes bad, because people tend to think too little when important problems are solved and decisions made. We often accept a possible solution too early, without searching thoroughly for its possible weaknesses or for better alternatives (Baron, 2000).
But can we really be sure that the state of understanding X demotivates thinking as powerfully as I claim? The relevant literature seems to support this contention quite strongly. First of all, as pointed out by Francis Bacon some 400 years ago, people have a strong tendency to interpret new information that comes their way as supportive of what they already believe (see Quinton, 1980). As belief presupposes understanding, it seems, if Bacon's observation is correct, that when we understand something, we seldom scrutinize the basis of that understanding. Hence what we feel we understand rarely creates a problem for us, because our interpretation of most facts and arguments ensures that they do not conflict with our understanding of the rest of the world. And when no problem exists, we lack the motivation to think.
Bacon has received ample empirical support on this point. Indeed, people in general seem to be "cognitive misers," only thinking thoroughly when motivated by some sort of problem (Ebenbach & Keltner, 1998; Fiske & Taylor, 1991). Furthermore, there is abundant evidence for a strong and prevalent "confirmation bias," as the phenomenon described by Bacon has come to be called, observed in a diverse set of laboratory studies as well as in applied fields as different as medical practice and eyewitness psychology (see Nickerson, 1998). Direct tests of the relation between understanding and information search underscore this. For instance, early on in a trial, jurors commonly try to understand "what happened" by constructing a narrative explanation. Once a narrative has been constructed that yields understanding, people very seldom search for information and arguments to construct alternatives (see Hastie & Pennington, 2000; Kuhn, 2001). Confirming and strengthening these findings in a basic research setting, Rozenblit and Keil (2002) found that when people reach a state of understanding, their search for information is terminated.
According to Searle (1980, 1992) understanding presupposes intentionality, semantics, or meaning, as well as true or valid knowledge. From the findings discussed in the present article, it seems clear, however, that understanding is the name given to a state involving the absence of perceived obstacles to goal attainment, but not necessarily much knowledge.
Because understanding has to do with attaining goals, understanding is tightly connected with motivation. Indeed, it seems the motivational side of understanding is more important than the aspect having to do with intentionality, which is so central to Searle. This is because the attribute of intentionality cannot in itself change a person's thinking or make the person act (which follows, of course, from Hume's (1739-40/1969) "slave of the passions" argument). It is only because understanding involves feelings and therefore motivation that it can affect the way people think and behave. It seems, in other words, that trying to understand understanding without taking feelings into account results in an account that is quite incomplete.
Searle's (1980) main interest, of course, is whether a machine can be said to understand. Hypothesis 3 makes clear that if a device is assumed to understand, and then goes on to falsify a belief on which its understanding rests, it had never understood after all, given that its falsification attempts were not driven by extrinsic motivation.
Whether a machine can ever behave as described in Hypothesis 3, I do not know. This seems a purely empirical question, by no means unthinkable in principle. As pointed out by Sloman (1985), understanding may be a slightly different thing in different people, animals, and machines. Indeed, relevant evidence indicates that there are clear differences in the way individuals experience their emotions (Gohm, 2003). There is no reason that the signals serving to motivate and demotivate a machine should have physical and subjective characteristics similar to the feelings governing the motivation of humans. What would be important to make a machine behave as described in Hypothesis 3 is that certain signals function the way feelings do in humans.
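A machine whose signals function the way feelings do, without sharing their physical or subjective characteristics, might be sketched as follows. The `Machine` class and its methods are hypothetical, a minimal illustration of Hypothesis 3 rather than a proposal for an actual architecture:

```python
# Sketch of Hypothesis 3 in a machine: once an internal signal marks a
# state of understanding, information search is demotivated unless
# extrinsic motivation is supplied. All names are hypothetical.

class Machine:
    def __init__(self):
        self.understands = False  # feeling-like internal signal
        self.searches = 0         # count of information searches

    def search_information(self, extrinsic_motivation=False):
        # Understanding terminates search; only an extrinsic motive
        # (e.g., an external instruction) can restart it.
        if self.understands and not extrinsic_motivation:
            return "no search: understanding demotivates it"
        self.searches += 1
        return "searching"

m = Machine()
m.search_information()    # searches while not understanding
m.understands = True      # the signal now functions as a feeling would
m.search_information()    # no further search; the belief set is left alone
m.search_information(extrinsic_motivation=True)  # extrinsic motive overrides
```

On this sketch, a device that spontaneously went on falsifying the beliefs underlying its "understanding," absent any extrinsic motive, would by Hypothesis 3 never have understood at all.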
The Causal Powers of the Human Brain
Searle asserts that a computer running a program cannot understand. It is probably correct that computers running many types of programs cannot understand. I am not sure, however, that a nonhuman thing or creature would need to possess the "causal powers" of the human brain to understand. Searle gives scant reasons why this should be necessary.
It is obvious that a computer does not understand the way a human understands. However, robots can walk even if they do not walk quite the way we do. Can a computer still understand, then? Based on his own understanding of understanding, Searle says no. But what Searle says about the characteristics of human understanding is just something he says. Because the present definition is empirical, and, as we have just seen, has empirical support, it may be said to carry more weight than descriptions of understanding lacking that foundation. The present definition was constructed to deal with understanding in humans. It describes the central elements in what we call understanding of something. The fact that these elements may also exist in a machine without the causal powers of the human brain is another reason why we must assume that a computer can understand.
First of all, the human brain does quite a few tricks requiring lots of "causal powers," but which have little to do with understanding, such as governing our movements and our endocrine system. More importantly, however, most people, and most animals, for that matter, are probably in a state of understanding most of the time. They do not feel that lack of knowledge about anything stops them from reaching a currently active goal. We may not be used to speaking about this normal state as understanding, for as long as we understand, understanding is not an issue. Still, if anyone asks us while we are busy going about our daily affairs: "Do you understand what you are doing and what is going on around you?" our answer would normally be "yes," whether we are on our way to work, doing our job, or at home taking care of house and family. As long as we understand, activity that has been initiated will go ahead, while lack of understanding will tend to stop behavior aimed at attaining the relevant goal.
It seems, then, that understanding is indeed the normal state of most people. It does not seem a very bold conjecture that as long as nothing unexpected happens, the normal state of rather simple animals (rats, for instance) is also one of understanding. If this is correct, it indicates that understanding, in a sense of the word perfectly in accordance with daily usage, does not presuppose a thinking organ with the "causal powers" of the human brain. Reading, for instance, about Thorndike's cat escaping easily from the puzzle box after several trials (Thorndike, 1911), most people would say the cat now understands how to escape. Searle would not allow them to make that statement. I say it's OK. Today many more accounts of rather complex understanding in animals exist. Not least impressive are accounts of thinking leading to understanding in birds, such as that of spontaneous and quite elaborate problem solving in crows (Weir, Chappell, & Kacelnik, 2002).
Yet one may doubt whether understanding really presupposes the causal powers of the human brain without buying into the thought that rats, cats, or birds understand; at least, that is, if "causal powers" is taken to mean something like "all of what the human brain is capable of." One reason for this is the fact that convincing evidence exists, showing that relatively complex stories told in a sequence of pictures can be understood through nonverbal processing alone (West & Holcomb, 2002; see also Kounios, 2002). This indicates that understanding, again in a sense of the word fully in accordance with daily usage, does not presuppose language. Because the human brain uses much of its cognitive resources on language processing, it again seems reasonable to conclude that understanding does not presuppose all the "causal powers" of the human brain. And if that is indeed the case, it may not be so inconceivable, after all, that a computer could understand.
We have seen several reasons why understanding does not seem to require the "causal powers" of the human brain. The basis of Searle's Chinese room argument is his contention that computers running formal programs cannot have intentionality. I show, however, that given Searle's (1992) characterization of intentionality, it follows that computers, too, may have intentional states.
The present critique of the Chinese room is based on a definition of "understanding" that deviates considerably from the way Searle uses the word. I argue that Searle does not define the term properly, and that the way he uses it reflects only what he thinks "understanding" should mean. As an alternative, I offer an empirical definition of understanding, a definition that is also a theoretical statement leading to several testable hypotheses. I claim that this definition describes the state of people who understand, and go on to discuss three hypotheses derived from the empirical definition that have particular relevance to the question of machine understanding. The hypotheses turn out to have considerable support in the literature from experimental psychology, linguistics, and other fields. This supports my claim that the present definition is a valid description of the state of understanding, further supports the view that computers may indeed be able to understand, and leads to the conclusion that what Searle discusses is not what speakers of English refer to when they say they understand. And even if it were, his arguments would be unsound.
ACH, N. (1905). Über die Willenstätigkeit und das Denken [On the activity of the will and thinking]. Göttingen, Germany: Vandenhoeck & Ruprecht.
ALLWOOD, J. (1986). Some perspectives on understanding in spoken interaction. In M. Furberg, T. Wetterstrom, & C. Aberg (Eds.), Logic and abstraction (pp. 13-59). Gothenburg, Sweden: Acta Universitatis Gothoburgensis.
ANDERSON, C. A., & KELLAM, K. L. (1992). Belief perseverance, biased assimilation, and covariation detection: The effects of hypothetical social theories and new data. Personality and Social Psychology Bulletin, 18, 555-565.
BARBER, B. (1961). Resistance by scientists to scientific discovery. Science, 134, 596-602.
BARGH, J. A., LEE-CHAI, A., BARNDOLLAR, K., GOLLWITZER, P. M., & TRÖTSCHEL, R. (2001). The automated will: Nonconscious activation and pursuit of behavioral goals. Journal of Personality and Social Psychology, 81, 1014-1027.
BARON, J. (2000). Thinking and deciding (3rd ed.). Cambridge, England: Cambridge University Press.
BATSON, C. D. (1975). Rational processing or rationalization? The effect of disconfirming information on a stated religious belief. Journal of Personality and Social Psychology, 32, 176-184.
BELENKY, M. F., CLINCHY, B. M., GOLDBERGER, N. R., & TARULE, J. M. (1986). Women's ways of knowing. New York: Basic Books.
BLANKSTEIN, K. R., FLETT, G. L., & JOHNSTON, M. E. (1992). Depression, problem-solving ability, and problem-solving appraisals. Journal of Clinical Psychology, 48, 749-759.
BODEN, M. (1988). Computer models of mind: Computational approaches in theoretical psychology. Cambridge, England: Cambridge University Press.
BROWN, T. L., GORE, C. L., & CARR, T. H. (2002). Visual attention and word recognition in Stroop color naming: Is word recognition "automatic"? Journal of Experimental Psychology: General, 131, 220-240.
CERVONE, D., KOPP, D. A., SCHAUMANN, L., & SCOTT, W. D. (1994). Mood, self-efficacy, and performance standards: Lower moods induce higher standards for performance. Journal of Personality and Social Psychology, 67, 499-512.
CHEAVENS, J. (2000). Hope and depression: Light through the shadows. In C. R. Snyder (Ed.), Handbook of hope: Theory, measures, and applications (pp. 321-340). San Diego, CA: Academic Press.
CHURCHLAND, P. M., & CHURCHLAND, P. S. (1990, January). Could a machine think? Scientific American, 26-31.
COLE, D. A., MARTIN, J. M., PEEKE, L. A., SEROCZYNSKI, A. D., & FIER, J. (1999). Children's over- and underestimation of academic competence: A longitudinal study of gender differences, depression, and anxiety. Child Development, 70, 459-473.
COMMANDER, N. E., & STANWYCK, D. J. (1997). Illusion of knowing in adult readers: Effects of reading skill and passage length. Contemporary Educational Psychology, 22, 39-52.
CONNELL, J. (2000). Beer on the brain. Paper presented at the 2000 AAAI Spring Symposium on Natural Dialogues with Practical Robotic Devices. Retrieved September 22, 2002, from http://www.research.ibm.com/ecvg/pubs/jhcbeer.pdf
DAMASIO, A. R. (1999). The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt Brace & Company.
DAMASIO, A. (2003). Looking for Spinoza: Joy, sorrow, and the feeling brain. Orlando, FL: Harcourt.
DAWES, R. M. (1995). The nature of human nature: An empirical case for withholding judgment--perhaps indefinitely. Political Psychology, 16, 81-97.
DEVINE, P. G., TAUER, J. M., BARRON, K. E., ELLIOT, A. J., & VANCE, K. (1999). Moving beyond attitude change in the study of dissonance-related processes. In E. Harmon-Jones & J. Mills (Eds.), Cognitive dissonance: Progress on a pivotal theory in social psychology (pp. 297-323). Washington, DC: American Psychological Association.
EBENBACH, D. H., & KELTNER, D. (1998). Power, emotion, and judgmental accuracy in social conflict: Motivating the cognitive miser. Basic and Applied Social Psychology, 20, 7-21.
EMSLEY, J. (2001). Nature's building blocks: An A-Z guide to the elements. Oxford, England: Oxford University Press.
FISKE, S. T., & TAYLOR, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
FITZSIMONS, G. M., & BARGH, J. A. (2003). Thinking of you: Nonconscious pursuit of interpersonal goals associated with relationship partners. Journal of Personality and Social Psychology, 84, 148-164.
FERREIRA, F., BAILEY, K. G. D., & FERRARO, V. (2002). Good-enough representations in language comprehension. Current Directions in Psychological Science, 11, 11-15.
FØLLESDAL, D. (1981). Understanding and rationality. In H. Parret & J. Bouveresse (Eds.), Meaning and understanding (pp. 154-168). Berlin, Germany: Walter de Gruyter.
GELDER, B. DE (1981). I know what you mean, but if only I understood you. In H. Parret & J. Bouveresse (Eds.), Meaning and understanding (pp. 29-43). Berlin, Germany: Walter de Gruyter.
GLENBERG, A. M., & EPSTEIN, W. (1985). Calibration of comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 702-718.
GOHM, C. L. (2003). Mood regulation and emotional intelligence: Individual differences. Journal of Personality and Social Psychology, 84, 594-607.
GOLIN, S., & TERRELL, F. (1977). Motivational and associative aspects of mild depression in skill and chance tasks. Journal of Abnormal Psychology, 86, 389-401.
GREENO, J. G., & RILEY, M. S. (1987). Processes and development of understanding. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 289-313). Hillsdale, NJ: Erlbaum.
GREENWALD, A. G., DRAINE, S. C., & ABRAMS, R. L. (1996). Three cognitive markers of unconscious semantic activation. Science, 273, 1699-1702.
HASTIE, R., & PENNINGTON, N. (2000). Explanation-based decision making. In T. Connolly, H. R. Arkes, & K. Hammond (Eds.), Judgment and decision making: An interdisciplinary reader (2nd ed., pp. 212-228). New York: Cambridge University Press.
HAUSER, L. (1997). Searle's Chinese box: Debunking the Chinese room argument. Minds and Machines, 7, 199-226.
HUME, D. (1969). A treatise of human nature. Harmondsworth, England: Penguin. (Original work published 1739-40).
HUNTER, D. (1998). Understanding and belief. Philosophy and Phenomenological Research, 58, 559-580.
JOHNSON, B., & EAGLY, A. H. (1989). Effects of involvement on persuasion: A meta-analysis. Psychological Bulletin, 106, 290-314.
KANTOR, J. R. (1926). Principles of psychology (Vol. 2). New York: Knopf.
KOUNIOS, J. (2002). A neural mechanism for non-verbal discourse comprehension. Trends in Cognitive Sciences, 6, 272-275.
KUHN, D. (2001). How do people know? Psychological Science, 12, 1-8.
LIEBERMAN, D. A., SUNNUCKS, W. L., & KIRK, J. D. J. (1998). Reinforcement without awareness: I. Voice level. The Quarterly Journal of Experimental Psychology, 51B, 301-316.
LORD, C. G., ROSS, L., & LEPPER, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098-2109.
LUTGENDORF, S. K., ANTONI, M. H., IRONSON, G., KLIMAS, N., FLETCHER, M. A., & SCHNEIDERMAN, N. (1997). Cognitive processing style, mood, and immune function following HIV seropositivity notification. Cognitive Therapy and Research, 21, 157-184.
MARTON, P., CONNOLLY, J., KUTCHER, S., & KORENBLUM, M. (1993). Cognitive social skills and social self-appraisal in depressed adolescents. Journal of the American Academy of Child and Adolescent Psychiatry, 32, 739-744.
MATUTE, H. (1995). Human reactions to uncontrollable outcomes: Further evidence for superstitions rather than helplessness. The Quarterly Journal of Experimental Psychology, 48B, 142-157.
MAYER, R. E. (1990). Problem solving. In M. W. Eysenck (Ed.), The Blackwell dictionary of cognitive psychology (pp. 284-288). Oxford, England: Blackwell.
MIYAKE, N. (1981). The effect of conceptual point of view on understanding. The Quarterly Newsletter of the Laboratory of Comparative Human Cognition, 3, 54-56.
MOSKOWITZ, G. B., GOLLWITZER, P. M., WASEL, W., & SCHAAL, B. (1999). Preconscious control of stereotype activation through chronic egalitarian goals. Journal of Personality and Social Psychology, 77, 167-184.
MURPHY, S. T., & ZAJONC, R. B. (1993). Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures. Journal of Personality and Social Psychology, 64, 723-739.
NEWTON, N. (1996). Foundations of understanding. Amsterdam: John Benjamins.
NICKERSON, R. S. (1985). Understanding understanding. American Journal of Education, 93, 201-239.
NICKERSON, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175-220.
OVERSKEID, G. (2000). The slave of the passions: Experiencing problems and selecting solutions. Review of General Psychology, 4, 211-237.
PARROTT, L. J. (1984). Listening and understanding. The Behavior Analyst, 7, 29-39.
PERKINS, D. N., FARADY, M., & BUSHEY, B. (1991). Everyday reasoning and the roots of intelligence. In J. F. Voss, D. N. Perkins, & J. W. Segal (Eds.), Informal reasoning and education (pp. 83-105). Hillsdale, NJ: Erlbaum.
PRESTON, J., & BISHOP, J. M. (Eds.) (2002). Views into the Chinese room: New essays on Searle and artificial intelligence. Oxford, England: Oxford University Press.
QUINTON, A. (1980). Bacon. Oxford, England: Oxford University Press.
ROZENBLIT, L., & KEIL, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521-526.
SANFORD, A. J., & STURT, P. (2002). Depth of processing in language comprehension: Not noticing the evidence. Trends in Cognitive Sciences, 6, 382-386.
SCHANK, R. C. (1986). Explanation patterns: Understanding mechanically and creatively. Hillsdale, NJ: Erlbaum.
SCRIVEN, M. (1972). The concept of comprehension: From semantics to software. In J. B. Carroll & R. O. Freedle (Eds.), Language comprehension and the acquisition of knowledge (pp. 31-39). Washington, DC: V. H. Winston & Sons.
SEARLE, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-457.
SEARLE, J. R. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.
SEARLE, J. R. (2001). The failures of computationalism: I. Psycoloquy, 12. Retrieved January 17, 2003, from http://psycprints.ecs.soton.ac.uk/archive/00000189/
SEINELÄ, A., HÄMÄLÄINEN, P., KOIVISTO, M., & RUUTIAINEN, J. (2002). Conscious and unconscious uses of memory in multiple sclerosis. Journal of the Neurological Sciences, 198, 79-85.
SKINNER, B. F. (1957). Verbal behavior. Englewood Cliffs, NJ: Prentice-Hall.
SKINNER, B. F. (1974). About behaviorism. New York: Knopf.
SLOMAN, A. (1985). What enables a machine to understand? In A. Joshi (Ed.), Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 995-1001). Los Altos, CA: Morgan Kaufmann.
SPINOZA, B. DE (1994). The ethics. In E. Curley (Ed. & Trans.), A Spinoza reader: The ethics and other works. Princeton, NJ: Princeton University Press. (Original work published 1677).
SUDDENDORF, T. (1998). Simpler for evolution: Secondary representation in apes, children, and ancestors. Behavioral and Brain Sciences, 21, 131.
TENG, N. Y. (2000). A cognitive analysis of the Chinese room argument. Philosophical Psychology, 13, 313-324.
THORNDIKE, E. L. (1911). Animal intelligence. New York: Macmillan.
TISHMAN, S., JAY, E., & PERKINS, D. N. (1993). Teaching thinking dispositions: From transmission to enculturation. Theory into Practice, 32, 147-153.
TURING, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.
TYLER, J. J. (1994). Equine psychotherapy: Worth more than just a horse laugh. Women & Therapy, 15, 139-146.
WASON, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129-140.
WEGNER, D. M., & WHEATLEY, T. (1999). Apparent mental causation: Sources of the experience of will. American Psychologist, 54, 480-492.
WEIR, A. A. S., CHAPPELL, J., & KACELNIK, A. (2002). Shaping of hooks in New Caledonian crows. Science, 297, 981.
WEST, W. C., & HOLCOMB, P. J. (2002). Event-related potentials during discourse-level semantic integration of complex pictures. Cognitive Brain Research, 13, 363-375.
WINKIELMAN, P., BERRIDGE, K. C., & WILBARGER, J. (2005). Unconscious affective reactions to masked happy versus angry faces influence consumption behavior and judgments of value. Personality and Social Psychology Bulletin, 31, 121-135.
WITTGENSTEIN, L. (1922). Tractatus logico-philosophicus. London: Routledge.
University of Oslo
Thanks are due to Dennis J. Delprato, Geir Kirkeboen, Lars Kristiansen, Jan Smedslund, and Karl Halvor Teigen for valuable comments on earlier versions of the manuscript.
Correspondence concerning this article should be addressed to Geir Overskeid, Department of Psychology, University of Oslo, P. O. Box 1094 Blindern, 0317 OSLO, NORWAY. (E-mail: email@example.com).
(1) Here, "knowledge" means "knowing how," as well as "knowing that." In other words, knowledge includes concepts such as ability and skill.
(2) A goal is relevant if the understander assumes that achieving the goal can be facilitated by having knowledge of X. A person may have many goals, but does not pursue all of them at all times. Having a goal that is currently active means that reaching the goal is now one's purpose.
(3) Here, "extrinsic motivation" means all types of motivation that do not result from experiencing lack of knowledge about X as an obstacle to reaching a relevant, currently active goal (cf. the present definition of "understanding").
Publication: The Psychological Record, September 22, 2005.