Where a Blind Man Ends: Five Comments on Context, Artifacts and the Boundaries of the Mind.
The mind as constructed by orthodox cognitive science excludes artifacts. The current paper challenges this view by drawing on Gregory Bateson's radical notion of the mind, according to which the mind is a system whose boundaries are demarcated by context rather than by stable physiological boundaries, thus allowing for the inclusion of artifacts. The cognitive science orthodoxy is challenged through the identification of five problems concerning its treatment of the synergism that can develop between artifacts and human agents. Insights from cybernetics and activity theory are used in commenting on how these problems can be alleviated by adopting Bateson's radical notion of mind.
In one of his most famous pedagogical reflections, Gregory Bateson questioned the boundaries of the mind by asking where a blind man ends:
Suppose I am a blind man, and I use a stick. I go tap, tap, tap. Where do I start? Is my mental system bounded at the handle of the stick? Is it bounded by my skin? Does it start halfway up the stick? Does it start at the tip of the stick? But these are nonsense questions ... The way to delineate the system is to draw the limiting line in such a way that you do not cut any of these pathways in ways which leave things inexplicable. If what you are trying to explain is a given piece of behavior, such as the locomotion of the blind man, then, for this purpose, you will need the street, the stick, the man; the street, the stick and so on, round and round. (Bateson, 1973, p. 434)
More than 30 years after Bateson's ideas were published, the idea that the mind may include artifacts still seems quite bizarre to all those who have been educated under the hegemony of cognitive science. Cognitive science, as an interdisciplinary enterprise, seeks to understand the mind in computational terms, with the computer as its leading metaphor (Anderson, 1993; Newell, 1990; Simon, 1969; Turing, 1950). This enterprise entirely ignores the notion that artifacts may be included in the mind and focuses exclusively on the computational processes that take place in the brain. Cognitive science therefore offers no theoretical framework for discussing the possible broadening of the mind through the use of artifacts. In contrast, for Bateson the mental world -- the mind -- is not limited by the skin (Bateson, 1973, p. 429), and the boundaries of the mind are determined by context rather than by anatomical boundaries. That is, instead of a well-demarcated mind located within the physical skull, Bateson considers the mind a dynamic system whose boundaries are set by the specific context of the activity. Bateson's conception of the mind suggests that within the right context an artifact may be included in the mind. For example, in a case involving the locomotion of the blind man, his stick counts. In a case where his eating behavior is the activity in question, his stick does not count (unless he is eating with chopsticks!).
The idea that the mind does not have fixed boundaries and that it may include artifacts is far more perplexing than it seems, since Bateson does not fully specify the characteristics and constraints of a system in which artifacts and human agents become members of a unified thinking system. This issue is particularly pertinent to those interested in theoretical aspects of thinking environments that incorporate artifacts (e.g., information systems technologies) and human agents. In this paper we adopt and elaborate Bateson's radical notion of the mind as a thinking system that may include artifacts.(a) More specifically, we aim to comment on the synergism of artifacts and human agents in light of general principles of cybernetics (Bateson, 1973; Maturana and Varela, 1972; von Foerster, 1974) and activity theory (Kozulin, 1986; Luria and Vygotsky, 1992; Vygotsky, 1978). The comments we offer are deliberately fragmented; each opens with the specific problem it addresses.
Problem 1: The synergism of artifacts and human agents is usually explained by reducing it to the level of its components -- the artifact or the human agent.
Comment 1: The status of an artifact as a component of the mind is determined by the context, which is a spatial-symbolic phenomenon
The most apparent issue pertaining to artifacts and the mind is that the location of an artifact within a thinking system is determined by its spatial-symbolic position relative to the other components(b) of the system, rather than by any `essential' properties of the artifact or the human agent itself. By describing context as a spatial-symbolic phenomenon, we emphasize that it is a phenomenon to be analyzed through the abstract (symbolic) organization (the spatial metaphor) of its components. The specific term `spatial-symbolic position' refers to the coordinates occupied by a given artifact within a dynamic system of human practices, including the community in which the activity takes place and the division of labour between human agents and artifacts (Kuutti, 1996).
The fallacy of regarding the artifact in isolation, without considering it as a component of a wider dynamic system, should not surprise us: it represents a basic fallacy in the understanding of systems in general (Beer, 1974). For example, Problem 1 is often neglected by the popular psychology of information technologies, and by cognitive theories of human-computer interaction, which describe computers as enlarging the mind through their built-in computational power (Nardi, 1996), without acknowledging that what actually matters is the spatial-symbolic position of the artifact -- the computer -- within a dynamic system of thinking agents, social constructs, etc.
The first conclusion that may be drawn from this comment is that every artifact can potentially be incorporated into a thinking system, as long as it occupies a spatial-symbolic position that renders its macro-system a thinking system -- that is, a system that generates meaning/knowledge (Bateson, 1973).
The second conclusion is that when considering the potential of artifacts to enlarge the mind, our inquiry should be directed toward the geography of thinking (Deleuze and Guattari, 1994) rather than toward a cognitive analysis of the agent or a technological analysis of the artifact as two independent entities. To date, such a geographic language of analysis is lacking in our cognitive vocabulary, and its development is a key component in understanding unified systems of thinking.
Problem 2: In a case where a human agent and an artifact are components of a thinking system, it is not clear whether these components hold the same status. Are they both essential components of the system?
Comment 2: The thinking system's spatial-symbolic organization is characterized by unequal relations between human agents and artifacts
A human agent is phenomenologically prior to an artifact, in the sense that he or she is always an essential component of the mind whereas an artifact is not. An essential component of a system is defined as `one without which the system cannot perform its defining function' (Ackoff and Gharajedaghi, 1996, p. 13). This notion points to the unique non-redundant status of the human agent in its synergism with artifacts. While notions of distributed cognition (Hutchins, 1995) emphasize the similarity between human agents and artifacts, it should not be forgotten that the agents hold a unique phenomenological status within the thinking system (Nardi, 1998).
The most evident conclusion that can be drawn from the above argument is that an artifact should be defined as a component of a thinking system not by virtue of its essentiality to a unified thinking system, but due to its potential ability to mediate basic bio-cognitive functions of the human agent (Luria and Vygotsky, 1992; Vygotsky, 1978). Thus, an artifact can be defined as a non-essential component of a thinking system that supports the bio-cognitive functioning of the system by mediating on its behalf and creating new forms of meaning.
Problem 3: Bateson emphasized the role of context in determining the boundaries of the mind. However, the role of the context in determining whether an artifact is included in the mind is not clear.
Comment 3: Context never determines whether an artifact is included in the mind, but invites such incorporation.
Context, instead of determining the boundaries of the mind in a positive sense, is always a subsystem that is cognized and therefore constituted by an observer (Maturana and Varela, 1972; von Foerster, 1974). Thus, despite the spatial-symbolic organization of the components with regard to a given function, context alone cannot determine whether an artifact will be unified with a human agent into a thinking system. It can only set forth an array that will potentially be cognized as a thinking system.
Why is it so important to discuss the notion of an observer within the context of the boundaries of the mind? The most evident answer is that the notion of an observer shifts the burden of establishing the boundaries of a thinking system from nature to the human agent. In other words, the notion of an observer-dependent system is used in order to reject the rigid realist position of cognitive science that the mind exists independently of human activity. The mind is always established through the spectacles of human activity, and can therefore potentially be enlarged as long as, from an observer's point of view, it changes the system's basic activity of generating meaning.
Take for example the principal of a school for the blind who is uncertain whether to purchase a new Braille machine. This machine includes a Braille keyboard and a Braille touch-screen that is designed to support the pupils' writing activity. The principal is not interested in technology, but has some vague ambitions about improving the cognitive functioning of his pupils. The principal consults with the child psychologist, who knows nothing about technology, but tells him that from the cognitive science perspective this device does not seem to significantly influence the pupils' representation of language.
Neither the device, nor the blind pupils, nor the location of both within a larger context (which does not pre-exist the observer) can explain their potential synergism. What matters is that, from an observer's point of view, a new synergism has evolved that may dramatically change the pupils' writing activity, for example by creating interactive writing communities of blind pupils.
Problem 4: Following Russell's theory of logical types, Bateson points out the common errors accompanying the shift between different logical levels of analysis. The synergism of human agents and artifacts involves a logical shift that may be accompanied by errors in logical typing.
Comment 4: The embodiment of artifacts and human agents in a unified thinking system should involve the use of language that accounts for the synergism of artifact and agent in order to avoid logical type errors.
If an artifact is incorporated into the mind of a human agent then the resultant synergetic system could constitute a new class -- as, for example, a description of people's ability to manipulate mathematical symbols is qualitatively different from a description of the mediating process of an abacus used in doing so. In order to avoid logical type errors the language(s) used in the two descriptions should allow its (their) users to sufficiently differentiate between the qualitative differences of the two descriptions.
The embodiment of artifacts and human agents in a unified thinking system assumes the existence of an observer who has the potential of using a language that corresponds to, as well as constitutes, the shift between different logical levels of the mind. Without a proper language to describe the unique synergism between artifact and human we would find it extremely difficult as observers to properly cognize and create thinking systems that incorporate artifacts and human agents.
As suggested elsewhere (Bateson, 1991), our fascination with nouns and entities should be replaced by a fascination with processes that do not depend on a preconceived set of components. The shift to such a language has to be accompanied by a radical transformation in our epistemology (Bateson, 1991). We might look for its web of metaphors in human practices that are intrinsically embedded in the study of processes and patterns, such as the art of music.
Problem 5: Determining the boundaries of the mind is a process that involves self-reference. Without a proper language for dealing with self-reference, we may fall into a vicious circularity.
Comment 5: In order to determine the boundaries of the mind one must surpass them. This assumes ever-increasing contexts and an observer who is able to shift between languages.
The embodiment of human agents and artifacts within a unified thinking system may expand the boundaries of the mind. However, in order to broaden the mind, one must go beyond it, an enterprise that seems ostensibly paradoxical. This problem is highly similar to what may be described as the Baron von Munchhausen phenomenon (Y. Neuman, unpublished), or the problem of self-reference. Baron von Munchhausen, a legendary figure, is said to have hoisted himself out of a swamp by his own efforts (Humphries, 1971). The Munchhausen story illustrates the antilogy of a reflective act, an antilogy that the Western tradition has failed to address for years,(c) specifically within the cognitive science doctrine.
Cognitive science provides several important suggestions for overcoming the Baron von Munchhausen phenomenon (Newell, 1990). However, all of these suggestions ignore the uniqueness of mediated human thinking (Y. Neuman, unpublished) and the use of artifacts as a quintessential aspect of the mind (Luria and Vygotsky, 1992). We therefore suggest that surpassing the boundaries of the mind will require the development of new languages (i.e. replacements of, or extensions to, the old ones) that will allow human agents to cope with ever-increasing contexts, e.g. by shifting between logical levels without committing errors.
The question of where a person's mind ends should be of interest to all those who believe that technology is not a technical matter per se. Inquiring into this notion requires a rejection of orthodox cognitive science, which does not consider artifacts as potentially enlarging one's mind, and a shift to radical cybernetic notions. This intellectual journey ought to be accompanied by the construction of a new language (or languages) that accounts for the synergism between human agents and artifacts without confusing different logical levels of analysis. The development of this new kind of language is therefore a matter of prime importance.
The authors would like to thank their anonymous reviewer, Sonia Baumer, and Bonnie Nardi for their helpful comments.
(a) In fact, Bateson was not the first to suggest that artifacts can be included in the mind. Activity theory psychologists such as Luria and Vygotsky were the first to suggest that human thinking can be mediated through artifacts.
(b) We differentiate between the term `component' and the term `artifact'. The term `component' is used in order to describe a part of a system without specifying whether the part is a person or a material object. In contrast, the term `artifact' is used exclusively to refer to a human-made material tool.
(c) The problem of shifting between different levels of logical analysis has been extensively discussed by Russell (Copi, 1971), who inspired Bateson. Our comment, which is in line with Russell and Bateson, suggests that careful attention should be given to the shift between different logical levels of analysis when discussing the synergism of artifacts and human agents.
Ackoff, R. L., and Gharajedaghi, J. (1996). Reflections on systems and their models. Systems Research 13, 13-23.
Anderson, J. (1993). Rules of the Mind, Erlbaum, Hillsdale NJ.
Bateson, G. (1973). Steps to an Ecology of Mind, Granada, London.
Bateson, M. C. (1991). Our Own Metaphor, Smithsonian Institution Press, Washington, DC.
Beer, S. (1974). Designing Freedom, Wiley, New York.
Copi, I. (1971). The Theory of Logical Types, Routledge & Kegan Paul, London.
Deleuze, G., and Guattari, F. (1994). Geophilosophy. In Tomlinson, H., and Burchell, G. (trans.), What is Philosophy?, Columbia University Press, New York.
Humphries, S. (1971). Baron Munchausen and Other Comic Tales from Germany, Dent, London.
Hutchins, E. (1995). Cognition in the Wild, MIT Press, Cambridge, MA.
Kozulin, A. (1986). The concept of activity in soviet psychology. American Psychologist 41, 264-274.
Kuutti, K. (1996). Activity theory as a potential framework for human-computer interaction research. In Nardi, B. A. (ed.), Context and Consciousness: Activity Theory and Human-Computer Interaction, MIT Press, Cambridge, MA.
Luria, A. R., and Vygotsky, L. S. (1992). Ape, Primitive Man, and Child: Essays in the History of Behavior, Deutsch Press, Orlando, FL.
Maturana, H. R., and Varela, F. J. (1972). Autopoiesis and Cognition: The Realization of the Living, Reidel, London.
Nardi, B. A. (ed.) (1996). Context and Consciousness: Activity Theory and Human-Computer Interaction, MIT Press, Cambridge, MA.
Nardi, B. A. (1998). Concepts of cognition and consciousness: Four Voices. Journal of Computer Documentation 22, 31-48.
Newell, A. (1990). Unified Theories of Cognition, Harvard University Press, Cambridge, MA.
Simon, H. A. (1969). The Sciences of the Artificial, MIT Press, Cambridge, MA.
Turing, A. M. (1950). Computing machinery and intelligence. Mind 59, 433-460.
Von Foerster, H. (1974). The Cybernetics of Cybernetics, University of Illinois, Urbana, IL.
Vygotsky, L. S. (1978). Mind in Society, Harvard University Press, Cambridge, MA.
Yair Neuman(1)(*) and Zvi Bekerman(2)
(1) Department of Education, Ben-Gurion University of the Negev, Beer-Sheva, Israel
(2) Melton Center, School of Education, Hebrew University, Jerusalem, Israel
(*) Correspondence to: Dr. Yair Neuman, Department of Education, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel. e-mail: email@example.com
Authors: Neuman, Yair; Bekerman, Zvi
Publication: Systems Research and Behavioral Science
Date: May 1, 2000