Mind is Artificial

In B. Dahlbom (ed.), Dennett and His Critics: Demystifying Mind. Oxford: Blackwell, 1993.

”We come to the full possession of our power of drawing inferences the last of all our faculties, for it is not so much a natural gift as a long and difficult art.” C. S. Peirce

When Daniel Dennett’s Content & Consciousness was published in 1969, two other books of great importance appeared. Herbert Simon’s The Sciences of the Artificial was a forceful plea for a radical reorientation of the sciences towards ”a science of design” based on an appreciation of the fact that ”the world we live in today is much more a man-made, or artificial, world than it is a natural world” and the fact that man himself, or the human mind, is the ”most interesting of all artificial systems.” W. V. Quine’s Ontological Relativity and Other Essays expressed a more conservative appreciation for the natural sciences, wanting to ”naturalize epistemology,” defining it ”as a chapter of psychology and hence of natural science” studying ”a natural phenomenon, viz., a physical human subject.”

Quine’s ideas about a naturalized philosophy were perceived as radical by the philosophical community and created quite a commotion. Simon’s ideas about cognitive science, as a science of the artificial with a sociological aspect, were much too radical to be perceived at all. The cognitive science coming out of the creative atmosphere of the 60s was and still remains a natural science. As one of the more forceful proponents of naturalism, Daniel Dennett is original in that he has, over the years, shown a growing appreciation of the artificial. Indeed, the way I like to think of his philosophy is as situated between Quine and Simon, rather closer to Quine but very much aware of the attraction of Simon. In this paper I shall try various strategies to push him further in that direction, and then return at the very end to a discussion of some of Simon’s ideas for an artificial science. What I will do, in effect, is outline an artificial, or social, alternative to naturalism, and what would be more natural than to call it ”socialism”? That word seems to have no better use at the moment.

1 From Naturalism to Socialism

Dennett is doing ”naturalized philosophy” and naturalism is in many ways typically American. Europeans tend to be uneasy with it, with ”tough-minded philosophy,” as James once put it. When in the late 19th century philosophers were breaking away from transcendental idealism, Americans turned to naturalism, while Europeans returned to the safety of a Lockean ”under-labourer” position, trying out different varieties of conceptual analysis. Dennett has rather little patience with this type of meticulous, pedestrian philosophy, with what he now calls ”slogan-honing.”

Naturalism in philosophy may mean many things, depending on what other kind of ism—idealism, supernaturalism, platonism, phenomenalism, dualism—one is attacking. When Quine was naturalizing philosophy in the early 60s, he was arguing explicitly against the idea of a ”first philosophy.” But very few readers of Quine at that time had much faith in phenomenalism or metaphysics. More important was the fact that Quine’s definition of epistemology as ”a chapter of psychology” provided an alternative both to epistemology as philosophy of science (Vienna) and as linguistic analysis (Oxford), the ruling styles at the time. Thus Quine changed the subject matter of epistemology from scientific products and ordinary language to stimuli, nervous systems and behavior. But he also changed the very idea of epistemology from being a normative to becoming a descriptive project. The norms of a scientific methodology or the norms of ordinary language no longer had any direct bearing on epistemology. Without much ado Quine did away with an element generally held to be essential to epistemology—even by fellow Americans such as Wilfrid Sellars, who were otherwise almost as tough as Quine.

When you stress the normative element of epistemology, as Sellars did, the idea of a naturalized epistemology will seem less attractive. Norms are social and consequently demand an epistemology ”socialized” rather than naturalized. Quine (1960) stressed that ”language is a social art” but only used this fact to defend his behaviorism. His naturalism left epistemology wide open for a more sociological approach. And we have seen a post-Kuhnian philosophy of science develop in direct opposition to the idea of a naturalized epistemology, stressing instead sophistic themes such as: reality is socially constructed; truth is not correspondence; science is continuous with poetry and politics, and like them a search without progress for the best-selling metaphor.

Most American philosophers went with Quine rather than with Sellars, and reading ”chapter of psychology” to mean ”promise of progress,” they did not worry about knowledge as a social phenomenon. Turning to Darwin to define knowledge as an organ, wielded in the struggle for survival, they cut themselves off from the ”epistemological” discussion going on in post-Kuhnian philosophy of science.

Now, Dennett has a remedy for this neglect, one that he unfortunately has not been particularly interested in using. Distinguishing, in ”How to Change Your Mind” (1978), between ”opinions” and ”beliefs,” between, roughly, ”sentences held true” and ”dispositions to behave,” he has the distinction he needs to engage in a discussion of knowledge as a social phenomenon. But a naturalized philosophy does not have much to say about opinions. They seem to be peculiarly human, less functional, less biological, than beliefs. And beliefs are certainly important. With wrong ones we die. But then, on the other hand, for most of us, thanks to evolution, the right ones come rather naturally. Most of our dispositions to behave are generally shared, tacit natural habits, not much to fuss about. Opinions, in contrast, are the medium of disagreement, argument, and explicit conflict. Opinions make a difference. Sentences held true are what human beings live and die for. They are what religion, politics and science are made of, so how can you do epistemology without them?
Dennett can, of course, be excused for not seriously including opinions as objects of study in his philosophy of mind: doing so would force him out of his natural habitat, the human organism. The human organism was not designed for language, he will say. The underlying motive here seems to be rooted in an unfortunate divide between contemporary psychology and sociology. In psychology people are organisms with beliefs, while in contemporary sociology they are persons with opinions. It is not easy to cross that divide.

The crucial difference between these two camps seems to be the question of scientific progress. Dennett sees cognitive science as bringing progress to our understanding of the mind, progress as the accumulation of knowledge, while sociology sees all attempts at understanding the mind as part of our attempts at self-understanding and edification, relative to and part of our culture, and changing with the Zeitgeist.
I would like to see bridges being built between these two camps. I would like to see a socialized philosophy of mind being developed as a complement to the naturalized philosophy that philosophers like Dennett have turned into the most exciting field within contemporary philosophy. A socialized philosophy of mind studies a social human subject in an artificial environment. In such a philosophy, mind, consciousness, and the mental processes are first and foremost social phenomena and to be studied as such.

I will try to make this view seem plausible by arguing, in Section Two, that the theories of mind and consciousness typically put forth in psychology and philosophy are much more revealing about the society of their protagonists than about the human organism, and that this should make one question demarcations between mind and society and think of the mind as part of society. After thus having made a more general case for the social nature of mind, I will go on to show what a socialized theory of mind might look like. I will do this by testing, in Section Three, the specific idea that thinking can be viewed as a craft, as a kind of tool use relying on culturally supplied cognitive or intellectual artifacts, and by discussing, in Section Four, how, like manual crafts, thinking can be automated, resulting in automatic intelligence (AI). In Section Five, I will present Simon’s conception of cognitive science as artificial and use that presentation to discuss the difference between organisms and artifacts, between a natural and an artificial approach, between a functional stance and a design stance. In Section Six, finally, I will try to show how an artificial approach implies adding an ”artificial stance” to Dennett’s three stances.

2 Society in Mind or Mind in Society?

Dennett is doing naturalized philosophy, but he is not immune to the changes of fashion. And Consciousness Explained ends with the declaration that our attempts to understand consciousness are a search for illuminating metaphors:

I haven’t replaced a metaphorical theory, the Cartesian Theater, with a nonmetaphorical (”literal, scientific”) theory. All I have done, really, is to replace one family of metaphors and images with another, trading in the Theater, the Witness, the Central Meaner, the Figment, for Software, Virtual Machines, Multiple Drafts, a Pandemonium of Homunculi. It’s just a war of metaphors, you say—but metaphors are not ”just” metaphors; metaphors are the tools of thought. No one can think about consciousness without them, so it is important to equip yourself with the best set of tools available. Look what we have built with our tools. Could you have imagined it without them? (Consciousness Explained, p. 455)

Quine would never have said that. A naturalized philosophy seeks illumination in the experimental results and theorizing of a biologically oriented psychology, that is, in science. Such a philosophy can appreciate the role of metaphors in science, and Quine’s certainly does, but the substitution of one set for another is made on the basis of empirical evidence. What empirical evidence motivates the substitution of Multiple Drafts for the Cartesian Theater? Is the substitution Dennett is advocating in Consciousness Explained simply a change in fashion?

”Multiple Drafts”—what a wonderful way to summarize the idea of anarchistic liberalism, of a free market! And who would not prefer such a model, in society and in mind, to the ”Final Solution”—with all its chilling connotations of a planned, bureaucratic economy? Luring us away from the metaphors of the Cartesian Theater to the metaphors of the Multiple Drafts model, Dennett is obviously in step with a more general ideological shift taking place in the 80s, in management thinking as well as, to some extent, in management practice. Dennett is inviting us to apply ”postfordism” to consciousness, to give up thinking of mind as a centralized, bureaucratic organization of Ford production lines, and begin thinking of it, rather, as a decentralized, flexible, organic organization.

This change of metaphors is radical, but it is by no means the first such change since Descartes once drew the outlines of his model of the mind. A comparable change of mental metaphors took place in psychology in the early 1960s. This change was a battle in an ongoing tug-of-war between empiricism and rationalism, between Hume and Descartes. For several decades, behavioristic psychology had managed to stick to its very inductive view of the mind—learning by conditioning—in spite of the success of the so-called hypothetico-deductive method of learning in the empirical sciences. But in the 60s, people like Noam Chomsky and Ulric Neisser were to annul this discrepancy, bringing psychology back in step with the Cartesian Zeitgeist.

The change in metaphors for the mind taking place in the 60s was motivated, we must say with hindsight, by rather flimsy empirical evidence. Neisser (1967) built his synoptic view of cognition on little more than Sperling’s beautiful but rather contrived tachistoscope experiments. And Chomsky had no empirical evidence to motivate his new paradigm for psycholinguistics. Cognition as the reasoned testing of hypotheses, as analysis-by-synthesis, really needed no empirical evidence. People were ready for it, and just waiting for geniuses like Chomsky or Neisser to develop a comprehensive version of the ”new” model of the mind.

Analytic philosophers like Quine, Putnam and Goodman, who had been brought up on Hume and inductivism, were not so easily seduced by the new set of deductive metaphors, and Dennett tended to side with them from the beginning. Now, when the times are changing again, and empiricism is back in favor, Dennett is ready with his fundamentally inductive Multiple Drafts model. But is this just another change in fashion? As Dennett admits, his alternative is ”initially deeply counterintuitive, but it grows on you.” Does it grow on us because his Multiple Drafts model fits the current ideology of decentralization, flexibility and postfordism better than the Cartesian theory does? Or is there empirical evidence, less marginal than the Sperling experiments, to motivate a change of metaphors?

Consciousness Explained has rather little to offer in the line of evidence. And the recent development of computer technology from mainframes to networks, from von Neumann machines to parallel architectures, and from classical AI to the new connectionism, together with a vogue for society theories of mind, a general trend from New Deal thinking to laissez faire and from functionalism to postmodernism, and the collapse of the ”planned” economies in Eastern Europe, seems to provide a much more convincing explanation of why a ”new” model of the mind ”will grow on us.”

Dennett is in the best of company, of course, in proposing a theory of the mind matching current ideas about social organization. Plato’s theory of mind is a mirror of the social classes of Athens, and Freud gives us a striking picture of his society: an enlightened, realistic bourgeoisie trying to negotiate between an aggressive, craving proletariat and a stubborn, conservative feudal class. The current fashion in mind metaphors is heavily influenced by the 20th century interest in industrial production. A business company is divided into management, production, storage, purchase and delivery. This is the architecture of the von Neumann machine, but it is also the mind of much contemporary cognitive science. Recently both computer technology and cognitive science have come under the influence (or is it the other way around?) of the very fashionable philosophy of production called ”just-in-time.” Adequate communication with suppliers makes expensive long-term storage unnecessary: ”It is all in the connections!”

These metaphors for the mind are tools for thought. We cannot think about the mind without them. But are they also right or wrong, that is, do they belong in a naturalized philosophy? Is Dennett’s Multiple Drafts model a myth making us all feel more at ease with the ”mystery of consciousness,” or is it really a theory, generating hypotheses and explaining facts in the sciences of the mind? When we notice the close parallels between, on the one hand, our attempts to understand and organize our societies and, on the other, the theories of mind, we may well begin to wonder if through these changes of fashion we have really learned anything about the mind.

Anthropologists have made much of parallels like these between social phenomena and theories of nature. Maybe we’re only trying to understand our own society when we think we are uncovering deep truths about the nature of nature? Maybe Dennett’s Multiple Drafts model belongs in sociology rather than in the natural sciences? Well, why not both at the same time? When the metaphors prove valuable they gain strength back home, as when liberal ideas about society, after a tour in biology, return as social Darwinism. But when mind is our object of inquiry, it is not clear that we ever leave the social realm. We have learned to distinguish between nature and society, between laws of nature and social customs. That our brains are natural objects we don’t doubt, but how about our minds? Dennett has been eagerly contributing to the solution of the mind–body problem, but there is very little in his writings about the mind–society problem.

We begin by thinking of a theory like the Multiple Drafts model as a natural science theory of mental processes, describing brain mechanisms in the language of psychology. We then notice that this theory, like other such theories, is very much the expression of currently fashionable ways of thinking about society. Society is used as a metaphor to describe the mind. We begin to despair: is this kind of theorizing only a fashionable play with metaphors without empirical substance? We then see that if such theories do not teach us anything much about our minds, they at least teach us things about our society. Suddenly we remember that we, and our minds, are very much part of that society, and we begin to view these theories as culturally relative expressions of our attempts at self-understanding rather than as natural science theories. If mind is a social phenomenon rather than a brain process, then the use of social concepts in a theory of mind may not be metaphorical after all.

Psychoanalysis is both a theory of people in fin de siècle Europe and a theory of that Europe. It was interesting because of what it said about that society and its minds, about repression, censorship and class struggle, that people preferred not to see. Theories like Minsky’s (1985) society of mind or Dennett’s Multiple Drafts model are more obvious expressions of contemporary American self-understanding: free competition in a free market. To the extent that they are true of that society, it is likely that they are true of the minds in that society. To the extent that they teach us things about our times, they tell us things about our minds. If mind is in society, there is no difference here.

If our minds are social rather than natural, then mental life is ruled by social customs rather than by natural laws, and it won’t do to identify psychology with brain science. We can still try to distinguish our minds from society by carving out a niche within society for our minds, arguing that our mental life is internal and private, while our social life is external and public. But if, like Dennett, we doubt that there is any deep distinction to be made between the subjective and the objective dimensions of life, we should perhaps take more seriously the relations between mind and society. Rather than developing theories with an isolated subject in focus, we should develop theories which embed the subject in her (social) environment, stressing the way in which mental processes depend on and take place in that environment.
To say that mind is in society, that thinking is a social phenomenon, is to express a view very different from the view of thinking which dominates contemporary naturalistic cognitive science. It is to claim that thinking is regulated by social norms, that much thinking is better understood as a socially organized process involving several individuals (and of course their brains) with an organization as ”the thinking thing.” It is to claim that the process of thinking relies on tools and materials supplied by culture, some of which are internalized but a great deal of which are external, provided by the actual environment. It is to claim that mind is a social artifact, a collection of tools, rather than an organ in a body. It is to claim that the symbols, rules, categories, and objects which human beings use in their thinking belong to their social, artifactual environment, and that therefore questions such as whether human beings think in words or in images are ill-formed: human thinking itself is an artifact, done in whatever medium is found suitable.

3 Thinking with Tools

Thinking is a process. A theory of the mind must say something about that process. In Consciousness Explained, Dennett concentrates on the structure of mind, and has very little to say about the nature of mental processes. I have tried to indicate how the architecture of mind outlined in the Multiple Drafts model corresponds to the current fashion in organization theory. When you reorganize a business company you rely on the employees to go on producing within the new structure. The vital elements in the process are not supplied by the structure. Similarly, Dennett’s new architecture of the mind tells us very little about the processes of mind. And in this case we have no employees to rely on. The Central Meaner in the Cartesian Theater has been fired and even if the Actors are still around somewhere, it is no longer clear what they are doing. Producing drafts, presumably, but what does that amount to?

What little Dennett has to say about the processes of mind is couched in the metaphorical language of speaking, writing, and communicating—perhaps the most popular among the many metaphors applied to thinking. In current cognitive psychology these ”office” metaphors are only equalled in popularity by the industrial metaphor ”processing.” In modern psychology, the latter metaphor has replaced the craftsman as craftsmanship in society has given way to industrial production. Mind transforms information as an industry processes its material. The metaphor of information processing brings Henry Ford and his production line to mind and invites the use of flow charts to map the mental processes.

A craftsman uses tools. Industrialization replaced both the craftsman and his tools with a machine. As we move into an information society, losing interest in industrialization, the tool metaphor has been revived. It has been used with great commercial success as a metaphor for computer software. This talk has stressed the fact that what you can do depends on the tools you have. I think that the craftsman metaphor, especially if you stress this aspect, can increase our understanding of the mind. Thinking of thinking as the use of tools will give us a richer view of mental processes the more mental tools we come up with.

Dennett wants, of course, a depersonalized architecture, but the craftsman metaphor is only used here to direct our attention to the variety of tools used in thinking. Once we have a rich enough view of that process, we can automate the tools and get rid of the craftsman. So let us consider the following thesis: Mind is a social object rather than a natural one; it is not a compound of organs but a collection of tools.

When I want to use the craftsman with his tools as a metaphor for the mind—and by tools I mean logic, language, books, logarithm tables, slide-rules, Italian book-keeping, statistical methods, maps, pencils, blackboards, databases—then I don’t worry about distinguishing internal tools from external ones. Both mind and society need matter to run on. Society is implemented on buildings, highways, air force bases, law schools, … and brains. Mind needs a brain to run on, but not just a brain. Some of the matter in which it is realized is inside the skull, some is outside.

Inventors of tools have sometimes been imitating the organs of other species. And mastering the use of a tool means incorporating it as an organ, as an extension of your body, as was well noted by Michael Polanyi (1958). But the tool remains an artifact. Cognitive artifacts are no different from other tools in this respect. To learn to master such a tool will of course involve an organic change. If that change goes deeply enough it may make sense to view thinking as a brain process, but if not it will be more fruitful to distinguish the artifact in that process, viewing thinking as a craft.

Artifacts are built upon artifacts: we use our intellectual tools to construct more advanced such tools, as Spinoza put it, and as with most manual tools, very little can be learned about the artifact by studying the organism and vice versa. Even when it comes to such a fundamental cognitive artifact as language it is difficult to tell how much of this artifact is organically incorporated, that is, automated with the brain as mechanism, and how much is accessed only from the outside, as it were, being used as a tool. Dennett expresses this well when he comments on the ”virtual architecture” of language:

So there is nothing ad hoc or unmotivated about the acknowledgment that some areas of human cognition require a higher-level ”symbolic” virtual architecture; after all, language, arithmetic, logic, writing, map-making—these are all brilliant inventions that dramatically multiply our capacities for cognition, and it should come as no surprise if they invoke design principles unanticipated in the cognitive systems of other animals. They are, in the laudatory sense, cognitive wheels.

Thus when Millikan (this volume) argues that the language of thought, for reasons of organic realization, cannot be very language-like, she is not really saying anything about the medium in which we think. We think in all kinds of artificial media, of course. We should not take her advice (at the end of section VII) and ”abandon the traditional image of the computing of mental sentences as like a formal system unfolding on paper,” since it is on paper that much of our mental inference unfolds. I am not denying that our brains play a vital role in thinking. If thinking is tool use, then the role of the brain can best be studied by concentrating on the very complex interface between brain and cognitive artifacts. To Millikan, the question whether our brains can use language is not about this interface, but about the brain itself, as if she were asking whether our hands could be screw-drivers.

Millikan is interested in the biology of thinking, and she has (Millikan 1984) expounded a powerful conception of cognitive science as biology. Normally, Dennett is much closer to Millikan than the positive characterization of cognitive wheels above might indicate. He is wont to stress not only the secondary, dependent nature of opinions, but their secondary importance to cognitive science as well:

Opinions play a large, perhaps even decisive, role in our concept of a person, but….If one starts, as one should, with the cognitive states and events occurring in non-human animals, and uses these as the foundation on which to build theories of human cognition, the language-infected states are more readily seen to be derived, less directly implicated in the explanation of behavior…

But certainly our ”language-infected states” are seriously implicated in the explanation of verbal behavior, and verbal behavior is at center stage in human cognitive affairs, in science, politics, and art. Like Quine, Dennett is caught halfway between language and biology. Wanting to use biology as his foundation, he cannot give language its proper place as a cognitive artifact, yet he cannot abandon language completely in view of its obvious importance in human cognitive affairs. He remains uncomfortable with much of the work in the philosophy of language, because he has no real place for language in his theory of the mind. Sometimes, however, as in ”Two Contrasts: Folk Craft versus Folk Science and Belief versus Opinion” (1991), he comes close to seeing the possibility of working out a compromise with a believer in the language of thought such as Jerry Fodor, according to which he and Fodor could both be right, one about natural beliefs, the other about artificial opinions.

When we begin to examine the analogy that thinking is a craft involving the use of intellectual tools, questions jump at us from all quarters. Is it possible to think without tools, with your bare brain so to speak? If you need the tools, the techniques to use them, and judgment to use them well, then how is intelligence distributed between tools, techniques, and judgment? Are there any natural intellectual tools or are all such tools, as I have presupposed above, artifacts? But if so, is not what we generally call ”natural intelligence” really ”artificial”? Or are the techniques we bring to such tools somehow natural? In order to answer these questions we need a much more careful and concrete examination of the relations between the tools, techniques and judgment of thinking than I have been able to give. Here I want to consider briefly another aspect of tool use: the division of labor.

A craft is often practiced by a team, and no one expects the lowly apprentice to really understand what she is doing. She is one of those ”dumb homunculi” Dennett speaks of, the kind we know how to replace with a mechanism, like the human computers who were replaced by computing machines. As we move up the hierarchy of a medieval craft system, the members’ understanding of what the craft is all about will increase. The person at the top of the hierarchy has a deeper understanding of what goes on and why, but is probably less efficient at the practical task. She may very well use clumsy techniques and tools with the deepest understanding. It is the same with thinking, if it is a craft: it isn’t an all or nothing affair.

When we think of thinking as a craft, we will stress its productive character. And we will observe that in our modern society most crafts have undergone a process of industrialization. In that process, the idea is to divide and ”dequalify” labor by automating techniques and letting management take care of judgment and understanding. Thinking is certainly no exception. What used to be an individual craft using fairly simple tools has become a complex production process performed by organizations relying on advanced rule systems, planning, division of labor and high technology. The more bureaucratic the organization, the more divided the labor, and the more fragmented the understanding will be. Even such anarchistic, conservative craftsmen as philosophers are becoming organized. Their thinking is like traffic: individuals drive the cars, but together they make up a complex system, an organization with many more elements than just people and automobiles (philosophers and their thoughts).

If we think of thinking as a craft, it is a complex process that does not take place in the person or her brain, but in the system which includes the cognitive artifacts used, wherever they may be. So, even if the brain is a machine, it is not a thinking machine, except metonymically, as when we say after a long day of sewing: ”I am really a sewing machine!” Our tendency to think of thinking as a process in the brain tends to hide from our view its dependence on artifacts. As a brain process, thinking is a natural process, and this makes it difficult for us to see that thinking today is about as artificial as anything else—communication, production, consumption—in our modern artificial world. Just as our society will grind to a halt when our artifacts break down, so thinking would be reduced to next to nothing were we to suffer a breakdown of our intellectual artifacts. What could we think, how could we reason, if we did not have words, books, diagrams, figures, concrete examples, algebra, logic, lisp or legal systems?

Thinking of thinking as a brain process makes us think of human intelligence as natural (in spite of the fact that most so-called intelligence tests measure artificial capacities like vocabulary and numerical ability). This has made the discussion of research on artificial intelligence more confusing than it need be. Once it is seen how artificial human thinking is, to what extent it relies on cultural artifacts, the artificial intelligence project to build thinking machines is seen as a rather mundane attempt to automate artifacts, in this case intellectual tools rather than manual ones, but so what? Let us examine the tool metaphor a little more by taking a closer look at that project understood in this way.

4 Automatic Intelligence

The use of tools can be improved by better tools or by better techniques. Thus our history is one of developing tools, of making them more specialized and easier to handle, and of developing educational systems ensuring effective technical training, as well as modern attempts like that of F. W. Taylor (1911) to increase the efficiency of workers by making their behavior stereotypical, more machine-like.

The use of tools can also be improved by better understanding of both tools and their operation: conditions of usage, context, motivations, implications, and so on. This third aspect, often called ”judgment,” is by many people considered to be particularly human. It is a rag-bag category, and let us leave it like that. If Taylor took a few steps towards the replacement of human workers with industrial robots, it was because he considered the role of judgment to be marginal in manual labor. Taylor concentrated on the techniques at the interface between user and tool.

Tools and techniques develop together by coadaptation. We can think of automation as the incorporation in the tool of some of the techniques previously demanded of the user. The typical tool is inert. The user puts it in motion. With an automatic sander you don’t have to move the sander back and forth. You just hold the machine and the sander moves by itself. With automation, part of the coadaptation between tool and technique moves into the tool. The tool becomes a machine.

Not all automation transfers techniques from users to tools. Instead of developing the tools into machines, we can leave the tools unchanged and build the techniques of the user into a separate machine, creating a robot. An automobile is a horseless carriage: the techniques of the horse have been incorporated in the tool. But a tractor is more like an imitation horse than a horseless carriage. The tractor replaces the user while the tools, the carriages, plows, tillers, remain virtually the same, except for slight adjustments of the user interface. Thus, the automation of a work process can either mean the incorporation in the tool of some of the techniques of the user or the replacement of the user with a substitute machine.

This distinction is by no means as clear as even the one between tool and technique. It is a distinction between specialized and more general-purpose automation. A tayloristic approach is an attempt to dequalify work by dividing labor into simple techniques. Each individual worker will perform a single, simple routine with one and the same tool. There is no difference between replacing such a user and automating the tool she is using. But when the worker uses several different tools, relying on several different techniques, having to rely on judgment in selecting tools and techniques, the difference between automating a tool and imitating the user should be clear. Automation has in most cases meant the automation of tools, but there are exceptions. In the future we may see more clear-cut examples of automation by imitation of the user.

Intellectual tools can of course be improved by automation. Intellectual operations that can be identified with techniques, algorithms if you like, can be automated and incorporated in the tool. In this regard there is no difference between intellectual and manual work. The story of the development of the abacus and the desk calculator, the invention of computing machines, and the resulting programmable electronic calculators of today is a good example of how techniques are incorporated, one after the other, in a tool that consequently grows in complexity. This is an example of the automation of a tool rather than the replacement of a user, even though the first electronic computing machines were built with the express purpose of replacing the ”computers,” a substantial, and growing, profession in the early 40s.

Electronic calculators are examples of what might be called ”automatic intelligence.” This is not an expression in wide use. Instead we speak of ”artificial intelligence,” and we rarely think of common electronic calculators in those terms. We prefer to think of intelligence as a general capacity, and describe the ambition of artificial intelligence research as the imitation of a distinguished user rather than as the automation of some simple tools.

With tool use as a metaphor for thinking, we can define intelligence as the proficiency at using one’s tools, whatever they are. The typical way to express your frustration with not being as intelligent as you’d like to be is then: ”I knew it, why didn’t I think of it?” Intelligence is a matter of having certain techniques. But as those techniques are automated, or more generally, when thinking tools are improved and become more ”user-friendly,” techniques that used to distinguish the intelligent person will suffer the fate of manual techniques subjected to automation: they no longer confer prestige. And when techniques have lost their prestige, they are no longer signs of intelligence.

Intelligence can of course be defined as the general capacity to develop intellectual techniques, rather than as the possession of intellectual techniques and tools. We may still want to describe, as intelligent, the skillful use of intellectual tools. And then, perhaps, calculators deserve to be called intelligent machines in spite of the fact that they rigidly repeat simple combinations of elementary operations. The issue is not what to call these machines, of course, but how we should think of artificial intelligence: as the imitation of a general intelligence or as the piecemeal automation of intellectual tools.

Suppose that we manage to program a computer to solve problems in a certain domain, preferably as general as possible. What have we learned about how human beings solve problems in this domain? We will probably be aided by our computer program in our efforts to determine how human beings solve problems, and we may find that, in our culture at least, people use the very artifact we have been automating. (We may well wonder whether our ability to develop the program hinged on the fact that we were already, unwittingly, using this artifact.) But, if that is not the case, our computer program itself has nothing to tell us about human beings. We can learn from it about problem solving, as an artificial discipline, but there is nothing particularly human about that discipline.

In ”classical” or ”good old-fashioned” artificial intelligence research, as defined by Allen Newell and Herbert Simon, people study and develop cognitive artifacts and in doing so they study intelligence. Not human intelligence, but just intelligence, as a craft, varying culturally, with certain natural constraints, like other crafts. This is why those in AI never really worry about testing their ideas, principles, and systems on human beings. They are developing intelligence techniques, not a psychological theory. It has created unnecessary confusion when people in classical AI have failed to see this.

Even Newell and Simon misdescribed their work on problem solving, viewing the heuristic rules they tested in the Logic Theorist and the General Problem Solver as some sort of psychological laws, while it should be clear that these are better viewed as principles of problem solving and nothing else. Like primitive mathematical theories and legal rule systems, these principles are artifacts, tools, developed for a certain use. Mathematics, generally, is a collection of artifacts, some of which are implemented on some brains, and some even on machines, but this certainly does not make them psychological laws in any interesting sense.

Recently, research in artificial intelligence has begun to shift from automating a variety of different cognitive tools to imitating more general intelligence techniques, or even the brain. This change is radical. To classical AI, thinking is the rule governed manipulation of symbols, and the implementation is in principle irrelevant. The automation of cognitive tools will naturally take one tool at a time, fulfilling its tasks by ”modularization.” You learn about the modules by studying the tools. When the project is to imitate the brain, the modules will be different systems in the brain. And the task is typically conceived as an all or nothing affair: ”Can artificial intelligence be achieved?”

When artificial intelligence research changes its orientation from automation to imitation, this move can be justified by an interest in the human brain rather than in intelligence, as when someone initially interested in wood carving turns his attention away from knife and wood to study the human hand. This change seems to me ill-advised. Computer technology can no doubt be fruitfully used to study the human brain, but little of relevance to intelligence will result from that research. To reconstruct our cognitive tools from neural net theory is as difficult as reconstructing complex applications from operating systems.

The new technology of parallel architectures, so useful when making large technical calculations, has played an important role in supporting the renaissance of an interest in the brain, in driving artificial intelligence research back to nature. But rather than using this technology to imitate the human brain, we can use it to automate the kind of complex cognitive artifacts exemplified by work organizations such as offices. Artificial intelligence research will then make a substantial contribution to the computerization of society, instead of getting deeply involved in the dream of creating an artificial brain, a dream that may be exciting to neurophysiology, but that is otherwise comparatively uninteresting.

Part of the blame for misunderstanding classical AI and for an interest in imitating the human brain must fall on Turing himself. His imitation test makes us think of artificial intelligence as the imitation of human beings rather than as the automation of their cognitive tools. This test, in effect, defines intelligence as the capacity to carry on a conversation, and the aim and nature of artificial intelligence research accordingly. The imitation game is a parlour game of interest to philosophers, but it has very little to do with AI research. Our interest in machine translation or parsing, the ambition to design ”automatic English,” is not motivated by a desire to have machines as partners in conversation.

There is such a lot we still don’t know about the cognitive tools we use, looked upon as tools, and the techniques we need to use them. Computers are wonderful devices for automating such tools. And the important task of artificial intelligence research lies not in an attempt to make copies of us, but in an effort to increase our intelligence by improving and automating our cognitive tools, as well as designing new ones.

5 An Artificial Science

I have argued above against the dualism between mind and society: by arguing against naturalism in psychology, by trying to show how culture-relative our theorizing in psychology normally is, and by questioning the dominant conceptions of the mind in favor of a view of thinking as a social practice with tools. When mind is seen to be a part of society, then psychology will be a part of (a new kind of) sociology as well. I now want to place this argument within the context of Herbert Simon’s ideas about a general, design-oriented science of the artificial.

I began by lining up Simon and Quine as extremes, one introducing artificial science, artifacts and design, the other advocating natural science, physicalism and empiricism. I then placed Daniel Dennett in between the two, wanting to push him away from Quine closer to Simon, away from naturalism and biology in the general direction of sociology and technology, away from natural science and a naturalized philosophy towards artificial science and a socialized philosophy.

It is time to look a little more closely at what Simon is actually saying in The Sciences of the Artificial about what artificial science is supposed to be. The first thing we then notice is that the contrast I have made between organisms and artifacts is not stressed by Simon. Artifacts or artificial systems are functional, adaptive systems, on Simon’s view, systems that ”can be characterized in terms of functions, goals, adaptation” (p. 6). It is true that he defines artifacts as man-made, but he also claims that, in all other respects, organisms are good examples of artificial systems, being adaptive systems that have ”evolved through the forces of organic evolution” (p. 7). Business organizations, machines, and organisms are all exemplary adaptive systems.

An artifact can be thought of as the interface, Simon says, between the substance and organization of the artifact and its environment. The two sides of the interface, its structure and its environment, fall under the province of the natural sciences. But the interface itself belongs to artificial science. While the natural sciences are interested in how things are, the sciences of the artificial are concerned with how things might be—with design. Indeed, artificial science is a science of design.

Simon formulates his task as that of showing how science can encompass both ”human purpose” and ”natural law,” finding ”means for relating these two disparate components” (p. 4). In accordance with this project, psychology is first defined as an artificial science studying the internal limits of behavioral adaptation. Then, since these limits are seen to be rather marginal, and the complexity of human thinking to be artificial, ”subject to improvement through the invention of improved designs” (p. 26), cognitive science is included in the science of design as a study of (the design of) the cognitive artifacts designed by man attempting to adapt to his environment.

A psychology that studies the internal limits of adaptation is of course very different from a psychology that studies (the design of) cognitive artifacts, even if both can be said to study ”how things might be.” The subject matter of one is ”man’s relation to his biological inner environment,” the subject matter of the other is ”man’s relation to the complex outer environment in which he seeks to survive and achieve” (p. 81). Thus Simon stresses the role of the environment, internal vs. external, in both varieties of psychology. His ideas of how to study the artificial seem to divide into an empirical science about the limits of rationality (or adaptation) on the one hand, and an empirical science of adaptation on the other. The artificial itself, the interface between the internal and the external environment, tends to disappear into the background. And modern evolutionary biology seems like a good model for artificial science, with organisms as exemplary artificial systems. Simon’s radical program for a new science of design seems to reduce to a rather well known theme in contemporary American thinking: functionalism.

Functionalism, the idea that phenomena can be identified and explained in terms of their function, has two rather different roots: biology and engineering. These two forms of functionalism were famously joined by Norbert Wiener in his program for a general science of cybernetics. Dennett is a strong proponent of this marriage, defining biology, and cognitive science, as ”a species of engineering: the analysis, by ‘reverse engineering,’ of the found artifacts of nature.” To Dennett, evolution is a process of engineering, organisms the artifacts designed in that process. Like Simon, Dennett wants to think of evolution and engineering both as processes of adaptation, and he ought to feel very comfortable with Simon’s ideas for a design-oriented artificial science as I have described them here.

But something is terribly wrong in this description. One of Simon’s motives for his campaign for an artificial science is the obvious shortcomings of the natural sciences as a foundation for (education in) engineering:

In view of the key role of design in professional activity, it is ironic that in this century the natural sciences have almost driven the sciences of the artificial from professional school curricula. Engineering schools have become schools of physics and mathematics; medical schools have become schools of biological science; business schools have become schools of finite mathematics. (p. 56)

In the same context he goes on to warn that

a science of artificial phenomena is always in imminent danger of dissolving and vanishing. The peculiar properties of the artifact lie on the thin interface between the natural laws within it and the natural laws without. (p. 57)

Only by holding fast to the ”process of design itself” will it be possible to develop an artificial science that is radically different from the natural sciences.

When we look more closely at how Simon describes the general science of design, we find it substantially different from the two varieties of psychology described above. Rather than studying the relations between artifacts and environments, such a science will concentrate on the artifacts themselves and their design. Such an artificial science will stress the important differences between biology and engineering rather than dwell on the obvious similarities between them. If one wants to argue, as Simon clearly does, that the idea and methods of an artificial science are importantly different from those of the natural sciences, even if we are only beginning to appreciate how, then one should be careful not to describe evolution by natural selection as a process of design. If we turn to biology to learn about design, then artificial science will collapse into the natural sciences, after all.

But what, then, are the important differences between natural science and a science of design, between evolution and engineering? The fundamental difference is one of ”knowledge interest.” In the natural sciences we want to find out what the world is like, while in the sciences of the artificial we are interested in what could possibly be and how to make it so. In engineering we want to make the impossible, to push further the limits of human imagination. Engineering is radical, utopian, constructive. Evolution is fundamentally conservative. Goldschmidt’s hopeful monsters notwithstanding, the design going on in nature closely tracks the internal and external environment, advancing by ”imperceptibly small steps.” When we study evolution, we study the environment. In engineering we admire most the radical departures from environmental limitations. And the limitations that count the most are set by the ”practical inertia” of previous design decisions.

Organisms are not artifacts for the obvious reason that they are not man-made. All other similarities aside, this difference is important enough to warrant attention. It shows up in fundamentally different attitudes to organisms and artifacts. We look with wonder on complex, beautifully adaptive organs and organisms, wondering how they are at all possible, how they could have evolved. The aim of a biology inspired by evolutionary thinking is to make the wonderful comprehensible. But when we look with wonder at similarly complex artifacts, our wonder has a different, more creative, design-oriented flavor. ”If they could make that,” we say, ”then I should be able to make this!” Artifacts inspire us to improvements. Our interest in how they are made is guided by our interest in making them ourselves, and making them better. In biology it is natural to view organisms from a functional stance, but in engineering the typical stance is a design stance—and these two are fundamentally different.

The distinction I want to make between biology and engineering is not made in terms of design methods. There are such different methods, to be sure, and some have tried to use them to distinguish between evolution and engineering. But modern engineers are very good at tinkering, so Lévi-Strauss’ (1966) distinction between engineering and tinkering does not really draw the line where it should. Similarly misplaced is the rather interesting difference between cultivating a process of growth and imposing a form on an object once and for all. No, the distinction I want to make is blushingly simple, but this does not make it less important.

Of course, I am not saying that this difference between organisms and artifacts, between biology and technology, makes it impossible for us to view organisms as artifacts (or vice versa), to bring to biology an engineering perspective, to organisms a design stance. All I am saying is that when we do this, we should be careful not to reduce the differences between the two, losing what is essential to technology, its creative ambition to explore the space of possibilities, or losing what is essential to biology, its responsibility to keep track of the actual. The risk that biologists begin to think only like engineers does not seem imminent, but the engineers we educate today are certainly lacking in design attitude.

Breeders of animals or plants often get inspired by each other’s results: ”If she could make that, then I should be able to …” That is, they look upon their organisms as artifacts, with a genuine design orientation. In the course of time we may actually so change our attitude to the organic world as to make it a province of artificial science. This all depends of course on the thickness of the interface, on the proportion between the overall complexity of the organism and the internal and external limits to (artificial) adaptation.

The theory of evolution is a theory of how biological organisms develop and change by adaptation to a natural environment. But our environment is artificial rather than natural, our world is an artificial world, and we are to a large extent artificial ourselves. Quine’s image of ”a biological organism in a physical environment” is complicated by the abundance of artifacts with which we have populated our world. Simon wants us to think of these artifacts as the adaptive interface between their inner and outer environment. The human mind, as well as the person, is an artifact, an interface between a physiological machinery and the natural world. We can think of other artifacts, such as cities, airplanes, colleges, the tools we use and the organizations we belong to, as part of the environment we have to adapt to, but at the same time they are part of ourselves as adaptive systems, part of the artificial interface that we are. So, ironically, we come out defending a sort of dualism between mind and body. But it is a dualism with a twist: mind is material, of course, but it is not natural. Mind is artificial.

6 An Artificial Stance

The modern world is an artificial world, but modern science is a science of nature. Explaining and predicting events in nature, including the behavior of organisms, is very different from explaining and predicting human action in a modern society. The natural sciences, including biology, are of little help in making sense of our artificial world. They tempt us to take society for granted and prevent us from appreciating the role of technology in shaping society, its members and their actions.

The natural sciences played an important role in revolutionizing thinking in the 17th century, but today they act as a conservative brake on our attempts to come to grips with our world. Physicalism, a naturalized philosophy, and a biological approach to the study of human cognition are all expressions of a conservative interest in nature at a time when what we need is a theory of technology, a theory of design, a science of the artificial. Dennett’s rightly celebrated intentional systems theory is an excellent example of such a conservative, naturalized attempt to lay the foundation for a study of mind and action. By trying to make clear exactly how this theory falls short, I hope to promote an interest in the artificial.

Dennett is remarkably creative in coming up with examples to illustrate (or generate) his ideas and distinctions, but examples can be seductive. Take the famous chess-playing computer in ”Intentional Systems” (1971), for example, used by Dennett to introduce his three stances: the physical stance, the design stance, and the intentional stance. How much hinges on the fact that the machine is playing a familiar parlour game? That it is a computer acting and not a person? Would we come up with different stances were we to use a different type of example?

Before we look at such an example, let us think more generally about human action. In order for an action to appear it must be possible and it must be wanted. An action is possible only if the agent has the competence to perform it and the environment supplies the means needed. The agent must want to perform the action and the social norm system must encourage or at least not prohibit it. In explaining or predicting human action we need to consider conditions like these, and the way actions are differentially weighted by them, but sometimes one or more of these conditions will be obviously satisfied, and not in question.
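These four conditions, competence, means, desire, and norm, can be set out schematically. What follows is only a minimal sketch of my own, not a formalism from Dennett or Simon; the names Situation and action_can_appear are invented for illustration.

```python
# A minimal sketch (my own illustration, not a formalism from the text) of the
# four conditions under which an action can appear: the agent must have the
# competence, the environment must supply the means, the agent must want the
# action, and the norm system must at least permit it.
from dataclasses import dataclass

@dataclass
class Situation:
    competent: bool        # does the agent have the competence to perform it?
    means_available: bool  # does the environment supply the means needed?
    wanted: bool           # does the agent want to perform the action?
    permitted: bool        # does the norm system encourage or at least permit it?

def action_can_appear(s: Situation) -> bool:
    # All four conditions must hold for the action to appear at all.
    return s.competent and s.means_available and s.wanted and s.permitted

# A legal move by a competent, willing chess player:
print(action_can_appear(Situation(True, True, True, True)))   # True
# The same move attempted against the rules of the game:
print(action_can_appear(Situation(True, True, True, False)))  # False
```

In a game like chess, as the next paragraph argues, the last three conditions are usually obvious, so only the first carries any explanatory weight.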

In a game like chess, the overall motivation of the players is to win the game. We know what they want. What is permitted and what is not is rarely in doubt. The norm system is simple and familiar, if we want to explain an action only in its role as a chess move. Likewise there is seldom any problem with the physical environment. Asking for an explanation of a chess move we do not ask how it was physically possible to move the piece. Only one condition really needs to be considered: the relative competence of the chess players, including what relevant information they may have, or be able to glean, about their opponent.

When we leave the world of games, actions will become more difficult to predict. Motivation and norm system will often be unclear and the environment will be more complex and problematic. When predicting such actions we have to consider all four conditions distinguished above, including the three that we can take for granted in predicting chess moves. Norm systems vary and interact with one another and with the desires of the agent in complex ways. People will act under the influence of norm systems that are alien to us, performing surprising and incomprehensible actions. Even in a game of chess we may sometimes need the norm system to explain an action. When someone is cheating, the game can be very puzzling, unless one is daring enough to suggest that there has been a breach of rules. Similarly, the technical environment becomes important when the action itself was not observed and it is not clear whether the environment permitted that kind of action.

The roles of these four conditions are well exemplified in the work of an anthropologist studying a foreign culture, but let us choose a less exotic cliché. When the fictional detective goes about identifying the murderer, he will typically ask about competence, means and opportunity, motive, and moral character. When the detective is reconstructing the crime, possible suspects will be excluded one after the other on the basis of falling short in one or more of these dimensions. At the ensuing trial there will be a replay of sorts of this procedure. The prosecutor will try to convince the jury that the defendant had the competence, means, motive, and character needed to commit the murder. If there is no doubt about the actual sequence of events, then the attorney for the defence still has the possibility of getting the defendant off the hook by claiming temporary insanity, or other mitigating circumstances.

In all these discussions during the criminal investigation and at the trial, there will be a rich variety of explanations and predictions of behavior. The three stances discussed by Dennett will all be used, but in addition, the environment, both physical and social, will figure prominently in these explanations. The design of the culprit, his size, strength, intelligence, and special competencies, will be referred to, but so will the availability of certain tools and substances. Murder is an immoral act, and the likelihood that the suspect would be capable of breaking the norms of his society will also be discussed. Actions are possible only if you are at the right place at the right time, so a lot of time will be spent on the temporal geography of the suspects, the checking of alibis, and so on.

Chess is a parlour game and as such it relies on a simple and standardized physical and social environment. Environmental conditions, external to the agent, tend to drop out when we consider such actions as chess moves. To make things even worse, the pedagogical advantage of having a computer as agent, when explaining the use of the stances, is matched by the disadvantage of having an agent which, at the current state of technology, really has no environment, physical or social. But when we are dealing neither with computers nor with games, how could we predict behavior unless we had environmental variables in our functions?

The physical and social conditions bring out the role of artifacts in human action. We could take artifacts seriously by adding a fourth, artificial stance, to complement Dennett’s three well-entrenched ones. But Dennett would see such a stance as simply an application of the intentional stance suitably provisioned with historical and social contextual details. After all, when you ask, of an intentional system, what it ought to believe, given its context, etc., the context includes its personal history, obviously, but also the context (social, environmental, cultural) in which it acts. The principles of interpretation are the same—there is no special way in which history or convention enter into it.

But the very same argument can be directed against distinguishing a special design stance. After all, when you ask, of an intentional system, what it ought to believe, it seems very natural to consider not only the external context, but the internal as well, what it ought to believe given that it is designed thus and thus. The principles of interpretation will be the same. So, the only motive for distinguishing a design stance but not an artificial stance seems to be a bias in favor of an intentional system as a natural organism rather than a social being. Of course, the point here is not how many stances you want to operate with, but the way you argue for the stances you want to distinguish.

Both the design stance and the artificial stance add information to the intentional stance. The artificial stance looks closer at the various rule systems, instruments and other environmental features that are involved in the competence of an intentional system, while the design stance goes into the details of the internal implementation of its competence. If you want to predict the behavior of a mouse confronted with combinations of cats and cheese (another of Dennett’s early examples), the intentional stance is perfect: the competence involved is simple and we can assume a good design.

In general, if you are trying to keep track of an intentional system, then the more rational or optimal it is, the less interesting will be the properties both of the design and of the physical and social environment. But you must be able to match its competence, of course, in order to use the intentional stance to your advantage. Confronted with an optimally designed chess-player, you will lose the game exactly because you will be unable to predict its moves using the intentional stance. The more complex the competence, the more important its role will be in the prediction, and the less relevant will be the beliefs and the desires of the agent, acting as she is as an instrument of her competence.

Explanations and predictions from the artificial stance typically rely on the idioms of the artifacts being referred to. When the artificial stance is couched in an intentional idiom, as it can be, it may seem as if one were ascribing beliefs and desires to the agent. But the particular agent is really not being referred to, except as a vehicle for the relevant artifacts. The intentional stance attributes beliefs and desires to the individual agent; there is never any doubt about that, and it is made quite clear in ”True Believers” (1987), the current ”flagship expression” of Dennett’s view. But sometimes, as in ”Intentional Systems” (1978, p. 13), when Dennett distinguishes between ”human psychology” and the ”‘psychology’ of intentional systems generally” or, in ”Three Kinds of Intentional Psychology” (1987, p. 58ff), when he describes intentional systems theory as ”competence theory,” the intentional stance seems much more like what I here call an artificial stance. This vacillation, if I may so call it, explains part, but not all, of Dennett’s vacillation between realism and instrumentalism in connection with the intentional stance.

To the extent that human beings subject themselves to the rules of their institutions, or to ”opportunity,” their behavior is predictable from those rules. And to the extent that human beings rely on technology in their actions, their behavior is predictable from the normal functioning of that technology. ”Watch her now as the game begins, and you will see her move the pawn in front of her king two steps forward.” ”Watch how all these cars will begin to move when the lights turn yellow and green.” The bureaucrat’s stamping of your passport is artificial in the same sense in which the computer’s chess move is: the stamping is determined by an institution, the move by a program. In explaining and predicting their behavior we don’t have to assume rationality or posit beliefs and desires. When everything goes well, all we have to rely on is our knowledge of the relevant artifacts and the assumption that people will implement those artifacts. ”How do you dare drive here on the highway with all the other cars?” The intentional stance: ”I assume that the other drivers are rational and therefore have desires and beliefs that make them drive pretty much the way I do.” The artificial stance: ”This is traffic, isn’t it? The rules of traffic guarantee reasonable safety.”
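The contrast can be put in schematic form. The following is only a toy sketch of my own, not anything drawn from Dennett; the rule and the function names (predict_intentional, predict_artificial) are hypothetical stand-ins.

```python
# A toy contrast (my own illustration) between predicting behavior from the
# intentional stance and from the artificial stance. The names and the rule
# used here are hypothetical, not an analysis of real traffic regulation.

def predict_intentional(beliefs: dict, desires: dict) -> str:
    # Intentional stance: assume the agent is rational and derive the action
    # from the beliefs and desires we attribute to her.
    if desires.get("reach_destination") and beliefs.get("light_is_green"):
        return "drive on"
    return "wait"

def predict_artificial(rule: str, light: str) -> str:
    # Artificial stance: ignore the individual agent's mind; predict from the
    # rule system (the artifact) that the agent is assumed to implement.
    return "drive on" if rule == "go on green" and light == "green" else "wait"

# When everything goes well, both stances yield the same prediction:
print(predict_intentional({"light_is_green": True}, {"reach_destination": True}))  # drive on
print(predict_artificial("go on green", "green"))                                  # drive on
```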

With most of this Dennett will agree, I think. Indeed, most of the points are from his own writings. Still, he seems unwilling to fully accept the consequences of his position and add a fourth stance to his trio. I interpret this as an indication that, in spite of everything, he is still underestimating the role of artifacts in human life, preferring to think of us as mainly biological organisms. When you introduce a fourth stance, you will have to look closer at artifacts, opinions among them, and you will begin moving towards an attitude very different from the naturalized functionalism dominating contemporary philosophy of mind.

Just seeing how something works will seldom be enough for one to be able to construct it oneself, or even to consider constructing it: there is a long way to go from a functional stance to a design stance. With such a stance you will become more interested in the design of artifacts than in how they determine human action. Such a design stance is an essential element in Herbert Simon’s conception of an artificial science. Artificial science is a science of design. Foreign to the natural sciences as this design attitude is, it is still definitive of human existence. Our ability to negate the actual and think the possible is what makes us human. Ending this long journey on a high note, we could summarize its message by paraphrasing Heidegger: The essence of Dasein is Design.

Notes
Quotes from Simon (1969), p. 3, 22 and from Quine (1969), p. 82.
Cf. such papers as ”Memes and the Exploitation of Imagination” (1990), ”The Interpretation of Texts, People and Other Artifacts” (1990), ”The Role of Language in Intelligence” (forthcoming) and Chapter Seven of Consciousness Explained. Ironically, it is a biologist, Richard Dawkins, who (together with Julian Jaynes) has been the important source of inspiration in this process.
In Consciousness Explained, p. 460.
Verbal behavior to be sure, but still behavior in the service of biological survival.
Evolution then comes to play the role of Descartes’ benevolent God, ensuring that our beliefs are true. If they were not true we would not be here. Quine puts it thus: ”Creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind” in ”Natural Kinds”, an essay containing a clearer formulation of the motivation for an ”epistemology naturalized” than the essay so named. Both essays are in Quine (1969).
But surely that argument can be carried all the way down? The human organism was not designed for sex, or for eating, or for walking; organic structures designed for other purposes, if you forgive the language, were put to new use. All evolution involves exaptation.
Sometimes people seem to be confused by the superficial resemblance between the method of hypothesis and instrumental (or operant) conditioning, trial and error, but it is the deduction taking place in the development of hypotheses that makes the difference. Thorndike’s kittens do not reason.
In appendix B of Consciousness Explained, Dennett gives seven predictions (”half-baked ideas for experiment”) for scientists to test his theory by. Not bad, when Einstein only had three, you might say, but then, if you look closer at the seven predictions, do they really tell against a Cartesian Theater Model? (The answer, as you might well guess, is no.) And, what does he mean by saying that ”as a philosopher I have tried to keep my model as general and noncommittal as possible”?
The modern source of this kind of thinking is of course Hegel and his theory of consciousness in The Phenomenology of Spirit. Via Marx this theory has been particularly strong in the Russian school of psychology, often called ”activity theory.” Two recent, rather different, examples of this kind of thinking are Jaynes (1976) and Suchman (1987). Gregory (1981) is among the few who discuss, rather than just mention, the importance of technology, and the role of tools, in thinking. The theory of intelligence I hint at here, he there develops in more depth, using a fruitful distinction between what he calls ”potential” and ”kinetic” intelligence.
In Tractatus de Intellectus Emendatione: ”So, in like manner, the intellect, by its native strength…, makes for itself intellectual instruments, whereby it acquires strength for performing other intellectual operations, and from these operations gets again fresh instruments or the power of pushing its investigations further, and thus gradually proceeds till it reaches the summit of wisdom.”
This distinction, or difference in degree, is central to my argument, but I have very little to say about it. For a while we thought we had a good handle on it in terms of the distinction between hardware and software, but that, we have come to realize, was a little simple-minded. Like Dennett’s use of the related notion of a ”virtual machine,” this distinction can serve an educational purpose, but it does not really stand up to closer scrutiny. For another attempt, see Haugeland’s distinction (1985, p. 106ff) between what he calls type A and type B automation.
”Mother Nature versus the Walking Encyclopedia: A Western Drama” (1991), p. 27.
Dennett’s ”The Role of Language in Intelligence” (forthcoming) is a hint in the right direction, raising the issue of ”the details of the interactions between … pre-existing information structures and the arrival of language.”
”Self-Portrait” (forthcoming).
This is not a rhetorical question. The answer can be found in classical behavioristic research, with its explicit condemnation, as ”cheating,” of the use of cognitive tools. This was pointed out by Miller, Galanter and Pribram (1960), commenting specifically on behavioristic research on memory. The heritage still plagues our school systems: ”This is mental calculation, boys and girls, so let me see your fingers.”
Let us think of the horse as the user of the carriage. It simplifies the example without biasing the conclusion.
Cf. Ceruzzi’s (1991) account of the human computers. The computer profession met Taylor’s demand for simplicity. A relatively uneducated typist could, according to Ceruzzi, be transformed into a professional computer in just a couple of weeks.
Cf. Newell (1990), p. 90: ”… intelligence is the ability to bring to bear all the knowledge that one has in the service of one’s goals.”
Cf. Simon (1969), p. 30f, for this point.
Cf. Simon (1980), p. 42f.
Sometimes I think people in AI should be more careful in defining what they do. Compare standard definitions of AI, e. g., Charniak & McDermott (1985) and Waltz (1988).
Unless psychology is viewed as an artificial science, the science of cognitive artifacts, with nothing particularly human about it. But if this is Simon’s later view, this was not the view of the Newell and Simon of the late 50s. There their ambition was to advance functionalism in psychology against a cybernetic interest in the nervous system. Cf. Newell, Shaw & Simon (1958). Cf. also McGinn (this volume) for a view of psychology on which it is neither a natural nor an artificial science.
Waltz (1988), p. 191.
I am simplifying. A lot of the classical AI effort, particularly by people like Roger Schank, went into attempts to imitate our everyday knowledge, aiming, in effect, to automate an insuperable mess of tacitly acquired artifacts.
”Self-Portrait” (forthcoming).
In spite of the importance of technology in the life of mankind, it remains very difficult to appreciate its nature. The natural sciences continue to determine our way of thinking, to the extent that we still speak of technology as applied science when it would be more appropriate to speak of science as applied technology. Scientific progress is made possible by technical development, and its direction is determined by the direction of that development.
Dennett’s notion of a design stance tends to coincide with a more passive, functional stance, attending to the mechanisms implementing the competence of an intentional system. This stance is better called a ”functional stance,” I think, reserving the term ”design stance” for an attitude from which you typically ask about the design, meaning how something is made or, better, how to make it, rather than how it functions.
Such a change is already on its way, of course, inspired both by genetic engineering and by computer simulation as a method for studying possible life forms. Cf. Langton (1989) and Dennett’s review, ”Artificial Life: A Feast for the Imagination” (1990).
Even if we may not always be as dependent on our environment as Simon’s (1969), p. 23f, famous ant making its very complex path across the uneven cliff, certainly most of what we do depends on it.
I am paraphrasing Dennett’s own words. Similarly, in ”The Interpretation of Texts, People and Other Artifacts” (1990), Dennett wants only two principles of interpretation, an optimality assumption and an ”ask the author or designer” principle, declaring irrelevant what might be called a conventionality principle. But all three principles are normally used to generate and settle conflicts of interpretation.
Cf. Simon (1969), p. 11: ”To predict how it will behave, we need only ask ’How would a rationally designed system behave under these circumstances?’ The behavior takes on the shape of the task environment.”
Compare the newspaper delivery example discussed by Fodor and Lepore (this volume).
Lars-Erik Janlert gave invaluable advice on a previous version, Svante Beckman has been an unusually stimulating discussion partner, and Susan Dennett heroically checked my English. Olle Edqvist and the Swedish Council for Planning and Coordination of Research have been generous in their support.

References

Ceruzzi, P. E. (1991) When Computers Were Human. Annals of the History of Computing, 13, 237-44.
Charniak, E. & McDermott, D. (1985) Introduction to Artificial Intelligence. Reading, MA: Addison-Wesley.
Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press.
Dawkins, R. (1982) The Extended Phenotype. Oxford and San Francisco: W. H. Freeman.
Gregory, R. L. (1981) Mind in Science. A History of Explanations in Psychology and Physics. Cambridge: Cambridge University Press.
Haugeland, J. (1985) Artificial Intelligence: The Very Idea. Cambridge, MA: Bradford Books/The MIT Press.
Jaynes, J. (1976) The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston: Houghton Mifflin Company.
Langton, C. G. (ed.) (1989) Artificial Life. Reading, MA: Addison-Wesley.
Lévi-Strauss, C. (1966) The Savage Mind. London: Weidenfeld and Nicolson.
Miller, G. A., Galanter, E. & Pribram, K. H. (1960) Plans and the Structure of Behavior. New York: Holt, Rinehart and Winston.
Neisser, U. (1967) Cognitive Psychology. New York: Appleton-Century-Crofts.
Newell, A. (1990) Unified Theories of Cognition. Cambridge, MA: Harvard University Press.
Newell, A., Shaw, J. C. & Simon, H. A. (1958) Elements of a Theory of Human Problem Solving. Psychological Review, 65, 151-66.
Polanyi, M. (1958) Personal Knowledge. Chicago: University of Chicago Press.
Quine, W. V. O. (1960) Word and Object. Cambridge, MA: The MIT Press.
Quine, W. V. O. (1969) Ontological Relativity and Other Essays. New York: Columbia University Press.
Simon, H. A. (1969) The Sciences of the Artificial. Cambridge, MA: The MIT Press. A second, much enlarged, edition was published in 1981.
Simon, H. A. (1980) Cognitive Science: The Newest Science of the Artificial. Cognitive Science, 4, 33-46.
Suchman, L. (1987) Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge: Cambridge University Press.
Taylor, F. W. (1911) Principles of Scientific Management. New York: Harper & Row.
Waltz, D. L. (1988) The Prospects for Building Truly Intelligent Machines. In S. R. Graubard (ed.) The Artificial Intelligence Debate. Cambridge, MA: The MIT Press.