Thursday, 17 April 2008

Some Different Ways to Think

by Ruth Garrett Millikan
University of Connecticut

I. Introduction

Daniel Dennett has offered a helpful framework in which to consider the evolution of mind, calling it "the tower of generate and test" (1995, 1996). On the bottom of the tower there are "Darwinian creatures," whose patterns of behavior result from the effects of natural selection alone. Next come "Skinnerian creatures," whose behaviors continue to be modified during their individual lifetimes by trial, reward and punishment. Third are "Popperian creatures," capable of learning, as well, by trying things out in their heads. Last are "Gregorian creatures," who learn through interaction with culture. I have spent some time trying to construct a similarly broad and rough framework in which to consider the evolution of mind, but focused on the development of inner representational systems. The idea was to explore possible forms of representation first within perception and then within thought as it becomes freed from perception. As I progressed, however, this fairly simply conceived project soon got out of hand. At times I thought I must be trying to reconstruct Kant's Critique of Pure Reason in transcendental realist idiom! But in the end I think the project should not be all that difficult, and I have tried to present here, as originally intended, just a skeleton.
The project is to survey something of the variety of ways in which it seems, a priori, possible for a creature to employ inner representations to help govern its behavior. A poor imagination for these possibilities seems likely to hamper empirical studies of how particular animal species in fact do perceive or think. On the other hand, empirical studies, as they proceed, will surely alter our ideas of what is possible. It is unavoidably a bootstrapping task. We must tug wherever we can find leverage.
Dennett called his scheme a "tower" because each level rested on the one below, not merely in evolutionary progression, but in individual animals. Thus humans, who have built the tallest tower, are at bottom Darwinian, then Skinnerian, then Popperian and finally also Gregorian. Recall that Aristotle's vegetable, animal, and rational souls formed the same kind of tower. Similarly, the simpler kinds of representations that I describe may all still be used by subsystems within humans, more sophisticated ways of perceiving and thinking being reserved for quite sophisticated projects of the whole person.

II. IntenTionality: Introducing intentional icons and signals

The more primitive structures that I will describe hardly deserve the title "representations", and certainly they are not "thoughts". I call them "intentional icons," from the philosopher's technical term "intentionality" (note the 't') and C. S. Peirce's technical term "icon." Intentionality is that peculiar property of representations that allows them sometimes to appear to bear relations to nonexistent things, for example, to be about states of affairs that aren't actual. Peirce's "icons" are signs that work by bearing a similarity or abstract isomorphism to what they are about. Roughly, intentional icons (inner ones) are plastic states or structures, physical modifications of an organism, caused by its experience, the forms of which vary in a systematic way so as to parallel certain variations in the organism's environment. There are two possibilities here. (1) The icon helps to guide the responses or activities of the organism to enable it to perform certain context-dependent biological functions in the context the animal is actually in, and does this by bearing an isomorphism to the relevant contextual features or structures. Call these "fact icons." (2) The icon helps to guide the responses or activities of the organism so as to produce a structure or state of affairs isomorphic to the icon. Call these "goal icons."{1}
An easy way to think about the "isomorphism" requirement here is just this. There are certain kinds of possible variations of (mathematical transformations of) the icon that correspond systematically to certain possible variations of the environmental context such that in the perfectly well-functioning animal, every icon corresponds in this manner to a certain environmental structure or state of affairs (but not necessarily vice versa). We can call the rules of correspondence here "mapping rules," and speak of the icon as "mapping" (in the mathematical sense) the corresponding structures or states of affairs in the environment, calling them, in turn, "the mapped affairs."
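The "mapping rules" idea can be put schematically in a few lines of code. This is my own toy illustration, not anything from the text: the icon is an imaginary bee-dance pair (dance angle, waggle duration), and the mapping rule with its numbers (angle mapped to bearing, one second of waggle assumed to stand for a hundred metres) is invented for the example. The point is only that certain transformations of the icon correspond systematically to transformations of the mapped affair.

```python
# Toy sketch of Millikan's "mapping rules" (all numbers invented):
# a family of icons whose possible variations map, via a rule, onto
# possible variations in the environment.

def mapping_rule(icon):
    """Map an icon (dance_angle_deg, waggle_secs) onto the affair it
    represents: (bearing of nectar from the sun line, distance in metres).
    The 1 s ~ 100 m conversion is an assumption for illustration."""
    dance_angle, waggle_secs = icon
    return (dance_angle, waggle_secs * 100.0)

icon_a = (40.0, 2.0)          # one possible dance
icon_b = (40.0 + 15.0, 2.0)   # a transformation of it: rotate by 15 degrees

affair_a = mapping_rule(icon_a)
affair_b = mapping_rule(icon_b)

# The transformation of the icon (rotating the dance 15 degrees)
# corresponds to a definite transformation of the mapped affair
# (rotating the nectar bearing 15 degrees, distance unchanged):
assert affair_b[0] - affair_a[0] == 15.0
assert affair_b[1] == affair_a[1]
```

In a well-functioning system every icon maps onto some environmental affair under this rule, though not necessarily vice versa, which is all the "isomorphism" requirement demands.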
Given this description of "intentional icons," even the Gibsonians, indeed, perhaps especially the Gibsonians, must conclude that animals typically employ intentional icons in perception. If there are systematic mappings of distal features of an organism's environment onto patterns of ambient energy impinging on the organism's perceptual organs, and if the animal is tuned to be guided by these patterns in appropriate context-dependent behaviors, this must be because a relevant isomorphism to the distal environment or the animal's relation to it shows up within the physiology (e.g., neurology) of the animal, first in patterns of sensory organ response or transduction, later wherever the translation from perception to springs of action occurs. I call these icons "intentional" because if anything should disturb the normal isomorphisms between icon and environment, and should the organism proceed to be guided by the misaligned icons in the usual way, a nonadaptive response is the likely result. A misaligned icon is like a false representation of the environment.
There are limiting cases, "zero cases," of intentional icons that I will call "intentional signals." The only way in which an (inner) intentional signal varies qua intentional (the only way, we can say, it varies "significantly") is with respect to its time of occurrence. The neural signal that triggers the protective eye blink reflex, whose job it is to prevent foreign objects from touching the eye, is an intentional signal. The time at which it occurs corresponds, ideally, to the time of approach of a foreign object that might damage the eye (the signal is a fact signal) and ideally corresponds, also, to the time at which the eye-blink response is to be produced (the signal is also a goal signal). Similarly, adrenalin running in the bloodstream is an intentional signal indicating some circumstance or other requiring a sudden burst of strenuous activity.
Besides inner intentional icons and signals, there also exist outer ones, such as beaver tail splashes (signal of danger) and bee dances (icon of location of nectar). I will not discuss outer intentional icons as such, but I will use them occasionally to illustrate quite general points that apply to both inner and outer icons.

III. IntenSionality

Suppose that Chinese characters worked more simply than they actually do, by brutely corresponding one to one to Chinese meanings. And suppose that sometimes portions of these characters corresponded to the sorts of meaning elements that are expressed by past tense, plural form, and so forth, these elements of meaning having in Chinese, as in English and German, irregular phonetic transcriptions. Contrast this way of mapping by meaning with the way an alphabetic system might map the same Chinese words and sentences. Now consider how these two systems might be used to map or "icon" strings of Chinese phonemes. Many of the same phoneme strings could be mapped in either system, but the mapping rules would articulate these strings differently. There would, in general, be numerous transformations (in this example, permutations and substitutions) performable on an alphabetic icon of a phoneme string that would produce another significant alphabetic icon but whose corresponding phoneme string could not be represented by the Chinese character system because it would not be a string with a meaning. There would also be transformations performable on character strings that would yield new representations of phoneme strings but that would not correspond systematically to alphabetic transformations. The two systems would not be isomorphic, yet they would be able to represent many of the same phoneme strings.
Nor would the distinction between these systems be entirely a matter of fineness of grain or difference of grain. The characters map by a correlation first with meanings, which in turn correspond to sounds in a somewhat irregular way. The systems not only articulate the affairs they map differently but they map by means of a different set of relations. Similarly, it is possible to represent Bill Clinton with his name "Bill Clinton" or with the description "the U.S. president in 1997" and to represent the number ten with Roman numerals, Arabic numerals, or with any of a variety of equations, using different kinds of mapping rules in each of the various cases.
The fact that intentional icons subject to different sets of significant transformations, articulated in accordance with different mapping rules, can sometimes map the very same affair produces the phenomenon philosophers call "intensionality" (note the 's'). The simplest way to name the environmental affair an intentional signal or icon maps onto is to use a representation that maps onto the very same affair. For example, we say, "that beaver tail splash means that some danger is now threatening the beavers," using an English sentence, the one in the "that" clause, to map the affair that it is the job of the beaver splash to map. Similarly, we tell about people's beliefs, saying, for example, "Bill believes that your daughter is going to win," putting in the "that" clause a sentence that maps the affair Bill's belief concerns. But this technique leaves it open whether or not the representation employed in the "that" clause should be taken to reveal the same articulation of the subject matter, to map using the same sort of mapping rule(s), as the one being described. Are we to take it that beaver tail splashes or, perhaps, certain inner intentional icons they give expression to, are articulated such that some transformation of them would express the past tense ("some danger was threatening the beavers"), so that substitution into the direct object place would be significant ("some danger is threatening the bears"), and so forth, as is true for the English sentence? Similarly, are we to suppose when I say to John, "Bill believes that your daughter is going to win," that Bill knows she is John's daughter and thinks of her that way?
The intenSionality that is characteristic of things intenTional consists in this peculiarity. A full description of any item exhibiting intenTionality requires telling what the relevant articulations and kind of mapping rules for it are as well as telling what affair it purports to map. Sometimes all three of these things can be done at once simply by holding up another representation, for example an English sentence embedded in a "that" clause. But sometimes it cannot.
English indicative sentences have, at a minimum, a subject and a predicate, either of which might be replaced without losing significance, but neither of which has any significant transforms that result merely from distorting its form along some physical continuum, by making it louder, say, or longer, or higher in pitch, and so forth. English sentences are also, all of them, subject to a negation transformation. These properties set them far apart from a host of simpler intentional icons, including many kinds of inner intentional icons that may help govern behavior in lower organisms. To describe these simpler icons is harder than to describe the more complex thoughts of humans, for there are no simple English sentences that can express the same contents with a corresponding lack of contrast or articulation. (There are some who doubt that even human thought ever takes sentential form.)
Consider, for example, the dance of the honey bee. It is an icon of the location of nectar relative to the bees' hive and the sun, but there is no transformation of it that would tell about nectar location relative to any other objects nor about the relation of anything other than nectar to hive and sun{2}. Although to state its "truth conditions" in English we must mention these items, the bee dance does not map (does not "mention") the hive or the sun or the nectar as such. It is an undifferentiated icon relative to its English "translation." Notice also that the bee dance has no negation; bees have no way of saying where there isn't any nectar, so don't bother looking. An important part of the story of the evolution of cognition must concern the emergence of various new forms of articulation for intentional icons.

IV. Pushmi-pullyu Icons

The most primitive intentional icons and signals are at once fact icons and goal icons, telling both what is the case and what to do about it. Thus each of the various chemical "messengers" that run in the blood stream "tells" about some particular state of the organism's physiology and, in stimulating a physiological response appropriate to that state, also "tells" what to do about it. The infamous "fly detector" in the optic nerve of the frog produces an icon that is at once a fact icon, telling when and at what angle there is a fly in front of the eye, and a goal icon telling when and at what angle to snap with the tongue. The dances of honey bees are icons at once of where the nectar is and of where the watching worker bees are to go. Elsewhere I have called representations having this sort of lumped double structure "pushmi-pullyu representations" after Hugh Lofting's mythical creature of that name (Millikan 1996). Here I will speak of intentional pushmi-pullyu icons or P-PIs for short.
At the opposite end of the spectrum from pure P-PIs are human desires and beliefs. Here fact-iconing and goal-iconing functions have completely separated. Our beliefs often concern facts that we have no notion how to use in action, and our desires include many we have no hope of satisfying. Once fact icons and goal icons have separated, it is necessary somehow to reassemble them for use. Practical inference is needed, but the result can be a huge gain in flexibility of action. What kinds of steps might there be in the evolution from systems employing only simple P-PIs to a system employing beliefs, desires and inference?

V. The first differentiation: goal signals and affordances

The P-PI, the neural impulse, produced in the frog's optic nerve by a passing fly, though it has minimal articulation, reports when and at what angle the fly passes and demands a correspondingly definite response from the frog's tongue. The impulse forms part of a simple reflex arc which, in this case, cannot be inhibited. It is not depotentiated even if the frog is completely sated. It reports a fact and issues an unconditional command.
Similarly, during the first few days of its life, a rat pup that feels a nipple touching its face responds by turning and sucking whether or not it is hungry. The neural "nipple detector" is a P-PI that makes an unconditional demand. A few days later, however, this reflex response is potentiated only when the pup is hungry, depotentiated when it is sated. Spelling this out in intentional terms, the pup's system is now sensitive to a new kind of intentional signal, the hunger signal. This signal indicates a state of nutritional depletion and demands its rectification. That is, the signal will perform its work normally only if it is aligned with a state of nutritional depletion and only if it effects rectification of this situation. In the nursing rat pup, moreover, it performs this function normally only by means of causing the pup to suck on a nipple. So it is a P-PI indicating nutritional depletion and demanding sucking and hence rectification. But unlike the frog's fly-detector P-PI, the fact that it is aligned properly with what it is designed to fact-signal and occurs in a well functioning organism does not guarantee that it will produce the result it demands, the result it goal-signals. It cannot do so unless coupled with a properly aligned firing of the pup's nipple detector, which cannot occur unless there happens to be a nipple there to detect. The hunger P-PI indicates a fact and sets a goal, but without the mediation of a second properly functioning intentional icon, it cannot cause that goal to be reached. Having set your goal in acknowledgment of certain of the facts does not guarantee that you know how to reach that goal from where you happen to be (in this case, perhaps, from where there is no nipple).
Taking another example, many small animals take cover if they see a small shadow gliding over the ground, such as would be cast by a flying predator. I am guessing that in many species, this response cannot be inhibited. The shadow produces in them a very simple P-PI that means, though it says this with minimal articulation, predator overhead so take cover. But though this is the demand, it does not always produce satisfaction of this demand even in the normal animal. First, the animal has to perceive some way to take cover.
Now let us return to the nipple detector in its new neural environment. It fires or fails to fire, depending first on the presence of a nipple, and second on its potentiation or depotentiation by the hunger detector. Looking more closely, suppose that it does respond when the pup's mouth touches a nipple but that unless the reflex is potentiated by hunger, the response is not passed on to the efferent nerves that control sucking. Intuitively, the nipple is perceived but the perception is not acted on. When the reflex is potentiated, the signal is an active or full P-PI. When the reflex is not potentiated, the signal, I suggest, in Gibsonian idiom, is a perception of an affordance.
Gibson spoke of affordances as being possibilities for action. The suggestion that possibilities are things that can literally be perceived, however, is puzzling. Surely possibilities should not be reified and introduced whole into the causal order. But we can put Gibson's point another way. The perception of the nipple when the sucking response is not potentiated is not a full pushmi-pullyu icon because although it is "pushed" it does not (or the rat does not) yet "pull". Still, it has the potential to become a pushmi-pullyu icon. It is a pushmi-possible-pullyu icon or P-P-PI! The presence of a P-P-PI is what corresponds to what Gibson calls the perception of an affordance, the perception of a possibility for action. Thus the rat pup that encounters the nipple when it is not hungry perceives the nipple as affording sucking and affording nourishment. The animal that is alert to the places around it where it might quickly take cover perceives these places as affording cover.
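The potentiation logic just described can be put schematically. This is a toy sketch of my own, with invented names: the detector fires on contact with the nipple in either case, but only when the hunger signal potentiates the reflex does the signal pass on to action; unpotentiated, the firing is a P-P-PI, an affordance perceived but not acted on.

```python
# Toy model (my own illustration) of the rat pup's gated reflex:
# nipple contact always produces a signal, but hunger determines
# whether it becomes a full P-PI (acted on) or remains a P-P-PI
# (a perceived affordance).

def rat_pup_step(nipple_present: bool, hungry: bool) -> str:
    if not nipple_present:
        return "nothing perceived"
    if hungry:
        return "full P-PI: suck"           # fact icon + goal icon, acted on
    return "P-P-PI: affordance perceived"  # registered but not acted on

assert rat_pup_step(True, True) == "full P-PI: suck"
assert rat_pup_step(True, False) == "P-P-PI: affordance perceived"
```

The first few days of life, on this picture, correspond to a version of the function with the `hungry` test removed: the reflex arc runs unconditionally.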
Notice why this kind of perception is not perception of facts, just as Gibson said it is not. The P-P-PI counts as an intentional icon at all only because it has a biological function, in this case a potential function, a function under certain circumstances. As I have defined them, all intentional icons, as such, have functions. If there are mapping rules in accordance with which an intentional icon can be said to be properly aligned with the environment rather than misaligned, this is so only because it has a function that cannot be performed normally unless it is so aligned. Thus its function, not, for example, statistics on what it is typically aligned with, determines what it is an icon of in the pushmi or fact-iconing direction. The relevant alignment then is with some mapped affair bearing a description under which it is causally possible for it to help account, in normal cases, for the performance of that function.
This description, just as Gibson said, is a description that is determined relative to the capacities of the animal. What the icon must show is a relation that the animal bears to its environment, one that will afford something to the animal if it tailors its response as some definite function of that relation. It may well be then that what the animal perceives is, from the point of view of physical science, a strangely gerrymandered affair. More important, the animal does not perceive this property or affair as a fact in a world with other facts but, intrinsically, merely as for being used in a certain way (Heidegger would have said "zuhanden").
This urgently raises the question what a pure fact-icon could possibly be. But first, we should notice what lies just around the corner, namely, the possibility of pure goal signals, of lopping off the pushmi part of the P-PI.
What I have called a hunger P-PI corresponds roughly to what ethologists traditionally called a hunger drive. What a drive does is to potentiate, often, a whole collection of lower P-P-PIs so that if triggered by perception, they will respond by activating efference in the animal and/or by potentiating still other P-PIs. For example, hunger potentiates a disposition to activate perceptions of food affordance. But the standard view assumes that some drives do not need to be activated by afference. Rather, they come into play either as the animal matures or in a cyclical manner. Drives of this sort might be said to consist in pure goal "signals" (see pp above). Or suppose that certain P-PIs are potentiated for action merely as a result of periods of disuse of the behaviors they effect, as suggested, for example, by Konrad Lorenz. Then their potentiated states might be considered to be pure goal signals, waiting for an opportunity to express themselves in action. Gallistel suggests that the "tendency toward autonomous potentiation in the circuitry controlling simple acts....results in the aimless mixing of behavioral fragments...the seemingly purposeless play that is so salient in the behavior of higher animals" (Gallistel 1980, p. 33). Such play would indeed be purposeless in the sense that it would not be directed by prior or higher level goal signals, yet as Gallistel suggests, it might well have a biological purpose without that.
I have already noted that, so far, we have no way of understanding how there could be such things as fact icons. Note that we also have no inkling yet of the possibility of goal icons that are more articulate than mere signals. Goal signals tell when to act toward a goal but show nothing more of that goal's structure.

VI. Perception and Cognition as Search Techniques

The most basic sort of P-P-PI, the most basic kind of perception of an affordance, is an icon of a relation of the animal to some object that is potentially a goal of action. It results from transduction of some pattern of energy impinging on or flowing over the animal, such that an invariant response to that icon, hence to that energy pattern (invariant in that it is describable as some definite function of the pattern), yields always the same action with respect to the goal object. To be in a position such that a final goal, such as having a fly in the stomach, is achievable by utilizing just one such perceived affordance is a blissful condition. Call such conditions "B-conditions". It is typical of animals that do not move about that they merely wait for B-conditions to pass by them and then seize the moment. More sophisticated animals, however, make an effort to maneuver themselves into B-conditions; the frog, for example, has sense enough to sit in a place that attracts flies. The very simplest way to attempt to maneuver oneself into a B-condition is, of course, just to wander around aimlessly hoping to bump into one. Alternatively, one can use some sort of search technique. The whole story of the development of perception and cognition can be viewed as the development of more and more sophisticated search techniques for maneuvering oneself into B-conditions. These are techniques for raising the probability of getting into places or positions from which one can productively act.
The first principle here is very elementary. Be constructed such that you can perceive affordances that afford your probable placement in new positions from which you are likely to perceive new affordances that afford your probable placement in still other positions ...and so forth... finally probably placing you in B-conditions. The trick is that the series of probabilities should have a product greater than the probability of B-conditions just happening along without your action, the higher the probability the better. Thus the search domain is narrowed and then narrowed again. The baby's response to a touch on the cheek is to turn toward it, thus raising the probability of feeling a nipple on the mouth, which will afford nourishment. Very simple animals show various kinds of taxis likely to take them into conditions where food affordances are prevalent or certain danger-avoidance affordances less likely to need to be utilized.
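The arithmetic of this trick can be made concrete with a toy calculation (all the numbers here are invented): a chain of perceived affordances is worth having only if the product of the step probabilities exceeds the base rate of B-conditions just happening along.

```python
# Toy calculation (numbers invented) of the principle above:
# the chain of affordances pays off only when the product of its
# step probabilities beats the baseline chance of blundering into
# B-conditions unaided.

step_probs = [0.8, 0.7, 0.9]   # each affordance probably places you to perceive the next
baseline = 0.05                # chance of B-conditions just happening along

p_chain = 1.0
for p in step_probs:
    p_chain *= p               # product of the series: 0.8 * 0.7 * 0.9

assert abs(p_chain - 0.504) < 1e-9
assert p_chain > baseline      # the search technique narrows the domain profitably
```

Each added step multiplies in another probability less than one, so longer chains must be made of individually reliable steps if the product is to stay above the baseline.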

VII. Introducing Fact Icons

Extremely complicated long and branching chains of affordances leading to the probability of finding one or another affordance, leading to the probability of finding one or another... and so forth, may be perceived by some animals, resulting in highly flexible behaviors. And it may be that correctly quantified increases in potentiations of response dispositions resulting from other relevant stimuli encountered along the way help account for the tendency of the animal to choose from among equally available and relevant affordances those objectively associated, in its particular circumstances, with higher probabilities of eventual success. The result would be an animal whose behavior is very flexibly governed by what Gallistel calls a "lattice hierarchy." Such an animal would be capable of navigating in the space-time-causal order from a great variety of different starting positions relative to its goals so as to reach them with reasonable probability. But I think it would be natural to say of such an animal that it did not think.
Nor does introduction of the capacity to learn, introduction of Dennett's Skinnerian level, as conceived at least in the associationist tradition of psychology, add more than details to this general picture. There are interesting questions to be asked here of course, such as whether a certain creature is capable of making associations with unused affordances or not, for example, whether it can learn how to find water when thirsty from the experience of having found water when it was not. But learning concerns the ontogeny of the hierarchy only, and does not affect its basic structure. "In this approach, those aspects of the latticework that derive from experience constitute the animal's acquired knowledge of the world" (Gallistel 1980, p. 330).
We are still in search, then, of fact icons, of articulate goal icons, and of something that might reasonably be called inference. Concerning fact icons, for example, if all an animal ever perceives is affordances, no matter how good it is at remembering unused affordances that pass by, it couldn't in principle construct pure fact icons. For example, a snake wired up this way that perceives (as some snakes do) a mouse for purposes of chasing by sight, then for purposes of striking by feeling its warmth, then for purposes of swallowing by perceiving its smell, would merely perceive first a "chase me", then a "strike me" and finally a "swallow me." How is it possible to liberate the mouse from total submersion in the series of transitory interests someone takes in it, if not in the snake's mind, then at least in yours and mine?
Facts enter before inference and before articulate goal icons, I believe, and could do so without disturbing the lattice hierarchy picture at all. Two simple principles are involved. First is the use of multipurpose icons that represent always the same kind of world affair but afford the animal different possibilities for action under different motivations. The second is the production of icons containing a surplus of natural information over designed information that becomes available for new uses not anticipated in the original design of the mechanisms. I'll address these in turn.
Multiple uses for icons may originally be just a side effect of economic construction of the perceptual apparatuses. If you eat both mice and frogs, it is not economical to have separate sets of eyes or completely different perceptual processing mechanisms for perceiving these. Similarly, if you eat mice and flee from snakes. This principle becomes especially clear when we consider the enormous obstacles confronting the design of any apparatus that is to have a sophisticated capacity accurately to make icons showing affordances of distal objects. To be as useful as possible, such an apparatus must enable recognition of the affording object or property and its relevant relation to the animal over as wide a range of these relations as possible (not just dead center in front of its nose) under a variety of mediating conditions (perceptual constancy in various lighting conditions or echo conditions, etc.) while filtering out distracting intrusions affecting proximal stimuli but irrelevant to action ("static" such as wind noise or shadows or extraneous smells). If you have such an apparatus, clearly you should use it for as many of your purposes as it can help serve. And the more purposes it serves, the less determinate is the "pullyu" reference of the P-P-PIs it makes.
This leads immediately to the second principle, that of the production of excess information. The more versatile such a perceptual apparatus becomes, the more likely it is to be relying on quite general principles in producing its icons, and this inevitably results in more natural information being produced than is consumed through its designed uses. If you build a visual system so that it can see mice, frogs, snakes and also conspecifics, then it undoubtedly brings in enough information to see any other small object as well, if only the rest of the animal's system could be tuned to find a use for this information. Now design into the animal some principles or mechanisms by which it can experiment in the use of such extra information, or just principles by which it searches for patterns of association involving it, and you have an animal that employs completely general purpose icons by design. It is designed to perceive certain kinds of facts for completely indeterminate uses. You have an animal that harbors pure fact icons.

VIII. The Construction of Objective Space

Notice, however, what these fact icons represent. They still represent merely relations that the animal itself bears to things, objects and properties, in its environment. Moreover, they do not represent these relations as such. The sentence "there is a round pebble in front of Kermit" is articulated such as to show Kermit, pebbleness, roundness and the relation in front of all separately. But the sort of fact icons we are talking of here, though they may be articulate about shape and direction, do not articulate a two termed relation between Kermit and the pebble. There are no transformations of them that can show the relation of anything other than Kermit to things, no icons that leave Kermit out of the picture.
Now comes the difficult part (the "transcendental deduction" part, in Kantian terms) of my story. I want to argue that there are better strategies than constructing a lattice hierarchy network for conducting searches for convenient and safe paths from wherever one happens to be into B-conditions. These strategies involve the introduction of inner representations, isomorphs, of various aspects of the objective world. By the objective world, I mean the world with the animal's special position in it temporarily removed from the picture, the world as it is "in itself" rather than, as it were, "from here". (This is the "transcendental realism" part.) And the strategy involves first inference and then, in its most sophisticated forms, a step up to Popperian animals who "generate and test" in their heads. I will begin by giving a simple and well known illustration of the general principle involved, namely, the advantages of employing cognitive spatial maps (my Kantian "aesthetic") and then sketch in some of the details of richer applications (my Kantian "deduction of the categories").
Suppose that you wanted to find your way home, but that all you had to go by was a collection of memories showing paths you had actually taken at one time or another from one place to another. Perhaps these memories form an associative network, the want-to-go-home signal potentiating nodes representing the various places you have been with lessened potentials as the number of links from home in the chain increases and also according to the lengths of the individual links. You take the path that sends the strongest signal to the node representing the place that you now are in.{3} The trouble with this arrangement, as with any arrangement resting on a record merely of orderings of past experience (a record of things as they happened to be associated for some creature in time) is that the mapping of the paths tells nothing about the geometry of the underlying spaces traversed. True, you would have enough information to get home from where you are, but you would be lucky if it put you on a direct route.
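The associative network just described can be given a minimal computational sketch. Everything below is hypothetical illustration (invented place names, an arbitrary decay rule): a want-to-go-home signal spreads through remembered paths, weakening with each link and with link length, and the animal follows whichever remembered path delivers the strongest signal to its current node.

```python
from heapq import heappush, heappop

def potentials(paths, home, decay=0.5):
    """Spread a want-to-go-home signal through remembered paths.

    `paths` maps a place to {neighbor: remembered path length}. Potential
    falls off with each link in the chain and with the length of each link.
    """
    best = {home: 1.0}
    frontier = [(-1.0, home)]
    while frontier:
        neg_p, place = heappop(frontier)
        p = -neg_p
        if p < best.get(place, 0.0):
            continue  # stale entry
        for nbr, length in paths.get(place, {}).items():
            q = p * decay / length  # weaker per link and per unit length
            if q > best.get(nbr, 0.0):
                best[nbr] = q
                heappush(frontier, (-q, nbr))
    return best

# Remembered paths only -- the animal has never flown straight home
# from the meadow, so no meadow-home link exists in the record.
paths = {
    "meadow": {"pond": 1.0},
    "pond": {"meadow": 1.0, "thicket": 1.0},
    "thicket": {"pond": 1.0, "home": 1.0},
    "home": {"thicket": 1.0},
}
p = potentials(paths, "home")
# From the meadow, take the path delivering the strongest incoming signal.
next_step = max(paths["meadow"],
                key=lambda nbr: p[nbr] * 0.5 / paths["meadow"][nbr])
```

The animal dutifully retraces meadow, pond, thicket, home, even if home lies a short straight flight from the meadow: the record contains only past associations, no geometry with which a shortcut could be discovered.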
What you need to have mapped for you to tell how to get home fastest is how the various paths lie relative not to your past history but to one another in a Euclidean space, the space that you will have to walk home in. You need to know how the paths twist and turn, at what angles they intersect with one another, and so forth, within that space.
It is for this reason, presumably, that even some quite lowly creatures make maps (some kind of isomorphs) in their heads of the locale where they live. There is excellent evidence, for example, that bees do (Gould & Gould 1988; Gallistel 1990). They apparently record the positions of various landmarks in the locale relative to other landmarks, not relative to themselves, and in a form that yields up a Euclidean metric. Using a map one can be guided directly from one place to another regardless of whether one has ever traveled any part of this route before. Thus a bee when transported by any route to any location in its territory knows how to fly directly home to the hive as soon as it has taken its bearings. The bee knows how to take short cuts.
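The advantage of such a map can be made concrete. Assuming, purely for illustration, that landmark positions are stored as coordinates in a common frame, a direct homeward bearing falls out of simple geometry from any point, whether or not any part of the route was ever flown before:

```python
import math

# Hypothetical landmark map: positions recorded relative to one another
# (here, in a frame anchored at the hive), not relative to the bee's
# own travel history.
landmarks = {"hive": (0.0, 0.0), "oak": (30.0, 40.0), "pond": (-20.0, 10.0)}

def bearing_home(current_xy, home="hive"):
    """Direct heading (degrees) and distance to the hive from any point
    on the map -- the short cut, computed rather than remembered."""
    cx, cy = current_xy
    hx, hy = landmarks[home]
    dx, dy = hx - cx, hy - cy
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)

# A bee released at the oak flies straight home, 50 units away.
heading, dist = bearing_home((30.0, 40.0))
```

The contrast with the associative record is that nothing here depends on which legs were actually traveled; the Euclidean frame itself supplies the routes.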
To make a map of an area requires representing not merely places one has happened to go, but taking account of the general geometry of space so as to leave empty space on the map for the places one has not happened to go. A "tabula rasa" is the traditional symbol of a mind that comes into the world with no prejudices about what it will find there. In this case, however, a tabula rasa can serve as a fine symbol of what the bee knows innately. The bee's blank tablet is a blank isomorph of Euclidean space in three dimensions, waiting to be filled in with landmarks. It is the bee's version of Kant's pure intuition of space.
I wish to generalize this lesson. It seems that the best way to find direct routes through the spatial-temporal-causal order from wherever you are toward B-conditions, toward conditions in which you know how to act directly and productively, is to begin to construct isomorphs, maps, of various aspects of that objective order. One needs to grasp the relations of various things in the world to each other, not just to one's own private wiggly space-time line. Searching in one's head for paths to B-conditions is both quicker and safer than searching outdoors. Constructing inner icons of the objective world which one saves for later use is true memory, as opposed to changes caused by mere conditioning. And the way this construction is done is much as Kant suggested. One must first grasp the most abstract principles of the world's ontology, the correct geometry for one's tabula rasa, and then try to fill in at least the areas most proximate to one. But I would guess that much of this knowledge of world geometry, for those creatures that have any grasp on it, is endogenous.
But there are still lessons to be drawn from considering maps of space, so let me return to them.
The bee's cognitive map is probably a bit like a road map with roadside tables, gas stations, and good restaurants marked on it: the hive and good nectar-gathering sites, at least. The map might also show the current position of the bee itself on it, as the televised map displayed in the front of a transcontinental airline coach shows the position of the plane one is in. Whether or not the bee's map shows the bee as well as where it wants to go, however, it cannot possibly stand alone as a guide to the bee's action. A direct perception of the bee's position relative to its environment is needed as well, either to keep the map-token bee in the right place on the map or, what amounts to the same, to find the place the bee is on the map. The map must be somehow joined to perception, so that the two together guide action. Moreover, this joining must be done by identifying a place on the map with the same place seen in perception. Only when joined will these two together yield the relation of the bee to, say, its hive. And that is a form of mediate inference! The bee is (better, the bee's wings are) guided by two representations that overlap in content, that is, that share a middle term (make reference to the same location) to yield the bee's action as a function of its relation to the hive.
Imagine that! The bee making inferences! And the principle involved is completely generalizable. No isolated icon of the objective world can guide action. Icons merely of the objective world, even if they have the location one currently desires to arrive at clearly marked on them, that is, even if they are articulate goal icons, are intrinsically powerless to guide action. They must be joined to perceptual icons showing some part of the same world structure but from the point of view of the animal. And this joining is practical inference.
Finally, an animal that uses maps of its world has to make maps of its world, hence will probably devote some energies specifically to this purpose, exploring and prospecting. This will be "theoretical activity", in Kant's sense. Its function is the acquisition of fact icons, gathered for very practical reasons to be sure, but with no particular practical goal in view. On the other hand, much of this "theoretical activity" can be expected to resemble the way industry supports "theoretical research": there must be possible applications in view. What goes on the bee's maps, for example, is probably just "places of interest" and landmarks potentially useful for navigation. More generally, understanding the schemas that a particular animal uses in constructing icons of its objective world and the kinds of details it is likely to represent with those icons will depend closely on understanding the affordances it is capable of perceiving, which in turn must fit with the basic atoms composing its behavioral repertoire.

IX. Other Aspects of Objective Permanence

There is a story circulating (though apparently it is largely apocryphal: compare Burghardt 1993, pp 141-45) that there are snakes that not only detect mice for purposes of chasing, striking and swallowing using three different sensory modalities, but are incapable of recognizing a mouse for any of these three purposes through any modality but the assigned one. The three aspects of the mouse that show themselves as affordances to the snake are not integrated by it into one object. It is as though these three aspects were entirely separate pieces of the world that just happened to lie juxtaposed on the time line of the snake's experience. The truth of the story to one side, there is nothing impossible in it. Nor, of course, would merely multiplying the number of modalities through which a single mouse-relevant affordance is recognized produce recognition of the mouse as an object rather than a mere string of associated affordances.
Unlike the snake, the animal that constructs icons of its objective world gathers together the fragments of the objective world it encounters and glues them together. Compare an archeologist who reconstructs ancient objects from just a few tiny broken fragments. Gluing pieces of the world together requires some sort of schematic plan of the general structure it should have, its general geometry, its ontology. What kinds of general schemas might be available to an animal? What aspects of the world might it find useful to reconstruct?
One obvious construction is reconstruction of an object in three dimensional space from the energy it structures as fragments of this structure are encountered or searched out by the animal. This is the sort of construction that David Marr tried to explain in his theory of vision. An animal that can do this sort of reconstruction will be able to orient itself to utilize affordances that show themselves directly only from other perspectives on the object, and be able to discover affordances that require guidance from properties of the whole object. Such an animal will also be in a far better position to reidentify objects and various kinds of objects.
The ability to reidentify objects and properties from a variety of perspectives is central to all other reconstruction tasks. First, as already noted, no icon of the objective world can be used to guide action without joining it to icons from perception, and this joining is done by finding a middle term, by identifying part of what shows in one icon with part of what shows in another. That is why the bee has to recognize places. Second, reidentification is required for map construction. The bee will know where to place a new landmark on its map by noting its relation to old landmarks already on the map, so it must be able to reidentify these old landmarks. Similarly, to glue fragments of a broken object together you must be able to recognize when two fragments fit together, which requires identifying the same surface shape in the convex and in the concave. All the places where the glue goes in reassembling the world are properties or entities that need to be identified as the same presence in adjoining pieces. Turning the coin over, it can never be taken for granted that any animal recognizes when different ones of its intentional icons contain elements that map over the very same portion of the world. Thus the (apocryphal) snake's problem is that it has no idea that what it chases, strikes and swallows is the same object. Similarly, Oedipus had no idea that his thought "Mother" and his thought "Jocasta" had the same referent (Fodor's example). Discovering how to reidentify what is in fact the same through all the possible manifestations of it is surely not even possible!
A rough mapping of objects in one's immediate vicinity is helpful for an animal to have. But objects change and they come and go. No permanent mapping of them is possible in detail, or not without adding the dimension of time, and what is the practical use of a map of the past? We may be the only animals who have not found that question rhetorical. Other animals store away knowledge only of the stable structures in their environments, knowledge of "substances" roughly in the Aristotelian sense of that term. These include ordinary individuals, various stuffs such as water, wind, rain and rock, and natural kinds such as animal and plant species and their parts. Substances are distinguished by the fact that one can learn things about them on one encounter that will remain true on other occasions of meeting. Thus you learn about the sourness of one lemon by having tasted another lemon and you are ready for John's sourness on one day from having experienced it on days before. The trick is to discover how to locate and quickly reidentify these objective substances despite the variety of their manifestations to one's senses, to discover what kinds of stable knowledge can be gathered about each one, and most important, to discover how to tell this in advance. But I have given the details of this story elsewhere (Millikan 1997) and will not repeat them here.
It should be clear that no animal is going to map more than a small portion of the aspects of its objective world in this way. Nor need we think of the project as the progressive construction of a giant multi-dimensional model of the world in its head. Perhaps it puts only some fragments together and merely stores others, carefully preserving those aspects that mark known identities for it so it can join them up later. Compare having maps of all of a city, but only as various overlapping pieces that must be found and joined together. Compare also having the pieces of a picture puzzle but not having yet put it together. Clearly there is room here for more or less efficient methods of storage and retrieval of information so that parts relevant to one another can be found easily when one has gotten to a certain place needing them. The challenge of comparative psychology is partly to discover what gerrymandered fragments of the objective world different animals are capable of reconstructing or learning to reconstruct and what methods they use for storage and retrieval of the necessary information.
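The fragment-joining picture admits of a small sketch. Here the "glue" is a landmark identified as the same in two fragments; the sketch assumes, simplistically, that fragments already agree in orientation and scale, so a single shared landmark fixes the translation between their frames (all names and coordinates are invented):

```python
def merge_fragments(frag_a, frag_b):
    """Join two map fragments (landmark -> (x, y) in each fragment's own
    frame) by aligning them on a shared, reidentified landmark."""
    shared = set(frag_a) & set(frag_b)
    if not shared:
        return None  # nothing identified as the same; the pieces stay apart
    anchor = next(iter(shared))
    ax, ay = frag_a[anchor]
    bx, by = frag_b[anchor]
    dx, dy = ax - bx, ay - by  # offset carrying frame B into frame A
    merged = dict(frag_a)
    for name, (x, y) in frag_b.items():
        merged.setdefault(name, (x + dx, y + dy))
    return merged

frag_a = {"hive": (0, 0), "oak": (3, 4)}
frag_b = {"oak": (0, 0), "pond": (2, -1)}  # same oak, different frame
m = merge_fragments(frag_a, frag_b)        # pond lands at (5, 3)
```

Without the reidentified oak, the two fragments could not be combined at all; with it, everything on the second piece inherits a position on the first, which is the whole point of preserving marks of known identities.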

X. Popperian Creatures and the Mapping of Processes

Developing strategies for finding where the B-conditions lie by mapping where the objective constancies over time are liable to lie is a very good search technique. But some animals, notably we ourselves, also map world processes. Every moving animal actively uses causal processes. Following any affordance is engaging in some process the outcome of which is stable or predictable in a useful degree. But knowing how to, being wired to, engage in fruitful processes on propitious occasions is not knowing about these processes. But we, at least, do actually map processes: regularities in processes. We remember what turns into what, and what happens if you do what. In the simplest cases, we merely think ahead to what will happen if we utilize a beckoning affordance, and react to the anticipated outcome with an advance or a withdrawal. This process does not look ahead much further than when one constructs the back side of a three dimensional object in looking for affordances. But it contains the principle by which mental explorations potentially of exponentially increasing complexity are constructed.
We can think of the matter this way. When an animal acts on the world, transforming it in some way, what the animal does has a causal outcome, one that it may be able to anticipate in thought. Similarly, when an animal roams about, the direction it takes from a given place has a "spatial outcome", one that it can anticipate if it has a mental map in its head. Suppose then that the animal's goal is to arrive at a certain place. Its goal is marked on its mental map. It perceives the place where it now is, identifies this place on the map, and joining percept with map, heads straight to its goal. Now suppose instead that it has as its goal to be in a certain situation in its world, one it must reach not just spatially but causally. It wants, say, to be sheltered in a certain sort of house. It has a goal representation that icons this particular objective situation, and it is to aim for this situation in the causal, not just the spatial, order. How will it use its knowledge of what leads to what in the causal order to direct its aim so that it starts off in the right causal direction? The difficulty here is that unlike ordinary space, the logical space of possible causal outcomes in time is not a connected space with a definite geometry. It is a space in which possibilities diverge and then diverge again in infinite variety. There can be no analogue here of dead reckoning.
For an animal to use knowledge of causal outcomes in a sophisticated way to govern behavior would require it to become "Popperian." It would need to make trials and register successes and failures in its head, imagining one by one various alternative chains leading from its current situation to others until it hit on some or another causal route to its goal. An animal capable of this sort of icon production and use would be far removed from the bees.
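One minimal way to picture such trial-in-the-head is a breadth-first search over remembered action-outcome pairs. The states and actions below are invented for illustration only; the point is just that candidate chains are generated and tested against the goal one by one, with failures discarded at no cost in the world:

```python
from collections import deque

# Hypothetical causal knowledge: (state, action) -> resulting state.
transitions = {
    ("in_field", "gather_sticks"): "has_sticks",
    ("in_field", "gather_mud"): "has_mud",
    ("has_sticks", "stack_sticks"): "has_frame",
    ("has_frame", "gather_mud"): "framed_with_mud",
    ("framed_with_mud", "plaster"): "sheltered",
}

def plan(start, goal):
    """Generate and test in the head: imagine action chains one by one,
    dropping dead ends, until some chain reaches the goal situation."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, acts = frontier.popleft()
        if state == goal:
            return acts
        for (s, act), outcome in transitions.items():
            if s == state and outcome not in seen:
                seen.add(outcome)
                frontier.append((outcome, acts + [act]))
    return None  # no imagined chain reaches the goal

route = plan("in_field", "sheltered")
```

Note how the branching the text describes shows up directly: gathering mud first is a dead end that the search imagines and abandons, at no cost to the animal, before hitting on the chain that ends in shelter.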

XI. A Common Code

What eventually emerges in Homo sapiens is the ability to recognize and to map causal processes initiated either by the thinker or by extrinsic events, and the ability to represent a layout of ongoing events many of which occur in places at a remove from the thinker. The bee constructs a three dimensional space containing enduring objects. This map may have to be revised or updated quite frequently, but it need not represent a temporal dimension, or at least not one with absolute dates (Gallistel 1990). A human, on the other hand, constructs a four dimensional map of a dated world in progress, mapping both things that endure (substances, places) and also what happens, both in its own locale and in other places. Many of these facts are represented apart from any relevance known to the thinker's practical interests, and inferences are made from these facts to further facts of the same disinterested sort. Ultimately, of course, the point of all this cognitive activity is to join up at crucial points with perception so as to guide action. But the immediate aim is merely the efficient production of representations of more and more of the world.
Now the perceptual representations that guide immediate action need to be very rich in certain kinds of information, showing the organism's exact relations to many aspects of its current environment directly as they unfold. These icons need to have significantly variable structure that conforms closely to those variations in organism-environment relations that need to be immediately perceived. Also, they need to be constructed quickly and reliably, hence algorithmically. The job of the disinterested fact icons of cognition is not this, but rather easy participation in mediate inference processes. This job makes its own severe demands in that there is no way to specify in advance what kinds of inferences it may be useful for any given icon of this sort to participate in. Facts are collected for whatever, if anything, they may prove to be useful for. The ideal fact icon would be one that could be combined with any other fact icon whatever having an overlapping content, a potential middle term in common.
Thus it appears that the fact icons used for cognition should be cast in a uniform system of representation, whereas the icons of perception should not be. Nor is it just a "common notation" that is needed. The ways that the icons articulate world affairs that they map must be compatible. Pictorially, if the first premise of an inference is represented with a mental Venn diagram and the second with a mental sentence, no principled inference dispositions could apply to yield a conclusion. Similarly, one might suppose, if the information coming in through the various senses were not translated into something like a common medium for purposes of theoretical inference, it could not interact in a flexible way. Possibly this is the fundamental difference between cognition and perception.
In any event, an important question when studying the mental life of any fact-collecting species must concern the degree and the kind of interaction in inference that can occur among the varieties of fact icons it collects. Whether or not information can interact in inference does not depend on its "content" in the sense of truth conditions, but on the way that this content is articulated.

XII. Negation

An animal constructing maps of parts of the world at a remove from it clearly is at great risk of error. Compare generalizations made from experience to behaviors with generalizations made from experience to knowledge of idle facts. Dispositions to make practical generalizations are naturally bridled. Although the unsuccessful behaviors that result may not produce actual punishment in place of reward, they do waste time and energy. Thus the animal's behavior is naturally channeled into other courses. What kind of control is there on rampant generalization for theoretical inference?
At first the answer seems obvious. A primitive scientific method must be employed. The animal generalizes so as to expect certain things to be true, and then either makes observations systematically or just happens to observe things that either verify or falsify some of its conclusions. This happens often enough to keep its dispositions to generalize well enough in check. But there is an important link missing in this explanation. The link is the capacity to represent something negative, which is needed to represent contradiction, which is needed to recognize falsification.
None of the intentional icons that we have discussed were icons subject to negation. Consider, for example, bee dances. A bee dance represents where nectar is. There are no variations on bee dances that represent where nectar is not. No bee dance contradicts another bee dance, indeed, bee dances cannot even be contraries. If two dances show nectar in two different locations, if the bees are lucky there is indeed nectar in two different locations. In particular, it is obvious, yet important to note, that the failure of a bee to dance a dance showing there to be nectar at place p is not an icon showing there to be no nectar at place p. The mere absence of an icon showing a certain fact is not equivalent to the presence of an icon showing the negative of that fact.
This is straightforward enough, but now apply it to perception. Suppose that by theoretical inference the thinker arrives at a fact icon showing that, since birchbalm was applied yesterday, today the wound will be closed. And suppose that the wound is not in fact closed, and that the thinker is observing the wound. He does not perceive that the wound is closed. But from this it does not follow that he perceives that the wound is not closed. Assume now that he not only fails to perceive that the wound is closed but positively perceives that the wound is open. That is, he harbors an intentional icon of an open wound. Now a wound that is open cannot at the same time be closed. That is a fact, in some sense a necessary fact, about the world. But can there be an intentional icon that represents a wound as open without representing it as being not closed or, more generally perhaps, as being contrary to closed? That is the question we must ask here.
First, notice that just as one need not represent space with space or time with time, one need not represent contrariety with contrariety. The words "red" and "blue" are no more nor less contraries of one another than are the words "red" and "square." The physical forms that constitute two different bee dances are, of course, contraries of one another in that no bee can dance two different dances at once, but this contrariety does not correspond to a contrariety in what is represented. If contrariety in content between two representations is represented, presumably it will be represented by some relation or other between these representations, but not necessarily by the relation of contrariety. What we need to ask, then, is what it would be for contrariety to be represented by some relation between intentional icons.
As with the content of any other intentional fact icon, the relation between two icons that represents contrariety must be a relation that guides the thinker appropriately with regard to its content. How then would one be appropriately guided by representations that represent contrary facts? If contrary facts are represented this must be because one's methods of intentional icon production were faulty. These methods have led to error. To be guided appropriately by the appearance of contrariety would be to backtrack, making corrections in one's methods of icon production, for example, in one's ways of generalizing.
In sum, having beliefs that are in fact contradictory is one thing. Recognizing that they are contradictory and reacting appropriately is another thing entirely. For an animal to achieve the latter must require an important transformation of its inner representational system, namely, introduction of icons that explicitly represent contrariety. Introduction of explicit negation is even a step beyond this. Explicit negation is indefinite contrariety. The negative says that some contrary or other of this icon is true (Millikan 1984, chapter 14). It is a very sophisticated animal indeed that understands explicit negation.
