This book offers a comprehensive and broadly Rationalist theory of the mind which continually tests itself against experimental results and clinical data. Taking issue with Empiricists who believe that all knowledge arises from experience and that perception is a non-cognitive state, Norton Nelkin argues that perception is cognitive, constructive, and proposition-like. Further, as against Externalists who believe that our thoughts have meaning only insofar as they advert to the world outside our minds, he argues that meaning is "in the head." Finally, he offers an account of how we acquire some of our most basic concepts, including the concept of the self and that of other minds.
CAMBRIDGE STUDIES IN PHILOSOPHY
Consciousness and the origins of thought
CAMBRIDGE STUDIES IN PHILOSOPHY

General editor ERNEST SOSA

Advisory editors
JONATHAN DANCY University of Keele
GILBERT HARMAN Princeton University
FRANK JACKSON Australian National University
WILLIAM G. LYCAN University of North Carolina, Chapel Hill
SYDNEY SHOEMAKER Cornell University
JUDITH J. THOMSON Massachusetts Institute of Technology

RECENT TITLES
WILLIAM G. LYCAN Judgement and justification
GERALD DWORKIN The theory and practice of autonomy
MICHAEL TYE The metaphysics of mind
DAVID O. BRINK Moral realism and the foundations of ethics
W. D. HART Engines of the soul
PAUL K. MOSER Knowledge and evidence
D. M. ARMSTRONG A combinatorial theory of possibility
JOHN BISHOP Natural agency
CHRISTOPHER J. MALONEY The mundane matter of the mental language
MARK RICHARD Propositional attitudes
GERALD E. GAUS Value and justification
MARK HELLER The ontology of physical objects
JOHN BIGELOW AND ROBERT PARGETTER Science and necessity
FRANCIS SNARE Morals, motivation and convention
CHRISTOPHER S. HILL Sensations
JOHN HEIL The nature of true minds
CARL GINET On action
CONRAD JOHNSON Moral legislation
DAVID OWENS Causes and coincidences
ANDREW NEWMAN The physical basis of predication
MICHAEL JUBIEN Ontology, modality and the fallacy of reference
WARREN QUINN Morality and action
JOHN W. CARROLL Laws of nature
M. J. CRESSWELL Language in the world
JOSHUA HOFFMAN & GARY S. ROSENKRANTZ Substance among other categories
PAUL HELM Belief policies
NOAH LEMOS Intrinsic value
HENRY S. RICHARDSON Practical reasoning about final ends
ROBERT A. WILSON Cartesian psychology and physical minds
BARRY MAUND Colour
MICHAEL DEVITT Coming to our senses
ARDA DENKEL Object and property
E. J. LOWE Subjects of experience
Consciousness and the origins of thought

Norton Nelkin

CAMBRIDGE UNIVERSITY PRESS
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press
The Edinburgh Building, Cambridge CB2 2RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521564090

© Cambridge University Press 1996

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 1996
This digitally printed first paperback version 2007

A catalogue record for this publication is available from the British Library

Library of Congress Cataloguing in Publication data
Nelkin, Norton.
Consciousness and the origins of thought / Norton Nelkin.
p. cm. - (Cambridge studies in philosophy)
Includes bibliographical references and index.
ISBN 0 521 56409 3 (hardback)
1. Philosophy of mind. 2. Consciousness. I. Title. II. Series.
BD418.3.N45 1996
128'.2-dc20 95-46968 CIP

ISBN-13 978-0-521-56409-0 hardback
ISBN-10 0-521-56409-3 hardback
ISBN-13 978-0-521-03569-9 paperback
ISBN-10 0-521-03569-4 paperback
To Sue and to my parents, Henry and Ann Nelkin
. . . I do not believe that scientific progress is always best advanced by keeping an altogether open mind. It is often necessary to forget one's doubts and to follow the consequences of one's assumptions wherever they may lead - the great thing is not to be free of theoretical prejudices, but to have the right theoretical prejudices. (Weinberg 1984, 11)

It is very difficult to describe paths of thought where there are already many lines of thought laid down — your own or other people's — and not to get into one of the grooves. It is difficult to deviate from an old line of thought just a little. (Wittgenstein 1967, §349, 64e)

This belief is based in part on several of Tulving's admonitions concerning the proper approach to psychological issues that are highly familiar to his students and colleagues. First, the self-correcting nature of the scientific enterprise insures us that nothing much will be lost if the ideas put forward here turn out to be wrong (this can also be read as an excuse for speculation). Second, broad conceptual approaches are currently needed instead of premature formalism (I take this as an excuse for vagueness). Third, falsifiability is not the only criterion for a useful scientific idea (in other words, circularity can be excused). Fourth, all current ideas in psychology are wrong anyway, so why not give it a shot? (Schacter 1989, 362-63)

Like sin or poverty, the problem of consciousness will not go away, despite strenuous efforts to exorcise it. (Weiskrantz 1985, 15)
Contents

Preface
Introduction

PART ONE Phenomena
1 The senses
2 Phenomena
3 Pains
4 Phenomena reconsidered

PART TWO Consciousness
5 Consciousness: preliminaries
6 Consciousness: a theory
7 Consciousness: an appendix

PART THREE Apperception
8 Apperception
9 Selves
10 Things
11 Will

Concluding remarks
Bibliography
Index
Preface

The journey to this book has been a long, tortuous, often broken-off one. My interest in philosophy of mind began with my undergraduate beginnings in philosophy. Descartes' Dream Argument (although not then quite correctly understood) played an especially large role in my wanting to become a philosopher. By the time I finished graduate school, I had discovered, and come under the sway of, the later works of Wittgenstein with their anti-Cartesian outlook. These Wittgensteinian attitudes stayed with me through a long fallow period that followed graduate school. Still armed with my Wittgensteinian views, my work, beginning in 1984, rapidly began to gel. And a number of papers, beginning with "Pains and pain sensations" in 1986, have appeared in print since then.

But as I progressed along the route these papers, each in turn, seemed to set out for me, two things worthy of note occurred. First, my Wittgensteinian outlook began to fade as I progressed. While I still think that Wittgenstein was right about many of the things he said concerning sensations, more and more I have thought less and less of certain of his broader commitments, especially his anti-Cartesian ones. The upshot is that I have come full circle back to my undergraduate appreciation of a Cartesian theory of mind; and, indeed, this book represents a defense of Cartesianism (a version I call Scientific Cartesianism) against Wittgensteinian and post-Wittgensteinian attacks. Not that Descartes himself would approve of everything I defend: among other things, Scientific Cartesianism is anti-dualist. It is physicalist through and through. Along the way to this Cartesian turn, I rediscovered Kant (though it was more like discovering him for the first time). And while Kant's achievement is light years beyond mine, I am brazen enough to consider him my philosophical soulmate. At the same time, Scientific Cartesianism rejects anything like the phenomenal/noumenal distinction. So differences on important issues make my position less like Kant's than like Descartes'.

The second noteworthy thing that occurred as I traversed the route set for me by my papers was that, although written independently of each other, each paper was leading to another; and I finally realized that together they actually formed a view. But it took me some time to unpack the view from the papers. This book is a presentation of that extracted view and leans heavily on those previously written papers. Chapters 4, 7, and 10 have been written completely from scratch; each of the others leans on one or more of the papers, though no chapter is exactly like the paper or papers on which it leans. Much new thinking and much rewriting have gone into each chapter, thinking and rewriting that were guided by the overall conception. The result is that almost no chapter closely resembles the paper(s) that gave rise to it. The shift to Cartesianism is most apparent in the third part; but the earlier chapters are no less Cartesian.

Why the Cartesian shift? The answer to that question is the book itself. But two psychological motivators and one issue can be pointed to, which, individually and most especially collectively, primed me to accept the shift. I have a deep respect for the history of philosophy. Those philosophers whose works are our classics were incredibly smart people. Because they were so smart, it is unlikely that they would make large errors of logic. As a result, I have an abiding suspicion of claims that historical philosophical positions are a priori false — or, worse, meaningless. Since many Wittgensteinians make exactly such claims about skepticism and its Cartesian underpinnings, I was never fully comfortable with Wittgensteinianism. Cartesianism may be wrong (and skepticism almost certainly is); but if it is, it is because some other theory is better — not because Cartesianism is a priori false or inconceivable.
The second psychological motivator is a constitutional bent towards physicalism, especially towards a physicalism that is a scientific physicalism. Part of this motivation has to do with my views of what philosophy is. These views are discussed throughout the book, so I won't rehearse them here. Of course, this same bent accounted for my initial turning away from Descartes; but that was because I couldn't then see past his dualism. Moreover, while Wittgenstein and his followers may often be thought of as physicalists, their physicalism is also often a sort that is nonscientific, or, more strongly, even anti-scientific. I think Wittgenstein himself, in his later works, was anti-scientific. He believed that science's rationalism had led to the technological horrors of destruction that marked World War I (and, subsequently, World War II). And his later works reflect a kind of anti-scientism, and a beginning of deconstructionism — a view that, in its extreme version, I constitutionally abhor. My own view, to the contrary, is that the Rationalism of the seventeenth and eighteenth centuries, though sometimes put to evil uses, has been on balance the great liberating force in all our lives. So, given these leanings, I was once more an uneasy Wittgensteinian, ready — and eager — to find a way back into the Rationalist fold (though I don't know that I could have always described my qualms in this way).

The issue that turned the tide for me was that of intentionality. As this book shows, I still have only the loosest of grips on the solution to any of the problems surrounding this issue; but I have become quite convinced that Scientific Cartesianism is considerably more likely to offer a solution than are Wittgenstein's later views — or views influenced by his later views. This book is a beginning in the attempt to work out a Cartesian solution. Chapter 9 makes a little progress. But there is a long way to go. My hope is that this book encourages others to finish (or at least further — finishing tasks of this kind may never be possible) what I have begun.

I wish to thank the National Endowment for the Humanities for providing me with a 1993 Fellowship for College Teachers and Independent Scholars. Most of the work on this book was completed with that support. I would also like to thank the University of New Orleans for granting me several reduced teaching loads and for two sabbatical leaves, which allowed me to write several of the relevant papers. During the second sabbatical leave, a beginning to this book was made.

Many individuals need to be thanked.
They encouraged and abetted me on my journey, even though many of them thought — and still think — I ought to have trod other roads instead. A large number of people helped me with the papers that form the basis for this book; and while they have been thanked elsewhere, their contribution to the book remains at least as great as it was to the papers. Others have influenced various chapters of this book. I list them alphabetically (and hope that I have not omitted anyone): Gerianne Alexander, Kent Bach, Bill Bechtel, Robert Berman, Radu Bogdan, Bruce Brower, Ronna Burger, Keith Butler, Dave Chalmers, Andy Clark, Martin Davies, Dan Dennett, Graeme Forbes, Roger Gibson, Irwin Goldstein, Diane Gottlieb, George Graham, Harvey Green, Sam Guttenplan, Richard Hall, Larry Hardin, Ed Johnson, Eric Lormand, Yuval Lurie, Tony Marcel, Carolyn Morillo, Tom Natsoulas, Dana Nelkin, Jack Odell, Danny Povinelli, Sam Rickless, Mark Rollins, Jim Russell, Carol Slater, Alan Soble, Lynn Stephens, Petra Stoerig, Jim Stone, Alan Sussman, Jaap van Brakel, and Bob Van Gulick. Many thanks also to Hilary Gaskin, an anonymous reader, John Heil, and David Sanders of Cambridge University Press.

Tom Natsoulas and Petra Stoerig each read a large chunk of this book, and helped me a great deal to improve the second part. Eric Lormand generously supplied comments on the first eight chapters. Larry Hardin and Radu Bogdan provided early — and continued — encouragement for my work. Bogdan, Keith Butler, Dana Nelkin, Sam Rickless, and Carol Slater have all read a version of this book from cover to cover. Not only have their comments been of enormous use to me in getting straight on issues and in clarifying them both for myself and for my readers, but their encouragement has been unbelievably kind, and certainly helped me through times of self-doubt. I owe many, many thanks to Martin Davies, who not only supplied many useful comments, but who encouraged my work in a way beyond what he probably realizes and beyond my ability to thank him sufficiently. Without his encouragement, this book might never have been written. Most especially, thanks are owed (and owed, and owed) to my colleagues, Ed Johnson and Carolyn Morillo. Each is a sine qua non of this book. They have each read virtually every draft of everything I have written since 1984, putting in only slightly less time on my work than I have. Not only have they read every draft, but their comments on any one draft were usually the cause of yet another draft. I cannot thank them enough.
I am lucky to have friends such as all these. Throughout, I've also had the loving support of my family. My children — Dana, Karen, Benjamin, and Sarah — gave me space and time to work out my ideas. The two older ones, along with my son-in-law, Sam Rickless, have been supportive in a thousand other ways as well, including making many useful suggestions concerning the manuscript. The two younger children are too young to know how much peace and quiet have meant to my work, but they have granted them anyway.

This book is dedicated to three people. My parents may not have sent me to college to become a philosopher, but they have been unquestioningly (well, almost) supportive from the beginning. Their love has meant a good deal to me. My wife, Sue, is the other dedicatee of this book. She has lent her love and support through both the brightest and the darkest days of my work (even if I didn't always acknowledge it, or even recognize it, at the time) — and in many difficult times beyond my work moods. She has bent over backwards to "help" Benjamin and Sarah give me peace and quiet. Most of all, she has made me feel good about myself. The close temporal pairing of my productive work life and our life together is surely no accident.

Note

After a long battle with cancer, Norton Nelkin died on April 25, 1995, a few days after completing the final draft of this book.
Introduction

The great war between Cartesian Rationalism and Empiricism has been fought on several fronts. Empiricists believe that all concepts, meanings, and knowledge arise from experience, while Cartesian Rationalists defend the innateness of at least some concepts, meanings, and knowledge. Empiricists also usually believe that perception is a noncognitive state that serves as a foundation for cognitive states, while Cartesian Rationalists believe that perception is itself cognitive. Undoubtedly, there are other differences between Empiricism and Cartesian Rationalism, but these are the salient ones for the purposes of this book.

In the seventeenth and eighteenth centuries, the battle was mainly over the issue of whether perception is constructive, proposition-like, and cognitive, on the one hand, or passive, image-like, and often noncognitive, on the other. Rationalists (as did Descartes in his Second Meditation discussion of the wax) defended the former while many Empiricists have argued for the latter position. Indeed, the British Empiricists, beginning with Locke and culminating in Hume, argued that perception is basically a phenomenal state, totally passive and unconstructed. The British Empiricists were, however, Cartesian in one respect: they accepted Descartes' Internalism, his view that contents (including perceptual contents) are wholly in the mind. They simply believed that phenomena are the bearers of content, while Descartes thought that only proposition-like states are bearers of content.[1]

[1] The evidence for this claim is Descartes' insistence that perception is judgment (1642/1986, 22). And Descartes saw concepts as constituents of judgments, that is, as something like words. He quite explicitly rejected the view that they are images; see his discussion of the chiliagon (1642/1986, 50-51).

In the twentieth century, the debate shifted significantly, though these early differences remain large. The new Empiricist move was to question whether we had to worry about what was in the mind at all. Behaviorism, for instance, is deeply anti-Rationalist, and anti-Internalist on this issue. Behaviorists are Empiricists insofar as they share the British Empiricist views on innateness and on the noncognitive status of perception. After all, if we don't even have to worry about what is in the mind at all, we certainly don't have to worry about whether what is in the mind is passive or constructed, nor about whether internal perception is contentful. And, according to Behaviorists, all concepts, meanings, and knowledge (if there are concepts and meanings at all) are learned. None is innate. Even as Behaviorism has weakened its grip somewhat, more cautious versions of Empiricism have emerged, retaining the anti-Internalism of Behaviorism. Perhaps the least radical form is that which says that contents, while not exactly "in the head," are not exactly outside the head either: meaning is a relation between states of the head and the external world. States of the mind/brain have content only by adverting to the actual external world. This form of Empiricism, one version of a position called "Externalism," is widely held today, as are somewhat more radical versions of Externalism lying on a scale between it and Behaviorism.[2]

[2] I borrow the useful notion of "adverting to" from Martin Davies (in conversation). It is meant to characterize all forms of Externalism. Here is a quote from Davies (Forthcoming), more fully spelling out this notion: According to Externalism about the mind, the mental natures of at least some of a person's or animal's mental states (and events) are such that there is a necessary or deep individuative relation between the individual's being in states of those kinds and the individual's physical or social environments. I take this to mean that the most fundamental philosophical account of what it is for a person or animal to be in the mental states in question does advert to that individual's physical or social environment, and not only to what is going on within the spatial and temporal boundaries of the creature. When I use the "advert to" expression, then, it is in this Davies-inspired sense.

The present book is unfashionably Rationalist and Internalist on these issues. In the first two parts, I argue that perception is cognitive, constructive, and proposition-like, and that phenomenal states play a role smaller than and different from that credited by many Empiricists. Although the focus of the attack on Rationalism has shifted in the twentieth century, criticisms similar to those made in the seventeenth and eighteenth centuries remain prominent in the work of many contemporary philosophers; and the issues relevant to these attacks are far from settled. So it is important to put these earlier British Empiricist criticisms to rest.

The third part of this book will challenge the twentieth-century attacks and defend the view that contents (including perceptual contents) are determined — as contents — entirely within the head. Adverting to the external world is not a necessary condition of there being mental content — for any mental content. While reference may be a relation between what is in the head and the external world, and may require adverting to the actual world, contents themselves are entirely in the head. For Cartesian Rationalists like me, solipsism is conceivable (although false). Even the idea of innate concepts — and, so, innate content — will be defended. I suspect that the fear of being somehow "unscientific" lies behind the wish for Externalism (or even Behaviorism) to be true. But Cartesian Rationalism and its accompanying Internalism are in fact both compatible with science as it is actually practiced. If Rationalism/Internalism is true — and I think psychological facts make a compelling case that it is — then science must deal with the way things actually are. We cannot close our eyes to the truth and expect to do good science. In light of this book's goals and basis, the theory of mind to be presented is best labeled "Scientific Cartesianism." The terms "theory," "Cartesianism," and "Scientific" all stand in need of clarification, and are briefly treated in sections I and II. An overview of the chapters is provided in section III.
I

1. Philosophical theories are unlike scientific ones. Scientific theories ask questions in circumstances where there are agreed-upon methods for answering the questions and where the answers themselves are generally agreed upon. Philosophical theories, in contrast, are forerunners of scientific theory: they attempt to model the known data in ways that allow those data to be seen from a new perspective, a perspective that promotes the development of genuine scientific theory. Philosophical theories are, thus, proto-theories. As such, they are useful precisely in areas where no large-scale scientific theory exists. At present, that is exactly the state psychology is in. This book is aimed, then, not only at philosophers, but also at those practitioners of what is sometimes called cognitive neuroscience. And the book will be successful if it leads those practitioners to regard the present data differently and, thereby, leads them onto research paths that, in the end, produce scientific theory.
2. Developing proto-theory has been a central project (if not the central project) for philosophy throughout its history. It is because of this role that so many think of philosophy as "foundational" vis-a-vis the sciences. But this kind of foundationalism is not, as it is sometimes misunderstood to be, an epistemological one of justifying the sciences. It is, instead, an ontological one: that of making science possible. Of course, we learn from the history of philosophy that most attempts to play this role are failures. Few proto-theories ever give rise to scientific theory. Failures outnumber successes by several orders of magnitude. And the repeated failure makes many — including many professional philosophers — despair of the philosophical enterprise, leading them to deny altogether its foundational nature.[3]

[3] Notable examples include Wittgenstein 1953, Rorty 1982.

3. But this despair is misconceived. First, there are successes, although these are often hidden from view by their very success. Successful proto-theory, refined and redirected, turns into theory over time (natural philosophy becomes physics, for one example); but the proto-theory that made the scientific theory possible is masked in the finished product because of the relatively slow pace of the refinements and redirections. Second, the numerous failures, the various futile attempts to answer the same questions — over and over — while employing slightly different models, the appearance of running into intellectual wall after intellectual wall are one and all signs of a healthy philosophizing, not of a sick and dying one. While proto-theories lack the stature of scientific theories, they are nevertheless necessary for the latter. Yet, they are much more likely to fail, for there are far fewer footholds for their creators. Conceiving of the nature of the philosophical enterprise as a kind of proto-theorizing motivates the content, methods, and forms of argument throughout this book.

II

4. Labeling the view "Scientific Cartesianism" has its risks, for when it comes to theories of mind, Descartes is probably most remembered for his dualism of mind and body. Yet, Scientific Cartesianism rejects dualism. It is physicalist through and through. So why the label? For at least three reasons.
5. (1) Descartes may be most remembered for his dualism, but the basis for his status as the "father of modern philosophy" lies elsewhere: in his proposition-like, representational view of the mind. With the introduction of a deeply representational mind in the Dream Argument of the First Meditation, Descartes turned metaphysics, and our thinking about the mind, upside down. He created a genuine revolution. Understanding the world, Descartes argued, is an inside-out affair. We first have epistemic access only to the contents of our own minds. And if we want thoroughly to understand the external world, we have to understand our conceptions of it. And understanding those conceptions means understanding the nature of the mind itself — only then can we begin to understand the relation of those representations to the external world they represent. Descartes, moreover, was thoroughly Internalist about those mental contents: contents, for a Cartesian Internalist, are completely "inside the head." They are not constituted by any relation to the external world. The external world in fact may help causally determine mental contents, but that causal relationship is a contingent one. The same contents could in principle arise without there being an external world at all. This book, especially in its third part, aims to restore Cartesianism/Internalism to its previous prominence and to defend it against Externalist attacks. While we have learned a great deal from the anti-Internalist criticisms — indeed, many of the arguments found in the first two parts owe much to Wittgenstein — there are serious flaws in the conclusions often drawn from those same criticisms.

6. (2) Another respect in which the theory of this book may be called Cartesian is its agreement with Descartes' realization that the cognitive and propositional, rather than the phenomenal, are of first importance in human (and nonhuman) mental life. Descartes' successors — most notably the British Empiricists, but even to a large degree Kant, all of whom were as thoroughly Internalist as Descartes himself — failed to take this aspect of Cartesianism seriously enough. The result was their belief that the mind is primarily the seat of phenomenal experience, that even cognition is either constituted by phenomenal experience or ranges over it; and that view made the central Cartesian project look much more dubious than it is. The first two parts of this book consist of arguments aimed at restoring Descartes' own priorities. I argue that phenomenal states have been overemphasized and play a lesser role in our lives than Descartes' successors led us to believe. The aim of these chapters is not to reject phenomenal states. They exist. It is, rather, to find their rightful — albeit more narrow — place in our lives.

7. (3) Like Descartes' own theory, Scientific Cartesianism takes skepticism seriously. I agree with Descartes that skepticism cannot merely be dismissed. It is a possible philosophical stance, and its very possibility does much to illuminate the nature of minds. Perhaps this motivation is the result of confusing philosophy with psychology; but perhaps instead the two are not at this stage of theorizing so easily separated. However, like Descartes, although for considerably different reasons, I argue that we have no reasons to think that skepticism — that which claims that our beliefs are unjustified — is true, and even have good reasons to think it is in fact false.[4] Since much of the opposition to Cartesianism appears to arise because Descartes' own defense of it (and those of his successors as well) made skepticism seem not only possible but all too plausible, the argument that skepticism is false, placed on a basis quite different from Descartes' own, may help to soften that opposition — perhaps altogether erase it.

[4] On the other hand, I think that a skepticism that says that we are unable to know anything is quite possibly correct. That is, the best we can ever have are theories. Here, I depart from Descartes, who took himself to defeat both sorts of skepticism.

8. For all three reasons, the theory presented in these pages is legitimately labeled "Cartesianism." The modifier, "Scientific," is appropriate because the Cartesianism of this book has its roots in empirical experimentation and clinical data; furthermore, a key project of this book is to show that Cartesianism so conceived is compatible with experiment and data, provides a picture that better fits those experimental results and clinical data than does any rival theory of mind, and shows the way to novel and useful research.

III

9. This book divides into three parts, the first of which is called "Phenomena." British Empiricism is partly recognized by its emphasis on phenomenal states, which have been variously claimed to be both prior to and central to cognition. (One can, for the moment, think of
phenomenal states as sensations, though that is an oversimplification; sensations will be shown to be complex states - as will phenomenal states themselves.) I argue that phenomena play a lesser role in our lives than the British Empiricists led us to believe. The first two chapters use both empirical data and theoretical argument to show that perception is primarily a cognitive, proposition-like state rather than a phenomenal one. In chapter 1,1 show that judgments, not phenomena, are central to perception, even central to identifying and categorizing the senses. Chapter 2 develops this theme. Phenomena are shown to be inadequate for accomplishing the goal they are often used for: explaining how we acquire perceptual concepts, like nred"1 and •"square"1. Once we see why phenomena fail in the task set for them, we are also able to see that their other relations to perception are considerably more convoluted than Empiricism has claimed. Chapter 3 argues that even pains, those seeming paradigms of phenomenal states, are complex states, requiring a judgment that evaluates the phenomenal state: no judgment, no pain. No set of phenomena forms a natural kind, pain phenomena. Anticipating the material of the middle and last parts of the book, this chapter maintains that apperception 5 is present in animals fairly far "down" the phylogenetic scale namely, in all those that feel pain. The upshot is that even nonhuman animals are more "cognitive" than we often take them to be. Having spent the first three chapters downplaying the role of phenomena in perception (at least when phenomena are considered as qualitative states), in chapter 4 I discuss reasons why phenomenal states have seemed so important. I claim that phenomena have two important features: their qualitativeness and their representationality. The latter is a reason they may legitimately be thought of as cognitively important to us. 
The chapter identifies phenomena as composing a subset of neurologically realized image-like representation states; and a discussion of the distinction between image-like representation and proposition-like representation follows. This distinction is found to involve another distinction: that between information and content. Two kinds of representations exist — aspectualized ones, which have content, and unaspectualized ones, which do not. Phenomena are best considered to be unaspectualized representations — as are all image-like representations. These distinctions prove useful in ensuing chapters, while also deepening understanding of previous ones. Although the major distinction is labeled as "image-like" versus "proposition-like" representation, the labels themselves are not defended. For the purposes of this book, the distinction can be drawn without taking sides in the debate between language-of-thought theories and their critics. What is important is that different modes of representation exist, and much can be said that is interesting and important about each of them. The main purpose of this chapter, however, is to show that at least six different, and plausible, "information" theories of perception are compatible with the claims of the first three chapters.

5. I borrow this term from Leibniz (1714/1989, 208), who used it to make distinctions similar to those I wish to make (see Part Two). In earlier work (see bibliography), I used the word "introspection" for the same purposes; but this latter word carries so much baggage with it that I have abandoned it in favor of "apperception," which carries a lighter load. Apperception is explained at several places in the book, most extensively in chapter 8. I will reserve "introspection" for a purposive, considered apperceptive investigation of one's own mental states.

10. To continue defending Cartesian Rationalism against British Empiricist views, the second part of this book is concerned with consciousness. Chapters 5 and 6 argue that rather than naming a simple, indivisible state, as it is often portrayed to do, "consciousness" names three separable (and often in fact dissociated) states: phenomenal consciousness, propositional-attitude consciousness (proposition-like representations of the world), and apperceptive consciousness (proposition-like representations of either of the former states — a kind of second-order awareness). I argue that while these three states often occur together, as in normal perceptual experience, failing to understand them as dissociable leads to important theoretical mistakes. Chapter 5 investigates several claims as to what consciousness "really" is, including the claims that consciousness is awareness, consciousness is apperception, and consciousness is a phenomenal state.
Each claim is shown to be lacking in one way or another. The conclusions of Part One are reinforced by demonstrating that phenomenal states have little to do with conscious thinking. Thinking, unlike phenomena, it is argued, involves proposition-like representations; and so does our apperceptive awareness of both thinking and phenomena. Thinking, believing, hoping, fearing, desiring, and so forth - the "propositional attitudes" - and apperception are dissociable from phenomenality. Descartes' discussion of the chiliagon, in his Sixth Meditation (Descartes 1642/1986, 50-51), was lost sight of in the work of his successors.
In chapter 6, I argue in a more systematic fashion for the dissociability of propositional-attitude consciousness from both phenomenality and apperception, and for the — perhaps more surprising — dissociability of phenomenality from both of the others. Since apperception is a second-order awareness of the others, it obviously cannot occur independently of both of the others; but I show that it requires only one of the others at any one moment. Once we realize the possibility of these dissociations, we recognize that many of our other beliefs about consciousness are in error. The conclusion drawn is that all three previous attempts failed to find a complete definition of "consciousness" because there is no one state of consciousness — not because "consciousness" is an empty term, but because there are at least three separable, and often in fact dissociated, states, each of which is called "consciousness." The states we are most likely to think of as conscious — normal perceptual states, for instance — are not simple. Rather, they are complex states, amalgams of these three different kinds of consciousness.

A major rival to my view, one more congenial to British Empiricism, is the hypothesis that there exists a self-reflective state that is noncomposite and indivisible, which somehow incorporates all the salient features. (This view is defended by Natsoulas, Searle, Nagel, and McGinn, among others.) I argue that the dissociations show such a state to be both unnecessary (because complexes of states will explain whatever can be explained by this state, which would be in addition to the others) and unlikely (because it would be miraculous if a single state could have all the salient properties). Chapter 7 clarifies this theory of triple consciousness further and sets it in an historical context.
An interesting result is that several of the historical claims about consciousness — made by Descartes and Leibniz, say — that are often held up as examples of silly philosophers' claims turn out to be near the truth. Only if we make the mistake of taking "consciousness" to name a noncomposite, indivisible state do their claims appear so "obviously false."

11. The results of the first two parts may appear as congenial to many twentieth-century anti-Cartesian views as they are to Scientific Cartesianism. However, Part Three, "Apperception," contains a sustained argument showing that these results are not only compatible with Cartesianism, but even lead in a compelling way to it. In Part Two, apperception is shown to be only one sort of consciousness. At the same time, it seems to be the sort most intimately tied to our concept of ourselves as Lockean persons. Because of its apparent importance, two questions arise about apperception: Why do we have it and why does it seem so important to us? It is in answering these questions that Scientific Cartesianism emerges.

In the first two parts, judgments were the focus. In Part Three, the focus turns to their constituents: concepts. In chapter 8, it is shown that it is reasonable to believe that apperception plays an ineliminable role in concept formation and possession, at least for our concepts of those states that philosophers commonly call the propositional attitudes. In spelling out this account, two somewhat prominent anti-Rationalist views of the mental are shown to be mistaken: (1) Instrumentalism, which holds that no mental states exist, mental-state concepts just providing a convenient, rather than an ontologically sound, way of talking about human behavior; (2) Wholism, which claims that mental states, even if real, are states of the whole person simpliciter, rather than being states of the person in virtue of being states of a part (the brain) of the person. In defending Scientific Cartesianism against these views, the nature of apperception is made explicit and its use in concept formation is somewhat clarified. Full clarification is shown to depend on scientific advances. Speculations on how that science might go conclude the chapter. Comparisons and contrasts to Dennett's views (Dennett 1987, 1988a, 1991a, 1991b) are used throughout the chapter to highlight and clarify my own.

While Descartes is the historical figure whose philosophy is the real progenitor of the present theory, the two greatest influences on me have actually been Wittgenstein and Kant.
Despite the fact that I have learned more from them than from any other philosophers, they are ultimately my major targets, as their ideas — in different ways — pose the greatest threats to Scientific Cartesianism.

Chapter 9, perhaps the key chapter of the book, has Wittgenstein's later philosophy as its focus. In this chapter, Externalist views are critically considered. The aim is to establish that Externalist theories of representation are less plausible than Scientific Cartesianism. Using the Argument from Analogy as a backdrop, I both argue directly against Externalism and defend Internalism against criticisms. Using findings from developmental psychology, it is shown that Scientific Cartesianism best accounts, not only for concept content, but also for concept formation. No rival theory is so well able to furnish both sorts of explanation. It is claimed that if an in-control/not-in-control distinction is central and primitive — and empirical evidence supports this claim — then it is understandable how organisms acquire concepts of an external world, of themselves, and of other thinking and feeling beings, even though organisms have, in a relevant sense, no immediate epistemic access to anything but their own internal, mental states. Even a brain in a vat (or a disembodied mind) could — in principle — acquire these same concepts. On the other hand, the evidence that makes Cartesianism plausible also counts against there being such things as brains in vats (or disembodied minds) possessing our concepts. Skepticism, though possible, is shown to be in fact untenable.

While chapter 9 concentrates on explaining how we acquire concepts of our self and of other selves, chapter 10 explores the Kantian question of how organisms develop, from the inside, concepts of a spatio-temporal world of objects. By and large, Scientific Cartesianism tells the same stories of concept formation as do many Externalisms. For purposes of a scientific theory of the mind, the Internalism/Externalism dispute matters less than philosophers have thought. It does matter some, however; and so it is important to be on the right side. Moreover, the dispute matters a great deal for some more purely philosophical questions. Once again, Scientific Cartesianism is shown to be on the right side of the dispute. The Kantian project is not rejected, only its reliance on a priori theorizing rather than on scientific experimentation, as well as Kant's correlative belief in a phenomenal world — a world based in phenomenal experience.

Chapter 11 develops the notion of a sense of control, claimed in chapter 9 to be basic to concept formation, and traces out its relation to the free will problem and to our lives in general.
It is argued that what we discover about the will, by taking the relevant starting point, gives a good deal of insight into why the will matters so much in our lives and provides a possible solution to the free will problem. The conclusion of that chapter is that thinking, feeling human beings are bodily things, and only bodily things. Perhaps that fact explains why science matters so much to our lives.
PART ONE
Phenomena
1

The senses

This chapter begins the project of showing that perceptual states — and so, almost certainly, higher-order states — are primarily cognitive, constructive, and proposition-like, and not primarily phenomenal and passive.1 Many contemporary philosophers would agree that phenomena are unimportant, not even needing to be considered in discussions of perception and other mental states (see Dennett 1988b, 1991b, and Fodor 1975, for instance); but other contemporary philosophers continue to defend the importance of phenomenal states (Jackson 1977; Perkins 1983; Hardin 1988; Boghossian and Velleman 1991; Peacocke 1983; Nagel 1974; Chalmers forthcoming, just to name a few). Since phenomenal states do occur, it is hard to see how discussion of them can be abandoned altogether. Dennett (1988b, 1991b, 1991c) tries to show why it can be abandoned, but the story he tells isn't a very good one — or at least not good enough. So discussion of phenomena is both useful and necessary, especially since ground can be gained by first thinking about them, for those who find them significant are onto something of real importance. It is just something other than what they take it to be.

One reason for the reluctance to deal with phenomenal states is the difficulty in saying what they are. I will take them, at least for the first three chapters, to be mental states that have a certain kind of experiential "feel" to them. They have a certain quality to them, which is available only to the one who possesses the state. They are the states that best fit Nagel's (1974) slogan that when a thing is in one of these states, there is something it is like to be the thing.2 I put "feel" in shudder quotes because I mean it to stand for all sorts
1. This chapter is based largely on Nelkin 1990, though much rewritten, including some important terminological (though not merely terminological) changes.

2. Because of these qualitative properties, philosophers often call phenomena "qualia."
of phenomena.3 As such, we are all aware that the phenomenal state in seeing red just is different from that in seeing yellow; or that in seeing square from that in seeing round; or that in feeling heat from feeling cold; or in smelling a rose from smelling rotten eggs; or in tasting a salty flavor from tasting a sweet one; or in hearing the purring of a cat from hearing it meow; or in feeling pain from feeling one's skin lightly stroked. At least I think we are aware of (at least some of) these differences. And I will take it that we are.

Phenomena are private - at least in the sense that they are identified relative to the single person whose experiences they are. In this respect, they are like broken arms (Sarah cannot have Benjamin's broken arm). But they are unlike broken arms in that, in some sense, they are directly experienceable only by the individual whose phenomena they are. While both Sarah and Benjamin can see Benjamin's broken arm, the phenomena brought about by each looking at the arm are different for each. Each can only infer (in some sense) what the other's phenomena might be.4

For the moment, this description of phenomena is the best I can do. There is, to be sure, a reliance, not so much on shared intuition, but on shared (types of) experience. Later (in chapter 4), we will see that there is perhaps more to phenomena than this qualitative aspect. But the qualitative aspect of phenomena is so salient in our experience that it has misled us - even its detractors - in philosophically costly ways. So it is best to make the case that phenomena are virtually worthless before entertaining the possibility that they may be important after all. We begin the former project by considering fairly modest issues. But by doing so, we arrive at a conclusion at the end of this chapter that is far from modest.
The issues at hand concern our partitioning of the senses: as visual, aural, gustatory, olfactory, and haptic.5 Two sorts of questions need to be distinguished: (1) By what means have people partitioned the senses and come to believe that there are five of them? That is, how did people acquire the concepts of the five senses? (2) Having recognized the senses, by what defining criteria should the senses be distinguished? The criteria by which we come to have concepts of them are, at least in principle, independent of their defining criteria, just as the criteria by which we first picked out gold (the color, shininess, malleableness, and so forth) are not directly involved in the defining criteria for gold (the atomic structure). Moreover, our initial distinctions may not prove to be the best distinctions.

The two sorts of questions, then, are importantly different. The first is a factual question: it asks how we, in fact, distinguished the senses. The second is a theoretical question: it asks how we should define the senses so as to make them scientifically useful concepts. More metaphysically, the second question asks what is the real nature of the senses. We can go a long way toward answering both sorts of question. Section I considers the first, while Section II deals with the second.

3. I often shorten "phenomenal states" to "phenomena." This shortcut is risky. "Phenomena" has object-connotations that the longer expression does not. I mean the shorter only as a shortening of the longer.

4. Although the word "infer" may suggest that it is epistemological privacy at issue here (only Benjamin can know what phenomena he has), I want to distance myself from that suggestion - and from any issues surrounding it (at least for as long as I can).

5. I use "haptic" in an unusually broad sense, to include any notion of bodily feeling, whether of touch, kinaesthetic feelings, tingles, itches, and so on (but not pain - see chapter 3).
1. How do we discover the senses?

One possibility is that each of the senses is differentiated by the kind of external properties to which it is especially sensitive. While this claim may have some truth to it (see footnote 17), it is not the whole truth. The main problem is that some properties affect more than one sense. For instance, the primary qualities are perceivable by both sight and touch.6 Moreover, at least some properties — distance and location, for example — are perceivable by both of these and by hearing as well. If different senses are sensitive to the same properties, then these properties cannot be used to distinguish the senses from each other. Two replies can be made to this objection, but neither is convincing.

(1) "It may be true that these shared properties do not distinguish the senses from each other; but there are others, the secondary qualities, by which we distinguish the senses, for each of the secondary qualities is available only to a single sense." Even if this claim be true, it would still leave unexplained how we distinguish visually processing the primary qualities from haptically processing them, aurally processing how far away something is from visually or haptically processing the distance, and the like. Moreover, it would leave unexplained how we can feel more than one secondary quality. In ordinary parlance, we say we feel such diverse properties as hardness, heat, and squishiness. Since more than one secondary quality is ascribed to the haptic sense, the question of why we have individuated only one sense here rather than several remains unanswered.

(2) "The objection that the primary qualities are shared between visual and haptic processes (and perhaps, in part, with aural processes as well) relies on too gross a distinction. There are properties of those properties that are not themselves shared among the senses. The reflectance properties of the primary qualities account for their being visually processed. Other sorts of properties of the primary qualities account for their being haptically processed." But this reply, too, fails to account for the undifferentiated lumping of properties we say are felt. That we feel heat, texture, motion, and hardness, just to consider a few properties, does not seem to be the result of any common property of properties shared among these in the way the reflectance properties of an object might account for our categorization of visual processes. Once more, we do not seem to generate any explanation of why we talk about one "feeling" sense rather than several. Perhaps it is yet more telling that the "reflectance property" reply itself fails to solve any problems: reflectance properties can also affect how we feel, as anyone who has felt the heat of the sun knows. So this reply misses the mark.

6. The primary qualities are generally taken by philosophers to include extension, size, shape, position, motion, and sometimes solidity (or impenetrability), texture, and hardness. The secondary qualities are then made to include color, sound, taste, smell, heat and cold. Texture and hardness are most usually included among the secondary qualities. I am not defending the distinction, only spelling out distinctions expressed in the philosophical literature.

2. A second possibility for the desired criterion is that we divide perception into the senses because different parts of the body are differentially affected. Because we separate out the eyes from the ears as sense organs, we distinguish seeing from hearing.
Because we conceive of the skin surface and underlying flesh as a single organ, we say we feel heat, texture, hardness, and motion. If the eyes, ears, nose, mouth, and skin were not such distinguishable parts of the body, we would never have distinguished the different senses.

Whatever truths are contained in the organ criterion, it surely does not explain why we believe there to be more than one sense. There are many distinguishable parts of the body - hands, feet, legs, knees, torsos, backs, hair, and so on - that we do not take to be independent sense organs. We must be using a further criterion to distinguish those parts of the body that we designate as sense organs from those not so designated. But this further criterion is surely the criterion we were after in the first place, for this distinction is just the original distinction in a new guise.

3. Well, how and why do we distinguish sense organs in general and one sense organ from another? Consider facing an object with one's eyes open and with one's eyes closed. What difference results in our believing the eyes to be a sense organ? Surely, when our eyes are open we experience effects we don't experience when our eyes are closed. Which effects? One answer is that when our eyes are open we experience phenomenal states of a kind not generally experienced when our eyes are closed. Moreover, these phenomena are also quite different in kind from the phenomena we are deprived of if our ears are stopped up or if patches of skin are desensitized or if our noses are stopped up with a cold or if our tongues are damaged or removed. So perhaps phenomenal differences provide the basis on which we differentiate sense organs and arrive at the idea of the senses.

The phenomenon criterion gives rise to a couple of correlative points that I find congenial (with important reservations): (1) It would explain why we talk of "visual experiences" even when sense organs are not being stimulated, as in hallucinations. The internal processes, the phenomena, would distinguish these experiences as visual. If we want to explain human behavior, then — Behaviorists aside — we will want to explain it partly on the basis of what is going on "inside" the organism. And if we want to use a term like "seeing" as an epistemological success verb, as much recent literature treats it, then it would be well to have a term like "visual experience" or "visual process" to name the sense as that sense enters psychological explanation.
Such a term would be wider in extension than "seeing," not carrying the weight of epistemological success; but the wider term allows us to understand that similar behaviors result from similar internal events.7 (2) Another apparent advantage of this criterion is that it does not force us to take a stand on the issue of skepticism. If we, in fact, distinguish the senses on the basis of phenomena, then we do so without being committed to perceptual realism.

7. That is, what I have in mind is that the statement, "A sees an x," entails the statement, "A has a visual process as of an x" (where the best reading of "as of" will become clearer later in the chapter), but not vice versa.

Despite these advantages, the phenomenon criterion is also inadequate. Very different sorts of phenomena are felt. As with the previously proposed criteria, we have the problem of why we recognize only one haptic sense rather than several.8 Think how different the phenomena experienced in feeling heat are from those experienced in feeling squishiness. Like the previously proposed criteria, this one also seems unable to solve that problem.9

4. When ascribing the senses to other organisms — species or individuals — we do so on the basis of an organism's behavior, but not because we think its behavior provides evidence for its phenomenal states. Instead, behavior provides evidence for the organism's perceptual judgments, irrespective of phenomena. So perhaps kinds of judgments are used to distinguish one sense from another. Judgments can be spontaneous as well as considered, and perceptual judgments are almost certainly of this spontaneous variety. Judgments can even be modular (in Fodor's [1983] sense), and it is not unreasonable to think that perceptual judgments are modular or quasi-modular. But all judgments have content — proposition-like content. This latter notion will unpack itself as the book continues.

A judgment criterion would be apt in that it would preserve the two correlative points raised earlier: it would put the criterion for distinguishing the senses inside our psychological selves, and it would not presuppose the falsity of skepticism. Besides, there is good reason to think that the senses evolved because they provided the organism with useful judgments about the world. So our distinguishing the senses on the basis of judgments would not be too surprising.
Finally, while judgments about the primary qualities are formed in both visual and haptic processing, it is arguable that the sorts of judgments are different from each other: for example, with the visual sense we form judgments about the shapes of objects at a distance, which we do not form through the haptic sense, and so on.
8. It is true that now some people would like to distinguish different senses among those labeled "haptic." But why we ever did otherwise requires an answer.

9. Leon (1988) and Searle (1983) want to distinguish phenomenological properties from phenomenal ones. I say more about this distinction later in the chapter. But it is germane to point out that the question of why only one "feeling" sense is distinguished remains a problem for that view. Significantly, Leon, who is otherwise very thorough (while defending an analogue of the phenomenon criterion), never considers this problem.
Despite these reasons in favor of a judgment criterion, it also fails to explain why we distinguish five senses — or any senses at all, for that matter. Why would we have conceived of the different judgments as constituting, or deriving from, different senses? Granted we make judgments about colors, tastes, sounds, smells, and so forth. Why should that fact commit us to a different sense for each? After all, we also make judgments about colors, shapes, and sizes, but don't believe that a different sense corresponds to each sort of judgment. The judgment criterion fails even to get us off the ground.

5. Every proposed criterion is unable to explain how people divide up the senses. So how do people do it? At least three different accounts would explain our categorizing the senses as we do, and each is worthy of comment.

6. One possibility is that we distinguish the senses by combining the phenomenon criterion with something like the organ criterion. We discover that kinds of phenomena correlate with what we take to be stimulations of particular parts of the body. Undoubtedly, we come to have beliefs about these correlations because of our beliefs about what happens when that part of the body is made inoperative in some way — when our eyes are shut, or our ears are stopped up, or the like. We learn, for instance, that the phenomena experienced when the eyes are stimulated are quite different in kind from those experienced when the ears are stimulated. Thus, we come to take the visual sense as one kind of process, the aural sense as another, and similarly for the gustatory, olfactory, and haptic senses. When these criteria are combined, not only can we understand our identification of sense organs, we can also understand how the senses are constrained to five.
Although in the case of feeling, diverse phenomena are experienced, the organ that correlates with these phenomena is the same.10 But isn't it possible that different beings experience different phenomena correlated with their beliefs in different organic origins? That possibility creates no problem. The senses would still be differentiated by each organism equally as long as there is a strong correlation between the set of phenomena experienced by a being (whatever those phenomena are) and its belief that only the eye, say, accounts for that set. If the phenomenal types that correlate with my eye stimulations are different from the phenomenal types that correlate with yours, we will equally, despite this difference, distinguish a visual sense. But would "visual" name the same sense in that case? That important question is about the real nature of the senses (i.e., our second question), not about how we come to recognize the existence of the senses.

10. The organ I have in mind is the skin. But since some feelings are visceral, perhaps "feeling" just is a dumping ground of all those phenomena for which no sense organ is obvious.

Does this combination of criteria preserve the desiderata of putting the senses inside us and of not begging questions against perceptual skepticism? Consider these questions in reverse order, for the answer to the second enables us to answer the first. Does this criterion beg the question against skepticism since it seems committed to the existence of sense organs? No. The criterion presented is not quite the organ criterion. The relevant states are beliefs about organ stimulation, organ deprivation, and the like. One can believe one has eyes even when one does not have them. Beliefs can be false. This answer to the skepticism question shows that we have clearly put the senses in ourselves: beliefs, like phenomena, are internal states.11 We infer from our phenomena and from our beliefs about how those phenomena originate that we have different organs of sense and, so, different senses.

7. But a second account of how we come to categorize the senses is possible: We distinguish the senses on the basis of a combination of the judgment criterion and the organ criterion (as previously modified). Could a creature that experienced no phenomena have ever conceived of sense organs? Of the idea of five senses? There are reasons to think so. For starters, the kinds of judgments generated by each of the sense organs do seem to be different. We come to believe that perceptual color judgments are correlated with our eyes being open. Persons without eyes or with damaged eyes fail to make an appropriate range of color judgments.
Although one may doubt that anyone could make color judgments without experiencing color phenomena, there is evidence from blindsight experiments (Stoerig 1987; Stoerig and Cowey 1989, 1992; Stoerig, personal communication) that such judgments occur. Some blindsight patients seem to make color discriminations that track normal color discriminations, yet these patients are surprised at their success and claim not to have experienced any color phenomena.12 There seems no a priori reason why creatures resembling blindsight patients could not acquire the concept of colors or of a visual sense, although the creatures resembling the blindsight patients experienced no visual phenomena whatsoever.13

But could someone who suffered a blindsight-like deficit — only all over, as it were — have ever discriminated a visual sense? Yes — as long as not all the deficits of blindsight are preserved. People exactly like a blindsight patient would probably not discriminate a visual sense, but that is because they would lack the kind of access to their color discriminations that would allow them to form the belief, "This judgment [a color discrimination] was possible only because my eyes were open." That is, actual blindsight patients lack apperceptive access to their color discrimination judgments. But this lack, based on what we can safely infer from our present scientific knowledge, may well be contingent. There is no (obvious) a priori reason to believe that there could not be creatures who make color discriminations, experience no "color" phenomena, but have apperceptive access to their discriminations. As such, they would differ from blindsight subjects only in the third regard. But that difference would greatly enrich their cognitive lives. Part Three of this book will focus on the importance of apperception to our lives, so any further discussion of issues like this one will be postponed until then.

When considering the primary qualities, we find that many of the judgments made about them are also made possible to us only through our eyes. We believe that if we did not have our eyes open, we would not make these other sorts of judgments. People without eyes or with damaged eyes are unable to form these judgments.

11. For the argument that beliefs — if they exist — are internal states, see chapter 8.
Once again, blindsight experiments (Weiskrantz 1977, 1986), and also "split-brain" (commissurotomy) cases (Gazzaniga 1970; Gazzaniga and LeDoux 1978), lend support to the idea that creatures can make "visual" judgments about the primary qualities without experiencing visual phenomena.[14] Weiskrantz had patients, for instance, who consistently discriminated "X"s from "O"s in their "blind" area. It is true that these patients are not apperceptively conscious that they are making such discriminations, and so perceivers exactly like them would not acquire the concept of spatial orientation from these discriminations. But that these patients are not apperceptively conscious of their discriminations is a contingent fact about their situation. As pointed out earlier, there seems no a priori reason why the patients must be unaware. Cats and monkeys that have had their visual cortex ablated (these VCA cats and monkeys gave the first clue to human blindsight), after a period of adjustment, act as if they are conscious of their discriminations. Moreover, many beliefs we are apperceptively conscious of do not seem tied in any direct way to phenomena. One can be conscious that one believes tomorrow is Wednesday, but no set of phenomena is required for that consciousness (see chapter 5). As with the eyes, so for the other organs. If a correlation exists between types of judgments about one's external environment and beliefs about how those judgment-types originated, then an organism might well come to make just the same sorts of distinctions among the senses as we make. That is, it is possible we are such organisms. Even though sometimes the same judgment-type can originate from different senses (for instance, "There is a chair in front of me"), the fact is that we often believe of a token of this type that we would not have made it if our eyes, say, had been closed. And that fact distinguishes this judgment as visual.[15] We do have beliefs about how our judgment-tokens originate; and this fact in combination with the fact that many judgments about the external world type uniquely in correlation with a particular organ could well account for our typing of the senses. There is no reason in principle why this combination of the judgment-criterion with the modified-organ-criterion does not account for our discovery of the senses as well as a combination of the phenomenon-criterion with the modified-organ-criterion does.

8. The third account simply combines the first two accounts. The phenomenon/organ[16] correlation reinforces and is reinforced by the judgment/organ correlation. For instance, our judgment that a certain color is before us is correlated with our believing that our eyes are open and that we would not have made this judgment if they were not; and these judgments, in turn, correlate with given kinds of phenomena that occur most often in just these situations.

9. So which of the three accounts correctly explains how we originate our conceiving of five senses? I don't know. But any of them would work. And that fact perhaps explains the diversity of the criteria reviewed previously, the truth contained in them, and their inadequacy when taken alone. Each criterion by itself might explain part of the truth; none by itself explains the whole truth. We somehow combine the originally proposed criteria. By means of some such combination or other we differentiate the senses. My guess is that the third account is the likely one, but that is only a guess. Defenders of one account or the other probably find this agnosticism disconcerting even if they cannot pinpoint why. But one origin of their unease is their conflating the question of how our concepts of the senses arise with the second question mentioned at the beginning of this chapter: However we come to conceive of the senses, how should we understand or define the senses? In so far as this latter question is equivalent to "What is the real nature of the senses?" it is this question philosophers most usually want answered. And agnosticism concerning it is less satisfying.

[Footnote 12: In chapter 6, I will argue that it is plausible that these patients do experience "color" phenomena (the reasons for the shudder quotes will become clearer in the next chapter), but are not apperceptively aware that they do. Still, it is a possible — and nearly equally plausible — reading of these cases that the patients experience no "color" phenomena.]

[Footnote 13: Hardin (1988) has argued persuasively that no objective colors exist (actually, he argues this conclusion only for hues). He concludes that there is only hue-experience, and hue-experience is to be identified with qualia. Even if he is correct about there being no objective hues, he is wrong to identify hue-experience with qualia. Moreover, his arguments do not show that no objective hues exist: what the arguments show is that if objective hues do exist, they are not scientifically useful natural kinds. These issues are too large to be dealt with in this chapter but will be considered in the next chapter.]

[Footnote 14: I put "visual" in shudder quotes here only temporarily, so as not to beg any questions. Later, I will argue that the shudder quotes should be removed. One might argue that commissurotomy cases do not show that the patients do not experience phenomena, only that they cannot talk about them. But this claim overstates its case. While the cases do not entail that commissurotomy patients can perceive without experiencing phenomena, nonetheless, the patients do deny experiencing appropriate phenomenal states; and their denial is of a kind with the denial of hemianopic patients. Second, commissurotomy patients do talk about some phenomena whose cause is left-body stimulation: they will tell you they have a pain in their left arm when the arm is stuck with a pin, say. So their denial in the other "sensory" cases seems to carry some weight. (But see footnote 12.)]

[Footnote 15: The story is somewhat more complicated than this, but only somewhat. If one is both looking at and feeling the outline at the same time, it might be true that one would make the judgment, "There is a chair in front of me," even if one's eyes were closed. But one realizes at the time that that judgment has its basis in information obtained both through one's eyes and through touch, as illustrated by several higher-order beliefs one has (even if they are not explicitly represented): "If my eyes were closed, I would still make this judgment because I am also feeling the chair," "If my skin were insensitive, I would still make this judgment because my eyes are open," "If both my eyes were closed and my skin insensitive, I would not make this judgment," and so forth.]

[Footnote 16: Of course, this should be "belief about . . . ," and so on. I use "organ" here for short only. I will use "organ" in a similar fashion in combination with judgments about the external world. And I will, in turn, abbreviate "judgment about the external world" by "judgment." I am hopeful that such abbreviations here and elsewhere in the chapter do not cause confusion. They save a lot of writing.]

II
10. When we considered the plausible answers to the first question, we found that all three share in common beliefs about organ stimulation. So perhaps the best way of defining the senses, of answering the second sort of question, is according to organ stimulation: the visual sense is having the eyes stimulated; the auditory sense, the ears; and so forth. But two considerations, closely connected, militate against this type of identification. (1) Suppose there are organisms quite different-looking from ourselves. If we accept the organ criterion as the defining criterion, how could we decide whether they see or not? Obviously, in order to do so, we would have to decide whether they had eyes. But it seems as if our only criterion for making this decision would be whether a part of the organism's body looks like a human eye. Surely such a criterion is inadequate, both in principle and in practice. The "ears" of eared owls, for instance, are not ears at all. They just look like ears. Perhaps there is a way around this objection that does not involve redescribing the criterion — for instance, in terms of how the organs are structured — but the motivation behind this objection also underlies the second objection. So let us turn to it. (2) An organ can be stimulated and yet sensing be absent. For instance, some blindness is the result of cortical damage. In these cases, a blind person's eyes are intact and can be stimulated in just the ways a sighted person's can be. But blind people possess no visual sense. These objections show that a scientific categorizing of the senses will need to involve more than proximal stimulation of the organ. A sense involves this proximal stimulation but only as part of a larger causal chain. The notion of a sense is wider than that of a sense organ and should take in processes occurring after, as well as prior to, organ stimulation. There must be a larger set of causal events embedding an organ stimulation — or initiated by an organ's being stimulated — which should be used to define a particular sense. What are the bounds of such a chain? Suppose we take organ stimulation as one boundary; what should be regarded as the other boundary?[17]

The only plausible candidates for the second boundary would seem to be the resultant phenomena or the resultant judgments. Any attempt to stop prior to one of these mental states (for instance, at the kinds of nerve endings stimulated) seems destined to be compatible with the lack of a sense, just as eye stimulation by itself is compatible with blindness. So the real choice is to identify a sense with a state of affairs in which a phenomenal type results from (particular sorts of) stimulation of an organ (whichever organ it is) or to identify a sense with a state of affairs in which a judgment type is brought about by a causal chain initiated by that organ's being stimulated. There are good reasons to prefer the latter. (1) Consider the following two sorts of cases. (a) Suppose there are people whose eyes seem to be in working order. When their eyes are appropriately stimulated, they experience all the "wrong" phenomena (i.e., phenomena quite different from ordinary human beings); but their judgments track ours almost exactly, and they have the same success in getting about in the world that we do. (b) Suppose again people whose eyes seem to be in working order. When their eyes are stimulated, they experience just the sorts of phenomena we would expect them to experience; but they make all the wrong judgments, failing to believe that there are colors, running into objects, and so forth. It seems natural to call the first people sighted and the second people blind. This result is magnified if we think that the first sort experience no phenomena but make the right judgments (as may be the case in blindsight), while the second experience the "right" phenomena but make no judgments. Judgments would seem to be essential to the sightedness/blindness distinction in a way that phenomena are not. Granted, sightedness, which involves success of a certain kind, is not to be identified simply with a visual process. Still, an intimate relation exists between the presence of a visual process and sightedness, with the latter depending in some way on the former. I want to emphasize this connection. Any good account of the senses will maintain this connection: something that sees (hears, and so on) does so only when its visual (aural, and so on) sense is activated. That this connection should be maintained in a theory of the senses is a presupposition underlying the remaining remarks and arguments.[18] Given this dependence of seeing on the visual sense, the judgments that result from stimulation of the eye would seem to be essential features of visual processes. Phenomena are not essential.

(2) A second reason to accept this conclusion is that there is evidence that differences in neural structure mean differences in phenomena. Some people who have been made color-blind by trauma (accident or disease) seem to have their color vision restored by the implantation of tinted lenses.[19] These patients often complain that colors look different to them, even though they seem able once again to make hue discriminations similar to those they made before their accidents (Hurvich 1981, 257). These people apparently mean that their color phenomena are different. And one gathers that a neurological difference accounts for this phenomenal difference. Nonhuman animals, such as certain freshwater fish, seem to make hue discriminations closely similar to ones human beings make — despite the fact that their visual system is, neuroanatomically, radically different from our own (Hurvich 1981, 138). Thus, it is likely that other species experience phenomena very different from our own when they see. Indeed, enough variation exists in normal human brains to make one wonder whether human "visual" phenomena, say, are much alike.[20] Given this variation, if we insist on the phenomenon criterion, we might be forced to say that only human beings have a visual sense (and therefore see); or worse, each of us might be forced to say that only he or she has a visual sense (and therefore sees). However, if "visual sense" is to have explanatory force in psychology, it would be odd to restrict it in this way. For other organisms (including other human beings!) whose eyes are open judge in ways that largely overlap each of our own. What we seem to share are judgments, not necessarily phenomena. Common judgments shape common behaviors. Essential to visual (or other perceptual-type) processes in explanations of behavior are kinds of judgments, not kinds of phenomena.

(3) This point is reinforced by thinking about how we ascribe sensory processes to others. We do it on the basis of their common behaviors — because we take those behaviors to exemplify shared perceptual judgments. We might be surprised, even disconcerted, to discover that their phenomena were different even though their behavior was similar to ours. But we would be totally baffled if the judgments turned out to be different (presuming a sameness of other propositional attitudes such as desires). In fact, finding out their judgments were different would probably compel us to redescribe the behaviors so that they would also come out as different.

(4) Evolutionary considerations enter also. Our phenomena will be what they are when a person carrying a shining metal object is running toward us. But our survival may depend on our judging that this object is a knife. If there is an evolutionary reason why senses exist, the essential feature of such processes, once more, is judgments rather than phenomena.

(5) There is at least some reason to think that there are cases of unapperceived seeing, hearing, and so forth — as in blindsight, commissurotomy, and subliminal perception. If seeing requires a visual sense, then it is more plausible to think of the visual sense as essentially involving judgment rather than as involving phenomena. For, prima facie, unapperceived judgments are more likely to exist than are unapperceived phenomena.[21] One could say that such "perceivers" do not really perceive, or that, while they perceive, they nevertheless lack perceptual senses (thus sundering "seeing" and "visual sensing"). Both moves are arbitrary. Blindsight patients and VCA monkeys and cats would not make the discriminations they make unless their eyes were open, and so forth. And the discriminations they make, even if badly impaired, are of a like kind to those we make when we see and visually sense. Why be committed to two theories of vision when one will do nicely? Seeing is one process; apperceiving that one is seeing is another. But the important thing is to consider seeing itself as only one process. For all these reasons, judgments, not phenomena, seem essential to the senses. Even if we first discover the senses (partly) on the basis of phenomena we experience, there are reasons for believing that phenomena are only contingently connected to those senses we thereby discover. The senses are best defined by processes that end with judgment-types and begin with organ stimulations (including the manner of stimulation).

11. Consider, however, some objections to the judgment criterion. One objection goes as follows: "Suppose a creature makes judgments about sounds, but its judgments originate through its eyes. On the proposed criterion would one not have to say that this creature visually sensed sounds?" The objection is unclear. We, in fact, haptically process a variety of properties. There is no a priori reason to think that a creature might not visually process properties additional to those we, in fact, process. Perhaps this is too facile a reply or a too-facile understanding of the objection. Suppose that the creature through its eyes arrives at judgments about sounds; through its nose, at judgments about colors, shapes, sizes, and so on. That is, in general, this creature systematically correlates judgments and organs, but this systematic correlation systematically differs from our own. What should we say in this case? It depends.

[Footnote 17: Actually, the first boundary should involve the manner in which the organ is stimulated, not just the organ that is stimulated (it matters whether light affects eyes or whether the eyes are touched instead). It is this truth that seems to motivate the external property criterion. I skip over this possible complication only because I am more interested in the other boundary, though undoubtedly a full typing of the senses would have to take refining this boundary condition into serious account. That is, one would need to consider the sorts of nerve cells stimulated, how they are structured, and so on. But the focus of this chapter is on the other boundary, so these physiological considerations will be omitted.]

[Footnote 18: If I take the senses only to be necessary for perception, why not allow organ stimulation (of the appropriate sort) to be the whole story? Because I feel thoroughly uncomfortable with the idea that a creature-type could have fully intact senses but never perceive. Perhaps such discomfort is not a very deep theoretical motivator — or shouldn't be, anyway. Perhaps not. But even if not, the present chapter begins to undermine the importance of phenomena in perception; and my real aim lies with the project begun here. Moreover, even if one counted the organs alone as the senses, surely they would be senses only because they would normally (meaning both "usually" and "not abnormally") result in perception — i.e., they would be defined as senses in terms of the cognitive states they lead to. And so the upshot would be that the claim that organ stimulation defines the senses would be virtually identical to the one here. Finally, even if one reserved the term "sensing" for appropriate organ stimulations rather than for the more encompassing states I am using it for, a new term would have to be invented for the more encompassing states because these states are perceptually salient in themselves, and differently so from mere organ stimulation. So my arguments would remain the same, only for the new term, whatever it would be.]

[Footnote 19: Whether they do or not is a tricky question. See the next chapter, footnote 13.]

[Footnote 20: See the next chapter for more detailed considerations of these cases.]

[Footnote 21: The notion of unapperceived phenomena is not so far-fetched (see chapters 4 and 6). But such events will not help provide an objection to my remarks here — especially given the other reasons for taking judgments as essential to individuating the senses and the reasons for saying that unapperceived phenomena can exist.]
If the creature is sufficiently different from us, we should suspect that we have misidentified its eyes, ears, and other organs. What looks like an eye might not be an eye. As already noted, the "ears" of eared owls are not ears. But what if this creature is otherwise just like a human being, even born of human beings? Should we say the creature sees sounds or hears through its eyes? For reasons already given, we cannot answer this question by appealing to the organ criterion alone or to the judgment criterion alone. If such creatures arise, and especially if they become at all common, then our concepts of the individual senses will probably become less and less scientifically useful, needing to be eliminated or radically revised. But all that result shows is that our concepts really do run up against the world. And the world is a contingent place. Moreover, for these kinds of cases to occur a great deal of our science would have to be wrong. Eyes seem to be the wrong kinds of structures for processing sound waves, and analogously for the other senses. So there are good scientific, if not a priori, reasons to think such cases cannot occur.

A second objection is that the given analysis of the senses makes the senses out to be conceptual and cognitive apparatuses of far too great complexity and sophistication. While the analysis may be persuasive when considering human beings, it loses more and more plausibility as one goes lower and lower "down" the phylogenetic scale. Consider an extreme case of a "sensor," an electric "eye" on a supermarket door. Isn't it plausible to think of it as sensing but not of it as making judgments? A first obvious — and correct — reply is that electric "eyes" are eyes or sensors only by analogy. Despite the fact that such a reply is obvious, it is preferable not to avail oneself of it immediately; for the real question is how the sensors of simple organisms are different from the door opener. It is worth pointing out that to whatever degree this objection weighs against building judgments into the concept of a sense, the same objection also weighs against building in phenomena. That electric "eyes" don't experience phenomena is, if anything, more obvious than that they don't make judgments. Nor is it obvious that creatures such as oysters experience phenomenal states either. I doubt if electric "eyes" do process information in any meaningful sense. To process information, an organism must represent that information in an appropriate way. Electric "eyes" lack the kinds of representational states necessary. Perhaps oysters differ from electric "eyes" in that oysters, but not electric "eyes," represent information; and because of that fact, oysters sense. But if oysters do not have appropriate representational states, they do not sense either. If a thing does not represent information (or misinformation) in its environment, it is hard to understand in what regard it has senses. That very fact motivates the judgment criterion. What may be motivating the electric-"eye" objection is the realization that organisms need not be very sophisticated in order to perceive. I would quite agree. The judgments may not even be conscious (in one sense of "conscious" — see Part Two) to the perceiver.[22]

As a second objection, one might maintain, as Leon (1988) does, that while representation is important to categorizing the senses, there is something distinctively phenomenological to the sort of representation that occurs in conscious perception. At the same time, Leon agrees that nothing is distinctively phenomenal in such experiences. The phenomenological quality of perceptual experience is not that of phenomena per se. As I have written elsewhere,[23] my attitude toward nonphenomenal, phenomenological states is much like Hume's toward a self: I don't find any such states in myself. If all that is meant by calling such states phenomenological is that people can apperceive at least some of their own representational states, then there is no disagreement. But if phenomenological properties are supposed to be as real as phenomenal ones, only different, then I do deny their existence in my experience. The upshot is that phenomenological properties, whatever they are supposed to be, are not a necessary condition for apperceiving representational states. Nor, for that matter, are phenomenal properties (see Part Two for the arguments supporting these claims). Perception may involve phenomenal states in an integral way (see chapter 4), but we need to distinguish their representational properties from their "feel" (see chapter 6). In the sense of "conscious" implicit in Leon's paper, conscious (i.e., apperceived) perceptual states share kinds of representational properties with the unapperceived ones of blindsight. Since I argue this point, beginning in the next chapter, I will not do so here. Positing nonphenomenal, phenomenological properties is the last resort of a British Empiricist philosophy that has had its claims for the importance of phenomenal properties deflated (as Wittgenstein [1953] deflates them).[24]

A third objection to the view defended in this chapter is raised in the following question: Hasn't the analysis presented begged the question against skepticism after all, for doesn't the analysis presuppose the existence of sense organs? Yes. But doing so illustrates the difference between the two questions this chapter has been considering. There are good reasons not to beg any questions against skepticism when we are trying to answer the first query, i.e., how people partition the senses in the first place. But the second query — how the senses should be defined if they are to be useful in psychological explanation — requires that we disregard any thoroughgoing philosophical skepticism. One cannot have science without such disregard. Of course, the theory presented will be false if there are no sense organs, no material objects. If the proposed theory is correct, brains in a vat possess no senses. At best, they only think they do.

But this last remark introduces a further problem for the proposed delineation of the senses: two experiences could be exactly alike, but one of them be sensory and one of them not. Moreover, both experiences would lead to similar behaviors, yet one experience be labeled a visual experience and the other not. The obvious case is that of hallucinations. In hallucinations one is said to make judgments about the external world in a closely similar way to the way one makes such judgments in what the proposed theory would label a "sense" experience. Moreover, hallucinators are said to behave in just the same ways they would if they were having what the theory would label "sense" experiences. The proposed theory, despite my intentions for it, seems to have turned visual (aural, and so on) sensing into seeing (hearing, and so on), and to have thereby lost the senses as powerful internal explainers of behavior. Of all the objections, this one is the most troubling; but in the end, it does not cut as deeply as it first appears to. Pace its last claim, seeing and visual sensing are still distinguished. In visual sensing, false judgments can be formed. We usually label such false judgments as misseeing (or mishearing, or the like). Thus, seeing is not the same as visual sensing itself, but instead, is identical to successful visual sensing. Visual sensing is different from seeing. But so is it different from hallucinating. Pure hallucinations involve neither seeing nor misseeing. So the last claim of the objection can be defused; however, the remainder of the objection needs to be considered. First, as the objection itself recognizes, the problem concerns labeling. In calling one of these experience types "sensory" while denying that label to the other, we are emphasizing the differing origins of the judgments that constitute the experiences. And that difference in labeling by difference in origin may seem hardly worth spilling much blood over. One's calling hallucinations "visual" experiences (or "aural," or the like) would be all right, if one would also keep in mind that in doing so, one is saying no more than that these experiences resemble ones an organism has when it visually (aurally, etc.) senses. If the reply seems at all inadequate, it is because the objection has in its grasp an important truth: When it comes to explaining behavior, what matters are propositional-attitude states (judgments, beliefs, desires, and so on). But the objection assumes that the way in which propositional-attitude states originate is irrelevant for psychological explanation. Perhaps so. And if so, the senses may not serve as a natural kind in psychology.[25] Still, it is hard to believe that the origins of our judgments do not matter in psychological explanation. Sensing seems to have a different psychological import from hallucinating. Perhaps for explaining some behaviors either sensing or hallucinating does equally well. But for explaining other behaviors, surely the difference in origin matters. Contemporary psychology contains lots of "transducer" talk. It is hard to believe that it is all a waste of time. Moreover, is it true that hallucinatory experiences and behaviors are just like perceptual ones? A closer reading of the relevant literature makes these identifications appear glib and hasty. Descriptions of visual hallucinations almost never have all the same — even phenomenal — characteristics as seeings (even as misseeings). I don't mean to rule out their sometimes doing so (and the same for dreams); but if so, these occasions are many fewer and farther-between than usually acknowledged. And these differences at least suggest that hallucinating is quite different from, while being somewhat similar to, perceptual processing. Whether perceptual processing is also different from dreaming is an especially difficult question; but in any case, dreams do not lead to similar behaviors: a hallmark of dream experience is that the relevant motor centers of the brain are "shut down" while we are dreaming.

[Footnote 22: What if we change the example from electric "eyes" to thermostats? Don't thermostats represent the ambient temperature? But they make no judgments, do they? These are more difficult questions and will be addressed directly in chapter 4. Note, for now, that thermostats likely do not have qualitative states (phenomenal states) either, and so they can hardly be pointed to in defense of the essentiality of phenomena to perception; and this chapter is about that essentiality claim. Gibsonian objections to representational theories of perception will also be considered in chapter 4.]

[Footnote 23: See Nelkin 1987b, 1989a, 1989b, 1993a, 1993b.]

[Footnote 24: See also the next two chapters and Part Two.]

III
12. In sum, we may distinguish five senses on the basis of a systematic correlation between phenomena and beliefs about their organic 25
Perhaps this conclusion would be entailed by Fodor's (198Id) methodological solipsism, if it were not for the considerations raised in the next paragraph.
34
The senses
origins, or on the basis of a systematic correlation between judgments and beliefs about their organic origins, or a combination of the first two correlations. But it was emphasized that the criteria by which we initially come to partition the senses may not be the criteria by which the senses are best individuated and defined. In section II, it was argued that combining types ofjudgments about the external world with the correlated origins of those judgments is best for individuating the senses. In that same place, it was also pointed out that the scientific usefulness of "the senses" is a contingent matter. While dividing perception into the senses may be scientifically useful in fact, that fact depends upon certain robust, but contingent, correlations, which we take to reflect underlying causal connections. But this contingent usefulness of a scientific concept hardly seems unusual. While the project of this chapter is modest, if the view of sense individuation defended is essentially correct, there are consequences for a larger theory of perception. A major consequence is that any good theory of perception will have to give a central role to judgment and a more peripheral role to phenomena. Giving judgment a central role in perceptual theory does not imply that phenomena have no role to play. Their role may be quite important. But many Empiricist perceptual theories have considered phenomena to constitute perception and considered judgment to occur only in the wake of perception. If the arguments of this chapter have been correct, then judgment is no mere sequel to perception. Judgment is the sine qua non of perception. And that conclusion is no small consequence of this otherwise modest proposal.
Phenomena

This chapter further investigates the role of phenomena in perception, but on a broader scale. The conclusion will once again be that phenomena play a different and lesser role than might be thought.1 When it comes to the role of phenomenal states in perception, there are three major possibilities:2

(1) Phenomenal properties are "read off" in making perceptual judgments. This view holds that perception is itself noncognitive: an experiencing of phenomenal properties. Any cognitive act is post-perceptual and derived from the perceiver's "reading off" the phenomenal properties perceived. Call this position the "read-off" position.

(2) Phenomenal properties are not "read off." They are noncognitive causes of perception, which is a cognitive state — a judgment. While phenomenal states are not themselves perceptions, nor even necessary to perception, they — at least sometimes — play an integral, causal role in perception and so cannot be completely discounted in explaining perception itself. Call this position the "causal position."

(3) Phenomenal properties are merely epiphenomena of perceptual processes. While phenomena may not be epiphenomena altogether (for instance, they may be causes of thoughts about themselves), they play no "read-off" or causal role in perception itself. Like the causal position, this view, the "epiphenomenal position," regards perception as a cognitive state.
1 This chapter is based largely on Nelkin 1994b, though much rewritten. Arguments are also borrowed from Nelkin 1987a.
2 There are those who might be thought to deny any role for phenomena in perception because they deny that phenomena exist (Dennett 1988b, 1991b, 1991c). I don't plan to defend the existence of phenomena in this chapter, but I find it hard to believe that there are no such states. Nor do I think Dennett really denies their existence altogether. What I take him to be denying is that they could have the sorts of properties people most usually ascribe to them. If that is the correct reading of his claims, then he is agreeing with the view I have developed in various papers (Nelkin 1987a, 1989b, 1990, 1994b) and further develop in this book.
In its most extreme form, the "read-off" position holds that perception has both an inner component and an outer component, the inner being a representation of the outer. The inner component, which is something like a photograph or portrait (however distorted), is a phenomenon, directly accessible only to the person whose phenomenon it is. Phenomena have properties such as color and shape; and because of the projective nature of these inner properties, phenomena represent the outer world to us, bringing us to believe that like properties exist out there as well as in us. To distinguish the ways in which properties such as color or shape are instantiated, or else to distinguish the kinds of things that possess the properties, the inner instance of the property is labeled a phenomenal property; the outer, a real (or external) property. Because red exists phenomenally, we ascribe red to objects out in the world; and because square exists phenomenally, we ascribe square to objects in the world. We name the phenomenal color "red," and that also provides the name of the real color. And similarly for "square." I call this version of the "read-off" position the "Phenomenal View."3

This view has often come under attack. Philosophers influenced by Chisholm's (1957) Adverbial View have argued that phenomena are not objects; there are only phenomenal states. And if phenomena are not objects, then properties such as color and shape are not ascribable to them. Color and shape, it is concluded, are properties only of real objects. The Adverbial View is well motivated; however, it is unclear whether all Adverbialists actually reject the "read-off" position. Some seem to believe that we do "read off" phenomenal states in arriving at perceptual judgments (from "perceiving redly" to "There's a red object
3 The Phenomenal View should not be confused with phenomenalism, which is usually only a form of the Phenomenal View. Among the originators of the Phenomenal View, the most important is probably Locke (1690/1959). Locke, however, distinguished the primary qualities from the secondary qualities. He held that colors and the other secondary qualities are not nonrelational properties of external objects. Such properties, when considered as nonrelational, are only phenomenal properties. On the other hand, he held that the primary qualities are nonrelational properties of both phenomena and real objects. Berkeley (1713/1965, 127ff.) argued that Locke couldn't have it both ways: the same sorts of considerations, when directed to the primary qualities, lead to a similar conclusion about shape and all. Russell (1948), Broad (1960), and Price (1932), among many other twentieth-century philosophers, held versions of the Lockean view. Its current defenders include Perkins (1983), Jackson (1977), and perhaps Peacocke (1983). Among psychologists, Shepard (Cooper and Shepard 1984) seems to hold something like this view. Even Rock (1983), who, quite like me, wants to downplay the importance of phenomena, nevertheless often talks as if he believes that visual images have properties like color or shape.
that I am seeing"). My aim in this chapter is to argue against any version of the "read-off" position.4 I am unable to give knockdown arguments against the view (I suspect that "knockdown" arguments are generally impossible in theoretical matters); but I hope to illuminate the positions true nature, with the result that one no longer feels compelled by it and comes to see the competing positions as genuine alternatives. I focus my arguments on the Phenomenal View. Although it is an extreme version of the "read-off" position, seeing its weaknesses and the ways in which it is weak enables one to see the weaknesses in almost any "read-off" position, including less extreme ones. Like Berkeley, I defend the proposition that phenomena and external objects do not have the same properties. But unlike Berkeley (1713/1965, 146-48 and 159—60), and like the Adverbialist, I argue that colors, shapes, and so forth, if properties of anything at all, are properties of external objects, not of phenomena. 5 If phenomena possess neither colors nor shapes, then it cannot be true that our concepts of color and shape are those of phenomenal color and shape, or that phenomenal properties provide us with our first instances of color and shape.6 Important to see, right at the outset, is that introspection cannot be used to decide which of the three positions is correct. A compelling piece of evidence for this claim is the many defenders of each of the views. Moreover, it is difficult to imagine how we could tell, from looking inside ourselves, which role is being played by the phenomena we are experiencing. So we have to rely on other methods to settle the issue.7 Why does the Phenomenal View hold that phenomena are colored, 4
4 Although this is my aim in this chapter, in chapter 4 I tentatively put forward a kind of "read-off" view, which I think could be correct. But it is quite a different sort from its predecessors.
5 Whether external objects exist or not: I am not offering a solution to the problem of skepticism here. That is, if external objects exist, they are the kinds of things that have color, shape, and so on. And if no external objects exist, then nothing has those properties.
6 I am certainly not the first to claim that phenomena are not colored, and so on. Brown and Herrnstein (1982, 48) maintain that images of bananas are not yellow, nor soft, nor the like. Pylyshyn (1981) also holds a view similar to mine. Pylyshyn says that to think that images have ordinary properties is a mistake in scope. It is to slip from "image of (object x with property P)" to "(image of object x) with property P." That is a neat way of putting the error. However, arguments showing why it is an error are harder to find. This chapter supplies some.
7 While arguing against the "read-off" position, I make no attempt to settle the issue completely. In particular, I make no attempt to argue for the causal position over against the epiphenomenal position, or vice versa. My own belief (usually!) is that something like the causal view is correct (see chapter 4); but actual cases lend plausibility to the epiphenomenal view (see, for instance, Weiskrantz 1988, 189).
have shape, and so forth? It is certainly not necessary for something to be a successful representation of red that it be red (this just-written word "red," for instance) or even have a color (the same word spoken aloud, say). One reason for the belief, as I will discuss later on, is that generally when one perceives red things one experiences similar phenomena on each occasion. And since these similar phenomena usually (or even always) accompany one's perception of the color red, it is natural to call the phenomena "red" also.8 But among philosophers and psychologists a deeper reason lies behind the belief as well: "If phenomena don't possess color and shape, how did we ever acquire concepts of them?" I cannot pretend to answer that question, at least not in this chapter; but I can show that the Phenomenal View fails to provide an answer to the question. If phenomena were like photographs, perhaps the Phenomenal View could provide an answer. But phenomena are not like photographs — or so I will argue.

In simple outline, my initial arguments will be that different phenomena can represent the same color or shape, even the same color-token or shape-token, and each member of a set of similar phenomena can represent different properties. When I establish these claims, it will be fairly compelling to draw the conclusion that phenomena have neither colors nor shapes. These facts alone, however, do not completely compel anyone to give up the Phenomenal View. On the one hand, while the word "red" and other nonred tokens can represent red, it does not follow that a phenomenal-red token (if there were such) could not also represent external red. And, on the other hand, while we can use a picture of the Empire State Building, say, to represent New York City, that picture retains the very properties it has when used to represent the Empire State Building itself.
So neither "different representations/same token or type represented" nor "same representation/different tokens or types represented" can by itself compel defenders of the Phenomenal View to throw in the towel. But these facts do help one begin to wonder about the truth of the view. Only theoretical reasons can, in the end, better settle the dispute; and these will be arrived at by and by in this, and in later, chapters.

8 See Reid (1785/1969, 242), whose view on this issue I believe to be close to my own: "Almost all our perceptions have corresponding sensations which accompany them, and, on that account, are very apt to be confounded with them . . . Hence it happens, that a quality perceived, and the sensation corresponding to that perception, often go under the same name." See also 28, 60, 130, 242-45, 247, and 257. See chapter 4 for additional discussion of Reid's views.
1. This section deals with the secondary qualities, using color as an example, while the following section deals with the primary qualities, using shape as the example. We begin with a real case.

The current theory about color perception involves what are called "opponent systems." According to this theory, people have three different kinds of cones in their eyes. Each kind contains a pigment particularly sensitive to a given wavelength of light, each a different one from the others, though each cone type is also somewhat sensitive to the wavelengths that the others are particularly sensitive to. Three kinds of primary processes, or channels, farther along in the visual system — occurring in the ganglia of the retina, again in the lateral geniculate nucleus, and also perhaps in post-striatal area V4 — receive information from these three cone types and play a major role in processing this information. These three processes represent opponent (complementary) pairs of colors (red–green, blue–yellow, black–white), depending on whether the activity of these cells is increased or inhibited and on how their firings are summed. When the cone cells are activated by an incoming light source, the opponent-process areas of the brain, as well as others, receive and compute the data from the cones; and after much processing, phenomena and judgments about an object's color are produced.

The relation of wavelengths to perceived colors is not one-to-one. Among other things, a perception that an object is red, say, can be initiated by various combinations of wavelengths rather than just by a single wavelength. Despite this lack of one-to-one correspondence between wavelengths and color perceptions, the theory is explanatorily rich, being able, for instance, to explain perceptual deficits such as color blindness. Most color-blind people are so only with respect to red and green.
They cannot easily, if at all, distinguish one from the other.9 Their color blindness can be explained by the fact that they lack one or another light-absorbing pigment in their cones or that a breakdown
9 Actually, more than one sort of red–green color blindness exists. Deuteranopes lack cones of one light-absorbing dominant pigment (in the green range of light waves for most observers under most conditions), while protanopes lack, instead, another light-absorbing dominant pigment (in the red range of light waves for most observers under most conditions). The former, oddly enough, are not totally insensitive to green light, although there are conditions under which they are unable to distinguish green from combinations of red and blue light on the basis of hue itself. See, for instance, Kaufman 1974, 177.
occurs in the red–green opponent processing cells at a given place in the system. These people are dichromats as opposed to normal trichromats.10 The rare blue–yellow color blindness can be explained in a similar way (though not exactly similar, since there appears to be no light-absorbing cone pigment whose dominant wavelength corresponds to the yellow range for most observers), as can the even rarer total color blindness (only the black–white, the "lightness," system is fully functional).11

If this theory had been known to Locke, given the many–one correspondence from light waves to perceived color, it would likely have reinforced his belief that colors, as nonrelational properties, are not in the world but in us. And if in us, then phenomenal properties. But one needs to look further into the cases before drawing this conclusion. While most color blindness is innate and probably of genetic origin, some color blindness is brought about by disease or physical injury. As mentioned in the previous chapter, some people with traumatic color blindness (though none with innate color blindness) recover many (though not all) of their discrimination powers when appropriately tinted lenses are inserted into their eyes. As noted, a peculiarity is that some of the lens wearers, while able to make color discriminations they made before, say that colors look different to them (Hurvich 1981, 256–57).12 I take their claim to mean that their phenomena are different. Similar to these cases are those of the deuteranopes mentioned in footnote 9. Deuteranopes are most often able to distinguish green from red when we do; but given their deficit, their basis for doing so is unlikely to be the same as ours. The differences, whether structural or chemical, in their perceptual systems raise doubts as to whether they experience "green" phenomena at all. Similarly, the anomalous trichromats mentioned in footnote 10 seem to make the same color discriminations we make; but, once more, it is doubtful that they experience the same phenomena we do. All these cases strongly suggest that people can agree on a wide range of color judgments even when experiencing different phenomena.13

Hurvich does not make altogether clear whether the lens wearers claim that their phenomena differ in hue (what we normally think of as color) from the phenomena they previously experienced, or in intensity, or in saturation, or in an even more radical way. Significantly, we can conceive of their phenomena differing from ours in any of the four ways.14 In short, we can conceive that different persons will each discriminate an occurrence of red, yet each be experiencing quite different phenomena from the others. In the case of hue, these intuitions have been expressed in inverted spectrum problems. Two people could agree on their color judgments, but one of them, on judging objects to be red, would experience phenomena just like those the other experiences when he or she judges objects to be green, and vice versa for green objects.15

10 There are also trichromatic forms of "color blindness." Some people seem to be sensitive to wavelengths different from the three most perceivers are sensitive to. These people make many of the same color judgments as ordinary perceivers; but when they mix wavelengths to match a particular wavelength, they combine a different three wavelengths, quite removed from the ordinary range. See Kaufman 1974, 178. Only under special conditions of testing do actual deficits (in making judgments about relations among hues, for instance) appear (Cavonius et al. 1990).
11 See Hurvich 1981 for a fuller account of an opponent systems theory. Hardin (1988) provides the fullest, most interesting, and most provocative philosophical discussion of color perception that I know of. He argues that colors (hues) are not instantiated in the external world. In fact, he argues that they are not instantiated at all. His position, as I understand it, is a variation of the Adverbialist position; but, I believe, he also leans toward the "read-off" position. It is at this point where we most disagree.
12 Hardin, in a personal communication, maintains that the lenses do not enable the subjects to make trichromatic discriminations (hue discriminations) but only to make corresponding discriminations, by being able to make further lightness discriminations. While Hardin may be right, his being right does not affect the argument. What is really important for my argument is that it is an empirical question as to whether the discriminations are chromatic or lightness based. We can imagine either case.
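The opponent-systems account summarized in section 1 can be given a toy computational sketch. Everything below is illustrative: the triangular sensitivity curves, the peak wavelengths, and the channel weights are simplified stand-ins I have chosen for exposition, not physiological values. The sketch also illustrates the many–one point made above: two physically different spectra can yield identical cone responses, and so identical opponent-channel values.

```python
# Toy model of opponent-process color coding. All curves and weights are
# illustrative stand-ins, not physiological values.

def cone_responses(spectrum):
    """Sum each cone type's response over a spectrum given as
    {wavelength_nm: intensity}, using crude triangular sensitivity
    curves peaking near the L, M, and S cone maxima."""
    def sens(peak, wl):
        return max(0.0, 1.0 - abs(wl - peak) / 100.0)
    L = sum(i * sens(560, wl) for wl, i in spectrum.items())
    M = sum(i * sens(530, wl) for wl, i in spectrum.items())
    S = sum(i * sens(420, wl) for wl, i in spectrum.items())
    return L, M, S

def opponent_channels(L, M, S):
    """Combine cone outputs into three opponent channels: positive
    red-green means 'toward red'; positive blue-yellow, 'toward blue'."""
    return L - M, S - (L + M) / 2, L + M + S

# Long-wavelength light drives the red-green channel one way,
# middle-wavelength light the other.
rg_650, _, _ = opponent_channels(*cone_responses({650: 1.0}))
rg_500, _, _ = opponent_channels(*cone_responses({500: 1.0}))

# Metamerism (the many-one relation): a single 550 nm light and a suitably
# weighted 530 nm + 570 nm mixture produce the same cone responses in this
# model, hence the same channel values, though the spectra differ physically.
single = {550: 1.0}
mixture = {530: 0.375, 570: 0.34 / 0.48}  # weights solved for this toy model
```

On this toy model, rg_650 comes out positive and rg_500 negative, and cone_responses(single) matches cone_responses(mixture) to within rounding. The mapping from spectra to channel values is thus many–one, which is all the argument in the text requires.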
13 Of course, one might claim that these people are not really agreeing with normal perceivers on their hue judgments: they only seem to be. The range of cases over which they disagree shows that they are not really agreeing even in these cases of apparent agreement. "Green," say, just means something different to a deuteranope from what it means to most people. A discussion of this issue begins in the next paragraph.
14 To describe phenomena as differing in any of these four ways, if meant literally, is a mistake. Such ways of describing what is going on, whether we realize it or not, are elliptical for longer expressions such as, "I am now experiencing a phenomenon like the phenomena I used to experience when I perceived blue" (or "a less highly intense red," or "a less highly saturated red," or "the sound of a trumpet"). But since the ellipticality of such expressions is part of my conclusion, I will here talk as if these ways of expressing oneself were all right as they nonelliptically stand.
15 Many philosophers deny the possibility of an inverted spectrum (among them are Shoemaker 1981 — who isn't fully committed to the impossibility — Harman 1982, and Hardin 1988). Despite the appeal of their arguments, the lens wearer and anomalous trichromat cases seem to provide empirical grounds for believing that the arguments are deficient. For a more philosophical critique of the arguments, see Block forthcoming. If the arguments against the possibility of an inverted spectrum be correct, they would allow me to make my point much more readily. I am willing to grant the Phenomenal View the possibility of inverted spectra. I want to show that even if they occur, their existence tells against, rather than for, the Phenomenal View.
If such a possible case is actual, should we say that the person experiencing the "green" phenomena, even though judging the relevant objects to be red, doesn't really see red? Even more pointedly, should we say the person doesn't really experience red phenomena? But on what grounds? Suppose half the population is one way and half the other. Whose phenomena are really red?

Most philosophers who accept the possibility of an inverted spectrum seem committed to claiming that half of such a population would not be seeing red, that "red" as used by half the population would be a homonym of that same word as used by the other half, and that the two halves would not really be communicating or making like judgments but would only seem to agree in their color judgments. As far as I can see, the only reason for making these claims is an assumption that phenomena possess the property of color, that color concepts are of phenomenal color, that color words refer to phenomena, and that color judgments are about phenomena. But by hypothesis, these two half-populations agree on their color judgments, they draw similar inferences, they behave in similar ways. This hypothesis is quite natural and leads to the conclusion that both groups are not only apparently, but actually, making the same color judgments, that their color concepts are about a property taken to be in the external world, that their color words refer to properties in the external world, that their judgments are focused outwardly onto that world.

If one rejects the assumption that colors are properties of phenomena, one need not draw any of the strange conclusions philosophers have drawn when confronted with this problem. Red is a property of real, external things, if it is a property of anything. The phenomenon, which occurs during the perceptual processing, is neither red nor any other color. Even if phenomena be representations, representations do not have to possess the property they represent.
But more to the point, if we do not "read off" some phenomenal red from the phenomenal state, then there is as yet no good reason to believe that phenomena are themselves representations at all. I am not arguing that phenomena play no causal role in our conceiving of, or in our representing, red. I am not even denying that phenomena are representations. I am denying that phenomena are ever red and denying that we are rationally compelled to believe that we "read off" the redness of our phenomena to arrive at our perceptual judgments that an external object is red. We do not acquire a concept of red by first seeing a
phenomenal object that is red — even if there be phenomenal objects that in some sense we can be aware of. While there is evidence (our genetic and anatomical similarity) that most of us experience similar phenomena when judging things to be red, there is still much to discover about color perception. It is not clear, as far as I know, just where in the system the damage to the lens wearers occurs; so evidence based on anatomical similarity is only of a very general sort. Moreover, the lens-wearer cases provide evidence that anatomical and structural changes in the perceptual system can result in different phenomena being experienced, even while color judgments apparently remain (at least largely) the same. Structure and chemistry matter in regard to which phenomena are experienced. That fact is important to the progress of my argument.16

2. One might raise the following objection: "In regard to the question, vis-à-vis inverted-spectrum cases, as to whose phenomenon is really red, you have succeeded in showing that the answer is arbitrary. However, while it may be arbitrary that one or the other of these phenomena-types be called 'red,' it is not arbitrary to think that, whatever names we give them, the phenomena differ with respect to color." One ground for raising the objection is that we sometimes clearly ascribe colors to phenomena, i.e., we sometimes take phenomena in themselves rather than as representations of something else. For instance, after-images are described as red, though not believed to be representing anything external.
16 It is true that normal color perceivers make discriminations (especially about the relations among hues) that the lens wearers and the anomalous trichromats do not make (see Cavonius et al. 1990); but experiments had to be undertaken to reveal these deficits. The discriminatory powers of these nonnormal groups appear under most circumstances to be perfectly normal. And if the discriminations made by these latter groups are nonnormal (even their normal ones!), then that result also has to be revealed by empirical methods. Conceptually, we understand perfectly well that the experiments could have come out otherwise. Moreover, the interpretation of the results of the experiments showing these deficits will be partly a matter of theory. Why, for instance, should these experiments make us think that these anomalous perceivers differ from us in their phenomena? They certainly differ from us in their judgments, but the explanation of that difference is exactly what is at stake. Even if someone would report a difference in phenomena (having suddenly become an anomalous perceiver), his or her own introspections would be unable to reveal the role these different phenomena were playing. Any of the three positions set out in the beginning of the chapter would be compatible with such facts. It should be kept in mind (see §5 for further clarification) that I am not denying the subjectivity of hue judgments. I am instead denying that the best theory to explain our hue judgments will maintain that phenomena are bearers of hues.
We do describe phenomena this way, but perhaps such descriptions are elliptical for "I am experiencing the same kind of phenomenon I normally experience when I perceive things to be red." If the descriptions are elliptical, the correct conclusion about the fifty-fifty population case is that the phenomena of both halves of the population, when faced with a red object, are red. None of them is green.17 The phenomenon gets categorized according to the "real"18 property, not vice versa; and such a categorization is always an elliptical one, not a literal ascription of colors to phenomena (even if mistakenly thought to be). Either reading of our after-image descriptions (that such descriptions are elliptical or nonelliptical) accords with the evidence presented so far. The discussion in §3 provides reasons for thinking that the descriptions are elliptical.

3. There are inverted-spectrum-like problems that the objection raised in §2 is unable to account for. When I was a student, my teachers used to tell the class about machines that could be strapped onto blind people. The machines, activated by incoming light of different wavelengths, would "interpret" the light, "read" out the color of the object, and then "signal" the blind people by various scratchings on their backs, with the result that the blind people arrived at color judgments that accorded with those of normal-sighted persons. Suppose that whenever and only whenever scratch-type x occurs the blind people judge the object to be red, and suppose their other judgments apparently agree with ours. Are the blind people's relevant phenomena red? Why not? Or better: Why not, if ours are? The grounds for calling one or the other red appear similar in the two cases. One might reasonably object that the causal route exemplified in the machine case shows that the scratches cannot be taken literally as color phenomena.
So suppose instead that the opponent-systems theory of color perception holds for a race of alien beings as it holds for us. These beings have eyes with the same three cone types we have; however, the parts of the brain that process data from their cones are somewhat different from our own. Remembering from the lens-wearer case that differences in brain structure or chemistry can bring about different phenomena, we can conceive that the color phenomena of the aliens
17 Note that this result accords with that of the arguments raised against the possibility of inverted spectra, although it is arrived at by allowing for their possibility.
18 The reason for the shudder-quotes will be clarified in §5.
are similar to our scratches. Because of these "scratches," they are able to make the same color discriminations we make. Their philosophers even worry about whether one of them might have an inverted spectrum. Suppose that these "scratches" are quite unlike any other phenomena these aliens experience. The aliens would then have exactly similar reasons for thinking their "scratches" are red, green, blue, and so on as we have for thinking our phenomena are.

Would a defender of the Phenomenal View be willing to say that the phenomena of these aliens differed from ours in color? It is doubtful. Yet, one would have few grounds for denying the claim, if willing to make a similar claim about human inverted spectrum cases. The correct conclusion is that neither the aliens' phenomena nor ours are red, green, blue, or any other color. Phenomena, whatever other properties they possess, are colorless. Insofar as we or the aliens talk about phenomena being colored, as we do with after-images, it is an elliptical way of talking. Ascribing colors to phenomena is a parasitic kind of ascription. If this is not the correct conclusion, then it will remain an empirical question as to whose phenomena are really red (i.e., whose phenomenal properties match the external properties). This conclusion would not seem terribly consoling to defenders of the Phenomenal View since they set out to explain why we have the concept of red. Arguments would be required to show why our phenomena, not the aliens', are really red ones.

One might object that the alien example is fanciful. However, the objection is off the mark. Only conceivability, not actuality, is required to make the point. But even in the real world most nonprimates have quite different neural-visual systems from ours. Yet, some of these animals appear to make color distinctions. For instance, a species of freshwater fish, with a very different visual system from ours, is quite adept at color discriminations (Hurvich 1981, 138).
The evidence from the lens-wearer cases is that differences in neural structure/chemistry mean differences in phenomena. Given the very different visual systems of these fish, there is reason to believe that their phenomena are very different from our own, perhaps as different as the "scratches" of the aliens I hypothesize about. So my "fanciful" world may be the real world.

4. So far I have talked as if one kind of phenomenon always accompanied one sort of color experience, at least for any one subject. But
even that assumption is not a conceptual necessity. Suppose that each time, or even just sometimes, when a person discriminates something red, her or his phenomenon is not what it was on another occasion when she or he perceived that red thing under similar lighting conditions. Would it matter? Wouldn't the person still be perceiving red? Hasn't the person perceived red correctly many times in her or his life even if the phenomena have been different each time? One might maintain that in one's own case one knows they have not been different. Perhaps one does. But having a faulty memory about one's phenomena would seem not to matter. Surely, it is not the phenomenon we experience that matters, but the judgment we make, what we do, and how we are affected. In an important sense, the phenomenon itself is irrelevant.

5. Phenomena are colorless, and similar considerations apply to any other secondary quality. Once we realize that phenomena are colorless, a "read-off" position becomes less plausible. Instead, it is more reasonable to believe that phenomena obtain their descriptors in a borrowed fashion:19 We call phenomena "red" because they occur when we visually discriminate things as red. But as we have seen, even if phenomena were somehow "read off," different phenomena would do the job equally well. That fact is itself a reason for thinking that phenomena, possibly so different in kind from each other, are not themselves red (or any other color).

Before turning to the primary qualities, however, I want to consider a remaining question: "As you yourself point out, the current theory of color perception, because of its many–one nature, seems to give prima facie support to the idea that colors are in us, are properties of phenomena. But as you have shown, colors are not phenomenal properties. So what are they?" Two answers to this question are possible, and neither puts color inside us.
The first claims that although there is no one-to-one correspondence between wavelength20 and color, colors are natural kinds: There are lawful many—one relationships that allow us to pick out a particular set (with a potentially infinite number of members) of wavelength groupings as being a color such as red.21
19. See Nelkin 1987a, 1989b for further arguments for the truth of this claim.
20. Or another physical property, such as surface reflectance. A similar lack of one-to-one correspondence can be shown to exist between it and color judgments.
21. For a natural-kind analysis of color, in terms of reflectances, see Hilbert 1987.
The second answer instead denies altogether that colors are natural kinds. We will need to take this latter tack if no lawful base to the set of wavelengths causes us to judge that an object is red. In that case, red does not exist as a natural kind. However, even if we are forced to accept this latter result, two positions may be open to us. (1) Although color is not a natural kind, it is, nevertheless, a property of external things. Large and being a tree are not natural kinds either, but those facts are hardly grounds for saying that large and being a tree must therefore be properties of internal phenomena and not of external things. (2) Color is more like being a witch or being a unicorn, i.e., genuinely without instances. But just as witchhood and unicornness are not properties of phenomena, neither are we compelled to say color is. Too many philosophers have failed to see that the first alternative might be a genuine one. Certainly, until recently, no one has pursued it.22 It strikes me as worth pursuing. In any case, neither alternative turns color into a property of phenomena. That we could discover that color is not a natural kind or conclude that there are no colored objects provides further evidence that colors are not properties of phenomena, nor, in one important sense, were thought to be: our expectation is that color is a property in the world, one whose structure is to be discovered. We are surprised when Hardin (1988) argues so convincingly that no physical structure is lawfully correlated as the referent of our color judgments. Of course, if we decide that color is not a natural kind or has no instantiation in the external world, we could come to say that "color" is henceforth the name of whatever phenomenon we experience when we say or think something has a color. No great harm done, as long as one realizes that, as the above inverted-spectra-like arguments have shown, such a set of phenomena will not itself form a natural kind. 
Different kinds of phenomena (ours and the "scratches" of the aliens, among others) will compose the set.23 But if my view is compatible with there being no colors, wouldn't it be better to admit that colors are only phenomenal properties and not properties of external objects? Why not simply deny that nonhuman animals and the aliens experience colors? The phenomena underlying human color perception would then form a natural kind: color itself. Granted that the claim that there might turn out to be no colors is counterintuitive, the view that colors are phenomenal properties is far more counterintuitive. It would require conceptual and linguistic revisions far beyond those my view demands. When we learn colors, we do not learn to ascribe them to experiences. We learn to ascribe them to books, paints, horses, hair, and so forth. Most people would be as surprised to learn that there are no colors in the external world as to learn that there are no colors at all. In our initial learning of color concepts, we no more learn to think of colors as inner properties than we learn to think of being a rabbit as a property of inner experience. But why should we think of one property more than the other as being inner? Just as rabbits eat and drink while phenomena do not, colors can be mixed, daubed on canvasses, dyed, and so forth. This externality of colors is so deeply embedded in our concept that there is no reason to change our concept unless very good theoretical reasons for doing so exist. What are they? Calling phenomenal properties "colors" will not make these properties any more perspicuous to us than admitting that we have no nonparasitic names for them, and it might — and has — made us think we know more about them than we do. I am not denying that phenomena play a role in our coming to conceive of colors. I am denying that they do so by being colored themselves. Our concepts of colors are of properties in the external world.

22. Recently, Dennett (1991b) has put forward a similar proposal. See also Thompson et al. 1992.
23. Though how to square the claim that colors are phenomena with the existence of unconscious color discriminations (for which there seems to be evidence — see chapters 1 and 6) may be a real problem.
A second reason for opposing the denial of color experiences to the aliens and to nonhuman animals is that just as the aliens might experience phenomena similar to our scratches in their representation of color, other beings might, when representing properties other than color, experience phenomena similar to the ones we experience when representing color. That is, there is no conclusive reason to think that human color phenomena form a natural kind as color phenomena. Nor is this possibility without empirical support. Gregory (1988) describes his experience of being slowly infused with the anaesthetic Ketamine. At one stage of the infusion, he began to experience what he describes as synaesthesia. In particular, when his skin was lightly, or more firmly, scraped by items like brush bristles, Gregory experienced what he described as vivid greens and reds, as well as other color phenomena — from touch, not from vision (1988, 262). It seems a small step from this
actual case to imagine a race of beings who only experience "color" phenomena with the experience of texture and employ those phenomena to discriminate textures rather than colors. Their phenomena would be color phenomena only in an extremely attenuated sense. Gregory's experiencing his color phenomena as texture phenomena makes questionable that "color" phenomena form a natural kind that in itself lets us read off judgments of color. Finally, just as on my view it is empirically possible that there are no colors, it is empirically possible on the Phenomenal View that only one person is aware of colors, i.e., it is empirically possible that no two people experience the same phenomenal property types. We may have good reason to think this possibility is highly unlikely. Yet, as we have seen, agreement in judgment cannot be a decisive reason. Instead, the best evidence is that we are all members of the same species and so have similar genetic and neural structures. However, this evidence is weaker than one might think. There are three kinds of central nervous system neurons: sensory neurons, intermediate neurons, and motor neurons. We know a fair amount about the first and the third, but Nauta and Feirtag were able to say as recently as 1979 that we know almost nothing about the intermediate neurons. Yet, intermediate neurons make up 98.8 percent of the total and are the locus of most perceptual activity, almost certainly doing the bulk of the computational work (Nauta and Feirtag 1979, 92). While more is known now than in 1979 about intermediate neurons, whether these neurons and their connections are relevantly similar from person to person remains an open question.24 Since the question is open for the bulk of neuronal connections, the evidence cited is not particularly weighty in itself.
Moreover, there is some reason to think that brains, even within a species, are highly individualized; and as the lens-wearer case indicated, differences in structure/chemistry can bring about differences in phenomena. So it is not obvious that even normal people experience phenomena similar to each other's when making similar color judgments. So if there is an empirical possibility creating a counterintuition in my view, there is
24. Nauta and Feirtag's claim was echoed quite recently and, if correct, shows that the progress hasn't been terribly great: "We know the anatomy of the major sensory and motor systems in some detail. In contrast, the pattern of connections within the intervening association cortices and the large subcortical nuclei of the cerebral hemispheres is not clearly defined" (Fischbach 1992, 55).
equally an empirical possibility of a counterintuition at least as great in the Phenomenal View.

II
6. Similar considerations in regard to the primary qualities lead to similar conclusions: phenomena have no shape, size, extension, position, or motion.25 Using shape as an example, consider the property of being square. The handed-down picture is that we ascribe being square to objects in the real world on the basis of experiencing phenomenal squareness. Pointing inwardly to a phenomenon, we feel tempted to say something like, "If that's not what square is, what could square be?"26 Considered thought about inverted-spectrum and extended inverted-spectrum cases caused us to surrender similar intuitions about color, and analogous considerations arise in the case of shape. Parallel to inverted-spectrum cases might be the following kind of cases. Suppose that two persons agree on which objects are square, which are rhombuses, which are circles, and so forth. Nevertheless, when perceiving squares, they experience systematically different phenomena. Suppose that one experiences a kind of phenomenon the other experiences only when perceiving a particular-angled rhombus, and suppose also that the phenomena accompanying the first's perception of protractors and the like also differ systematically from the other's. Each would have similar reasons to think that his or her phenomenon was really square. As before, we can imagine populations being split evenly between these kinds of experiencers. So which phenomenon is really square? Which is really rhomboid? As with colors,
25. At least not as we think them to have. However, if it turns out that phenomena are identical to brain processes, then some of these properties, at least, will be truly ascribable to them. But in that case, phenomena will have real position, say, not phenomenal position.
26. Of course, this story leaves out the great complication that exists with shape but not (at least to the same degree) with color, namely, that most often when we judge an object to be square the image being experienced is not an image we ourselves would describe as square. We seem to compute, using information about direction and distance to the object, what the real shape of the object is, that its real shape is square and not one of these nonsquare shapes that the images possess. Since this complication will not enter the argument, the reader is free to consider only those images people experience when an object is in full view while they are standing directly over it or have it directly in front of them. I myself deny that phenomena are square, round, or any other shape. As with color, such talk is elliptical and such ascriptions parasitic. But as in the case of color, I allow such talk for a while only in order to show why, when considered nonelliptical, it is illegitimate.
any answer is arbitrary. One might argue that both phenomena at least must have four sides, but what counts as a phenomenon's having four sides is itself as vulnerable to inverted-spectrum considerations as the question of whether a phenomenon is square or rhomboid. It might be empirically true that only some sorts of phenomena occur when we perceptually represent squares. If so, we do not know what the constraints might be. All we can say so far is that for each of us only certain phenomena do occur when we perceptually represent squares; and given the structural similarity of other human brains, it is somewhat likely that other human beings are similarly constrained. However, as previously discussed, such general evidence is quite weak. 7. Bringing in the word "human" here may help us realize that extended inverted-spectrum issues can enter the discussion of shape as well as of color. For instance, suppose one were to interject the claim: "While there may be a certain arbitrariness in calling a phenomenon 'square' or 'rhomboid,' still these phenomena have shape and differ from each other by having different shapes. Granted that there is a many—one relationship from phenomenal shape to real shape, that should be no more surprising a fact than that there is a many—one relationship from retinal image shape to real shape." As with the similar objection concerning color, the reply involves extending inverted-spectrum considerations. Consider some nonhumans. These nonhumans — call them "Computeresers" — have eyes quite similar to ours. In particular, their retinas resemble ours; and their retinal images resemble ours. However, the brain structures that compute the retinal data are quite different from our own. The phenomena Computeresers experience in shape perception are ones we would describe as images of arabic numerals arranged linearly in sets, members of the sets separated by commas, and so on.
Moreover, Computeresers never experience visual phenomena like these except in the case of shape (their phenomena associated with numbers are entirely different). Suppose that each of these types of numerical phenomena is associated with a particular shape. Moreover, Computeresers draw all the inferences about squares we do. Wouldn't Computeresers have reasons similar to ours for thinking their phenomena really were square, round, triangular, and so forth? It is hard to see why not. So which phenomena really are square, theirs or ours? If the answer to that question is seen to be arbitrary, as it should be,
then one should conclude that neither their phenomena nor ours are square, nor any other shape. If the example seems far-fetched, we can conceive of cases considerably closer to home. People born blind presumably experience no visual phenomena. We can likewise imagine people born unable to experience haptic phenomena, including kinaesthetic phenomena. Blind people experience only haptic shape-phenomena; the others experience only visual shape-phenomena. Which of these sets of phenomena really has the property of shape? As Locke (1690/1959, vol. I, 186—87) himself claimed, it is unlikely that people born blind who had had their sight restored would recognize shapes when seeing them for the first time.27 Think also of the echo-location sense of bats. When bats discriminate sharp angles from curves, are the phenomena they experience sharp-angled? It is difficult to know how even to begin answering this question — and not just because we do not have a sense of this kind. Our visual phenomena associated with shape just are noticeably different from our tactile ones and — most probably — from the bats' aural ones, and it is difficult to believe they all have anything in common except for their common association. 8. The reasonable conclusion is that shape (and, similarly, any other primary quality) is not a property of phenomena. Moreover, unlike with color, we have good reasons for thinking that the primary qualities are properties of the external world. Thus, we have arrived at the point where Berkeley is turned onto his feet: both the primary and secondary qualities, considered as nonrelational, are properties of real objects, not of phenomena, if properties of anything at all. Once we come to this realization, we again see the attractiveness and attraction of a "read-off" position all but disappear. 
At the end of §5, at a similar stage of argumentation for the secondary qualities, I raised an objection to my own conclusion: There are good theoretical reasons for treating the qualities as in us rather than as in objects. As applied to the primary qualities, this objection is so counterintuitive that even the originators of the Phenomenal View refused to believe it. I know of no defense for it, except perhaps for the argument to be considered in §9.
27. Locke's speculations aside, there is empirical evidence that this failure of recognition occurs, as experiments by von Senden, Riesen, and Gregory and Wallace seem to show (Kaufman 1974, 490). For a recent discussion, see Sacks 1993.
III
9. "Phenomena have a natural intentionality about them that makes us think that in perceptual experience we are directly aware of the properties of external objects, while all we are actually aware of are phenomena. Because of the natural intentionality of phenomena, we come to ascribe properties to external objects that we should not be ascribing to them at all. In particular, we are apt to confuse the relational properties of external things with nonrelational properties of phenomena. Thus, we are directly aware of the nonrelational phenomenal property and because of its natural intentionality thereby ascribe 'is red' to the object. The red we ascribe to the object is the phenomenal property itself. Later, when we learn external objects cannot have this nonrelational property, we quite naturally continue to say objects are red. But now by 'red' we mean a relational property of that object, namely, that which causes red phenomena in us. Thus, you are the one who has stood matters upside down: 'Red' is primarily used as a word to designate a phenomenal property. It is only parasitically and elliptically used as a property of objects" (see, for instance, Perkins 1983 and Jackson 1977). Although the objection is initially plausible, there are good reasons to reject it. It slides from the possible truth that we make property ascriptions to external objects (at least partly) because of phenomena to the false claim that we ascribe the properties of phenomena themselves to objects. The slide is unwarranted. Here are three considerations opposed to this slide and in support of the position maintained throughout this chapter (several other considerations were presented in §5 and still others will be presented in §10). (1) We have no inclination to think that this description of our phenomenology and resultant property ascriptions applies in the case of most properties we ascribe to external objects. Consider the property of being water.
We certainly ascribe being water to a piece of the world on the basis of its feeling and looking a certain way to us. But being water was never (idealists aside) considered a property of phenomena, even as a collection of such properties. That is why Berkeley's claim that things are collections of phenomena is usually received with amazement and disbelief. Nor did we ever take being water to be a mere relational property (one that causes "water" phenomena in us), though we believe that being water (or the piece of the world with that property) has various causal powers, including the power to cause certain phenomena in us.
Rather, when ascribing being water (or being a rabbit, or any other external property) we are quite open as to the real nature of the property. In that sense, we are neutral about the property. That is why we could discover that water can be steam or ice as well as liquid. Even more important and relevant, it is also why we could discover that being water is being H2O. We have had no difficulty accepting that a piece of the world can look and feel just like water but not be water; and in making the identification of water with H2O, we have not changed the meaning of the word "water," nor its referent. Being water is a property ascribable to the world. The idea of phenomenal water is very peculiar, indeed. (2) This kind of peculiarity is even more vivid in the case of being a unicorn. Of course unicorns are figments of our imagination. But it would be a solecism to infer from that fact that being a unicorn is a property of phenomena. There are no unicorns. Being a unicorn is a property of no actual thing. Its failure to apply to external objects is the basis of denying it to be a property of anything, for it was meant as a property of external objects. (3) While the objection gets mileage out of the color case, it gets far less mileage when we consider shape. In the case of square, there are both visual and tactile phenomena. Which phenomenal property do we ascribe to the world as a nonrelational property? It is not clear the question even makes sense. All we know is that both sorts of phenomena play a role in our ascribing square to various objects in the world. But what square is, like what water is, is a question that does not even emerge until we learn to do science, mathematics, or philosophy. How properties affect us is certainly important to us, but we are most often neutral about the real nature of the nonrelational properties we ascribe to the world.
Given how unlikely the objection is for shape, we are justified — especially given the other considerations — in also believing that, despite an initial plausibility, it provides the wrong story for color as well. As with "water," we do not have to change the meaning of the word "red" to claim that red things do not always look red or that things that are not red sometimes look as if they were. As with water, an ascription of red entails no commitment as to the true nature of the property we ascribe. Color-blind people, for instance, learn that things they fail to take to be red are, nevertheless, red. Their discovery parallels the discovery that something that does not look like water may be water nevertheless (ice, for instance). To these considerations, add one more consideration, which is of
utmost importance. The objection that began this subsection starts with the assumption that a "natural intentionality" attaches to phenomena. The assumption is unwarranted. If a natural intentionality exists in perception, it can be said to characterize the end-state (the percept) of the perceptual process. Identifying that percept with a phenomenal state is exactly what is at issue. So simply asserting that the percept is a phenomenon, without argument supporting the identification, begs the question. Perception almost certainly results from an interplay of very complicated processes and representational schemes (see Marr 1982, for instance, and chapter 4). When phenomena occur, they do seem, introspectively, to be playing a role in this processing; but their role is far from clear. When one considers the many—one and one—many relations of phenomena to perception, as discussed throughout this chapter, it appears highly unlikely that phenomena can bear all, or even a large share of, the weight of perceptual representation. Indeed, it is unlikely that phenomena are percepts. As discussed in the previous chapter, much more likely is that percepts are judgments. A major motivation behind the Phenomenal View was to make perspicuous how we can conceive the world. The photographic nature (the natural intentionality) of phenomena was proposed to explain this ability. But now we are told that the "photographs" are not photographs of anything. To claim that we ascribe phenomenal properties to the world but the world does not have such properties is to retract the Phenomenal View, not to defend it. Any view such as the natural-intentionality-for-phenomena one set out at the beginning of this subsection faces a question analogous to the one my view faces: How can phenomena, which have none (or only a few) of the properties of the external world, enable us to conceive the world? How, that is, is nonphotographic representation possible? It is possible. Indeed, it is actual.
And so there is an answer to this question. Since the Phenomenal View fails to answer the very question that motivates it and since accepting it would require such extensive revisions of concepts like "color" and "shape," no good reason exists for accepting it. And similar considerations apply to any kindred "read-off" position.

IV
10. Cases of blindsight (Weiskrantz 1986), hemineglect with apparent hemianopia (Reingold and Merikle 1990), visual extinction (Volpe
et al. 1979), and commissurotomy (Gazzaniga 1970 and Gazzaniga and LeDoux 1978) provide evidence that people sometimes perceive without experiencing phenomena at all.28 And if the claim that a human-like left brain is required for phenomenal representation (Gazzaniga 1985, 131-32; Gardner 1985, 331) is true, then it may turn out that nonprimate perception never involves phenomena. If so, realizing that phenomena have no colors, shapes, and so forth (the properties we ascribe to the external world) and realizing that we do not "read off" phenomena in order to perceive allow us also to understand that our perceptual system is not altogether different from those of nonprimate perceivers. On the other hand, if colors and shapes were only phenomenal properties, as the Phenomenal View maintains, then nonprimates, blindsight patients, and the other patients — if correctly described as experiencing no phenomena — would perceive no colors or shapes, whatever discriminations they make. However, we have evidence that blindsight patients can make hue discriminations, even though they deny experiencing any color phenomena (Stoerig and Cowey 1989, 1992; Stoerig, personal communication; Weiskrantz 1990, 254). And if perception required phenomenal experience, as defenders of the Phenomenal View have sometimes claimed, then blindsight subjects and the others — again, if correctly described as experiencing no phenomena — would not even perceive. Once we realize that we, in fact, acquire concepts of colors and shapes without photograph-like processes occurring in our conscious perception, then we need no longer feel compelled to make these counterintuitive claims. To the contrary, if we take phenomena not to be "read off" from but, for instance, to be among the noncognitive causes of perception, then we can understand how other brain states might play a substituting role in some cases of perception.
We do not have to deny perception to blindsight subjects, for instance, if it turns out that they do not experience visual phenomena. We are much better prepared to come to understand perception as all of a (however complicated) piece, namely, as primarily being a proposition-like cognitive state — though a sensory state (i.e., arising by appropriate means from
28. Actually, I don't think that the right interpretation of these cases is that no phenomena are experienced. Rather, it is only that the clinical subjects are not apperceptively conscious of their phenomenal states (see chapter 6). But it is at least empirically and theoretically possible that these are cases of perception without phenomena, and only the possibility is needed to make my point.
the senses) — rather than a phenomenal state.29 As such, the raison d'etre of the human system that accomplishes this cognition will, for all its other differences, almost certainly be similar to that of any other organism; and studying one system will almost surely throw light on any other. To consider human perception as being primarily a phenomenal state is to misunderstand both the nature of human perception and its relation to the perceptual states of other organisms. When these theoretical considerations are added to the previous arguments, then even versions of a "read-off" position less extreme than the Phenomenal View appear implausible. Both the earlier arguments and these theoretical considerations make it manifest that no one phenomenal state-type need be "read off" in the case of any particular perceptual property — no matter what interpretation (act—object or adverbial) one gives to phenomenal states. It is important to realize that we ascribe properties like color and shape to phenomena only in an elliptical and parasitic manner. For it follows that if we want to find a nonparasitic way of talking about phenomena and categorizing them, we first need to learn more about their own properties. We know little about phenomena.30 Only by understanding just how little we know about them and how misled we have been by them, will worthwhile research into their nature be able to proceed. What is their role in perception? Are they mere epiphenomena of perceptual processes or do they play an integral, effective role in perception? These questions need answering; but they will not be answered until we become aware of what we know and, more importantly, do not know about phenomena. One purpose of this chapter has been to show just how little we know. On the other hand, proposition-like cognitive processes, rather than phenomena, bear most investigation if we are to understand perception. Phenomena play a smaller role in our lives than we have tended to think. 
That claim is supported by the fact that the only names we use for the properties of phenomena are elliptical and parasitic ones. The further we advance the study of phenomena, the more they recede into the background, leaving much that is important in our lives intact even as they do. I am pretty certain, for instance, that much of what makes consciousness important to us has nothing to do with phenomena (see Part Two). So even though the mystery of phenomena has not been solved, it looks to be a less and less urgent matter that it be solved. At the same time, it would be nice to solve it. Several points need to be emphasized before proceeding. (1) I am not denying that phenomena play an important role in color-concept formation. They well may (see chapter 4). I am denying only that they do so by being colored. (2) I am not denying that our color phenomena and the scratches of the aliens are different. I am denying only that the difference lies in our phenomena being colored while the aliens' are not. Neither sort is colored. (3) Finally, I am not denying that the content of our color concepts is wholly dependent on our internal states; nor am I claiming that that content is determined by being mapped onto real external properties (see chapter 9). I am denying only that that content is supplied by phenomena being colored; and I am claiming that the content of those concepts requires that these properties be instantiated in the external world, if instances of them exist at all.

29. See chapter 1 for further arguments in favor of this position.
30. See, for instance, Dennett 1988b, 1991b, 1991c; Nelkin 1986, 1987a, 1994b.
3

Pains

Analyses of pain have played a large role in the history of philosophy. Sometimes pain is taken as the paradigm of sensory, and perceptual, experience. Sometimes it is taken to be quite different from other sensory states, not being a perceptual state at all. Most of the time, it is taken as a paradigm of conscious states. In any event, many would argue that the case made in the first two chapters, on behalf of the claim that phenomena play a lesser role in our lives than we have thought, cannot be extended to pain, because phenomena are essential to pain. And since pain plays an important role in our lives, phenomena must also. And for those who take pain to be the paradigm of sensory — and perceptual — states, phenomena, if central to pain, are most probably central to those other states, whatever my arguments to the contrary have been. And if, as so many think, pain is also the paradigm of conscious states, then phenomena are central to consciousness as well. Pain is a perceptual state;1 but it is by no means the paradigm of sensory, or perceptual, states. In fact, there is something quite odd and uncharacteristic about states like pain that distinguishes them from other sensory states. Similarly, while pains are conscious states, they are quite uncharacteristic conscious states. The arguments for this last claim are presented, not in this chapter, but in Part Two. In this chapter (see section III), I only gloss the arguments to come. In fact, this chapter anticipates several ideas that are developed more fully only in the next chapter and in Part Two. A discussion of pain provides a useful transition to the fuller theory. As regards the claim that phenomena are essential to pain, the
As each of Berkeley (1713/1965) and Pitcher (1974), though in noticeably different ways, and on noticeably different grounds, argued. My own view, presented in section III of this chapter, is considerably closer to Pitcher's than to Berkeley's.
response will not be simple. In section I, I argue, in analogy to visual states, that there is no natural kind, pain phenomena. These arguments continue the work of the previous chapters and build on it. In section II, I present a theory I once held, which goes beyond these arguments to claim that phenomena are not essential to pains.2 In section III, I offer a different theory, which retracts some of the claims of section II and which does take phenomena to be, in a sense, essential to pain. But it will be further argued that it is exactly because of the respect in which phenomena are essential to pain that pain is a paradigm neither of sensory experiences nor of consciousness.3
1. I begin by asking two questions, both of which may sound pretty silly, but both of which I will take quite seriously: (1) Can one experience the kind of phenomena one usually experiences when in pain without being in pain?4 and (2) Can one be in pain without experiencing pain phenomena?5 I argue that it is possible that the answer to both questions is, "Yes," though only a few years ago I, as well as almost everyone else, would have taken the questions to be rhetorical questions, the answers to which were obviously, "No." The first question has been discussed at length by Dennett (1978c), who considers cases of lobotomized patients and of patients who are given morphine after the onset of pain. Both sorts of patients claim to feel pain but say it no longer hurts them. Their remarks are puzzling, to say the least. And I think, along with Dennett, that we find them so puzzling because deep intuitions come into conflict in these cases. We believe that being in pain is being in what I will call a transparent mental state, such that people experiencing pains are in the best position to
2 See Nelkin 1986 for the original statement of this theory.
3 A version of this argument is given in Nelkin 1994c. This chapter draws heavily from that paper, and from Nelkin 1986.
4 I am going to abbreviate the expression, "the kind of phenomena one usually experiences when in pain," to "pain phenomena." But it is important to remember that the latter expression is an abbreviation. Read literally, the expression would make certain things I say too obviously true and make others, which are also true, seem obviously false.
5 It may not be the case that all philosophers until recently took these questions as merely rhetorical. Wittgenstein (1953), for instance, seems to have taken these questions quite seriously: The "Private Language Argument," especially the "beetle-in-the-box" example (100e, §293), is a result of taking these questions seriously. I think Wittgenstein answered, "Yes," to both questions (see §§3 and 4 below).
judge if they are actually in pain. We also believe that pain is a certain kind of phenomenal state — or, more weakly, that being in pain always involves a phenomenal feeling. We further believe that people cannot be in pain without hurting and that hurting is tied up, of necessity, with certain kinds of affects, beliefs, motivational states, and behavior, such as trying or at least wanting to do something to alleviate the pain, finding it uncomfortable, and showing signs of its discomfort — grimacing, groaning, and the like. (I call a cotemporal set of these affective-cognitive-motivational states an attitude.) In fact, if grimacing, groaning, and like behavior is occurring and we have no reason for suspecting pretense, then we believe we are in a position to be certain that another is in pain. Moreover, we believe that because pains hurt and people want them to stop, pains are a moral matter. These various intuitions conflict in the cases of the lobotomized and morphine-dosed patients: they say they are in pain and should be the best judges of that; yet they say they do not hurt and show none of the usual behavioral signs of hurting nor the usual motivational signs of wanting the pain to stop. There are, of course, many possibilities for resolving this conflict of intuitions. One might defend an identity of pain with pain phenomena by claiming that these patients, despite what they say, do not really experience pain phenomena. The changes in their brains brought about by lobotomy or morphine have caused them to be mistaken when they say they are experiencing pain phenomena. But then we have to surrender the intuition that pains are transparent. Perhaps we could save even that intuition if we said that the patients had been caused to forget how to use the concept of pain, or to forget what the English word "pain" means.
After all, if someone learning language said he or she was in pain but it didn't hurt, we would have a reason for thinking that that person did not yet understand the word, "pain." But lobotomized patients and morphine-dosed patients have no trouble recognizing pain in others. They also track changes in the intensity of pain in a way that corresponds to changes in the intensity of the stimulus being applied. And in other ways as well, they indicate that they apparently retain a perfectly good grip on the relevant concepts. Since we apparently have to sacrifice one of our intuitions, perhaps transparency is the one best sacrificed. However, the theory presented in section II maintains that we should instead sacrifice a different intuition: that pains are pain phenomena. Indeed, that theory rejects even the weaker claim that phenomena are necessary for pain.
While also rejecting the identity of pains with pain phenomena (though not the claim that pain phenomena are necessary for pain), the theory presented in section III maintains that we should also yield the intuition tying pain in an essential way to affective and motivational states. The two theories have in common a rejection of the claim that there is a natural kind, pain phenomena. In this first section, the jointly held position is established by considering cases similar to those that inspired the first two chapters.

2. Since our intuitions about pain states are confused, and at times contradictory, the position I argue for in this chapter, while based on empirical data, is not a merely descriptive claim but also a normative one, suggesting a more scientifically useful treatment of pain states and perhaps a more morally useful treatment as well. In order to make the claims about pains plausible, I argue by way of analogy with vision. Thus, the argument is only as strong as the arguments about vision and is also dependent on the strength of the analogy of pain phenomena to visual phenomena. Considering peculiarities of various nonhuman visual systems is a good place to begin. The argument emerges from trying to interpret these peculiarities. One interpretation is incompatible with my conclusion; but two other interpretations are possible, and these compatible interpretations are shown to be more plausible than the incompatible one. These same three interpretations are possible for pain processing, and the evidence for the reading incompatible with my thesis is no stronger than evidence for the compatible readings. Since evidence alone cannot decide among the readings, other theoretical considerations have to be examined. And these lead to the conclusion that there is no natural kind, pain phenomena. And if one accepts the most radical interpretation, there are even reasons to deny that phenomena are necessary for pain.

Consider eagles.
When soaring a mile in the air, they can spot a rabbit move. But eagles, like other birds, have a visual system that is in striking ways different from a human one. In human beings the eyes connect through the optic nerves; and these nerves cross at the optic chiasma, run through the thalamus (the lateral geniculate nucleus), and continue on to the striate cortex, generally called the "visual cortex" (area V1) because of its central role in human vision.6 V1 relays to
6 The visual cortex is also called the striate cortex because of the line-like pattern that characterizes it.
post-striatal cortical areas (V2, V3, V4, and several others), which also appear to be necessary for normal human vision. Eagles, however, are unlike human beings in several respects: most notably, they have a much diminished visual cortex; and the associated cortical areas are also much diminished. Eagles have little neocortex at all. The anatomical architecture of an eagle's visual system appears to be considerably different from that of human beings; and so it is questionable how homologous their visual brains are to ours. The question I want to raise, then, is what the visual phenomena of eagles are like. Three possible answers to this question are presented, and in each case an analogous something can be said about pain phenomena. It should be noted that the visual architectures of animals like frogs and flies are even more removed from that of persons, while those of the primates are, as would be expected, much closer to our own — though variably so. So we can also ask this question about flies and frogs, even more readily than about eagles, and reasonably ask it even about other primates. One possible answer to the question of what the visual phenomena of birds and other nonhuman animals are like is that, despite the noticeably different architectures, their visual phenomena closely resemble our own. Consider evidence both for and against this claim. At least four sorts favor it, but none is particularly convincing. The first is: Human beings see only when they experience these kinds of visual phenomena. This fact provides evidence that experiencing these sorts of phenomena is necessary for seeing. In §4, we will again (cf. chapters 1 and 2) see that the claim that a particular sort of visual phenomena is necessary even for human vision may itself be a shaky one; but even if it is sturdy, it would still provide only weak evidence that phenomena of nonhuman animals are just like those of human ones.
Providing a counterweight to it are the different architectures themselves. As we saw in chapter 2, different architecture (or different physiology or chemistry) seems to mean different phenomena also. But the second sort of evidence takes the architectural difference into account: Quite different sorts of physical things can be timepieces (or bombs, or any other functional kind). So it is not outrageous to think that considerably different physical systems all perform the same function.
The problem with this evidence, aside from its weakness (it only supports the possibility of the thesis being true — it does not give support to its being true), is that while "seeing" might be considered a functional kind, "experiencing a phenomenon" is not so obviously a functional state.7 While one would find it hard to deny that, despite the difference in neural architecture, eagles see, it is considerably easier to deny that eagles experience the same visual phenomena we do. Seeing seems to be an information-processing state, and therefore a functional state, while experiencing phenomena does not.8 It may just be a brute intuition that this difference exists (though the distinction is supported by the recovered trichromat cases discussed in the previous chapter). But, at worst, it is a question of intuition against intuition; defenders of the claim are in no better — though no worse — position than its detractors. The third kind of evidence resembles the second; but instead of being put in the context of functions, it is put in the context of causes and effects. Like the previous evidence, it is weak, purporting only to show the possibility of the reading, not its truth: A single effect can be caused in many different ways. A blasting cap may set off an explosion by the heat of a torch, by an electric spark, by a strong vibration, and so forth. In the same way, a similar visual phenomenon may be brought about by a visual system consisting of frontally placed eyes and a neocortical neural network like ours, or of laterally placed eyes with a nonneocortical neural network like an eagle's, or of a multi-eyed, nonneocortical network like a fly's. While we, the eagle, and the fly possibly experience similar phenomena that result from different causal processes, the analogy to explosions is only prima facie apt.
In the explosion case, the notion of same result is plausible because we can conceive of the very same explosion being brought about in these different ways. But what allows us to conceive of it as the very same explosion is that whatever the cause, we can conceive that the same scattering of molecules and other structural identities go to make up that explosion. But what structural identities could
7 Some philosophers do parse phenomenal states functionally (Dennett 1988b, 1991b, 1991c; Lycan 1987). But the functionalist account of phenomena is not at issue here. Those I am arguing against in this chapter, defenders of a phenomenal-identity theory of pain, would also reject a functionalist account of phenomena.
8 I think the belief that this difference exists lies behind Gunderson's (1971) distinction of program-receptive and program-resistant states.
make the phenomena brought about by these architecturally different neural networks the same phenomena? Since the neural networks are so different, it is questionable that the phenomena are the same. Perhaps the anatomical level is the wrong level to look for architectural identity. Perhaps the eagle's visual architecture is the same as that of human beings at another level of description. Or perhaps while the architecture is different, the physiology is alike in the eagle and human cases. These conjectures may be true. I certainly know no a priori reason to reject them, though the recovered trichromat cases provide an empirical reason to reject them. But as things now stand, the causal claim is a baseless conjecture, providing only the weakest evidence for the interpretation at issue. This whole discussion may only reveal my physicalist bias: If phenomena are anything, they have to be neural structures or states. Perhaps, instead, phenomena are nonphysical states that share a common structure despite the neural differences. But with this reply, the analogy with explosions becomes even more tenuous, because explosions are physical. Moreover, what makes having the same structure even conceivable in the case of the nonphysical treatment of phenomena is that such a treatment tells us nothing (and can tell us nothing?) about phenomena and so leaves open every possibility. Given the questionableness of the analogy, this third sort of evidence provides only the weakest of reasons to believe that nonhuman animals experience the same sorts of visual phenomena we experience. The fourth defense for the interpretation, like the previous one, is a causal claim but goes beyond it in purporting to offer positive evidence: There are human beings born with water on the brain. In cases where this condition is discovered early enough, a shunt can be inserted and the brain drained of excess fluid. In a few cases, patients go on to live normal lives. One such person even has an I.Q.
of 126 and, among other accomplishments, took a First-Class Honors Degree in mathematics. Remarkable about such people is that imaging scans reveal them to have mostly empty skulls, with just a few millimeters thickness of brain tissue attached to the inside of their skulls. Their heads are mostly hollow! Yet these people apparently make the same sorts of visual discriminations we make.9 Surely, these people provide evidence that very different neural architectures can result in similar phenomena.
9 See Paterson 1980. I would like to thank Fred Dretske for calling my attention to this article.
But at least two replies can be made. (1) "None of these people has been autopsied, so the claim that their brain organization is wildly different from ours — or even as different as an eagle's from ours — remains to be shown." While this reply is correct, it is weaker than the evidence it is replying to: that these people's brain architecture closely resembles ours is pretty improbable. (2) The second reply is better: "The conclusion drawn from the cases is unwarranted. We have no good reason to think that these people experience phenomena like ours. That they behave similarly to us, including making similar discriminations, is an undoubted fact. But that is an insufficient reason for believing that their phenomena are like ours. The recovered trichromat cases should make us wary of jumping to the conclusion 'similar phenomena' simply on the basis of similar discriminations." Nevertheless, it is possible that both these thin-brained people and nonhuman animals experience visual phenomena similar to ours despite the difference in architecture; and these "thin-brain" cases perhaps provide some evidence. To be weighed against all these different sorts of evidence, however, are the recovered trichromat cases discussed in the previous chapter. These cases make it plausible both that differences in neural architecture (physiology, chemistry) alter the phenomena experienced and that similar visual discriminations can be made when quite diverse phenomena are experienced. These recovered trichromat cases make us wonder what the phenomena experienced by the thin-brained people are like, and they make doubtful the "same behavior/same phenomena" claim. Similar issues arise for pain. Consider a few interesting facts about human pain: Cultural differences seem to influence when one feels pain. People of Mediterranean descent, for example, apparently feel pain at lower levels of noxious stimulation than do people of Nordic descent.
Yet, the stimulus intensity level at which both Mediterraneans and Nordics experience phenomena of any kind seems to be the same, and Mediterraneans and Nordics equally track increases and decreases of stimulus intensity. But for Mediterraneans those phenomena are described as pain phenomena earlier on than they are for Nordics. A second fact is that people with chronic pain — causalgia, neuralgia, phantom-limb pain — can have bouts of pain brought on by tension and worry. A third interesting fact concerns people who were lightly shocked when they made mistakes while being tested with a word list.
None of the experimental subjects found the shocks painful unless the word "pain" or a close relative was on the list. Yet another fact is that making people less afraid often also lessens their pain. For example, certain "preventives" for dental pain, such as a white noise machine was supposed to be, worked only for dentists who had strong personalities and who talked to patients beforehand, telling them that the machines had been successful in preventing pains. With doctors of weaker personality or with ones who had not talked up the virtues of the machine prior to the procedure, the machines by and large failed to prevent pain.10 Consider, too, the following study (Glass et al. 1973): volunteer subjects were to push a button as soon as possible after being given a six-second, somewhat painful shock (this was at a level each person had previously identified as painful). Several shocks were administered in each testing period. The subjects were told that reaction time was being measured. Afterwards, the subjects were divided into two groups. The control group was told that the experiment would be the same except that the shocks would last only three seconds. The experimental group was told instead that the shocks would be shortened to three seconds if their reaction time was of a sufficient speed. Actually, both groups received the same number of three-second shocks (speed threshold played no role). After both sessions were completed, each group was surveyed as to the quality of the pain experiences. Members of the experimental group reported diminished degrees of pain relative to their pre-test reports at the same level of shock intensity, although their autonomic response measurements presented a like profile to that of their counterparts in the control group, who reported no diminution of pain level.
While this judging of pain is after the fact, it provides at least some evidence that the experimental subjects did feel less, or sometimes no, pain.11 The items on this list suggest that there is a substantial cognitive input
10 These examples are due to Melzack 1973, and appear there passim. Many of them are reprinted in Melzack and Wall 1983.
11 The results are more complicated and more numerous than I have presented. I recommend the full study to the reader. For instance, if the subjects, instead, got six-second-long shocks following the second set of instructions, subjects who thought they would be in control did worse than those who did not. The likely interpretation here is that thinking one is in control and then finding out one is not, or is not able to succeed, is psychologically more damaging than not starting with the belief in the first place.
into our very feeling of pain, not only into our handling pain once we have felt it (see Melzack 1973; Melzack and Wall 1983; Sternbach 1968 for further evidence for this claim). Moreover, the cognitive input involved in these cases is fairly complex. In human beings, such complex cognitive input is almost certainly neocortical. And that fact means that the neocortex plays an important role in human pain feeling. But except for some of the other primates, nonhuman animals have little or no neocortex. Their neural architecture is in this instance also significantly different from our own. So we can once more ask whether the phenomena they experience in pain are anything like ours. The same kinds of evidence and counterevidence as given for the vision cases can be brought forward. But in no way is either convincing. We have, once more, similar behaviors leading us to believe there must be similar phenomena and different architectures leading us to say there must be different phenomena. No evidence compels us to come down on the side of same phenomena in the case of either vision or pain. The readings of the facts presented in the next two subsections, and additional cases discussed, will instead enable the reader to see that identifying pains with pain phenomena is a mistake.

3. The following reply may at first sound as if it supports the first interpretation: "Because of the very different architecture of persons on the one hand, and eagles, frogs, and flies on the other (and because of the differences among the others as well), there is indeed good reason to think that human visual phenomena are quite different from those of eagles, frogs, and flies, as well as for thinking that the phenomena of each of these are different from those of each of the others. The class of visual phenomena is just much larger than we might have thought." I would agree with this reply because it supports my case.
To see that it does, consider the question of why all these phenomena are visual phenomena. Implicit in this understanding of the facts is that these phenomena are all visual phenomena only because they are all associated with visual processing, that is, processing that begins with the eyes and ends with judgments about the world.12 The phenomena themselves can be as different as one pleases from each other. Sorting the phenomena as visual — as opposed to haptic, aural, or the like — is
12 This is pretty much the conclusion of the first two chapters, as well.
parasitic on a certain kind of processing that begins with the eyes. The processing is primary, serving as the sortal principle. But this principle is independent of the phenomena themselves, and in disregard of their nature. As the reply admits, phenomena we call "visual phenomena" do not seem to form a natural kind. Any phenomenon, whatever its intrinsic properties, will be visual as long as it plays an appropriate role in visual processing. "All vision involves visual phenomena" is a much more trivial truth, if it is a truth,13 than was ever imagined by its defenders. Being involved in vision makes phenomena visual, while no natural kind of phenomena makes a process visual. It is conceivable that phenomena we think of as itches are exactly like those phenomena eagles experience when they see colors (see the previous chapter).14, 15 For all that, our itch phenomena are not visual; but the similar phenomena of an eagle would be. It is conceivable that the same kind of phenomena can be visual or tactile, depending on the kind of processing it is associated with (compare the synaesthesia case from Gregory [1988] cited in chapter 2). The kind of processing counts, not the kind of phenomena. If natural kinds of phenomena exist, they cross-cut perceptual kinds: the pairs, "same perceptual kind/different phenomenal kinds" and "same phenomenal kind/different perceptual kinds," are both possible. The same point, of course, can be made about pains. If different sorts of phenomena can be experienced as pain phenomena and if the same sort of phenomena can be experienced both as pain phenomena and not as pain phenomena — as the cases suggest — then sorting phenomena as pain phenomena requires a criterion for pain other than the phenomena themselves, and this criterion also accounts for our labeling a phenomenon as a pain phenomenon. Sorting phenomena as pain phenomena requires consideration of more than phenomena themselves:
15
Its truth is called into question in §4 below. [Editor's Note.] This appears to be the second of the three answers to the question of what the visual phenomena of eagles are like (see p. 64). Eagles experience visual phenomena, but those phenomena are (or may be) very different - as different as one pleases — from the phenomena we experience when we see. If eagles make color discriminations at all. But whether they can or not is fairly irrelevant here: for my point, the possibility is good enough. While many nonhuman animals are not particularly good at color discrimination, at least at hue discrimination, others, far removed from us phylogenetically and possessing extremely different visual architectures (the freshwater fish mentioned in previous chapters, for example), do seem to make many of the same color discriminations we do. See, for instance, Hurvich 1981, 138.
either of attitudinal states or of another sort of state. One might experience, only when one is not in pain, a kind of phenomenon that an eagle experiences only when in pain. The phenomena might be of just the same kind, only a human being isn't in pain. There is reason to think that something analogous is occurring in the case of the "control" instructions of the Glass et al. (1973) experiment. Both groups can identify the intensifying of the stimulus; both groups have similar neural makeup. And both groups show similar autonomic responses to the stimulus. Yet one group feels pain and the other does not. Pain is more than — and different from — a phenomenon.16 Wittgenstein (1953, 100e, §293) makes exactly this point in his beetle-in-the-box example. The phenomenon is relatively unimportant. What is in the box doesn't matter. It does not matter that the contents of an eagle's "box" are quite different from those of a person's: both eagles and persons feel pain. Similarly, that the contents of the control group's "box" would be the same as those of the experimental group's wouldn't matter: only the first would feel pain. And if the thin-brained people discussed previously experience phenomena considerably different from ours when they see red or feel pain, so what? They still see red and feel pain. The claim that pain phenomena do not form a natural kind can be introspectively reinforced by considering the diverse phenomena experienced when one's head aches, when one's skin is punctured, and when one's tooth nerve is struck. As said, the principle for sorting these noticeably different phenomena as pain phenomena involves more than the phenomena themselves. What more? Defenders of the identity of pain with pain phenomena may interject here that it is exactly at this point that the analogy between pain and vision breaks down. While agreeing that the control and experimental groups in the Glass et al.
experiment experience similar phenomenal states, one might argue that the relevant differences between them are phenomenal as well: both groups experience the same type tingling phenomenon from the shocks, but the control group experiences in addition a phenomenal hurtfulness (or negative hedonic tone — see Goldstein 1989; Morillo 1995). Such an additional phenomenon accounts for all the other cases (the Mediterranean/Nordic experiment, the

16 Among psychologists, H. R. Marshall suggested something like this way of looking at pain as long ago as 1894 (cited in Melzack 1973, 147-48).
word-"pain"-on-the-list experiment, and so on) as well. And it is this additional phenomenon that is pain. It is important to understand exactly what this theory is claiming: Every pain experience is a complex phenomenal state, the feeling of two different phenomena at the same time. I agree that pain is a complex state (and so I will argue), but I disagree that its complexity consists in two phenomenal states being simultaneously experienced. I have no direct, or quick and dirty, arguments against the two-phenomena view. I would defend the view I will present (either of them) against it on theoretical grounds, i.e., the overall theory of mind that treats pains as I will is better than any theory that treats pains as a two-phenomena experience. What is surprising is that if the two-phenomena theory is correct, I could be in doubt about it. I should know simply from the fact that I experience the hurtfulness phenomenon that the two-phenomena theory is correct; and, of course, I don't know it. Perhaps one could argue that we just are not incorrigible about our phenomena or that phenomena are not always transparent. But intuitively, that I could experience a phenomenon type over and over in my life and never know I'm experiencing it makes it an altogether strange sort of phenomenon. One certainly cannot just assume that such phenomena exist. If, on the other hand, one insists that such phenomena are transparent, then I would have to deny that I experience them — although I am quite certain I feel pain. And so that fact would in and of itself show me that the two-phenomena theory is mistaken. Of course, all pains have in common their hurtfulness. I will not deny that fact. The question is what constitutes that hurtfulness. And introspection fails to decide it. I do introspect how different toothache phenomena are from headache phenomena, but I do not introspect some other phenomenal quality that is their common hurtfulness.
Or if I do, I do not introspectively know that I do. As far as I can see, nothing motivates the two-phenomena theory except a prejudice that pain must be phenomenal.

II
4. An alternative possibility to a second phenomenon is that the "more" needed for pains, and for sorting phenomena as pain phenomena, is constituted by the attitudes. We are brought to sort these otherwise diverse phenomena as a single kind by the similarity in attitude expressed in each case. Introspectively, when one is deciding whether a particular phenomenon is a pain or a tickle, or a pain or an itch, it doesn't seem as if one is deciding how the phenomenon feels; rather, it seems as if one is deciding what the appropriate attitude is — for instance, whether what is going on is harmful or fun, and the like. The phenomenon is what it is. Labeling it as a pain or tickle phenomenon is dependent on the attitudinal context in which it is embedded. Deciding whether a phenomenon is a pain or tickle phenomenon is deciding what attitudes it accompanies. This interpretation does salvage the analogy of pain to vision cases. And the interpretation of the vision and pain cases in §3 (that there is no natural kind, pain phenomena) is at least as plausible as the interpretation in §2. When reinforced by the arguments of the first two chapters, it is more plausible. The reading in §3 wrenches pain apart from pain phenomena in a way quite similar to how it wrenches vision apart from visual phenomena. While that reading provides enough material for making the point to be made in §5, an even more radical interpretation needs to be considered. The attitudinal theory of pain, when more fully understood, reveals itself to be much more radical than may have so far appeared. Visual cases continue to provide support for the claims to be made about pains.

Recall the blindsight cases discussed in the previous chapters. Using a patient whose striate cortex had been partially damaged by a tumor and who claimed to be blind in his left field of view, Weiskrantz and his cohorts (Weiskrantz 1977, 1986) conducted experiments in which simple shapes, such as an X or an O, were held several feet away from the patient in his "blind" field of view. The patient was then asked if he saw anything and each time denied doing so.
Asked to guess the location of the object by pointing at it, the patient invariably "guessed" right. Moreover, if the object was above a certain critical size, the patient was able to discriminate whether it was an X or an O ninety percent of the time, though in both cases he took himself to be making wild guesses. Weiskrantz's own hypothesis concerning these patients (several others have been similarly discovered) is that there are two visual systems in human beings: the usual geniculate-cortical one described earlier and a second, older, midbrain system. When the newer system breaks down, the older system, which presumably more closely resembles those of nonprimate animals, is reactivated. Two relevant claims can be based on these cases. The first is that visual perception seemingly can occur even in the absence of visual phenomena. Blindsight subjects certainly deny their existence. Yet, their perception can reasonably be considered "visual" because it concerns objects at a distance, requires their eyes to be open, to be in good working order, and to be pointed in an appropriate direction. So vision may not require visual phenomena at all.17 The second claim is that blindsight cases provide evidence that an intact striate cortex is necessary for experiencing visual phenomena. One can grant that evidence from a single kind of case is bound to be weak; nevertheless, the second claim is at least possibly true. And for the purpose of this section of the chapter, that is all that is needed. Also consider commissurotomy cases.18 It is well known that commissurotomy patients deny seeing anything in their left fields of view; but even while denying it, their left hands grasp an object whose picture is flashed in a tachistoscope to their left fields of view. Moreover, in later experiments subjects' left hands — if the general directions were appropriately different — grasped, instead, an object (or picture of an object) only conceptually related to the object pictured in the left field of view (see, for instance, the discussion in Gazzaniga 1977). In these more complex experiments, commissurotomy subjects process perceptual judgments considerably more sophisticated than those of blindsight patients. Perhaps this fact is explained by their having an intact striate cortex. That assumption certainly seems reasonable.
However, if that is the correct explanation, then even an intact striate cortex actually active in perception may not be sufficient for experiencing visual phenomena; for these subjects, like blindsight subjects, deny experiencing appropriate visual phenomena.19 As with blindsight cases, two relevant points can be made: (1) Quite sophisticated visual processing can apparently occur without one's experiencing visual phenomena, and (2) there is evidence that having a linguistic ability to describe their visual phenomena is a necessary condition for experiencing visual phenomena. While both claims may be false (I think they are false), it needs to be shown that they are false. Surprisingly, they are not obviously false; and blindsight and commissurotomy cases lend them evidential support. When combined with the eagle case, these results provide a reason to think that when eagles see, they experience no visual phenomena whatsoever;20 for eagles' visual systems are quite different from ours, and there is no reason to think that eagles have the relevant linguistic abilities. Their behavior, combined with a prejudice that visual phenomena are necessary for seeing, inclines one to think that eagles must experience visual phenomena. But blindsight and commissurotomy cases suggest that even in human beings, vision occurs in the absence of visual phenomena. It is at least possible that eagles, frogs, and flies, while seeing, experience no visual phenomena whatsoever. Wittgenstein (1953, 100e, §293) understood and appreciated this point also. For as he said, it doesn't matter if the box is empty, that there is no beetle in it. And, of course, he was talking about pain rather than seeing. For one can reach the same conclusion about pain. It is possible that at least some other animals experience pains even though they experience no pain phenomena. What matters is that the appropriate attitudinal states function in their experiences in ways similar to the way similar attitudes function in our experiences. And these attitudinal states, according to this theory, constitute pain.

17. Campion et al. (1983, 434ff) claim that, when pressed, several of the people who at first denied experiencing phenomena admitted to experiencing some after all. However, the patients describe their phenomena as "a pinprick," "a tickling," or "gunfire at a distance" (435)! So, at worst, such descriptions simply strengthen the claims of §3. Actually, I think vision — even blindsight — does involve visual phenomena (see especially chapter 6); but the evidence cited so far is certainly compatible with their absence.
18. Since such cases are well known, I will not go into great detail. For further elaboration of such cases, see Gazzaniga 1970 and Gazzaniga and LeDoux 1978.
19. I say "appropriate" because they do acknowledge experiencing visual phenomena associated with their right field of view.
Reasons can be given in support of this theory, some of which have been presented in previous subsections, and some of which — both theoretical and moral — will be presented in §5. But for the moment, consider two thought experiments. (1) Suppose that someone, say one of the thin-brained people discussed in §2, experiences no pain phenomena but believes he or she does. That is, this person behaves as we do, has the same emotional responses we have, shares the same beliefs and desires we have about cutaneous and visceral damage, can say when a noxious stimulus is intensified, and so forth. Given the way we learn phenomenal language, this person might well come to think — however wrongly — that he or she experiences pain phenomena. Should we say that this person doesn't really have pains? Or, perhaps, that this person doesn't really know what the word "pain" means? There would be little point in saying either of these things. Perhaps, though, such a person couldn't have the appropriate attitudes nor make all the discriminations we make (a burning pain versus a dull ache, say). That is, perhaps this "possibility" is fanciful and impossible. If so, its impossibility is not a conceptual one but an empirical, theoretical one. At present, we just do not know whether such a state of affairs is possible or not. No a priori grounds rule it out. (2) Next, consider people who apparently never feel pain. These people tend to die young because of injury or accident, or because of visceral damage brought about by a failure to change positions, especially while sleeping. Suppose such a person, in fact, experiences phenomena just like the ones I experience when my stomach is bothering me or when I cut my arm. But suppose these phenomena do not alarm the person, or cause the person to look at the arm or worry about her or his stomach, or cause the person to want the phenomenal experience to cease. That is, the person has none of the ordinary affective-cognitive-motivational responses but does experience the phenomenon. If we say such a person is in pain after all and we (including the person himself or herself) have just been deceived about him or her all along, we will only muddle things. §5 contains reasons for not saying it. Trigg (1970) holds that this person would be in pain after all, while the thin-brained person of the first thought experiment would not be. That is, Trigg identifies pains with pain phenomena.

20. [Editor's Note.] This appears to be the third of the three answers to the question of what the visual phenomena of eagles are like (see p. 64). Although the eagles see, (it is possible that) they experience no visual phenomena.
As Trigg sees, some of our intuitions about pains must be surrendered. But the argument of this chapter is that Trigg gives up the wrong ones. He has to surrender more of them than I do (whichever of the two theories I present in this chapter is considered), he has a much harder time accounting for all the facts, and he does not fit those facts into a general theory of sensations. His best argument against a theory like the attitudinal one now under consideration is that in states like nausea the attitude is similar to pain, but we don't consider nausea to be pain. His claim is that the difference lies in the phenomena experienced. But the differences among phenomena we do call pain phenomena (those accompanying a cut, a toothache, a headache) are at least as great. So his answer is unlikely to be right. A defender of the attitudinal theory might claim that "pain" and its synonyms originally got used for attitudes accompanying overt bodily injury. It then, quite naturally, got extended to cover internal states like appendicitis, toothache, and headache. So far nausea has not been included but only because the circumstances and attitudes are just different enough that English speakers have not extended the concept yet again. No natural kind of phenomena prevents us from doing so. And it may well be that a more fully developed science will treat nausea as a kind of pain. One fact that may make it plausible to do so is that toddlers do not distinguish pain and nausea, or at least do so only with difficulty (Leach 1989, 533).21 As in the previous case, one might want to charge the example with being too fanciful. One might insist that anyone experiencing pain phenomena would have to share the appropriate attitudinal states. But there is no reason to consider this claim as a priori true. And as an empirical claim, it seems likely to be false, as the cases of the lobotomized and morphine-dosed patients mentioned in §1, and as the cases in §3, illustrate. Thus, in the two thought experiments, the attitudinal theory recommends that we treat the thin-brained people, despite their lack of phenomena, as being in pain and that we treat those in the second thought experiment, despite their experiencing our pain phenomena, as not being in pain. The theory presented in section III agrees with the attitudinal theory about the second thought experiment, though not about the first.

5. The discussion of §§3 and 4 shows that phenomenal kinds and perceptual kinds (including being in pain) do not make a neat fit.
At best, as shown in §3, the two sorts cross-cut each other; and, at worst, a perceiver can have each without the other, a claim §4 shows to be compatible with the evidence now available to us. However, one can go beyond the evidence and appeal to larger theoretical concerns to urge that being in pain be treated as an attitudinal state rather than as a phenomenal one.

21. Trigg's objection also causes difficulty for the two-phenomena theory. If its defenders would reply that the hurtfulness of pain phenomena is just different from the hurtfulness of nausea phenomena, their reply would have an air of ad-hoc-ness to it. What supports the reply? Introspection? Not in my case.
First consider the theoretical reasons.22 If we understand pain to be an attitudinal state, then it can be treated similarly to other perceptual states. That is, it can be treated as a functional state, just as psychologists now, by and large, treat vision. In fact, if one looks carefully at the Melzack-Wall Gate Theory of Pain (Melzack 1973; Melzack and Wall 1983), one sees that it is exactly in this way that they treat pain.23 We have some of the tools, both intellectual and physical, for dealing with and understanding functional states. We have none at the present time for dealing with phenomena. When we look at the facts, we see that in sorting experiences as pains the attitudinal criteria are primary in any case, that phenomena get sorted only on that basis. To understand this fact is to realize the secondary role that phenomena play in pain. Since phenomena play only a secondary role in pain and since we have no tools for dealing with phenomena, research on pain that takes the attitudinal criteria as definitional for pain is much less likely to get bogged down in problems that are at present unresolvable but that at the same time are really inessential to an understanding of pain. Research that thus avoids questions about phenomena is more likely to get ahead with the job and with the expected applicational benefits of a good pain theory, namely, providing actual relief for the sufferers of unnecessary pain. Given that the attitudinal criteria and the sorting criteria for phenomena as kinds in themselves cross-cut each other, if we want to save pain as a kind, we will have to choose between one sorting system and the other. Undoubtedly, sorting by the attitudinal criteria fits a greater number of our common intuitions about pain (though certainly not all of them).
Cases like the Nordic—Mediterranean one cited previously, if correctly interpreted, suggest that if we were to take phenomenal kinds as basic, we would have to say either that the Nordics are in pain, despite their claims to the contrary, or that the Mediterraneans are not, despite their claims to the contrary. Neither move seems acceptable. Or we would have to take the two-phenomena view, which makes phenomena especially mysterious and is also insufficiently motivated. One might ask why we should keep pain as a kind at all. That is a legitimate question. The response is that science should reject the concepts of ordinary life only where it absolutely has to. After all, pain is one of those states we set out to explain in doing psychology or neuroscience; and it would be quite odd if it turns out that no such state exists. Not impossible, but certainly odd. Moreover, prima facie, it is likely that there will be lawlike relations both between pain and its causes and between pain and its effects. If no such lawlike relations obtain, only empirical research can determine that they don't. There is no good reason to assume ab initio that no such relations obtain. If we understand pain according to the attitudinal criteria, then science need not give up pain as a kind; and the scientific concept will preserve many common beliefs about pain. If, to the contrary, phenomena are defining of pain, it may turn out that many nonhuman animals — since their phenomena are likely to be different from ours, or even totally lacking — never suffer pains, despite their attitudinal states. That conclusion is hard to swallow. And which phenomena are pain phenomena? Ache phenomena appear to be like stabbing phenomena only in their attitudinal accompaniments. And that claim, if true, reasserts the primacy of the attitudinal criteria. The attitudinal theory is not denying the existence of pain phenomena but maintaining only that they are secondary and inessential to an understanding of pain. The attitudinal theory even allows that phenomena play a causal role in pain. It claims, however, that this role might be played by nonphenomenal states in other organisms. The claim is that we can solve the puzzle of pain without solving the problem of phenomena, and the first is more easily solvable than the second. Besides these theoretical considerations, there are important moral considerations. As said in §1, pain is a morally important concept.

22. The discussion of the next several paragraphs needs to be read as if prefaced by the phrase, "According to the attitudinal theory." Several assertions made in the discussion as if they are true will be challenged in section III.
23. Melzack (1973, personal communication), for reasons to be presented in section III, believes that phenomena are a necessary component of pains.
Many recent defenses of animal rights are based on the claim that nonhuman animals are like us in that they feel pain. But as was argued for in §§3 and 4, it may well be that other animals experience entirely different phenomena from ours or even none at all. Our concern for nonhuman animals should not turn on this empirical possibility. It should, instead, turn on whether nonhuman animals display attitudinal states similar to ours in the face of cutaneous and visceral damage. And so they seem to do. Since the word "pain" carries such important moral baggage, it would certainly behoove someone concerned about the lives of nonhuman animals to maintain (on the basis of their attitudinal states, and in disregard of the phenomena they experience) that nonhuman animals are in pain. Descartes' mistake, one might claim, was not in thinking that nonhuman animals experience no phenomena (as I argued in §4, that is a conceivable state of affairs), but in thinking that these phenomena are what matter and all that matter.24 The attitudinal theory of pain holds that it is the attitude one takes toward these phenomena, or more boldly, simply being in a certain sort of attitudinal state, that matters. Thus, for both theoretical and moral reasons, the attitudinal theory argues that it is better — given how phenomenal sorts are cross-cut by attitudinal criteria — to take attitudes rather than phenomena as constitutive of pain. Pain, on this view, is an attitude, not a phenomenon.

III
6. The attitudinal theory is plausible. Indeed, I once promoted it; but I now think it incorrect. However, I do not reject it completely. Most especially, I continue to think that identifying pains with phenomenal states is a mistake. The arguments for that thesis, especially when conjoined with those of the first two chapters, seem to be quite clearly correct. The thesis I wish to reconsider and revise is the claim that pains are attitudinal states. The result of this reconsideration will be a theory that is more coherent, more detailed, and more akin to commonsense beliefs about pain, salvaging even more of those beliefs than the attitudinal theory does. The attitudinal theory draws a good deal of its support from the analogy to blindsight and commissurotomy cases, but those analogies are flawed in two ways: (1) There are theoretical reasons to believe that blindsight and commissurotomy patients experience phenomenal states after all, albeit not apperceivable ones. The case for this claim will begin to be made in the next chapter, and chapter 6 will develop the case begun there. (2) Unlike with other of their perceptions, after the corpus callosum has been severed, commissurotomy patients continue to say they feel pain when presented with a pain stimulus on the left side of the body, even while denying other left-side perceptions. My reasons for rejecting the attitudinal theory of pain in favor of the one to be presented in this section go beyond these breakdowns in the analogy. At the time I first presented the attitudinal theory (1986), I was already aware of the commissurotomy exception (and called attention to it in a footnote) but didn't think it important. Since the appearance of the attitudinal theory, I have been developing a theory of sensations, consciousness, and mind in general25 that was incipient in that pain theory and that is the subject matter of the remainder of this book. The new pain theory, the evaluative theory of pain, makes the tie between a pain theory and other segments of the broader theory of mind tighter; and certain saliencies in that broader theory of mind are better exposed to the light.26 For the purposes of this chapter, only a few elements of the broader theory need to be anticipated. At this juncture, the most important is the thesis that "consciousness" really names three separate, dissociable states, states I call "sensation consciousness" (CS), "first-order propositional-attitude consciousness" (C1),27 and "apperception consciousness" (C2). Only CS and C2 need occupy us for the present. CS may be thought to consist of phenomenal states.28 By "apperception consciousness" (or, more simply, "apperception") is meant a second-order state that has occurrences of CS and C1 states as its content. As I use the term, apperceiving a state S does not require paying attention to S, apperception is not incorrigible, and apperception is unlike perceiving (the mode of representation is different, and, most probably, nothing in apperception plays the role of sensory organs in perception — for an extensive discussion of apperception, see chapter 8). In addition, apperception itself has no phenomenology to it (nor does C1 — see chapters 5 and 6). In the sense that there is something it is like for an organism to experience phenomenal states (Nagel 1974), there is nothing it is like for an organism to apperceive. Supported by theoretical considerations and by recent laboratory and clinical findings (Keating 1979; Stoerig 1987, Stoerig and Cowey 1989, 1992; Stoerig personal communication), I will argue in chapter 6 (note 29) that one could be in phenomenal state S while not being apperceptively aware that one is in S. And when one is apperceptively aware of S, the only phenomenality occurring is the phenomenality of S. Apperception adds nothing phenomenal to S. These theoretical claims about the nature of consciousness play an important role in the discussion to follow.

7. Let me begin that discussion by first considering the thesis I am most anxious to alter. In section II, I argued that pains, rather than being phenomenal states, are best considered as attitudes, where I parsed attitudes as made up of belief states, affects (liking/disliking, among others), and motivational states. However, as Morillo has pointed out to me, exactly what the appropriate beliefs are supposed to be is largely left unsaid and undeveloped. I hope to rectify that shortcoming — at least to a degree — now. As she correctly points out (Morillo 1995), on certain possible interpretations of the attitudinal theory, the belief component is either trivialized or leads to an infinite — and unenlightening — regress.

24. The position here ascribed to Descartes, though often ascribed to him, is almost certainly an oversimplification of his actual position. In fact, there is good reason to believe that Descartes considered the relation between being in pain and pain phenomena in a way quite similar to the attitudinal theory. He did not deny that nonhuman animals experience pain phenomena. Rather, he denied that they suffer pains. Since he believed suffering involves proposition-like cognition and since he denied nonhuman animals have such cognitive states, he concluded that nonhuman animals do not feel pain. As section III of this chapter will make clear, I disagree with Descartes about whether nonhuman animals have cognitive states, and because of that disagreement, also disagree with him about whether nonhuman animals feel pain.
25. Nelkin 1987a, 1990, 1994b, 1987b, 1989a, 1989b, 1993a, 1993b, 1994a, Forthcoming-a.
26. The motivation for my reconsideration of pain has two origins: reading a book manuscript by my colleague, Carolyn Morillo (1995), and preparing lectures on pain for a class on Wittgenstein's Philosophical Investigations. In trying to better articulate my view, in contrasting it both to Morillo's and to Wittgenstein's, I found that these changes in the theory were required.
27. From now on, I will shorten "first-order, propositional-attitude consciousness" to "PA-consciousness." C2 is itself a propositional-attitude consciousness, albeit a second-order one; but since I already have another name for it, "apperception," I hope no confusion will be caused.
28. Though the actual analysis of phenomenal states is more complicated than I have so far indicated — see §8 below, as well as chapter 4.
The revised theory's treatment of the belief component of pain more clearly escapes those pitfalls. If attitudes are constitutive of pain, then it is conceivable that there are creatures who have pains but experience no phenomenal states. In Wittgenstein's (1953, 100e, §293) metaphor, there may even be no beetle in the box. It is this pains-as-attitudes thesis I now wish to retract. Phenomenal states are necessary for pains; though in keeping with the thesis that these phenomena do not form a natural kind, no particular kind of phenomenal state is necessary.30

29. See also Nelkin 1989b, 1993a, 1993b, 1994b, 1994c, Forthcoming-b, Forthcoming-c.

8. Let me now sketch the evaluative theory. In §9, I will consider a case that allows the reader to see more clearly the difference between this theory and the attitudinal theory. Then in §10, several consequences of the evaluative theory will be spelled out, including how it nicely fits into a larger theory of mind. Finally, in §11, several difficulties for the theory will be raised and resolved. Earlier, moral advantages were said to accrue to the attitudinal theory; and even earlier, it was noted that pain has a moral importance. One reason for the moral importance of pain, I would like to suggest, is that, like other moral terms, "pain" is at least partially an evaluative term. When one ascribes "pain" to oneself, one is not merely describing a condition of oneself. One is also evaluating that condition.31 That is, pains are complex states, with two components: an occurrent state of the organism and an evaluation of that state. An evaluative element is essential because the concept of pain is evaluative as well as descriptive. On consideration, the best candidates for the occurrent states evaluated are phenomenal states. So I am now willing to concede that every occurrence of pain involves an occurrence of a phenomenal state. However, as the previous arguments show, (1) no particular phenomenal state is necessary for that role (that is, there is no natural kind, pain phenomena) and (2) the phenomenal state by itself does not constitute pain (unevaluated, or evaluated differently, there is no pain, even though the very same phenomenal state occurs, as, for instance, the Glass et al. study seems to illustrate). Pains are bad, but no phenomenal state in itself wears that evaluation. Phenomenal states may have intrinsic qualities (I think they do), but being a pain (hurtfulness) is not one of them.
To have a value is, in this case (if not in all), to be evaluated. Obviously, the key term in this analysis is "evaluation." What is meant by it? The fuller statement of the components of pain is that there is a phenomenal state (a CS state) and a spontaneous, noninferential evaluation of that CS state as representing a harm to the body. The evaluating state is a C2 (an apperceptive) state. Only when the two states occur together as a complex state does an organism experience pain. According to the attitudinal theory, pains are said to be constituted by the attitudes; and it is said that the appropriate attitudes include certain beliefs. The statement is only half true if the evaluative theory is correct. A judgment is necessary but not sufficient for pain (as was also argued previously): a phenomenal state is also necessary (as was denied earlier). Moreover, neither affects nor motivational states are necessary (the contrary being maintained before). The attitudes are only contingently connected to pains. Like the two-phenomena theory, the evaluative theory claims that phenomena are necessary to pains and that pains are complex states. But the evaluative theory assesses that complexity differently: Pains consist of a phenomenal state and the simultaneous, spontaneous appraisal of that state as representing a harm to the body. No second phenomenal state is required. Further clarification is required. When I say that the phenomenal state is evaluated as representing harm to the body, the claim is ambiguous. It does not mean that the CS states are themselves harmful to the body. CS states, as far as I can tell, are always brain states; and those brain states are rarely harmful to the body. It would be most peculiar if the result of evolution were that we always made mistaken evaluations, evaluating as harmful the brain states themselves. Rather, the evaluation is a kind of second-order representational state (a C2 state).

30. Others have argued for a thesis somewhat similar to the present one. Stephens and Graham (1987) argue for a thesis of this general sort, though in important ways their thesis is closer to the attitudinal theory. Green (1991) also presents a thesis somewhat similar to the present one, though his is a desire-based, rather than judgment-based, theory.
31. I think something like this insight led to Wittgenstein's (1953, 89e, §244; 99e, §290) denying altogether that "I am in pain" is a self-descriptive statement. But if I am right, his insight is somewhat occluded and his denial too strong.
CS states are themselves representational (see chapters 4 and 6 for further arguments).32 In the case of phenomena we categorize as bodily sensations, various cellular states of the body are represented. In pain, there is a simultaneous C2 state that evaluates the cellular state of the body so represented as also being a harm to the body. So it is not the CS state itself that is evaluated, but the state it represents. The phenomenal state is the occasion of the evaluation, not its object.33 Certain phantom limb cases provide evidence for the representational nature of phenomenal (CS) states in pain. In some cases of phantom limb, brain areas previously connected to intact areas of the body seem to switch to representing the missing limb. For instance, there are patients who, when a particular patch of their face is stimulated, experience these stimulations, including pain stimuli, as phenomena occurring in their amputated limb. Moreover, there is a one-to-one correspondence between points of the facial patch and the missing limb, such that a stimulus at point A1 on the face is felt as if the phenomenon is at point A2 on the limb; at B1, as at B2; and so on. Thus, the phenomena experienced seem to represent bodily location. More remarkably, the representations are qualitative as well as spatial: Warm water trickling down the facial patch causes the patient to feel phenomena associated with warm water trickling down the missing limb, and so on — pains included (see Ramachandran et al. 1992). The apparent representational nature of the phenomenal states involved in phantom pains makes it plausible that phenomenal states involved in ordinary cases of pain are also representational. An important feature to be stressed about the judgment component at issue (the evaluation is a kind of judgment) is that it involves a de re, referring, element. The judgment is about an actually occurring phenomenal state. So no pain-judgment is possible without a pain-phenomenon. Both are necessary for, and mutually constitute, feeling pains. A second important feature of the judgments is more speculative, but it is both psychologically and neurophysiologically plausible.

32. While baldly claiming the representational nature of CS states here, the matter is not so clear cut; and chapter 4 will explore this issue. However, for the expository purposes of this chapter, putting things in this bold way is not overly misleading. If I were cautious, I would have to say throughout this part that CS states are representations or indicators (this term will be introduced in the next chapter). However, the amputated limb cases, soon to be discussed, actually weigh in favor of a representational view.
The spontaneous, noninferential evaluative judgments involved in pain are the outputs of a module dedicated to scanning (or just sensitive to) certain phenomenal states — those representing localized cellular states of the body. These are the only judgments relevant to pain. The job of 33
The evaluative theory may have to be further complicated. The representation to be evaluated may be an aspectualized one. If so, pain may well include a Cl state in addition to CS and C2 states. Actually, I am torn in two directions about whether this complication is likely. But for the present, further discussing this issue would only complicate the exposition of the theory, obscuring its structure. If I am right about the nature of the issue, then omitting a discussion of it from the present exposition, while simplifying, is not misleadingly simplifying.
the module is to note those representations of states harmful to the body.34

But what about those other belief states, affective states, and motivational states that the attitudinal theory takes to be constitutive of pains? Pains are only causally connected to such states and can occur in their absence. There are two kinds of hurtfulness: the hurtfulness of pain itself and an emotional/affective hurtfulness (distress, and so on). The first is what pain is. The second is only contingently connected. That there exists only a causal connection between pains and the attitudes, that the attitudes are not constitutive of pains, helps explain anomalous cases. We will take a look at some of these anomalies in §§9 and 10.

9. To compare the attitudinal and evaluative theories, consider once again the cases of patients who are given morphine after the onset of pain and of patients who receive prefrontal lobotomies. In both sorts of cases, subjects behave strangely in the face of stimuli normally thought to cause pain (shocks, immersion in ice water, and so on). As remarked in section I, when asked about their experiences when so stimulated, subjects say they feel pain but that it doesn't hurt (Melzack 1973, 95; Melzack and Wall 1983, 168-69). The attitudinal theory claims that these subjects, since they lack the appropriate affective and motivational states, are not in pain at all. The patients' responses are to be interpreted as saying, "I am experiencing the sort of phenomenal states that normally occur when I have pain, but I am not in pain." Such phenomenal states are claimed by the theory to be nonnecessary accompaniments of pain. The evaluative theory says, to the contrary, that when shocked or immersed in ice water, these patients are in pain: they are experiencing a phenomenal state and evaluating it as harmful to their bodies; but they lack the normal causal connections between their pains and the usual attitudes.
34. The questions why there might be such notation, and for "whose" sake these notations are made, are treated more fully in section III of chapter 8.

Their remark means just what it appears to mean — where "hurt" is to be read as expressing an affective state (also expressed by "I am in pain, but it doesn't bother me"), which is contingently disconnected from the pain state. It is notable that while these patients say the stimuli do not bother them, if asked whether they would prefer that the stimuli be stopped, they agree that the stimuli should be stopped. Moreover, we have evidence from other behaviors of lobotomy patients that their emotive and affective responses are dulled. Their sensory experiences and cognitive skills do not appear to be similarly dulled.

Masochism, too, is understandable on this model: again, the connections between pains and attitudes are severed (or partially severed), or else conflicting attitude states result. That is, the masochist is in pain but nevertheless desires the pain to continue. The attitudinal theory finds it more difficult to tell a convincing story about masochism.

So we can see, from the morphine/lobotomy cases as well as from the masochism cases, that there are significant differences in the explanations the two theories offer. In §10, three kinds of reasons for preferring the evaluative theory are presented: (1) the evaluative theory better fits the available empirical evidence; (2) it preserves more commonsense intuitions about pain than does the attitudinal theory; (3) it fits better with the theoretical claims of a larger theory of mind.

10. Pain measurement scales, which have proved to be valuable diagnostic tools for practitioners, include a series of pain-descriptors that seem to refer to phenomena: "sharp," "dull," and the like; and these descriptors are taken as phenomenal descriptors by the people who both make up and read these scales (see Melzack 1975). On the attitudinal theory, these descriptors need to be treated not as phenomenal descriptors but as affect descriptors. Such a redescription is not fully convincing, though it is not obviously mistaken either. The evaluative theory allows, to the contrary, that these terms are phenomenal descriptors. An important proviso, however: the descriptors — "sharp," "dull," and so on — are "names" of properties of the common external causes of such phenomena, not "names" of the phenomena themselves.35 No phenomena are intrinsically pain phenomena. No natural kind, pain phenomena, exists.
35. Some of the "names" may not be related to causes; but if not, they are nevertheless related to other features and properties of the external world.

So that major thesis of the attitudinal theory is in no way undercut. "The pain is sharp" says something like, "The pain I am now experiencing is a phenomenal state representing harm to the body, and it is like the phenomena I feel when I am poked with a sharp object." And similarly for descriptors like "dull," "stabbing," and so on. But these phenomena may, in themselves, be different for members of
different species, for different members of the same species, even for oneself at different times.

Despite the use of phenomenal descriptors on pain measurement scales, the Gate Theory of Melzack and Wall (Melzack 1973; Melzack and Wall 1983), as already noted in section II, treats pain as a functional state. Nothing is said about phenomenal qualities per se. The evaluative theory explains very nicely why functional analyses of pain have taken us so far: if some phenomenon needs to be evaluated as harmful, but the qualitative character of the phenomenon doesn't matter, then the phenomenon can be treated as a place-holder in the theory, a mere X.36 We should be able to go a long way toward explaining pains even while ignoring the actual phenomena themselves. However, at the same time, there is a widely shared intuition that purely functional analyses omit something crucial (Block and Fodor 1981). That intuition is also understandable on the present analysis. It could turn out that human (or mammalian, or earthly animal) pain phenomena form a restricted class of phenomena kinds (which would not belie the "no natural kind" claim).37 If so, we will not understand pain fully unless we go beyond a functional analysis and discover what this class is and why only its members form the class for human beings (for mammals, for earthly animals). But these are empirical issues that will not be resolved until we know much more about phenomenal states than we currently do.

The present theory allows us to preserve as true the folk-psychological biconditional, "One is in pain if and only if one believes oneself to be in pain," or rather, it allows us to capture something related to that biconditional. The biconditional is close to being right in that a kind of judgment is essential to being in pain: one can be in pain only if one makes the proper evaluative judgment.
And if one makes that judgment, then, since the judgment is de re, it is also true that in making it one is in pain. The present analysis makes sense of why we take the biconditional to be true.

36. The qualitative character may well matter for parameters like intensity. On the evaluative theory, it does matter in this respect — and possibly for other features of the pain.

37. Among other things, even if the class of phenomena kinds is restricted, it is probable that the members of the class also play a role in nonpain experiences. Several of the cases cited in section II, such as the Glass et al. shock experiment (1973) and the Mediterranean/Nordic pain-threshold experiments (Melzack 1973), seem to support the truth of the consequent of this conditional.

Without
such an analysis, it is more difficult to see why it would be thought true. But it is important to see that merely making the evaluative judgment constitutes pain (given that the judgment is de re, the phenomenon must also be present). Making the judgment, however, does not mean that the bodily state represented by the CS state is itself accurately C2-evaluated as being harmful to the body. Because a gap exists between making the evaluative judgment and the evaluative judgment's being correct, we can understand how someone can be in pain in the absence of physiological harm. Certain sorts of hypochondriac pain fit this description quite well. Hypochondriacs experience bodily sensations that represent cellular states of their bodies; but their spontaneous, apperceptive evaluations of these bodily states as harmful are mistaken. Any of us might be subject to such errors on occasion. Only because hypochondriacs present a pattern of mistaken evaluations are they hypochondriacs. So here we have yet another anomalous case, hypochondria, to add to the morphine/lobotomy and masochism cases, that becomes quite understandable on this analysis.38

The hypochondria cases suggest that although the evaluations are spontaneous and noninferential, and even the output of a dedicated module, we have a measure of control over them. We blame hypochondriacs for their misevaluations. We believe that we can not only be stoical in the face of pain, but can even learn not to feel pain under conditions where we once felt, or were once likely to have felt, pain. Think of how we teach children exactly this "skill": we tell them that the fall didn't really hurt. The module is not fully impenetrable. We can also now understand how people can be injured, fail to feel pain, see the wound, and only then begin to feel pain, even though the phenomenal state has not changed.

Other insights are gained from this analysis as well.
38. There may be other states we call "hypochondria." For instance, there are cases where one feels pain legitimately but infers that its cause, rather than being minor, is a dread disease like cancer. Such cases of hypochondria have a different basis from the ones discussed here, and the evaluative theory does not account for them; but then again, it is not meant to.

It has always been a bit of a puzzle how and why pains are located in bodily parts. From phantom-limb cases (Melzack 1973, 50-60; Melzack and Wall 1983, 72-86; Ramachandran et al. 1992), we know that pains are sometimes "located" in nonexistent parts of bodies. So on few accounts that I know of (Stephens and Graham's [1987] account is an exception) are
pains in the leg, say, thought to be actually located in the leg. Pains in the leg are in the head. Nevertheless, the phenomenal state involved in a pain in the leg represents a state of the leg. But, of course, the CS representation may itself be inaccurate. Amputees really do feel pains in their (amputated) legs: both the CS states, which represent cellular states of their legs, and the C2 states, which evaluate the represented states as harmful to the body, really exist. The pain is no illusion. But the CS representation that is evaluated is itself mistaken and gives rise to the illusory location of the pain. Referred pain can be understood in a similar way. One's pain is felt in the shoulder because the CS state evaluated represents a state of the shoulder, even though the actual bodily harm is a state of one's tooth.39

Of interest, and relevant to this analysis, is the evidence that young children do not locate pains very well (Leach 1989, 533). The best way of understanding this fact is that while the relevant CS state represents a cellular state of the leg, say, from the very beginning, only minimal information about (or understanding of) this representation is accessible to apperceptive consciousness, so that the C2 evaluation, while about a cellular state of the leg, is cognized at an apperceptive level only as regarding some state of the body or other. In this case, the child can be thought of as having a pain in the leg, and feeling it there, but not being apperceptively aware that that is a correct description of its pain. The apperceptive description of its pain depends on how much information about (or understanding of) the CS representation is available to C2. The phenomenal feeling is readily available, but the representational information of the CS state that is available is minimal.
For whatever reasons, as we age, more of this latter information (or a greater understanding of the information) makes its way into apperceptive consciousness.40

39. Harvey Green, in conversation, helped me get a better handle on these issues.

40. A second possible account, which would also mesh with the general thesis, is that infants do not experience pains-in-the-leg, only pains-in-the-body. CS representations grow more refined as we grow older; and as they become more refined, we then feel things we could not feel before. Mark Rollins has pointed out to me that while this account meshes with the larger theory, it may have unwanted consequences for the relationship between CS and C2 states, making that relationship tighter than I would find comfortable.

Even with adult pains, the information available to the C2 evaluation is only partial. If the CS state of a pain represents a localized cellular state of the body, we adults are able to describe that state only in
general terms — less general than those of the toddler, but quite general nevertheless. So, on this account, the difference between toddlers and adults, when it comes to knowledge of their pains, is one of degree rather than of kind.

Besides fitting better with the empirical evidence and preserving a greater number of commonsense intuitions, the evaluative theory has consequences I find quite congenial to a larger view of the mind. Being in pain requires being conscious in at least two senses: CS and C2. Thus, both forms of consciousness are ascribable to any organism we correctly ascribe pains to. If we are justified in our ascriptions of pains to nonhuman animals, then both forms of consciousness go pretty far "down" the phylogenetic scale. While many might find this result unsurprising for CS, it is probably more surprising when it comes to C2. But it shouldn't be. Later on (in chapter 9), I will argue that C2 is necessary for an organism's coming to conceive of an external world and of there being objects in it, as well as for an organism's differentiating itself as a thing in that world. Since I think that animals fairly far "down" the phylogenetic scale do conceive of an external world and do distinguish themselves from other objects in that world, I find this result quite pleasing.41 Also, given that C2 is essentially involved in so primitive a state as pain, and given the importance of pain in an organism's life, it becomes more understandable how evolution could give rise to a greater role for C2 in some organisms than it has in others. All that is required is taking advantage of a mechanism that occurs comparatively early in evolutionary history.
41. But see section VI, chapter 9, for some qualifications.

Finally, given the tight fit of CS and C2 in pain experience, and given the importance of pain for our lives and behaviors, it becomes quite understandable how philosophers and psychologists (Nagel 1974, 1986; Searle 1989, 1990, 1992; McGinn 1988, 1989; Natsoulas 1989b, 1990b — among other recent examples) could take consciousness to be noncomposite and indivisible, having both a phenomenal side and an apperceptive side. As I will argue in Part Two, CS and C2 are actually dissociable — but, as we now see, not when it comes to pains. Yet pains are often presented as paradigms of conscious states. In fact, if the present theory is correct, pains are not paradigms of conscious states but quite unusual. Pains are one of a very few kinds of states (along with bodily pleasures) where CS and C2 cannot be dissociated without
the state itself disappearing. Phenomena are dissociable from apperception (see Part Two). Even the CS state that occurs in a pain-experience is dissociable from its C2 state. But as dissociated from the relevant C2 state, it is no longer pain (as is suggested by the Glass et al. [1973] study). By focusing on pain, a fairly atypical conscious state, we have been led to draw false conclusions about consciousness itself (or, better, consciousnesses themselves).

11. While all these reasons weigh in favor of accepting the evaluative theory, there appears to be a serious difficulty for my accepting it. As adverted to in §10, I will argue (in chapter 9) that we learn to separate out an external world, bodies (including our own), and ourselves as different from those other objects, by our acting on the world, by distinguishing acting from being acted on. And among the feedback experiences allowing us to make this crucial, and foundational, distinction is pain. But if we feel pain before we have a concept of body, how can we evaluate a phenomenon as representing a harm to the body?

In §10, in the discussion of locating pains in bodily parts, it was maintained that various amounts of information (or of understanding) concerning the CS representation can be available to C2. This possibility allows the present question to be answered. Very little information about (or understanding of) the representational content of the CS state is accessible to the infant's C2. Because the infant does not yet have the concepts BODY and LEG, it cannot apperceptively even think of (describe) the CS representation as representing what it does represent. At best, it can think of the representation only as "this state of being." "The state here represented is harmful!" is the generalized form of all pain evaluation. How are we able to evaluate the relevant phenomena in this way without prior experience of harm? We have evolution to thank.
For the module must be hard-wired to output evaluations that are mostly "correct" (in the sense that the phenomena so evaluated really do represent states harmful to ourselves — though we, of course, have as yet no concept of body, nor of our self). And it is additional evolutionary "luck" that pains are tied to the affective and motivational responses they normally evoke. Just as the larger theory of mind maintains that we are aware of being "in-control" before we are aware of our self (see chapter 9), it maintains that we apperceptively understand our CS representation as "this state of being" before we are able to understand it as "this state of body."
A second apparent difficulty for the evaluative theory goes as follows: Pains are said to consist of a CS state and a C2 state. But surely C2 is an evolutionarily sophisticated state that we human beings possess only because of our large neocortex. Many nonhuman animals possess little, if any, neocortex, yet feel pain. So the analysis cannot be right.

I agree that many nonhuman animals with very little neocortex feel pain. But I also maintain that pain requires an evaluative state. For reasons presented in section II, phenomena alone cannot account for pain; and no other analysis makes as much sense or solves as many problems as the present one. The mistake, then, must lie in thinking that such reflective states are highly sophisticated. I would predict, instead, that apperception is, in itself, a fairly primitive state, appearing at least as early as the first creature that felt pain. The modular nature of this sort of apperception is certainly compatible with its being a quite primitive state. It may be that C2 is realized by different brain systems, some of them perhaps not neocortical at all, much as vision seems to be realizable by different systems (see Weiskrantz 1977, 1986). Again, as with vision, the way in which apperception is realized may give it a greater or lesser cognitive range: the states accessible, and the amounts of information made available, to our apperception may be much more numerous and greater than those accessible and available to some other animals. That nature could first achieve a minimal apperceptive capacity would make it a matter of gradual evolution for such a mechanism to acquire complexities allowing it to play a greater and greater role in the lives of some organisms. But to feel pain, little sophistication is needed in the apperception mechanism.
It is interesting, and fits my claims, that there seems to be a significant neocortical input into our feeling of pain, and not just into the assessing of it after it is felt; and this input is missing in nonprimates (see Sternbach 1968; Melzack 1973, 103; Melzack and Wall 1983, 167-69). For us, neocortical realizations of an apperceptive capacity may have replaced earlier midbrain (or similar) realizations. There is no evidence that the kind of C2 required by the evaluative theory is too sophisticated for animals "down" the phylogenetic scale. The impression that it is too sophisticated comes from considering every sort of human apperceptive capacity, especially introspection.

A second possibility is that something like Baars's (1987) account of C2 is correct. On his account, C2 is noncortical. The cortex is
responsible for the sorts of information one's C2 can access, but the second-order awareness of that information (C2) resides in the reticular-thalamic formations. Since these formations are quite old and have changed little over a long evolutionary stretch, C2 would exist far "down" the phylogenetic scale. Only the information that C2 can access has grown (in amount and kind) as the cortex has developed. If Baars is correct, the neocortical input to human pain may reflect a refinement in the representations rather than a different apperceptive capacity. Either this account or the previous one allows for C2 well "down" the phylogenetic scale. Both seem empirically reasonable. And both hypotheses lend themselves to further empirical research and refinement. At this stage of our general knowledge about the mental, one can ask no more.

Actual experiments with nonhuman animals support the idea that evaluative apperception plays a role in their pain. When dogs have one of their paws shocked, they react strongly: pulling the paw away, licking it, running away, yelping, and the like. That is, we have good reason for thinking these dogs are in pain. Yet Pavlov (Melzack and Wall 1983, 36) found that if he fed a dog immediately after each shock, the dog's behavior changed considerably. While the paw might reflexively be pulled away, the dog showed no other characteristic pain behavior, but instead displayed the excitement and pleasure of a dog about to eat. When the feeding followed the shocking of the same paw in each trial, but not the shocking of any of the others, pleasure behavior followed the shocking only of that paw; pain behavior still followed the shocking of any of the others (Melzack and Wall 1983, 36). This experiment provides a reason to think that the dog experienced similar phenomena in the case of each paw, only it sometimes did not find such phenomena painful.
The phenomena did not have the same meaning for it in each case; and it was the meaning (i.e., the dog's evaluation), not the phenomena themselves, that determined whether or not the dog was in pain.

The next difficulty one might raise for the evaluative theory is that a person could experience a phenomenon, evaluate it as harmful to the body, yet label the experience "a tingling," say, not "a pain." My response is fairly straightforward. The evaluation of a tingling as harmful to the body is not a simultaneous, spontaneous, noninferential evaluation, as is the evaluation "Harmful to the body" in a pain experience. We have to infer that a tingle means harm to the body. The evaluation as
"Harmful to the body" is not constitutive of the experience's being a tingling. But in the previous discussion of hypochondria it was noted that one could be taught not to evaluate, in a spontaneous, noninferential way, one's phenomenon as representing a state harmful to the body. Inverting this point, couldn't one, after a while, learn to make the previously unspontaneous, inferential evaluation of the tingling in a spontaneous, noninferential manner?42 Yes, one could. But there are two possible cases. The first is that this now spontaneous judgment is not an output of the dedicated module. In that case the judgment, while spontaneous and noninferential, is irrelevant to pain. The second possibility is that somehow the module is penetrated by the higher-level belief, and the now spontaneous judgment is an output from it. But in that case the tingle would no longer be merely a tingle. It would be a painful tingling. Tinglings really are different from pains; and the apperceptive, evaluative element is what makes pains distinct. In fact, it is exactly what makes them distinct, for the phenomenon experienced could be the same in each case.43

Perhaps a further confusion can be prevented by answering the following questions: "Aren't tinglings sometimes apperceived while one is not in pain? If so, how could adding apperception to tinglings yield pain?" One can be apperceptively aware of a tingling without being in pain. But there are all sorts of apperceptive (C2) states. Only when the apperceptive state involves a de re judgment of a particular kind — an output of a dedicated module that is a spontaneous, evaluative judgment of the form "This state represented by CS is harmful" — does pain occur. Not all apperceived phenomena are pains. Far from it. But all pains are apperceived phenomena. Note that pains are complex.
They require a CS state that is a representation of a localized cellular state of the body, and a C2 state that is an output of a dedicated module and is a spontaneous, noninferential judgment about the state represented by the CS state.

42. Carol Slater presented this objection to me in correspondence. Similar objections were raised by Robert Barrett and Roger Gibson.

43. Compare the Pavlov paw-shocking experiment (Melzack and Wall 1983, 36); also see the Glass et al. shock experiment (1973). One might also claim that a state like nausea, unlike tingling, does involve essentially the evaluation "Harm to the body," and yet nausea is not pain. But my treatment of the analogous criticism of the attitudinal theory in section II applies in this case as well.
In order to consider a final objection, let me remind the reader of the two cornerstones of the evaluative theory: (1) there is no natural kind, pain phenomena; and (2) pain is a complex state consisting of a phenomenon and a spontaneous, noninferential C2 evaluation of the state represented by the CS state as being a harm to the body. Putting these two points together, we should be able to conclude that any phenomenon so evaluated by a judgment emanating from the relevant module results in pain. But now consider the following, not so unusual, case: one drops an object on one's foot, says "ouch," but then realizes that it didn't hurt. Here one experiences a phenomenal state, spontaneously evaluates it as harmful (as indicated by saying "ouch"), but denies that one was in pain. This case appears to show that (1) and (2) cannot both be held at the same time.44

My reply to the objection is not to deny that such cases occur, but to deny the account given of them. In particular, I deny that there has been any spontaneous evaluation of a phenomenon. "Ouch," instead, is uttered in anticipation of feeling a pain (experiencing a phenomenon and evaluating it in the appropriate way). The "ouch" is not the result of a judgment stemming from the evaluative module. I cannot prove that my account of the case is correct; but evidence that it is comes from the following, also not so unusual, sort of case: one drops the object, says "ouch," but the object misses one's foot altogether.

12. It is now time to sum up the results of this chapter, results that further the work of the previous chapters. Not only have we seen evidence that pains are not identical to a certain kind of phenomena, but we have also seen that, in the case of pain, much as in the case of vision and other sensory states, no natural kind of phenomena constitutes a class, pain phenomena. Two theories of pain that are in agreement with these results have been explored.
44. I owe this objection to Gerianne Alexander.

The first, the attitudinal theory of pain, denies that phenomena are even necessary for pain to occur. Pain is best identified, according to that theory, with a set of cognitive, affective, and motivational states, which together comprise an attitude. The theory claims that if the attitude exists, then even though no phenomena are experienced, the organism is nevertheless in pain. The second theory, the evaluative theory of pain, builds on the same base as the attitudinal theory but disagrees with the latter in crucial
ways. According to the evaluative theory, pains do require phenomenal states of some sort or other for their occurrence, but what makes these phenomena pains is that they are evaluated as harmful to the body by spontaneous, noninferential judgments arising from a dedicated module.

While both the attitudinal theory and the evaluative theory were shown to be preferable to phenomenal theories, several kinds of reasons were presented for preferring the evaluative theory to the attitudinal theory. First, pain ascriptions do seem to involve an evaluative element, one omitted by the attitudinal theory and by all other previous pain theories. Second, the evaluative theory better fits the empirical data: both the practices of pain theorists, as reflected in their questionnaires, and some quite puzzling types of clinical cases. Third, the evaluative theory preserves more commonsense beliefs about pains than does the attitudinal theory. Fourth, the evaluative theory fits in better with larger theoretical considerations. This last set of reasons has been doled out mostly in the form of promissory notes. And now the task is to begin paying on them.
Phenomena reconsidered

The purpose of this chapter is to place the results of the previous chapters onto a map of possible perceptual theories rather than to establish a single correct theory of perception. The latter task is at present beyond me. The aim, then, is to outline several general types of theories of perception that are compatible with the earlier results and with the chapters to come. At least six types may actually meet these requirements. In section I, two quite popular theory types, computational models and Gibsonian models, are considered, as is a less well-known model derived from Thomas Reid (1785/1969). In section II, I will be bolder, outlining three mostly original theories of perception, also compatible with the earlier and later results. These latter theories, as a group, have several virtues.

Of the two subtexts to this chapter, the first concerns phenomena. The central thrust of the first three chapters has been that phenomena play a lesser role in our lives than we heretofore might have thought. In particular, it has been shown that no natural kind of phenomena is visual, or aural, or tactile, and so on. There is not even a natural kind of phenomena constituting pain phenomena, let alone pain itself. It has been argued that perception is primarily a kind of immediate judgment that defines one boundary of the senses, and that the senses are not definable in terms of phenomenal types. Even pain, while requiring some phenomenon or other, requires a C2 evaluative judgment state in order to be pain. Thus, both in (other kinds of) perception and in pain, proposition-like cognitive states rather than phenomenal states are primary. Blindsight and other cases even suggest that phenomena are unnecessary for perception. If all these things are true, why have thoughtful theorists believed phenomena to be so much more important to perception and pain than they are? One answer is the saliency of qualitative properties in experience.
But I don't think that this is the whole, or even main, reason. The first real clue to an answer makes its appearance in the third section of the previous chapter. When phenomena were first introduced in chapter 1, it was stressed that the focus was on their felt quality — on them as qualia. And it has been the importance of phenomena as qualia that has primarily been under attack. However, the evaluative theory of pain pointed to a second, much more important, feature of phenomena: phenomena not only feel certain ways, they also represent — in the case of pain, certain states of one's own body. It is enormously important to understand that phenomena have two aspects: as qualia and as representations (or as something somewhat akin to representations1). This "representational" nature of phenomena makes them seem so important. Moreover, a reasonable speculation is that because of this nature, they are more important than has so far been acknowledged (see section II of this chapter). While their role in perception is likely greater than so far acknowledged, it is not so great as to cause a retraction of the major points of the previous chapters: (1) Perception is primarily a spontaneous judgment state; (2) no natural kinds of phenomena are inherently visual (and similarly for the other sense modalities); (3) phenomena do not possess the properties of the external world, i.e., they are not red, square, and so forth. A subsidiary aim of this chapter, then, is to lay out several possible roles for phenomena that are consonant with these conclusions, but which, at the same time, respect the "representational" aspect of phenomena.

The second subtext, also developed in section II, consists of a discussion of the differences between image-like and proposition-like representations. In drawing this distinction, I put forward yet another distinction, that between representational information and content, proposing that image-like representations contain representational information but have, in themselves, no content.
Only proposition-like representations, it will be further proposed, have content. The second section of this chapter is more speculative than the rest of the book, and the positions taken are less fully defended. However, these speculations, even if wrong, are harmless to the main theses of this book and to the central thesis of this chapter itself: demonstrating that several different theories of perception and several possible roles for phenomena are in agreement with the results of the previous chapters. These speculations are worth presenting, though, partly because of their intrinsic interest and partly for the reason that if they are correct - or close to being correct - they help explain many further things otherwise left unexplained by the book. While future chapters assume the final perceptual theory to be proposed in this one, any of the six theories presented satisfies the basic principles of those chapters.

[Footnote 1: To reflect the above disjunction, I will talk for the while about the "representational" nature of phenomena rather than simply their representational nature. The word surrounded by shudder-quotes is an abbreviation for the disjunctive expression.]
I

1. The view presented so far is that perception is primarily a kind of judgment. As such, the view looks to be compatible with information views of perception that are currently so popular. Broadly speaking, there are two current sorts of information views: computational models and Gibsonian models.2 In keeping with the overall aims of this chapter, section I will be dedicated to sketching these theory types, placing the results of the previous chapters on the maps provided, and pointing to problems, both in the theories and in fitting earlier results on the respective maps.

2. Two questions can be raised about any perceptual theory: (1) What is the end-state of the perceptual process?3 and (2) What processes are involved in arriving at the end-state? Computational views answer these questions as follows: (1) The percept is a data structure (akin to what I am calling a judgment) and (2) at least some of the processes are inferential and abductive — i.e., cognitive — ones. A brief version concerning vision might go like this: Light strikes the eye forming an image on the retina. The information contained in the retinal image is recoded and channeled: into spatial layout, spatial location, color, motion - each, perhaps, being processed in distinct channels. These data are compared with incoming data derived from other sensory modalities and with data stored in memory (with many of the comparisons probably occurring in area V1). An abduction over the data (a kind of inference to the best explanation) is performed; and the result is a data-structure with a content, such as, "Brown horse in green grass field" (though, of course, not in English). At least some of the processes are taken to be cognitive (inferential, abductive, or the like) but relatively automatic, relatively immune to influences of high-level cognitive processes (i.e., relatively modular), and, except for the end-state judgment, operating outside of apperception. Early attempts at computer vision incorporated these features. Marr's (1982) theory of vision is one of the more sophisticated of this kind of model.

Views of this sort are obviously quite compatible with earlier chapters; and if these views are right, I would willingly accept their being so. Some questions concerning them are nevertheless worth pursuing, even if only briefly. The first question: When the information in the retinal image is recoded, into what sort of code is it encoded? Second question: What is the role of phenomena according to a computational theory? In regard to the first question, what I have in mind is a distinction between proposition-like and image-like encoding.4 It would be fair to say that for most computational models the code is proposition-like, with image-like representation playing little role. Marr himself does talk about 2- and 2½-dimensional images preceding a 3-dimensional image, but one can talk in this way without regarding these "images" literally as images. All that is necessary is that the information of a 2-, 2½-, or 3-d image be encoded.

[Footnote 2: Connectionist views are sometimes thought to be yet a third kind of information view. On the other hand, they have sometimes been thought to fall on the Gibsonian side of the dichotomy. Others argue that they are merely implementation models for either side of the dichotomy. It may even be denied that connectionist views are information views at all, because information-processing involves at least some cognitive steps. Viewed as more than implementation models, many connectionist models are information-processing views only in a stretched sense of "information-processing"; they are, nevertheless, information views. It would be misleading to deny this label of them. And these same two last claims can be made for Gibsonian views. Moreover, it is questionable that it is in the nature of connectionist views not to be information-processing views (see Butler 1995b).]

[Footnote 3: I will call an end-state a percept. Questions of end-states were previously discussed in chapter 1.]
The encoding itself need not be image-like. On the other hand, a number of theorists would claim that image-like representations do play at least a partial role in perceptual processing. Kosslyn (1987) points out, for instance, that spatial "maps," in a literal, "pictorial" sense, occur in several regions of the perceptual system. Why would they occur if playing no role? On the other hand, it would seem that in order for these "maps" to play a role, a "reader" would have to glean the information contained within them, with the result being that inside each organism/perceiver is another littler, suborganism/perceiver — an homunculus (and perhaps an infinite number of them). In section II, I present a theory that accounts for these maps while replying to the homunculus charge (and in a way that Kosslyn might find congenial). But constructing a reply opens up the possibility of abandoning at least part of the computational model.

[Footnote 4: I will have more to say about this distinction in section II. For now, I rely heavily on the reader's intuitions.]

On most computational models, the answer to the second question is that phenomena play no integral role in perception: They are epiphenomena of perceptual processes.5 Why do phenomena exist, then? One possible answer is that evolution has its quirks: If a and b are both results of c, and a is a propagation-enhancing feature, then b may just get a free ride. There will be no direct "fitness" explanation for its existence. Phenomena, that is, may be co-effects of some of the same processes that result in perceptual data-structures; and because the data-structures are evolutionarily favored, accompanying phenomena exist also. This view of phenomena is not altogether implausible. Blindsight patients are able to make visual discriminations even though they generally report no phenomenal experiences. Even more interesting and more supportive of the epiphenomenal claim are cases where blindsight subjects do report phenomena as playing a role in their discrimination judgments. They describe their phenomena as being like the sound of a cannon, like a pin-prick, and so on; and these phenomena seem quite irrelevant to the visual discrimination being made. Finally, the evidence of the previous three chapters is that when different phenomena are experienced, people can still make the same perceptual judgments (color-blind trichromats and normal perceivers, for instance); and the same phenomena can be experienced even when people make different sensory discriminations (the Glass et al. studies, for example).
What role phenomena actually play in perception cannot, of course, be fully decided until we have an agreed-upon theory of perception. In section II, I offer two theories that allow for phenomena to play a more integral role in perception than computationalists generally allow.

[Footnote 5: Actually, most computational theorists simply ignore phenomena altogether, never telling us what their role is.]
3. Gibson (1966, 1979) presented his views in opposition primarily to phenomenal, sense-data theories of perception. He shared this opposition with computational theorists. Gibson argued that rather than being a qualitative, imagistic state, perception is a kind of information pick-up. And he further claimed that perception is not representational at all. In this latter respect, he took himself to be opposed to computational views as well (though, as we will shortly see, his opposition is not so clear). He also opposed computational views in a more fundamental way, arguing that perceptual processing, while resulting in information pick-up, involves no cognitive processing (inference, abduction, and so on) of any kind. His view is not an information-processing view, unless in a very attenuated sense of that term. In so far as Gibson opposes himself to phenomenal, sense-data views of perception - indeed, to any "read-off" view - his theory of perception is also compatible with the core of views earlier and later expressed in this book. But as with computationalism, questions concerning Gibson's theory are also worth pursuing. A start can be made by sketching somewhat more fully (but still only sketching) Gibson's theory. Gibson distinguished radiated light from reflected light. He held that only the latter contains information. From any point of view, a series of "shapes" exists in the reflected light. Moreover, for Gibson, perception is a dynamic process: Perceivers in the real world move their eyes and move themselves about. Perception is not a process ending in a frozen moment of time, anchored to a particular place, but a continuous process in both space and time. Changing bodily positions and moving our eyes allow us to pick up information about real-world objects that is contained in the ambient reflected light array at many points-of-view, and so allows us to see things.
According to Gibson, we even see things that are not before our eyes at every moment of our seeing them, and that is because perception - a kind of information pick-up - is a cumulative, not a momentary, process.6 Seeing things is picking up relevant information in the light, and this pick-up takes time. Seeing is not experiencing some phenomenal state or other, or a serial succession of phenomenal states. Phenomenal experiences contain information about the perceiver but not about perception. When we see a box, we see a box. We do not see various trapezoids or the like. The latter kind of experience, which Gibson calls "perspectival seeing," is the name of an objectless experience; it is not to be confused with real seeing. Indeed, it is not seeing at all. Phenomena, on Gibson's view, as on computational views, are epiphenomena of the information pick-up, of perception (1966, 319; 1979, 246). Phenomena underdetermine perception by far too great an amount, as regards information, ever to be considered a determinant of perception. Moreover, Gibson seems to be saying, much as I said in earlier chapters, that phenomena get their "names," such as "trapezoidal," from the world of objects, that phenomena don't literally have shapes themselves.

[Footnote 6: Because Gibson takes perception to be dynamic, he is suspicious of psychological and psychophysical experiments that keep the subject in a single place while projecting images so rapidly that no saccadic eye movement is possible. For Gibson, such experiments tell us something about a perceiver's experiences that accompany seeing but almost nothing about seeing itself.]

Gibson's quarrel with computational views is more difficult to understand, but important differences exist between his view and theirs. Gibson takes reflected light to contain information about the external world and takes vision to be a matter of picking up information contained in the light. The computationalists would agree. But Gibson goes on to make two further claims, which he takes to be only a single claim, but which need to be distinguished: (1) Perception is nonrepresentational, and (2) perception is direct. The first claim, however, does not reflect so clear-cut a distinction from computationalists as might be prima facie suggested, because Gibson is using "representation" in a much narrower sense than computational theorists do. For example, according to Gibson, a visual representation, if there were such, would have to be like a photograph, a mirror-image, or a "realistic" painting, where the representation (image) would look like what it represents. Given that understanding of "representation," Gibson denies that there are any visual (or other sense modality) representations.
He even denies that the structure in the light array is a representation of external objects (1966, 226—27). In this extremely "realistic" sense of representation, the grooves in a phonograph record are not a representation of the music. Nor is it clear that a road map is a representation in this sense. Certainly, on Gibson's view, a bar graph does not represent the information it graphs. Most especially, sentences do not represent their content. Computational theorists have perhaps never used "representation" in so narrow a sense as Gibson. For them, all of these — grooves, maps, bar graphs, and sentences - are representations. I think that, despite Gibson's own claims, computationalists are less misleading here; but I also think that Gibson is onto an important distinction, yet one he does not divide quite at the joints. The correct distinction, as will be discussed in section II, is between representational information and content. Suppose we take the wider, computationalist sense of "representations"; does Gibson deny the representational nature of perception? At the moment, consider this question to be about the end-state — for Gibson, the pick-up of information. The answer is uncertain. Gibson, while telling us a great deal about reflected light, says surprisingly little about perceiving itself — or at least about its end-state. On the one hand, it would be natural to read "information pick-up" in terms of resulting data structures (that seems to be the way Dretske [1981], a Gibsonian in spirit, reads Gibson). Data structures, in being proposition-like, are representations in the computationalists' sense but not in Gibson's sense. Read in this way, Gibson would allow that perception is representational in the computationalists' sense. On the other hand, Gibson often decries representational views for requiring an homunculus to "read off" the representation. Perhaps he would extend this criticism even to wider views of representation, rejecting even them. Gibson also denies that there is anything cognitive about perception. It is not altogether clear whether he means to include the percept in this claim; but if he does, then he must not be conceiving of the percept as a kind of judgment, or data-structure. Whatever he would mean by "information pick-up," it would not be that. At times, in apparent reference to the percept, he says that the visual system resonates with the information in the light. I must admit to finding this notion of "resonating" dark and impenetrable.
On a favorable reading, it calls up an image (if I may use the term) of a connectionist account of perception, with the relevant states of the visual system relaxing into different networks, of differing connection strengths among the neurons, as new information is accessed. But such an interpretation is belied by Gibson's use of "visual system."7 According to him, a visual system is not just the visual cortex, or even the entire system from the eye to the post-striatal areas. It includes that system, the whole body, the latter's behavior, as well as the environment itself. I have no idea what it would be for such a system to resonate. Perhaps a theory in the spirit of Gibson can be devised (indeed, one of the theories in section II may be such a theory); but Gibson's own theory, while insightful on many issues, leaves us hanging on, and without explanation of, the critical notion of information pick-up.

[Footnote 7: It is also arguable (correctly, I think) that the "relaxed" net is a representation. See Butler 1995a, 1995b, 1995c.]

Suppose we take Gibson's end-state to be a representation in the sense of a data-structure (or judgment). Will we have to say that perception is, unlike what Gibson claimed for it, indirect? I think not. The real question here has to do with the sort of processing that is involved in arriving at the percept. If one denies — and Gibson does deny8 — that any of the processes are inferential, abductive, or other cognitive ones, then one may claim that perception is direct: for the percept is both noninferential and about the external world rather than about prior representations, or about itself, or about the world by first being about itself. Gibson presents two sorts of reasons for doubting that perceptual processes are themselves ever cognitive: first, if they were, a reasoner would seem to be required for whom they were reasons, and therein lies the road to an infinite regress of homunculi; second, inference requires beginning premises. But how would the information contained in the initial premises be acquired if not by perception? The alternative would be to say that our initial knowledge of the external world is innate, which is bizarre. So computationalists either have to admit to a route that belies their claim that perception always involves inferential processes or they have to defend an extremely bizarre one. While computational theorists may have a reasonable reply to the first objection (computers show us that representations can do work without requiring an homunculus to read off them), the second one seems much more difficult for them to reply to.
I don't mean that it cannot be replied to, only to point out the prima facie difficulty of replying to it. If I understood better what Gibson means by "resonating," it is possible that a Gibsonian view would run counter to the main claims of Part Three of this book. If so, then my claims would not be completely compatible with a Gibsonian theory; but they are, in any case, compatible with a great deal of that theory. In chapter 9, I will aim to persuade the reader that where my views diverge from a Gibsonian one (on certain interpretations of "resonate"), my views are the correct ones.

[Footnote 8: And so do many connectionists.]

While Gibson's views about phenomena resemble the computational theorists' (phenomena are epiphenomena of perceptual processing), he does explicitly give phenomena a role in our lives: telling one something about the state of oneself when one perceives. So Gibson has an answer to the "Why are there phenomena at all?" question. This view of phenomena is attractive, but it remains puzzling as to why phenomena have seemed so integral to perception itself.9 Perhaps Gibson is right, and phenomena are not integral to perception. But perhaps Gibson is wrong, and they are. Two of the views presented in the next section claim that phenomena seem to be integral to perception because they are integral to perception - but not in any way that violates the spirit of the Gibsonian enterprise, or of my own (though it does violate the word of the Gibsonian enterprise).

4. A third view of perception compatible with the results of the previous three chapters is Thomas Reid's; it is an early version of an information view.10 For Reid, as for me, percepts are judgments, cognitive states. As for the processing itself, he believes we will never understand it. Reid, compatibly with his dualism, believes that the causal processes that mediate between brain and mind are just closed off to our understanding. Despite his dualism, Reid's view is especially interesting for the way he treats phenomena. Like many other information theorists, Reid thinks phenomena are generally epiphenomena of perceptual processing, and nonrepresentational. However, Reid also claims that, through experience, we find that similar phenomena accompany similar perceptions (hence, as I have claimed [1987a; 1989c, chapter 2], phenomena get their "names" from perceptual judgments about external objects).
When we perceive red, say, we usually experience similar phenomena on each occasion. Because of this nearly constant conjunction, phenomena, though not representations, become, over time, indicators of properties in the external world and may even, as such, play a causal role in some perceptions, abbreviating the perceptual processing in relevant instances (Reid 1785/1969, 302). I will present, in the next section, theories that claim that phenomena are not mere indicators: They are really representations.11 Yet, as I believe about computationalists and Gibsonians, I think that Reid is onto a point of real importance. His views, as they stand, are compatible with my own; but I hope to improve upon them.

The proverbial "fish or cut bait" time has arrived. It is time to turn to the many-times promised alternative theories of perception. But the reader needs to be reminded that the theories are only suggested. Arguing fully in favor of one or the other of them would require another book at least the length of this one. Their greatest virtue at this point is that they enable me to pull the preceding chapters more tightly together with the succeeding ones while doing justice to rather strong — and, I believe, shared — intuitions.

[Footnote 9: I think Natsoulas (1989a, 1989c, 1990a, 1990b) misunderstands rather badly Gibson's views on phenomena. He claims that Gibson's answer to this question is that their seeming so integral is because they are. But this reading reflects a revision of Gibson's views on phenomena rather than an accurate portrayal of them.]

[Footnote 10: Descartes' view of perception was also an information view, but often misunderstood because seen through the eyes (pun intended) of his Empiricist interpreters.]

II
I think that the phantom-limb cases of the previous chapter support the view that pain phenomena are representational and not merely indicators.
108
Phenomena reconsidered
a role in selecting the appropriate disjunct (if any). Circular, but I hope not viciously circular.12 6. Further discussion is aided by first discussing the distinction between proposition-like and image-like representations. Like all the ideas in this section, the following claims are both speculative and quite tentative. Much of the discussion to follow is carried on at an intuitive, general, and mostly unargued level; however, what follows is not meant to be dogma, but an attempt to come to grips with, and understand, the notions of representation and perception. As such, my only wish is that these attempts be helpful to later attempts. In §3, I agreed with computational theorists that photographs, maps, bar graphs, record grooves, and sentences are all representations. By a representation I mean a symbol, whether by nature or by convention, that carries information about another thing, state, or event (using "information" in an ordinary, and not a technical, sense). By this "definition," a reflected image of a tree on water is a representation — a natural one — of the tree being reflected. The sentence, "The tree is an oak," is also a representation, perhaps of the very same tree. And on this meaning of "representation," if Gibson was correct that reflected light arrays contain information about the external world, every light array (pace Gibson s own claims) is also a representation. Clarification is necessary. I am talking about symbols and assuming something like a sign/symbol distinction. That is, representations are symbols that carry information. How symbols (i.e., representations) are to be distinguished from signs (i.e., other states that carry information) is something about which I have nothing interesting to say (which doesn't put me in a class by myself). 
Intuitively — and perhaps it is little more than that — there is something different about an image of a tree on water, a symbol, which carries information about the tree, and smoke, a sign, which carries information about a fire. Every effect may be a sign of its cause, but every effect is not a symbol of its cause. My real interest in this chapter is not the sign/symbol distinction, but a distinction among symbols (representations) themselves. Of course, if it turns out that there is no good sign/symbol distinction at all, then much of what I say about symbols is probably also altogether wrong. Of the sorts of representations mentioned above, only sentences are proposition-like; and the claim so far has been, and will continue to be, 12
See the third point in the quote from Schacter which is among the epigraphs of this book.
109
Phenomena
that percepts are proposition-like, rather than image-like, representations. Superficially (because that is the best I can do), and yet pointing in the direction of things quite deep, I would suggest that there are two especially crucial distinctions between image-like and proposition-like representations. Only the second is at all original to me. First, imagelike representations are analogue representations of some kind or other, while proposition-like representations are not. By saying one thing a is an analogue representation of another A, I mean that the members of a set of properties of a stand in one-to-one correspondence with the members of a set of properties of A, so that changes in A can be reflected by changes in a{ (notice can be, not necessarily are). While granting that this limning of the distinction is exceedingly vague, I nevertheless hope the reader can see how the "definition" might apply to an image on water, grooves of a record, and even to bar graphs. But the distinction is much too vague to be useful. Consider the sentence, "The book is blue." Why isn't it an analogue representation according to this account? "Book" stands in one-to-one correspondence with one property of the represented and "blue" stands in correspondence with another. If one of those properties changes, change the word. Perhaps the answer to the question has to do with the fineness of one-to-one correspondences between properties of a and those of A. Or perhaps a better attempt is to say that a is an analogue representation of A just in case changes in degree of a quality of a reflect changes in degree of a quality of A. For example, if there is an intensity scale for image-like representations, changing the intensity level of the representation can be used to track corresponding changes in shade of blue, say. But nothing comparable exists for the word "blue" in the sentence, "The book is blue." 
No intensifying of the printing - making it darker or larger, or whatever - represents changes in the shade of blue. If there were such a property, then "blue" would be an image-like representation rather than just a word in a sentence. Although the account of this first distinction is, at best, still vague and certainly incomplete, I let it stand, vague and incomplete as it is — partly because I am more interested in the second distinction. I am not the first to want to make this distinction, nor the first to find it difficult to do so. I would welcome an improved version of the analogue/nonanalogue distinction because something is importantly right about it.13 13
I am not certain that "nonanalogue" = "digital."
110
Phenomena reconsidered
The second distinction is that while containing representational information, image-like representations lack content. And so we are led to yet another distinction: that between representational information and content. While any substantial discussion of these items would have to say a good deal about each side of the distinction, for my purposes it is only necessary to point to one opposing set of properties. For instance, a single photograph is, at one and the same time, a photograph of Abraham Lincoln, of a tall man, of a man in a top hat, of a man in a morning coat, of a man with a beard, and of a man who posed for the camera, all equally. All of these pieces of information - as well as many others - are contained in the photograph. None of these pieces of information is in any way logically, or conceptually, connected to any of the others; but given the photograph as a representation, they mutually coexist. The information in the photograph is nonaspectual (to borrow Searles [1983] term), or is transparent (to borrow from Quine 1963, 142). To the contrary, the sentences, uttered about that photograph, "That is a picture of Abraham Lincoln," "That is a picture of a tall man," "That is a picture of man in a top hat," "That is a picture of a man in a morning coat," "That is a picture of a man with a beard," "That is a picture of the man who posed for the photograph," while all true of the photograph, nevertheless have different contents. For instance, one could believe one of the sentences without believing any of the others. But one could not image the picture without imaging all of the others. There is an aspectual nature, an opacity, to content; and it is exactly that aspectual nature that mere representational information lacks. Of course, Searle makes his point about intentionality; and put in that context, the point here is that having content is an intentional state, while merely containing representational information is not. 
But the aspectuality of intentional states cannot be captured completely by a notion like opacity. A computer might react to a sentence, "The present Prince of Wales is having marital problems," but fail to react to the sentence, "Bonny Prince Charlie is having marital problems." Yet, most of us would be loath to ascribe intentional states to any present-day computer.14 Something more than reacting to one side of a logical equivalence while ignoring its other side is required for aspectuality. For that differential reacting is exactly what a computer with no intentional states can do. Fortunately, I do not need to solve this problem for the purposes of this book. Although I lean heavily on the notion of aspectual representation in later chapters, I can legitimately do so at an intuitive level of understanding, leaving it to future research to unpack the notion. Still, since I do lean so heavily on it, I would like to be able to say something more. And later chapters, especially chapter 9, contain material that I think will be useful toward providing an answer. Searle is not the only one to have suggested that it is the phenomenality of experience that provides the aspectuality of intentional states (Searle 1989, 201). But enough has been said in the first three chapters, and more will be said in the next two, to make that (unargued-for) position doubtful. Searle is right to think that a kind of subjectivity provides the aspectuality of aspectual states. But he has grasped the wrong sort of subjectivity. The relevant subjectivity is not that of phenomena, but of will. In chapter 9, it will be seen that an in-control/not-in-control distinction is the source of all acquired concepts. Without this willed/not-willed distinction, no concepts are formed. Concepts are aspectualized (have meaning) in part because they are meant (willed). Present-day computers do not make this distinction, and they do not will. So they do not acquire concepts. And because they do not possess concepts, they do not make judgments. At best, despite the appearance of opacity, they make "judgments": simulations of the real things. Granting that this notion of aspectuality is still vague, and even chapter 9 will not fully clarify it, I nevertheless think that the content/representational information distinction is at least intuitively understandable. And I also believe that the theory of this book offers a beginning towards understanding the distinction more thoroughly.

14. I owe this insight to Andy Clark.
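The computer's differential reacting can be made concrete with a toy sketch (my own illustration, not drawn from the text; the trigger sentence and response string are invented): a lookup-driven program responds to one description of a referent and ignores a co-referring description, exhibiting opacity-like behavior by mere string matching.

```python
# Toy illustration: a responder that reacts to one description of a
# referent but not to a co-referring one. The "opacity" here is nothing
# but string matching; no intentional states are anywhere in sight.

TRIGGERS = {
    "The present Prince of Wales is having marital problems.":
        "Logged: royal marital difficulties.",
}

def react(sentence: str):
    # Return a response only for sentences the program was built to
    # match; a sentence about the same person under another description
    # gets no response at all.
    return TRIGGERS.get(sentence)

assert react("The present Prince of Wales is having marital problems.") \
    == "Logged: royal marital difficulties."
assert react("Bonny Prince Charlie is having marital problems.") is None
```

The sketch responds "opaquely," yet nothing about it tempts us to ascribe intentional states to it; that is just the point that opacity alone cannot capture aspectuality.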
With this (granted, crudely drawn) distinction, one can say that thermostats represent the temperature in their surroundings, that their representational states contain representational information about those surroundings, but not that those states have content. Thermostatic representations are without content. It is not foolish to think of thermostatic states as representations. Certain thermostatic states are representations. Searle (1980) wrongly criticizes those who consider them to be representations. At the same time, Searle's discomfort with their view is based on the important distinction that he sees only, as it were, through distorting lenses: thermostatic states, although representations, are not intentional states. The representations are contentless. The upshot is that there are two kinds of representations, those that contain unaspectualized information and those where information has been aspectualized into content.15 Note that thermostatic states, in being analogues to the temperature states they represent, are image-like representations. So we can conclude16 that at least some image-like representations are contentless (only our thoughts about what those representations represent have content, not the thermostatic states themselves). Building on this conclusion, a bolder claim can be put forward: The content/contentless distinction among representations tracks the proposition-like/image-like distinction completely. That is, all proposition-like representations are contentful, and no image-like ones are.17 As with thermostatic states, the information contained in any image-like representation, while itself unaspectualized, is available for aspectualization in proposition-like representations. These claims, including the central one, are obviously only proposals. I haven't provided a shred of philosophical argument or psychological evidence for the distinctions. At this point, they are only proposals. Still, a couple of things can be said in their favor: first, they shed quite a bit of light on previous disputes about intentionality and representation, allowing us to see how opposing sides could each see truth in its own claims. The second is more of a hope: when the reader sees how these proposals fit into the more general theories of this part of the chapter (though the theories themselves are only proposals) and sees how these theories in turn fit into the previous and coming chapters of this book, into the larger theory of consciousness that is the thesis of this book, the reader will find these proposals attractive — at least worthy of further investigation.

15. Dretske (1981, 1986), among others, makes something like an information/content distinction; but the distinction he and others make is broader than the one I am making. They are trying to distinguish representations from nonrepresentations (symbols from signs). I am trying to point to a distinction among representations themselves.

16. "Conclude" only in the sense that it follows from the previous remarks. The "remarks" themselves have certainly not been provided with anything like adequate justification.

17. Although I make the distinction as proposition-like/image-like, not a great deal of what I am claiming rides on the question of whether aspectualized representations have to be closely similar to language. What is true is that linguistic representations are aspectual. The converse may not hold. The "language" of thought may not be a language at all. But there do have to be (or so I will argue throughout the remainder of this book) aspectualized representations. The really important distinction is that between aspectualized and unaspectualized representations. I will continue to label the former "proposition-like," but the caveats of this note should be taken seriously.

7.
The claim that launches all three perceptual theories to follow is that all phenomenal states are image-like representations, though not the converse. That is, phenomena are a subset of image-like representations. And the idea behind all three theories to come is that their nature as image-like representations, rather than their nature as qualia, makes phenomena important. And in all three theories, phenomena are to be thought of as high-level neural states, results of large amounts of processing. Other — nonphenomenal — image-like representations may play an earlier role in perceptual processing. I think they do and will say something about that role at the proper times. The retinal image is one obvious example. And phenomena are images in the same sense these other images are images: analogue bearers of information, though not of content.18

8.

The first theory to be presented is that phenomena as image-like representations are epiphenomena of perceptual processes, i.e., they play no role in perception itself but are co-effects of the processes that result in percepts. This theory is most similar to other information views in the role assigned to phenomena, but an important difference is the recognition of the representational nature of phenomena. But what would be the evolutionary point of their being representational if the representations are not made use of in perception? One possible answer is the pleiotropy response given earlier: Co-effects of evolution-favored processes survive because the other effect is survival-making. Still, one may wonder whether there haven't arisen perceptual systems that operate without this phenomenal co-effect and why they would not have replaced this particular one with its redundancies. Several replies are possible: The best perceptual systems just in fact have this redundancy; no perceptual system without this redundancy has in fact arisen; such redundant systems are being replaced in species where the alternative does arise; since the redundancy does no harm (except to philosophers), there is no evolutionary pressure to eliminate or replace it; and so on.
But another answer may be that, while ineffective in perception, phenomenal representation is very usefully stored information and is made accessible in some way or another to higher-order cognitive processes like memory, reasoning, and so forth to aspectualize as needed, and so plays an effective role in those higher-order processes. For this reason, these perceptually redundant states survive.19 This view, like the previous information theories, does not seem to do justice to our introspection-based belief that phenomena play a more integral role in perception. While the data and beliefs obtained through introspection need to be considered only with great caution (see chapters 2 and 3), in this case there may be something correct in our introspectively based belief. Each of the following two theories is compatible with its being true.

18. No explanation is provided as to why certain high-level, neural, image-like representations are also qualitative states. I wish I had an explanation.

9.

One of these two theories maintains that phenomena are, despite my earlier arguments (in chapter 2), "read off" in arriving at percepts. Here the idea is that phenomena are themselves a late result of perceptual processing, but the next-to-last state, rather than the last state, in the processing. When information contained in them is aspectualized, and so recoded in a judgment, the result is a percept. The qualitative aspects of phenomena are not read off — that was the mistake of earlier "read-off" theories. Rather, their representational, image-like nature allows them to play this role.20 States with different qualitative "feels" can represent similarly, and states with the same ones can represent differently. On this theory, variations in properties of phenomena give rise to differing judgments of what properties exist in the external world. Phenomena do not have properties like shape, size, and so forth (as chapter 2 showed); but they do possess analogues to those properties.
Thus, phenomenal states of type P, say, are not colored; yet they correlate to colors: their variations, including variations on varying scales (intensity, and so forth), are "read off" in such a way that we are led to judge that a property of external objects varies in analogous ways. In our initial perceptual judgments, we are committed to nothing further about these external properties: that is why we can believe that further research will yield further information about colors, say, even information that will allow us to accept that some of our beliefs arising from "reading off" these image-like representations are mistaken: that not all variations in phenomena represent actual variations in external properties. On this view, percepts are kinds of theories, however instantaneous, explaining the variations in our phenomenal states; they are about the external world. Doesn't this theory cause me to take back everything argued for in the first three chapters? Surprisingly not. First, properties of phenomena are not those of the external world. Second, different creatures, experiencing the same percept types, can have done so by "reading off" different phenomenal states (just as other earlier stages of the perceptual processing might be different in each creature as well). All that is required is that there have been corresponding analogue variations in each of the creature's phenomenal states. Third, the qualitative features of phenomenal states are not of first importance: different qualitative properties may be involved in similar representational information, and the same qualitative properties may be involved in different representational information. Fourth, judgments are required for a state's being a perceptual state; and judgments, rather than phenomena, are percepts. It just so happens, if this theory is correct, that phenomena play an important role in bringing about percepts. One can conceive of creatures that arrive at percepts by other methods. Perhaps blindsight perception is of this other kind (but see chapter 6 for reasons for thinking that blindsight is not different as perception from ordinary perception). In §11, I will offer reasons for why we earthly creatures might be the way we are (i.e., why we employ phenomenal states). So while I admit to the possibility of a "read-off" position, it is a much more tempered position than those considered in chapters 1 and 2. Moreover, the emphasis is on the representational nature of phenomena rather than on their qualitative properties.

19. When I say "some way or another," I have in mind the differing sorts of roles possible for phenomenal representations in perception, as spelled out in §§9 and 10, only as applied to these higher-order states.

20. Rosenthal (1991) recognizes this dual role of phenomena in his distinction of their intrinsic (qualitative) properties from their structural (representational) properties.
And at the same time, it is still being denied that phenomena are percepts, the end-states of perceptual processing. Furthermore, it is being denied that phenomena have content, that they are intentional states. The representational information contained in them is not aspectualized and is available for bringing about an intentional state in a way similar (exactly similar or analogous) to the way the representational information contained in an image of a tree on water is made available for aspectualization. But this last claim raises an apparent difficulty for this view: There seems to be a need for a "reader" to "read off" the internal image, just as a "reader" is required for "reading off" the external one. And this need, in turn, seems to indicate the need for an ineliminable homunculus. Things may not be as bad as they seem: perhaps computer models can be constructed to enable us to understand that image-like representations can be aspectualized for further use without requiring a "reader." In a sense, one might argue, there is already an understanding from computers about how proposition-like representations can be "read" without the need for a "reader." In the meantime, though, there is a third phenomena-as-image-like-representation theory that I can offer; and it more clearly avoids this problem.

10.

The third theory is that phenomena are not "read off" but are noncognitive causes of aspectualized end-states (judgments, percepts). Gibson viewed the retinal image in this way; and I am claiming that such an account also makes sense for phenomenal, image-like states. Phenomena play this causal role; but different qualitative kinds of phenomena can play the same causal role (i.e., result in the same type of percept), while the same qualitative kind of phenomena could, in appropriately different contexts, play different roles (i.e., result in different types of percept), conclusions quite consonant with the first three chapters. Moreover, this account satisfies the intuition that phenomenal states are, in fact, integral to perception,21 while at the same time, first, requiring no "reader" to "read off" from them, and second, satisfying Gibson's intuition that perceptual processing involves no cognitive processing — the only cognitive states being the end-states, the judgments (the percepts). Of all the views presented, this is the one I think closest to the truth; and much of the discussion in later chapters will presuppose its truth. If one of the other theories is, instead, closer to the truth, only minor adjustments would have to be made in the later chapters.

11.
Each of the theories presented in §§9 and 10 assigns to phenomena an integral role in perceptual processing, but questions can be raised about both theories. Among them are: (1) Why should anyone be inclined to think that phenomena play an integral role in perception? Or at least, are there any better reasons for thinking so than the reasons so far presented? (2) If we accept one of the last two theories, why not consider phenomena to be the percepts - especially when phenomena are considered under their representational, rather than qualitative, description — while considering judgments as high-level add-ons to perception? After all, visual phenomena, say, are experienced only if the eyes are open, in good working order, and so forth; they are high-level, conscious states; and they don't require us to ascribe such sophisticated cognitive states as judgments to simpler perceiving organisms, such as oysters might be. Experience of phenomena in these simple creatures would lead directly to action, bypassing perceptual judgments. Let me answer each set of questions in turn. (1) Better reasons than those so far given may not be available - not in our present state of ignorance and not with our present lack of a guiding theory. But there is, nevertheless, an additional reason to think that these last two accounts are not far-fetched. For one thing, one obvious sort of image-like representation is not a mere epiphenomenon in perception: the retinal image. Moreover, as has been argued in recent years (Van Essen 1985; Kosslyn 1987; Zeki 1992), several spatial "maps" in the brain are literally image-like representations. What are they doing there if they are not playing a role in perceptual processes, by being available either for being "read off" by some brain structure or for causing a further informational state farther along in perceptual processing? If all these "maps" are epiphenomena, then there are an awful lot of them. Perhaps Occam never said anything about multiplying epiphenomena, but perhaps he should have. Since phenomena are like the retinal image and like these "maps" in being image-like representations, there should be no more objection to phenomena playing an integral role in perceptual processing than there is to these other image-like representations.

21. "In fact," because it is conceivable that there be perceivers that would not employ phenomena in perception - other sorts of states would play the causal role played, in fact, by phenomena.
As said, the claim that they are one and all epiphenomena is possibly true; but it stands on the brink of incredibility. (2) Granted that perceptual phenomena do not (usually) occur unless the sense organs are in good working order, and granted that they occur as the result of processes quite far up in the hierarchy of perceptual processing. Granted also that phenomenal states are conscious states.22 Granted even that there might indeed be creatures whose bodily motions result directly from phenomenal states. Nevertheless, I would deny that perceptual processing that ends with a phenomenal state is perception. Or alternatively, if one wants to call such lopped-off perceptual processing perception, then there is some other state, call it what one likes, where perceptual processing ends in aspectualized perceptual judgments; and that state is significantly different from the lopped-off one and is the one that most philosophers and psychologists have been concerned with all along, and have been, if this alternative were instead correct, mistakenly thinking of as, and calling, perception. I prefer the first alternative but will not quarrel hard with someone who wants to take the second. There are several reasons for taking at least one of these alternatives. First, many of the arguments of the first chapter, while aimed against phenomena as qualia, can be adapted to phenomena as representations, especially the positive arguments for perception as judgment. Second, any creature whose movements are the direct result of phenomenal states is cognitively just like a thermostat. If one wants to say that thermostats perceive, one can. But they do not perceive in the sense in which perception is tied to cognition. For that tie to exist, the information has to be aspectualized. Third, and closely related, if such creatures perceive, then what should we say about a situation where one of the nonphenomenal "maps" directly causes action, bypassing the phenomenal state? This outcome also would not occur unless the eyes were open and so forth. Is this seeing? Moreover, one can push this question back to the retinal images. Even if one grants that phenomena are conscious and that these other image-like representations are not, it is not obvious why that consciousness should affect how we answer the question. Fourth, in chapter 3, it was concluded that feeling pain necessarily involves proposition-like, evaluative judgment states. Surely, it is unlikely that (other) perceptual states are more primitive than pain.

22. But this admission will be greatly qualified in Part Two.
Fifth, any particular phenomenal state not only, in a sense, contains much more information than is contained in any perceptual judgment (i.e., information that is not aspectualized at all), but it also probably contains much less information as well (and the "read-off" view adumbrated in §9 would be wrong if it claimed that a single phenomenon was "read off" for each percept). We see, that is, far more than is contained in any single phenomenal state (Gibson [1966, 47, 201, 306-07; 1979, 57, 201, 245-46] made this point as well). Sixth, and last for now, given the arguments of the previous chapters, it is at least conceivable that there are creatures whose perceptual judgments are brought about by nonphenomenal (perhaps even nonimagistic) precursors. One would be forced to say that such creatures do not perceive, even though unless their eyes were open and so on, they would not have arrived at these judgments. For all these reasons, I call only perceptual processing that ends in judgment perception. Accordingly, it may be that although organisms are born with working sense organs, the organisms have to develop before they are able to perceive. Perceiving, that is, is an acquired state (which is not incompatible with its being a genetically determined one). It is true, if either of the last two perceptual theories is correct, that the end-state of perception is derived (in part) from phenomenal states; and it may also be true that, evolutionarily speaking, phenomenal states directly gave rise to action before the time when perception, as I conceive it, even existed (perception giving rise to much more flexible responses on the part of the organism). It may also be true, as I suggested with the epiphenomenal view sketched in §8, that phenomenal states are somehow stored as such for later access by memory mechanisms. For all these reasons, one might think of the lopped-off state as a kind of proto-perception — as long as one makes a clear distinction between proto-perception and perception per se. Perception is a state with content (is an intentional state), while proto-perception is an unaspectualized information state.

12.

Any of the six theories sketched in this chapter, those three already in the field and those three newly proposed, would be consonant with and supportive of the results of the previous chapters. Since there seem to be no theories of perception that are obviously better than — or even as good as — these information theories, and which conflict with those prior results, I feel quite confident in those results. There are, admittedly, many loose ends. And not all of them can be easily tied up.
The one I wish to concentrate on is that perception seems to be both conscious and unconscious (in blindsight, for instance). In Part Two, I examine this appearance. I argue that, as described, things are not exactly as they appear. The actual story is more complicated. Part Two also continues the task of playing down the role of phenomena in our mental lives: phenomenal states are often taken to be the paradigms of consciousness; but this supposition, too, is a mistake. To see how these appearances are false and these mistakes made, it is necessary to investigate the nature of consciousness itself (or, as I shall come to say, the natures of consciousnesses themselves).
PART TWO
Consciousness
Consciousness: preliminaries

In Part One, perceptions, both conscious and unconscious, were focused on. But what are, and how can there be, unconscious perceptions? For that matter, what are conscious perceptions? The notion of consciousness, like the notion of phenomena, has been far from clear. And the two notions seem to be importantly connected. Some (Searle 1989, 1990, 1992; McGinn 1988, 1989; Nagel 1979a, 1986; Natsoulas 1989b, 1990b, among others) would claim that they are essentially connected to each other. I deny that connection — at least in the forms in which it is usually presented — in these chapters. Nevertheless, I do agree that casting light on consciousness also enables one to bring into yet sharper focus the results of Part One concerning phenomena.1 Consciousness matters so much to us because it is intimately connected to the idea of ourselves as Lockean persons. To be a Lockean person is to be something that thinks and feels, that not only qualifies as an object of moral consideration, but is also, most especially, a thing with moral duties and obligations; a thing that is, in short, a moral agent. Such a list is not meant to define the Lockean notion of a "person," nor to be exhaustive. Rather, I list these traits because of the widespread belief that anything possessing them must be a conscious being. However, most attempts to analyze — or even understand — this underlying consciousness have been failures. Many psychologists have reached this conclusion as well, and many of them avoid talk of consciousness altogether. But, frankly, consciousness cannot be denied. It is an inescapable feature of our human lives (see the Weiskrantz quote among the epigraphs of this book). We should, therefore, try hard to make sense of it. In this chapter, three popular conceptions of consciousness are each shown to be lacking. However, the discussion of why these three fail opens a way towards understanding consciousness; and a theory will be presented in the next chapter.

1. Material for this chapter is taken largely from Nelkin 1987b but is much reworked.

The first conception identifies consciousness with awareness; the second claims more narrowly that it consists of apperceptive awareness only. The third is that consciousness is a phenomenal (or phenomenological) state — unanalyzable, indeed ineffable — and is best expressed by the slogan that there is something it is like to be a conscious being. Essential to consciousness, on this last account, is that it feels some way or other. The awareness here is a felt awareness. For the present, I call this last notion "Nagel-consciousness" (and abbreviate it as "CN") because Thomas Nagel has said many of the most interesting things about it (Nagel 1974, 1979a, 1986).
1.

"Consciousness is awareness" expresses an identity (this identity seems to be defended, for example, in Campion et al. 1983) and, at first blush, makes good sense. When someone is knocked unconscious, we think the person has lost all awareness: that is exactly why we say this person is unconscious. And most people would distinguish an unconscious person from someone who was asleep but dreaming. While neither person is awake, still, the dreamer is aware of the dream, hence conscious in some way or other. We might say the dreamer isn't fully conscious, but it would be odd to say he or she is unconscious. Being fully anaesthetized is markedly different from dreaming while sleeping. In undergoing surgery with a general anaesthetic, one appears to have no awareness whatsoever. Not only does awareness seem sufficient for consciousness, it also certainly sounds wrong to say that someone is conscious but has no awareness whatsoever. All consciousness involves awareness. None of the positions to be considered, including my own (see the next chapter), quarrels with that claim (that awareness is necessary for consciousness). The real question is whether we can make sense of awareness without consciousness (whether awareness is sufficient for consciousness). While it does seem somehow obviously wrong to say someone is unconscious but nevertheless aware, it is considerably less jarring to say that someone is aware although unconsciously so. In fact, we do say this sort of thing; and it often seems to be the most reasonable thing to say. Yet, if the identity, "Consciousness is awareness," were true, someone's being unconsciously aware would not even be possible, since the notion of unconscious awareness would be incoherent.2 But many, psychologists and nonpsychologists alike, find the notion of unconscious awareness to describe best what goes on in subliminal perception experiments, blindsight experiments, and commissurotomy cases. Even among those who question the validity and interpretation of such experiments, many of the critics believe claims for unconscious awareness in these cases to be mistaken or unfounded rather than definitionally incoherent.
Before considering these anomalous cases, consider a more garden-variety case involving events familiar to most of us. Sometimes when driving we are thinking deeply about things — philosophical problems, problems in psychology, problems in our love life — or carrying on an animated conversation with a passenger, or listening intently to music. We arrive at our destination. In order to have done so, we must have seen all the obstacles that crossed our path and avoided them; we almost certainly stopped at all the red lights, passed cars, and so forth. We were, by hypothesis, conscious of thinking, or conversing, or listening to music; and we were certainly aware of it in being conscious of it. Seeing is also a state of awareness. Yet the question arises as to whether we were also consciously seeing stoplights, other cars, and so on. We can say we must have been aware of them; otherwise we would not have arrived intact. But was our awareness conscious awareness? If asked about them at the time, or if our attention had otherwise been directed to them, we could have described the obstacles; but that fact does not show that without being asked, or without a direction of attention (brought about either externally or internally), we were at the time consciously seeing them. However, that at the end of the trip we have no memory of the traffic obstacles does not in itself show we did not consciously see them, any more than our getting to our destination unhurt means that we did. So the question remains. Defenders of the identity of consciousness with awareness (Identificationists) would reply that if we were aware of the obstacles, we were conscious, too.

2. Actually, this claim is not quite right. I think the identity is correct but lacking a good deal of refinement. When these refinements are added (chapter 6), the identity will be seen to be true and the notion of "unconscious awareness" will be found to be not only conceivable, but actualized. Since the refinements will not be added until the next chapter, the reader can, for present purposes, temporarily consider the claim to be true.
Had we been unconscious, we certainly would not have been aware of the obstacles.3 The Identificationist reply has an element of truth: as will be maintained in the next chapter, there is a notion of "consciousness" for which it makes good sense. Indeed, as will be seen in the next chapter, every state of awareness is in some sense or other a conscious state. But neither of these conclusions is correct on the face of it. And in the present context, the Identificationist seemingly fails to address the question being asked. There are reasons to think that not all awareness is conscious and that we are consciously aware of fewer things than Identificationists would have us believe. To clarify the issue, consider a response to the Identificationists' reply. "We grant that consciousness without awareness makes no sense. However, it is a mistake to identify consciousness with all awareness: only a certain kind of awareness constitutes consciousness. One is free to use the term 'consciousness' in any way one likes, but then for clarity's sake one should realize that different uses of the term run the risk of merely being homonyms. Let us divide awareness into two sorts and call them 'C1' and 'C2.'4 C1 is the sort of awareness we have when we see the road and its hazards, such that we are able to arrive safely. We grant that the driver was C1 of the obstacles. She or he was certainly aware of them. But whether C1 is a conscious state in any meaningful sense remains a question. "We would both agree that the driver was also C1 of many other things: the thoughts she or he was thinking, the conversation, the music, and so on. However, a second-level awareness should be considered; and we think the essential notion of consciousness is identical to second-level awareness (C2). Granted that the driver in the example was C1 of the conversation, he or she was also C2 of this C1.
"What we were asking about the driver earlier is whether she or he was C2 of being C1 of the road obstacles. Not only do we think the question makes sense, we think the answer to it can be 'No.' It is exactly in this sense that a person can be unconsciously C1. If we identify consciousness with C2 - as we should - then we can say, without contradiction, that people sometimes have unconscious awareness: when they are C1 but not C2.5 That description seems apt for the driving case, and even more for cases like subliminal perception, blindsight, commissurotomy, and so forth."6

To clarify the distinction between C1 and C2, a distinction I myself defend in the next chapter, contemplate these two cases.7 In the first case, when we arrive at our destination, the passengers comment that given how deep in conversation we were, it is remarkable that we arrived safely. On being challenged in this way, one must agree that it is remarkable; for one has no memory of seeing the lights, and so forth. In the second case, one instead replies that they shouldn't worry since one was perfectly aware of what was occurring, supporting the claim by giving a rundown of various of the traffic events that occurred during the drive. These two different responses, while presented after the events in question, point to, and provide evidence for, the existence of two different kinds of mental states that can occur during the events in question. Such after-the-fact evidence is by no means conclusive. It may be that all the evidence points to is a difference in the later memory states rather than to a difference in the states at the time (see Holender 1986). If the cases under discussion were the only sort that pointed to C1 without C2, then it would not be reasonable to conclude that two states are evidenced. The memory hypothesis would be just as well evidenced. However, a whole host of experiments and daily experiences point to dissociations of C1 from C2 (see the next chapter for some of them). And while for any particular case, an alternative interpretation of the evidence is possible, only the unconscious awareness hypothesis accounts for all such cases; and it, therefore, avoids the ad hoc-ness of other interpretations. This chapter proceeds on the assumption that C1 and C2 are different mental states. In the next chapter, additional evidence for their distinct existences will be presented; and perhaps even continued doubters will then come to agree.

The response to Identificationism unveils a more circumscribed identification of consciousness with awareness, namely, that consciousness is C2, i.e., a second-order awareness of first-order states of awareness (call this form of Identificationism "Apperceptionalism"). But Apperceptionalism can also be called into question.8 First, Identificationists can make a further reply; and, second, a further objection to Identificationism needs to be considered, this objection sometimes being held in conjunction with the earlier one — and sometimes confused with it. This latter objection, however, weighs against Apperceptionalism as well. The Identificationist reply will be taken up in §3, and it will lead to an attempt to further analyze C2. Then, in §4, the second objection against Identificationism will be raised, leading beyond Apperceptionalism to yet a third attempt to understand consciousness. But before moving to those tasks, I want to consider two objections to the very existence of C2.

2. First, it might be claimed that when we are consciously aware of something, that "something," as it were, so fills our consciousness that we cannot at the same time be aware of being aware of it. No room for C2 exists when we are C1. But contrary to this reply, cases like the one involving the driver who can recite events of the drive strongly suggest that one can be aware of one's awareness of a state of affairs at the same time one is aware of that state of affairs. That is, the second car case strongly suggests that one can be C1 and C2 at the same time.

3 This example is drawn from personal experience. As a teenager, once, late at night (and, yet, quite sober), I drove across town (or believe I did). When I pulled into the driveway, I realized I had just done so and must have driven there. But I had no memory of the drive. It was a frightening realization, one of those Woolfian moments of being.
4 In "C1" and "C2," the "C" will do double duty, abbreviating both "conscious" and "consciousness," depending on the context.
5 Much more will be said about C1 and C2 subsequently. I caution the reader against identifying either with phenomenal experiences, which comprise yet a third candidate for being labeled "conscious state."
6 Cf. Armstrong (1980, 198-99), who uses the same example: "My proposal is that consciousness, in this sense of the word, is nothing but perception or awareness of the state of our own mind. The driver in a state of automatism perceives, or is aware of, the road. If he did not, the car would be in a ditch. But he is not currently aware of his awareness of the road. He perceives the road, but he does not perceive his perceiving, or anything else that is going on in his mind" (Armstrong's own emphases). Armstrong thinks of C2 as being like perceiving, a view I believe to be mistaken. He also claims that we are normally aware of what is going on in our minds. I would maintain that we are normally C2 of very little that is occurring in our minds.
7 To say I defend the distinction is not the same as saying that I think that C2 is what consciousness really is. In one sense, I do; and in another sense, I don't (see the next chapter). I do think that C2 is an extremely important notion of consciousness.
That one can be C1 and C2 at the same time — and usually is, at least about some C1 state or other — just seems to be a fact. Blindsight and the other anomalous cases drive this point home even more forcefully: the lack of C2 in these cases is part of what makes them so bizarre. Note that one can be C2 that one is in a C1 state without paying attention to that C1 state. One can be aware, for instance, of seeing a flower without paying attention to the seeing itself.

Yet, if it is such an obvious fact that C2 and C1 occur simultaneously, why has it been denied? One motivation is signalled by the "filling" metaphor. There is a tendency to think of consciousness on the model of CN. CN involves a notion of awareness different from either C1 or C2, and the denial can be re-expressed as saying that when (in this sense) we are aware (i.e., CN) of a state of affairs we cannot also be CN of our CN. As stated, this denial is correct; but it would be relevant only if C1 and C2 are identifiable with CN — which they are not. When one is aware in the sense of CN, there is something it is like to be in that state of awareness. The paradigm here is phenomenal states. I would claim, and will argue, that one can be C2 that one is C1 of something without experiencing any relevant CN state. C1 and C2 take up no "space" in consciousness in the way CN does. C1 and C2 awareness involve proposition-like representation; they are not phenomenal states. Only if one identifies all consciousness with CN can one not find "room" for our being C2 that we are C1 of x when we are C1 of x; for we cannot be in two phenomenal states at the same time.9 This point cannot be fully clarified until after a more thorough discussion of CN, so further comment is postponed until then. However, even at this point, it should be obvious that identifying C2 and C1 as forms of CN requires argument. The burden of proof is surely on that side.

8 Rosenthal's (1986, 1991, 1993) view is a slight twist on Apperceptionalism: he argues that being C2 that one is C1 constitutes the consciousness of the C1 state, that C2 is not itself a conscious state unless one is C3 that one is C2, and so on. While Rosenthal's theory only overlaps Apperceptionalism, it is open to many of the same objections as well as to additional ones that are irrelevant to current issues (e.g., objections concerning how he treats phenomenal states, especially their feltness, and objections to his commitment to C2s also involving an awareness of oneself as experiencing C1 [or phenomenal] states). These additional objections are raised at appropriate times.
A second objection to Apperceptionalism goes like this: If the evidence for the existence of C2 as a different state from C1 is the post-facto behavioral evidence of the kind given in the second car case, isn't an infinite regress generated? For if asked later, one would also surely admit that one was aware that one was C2 that one was C1 of x. Call this newly-admitted-to awareness "C3." But this same story can be told again — ad infinitum. So belief in the existence of C2 either (1) generates an infinite regress, or (2) is committed to our being able to be unconsciously C2 that we are C1 — and that description does not ring true — or (3) is such that its adherents must give up the kind of post-facto evidencing for C2 employed, and it is pretty much the only kind available.

Since I believe in the existence of C2,10 I will reply for its adherents. First off, there is other evidence for the existence of C2 provided by other kinds of dissociations (see the next chapter); and so we should reject the third horn of this trilemma, especially its last clause. But even if evidence can be provided for the existence of C2, we would still seem to face the problem of understanding it without being gored on either of the first two horns. Defenders of the existence of C2 must make it reasonable that one can be C1 while also being C2 without generating a vicious regress. One way would simply be to grasp the second horn. This possibility can be shown to ring truer than it might initially seem (see, for instance, Rosenthal 1986). But a simpler and more relevant argument is available. It may be that, if asked, one would admit to being aware of being aware of x — i.e., to being C3 of being C2 of C1. But if one were then asked whether one was not also aware of being aware of being aware of being aware of x — if my own case generalizes — the answer seems to be, in fact, no. That is, it just is experientially, and so empirically, false that I am ever C4 that I am C3 that I am C2 that I am C1 (and it is probably often false that I am C3 that . . .). While there are cases where I would admit to C3, I can think of none where I would admit to C4. So there is no infinite regress of any kind. But neither is C3 an unconscious state. By being an apperceptive state (an iteration of a C2 state), it is a conscious state. It is just that we are not also aware of being in that state. (And similar remarks apply to those C2 states we would not admit to having been C3 of at the time.)

9 Actually, even this claim may not be true: at a movie, for instance, we seem to experience both aural and visual phenomena at the same time. Are things as they seem, i.e., does our experience consist of one or two phenomenal states at a moment of time? I know of no principled answer to the question.
Unlike Rosenthal (1986, 1993), Apperceptionalism is not claiming that a lower-order state is made conscious by being the object of a higher-order state. Apperceptionalism is claiming that the higher-order state is itself conscious, because it is the sort of state it is — an apperceptive one. Of course, more fleshing out of these arguments will be needed to convince someone who does not already believe that C2 exists; but as said earlier, there are other reasons, and much of the rest of this book is meant as the relevant argument.

10 Which, once more, is not the same as believing in the identity of C2 with consciousness per se, i.e., not the same as accepting Apperceptionalism.
Consciousness: preliminaries
3. The Identificationists' further reply takes one of two forms. The stronger version maintains that there is no way to make sense of C1 and C2 as distinct; the weaker version asserts that the C1/C2 distinction is useless since no experimentally useful way of establishing it exists. For the most part, these two replies are not differentiated in what follows, since the discussion is relevant to each. In short, the Identificationist reply is that while we have experimentally reliable criteria of a behavioral kind for C1, no experimentally reliable criteria for ascribing C2 exist (Campion et al. 1983). Arguments against this position have already been presented in §§1 and 2, and more will follow. But for now, consider an even more obvious counterinstance to this reply, namely, that experimental subjects can verbally reveal that they are C2 that they are C1 — and often do.

The counterinstance may incline one to regard consciousness as verbalization. Verbalization, being able to say what states one is in, is intended as an analysis of C2; and so Verbalizationism is a form of Apperceptionalism, not a distinct view of consciousness. A couple of other considerations might further one's inclination. One might note that while there are disagreements about whether nonhuman animals think or feel or are worthy of moral patiency, there is less disagreement about whether nonhuman animals are moral agents. Almost everyone denies moral agency of nonhuman animals.11 And if one believes that consciousness underlies all four traits, then one might conclude that consciousness must be unique to human beings. The obvious candidate for a unique trait is that human beings can talk, that they have a syntactically rich, fully developed language. That is one consideration that might lead to the Verbalizationist thesis. One might, thus, believe that being C2 is nothing more than being able to report on the information being processed.
That is just what one can do in the car case in regard to the music or conversation but cannot do as far as being C1 of the stop signs, and so on.

A second consideration led Gazzaniga (Gazzaniga and LeDoux 1978; see also Gazzaniga 1977) to be a Verbalizationist. Most commissurotomy patients are right-handed, and generally display similar kinds of responses under tachistoscopic experimentation. If a picture is flashed to their right fields of view, they acknowledge seeing it, are able to describe it, and pick up the object pictured or a like picture with their right hands. When a picture is flashed to their left fields of view, they do not acknowledge seeing anything and deny being able to describe it; or, if they do offer a description, say they are only guessing. Yet, quite often, their left hands pick up the appropriate object or picture. When asked why they picked up the object or picture with their left hands, they often seem genuinely perplexed, not only that they picked up this object or picture, but that they picked up anything at all. Sometimes, they instead show little perplexity but confabulate a reason for their actions (Gazzaniga 1977). Apparent dissociations like these cry out for the description "unconscious awareness." And the motivation for saying that these patients are aware, but unconsciously so, is based simply on the fact that they deny having seen anything. Their inability to report — to verbalize — their right-brain experiences seems to be the reason we believe their behavior to have been unconscious. However, cases such as these were initially insufficient for Gazzaniga himself to adopt the Verbalization thesis; and he at one time argued that all commissurotomy patients have conscious right-brain experiences.12 Later, Gazzaniga became convinced that such first-order awareness is inadequate for consciousness, since automata might perform similarly to the subjects of these right-brain behaviors. Only when a dramatically different case surfaced did Gazzaniga adopt Verbalizationism. One of his patients, Paul, differed from the others in being left-handed. When items were flashed to his left field of view, Paul, like the others, denied seeing them. Yet, in other ways, Paul's responses differed significantly. If flashed a picture of himself to his left field of view, Paul's affective response was especially notable. When asked whose picture it was, Paul, like the other patients, said he had no idea since he hadn't seen it.

11 I am not sure I would, by the way.
But if given access to a set of letters, Paul's left hand would spell out his name. He would react with equally appropriate responses if shown a picture of his girlfriend, or asked (by written questions being flashed to his left field of view) what his favorite color was, what profession he would like to pursue, whether he was in school, and so forth. These right-brain abilities, Gazzaniga believes, are on a different level of sophistication from previous sorts and establish conclusively that Paul's right brain is conscious even though, in these circumstances, his left brain is not conscious of what his right brain is conscious of. But which of Paul's right-brain abilities puts his actions on a different level and seems to be lacking in other commissurotomy patients? Paul, when employing only his right brain, displays linguistic abilities: he can read questions and spell out answers. These considerations make it certain that right-brain Paul is conscious. And so Gazzaniga concludes that verbalization is the essence of consciousness.

Several objections may be raised against Verbalizationism. For one thing, all nonhuman animals appear to be relegated to the unconscious realm. But Verbalizationists can make one of at least three moves in reply to this objection: (1) They can, as Descartes is often read to have done and as Gazzaniga himself seems willing to do, bite the bullet and maintain that nonhuman animals are unconscious. (2) Verbalizationists can instead refuse to bite the bullet with quite so much determination. They may claim instead that their view sounds more bizarre than it is because of the ambiguity in the term "awareness." Thus, saying nonhuman animals are unconscious might lead the hearer, mistakenly, to think that Verbalizationists are saying that animals have no awareness whatsoever. But Verbalizationists are only denying C2 of nonhuman animals. Nonhuman animals almost certainly have C1.13 (3) The third move is to refuse to bite the bullet at all. It is to claim that the Verbalizationist thesis is in no way committed to denying consciousness of nonhuman animals — not even C2. Processes similar to those underlying human language also underlie more limited forms of verbalization, and hence of C2, in nonhuman animals. There is no evolutionary leap to human language. Human language ability differs only in degree, not in kind, from that of many nonhuman animals; and similarly for consciousness.

But even if one of these moves were correct, other difficulties dog the Verbalizationist. Many psychologists have claimed, for instance, that the verbal responses of experimental subjects are untrustworthy.

12 Correctly, as I will argue in the next chapter.
Subjects often deny being aware of an item in their experience; yet, when pressed about the issue, or asked in a different way, or asked to respond in a nonverbal way, they turn out to have been aware of it after all (Campion et al. 1983; Holender 1986). Since self-reporting errors concerning C2 occur, it is plausible that C2 precedes verbalization. If so, verbalization is at most a sign of C2 rather than what C2 consists in.

13 As we will see, and as chapter 3 claims, this reply is correct about C1 but wrong about C2.
4. A further, though related, criticism seems to drive a stake in the very heart of the Verbalizationist thesis. In fact, it seems to drive a stake in the heart of any Apperceptionalist thesis, and in the heart of the Identificationist thesis as well. Ascriptions of either awareness or of verbalization, it is claimed, can truly be made only to beings who possess consciousness; and, thus, neither constitutes consciousness. Since both awareness and verbalization presuppose consciousness, neither can account for it.

Consider verbalization first. Not everything emitting a string of what we take to be words, not even when appropriate to the situation, is, or should be, considered conscious. We all know the examples: tape machines, stereos, and now we could add Coke machines. We do not take these things to verbalize — because they are not conscious. We can even imagine an anaesthetized person's vocal cords and so forth being manipulated so that sentences such as, "No, stick the scalpel over there," are emitted appropriately. Still, despite apparently relevant verbalization, this person isn't conscious. Bill Cosby tells us that when he dies he wants to be rigged up so that each time a mourner passes his open coffin he sits up and says, "Don't I look like myself?" But what makes Cosby's joke a joke is that he'd be dead, not conscious. Only conscious things can be correctly held to verbalize. Verbalization requires meaning and intention, and meaning and intention exist only in conscious beings.14 Since verbalization presupposes consciousness, it cannot constitute consciousness.

And similarly, one might argue, for awareness. Only conscious beings can be aware. It makes no sense to think that thermostats are aware of temperatures or that plants are aware of sunlight because thermostats and plants are not conscious.
We are tempted to talk about "unconscious awareness" in the case of human beings only because (1) their behavior is similar to what it is when they are consciously aware, and (2) living, unanaesthetized human beings are the kinds of things that can be conscious. Awareness, like verbalization, presupposes consciousness. If something is not conscious, it cannot be genuinely aware — whatever its behavior might be.

14 This claim, I believe, is correct; but it should not be confused with Searle's (1989, 1990, 1992) similar-sounding claim. The latter will be discussed in the next chapter.
Thus runs the objection.15 Attempts to analyze consciousness in terms of C1 or C2 fail, one might continue, because they fail to capture the most important fact about consciousness: that it feels. Consciousness is ineffable. Primitive. Unanalyzable. One just knows from the inside, as it were, what it is to be conscious. As Nagel (1974) says, for something to be conscious, there must be something it is like to be that thing — something it is like for the thing itself. This slogan is claimed to capture best (although not very well) that element of consciousness underlying Lockean personhood. Similarly, Searle says that to be in a state of awareness is to be in an aspectual state, and he claims that aspectual states are bequeathed their aspectuality by feeling some way or other to the subject of that state (Searle 1989, 201). Call the position identifying consciousness with CN "Qualism."

Nagel uses the example of a bat to clarify the notion of an ineffable consciousness. He says that even if we thoroughly understood the neural anatomy, physiology, chemistry, and so on of a bat's sonar perception, a mode of perception that we as human beings lack, we would still fail to comprehend what that perception would be like for the bat. We would fail to understand how being in that conscious perceptual state feels for the bat. We would lack the requisite tools for understanding the phenomenal properties of the bat's experience.16 We would fail to represent the world in the qualitative way the bat does and fail thereby to understand the bat's qualitative experience. Yet, these qualitative properties, it is claimed, are the very essence of a bat's perceptual consciousness. It is these properties that are ineffable. And similarly ineffable properties constitute our own consciousness. To be conscious is to experience a phenomenal state.17 When we distinguish a conscious state from an unconscious one, we are distinguishing the first as possessing these ineffable qualities and the second as lacking them. People most often experience states with these qualities; thermostats and flowers never do. Most would bet that many nonhuman animals are similar to people in experiencing phenomenal states, although we have no proof that they do. In fact, there is no proof that any person besides oneself experiences these qualities, though biology and behavior provide as good evidence as any of us needs for believing that others do.

Given the ineffability of CN, the following sort of case ought to be at least imaginable, even if we have good empirical evidence from anatomical, physiological, and behavioral similarity that it is not actual. Since no proof exists that anything besides oneself has CN, one ought to be able to imagine that the world is exactly the way it is — human beings live and interact in just the ways they do — except that one oneself is, despite one's beliefs to the contrary, the only CN being.18 But, surely, this very thought experiment makes manifest that CN, whatever else may be true about it, fails to encompass all of our central beliefs about consciousness. For the correct conclusion to draw from this thought experiment is that whether other human beings are CN or not, given the world exactly as it is in every other way, human beings obviously think, feel (affectively and emotionally), are moral agents, and are objects of moral concern. That is, they are conscious Lockean persons. As Wittgenstein (1953, 126, §420) reminds us, in order to imagine that everyone else around oneself is an automaton that lacks consciousness, one would have to imagine also that the world is not exactly the way it is — and that quite independently of whether such beings are CN. One would have to imagine quite different behaviors and interactions from those actually existing in the world before one would withhold ascriptions of personhood from friends, lovers, and relatives. If we can imagine the world exactly the way it is without CN, that in itself is a compelling reason to think that CN cannot be as important as Nagel and others believe it to be.

15 I think the objection is itself wrong; but the ways in which it goes wrong do not affect the present chapter. And since the objection provides a path to identifying consciousness with CN, I let it pass for the moment without further comment.
16 Or, in a word intended to cover wider ground, phenomenological properties. For more on the phenomenal/phenomenological distinction, see §6 and the next chapter.
17 Consider these quotes from Nagel (1974): "But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism . . . [F]undamentally an organism has conscious mental states if and only if there is something that it is like to be that organism - something it is like for the organism" (436 - all italics in all the quotes are Nagel's). "It is impossible to exclude the phenomenological features of experiences . . ." (437). "I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task" (439). "Reflection on what it is like to be a bat seems to lead us, therefore, to the conclusion that there are facts that do not consist in the truth of propositions expressible in a human language. We can be compelled to recognize the existence of such facts without being able to state or comprehend them" (441).
18 Nagel himself (1980, 205) accepts this possibility: "Descartes's argument also has the following turned-around version, which to my knowledge he never employed. The existence of the body without the mind is just as conceivable as the existence of the mind without the body. That is, I can conceive of my body doing precisely what it is doing now, inside and out, with complete physical causation of its behavior (including typically self-conscious behavior), but without any of the mental states I am now experiencing, or any others, for that matter . . . The conceptual exercises on which these arguments depend are very convincing."
But, surely — a Qualist might respond here — if one really became convinced that one's friends, lovers, and relatives had no inner life, no CN, one would deny that they are conscious. The proper reply to this objection, for reasons soon to be presented, is that equating "inner life" with CN already begs all sorts of questions. But in addition, it is doubtful that one would so readily think of these beings as mere automata. Much more likely, one would instead consider oneself a very odd person and believe that a genetic-physiological abnormality accounted for one's experiencing CN (phenomenal) states when no one else did. After all, one would have been born from two of these beings in just the way one's non-CN siblings would have been born from them.

Some defenders of the idea that CN is essential to consciousness would, unlike Nagel himself, deny that we can imagine a world having all the same bodily behaviors but lacking beings who are CN. That is, some would claim that CN is necessary for the existence of certain kinds of physical and physiological behavior. The objection may well be correct — in fact, I am inclined to think it is correct — but it needs to be grounded. Anyone making this claim needs to argue for it. There have yet to be presented any convincing arguments of this type.19

I think that these considerations by themselves show that CN is an inadequate account of consciousness and show that CN is not even essential (except perhaps, in certain cases, causally) to the consciousness that makes us Lockean persons. This conclusion does not deny the existence of CN, only its seeming importance. CN is best understood as being constituted by phenomenal states, but the above considerations show that phenomenal states are woefully inadequate for spelling out the notion of consciousness that is most important to our concept of Lockean persons.
While finding these considerations sufficient to reach this conclusion, I realize that not many others will be as readily convinced, especially since the thought experiment is questionable (even if Nagel himself accepts it); and my prediction about its outcome hardly constitutes an argument. But arguments can be marshalled in support of this conclusion.

19 Searle's (1989, 1990, 1992) claims about these issues are discussed in considerably more detail in the next chapter. While agreeing that the turned-around Cartesian thought experiment may not be possible, I would also maintain that, correlatively, it may be unimaginable to think that changes in certain relevant brain states could occur without changes in phenomenal experience. And it may be unimaginable that, for relevant brain states, two beings could be in exactly the same state but experience different phenomenal states, or for one of them to have phenomenal experiences and the other not. For a different set of intuitions on this last set of issues, see Chalmers Forthcoming. That our intuitions can be so different here just shows how unreliable single intuitions are. They become reliable only when embedded in a good theory.

5. Before moving on to the arguments, first consider a different sort of objection to the thought experiment. One might claim that my imagination just isn't rich enough. "We can imagine that the world is exactly the way it is even though all other human beings besides oneself are automata and, like the anaesthetized persons discussed previously, are being manipulated by an Evil Genius to move their limbs, vibrate their vocal cords, and so forth. Surely, in this case, we wouldn't hold those other human beings to be conscious. And what would they be missing except for CN?"

Two replies are in order. The first reply is to question whether this imagined world would be exactly like ours in the relevant ways. That it would be is not at all obvious. Among other things, much of our current science would have to be discarded. We believe, for instance, that many animals, and most especially human beings, are self-moving creatures and that biology explains that ability. We believe that one's own movements share a common origin with those of morphologically similar creatures. We believe that no unknown, supernatural force in the universe moves other creatures. And so on and so on. Such an imagined world would not be this world. It would not be biologically and behaviorally exactly like this world, "both inside and out." Moreover, even if we can imagine such a world, we may also be able to imagine a world where other beings are not so manipulated, but make use of their own powers of using their vocal cords in just the way they do although they lack CN — i.e., the turned-around Cartesian world. Defenders of the idea that CN is constitutive of consciousness have to hold, even of this imagined turned-around Cartesian world, and not just of the Evil-Genius-manipulated world, that the relevant beings do not think and feel. This conclusion remains improbable. If the Evil-Genius-manipulated world were the real world, perhaps one would refrain from saying any other beings are conscious. But that world is not this world, nor sufficiently close to it to establish the point Qualists are trying to make here. To imagine that world is to imagine another, more distantly related, world than the one imagined in the turned-around Cartesian example.20
This conclusion remains improbable. If the Evil-Genius-manipulated world were the real world, perhaps one would refrain from saying any other beings are conscious. But that world is not this world, nor sufficiently close to it to establish the point Qualists are trying to make here. To imagine that world is to imagine another, more distantly related, world than the one imagined in the turned-around Cartesian example.20

20 This is not meant to be a refutation of skepticism. It is far too dogmatic for that. It is meant to point out how many of one's beliefs would turn out to be false, how many would have to be surrendered, if skepticism about other consciousnesses is correct. The turn to skepticism about other consciousnesses would be a very great turn indeed. See chapter 9 for a discussion of skepticism and the problem of other conscious beings.

Consciousness: preliminaries

Saying that if the Evil-Genius-manipulated world were the real world we might then refrain from ascribing consciousness to other creatures brings us to the second reply. Imagining an Evil-Genius-manipulated world muddies the waters by trading on a complication unrelated to phenomenal consciousness. When creatures are so manipulated, we tend to think that they are compelled to act as they do. In reading science fiction stories, we readily deny that androids and other robots are Lockean persons when we think the androids cannot help but do what they do. However, if we learn an android is not being manipulated or narrowly programmed to do what it does, we are more likely to think of it as a Lockean person (compare the Pinocchio story). We might also believe the android really experiences phenomenal states. But if we do, it would only reflect a prejudice that any Lockean person has to be exactly like us. Notice that we hesitate to ascribe intelligence to the sphex wasp when we discover that its seemingly rational behavior (of leaving its prey outside the burrow until it can check to see if the burrow is safe) is a quite automatic behavior, manifested even in circumstances where it is self-damaging for the wasp. Experiencing CN wouldn't make the wasp any less of an automaton, any more than a dog's failing to experience CN would make it more of one. Determinism/indeterminism issues lie behind our refusal to consider the creatures of the Evil-Genius-manipulated world to be Lockean persons. Our refusal has little to do with the question of CN. Adding CN to these creatures, as we might to the wasps, does not make us any less hesitant about whether these creatures are really Lockean persons. If their CN is manipulated in just the same way as their movements and behavior, then we will still raise the same questions, have the same doubts. Automatism is automatism, whatever it feels like to the creature.

For this reason, the objection to the turned-around Cartesian world misses its target.21

21 Problems of determinism and consciousness are connected, but not quite in the way suggested by the objection. Chapter 11 explores their connection more thoroughly.

6. Identifying consciousness with CN also gains undeserved plausibility from muddying the waters in yet another way. Nagel (1979a, 1986) and others (Searle 1989, 1990, 1992; McGinn 1988, 1989) muddy important differences between phenomena on the one hand and thoughts, feelings, beliefs, hopes, aspirations, and so forth — the propositional attitudes — on the other. CN is exemplified by
phenomena, probably only by phenomena, for phenomena are states where "feels" count. But the same is not true of the rest of our mental life. Nagel's own example of bats' sonar perception trades on this implicit — and illicit — reduction of cognitive states to phenomenal states. If, like Nagel himself, we confuse perception with phenomenal states, we will enable him to get more mileage than he should from his "what is it like to be" characterization of consciousness. When we consider nonperceptual cognitive states, these difficulties multiply. What is it like to have a thought? We are all tempted to say we know exactly what it is like to have a particular thought, say, the thought of needing to get the pistons in our car checked. But what is it that we know? We can usually report at the time that we have the thought, can draw inferences from it, and act on the basis of it. But what phenomenal state makes it that thought? Even if we experience similar phenomena when thinking similar thoughts, what converts those phenomena into that thought? Suppose the phenomenal state is a mental image (similar points can be made about any other notion of the phenomenal state). Identical mental images could play many roles for us. What makes a particular one play just the role it does? A thought neither reduces to a mental image, nor is a mental image even necessary to having a thought. We obviously might have experienced other mental images. Most importantly, we might have experienced none at all. Is it that we "hear" or "see" the words in our CN? But what makes those images words expressing the thought, any more than other images are the thought? If the proposals of section II of chapter 4 are correct, then images, at best, contain representational information but no content, while thoughts have content. 
Suppose we wanted to cause a caveman who had never seen any tool more sophisticated than a rock used for hammering or breaking to have the thought, "I need to get my pistons checked." What CN experience might we cause to give him that thought? The correct answer is pretty obvious: No CN experience would suffice. No images, not even images of English or Cavemanese words running through his mind, nor any other CN experience is relevant to his success. He cannot have that thought simply by our inducing an occurrent phenomenal state in him.22

22 This example is an adaptation of one once used by Norman Malcolm for fairly similar purposes.

Thinking involves more than experiencing CN states. The
consciousness relevant to being a Lockean person involves more than CN. At the least, as claimed in chapter 4, it involves an aspectualizing of the information contained in a phenomenal image. And these questions are even more salient if we concentrate fully on the qualitative properties of the images, rather than on their representational properties. The connection between qualitative properties and thought-contents (and any other propositional-attitude contents) seems wholly mysterious. "Even if much background must be filled in before one can have the thought, 'My car's pistons need checking,' why isn't that picture image or those word images (or those qualitative properties), against that background of other beliefs, the thought that one's car's pistons need checking?" Because if one has the appropriate background beliefs, chances are that one also has other background beliefs adequate for the thought, with the same picture image (and with the same qualitative properties), "My car's pistons are made of brass," or "Having sex is like the pistons and cylinders of a car," and so on. How are the background beliefs to be cashed out in the image, in one's CN? If we claim that the image occurs because it is caused by the appropriate background beliefs, how does this causal story get cashed out in the occurrent state? The point is that we can imagine the same occurrent state's occurring when brought about by quite different causes. Nothing in CN itself has to occur (or probably even the weaker, "does occur") when and only when one thinks that one's pistons need checking. Nor do word images improve matters: the thought with just those word images running through one's head could be "My car's rings need fixing," where one, quite ignorantly, has confused the word "pistons" as the name for rings. Or they might occur when one's thought is really a sexual innuendo having only metaphorically to do with cars at all, and so on and so on. 
We are just looking in the wrong place for thoughts when we look to CN. Everything that gives images meaning seems to lie "behind" or "alongside" the images and outside of CN. There seems to be no intentionality in CN images, even if there is representational information contained in them.23

23 A legitimate question at this point is how any occurrent internal state, even proposition-like ones (thoughts and judgments, say), can represent in an aspectual sense (i.e., be an intentional state). Wittgenstein (1953) raises just this question. I begin considering this larger question in chapter 9.

And if one focuses on the qualitative properties, as Nagel and others seem to do, rather than on the
representational character of phenomenal states, this conclusion is even more obvious. The conclusion is dramatically illustrated by Gazzaniga's patient, Paul. Gazzaniga is probably right that Paul has conscious right-brain thoughts, but there is no good reason to think that any of Paul's right-brain states is a CN state. It only begs the question to maintain that since Paul has conscious thoughts he must have appropriate CN states. Perhaps we could ask Paul through his right brain if he is experiencing appropriate phenomena. But it is certainly an empirical matter how he would answer. Paul's case at least shows that conscious thinking in the absence of phenomenal states is conceivable. CN does not constitute conscious thought, nor is CN conceptually essential to it.24 In Nagel's sense, there is nothing it is like to have a conscious thought. In another sense (and Nagel trades on confusing these two senses), of course there is something it is like to have a conscious thought, as when one is aware of one's thought, is able to say (if one can speak) what thought it is, can act purposefully on the basis of it, can draw inferences from it, and so forth. Perhaps too much stock is being put in phenomena here. One might maintain that conscious thoughts, while not reducible to phenomenal images, are like these latter states in that we are directly aware of them. And if we are directly aware of them, then they must resemble mental images, which are also directly accessible, in a relevant respect. Qualism then goes on to assert that the relevant respect is that there is something it is like to have a thought. Having a thought, while not a phenomenal experience, nevertheless "feels" different from having a desire. And having one thought "feels" different from having another. Since having a thought is admittedly not a phenomenal state, shudder quotes have to be used around "feel" because "'feel'" is supposed to be different from "feel."
24 Though it may play a role as a causal precursor of all, or some, conscious thoughts.

Let us say that "felt" states are phenomenological rather than phenomenal. But what argument grounds the Qualist assertion? The argument (which is rarely, if ever, explicitly presented) seems to be something like this: "We can distinguish among thoughts we are thinking (e.g., distinguish the thought that tomorrow is Tuesday from the thought that tomorrow is Wednesday) and distinguish what propositional attitude we are experiencing (e.g., distinguish thinking that tomorrow is
Tuesday from hoping that tomorrow is Tuesday). Since we make these distinctions, there must be a basis, in each case, on which we make them. In phenomenal cases, we distinguish experiences on the basis of their qualitative feel. However, phenomena are not the basis in propositional-attitude cases. So thought experiences must 'feel' different from each other, even if they do not feel different from each other." Hence, the move is made from the phenomenal to the phenomenological.25 To the claim that thoughts "feel" a certain way even if they do not feel any way at all, I can reply only as Hume replied about the self: When I look into my CN and bracket the phenomena, I am left with nothing. If "feeling" is like experiencing nothing, okay. If it is something other than that, then I don't experience it. And I am betting that none of my readers experiences such a state either.26 Nothing we experience in thinking parallels what we experience when we experience phenomena. When we think, we are often aware of experiencing phenomena at that very time. I have no wish to quarrel with that claim. But as Wittgenstein (1953) argued, these phenomena may well only contingently accompany thoughts. At the most, phenomena may be causes of thoughts; but causes are contingent to the degree that other — nonphenomenal in this case — types of events may be conceived as able to play an analogous causal role. Moreover, phenomenological properties, unlike phenomenal properties, are theoretical posits. They are not experienced in any way — at least not in my own case. It is only claimed, without good argument, that they must be. It is true that the relevant distinctions must have a basis, but neither phenomenal nor phenomenological states provide that basis. The assertion that "being felt" is the relevant respect in which thoughts resemble mental images is conceptually, experientially, clinically, and experimentally unwarranted. Lest my claims be misunderstood, clarification is called for.
25 See, for instance, Goldman 1993.

26 Cf. Leslie et al. 1993, who make a similar point.

27 It may be that Nagel's notion of subjectivity is quite independent of his notion of what I have called CN. If so, what I say in this chapter vis-a-vis the latter will say almost nothing about the objective/subjective distinction that is the central focus of Nagel's work (but see chapters 9 and 11, as well as Nelkin 1994d, for arguments that Nagel also fails to differentiate among distinct notions of subjectivity, failing to see that they are distinct). Moreover, if he doesn't identify CN with subjectivity, then his own focus on CN (there is something it is like to be conscious) is misleading.

While maintaining that CN is not constitutive of conscious thinking,27 there
are two things I am not claiming: (1) I am not denying that experiencing phenomena (CN) is a conscious state; (2) nor am I claiming that apperception (C2) is necessary for thinking, even if C2 is necessary for judging oneself to be thinking. 7. What is true of thoughts is by and large also true of feelings, where by "feelings" is meant affects and emotions rather than phenomena. Again, what phenomenal state comprises one's anger at Ruth, or one's hope for a better tomorrow, or one's dislike of Roger? As many have argued recently, emotions and affects are irreducible to phenomena (for instance, see Wittgenstein 1953, Mandler 1987). Beliefs and desires are taken to be essential to emotions. But beliefs and desires, even more than occurrent thoughts, are far removed from mere CN. As both Wittgenstein and Mandler argue, similar phenomena in different contexts, set against a background of different beliefs and desires, can be considered at one time anger phenomena; at another time, fright phenomena; or at a third time, phenomena of avid expectation. These phenomena get their "names," as it were, only in the context of the emotion. They do not constitute the emotion. They accompany the emotion and, perhaps, play a causal role in bringing it about.28 And, once more, it is conceivable (even if empirically false) that emotions and affects are experienced in the complete absence of phenomena. We can imagine patients like Paul reacting through their right brains to show anger or embarrassment even though they are not experiencing any CN state. Anyone who saw 2001 recognized the emotions in HAL (they were reflected in his voice, in his words, in his deeds). And one did so without imagining, or needing to imagine, that HAL experienced CN states. As is the case with thoughts, emotions and affects are far too complicated to be identical to phenomenal states. And cases like Paul's and HAL's show that phenomenal states are not conceptually necessary for their occurrence. 
28 On Mandler's (1987) view, phenomena are essential to emotions, in much the same way that I have argued that they are essential to pains (see chapter 3). If he is correct, the relation of phenomena to feelings will be somewhat different from their relation to thoughts.

In Nagel's sense, there is nothing it is like to experience conscious emotions and affects. In another sense, of course there is. It is to behave in complex ways on the basis of them, to admit to having those feelings (or at least often be in a position to admit to them if one can speak), to draw inferences
from the beliefs that help constitute them, and so on. If to be a Lockean person is to be something that has thoughts, emotions, and affects, then CN is largely irrelevant. In the sense in which "what is it like to be" is pertinent to CN, there need be nothing it is like to be a thinking-feeling person. 8. It is a good idea to sum up the points of this chapter. First, and most important, three different views of consciousness were introduced: (1) Identificationism, which identifies consciousness with awareness; (2) Apperceptionalism, which identifies consciousness with apperceptive awareness only; (3) Qualism, which identifies consciousness with phenomenal awareness. We saw that simply identifying consciousness as awareness in general, as does Identificationism, fails to allow for the possibility of unconscious awareness. That realization led us to distinguish two sorts of awareness, C1 and C2. C1 is a first-order, proposition-like representation of the world, while C2 is a second-order, apperceptive, proposition-like awareness that one is in a C1 state (or, instead, that one is experiencing a CN state). We then considered the possibility, since one could be unconsciously C1, that C2 is what consciousness consists in. Verbalization was then considered as an analysis of C2; but verbalization, it was argued, is an inadequate analysis of C2: things that do not verbalize seem to have C2, and C2 seems to be a necessary condition for verbalization to occur. So verbalization cannot be what C2 is. Criticisms of Verbalizationism led to focusing attention on CN. It was pointed out that Qualists claim that both C1 and C2 omit the most important thing about consciousness: how it feels. Despite any appeal Qualism might have, it was argued that CN does not constitute, nor is even essential to, the consciousness of conscious thought, affect, and emotion — the consciousness most important to us as Lockean persons.
Therefore, it was concluded that CN also cannot constitute consciousness. Given these, apparently negative, results, what are we to make of them? Every candidate theory is lacking in one way or another. For one to be conscious does seem to be for one to be aware, but Identificationism fails to account fully for the complexity of consciousness, and for two reasons: (1) Identificationism fails to make sense of the fact that we can be unconsciously aware of something, and (2) awarenesses of most kinds omit how consciousness feels. But
Apperceptionalism also fails as an analysis, again for two reasons: (1) It, too, omits the fact that consciousness seems to feel some way or another; and (2) it seems that one is consciously thinking about the pistons needing repair (C1) and not merely consciously judging that one is thinking that the piston needs repair (a C2 state). That is, the first-order thought itself seems to be conscious, and not merely the thought about that thought.29 Finally, Qualism fails because it does not account for conscious thoughts (or emotions, or affects), nor for the conscious, apperceptive awareness of those thoughts. At every turn, various aspects of consciousness escape our grasp. How, then, can we have a theory of consciousness? In one sense, we can't.

29 Rosenthal's (1986, 1991, 1993) theory would solve this problem for C2, but not the first.
Consciousness: a theory

When it comes to consciousness, a natural first focus is on awareness. However, awareness simpliciter seems too broad to define consciousness since there seems to be unconscious awareness. The discussion of the previous chapter distinguishes three types of awareness: phenomenality (CN), PA-awareness (C1),1 and apperception (C2). We saw that none of these accounts for everything we mean by "consciousness." Each omits important features. I will argue that these failures do not imply that there is a fourth thing consciousness is, nor that there is nothing consciousness is. Two major possibilities remain. The first is that these analyses fail because consciousness is a noncomposite state embodying all three features of the previous analyses. Because these previous analyses are but partial analyses of consciousness, they fail as total analyses. But each is a partial analysis of the state we call consciousness. Consciousness is what Natsoulas (1989b) calls a "self-reflective" state: one that is all of these at once.2

1 PA-awareness encompasses all proposition-like awareness other than C2.

2 Searle (1992) would agree. But he apparently holds something considerably stronger: Any state having one of these features must have the others. That is, Searle denies the existence (even the possibility) of unconscious intentional states (other than as possible causes of actually intentional states).

The second possibility, and the one to be defended, is that no noncomposite state, consciousness, exists. Rather than being three features of a single, noncomposite state, these three features characterize different states of human beings, each of which is labeled "consciousness." While these three features can, and often do, occur together compositely in human experience, each of the first two can exist independently of each other and of apperception. Because they frequently co-occur, the three are taken to be features of a single, noncomposite state. But if, as I intend to show, these features dissociate from each other, there will be less reason to think that they are cofeatures of a noncomposite, indivisible state. A noncomposite,
self-reflective state will be seen to be a theoretical redundancy. And if these features are dissociable, it will certainly follow, contra Searle, that they do not have to occur together. Since each feature characterizes an important way in which things like human beings differ from things like rocks, no one state has any more priority in being considered as what consciousness really is than the others.3

3 Although the consciousness that most firmly grounds Lockean personhood is probably that of apperception (see chapter 5). See Part Three for reasons why apperception is so important. The material for this chapter derives largely from Nelkin 1993b, 1989a, 1993a, Forthcoming-b, and Forthcoming-c.

While one should be wary of claims that important terms are systematically ambiguous, "consciousness" actually is systematically ambiguous, although in every use mentioned here it picks out sets of properties that distinguish beings like us from other sorts of things.4

4 I am certainly not the first to argue for the ambiguity of "consciousness": for instance, see Miller 1942 and Natsoulas 1983; however, I cut the joints somewhat differently from either Miller or Natsoulas. Moreover, Natsoulas has retreated into a notion of the theory of consciousness, with his introduction of self-reflective states (Natsoulas 1989b). However, see Wilkes (1988) for a view somewhat sympathetic with my own, though Wilkes seems to lean towards the "nothing is consciousness" view.

In a normal, everyday conscious experience, like looking at our watch when someone asks us the time, all three features manifest themselves: (1) The result of looking at our watch just is a qualitatively different experience for us from the experience that results from looking at the clock on Parliament Tower. Looking at the hands of a dial watch just results in a qualitatively different experience from looking at a digital watch face, and so on. Such experiences just "feel" different from each other.5

5 The shudder quotes are used here because any qualitative property is intended, not just those that are associated with a haptic sense.

Nagel (1974) alludes to this property when he says that even if we knew a bat's neurophysiology we would not know what it is like to be a bat, what the bat's "sonar" experience is like. (2) The second form of experience, PA-awareness, involves content, i.e., "intentionality" (see chapter 4). For instance, in perceiving our watch, we experience a watch and experience that watch to be out there, independent of us. Indeed, we see a watch. Moreover, we could, in principle, have an experience as of an object, a watch in this case, even if no watch be out there. As Descartes (1642/1986, First Meditation) forcefully brought to our attention, we have such experiences in dreams. First, note that this use of "intentionality" is related to a use of the word "meaning" as in a sentence's meaning a state of
affairs. This use is only somewhat distantly related (though less distantly than many think - see chapters 4 and 9) to our more common uses of "intentionality," as in intending an action, or as meaning "on purpose." Second, note that this sort of intentionality is the intentionality involved in judgments, both perceptual and otherwise (though by no means exclusively in judgments). One sort of intentional state is excluded from this category: apperception is excluded because it is a second-order, proposition-like representation ("second-order," because it is "about" the two first-order states). And image-like representation is also not included with first-order proposition-like intentionality.6 (3) Finally, apperception displays itself in two ways: first, while our attention may be primarily directed toward the watch, we at the same time judge ourselves to be seeing the watch rather than hearing it, guessing what the watch says, or the like. If asked whether we saw what time it was or heard Big Ben (while looking at Parliament Tower, say), we could certainly reply, without hesitation — in normal circumstances — that we saw the clock, even though our attention had been focused on the clock itself and not on seeing it.7 Second, if we perceive by means of representations, either proposition-like or imagelike, and representing is an internal experience, then in normal circumstances we are apperceptively aware of our representation of the watch (even if we are not also aware that it is a representation we are aware of).8 Because normal perceptual experience is like this, because it incorporates all three features that make us markedly different from rocks (or even from roses), it is easy to believe that these three features are features of a single, noncomposite state; and one might even think that there can be no PA-awareness and no apperception without phenomenality. But these beliefs are mistaken. 
There are both theoretical and empirical reasons to think that PA-awareness states can 6 7
8
Among other reasons, for those given in chapter 4 - including the proposal that imagelike representations are not intentional states (i.e., have no content) of any kind. It is important to notice that as I am using the term "apperception," it does not require attentiveness. Nor does apperception involve perceptual-like mechanisms. "Apperception," as I am using the term, applies only to a second-order, noninferential judgment that one is experiencing either of the first two states. A considerably fuller account of apperception is presented in chapter 8. For arguments that perception involves proposition-like representation in an essential way, see chapters 1 and 2. For reasons for thinking that it also involves image-like representations, see chapter 4.
149
Consciousness
dissociate from both phenomenality and apperception, and even reasons to think that phenomenality can exist dissociated from both of the others. Of course, apperception cannot occur apart from both of the others, but it can exist apart from each of the others. I will spend much of the chapter showing that these dissociations are at least compatible with empirical evidence. And since C l is, in principle, dissociable from both apperception and phenomenality and since phenomenality is, in principle, dissociable from both of the others, arguments are required to show that consciousness is a self-reflective state wherein it would be impossible for these features to dissociate. The first section of this chapter will be spent establishing that these dissociations are compatible with the available evidence. Having established this compatibility, I conclude the chapter by indicating reasons for preferring the theory that treats the states as dissociable (the dissociability thesis).
1. While a good many readers may already find it obvious that C1 is dissociable from C2, a number of recent arguments (Searle 1989, 1990, 1992; McGinn 1988, 1989; Nagel 1979b) claim to show that first-order intentionality is tied in an essential way to apperception and phenomenality.9 My arguments and examples of this section are intended to show that the dissociation of C1 from apperception and phenomenality is at least compatible with the available evidence and to show that the dissociation of apperception from phenomenality is also compatible with the evidence. Because of the attention that intentionality has recently received, I spend a fair amount of time flogging what some might conceive of as a dead horse; yet, all the recent attention given the issue belies the belief that the horse is dead. The arguments presented in this section are distillations of lengthier
"In this article I will argue that any intentional state is either actually or potentially a conscious intentional state, and for this reason cognitive science cannot avoid studying consciousness" (Searle 1989, 194). "A physical explanation of behavioral or functional states does not explain the mental because it does not explain its subjective features: what any conscious mental state is like for its possessor" (Nagel 1979b, 188). " . . . [Ijntentionality is a property precisely of conscious states, and arguably only of conscious states (at least originally). Moreover, the content of an experience (say) and its subjective features are, on the face of it, inseparable from each other" (McGinn 1988, 24).
versions presented elsewhere.10 Since Searle's views are, perhaps, the best known, I focus on them. Searle makes four claims that play a significant role in the discussion to follow. First, Searle claims that intentionality is essentially characterized by aspectuality (Searle 1980, 1983, 1989, 1990, 1992). Second, he takes ordinary, familiar perceptual states to be intentional states (1983). Indeed, he would agree that perception essentially involves a judgment (1983). Third, he claims that intentionality is essentially connected to consciousness (1989, 1990, 1992). And, fourth, Searle claims that unconscious states are intentional only because, and in so far as, they could become conscious (1989, 1990, 1992). I argue that there are empirical reasons to deny the last two claims — at least in the sense Searle means them — especially if we accept the first two. It is important to focus on occurrent (as opposed to dispositional) intentional states. 2. Blindsight cases (Weiskrantz 1977, 1986) are certainly compatible with C1 states dissociating from either phenomenality or apperception. When blindsight patients "guess" that it is an "X" or "O" in their "blind" fields of view, their guesses make sense if unapperceived judgments (perceptions), based on presentations to their "blind" fields, have occurred. But does intentionality need to be posited in these cases? Couldn't an unthinking mechanism be constructed that would respond to "X"s and "O"s as blindsight patients do? These are questions that must be faced, but they can be faced more squarely only at the end of this chapter. For the while, all I want to claim is that these cases are compatible with the claim that blindsight patients make unapperceived perceptual judgments (i.e., C1 but not-C2 judgments). They support the dissociability thesis at least as much as they support a nondissociability thesis.
The fact that the patients — granted, under forced-choice conditions — "guess" an X or an O even provides evidence that the patients see the object under an "aspect," as an X or an O; and it is the aspectual nature of an experience that Searle, quite plausibly, takes to be defining of an intentional state. Moreover, other blindsight experiments have results that make explanations involving "mechanical" responses seem less plausible. These results more clearly manifest semantic processing and an aspectual nature. Torjussen (Weiskrantz 1986, 133–34) conducted experiments with
10 Chapters 1, 2, 3, and 5; Nelkin 1986, 1987a, 1987b, 1989a, 1989b, 1993a, Forthcoming-b, Forthcoming-c.
patients who were shown a semicircle in their preserved, apperceivable field and nothing in their "blind" field. Their responses were that they saw a semicircle. If shown a semicircle in their blind field and nothing in their sighted field, they denied seeing anything. But if shown a semicircle in their preserved, apperceivable field and an attached semicircle in their blind field, the patients said they saw a circle. Surely, if intentionality-characterized processes are involved in the visual processing of the preserved field, as Searle thinks, they are also involved in the visual processing of the "blind" field. Otherwise this difference would seem inexplicable, since the presentation to the preserved field is the same in both cases. That is, it is reasonable to believe that the subjects see the circle, in part, because they see a semicircle in their blind fields.

Even more remarkable are cases of semantic priming from words shown in the "blind" field, as reported by Marcel (Weiskrantz 1986, 142). For instance, when shown the word "river" in their blind fields and shortly thereafter asked aloud to associate the word "bank," subjects were much more likely to associate it with a body of water than with money (Weiskrantz 1986, 139 and 149). A possible, perhaps even plausible, explanation is that semantic — i.e., intentionality-characterized — processing took place in the "blind" field. Other blindsight experiments, as well as ones involving neglect (Marshall and Halligan 1988), prosopagnosia (Young and de Haan 1990), visual extinction (Volpe et al. 1979), and commissurotomy (Gazzaniga and LeDoux 1978), equally establish the compatibility of the evidence with unapperceived intentionality.11

Searle himself claims to recognize the existence of unapperceived intentionality.
Nevertheless, he claims to preserve the essential tie between the two states (1983, 1989, 1990, 1992) by claiming that unapperceived intentional states are intentional only in the sense that they are potentially apperceivable.12 A fuller discussion of this claim will be presented shortly.
11 See Nelkin 1993a.
12 Searle (1990) uses "conscious" rather than "apperceivable," mainly because he wants to deny that there could be a single state of consciousness, apperception. I am quite sure that he would agree that in seeing the clock on Parliament Tower we are aware of our seeing it, i.e., he does not deny apperception in the sense I use it, so long as it is considered to be a feature of consciousness and not a distinct mental state. What he is really denying is that the features of consciousness can exist independently of each other. For him, all conscious states are like Natsoulas' (1989b) self-reflective states. I will argue that the features are not only distinguishable but also dissociable.
In the meantime, we can note that not only do blindsight, neglect, prosopagnosic, visual extinction, and commissurotomy cases make manifest that the dissociation of intentionality from apperception (C1 from C2) is compatible with the evidence; there is also reason to believe that the C1 states involved in these kinds of cases are impenetrable to apperception. Blindsight, for instance, has been claimed (Weiskrantz 1977, 1986) to consist of visual states — employing neural circuits other than those of normal perception — that, in any meaningful sense of "can," cannot become apperceivable. If Weiskrantz is right, our brain is such that these experiences just are not apperceivable. I know of no philosophical argument that makes this supposition impossible.13

Searle's claim that unconscious intentional states are intentional only if they are potentially conscious is prima facie plausible for dispositional states like beliefs. Indeed, three hundred years ago, Locke (1690/1959, vol. I, 193–94) gave an exactly similar analysis of stored memories. But there is little reason to accept such a claim for occurrent intentional states like those involved in blindsight. Given the strong possibility that a second — midbrain — visual system is involved in blindsight, it is likely that perceptual states that are the output of this system are never apperceivable and never have been.

3. And these same cases bear on the dissociability of phenomenality from either C1 or C2, because if it is possible that perceptual C1 states dissociate from C2, then one of two further dissociations would also seem to have to occur. If phenomenality occurs only when apperceived, then obviously a C1 state (and so intentionality) can dissociate from phenomenality, since it would do so in blindsight cases. And, in fact, blindsight patients usually report no phenomenality; and when they do report phenomenal states, the states they report (see chapter 4) seem wholly inappropriate.
If, on the other hand, phenomenality can dissociate from apperception, then it is possible that the unapperceived perception (C1) would also be accompanied by an unapperceived phenomenal state. My own view is that unapperceived phenomenality can occur, and I will return to this issue later in this chapter.
13 However, there are empirical arguments that blindsight utilizes normal channels (see Fendrich et al. 1992, 1489–91). But there are other empirical reasons to think that Weiskrantz is right after all: for instance, there is apparent evidence that human blindsight occurs even when the visual cortex of one side of an adult's brain has been surgically ablated, and so these cases of blindsight cannot be employing normal channels (Barinaga 1992, 1439; Stoerig, personal communication).
4. Experiments involving intact brains are also compatible with unapperceived intentionality. Marcel showed subjects three consecutive strings of letters. The subjects were asked to say whether the third string was or was not a word. The cases where the third string was a word themselves divided into "congruent" and "incongruent" cases. In each of these cases, the middle string was a polysemous word. In congruent cases, the first and third strings were both words that related similarly to the polysemous middle word ("hand," "palm," "wrist"). In incongruent cases, the first and third strings were words that related to the different meanings of the polysemous word ("tree," "palm," "wrist"). When the polysemous word was presented normally, correct identification of the third string as a word was enhanced only in congruent cases. But if the polysemous word was shown in a visually degraded (subliminal) manner, correct identification was enhanced in both congruent and incongruent cases (Marcel 1980).

Or consider another case from the subliminal perception literature. Dixon (1987) discusses a case where subjects were asked to select either the word "smug" or the word "cosy" to complete sentences such as "She looked . . . in her fur coat." If primed with the word "snug" such that the prime was uttered just above the audible limen, subjects invariably chose "smug" as their sentence completion. But if the prime was uttered subliminally, the subjects chose "cosy."

There are experimental and conceptual difficulties with these last two cases. In the case of the Dixon experiment, it is possible that the subliminal prompt is playing no role at all: with no prompts, subjects would almost always choose "cosy." In that case, the only experimentally interesting effect would be the effect of the phonological "snug" on the responses.
On the other hand, if a control group (a group without prompts of any kind) responded with "cosy" and "smug" in about equal numbers, then the effects of both cues are significant. I am assuming that Dixon ran the experiments with the proper controls.

It is also possible that these last two cases, as well as the previous semantic priming case, do not involve intentionality at all. The "semantic" priming is perhaps not actually semantic: while based on semantic features, the processing is merely a kind of lexical, associative, merely "mechanical" process, and not genuinely an intentional state. That is, the processing in semantic priming cases may be no more semantic than is the "Spell-Check" feature of my word-processing program. Perhaps we could even construct a connectionist machine
that responded in these ways, but which we would all agree was not conscious in any sense.14 These claims may be true; but at this point I am not arguing for their falsity, only claiming that the evidence does not compel us to accept them as true. At the end of this chapter, I more fully confront this issue. Nevertheless, it is worth pointing out here that even if the association be purely lexical, one might argue that the fact that the printed marks are taken as words at all already displays aspectuality, giving us at least some reason to think that these states are intentional states. Second, it is reasonable to think that if similarly primed in their apperceivable field, the subjects' subsequent behaviors could be similar.15 But if apperceivable perception is fully intentional — as it is on Searle's account — then given that there seem to be no differences in the two situations other than the lack of apperception in the blindsight and subliminal cases, there seems no good reason to think that blindsight and subliminal perception are not characterized by intentionality if ordinary, apperceived perception is. Granted that the responses in the semantic priming cases are "automatic," even associative, that fact is not incompatible with their also revealing underlying intentional states. With Marcel's subjects, it seems even more likely that the priming is semantic and so, intentional; for it is difficult to believe that a very strong automatic correlation exists between "palm" and "wrist." And Dixon's results show, somewhat to our surprise, that while the liminal, apperceivable association is predominantly phonological, the unapperceived, subliminal one is predominantly semantic.

The semantic priming examples not only support the claim that the evidence is compatible with there being unapperceived intentional states, but the different results in liminal and subliminal trials suggest that the processing is different in the two conditions.
And this difference in processing again makes questionable Searle's claim that unconscious intentional states must be of an apperceivable sort.

5. In fact, it is difficult to know just what is meant by the claim that unapperceived, occurrent, intentional states must at least in principle be apperceivable. At the moment of their occurrence, these states appear to be intentional states. That is why a dispositional analysis of them is so questionable. These states already seem to be aspectual, and their
14 I was presented with this sort of criticism both by A. J. Marcel (in conversation) and Robert Van Gulick (in correspondence).
15 Though the Dixon case shows that they may not be exactly similar.
aspectuality does not seem to be due to their merely being potential causes of some other states, which are the real aspectual ones. If the aspectual nature of states that are not apperceptively conscious is possessed even when they are not apperceptively conscious, especially if they can occur without ever having been apperceptively conscious, then it is difficult to comprehend what the intentionality of such states has to do with whether the switch is on or not (Searle's metaphor for consciousness), or even with whether intentional states can be causes of turning on such a switch. If Searle believes their aspectual nature is acquired or preserved out of consciousness, why isn't that an admission of the independence of intentionality from both phenomenality and apperception? Not insignificantly, when Searle gives examples of unconscious intentional states, he almost always cites dispositional states (such as believing, while dreamlessly asleep, that Denver is the capital of Colorado) and not occurrent states (such as blindsight experience). The claim that occurrent, unconscious, intentional states are potentially conscious is, in many cases, much more questionable than the claim that dispositional states are. Searle's claim that all occurrent, in-principle unconscious (i.e., unapperceived) states must be mere neural states is not supported by the empirical facts. The dissociability thesis is also compatible with the evidence.

What, then, are Searle's reasons for thinking that all intentionality must be apperceivable, that all unconscious intentional states are not actually intentional at all but only neural dispositions to cause conscious, and so actually intentional, states? As far as I can discern, the key argument, despite his numbering of various steps, is never explicitly stated. But we can construct the argument from hints he provides. Consider two quotes:

"It's obvious how it [aspectuality] works for conscious thoughts and experiences" (1989, 199).

"But this [that intentional states are aspectual] leads to a very puzzling question: how could unconscious intentional states be subjective if there is no subjective feel to them, no 'qualia,' no what-it-feels-like for me to be in that state?" (1989, 201)

What can we gather about Searle's thinking from these passages? His train of thought, I believe, is this: All intentional states are aspectual (I would agree). If a state is aspectual, it is from a point of view (I would agree again). If a state is from a point of view, then it is subjective (I
would agree, though with reservations16). If a state is subjective, then it must feel some way or other. It is their qualitative feeling that helps constitute the aspectuality of aspectual states. That is, Searle believes aspectuality requires phenomenality, that the latter provides the point of view for the former. Searle then goes on to assume that if a state is phenomenal it is also apperceptively conscious. And since unconscious states are not apperceivable, they are not phenomenal. And since they are not phenomenal, they are not intentional either. Still, since there seem to be unconscious intentional states (we believe that Denver is the capital of Colorado even when we are asleep), but these states cannot be aspectual, the only understanding of them is that they differ from other merely neurophysiological states in being dispositions to cause conscious states.

Searle's argument rests on unsupported premises. Especially unsupported is the conditional slide from aspectuality to phenomenality. The last conditional (if a state is subjective, then it is phenomenal) is especially questionable. It is true that phenomenality involves a kind of point of view, and it is true that aspectuality also involves a kind of point of view. But it is doubtful that the same notion of point of view is involved in the two cases. In fact, I am quite sure it is false that the same notion is involved. At the very least, he owes us an argument to support the identification. Beyond this most crucial of gaps in his argument is his assumption that if one is in a phenomenal state, then one must be aware of (i.e., must apperceive) that state. This assumption, like the previous one, needs defending. And once more, it is an assumption that I believe to be false; and I later present arguments to support my belief.
Finally, it must be emphasized that neither of these issues can be settled through introspection (how could they be?), so Searle's appeals to the way consciousness presents itself to us are beside the point. One may know from being conscious that one experiences aspectualized states; but that is the extent of first-hand knowledge on this issue. One certainly fails to acquire merely from being conscious any idea about how aspectuality "works" for conscious thoughts and experiences. Searle's claim that we can know how things are merely from the way consciousness presents itself to us, as the first quote suggests, is simply false.
16 See Nelkin 1994d.
6. Consider a final sort of case illustrating the evidential possibility (even plausibility) of intentionality without apperception: creative thinking. Often, when working on a problem, we get stuck. We leave it for a while. And then sometime later a well-formed solution comes to us. It seems as if this solution could not have been arrived at without rational processing having taken place outside of apperceptive awareness. Moreover, these reasoning processes would seem to require premises thought of in the aspectual way Searle demands for intentionality. Once again, no reason compels us to think that these intentionality-characterized processes are of a kind that ever could become apperceivable. Perhaps one might argue that no inference is needed: the brain, given the input, just eventually settles into a neural net that is the answer.17 It just takes a while for this relaxed net to be realized. But this reply is unlikely to be correct: we can reconstruct the kinds of inferences and abductions required for reaching the answer, and these reasoning processes are highly complex. No mere association would easily account for getting from input to neural net; and, as a result, the anti-inference view seems to require something close to a miracle. But these miracles would happen too often. No miracles are required by the unconscious-inference view. Perhaps I am wrong, but the burden of proof is at least equally shared (I would say, for the reasons given, it is on the shoulders of the other side).

7. Nor do such unapperceived reasoning processes feel any way at all: i.e., they have no phenomenality. One sometimes says things like, "I feel the wheels turning"; but this use of "feel" no more expresses a phenomenal feeling of belief than "I feel he will come tomorrow" does. No phenomenal feeling is one's thought that he will come tomorrow, and no feeling grounds that thought. Aristotle undoubtedly felt the wheels turning in his heart, not in his head.
Whatever feelings occur, in the head or the heart, are mere accompaniments of these thoughts; they could hardly constitute the thoughts themselves. More likely, we use the word "feel" in these cases because we do not know the origins of our thoughts. Nor are we apperceptively aware of the basis by which we distinguish this thought as this thought. This use of "feel" occurs when we are conscious of our states but do not know how we are conscious of them.
17 Both Irwin Goldstein and Keith Butler (in personal communications) presented this objection to me.
This discussion of "feeling he will come tomorrow" underscores the truth that many ordinary, apperceivable thoughts themselves involve no phenomenality. If one occurrently thinks that tomorrow is Tuesday or that 1000-sided figures are different from 999-sided figures, no phenomenal experiences are necessary for such thoughts.18 One may have phenomenal experiences that accompany such occurrent thoughts — even cause such thoughts — but the phenomena may be different on different occasions. If one is experiencing phenomena at all when one expresses such a thought, the phenomena often appear to be totally irrelevant to the thought being expressed.19 These last cases also support the idea that C2 and phenomenality dissociate: we can apperceive, at times, that we have an occurrent thought such as that a chiliagon is different from a figure with 999 sides, and this apperception also does not feel any way at all. Whatever phenomenal states accompany this apperception, they are not conceptually required for apperception to take place and certainly do not constitute it. One should not confuse the subjectivity of apperceptive states with phenomenality (as do Searle, Nagel, and McGinn). Neither PA-states nor apperception seem to require phenomenality. Searle himself (1983) recognizes the force of arguments that demonstrate the possible dissociation of apperception and PA-states from phenomenality. He still doesn't capitulate. Instead, he argues that even if phenomenal properties are not essential to thinking, phenomenological ones are: thoughts are "felt," even if not felt. My response, as made clear in previous chapters, is much like Hume's to that of an apperceivable self. If there are such properties of experience, I am unable to find them in my own experiences. I do find phenomenal ones. It is only phenomenological "feelings" that are somehow other than phenomenal ones that I deny. Nor am I denying that we have intentional states, either PA-states or apperceptive ones. 
I am denying only that these states "feel" like anything. Thinking that 1000-sided figures are different from 999-sided ones does not "feel" different from thinking that 1001-sided figures are different from 1000-sided ones. Nor does apperceiving these two thoughts "feel" any way at all. I may have phenomenal states that often accompany my
18 The example is borrowed from Descartes (1642/1986, 50–51), who makes a similar point.
19 Wittgenstein (1953) argued for this same independence of the cognitive and the phenomenal.
thoughts, but I can find no phenomenological ones that do even that.20 I have previously urged several times that one should not trust introspection as a psychological tool, so perhaps I should not myself rely on it here (and elsewhere). But two points can be made in defense of my using it where I do: that introspection is not always reliable does not mean it never is, and Searle can hardly deny this method to me (even though he claims that he doubts whether there is anything like introspection — but, once more, he is denying that there is an independent state of this nature). I admit that I do not know the basis by which I am able to distinguish among propositional attitudes, but I look to further scientific research to reveal this basis. There is no reason why the basis by which we make the distinction need itself be apperceivable.21

At this point, one may use one of my own claims against me.22 Since I said earlier that there could be unapperceived phenomenality, how can I be certain that phenomenality doesn't always — and somehow essentially — occur along with PA-states and apperception (only nonapperceptively so)? I cannot be certain, but neither have I any reason to believe it. That unapperceived phenomenality is always lurking in the shadows, as it were, and essentially so, seems just too much to believe. It is not an impossibility, I suppose; but I would want to be given good reasons to think it is true. And as for its being phenomenological, rather than phenomenal, properties doing the lurking, I would want even better reasons, since I never apperceive such properties. In those cases where I will defend the presence of unapperceived phenomenality, there will be reasons for my saying so. The sensible conclusion to be drawn from all these considerations is that both C1 and C2 can dissociate from phenomenality.
To sum up where we have got so far: it has been shown that it is compatible with known evidence that C1 dissociates from both apperception and phenomenality. It has been shown that perceptual states that do have intentionality, and may or may not have phenomenality, possibly occur without being apperceivable. In addition, it has been shown that there are nonperceptual intentional states that, even if apperceivable, most likely involve no phenomenality. And the creativity cases show that there is reason to believe that there are nonperceptual intentional states that are neither apperceivable nor phenomenal.

Two major issues concerning the dissociability thesis remain to be discussed: (1) Even if apperception is dissociable from phenomenality, it has yet to be shown that phenomenality is dissociable from apperception; (2) even if the dissociability thesis is compatible with the known evidence, it has yet to be shown why the dissociability thesis should be preferred to a nondissociability thesis. I will begin work on both tasks in the next section, though that work will not be completed until section IV.
20 For fuller arguments concerning this point, see chapter 5.
21 See chapter 8, section III for further discussion of this issue.
22 Robert Van Gulick (in correspondence) suggested this criticism.

II
8. In chapter 5, the technical term "CN" was introduced. CN consists of phenomenal experiences; it is the kind of state where there is something it is like to be in that state. CN states seem, at most, to be essential only for pains, visual images, auditory images, kinaesthetic feelings, and the like, i.e., those states we usually think of as sensations. However, as shown in chapter 5, neither the state of our being aware of the world in thinking and feeling (emotions and affects) nor the state of our being aware of our own thoughts and feelings essentially involves CN. There is nothing it is like to be a conscious thinking and feeling (emotions and affects) being or to be a being that is consciously aware of its own thoughts and feelings. These cognitive states, C1 and C2 respectively, are not phenomenal. Phenomena are only contingently connected to the thoughts, emotions, and affects we experience. Yet, surely, it is thoughts, emotions, and affects that underlie Lockean personhood. C1 and C2 are much more important to personhood than is CN. A being experiencing C1 and C2 states but lacking CN states would be much closer to what we think of as a Lockean person than a being experiencing CN states but lacking C1 and C2 states.23

9. C1 has been described as a proposition-like representational state and, so, an intentional state. Given the preceding section of this chapter, calling C1 a kind of consciousness may seem odd because something that is in a C1 state, without also being CN or C2, may
23 Actually, as I shall go on to say, CN without C2 isn't possible. But this claim will turn out to be fairly trivial.
seem to us quite unconscious. But C1 is best considered a kind of consciousness because it is constituted by a sophisticated kind of awareness that things without C1 lack. Organisms like ourselves, and unlike rocks, represent the world and act on the basis of those representations. Creatures with C1 states do even more: they aspectualize that information, manifesting an even more sophisticated state that distinguishes them not only from rocks but from thermostats and plants as well. Some may call such awareness, taken by itself, "unconscious awareness." But when people say such a state is an unconscious awareness, it is because they have in mind one of the other states of consciousness as their paradigm of consciousness. When we distinguish organisms like ourselves from rocks, plants, and thermostats, we often slide back and forth among criteria because, mostly unwittingly, there are different ways in which we distinguish ourselves from them. In using shifting criteria, we actually draw different lines when we draw lines between the conscious and the unconscious. These distinctions generally overlap, but in some cases they do not; and believing ourselves to be making only a single distinction, we are left deeply puzzled about these latter cases. To put the point succinctly, we think there is a single state we are referring to when we use the term "conscious," or the term "consciousness," when in actuality there are several.

C2 has been described as a second-order proposition-like representation. One is sometimes C2 about one's own C1 states. One is also C2 about one's CN states. That is, one apperceives that one is in a particular C1 state or in a particular CN state. So three different states have been distinguished: C1, C2, and CN. C1 is a first-order proposition-like representational state; CN consists of phenomenal states; and C2 is a second-order, noninferential, proposition-like awareness that one is C1 or CN.
People (at least normal people) all turn out to be conscious by each sort of test. However, if we make the sum of the tests the criterion for consciousness, it could turn out that human beings are the only conscious beings. Not only does this result seem wrong somehow, but it also obscures the diverse ways in which we — and other organisms — differ from rocks, or even from thermostats, plants, and present-day computers; most importantly, it obscures the dissociations that can occur among these states.

10. But the distinctions, as they stand, cannot be quite right. The problem lies with CN. CN seems prima facie unlike either C1 or C2;
Consciousness: a theory
but introspectively it seems that we cannot experience a phenomenal state without being aware that we are experiencing one, i.e., we cannot be C N without also being C2. Is this conclusion correct? If it is, why not consider CN as a subtype of C2? One good reason to treat CN as a separate form of consciousness from C2, though one that always "entails" C2, is that CN states are phenomenal states, while the C2 state, as we argued in chapter 5 and in the previous section, is not phenomenal. Since CN states are states that always come into the purview of our C2 (i.e., we are C2 that we are CN whenever we are CN), but are not themselves C2 states, there might also be a temptation to consider CN as a kind of C l state. But there are good reasons not to consider CN as a subtype of C l states either. First, other C l states are not phenomenal. And second, there is the fact that CN states always "entail" C2 states, while for C l states in general, not only is this not true, but there are good reasons to think that only a relatively few C l states ever come into the purview of our C2. But there is an apparent difficulty here. In distinguishing CN from both C l and C2, two facts about CN states have emerged: (1) They are phenomenal states, and (2) we seem always to be apperceptively aware of them. The difficulty is understanding these two facts at the same time, especially if, as I claim, CN states are dissociable from C2 states. If they are dissociable states, it is hard to see how there could be a relation between them as strong as there appears to be. CN and C2 s being somehow causally connected, such that whenever CN then C2, does not help resolve this matter. For if CN and C2 are merely causally connected, then even if this causal relation always holds, we can still at least imagine CN s existing dissociated from C2 (we can imagine the cause without the effect — Hume said that!). 
But what would it be for us to experience a phenomenal state that we were not also C2 that it existed? It seems as if because of C N s phenomenality, we have to read the relationship between CN and C2 in a very strong sense; but when read in this strong sense, it is difficult to understand how CN could be a dissociable and, so, independent state of consciousness. As a result, we are pulled in one direction, to think of CN, partly because of its phenomenality, as different from and independent of C2, while also being pulled in the other direction, being unable to see how that difference and independence are possible. This puzzle presumably played a 163
role in leading Natsoulas (1989b) to posit self-reflective states, i.e., felt phenomenal states that always are at the same time apperceptions of their own phenomenal properties. I will argue, instead, that the "problem" is the problem itself (as will subsequently be shown), but thinking about it deepens our understanding of those states we call "conscious." Natsoulas is forced to posit self-reflective states because he fails to understand that the problem is itself the problem.24

11. Before considering CN any further, let me say a little more about C1 and C2. C2 is a noninferential judgment that we are C1 or CN.25 As a judgment about a C1 state, it involves a proposition-like representation of another proposition-like representation, because C1 itself also involves judgment and is also a proposition-like representation. I say "like" in "proposition-like" because, while I believe these representations operate something like sentences in a language, I doubt if the representations share all the properties of what people consider to be sentences.26 I say "proposition" in "proposition-like" to distinguish such forms of representation from image-like representation. I realize that, as Pylyshyn (1981) — among others — has argued, serious difficulties exist in practice for making a distinction between proposition-like and image-like representations; but I am largely convinced by recent work, such as that of Kosslyn (1980) and of Shepard and his colleagues (see, for instance, Cooper and Shepard 1984), that there is a distinction to be made.27 It is hard to say what this difference consists in.28 In what follows, I depend on the sketch presented in chapter 4 (though this dependence is not crucial to the points to be made).

12. My main contention is that CN states, apperceived phenomenal states, are to be thought of as a subtype, but not as a subtype of either C1 states or of C2 states. To clarify and explicate my thesis, let me more
26
27 28
Ascribing this view to Natsoulas may not seem to be correct, for Natsoulas does say that there can be phenomenal states we are not apperceptively aware of. But his notion of "qualitative state" (i.e., phenomenal state) is not the philosophers' usual notion. I will soon 25 take up this issue. As the sequel will show, this is not quite right. Obviously Fodor (1975, 1981a), among others, holds a similar view about many of our representations. I do not know, however, whether Fodor would agree with anything I say about consciousness. See chapter 4 and also Fodor's (1975) discussion of these experiments. And not just for consciousness. How do paintings (say, portraits) represent differently from sentences? Again, saying what this difference is is very hard. But most of us feel there is a difference. Chapter 4, of course, attempts to come to grips with these issues.
formally introduce the term "sensation" here. Sensations are a subset of image-like representation states: they are high-level neural images, i.e., images that result from a large amount of hierarchical neural processing. Imaging, as said, involves a kind of nonpropositional representation. Although it may be prima facie odd to think of pain sensations as a kind of image, for the reasons presented in chapter 3, they — along with tickles, kinaesthetic sensations, and their ilk — are included in this class. Calling sensations "images" has a more general risk, as well: "image" usually carries a visual connotation. But I mean to use "image" in a wider sense, to include whatever relevant high-level neural analogue representational systems human beings and other organisms actually use. Exactly what these systems are, I take to be a project for further research, both theoretical and more straightforwardly empirical. CN states are also imagistic representations occurring high up the visual hierarchy. As said earlier, CN images display two features: their qualitativeness and their representationality.29 Even if we disregard their qualitativeness, the fact that the representationality of CN states is image-like rather than proposition-like suggests that CN is different from either C1 or C2, both of which are proposition-like.

13. Because of their image-like representationality, it is possible to think of CN states as forming a subset of the class of the neurologically based, high-level, image-like representation states that I have called "sensations" and will also refer to from now on as "sensation consciousness" ("CS"). The result is that from now on the three major divisions of conscious states are C1, C2, and CS (rather than CN). I contend that not only can we treat CN states as a subset of CS states, but we ought to do it. We ought to treat CN as that subset of CS that we are also C2 of. In fact, this last claim is to be considered a revised definition of CN states.
Several corollaries follow from these claims: (1) CS states exist that we are not C2 of. (2) So there are sensations we are not conscious of, in the sense that we are not C2 of their occurrence.30

29. I am not original in finding these two features. Both Natsoulas and Kosslyn are aware of this double aspect of CN states. Natsoulas, on the other hand, does not think all sensation states have both features; and he also thinks that all that do are at the same time apperceivable. Peacocke (1983) also seems to make this distinction. Though if I read Peacocke right, he makes much more of the phenomenality of such states than I am willing to. Rosenthal (1991) also emphasizes this dual nature of phenomena.
30. Both Natsoulas (1989b) and Rosenthal (1986, 1991) would agree that there are such states.
(3) Either there exist CS states, and therefore sensations, that have no phenomenality, or — an alternative I shall argue is more likely — there are phenomenal states that we do not apperceive, and, in that sense, unconscious phenomenal states exist after all (i.e., phenomenal states are dissociable from apperception). Let me briefly discuss each corollary in turn.

(1) This claim still remains to be fully established, and the best reasons for accepting it appear only in section IV. But in the meantime, we can say that insofar as perception involves image-like representations (see chapter 4), then cases like blindsight, neglect, commissurotomy, prosopagnosia, and visual extinction provide evidence of high-level, image-like representations people are not C2 of.

(2) I think we should affirm the antecedent and thereby the consequent: There are sensations that we experience but about which we are not C2. Most would agree that CN states are sensations. I argue, over the next several pages, that for quite similar reasons, we should think of all CS states, all relevant image-like representation states, as being sensations. One reason for treating all CS states as sensations is that they all seem to behave in terms of their representational aspect just as CN states do. My seeing certain things at the same time I have a CN percept brings about certain responses in me that are mimicked by responses that commissurotomy (Gazzaniga 1970; Gazzaniga and LeDoux 1978) and blindsight (Weiskrantz 1977) patients have when similar objects are held before portions of their eyes. But if CN "entails" C2, then these patients have no accompanying CN state, because there is no relevant imaging these patients are C2 of.
But it seems that commissurotomy and blindsight patients, at least with some of their more sophisticated responses, almost certainly experience representations of an image-like sort although, as Gazzaniga and others have argued (Gazzaniga 1970), the patients are not aware (C2) that they are experiencing such representations.31 Why not call these image-like representations "sensations," too, since these representations are at a high neurological level and play a very similar functional role to that of CN representations? Before further arguments are presented in support of the thesis that all CS states are sensations, consider the third corollary. Further arguments for the thesis are best presented in that context.
31. See, for instance, Gazzaniga's (1977) discussion of his patient Paul.
(3) For the third corollary, each of the two alternatives will be considered in turn. If we include CN states among CS states, it is because they, like all CS states, involve high-level, image-like representations and, functionally, all CS states, including CN states, operate similarly in perception and in other cognitive states. On the other hand, what seems to distinguish CN states as a subset of CS states, different from other subsets, is that CN states are phenomenal. If we think that the feltness of CN states consists in their phenomenality, then we get the somewhat counterintuitive result that a great many sensations are not felt, namely those that are CS but not CN. Undoubtedly the idea of unfelt sensations is somewhat jarring; but that seems a minor counterintuition, a minor irritation, in the face of important shared properties: high neurological level, image-like representational properties, and similarity of functional role. If we focus on the representational properties and on the functions these representations perform in leading to judgments (to percepts), then no hard and fast line divides sight from blindsight, divides creatures that perceive like us from those whose perceptual judgments are arrived at by high-level, image-like representations quite different from our own. These latter also have sensations even though they may not be felt. Which conclusion should we accept: that CN states are not a subset of CS states because the former but not the latter have phenomenality, or that CN states are a subset of CS states although not all CS states (namely, all those that are not also CN) have phenomenality? Given these choices, there are strong theoretical motivations for concluding that CN states do constitute a subset of CS states and that many sensations have no phenomenality. The major interest in sensations, for both philosophers and psychologists, has been in what information sensations might contain.
The conclusion being urged suggests that we should look in CN states for just the same kind of features we look for in CS-but-not-CN states if we want to discover how CN sensations provide information. That is, we need, in either case, to come to understand how high-level, image-like representation works. These distinctions are diagramed below:

C1: first-order proposition-like representational state.
CS: image-like representational state.
CN: subset of CS that has phenomenality.
C2: second-order, direct, noninferential accessing and proposition-like representation of some C1 states and of those CS states with phenomenality (i.e., of those CS states that are CN).
The separation of the representational aspect of CN states from their qualitativeness suggests that the qualitativeness that these sensations possess plays no essential role in itself in the information processes of CN states (though it might play a role as the object of further information processing: for instance, we have thoughts about qualitativeness itself). Structural (relational), not qualitative, properties make CN representationally important (cf. Rosenthal 1991). This result fits in nicely with the two-pronged attack on phenomenality of Part One: (1) It reinforces the idea that qualitativeness plays a much smaller and less important role in our lives than we thought it did, and (2) it reinforces the idea that we know virtually nothing about qualitativeness or the role it does play. This way of looking at CN and CS, and at their role in perception, while having a good many advantages, also raises difficult questions. Why do only some CS states have qualitative properties? Why just those? Most especially, why are we always C2 of phenomenal states, i.e., why are we always C2 that we have CN? What are qualitative properties? Why do we experience them? If only those CS experiences that we are C2 about do have them, perhaps phenomenal states arise from an interface of C2 with CS states. Phenomenality has these other states (CS and C2) as causes. That is why no phenomenality exists without C2. Perhaps, then, we should think of phenomenal states as yet a fourth sort of conscious state: the self-reflective states mentioned earlier. If so, they seem to emerge from the interactions of C2 and CS states. Such states would incorporate the features of their determining states and add to them qualitative properties. Or are qualitative properties merely epiphenomenal properties of the interaction of C2 and CS states (i.e., CN "states" are mere qualia and not representational at all)? And, if so, is CN also an epiphenomenon in the ontological sense? 
I do not have a justified or principled answer to any of these questions.32 If we adopt the first alternative of point (3), these just join many other unanswered questions about consciousness.

14. On the other hand, there is an alternative way to consider the relation of CN and CS states in regard to their phenomenality; and while this alternative may at first seem even more counterintuitive, it simplifies several of the previous questions and resolves a number of counterintuitions associated with the first alternative. Earlier, I said the question was whether we should reject the idea that CN states are a subset of CS states because the former but not the latter are phenomenal or accept the idea that CN states are a subset of CS states although not all CS states are phenomenal. But a third possibility is to reject the question in this form altogether. It is instead possible that all CS states are phenomenal. Qualitative properties attach to all high-level, neural, image-like representation states — those we have been calling "sensations."33 Before raising the obvious objection to this alternative, let me point out its virtues. First, phenomenality is no longer so obviously a sore thumb of an epiphenomenal state if every sensation state is a phenomenal state. Qualitative properties may still be unexplained, but they are not detachable in quite so mysterious a fashion as in the first alternative. Phenomenality would clearly be intimately connected with the very nature of CS states alone and be independent of C2 states or of any combination of CS and C2 states. Nor would we need to posit self-reflective states in addition to CS and C2. Second, this alternative should allay a hesitation on the part of readers who might argue that my reasons for treating CS states that are not CN states as sensations are insufficient. They might claim that the analogy with CN states breaks down at a crucial point: while both CN and CS states might play the same functional role, only CN states are phenomenal and therefore only they really qualify as sensations. Up to this point, I have argued (correctly, I believe) that being phenomenal is not obviously essential to being a sensation. But the present alternative allows me to grant that all sensations are phenomenal. Every CS state possesses both features of CN states.

32. Questions like these probably helped motivate Natsoulas (1989b, 1990b) to introduce self-reflective states. And they are questions with which Rosenthal (1986, 1991) has never really come to grips.
So if we think of CN states as sensations, we have the same grounds for thinking of all CS states as sensations. Third, if having phenomenality is constitutive of being felt, then there are no unfelt sensations since all CS states are phenomenal. We thereby preserve, for what it is worth, our intuition that sensations are necessarily felt. We can, then, diagram these distinctions as follows:

C1: first-order proposition-like representational state.
CS: high-level, neural, image-like representational state with phenomenality.
C2: second-order, direct, noninferential accessing and proposition-like representation of some C1 and of some CS states (those CS states we are C2 about are those called "CN").

33. I have not tried to specify the neural processes that underlie this class of representations. Nor can I. Doing so remains a project, both empirical and philosophical.
We can now return to the obvious objection: I seem to be forced into claiming that there are felt experiences that we are not conscious of. What is an unconscious phenomenal state? That question has been the driving force of this entire section. But, by now, it should be understood that this question is itself a paradigm of the very mistake I have been illustrating throughout this chapter. If one means by "consciousness" a sensation state, a CS state, then the states in question are both conscious and felt. If, on the other hand, by "conscious" one means a C2 state, then such states will be felt and conscious in that first sense of "consciousness" (CS) but in this other sense unconscious (not C2). Only if we consider consciousness to be a noncomposite and indivisible state is there anything seemingly contradictory in such a claim. Introspectively, of course, it may seem to us that all phenomenal states are apperceived. How could it seem otherwise, since introspection is itself a sophisticated sort of apperception? But it should be equally obvious that introspection cannot reasonably be the means by which we decide whether all phenomenal states are also apperceived. But what other evidence do we have that phenomenal states cannot exist unapperceived? As far as I know, none has ever been presented (see section IV for further argument for this claim). What we must bear in mind is that we shift among at least three notions of consciousness, and our failure to realize these shifts leads us into many of our principal errors concerning, and confusions about, consciousness, making it seem much more mysterious than it is. We should no more expect to be C2 of all our phenomenal image-like representational states (CS states) than we are of all our C1 states. Those CS states we are C2 about are those that are CN. That is why CN states "entail" C2. It is a matter of definition. And that is all.
Why we become C2 aware of only some CS states (and only some C1 states, for that matter) and not others is a further question about consciousness. But there are many questions about consciousness, and that is partly because there is no one state that consciousness is. To further convince the reader that the second alternative of this corollary is reasonable, I will present further arguments that phenomenal states (CS) dissociate from apperceptive awareness (C2). But I will postpone this discussion until section IV. First, I take up another issue.
III
15. McGinn has been mentioned as a defender of a self-reflective view of consciousness, but there are important differences between his view and those discussed in section I. For one thing, McGinn, unlike Searle but like Natsoulas, believes that there are unconscious states that are fully intentional (1988, 33). However, McGinn also believes that conscious states possess intentionality in a special way. Their intentionality, he believes, is ultimately unanalyzable for us. Because this sort of intentionality is essentially linked to consciousness, it can be understood only by understanding consciousness itself (1988, 33-34). Since McGinn thinks consciousness is not understandable on the basis of our humanly possible conceptual schemes (1988, 25-26; 1989), he concludes that conscious intentionality will also never be understood. What does McGinn mean by "consciousness"? My belief is that McGinn is thinking of consciousness as a self-reflective state that is essentially phenomenal (and, perhaps, also phenomenological). As I have argued, the only sense of a consciousness where there is something it is like to be in such a state - and McGinn accepts that description of consciousness - is that of phenomenal states. So the question is: Do phenomenal states have a special sort of intentionality about them? Not exactly. As chapter 4 and the previous section outline, I take phenomenal states to be representational states: high-level, neural image-like representations. And image-like representation is different from first-order, proposition-like, intentional states. However, even if some image-like representation is phenomenal, we can distinguish the representational nature of such states from their qualitative medium. That this separation is both possible and worthwhile is seen from the fact that the representational aspect of phenomena depends on their structural rather than qualitative properties. 
Because their structural features make them representational, the representational nature of phenomenal states is similar to that of those that are not phenomenal. And so the representational nature even of phenomenal states is not the result (other than causally) of their being phenomenal. Phenomenal representation is dependent on the structural, rather than on the qualitative properties, of the representation, just as for any other image-like
representation.34 Once one understands the representational aspect of phenomenal states as distinguishable from their qualitative aspect (though phenomena are the stuff of which phenomenal representations are made), then the problem for those states becomes simply the (very difficult) problem of image-like representation. Kosslyn (1980, 1987), Shepard (Cooper and Shepard 1984), and others are already providing insight into the nature of such representation. Moreover, if the proposal of chapter 4 is correct, then even if phenomenal states are representational, they, nevertheless, lack intentionality. So if they are not intentional states at all, they are certainly not a special kind of intentional state either. Second, only because McGinn views consciousness as a noncomposite state — as indivisibly both phenomenal and apperceptive — does he fail to see the above escape from his difficulty. But as I argue more fully in section IV, one can experience a phenomenal state without being apperceptively aware that one is experiencing it. Once one comprehends the dissociation of phenomenal and apperceptive consciousnesses, phenomenality becomes less powerful-seeming — and less interesting. Both this and the previous section have ended with the same promise. Now is the time for fulfilling it. As I fulfill it, I will begin to urge not only that the dissociability thesis is compatible with the evidence, but that it is also a better theory than the nondissociability thesis.

IV
16. What remains to be provided is evidence that phenomenality can dissociate from apperception. Can we feel tingles or experience colors, say, without being apperceptively aware that we are? A few reasons in support of our being able to were provided in section II, but there is a good deal more to say. Of the dissociations I am arguing for, this one is probably the most controversial. Many philosophers and psychologists find it extremely difficult to understand how phenomenal states could occur unapperceived. They claim not to be able to conceive how one could experience a mental image or some other "feeling" but not be apperceptively aware that one is in that state. Of the arguments to be presented in favor of the dissociation, no one argument is by itself a knockdown one. Nor do all together constitute a knockdown argument. But since knockdown arguments are rarely available in theoretical matters, I am satisfied if, all together, they put the weight of evidence and argument on the side of the dissociability thesis. Even more weakly, I would be pretty satisfied if philosophers and psychologists would come to see that the question of the dissociability of CS from C2 is an open question that will have to be settled on theoretical/empirical grounds. One word of caution before I begin supporting the dissociability of CS from C2: in claiming that we can experience CS without C2, I am not claiming that there is no awareness involved in an unapperceived CS state. CS is itself a kind of awareness — phenomenal awareness. What I am claiming is that the two sorts of awareness, phenomenal awareness and apperceptive awareness, are dissociable.

17. While there are no knockdown arguments for the view that CS states are dissociable from C2 states (the dissociability thesis), there are no arguments at all, so far as I am aware, that they are not dissociable (the nondissociability thesis). Those who would deny their dissociability simply take their nondissociability for granted. Perhaps one might assert that no argument is needed: It is just intuitively obvious that phenomenal states have to be at one and the same time apperceived. My only response is to deny the assertion in my own case and to point to the many other philosophers who also deny having the intuition. Where intuitive obviousness is so often denied, there is a place where the intuition itself should be questioned.

34. That the relevant properties are structural, and so relational in a sense, should not lead one to think that they are not intrinsic to the representation. I will maintain that we possess two notions of "representation": one that is intrinsic and one that is extrinsic. For more on this distinction, see chapter 9.
If there be any argument that has ever been given, it would go something like this: "Every CS state I have been aware of experiencing, I have been apperceptively aware of; therefore, every CS state is such that one is apperceptively aware of it." Since no one would knowingly use such a bad argument, I doubt if argument is driving the belief in the nondissociability thesis at all. But the "grounds" for the belief are not much different from this argument. The only CS experiences we attentively notice are ones we are also C2 of, and so it seems to us as if CS and C2 are inseparable. Introspectively, it probably does seem that way to us. So much so that we are tempted to ask, "What would it be
to 'feel' an experience that was not apperceived?" But the answer to that question is, "Exactly what it 'feels' like to experience one that is apperceived." The inability to accept that simple (and correct) answer is based wholly on a prejudice that CS states can occur only when one is also C2 of them. But that very prejudice - for without argument to support it, that is all it is - is exactly what is in question. It is important to note, as has been noted about several other issues, that introspection cannot determine whether CS states have to be apperceived. Whatever it may seem like to introspection, introspection is possibly mistaken (and introspection's track record as a psychological tool hasn't exactly been a glorious one). And it is quite difficult to see how introspection could determine whether an apperceived experience is or is not capable of occurring unapperceived. Nor are all the intuitions on the side of the nondissociability thesis. I will try to persuade the reader of this claim by asking him or her to indulge me for a few moments by participating as the subject of the following experiment: Concentrate on the bottoms of your feet. When you do, you experience certain phenomena. Now, concentrate on the pit of your stomach. Again, certain phenomena are experienced; and these are different from those experienced in the first case. Now, return your attention to the bottoms of your feet. I presume that phenomena similar to the original ones are once again apperceived. How should the experiences that result from this experiment be described? I want to say that this experiment provides instances of our discovering phenomena that were being experienced all along although they were not being apperceived. This interpretation is certainly not implausible, and I would suggest (it is certainly true in my own case) that experientially it seems to us at the time as if we are discovering already occurring phenomena. 
Moreover, the close similarity of the phenomena experienced in the two bottoms-of-the-feet cases makes sense on this interpretation: The phenomenal state is a continuing one. The upshot is not that this experiment proves that unapperceived phenomenal experiences exist, but that it is an open question whether they do. That we were not apperceptively aware of feelings in our stomach before we shifted attention to our stomach does not mean that these feelings didn't exist prior to our shifting our attention there. We can understand that they might have. Similarly, that we were not apperceptively aware of the feelings in the soles of our feet when we were paying attention to our stomachs does not mean that those feelings in our soles didn't continue between the two occurrences of apperceiving them. When we do return our attention to our feet, it seems, as already noted, as if we discover the phenomena as still there. So here we have conflicting intuitions: we may intuitively believe that phenomena cannot exist unapperceived, but on occasions like that described we believe that phenomena "still" continue unapperceived and we can rediscover them. Think of how when someone asks us whether we still have a headache, we often move our heads around, seem to discover the feeling there, and say we still have it. Similarly, in cases of the cocktail party effect, where we do not apperceptively hear anything of a neighboring conversation until that conversation contains a key word or topic of interest to us, it seems experientially as if we are tapping into a phenomenal flow that was there all along. Or think about cases where, while concentrating on something else, we discover ourselves scratching our arm. It is difficult in this case to think of any other reason we are scratching than to rid ourselves of an itch — a phenomenal feeling. Our feeling of discovery in all these cases is quite natural — and quite justified — if phenomenal states can exist dissociated from apperception. Any other view is going to have to claim that acts of attending make phenomena come into existence and shifting attention makes them cease to exist. Such a story cannot be ruled out a priori, but it is hard to see why it would be a better story than the one defended here. My own intuitions are that the story that phenomena are attention-dependent is a much less psychologically and neurologically plausible story. In the end, only a well-developed and well-accepted theory will decide between the two accounts.
The point, however, is that no grounds exist at present for the attention-dependent account. Its defenders, once more, simply take it for granted.

18. Several of the abnormal perceptual states discussed earlier are relevant to this dissociability issue as well: blindsight, commissurotomy cases (split-brains), and visual extinction, among others. Blindsight patients, for instance, have been shown to be able to discriminate an X from an O in their "blind" fields with nearly 100 percent accuracy, all the while denying that they see anything and taking themselves to be merely guessing. When later told about the accuracy of their "guesses,"
they seem genuinely surprised (see Weiskrantz 1977, 1986). Similarly, commissurotomy patients will often use their left hands to pick up objects, pictures of which have been tachistoscopically flashed to their left fields of view. All the while, the patients deny having seen anything on that side at all. When it is pointed out that they picked up an object with their left hand, they continue denying that they saw anything and often confabulate a reason for having picked up that object (see Gazzaniga 1970, 1977; Gazzaniga and LeDoux 1978). Similarly, yet again, for extinction patients: Extinction patients apperceptively can see an object in either field of view (unlike blindsight or commissurotomy patients); but if objects are made to appear in each field of view simultaneously, extinction patients apperceptively see only one of the objects. Yet, if forced to "guess" whether the "unseen" object matches or fails to match the one seen, they "guess" right at rates approaching 100 percent (see Volpe et al. 1979). The interpretation of these cases that has been assumed so far is that subjects perceive despite the absence of phenomenal experience. However, an alternative interpretation of these cases, also compatible with the data, is that the patients do experience phenomena in their blind fields but are just not apperceptively aware that they do. Again, their introspections are not going to decide whether they do or do not experience unapperceived phenomena in their blind fields. Since both the no-phenomena-experienced view and the unapperceived-phenomena-experienced view are compatible with the evidence, the choice between them will have to be made for independent experimental or theoretical reasons. However certain one may feel that these patients experience no "feelings,"35 no theoretical or empirical grounds — at least no grounds that have ever been provided — support these feelings of certainty.
These are matters into which we should tread much more cautiously than is our wont. It should not surprise us if the mind turns out to be quite different from how we, in our present state of ignorance, expect it to be. At the same time, as we saw in §17, even our intuitions are not undivided on the issue of whether phenomenal experience can dissociate from apperception.
35. I again put "feels" in quotation marks because not all phenomenal states are generally thought of as being felt: there are visual, auditory, gustatory, and olfactory phenomena as well.
19. Up to this point, I have brought to the reader's notice that little in the way of support exists for the nondissociability thesis, the thesis that phenomena must be apperceived. But, still, the dissociability thesis has itself not yet been much supported either (only the theory-based argument of section II has been offered in support). Is there any other positive support for it? There are two kinds: explaining cases at an empirical level and large-scale theoretical reasons. The empirical explanation evidence is thin, but what there is lends support to the dissociability view. This evidence will be considered in this subsection, while the more theoretical reasons will be reserved for §20. The evidence to be presented is based on blindsight experiments involving hue perception and is weaker or stronger depending on one's view about the phenomenal status of hues. If one believes hues are essentially phenomenal (see Hardin 1988; Boghossian and Velleman 1991) or even that phenomenal experience is necessary (causally, say) for hue perception, then the evidence should be accepted as pretty strong. If one thinks hue judgments are altogether independent of phenomenal states, then the evidence will be thought to be quite weak (though the dissociability thesis will be at least compatible with the evidence — as it is in the case of other blindsight perceptions). We know from human blindsight cases that subjects have left visual field perceptions that are unapperceived (Weiskrantz 1986). We also have evidence that blindsighted monkeys, whose brains would be expected to be similar to our own, can recover many of their visual functions (Weiskrantz 1977). Among the functions reported to have been recovered is the ability to make color discriminations (Keating 1979).
On the assumption that hues are phenomenal, or on the assumption that phenomena are necessary for hue experience, these facts about blindsight give us a reason to believe that those blindsighted monkeys that are capable of making color discriminations experience color phenomena without being apperceptively aware that they do. Of course, it is possible that, in recovering, the monkeys recover apperception of phenomenal states; but it is an empirical question which reading of the facts is correct. It is certainly compatible with the evidence that the dissociation occurs in the way outlined. Somewhat more direct evidence of the dissociation of phenomenality from apperception is provided by recent work on human blindsight subjects (Stoerig and Cowey 1989, 1992). These subjects were
able to make paired hue discriminations that tracked normal discriminations, although when asked, they denied seeing anything. They took themselves to be guessing. Since their responses mirror those of normal color perceivers in saying "red" or "blue" for appropriate wavelengths and are in accord with an opponent-process theory of color perception, it seems plausible that they were discriminating hues while not being apperceptively aware that they were. Thus, these blindsight cases are at least consistent with, and even provide evidence for, unapperceived phenomena being experienced. One might object that it may only be wavelengths, and not hues, that these subjects distinguish — that "red" and "blue" may be no more meaningful in this context than "one" and "two" would be.36 But two considerations support, though rather weakly, the stronger reading — in either the monkey cases or the human cases, or both. In the human cases, subjects are asked to "guess" red or blue. If "red" and "blue" were like "one" and "two," then half the time the subjects should systematically mislabel the wavelengths. This reversal happens only occasionally. This failure to reverse would be quite strong evidence for the claim that hue was being discriminated, except that Stoerig and Cowey primed their subjects at the beginning of the experiment as to which hue they were "seeing" (Stoerig, personal communication). So the nonreversal evidence is less conclusive than it may at first appear. A second piece of evidence that hues are being discriminated is that in monkeys, cells in area V4 of the optic system respond in accordance with color constancy (i.e., even though the lights are dimmed on a blue object, say — and so the reflected wavelengths are different in the two cases — the cells that originally responded continue to respond [Stoerig and Brandt 1993]).
If V4 is used in monkeys' blindsight color discriminations, then it is reasonable to think that for them "color" = "hue," since constancy correlates more strongly with hue than with wavelength. Moreover, while it is not certain that monkey V4 has a homologue in the human brain, it is likely (see Van Essen 1985). However, it has not yet been shown that V4 plays a role in blindsight color discriminations, so I may be jumping the gun on the available evidence. Moreover, there is some evidence that not all blindsight color discrimination involves V4, since human patients who have been hemispherectomized, and so have no area V4 on the "blind" side, are possibly among those who succeed at blindsight color discrimination. So this evidence concerning V4 is also probably weaker than desired. But recent evidence (Stoerig, personal communication) more clearly supports an actuality reading. Hemispherectomized patients (patients who have had half their usual optical areas ablated!) have been studied who, on their own, not only "guess" blue, say, but also "guess" that it is the color of the sky — that is, the shade of blue. Guessing what shade of color is being experienced goes well beyond anything like a "one–two" discrimination. So if this evidence holds up, it will be much more reasonable to believe that these patients are discriminating on the basis of hue than to believe the alternative.37

36 Several people, in correspondence or conversation, have proposed this objection. Among them are C. L. Hardin, Petra Stoerig, and Irwin Goldstein.

20. When we turn to the theoretical reasons, they can only be partially dealt with in this chapter. The best reasons really are quite large-scale: they involve how a large-scale theory of consciousness maps onto the world. But this theory is the project of this entire book. Nevertheless, at least a couple of theoretical considerations can be reasonably discussed now. The first is that if phenomena are not dissociable from C2 states, they become theoretical danglers of large proportion. I have argued, in chapter 5 and in section I of this chapter, that even if all CS states are also C2 states, the converse is not true: there are C2 states that are nonphenomenal. To again borrow Descartes' (1642/1986, 50–51) example, when one thinks of a 1000-sided figure and then of a 999-sided figure, if phenomenal states are experienced at all, they may be exactly the same in the two instances; but one is apperceptively aware
37 Alas, in a July 1994 email message, Stoerig has pulled the rug out from under this evidence as well. When she went to Montreal to investigate these patients herself, she found either that they showed no blindsight "shade" responses or else that she could not eliminate the possibility that their responses were the result of light scattering from the "sighted" halves of the retinas. Still, four points remain that provide hope for empirical support of the dissociability thesis. (1) The failure of these cases has provided no empirical support for the nondissociability thesis. (2) The fact that one recognizes that her evidence could have turned out otherwise, and the fact that the claims initially made about these patients were made about them at all, illustrate the empirical nature of the claims involved. (3) Stoerig tells me, in the same message, that similar claims continue to be made about other patients whom she has not tested personally. So my claim in the text, "If this evidence holds up . . .," continues to be operative (though I confess to being less hopeful that this particular evidence will hold up). (4) If hemispherectomized patients are eliminated from consideration, then the material about monkey V4 discussed earlier becomes all the stronger.
of the difference in the two thoughts. So the apperceptive awareness cannot be that of phenomenal experience. Also, as Wittgenstein (1953) pointed out, in the course of ordinary conversation it is highly unlikely that we apperceptively experience phenomenal states for every instance of apperceptive understanding.38 Moreover, there would seem to be high-level neural states that are image-like representations in exactly the sense that CS states are; but on the nondissociability thesis these high-level neural states would not be phenomenal, because they are not apperceived (see section II). Somehow, combining these states with C2 would cause them to give way to phenomenal states that are exactly similar, in their representational aspect, to the states they replace, except for also being phenomenal (for this view, see Natsoulas 1989b, 1990b). Such a transformation is, at best, highly mysterious. One is faced with the question of how turning one's attention (say) toward a nonapperceptively grasped high-level neural image-like representational state can cause a new, but phenomenal and apperceived, image-like representational state to come into existence, while at the same time missing the target of one's turn of attention altogether. That is, one would, on the nondissociability view, never become apperceptively aware of the state to which one turned one's attention. Instead, attending would create an altogether different representational state, or at least — just as mysteriously — add a new (qualitative) property to an old state. And in the latter case, the qualitative property would not be essential to the representation; it would be only an epiphenomenon. On the other hand, if all high-level neural states are phenomenal, though not all are apperceived, then phenomenality, while still a problem, is less of a dangler and less of a mystery. It is essential to all high-level neural image-like representational states.
C2 does not create phenomenal states where none existed before, and C2 is able to grasp the states one all along intended to grasp with it. The dissociability view does not eliminate theoretical difficulties, but it does reduce their number and their air of mystery.

38 For fuller arguments on the dissociability of C2 from CS, see especially chapter 5 and earlier parts of this chapter, but also Nelkin 1987b, 1989a, 1989c, 1993a, 1993b, Forthcoming-c.

The second theoretical reason in favor of the dissociability thesis lies in the shortcomings of its alternative. It would seem that if the dissociability thesis is wrong, there would have to exist — in addition to high-level neural image-like representational states that are not phenomenal (i.e., CS states that are not CN), C1 states, and C2 states — a single state that incorporated the principal features of these other three states as well as phenomenality itself. Natsoulas (1989b, 1990b) calls such states "self-reflective states."39 However, two theoretical considerations weigh against positing self-reflective states. The first is that if the dissociability thesis is correct, then one needn't posit a fourth state at all. Co-occurrences of the three otherwise dissociable states would account for whatever data self-reflective states could account for. Moreover, self-reflective states are theoretical posits. We cannot know from direct experience whether they exist or not. Introspection cannot distinguish whether we are, at an instant of time, experiencing a single, indivisible, self-reflective state or experiencing three dissociable, co-occurrent states. But given their theoretical status, it is legitimate to ask what theoretical work they are doing, especially if the occurrences they explain can be explained as well without positing their existence. We need posit only three independent states, not four, on the dissociability view; and these states (and their absences) also explain the dissociations and other behavior not explained by self-reflective states (and their absence). The second shortcoming is internal to the proposed nature of self-reflective states themselves. How could a single state be a first-order image-like representation state, a first-order proposition-like representation state, a phenomenal state, and a second-order proposition-like representation state all at once? Such a state would certainly be a mystery. So no wonder defenders of the nondissociability thesis find consciousness such a mystery!
Again, it seems much simpler to think that all CS states are phenomenal than to think that, besides these states and C1 and C2, there is a fourth sort of state, comprising self-reflective states, that has all the principal properties of all those states plus the additional one of phenomenality.
21. Let's take stock of where we are. The best reason for maintaining the dissociability thesis will be if the large-scale theory that incorporates it is the best theory of consciousness that we have.

39 Others who seem committed to self-reflective states are Searle (1989, 1990, 1992), Nagel (1979a, 1986), and McGinn (1988, 1989).

Still, in this chapter we have seen that the nondissociability thesis has traditionally been assumed, not defended; that whatever empirical evidence exists favors the dissociability thesis; and that less large-scale theoretical considerations of at least two kinds also favor the dissociability thesis. If there is no other lesson to be learned from this discussion, we do learn that theory/data — though not introspective data — must be relied on to settle this issue. And similar remarks can be made about the other dissociations discussed in this chapter: the fuller dissociability thesis. The existence of these other dissociations (C1 from C2, C2 from each of the others at a time) will also be decidable, in the end, on large-scale theoretical considerations: whether the theory of this book is better than theories of similar scope that deny their existence. However, as with the dissociation of phenomenal states from apperception, elements of this chapter already move beyond the possible to the plausible in regard to these further dissociations. The creative-thinking cases make it likely, not merely evidentially possible, that C1 dissociates from C2. It is very difficult to imagine an explanation of these cases that would not appeal to unapperceived reasoning (i.e., unapperceived thought). Equally, the discussion involving thoughts such as that 1000-sided figures are different from 999-sided ones goes beyond showing merely the evidential possibility of dissociating both C1 and C2 from phenomenality (or phenomenologicality). So, all in all, a fair amount of work has been done in this chapter toward moving from showing that the dissociability thesis is compatible with the evidence to showing that it is plausible — especially given the absence of good arguments in favor of a nondissociability thesis. Still, there is a long way to go; and the remainder of what I say in this chapter is no more than gesturing.
One virtue of accepting the dissociability of the various states called "conscious" is that we can have a uniform and comprehensive theoretical treatment of all the cases. We do not have to apply one theory ad hoc to one set of cases and another theory ad hoc to another set. Undoubtedly, my reading is not the only one with this apparent virtue. Dennett's (1991b) seems to have it (or to come close to having it). There are many areas of agreement between his theory and mine. One reason for that agreement is that Dennett comes oh so close to accepting the sort of theory of consciousness (one, as the reader will see, that stresses the importance of apperception) for which I have barely sketched a beginning here, and will directly argue for in Part Three. Even while trying to empty apperception of any substance, Dennett makes repeated use of it in his explanations (for one instance where apperception seems to be playing a bigger role in his explanations than he would acknowledge, see Dennett 1991b, 168). In the end, large-scale theoretical reasons, having to do especially with concept acquisition and possession, persuade me that my view of consciousness is closer to the truth than Dennett's or others' (see chapters 8 and 9).

As we saw in discussing cases, many of the blindsight and other situations might be read as involving no intentionality and no phenomenality, and so no consciousness of any kind at all. Is there any reason to read the cases as possessing intentionality? Again, the same answer applies: we get a kind of theoretical coherence and breadth if we do read them in this way. It is true that present-day computers give us little reason to think of them as conscious in any way, even as possessing intentional states (though some people would argue otherwise). And it is true that many blindsight responses might be simulated by a computer, especially by a connectionist one. But on the other end of the spectrum we have human beings, who, at times, display all three conscious states and perform the same discrimination tasks quite consciously. Unlike our attitudes towards computers, most of us accept as a given that human beings are conscious (what is controversial is how that consciousness is to be analyzed). So the fact that nonconscious computers can simulate the activities of conscious discriminators does not tell us much about whether blindsight discriminations are or are not proposition-like intentional-state discriminations or whether blindsight involves phenomenal states.
We know, after all, that normal human beings do have intentional and phenomenal states (see Chandler 1988 for a similar methodological point). And while one might emphasize as a methodological rule, "Do not over-anthropomorphize," one might, with at least equal justification, emphasize as a rule, "Do not assume that things that appear to be conscious are really unconscious"; for if one does, one is going to miss seeing much of their behavior entirely. And that omission, if universal, will have disastrous consequences both for science and for morals. One assumes that Descartes' view about nonhuman animals caused its adherents to fail to develop many studies of these animals, because its adherents took these animals to be automata. And one also assumes that those influenced by Descartes on this issue treated their animals differently from the way we tend to treat ours. Pointing out these facts does not establish that blindsight discriminations are based on intentional states or involve phenomenal properties. Of course not. But it may remove biases against thinking in these ways. Again, the final determination of our views cannot be based on introspection, or on any other direct method, but will be settled by large-scale theoretical reasons, reasons toward which I have here only gestured. The furthering of the theory in Part Three is meant to provide at least some of those large-scale reasons in favor of the three-consciousnesses theory.

Each of C1, CS, and C2 is a state we possess and that objects like rocks or roses — or even present-day computers — lack. If consciousness distinguishes us from such things, then, if the proposed theory is correct, each of these states has a reason for being called "consciousness." On the other hand, when one of these states is absent at a given moment, the subject can be said to be unconscious — in that respect. For just as there are three ways of being conscious, there are three ways of being unconscious. Moreover, that these are three independent, dissociable states (though they sometimes causally interact) means that we should expect to find beings that do not possess all three states, but possess only a subset of them. Quite likely, some nonhuman animals are just such creatures. So my philosophical theory begins to edge into a scientific one insofar as it has testable consequences. To the question, "But what about the unity of consciousness?" I reply that there is no unity, only the appearance of it.
7
Consciousness: an appendix

This chapter serves as a transition between the previous two chapters and the ones to follow in Part Three. Before moving on, it is worth drawing a few consequences of the theory of consciousness1 presented in the last chapter, while also pointing the way to the chapters to come.

I
I begin with an historical note. Seventeenth-century Rationalists quite commonly claimed that persons are always conscious (see, for instance, Descartes 1642/1986, 74). To their Empiricist contemporaries and successors, and to most of us, this claim seems insupportable. It is difficult to believe that when under general anaesthetic, or when in a deep, dreamless sleep, or when struck a concussive blow to the head, one is conscious. But I would like to suggest that the Rationalists were on to something that many of their successors missed. Most of the Rationalists just didn't always express it well or always understand it altogether themselves. A good case can be made that they, unlike their successors, understood — even if only dimly — that consciousness is not a single, noncomposite state. Most especially, the Rationalists verged on understanding that phenomenal states and first-order, proposition-like intentional states dissociate from apperception. In fact, it is textually evident that Leibniz (1714/1989, 214) had an explicit grasp of these dissociations of first-order states from apperception. While it may be bizarre to think that we are always conscious in the sense of apperception (C2), it is considerably less bizarre — not bizarre at all — to think that we are conscious in one of the other senses, C1 and CS, at any time.

1 I should probably say "theory of consciousnesses" instead; and I ask the reader, throughout the remainder of the book, to read "theory of consciousness," when referring to the theory presented in the previous chapter, as a theory that says that three distinct, dissociable states are each called "consciousness."

As Leibniz said, we can be aware (C1 or CS) without realizing we are conscious.2 It may be that even under general anaesthetic, in deep, dreamless sleep, and in concussive "unconsciousness," we are conscious in one or the other nonapperceptive sense. Whether we are is an empirical issue, though not one that can be settled through introspection — as Empiricists tried to settle the matter. Nor can the claim that we are always conscious in one sense or another be ruled out a priori, since it is an empirical claim. I would suppose that there are times when we are not conscious in any sense — for instance, under general anaesthetic — but that is only a supposition. In any case, if different states are labeled "conscious" and these states dissociate, the claim that we are always conscious is not the silly one it is often portrayed to be. Only if we take all consciousness to be apperceptive does the claim appear silly.

II
A second point to be noted is that the three different states of consciousness — C1, C2, CS — have been specified as functional states. Even CS, in so far as it is an image-like representational state, is a functional state (though if these states are always in fact phenomenal, then there are constraints on the sorts of things that can realize the functions). C1, C2, and CS can all be described as high-level, neurological, functional states, each being a distinct kind of representational state. Their possessing functional natures itself has important theoretical consequences. First, if these functional specifications of consciousness are correct, then consciousness should be amenable to scientific treatment. Just as a turning point in our ability to treat "life" as a biological concept was to understand it as a functional property, so, I would predict, now that we see that being conscious — in each sense — is a functional property, our treatment of consciousness will similarly open itself to being scientifically understood. It is true that the qualitativity of CS states appears to consist of nonrelational, rather than structural or functional, properties. But given the little we know about those qualitative properties, there are no good a priori reasons to think that an identity theory will not hold for them. The arguments maintaining that an identity theory cannot be true (see Chalmers Forthcoming) sound, once more, an awful lot like the once-common a priori arguments that "showed" that "merely" material things cannot move themselves or be alive.

2 Leibniz did not distinguish between C1 and CS.

And the arguments against identity theories seem to be based primarily on intuition. But intuitions differ, and are only as good as the theories in which they are embedded. Moreover, in so far as we understand that three different types of states (and maybe more) are labeled "consciousness," we have gone a long way toward demystifying consciousness even further: for no indivisible, noncomposite state could have all the functional powers and nonrelational properties that we ascribe on various occasions to consciousness. It is unreasonable to think that a noncomposite, indivisible state could be all at once a phenomenal, image-like representation, a first-order proposition-like representation, and a second-order proposition-like representation. And from a proper understanding of consciousness there are further gains as well. Given the functional natures of the various types of consciousness, we realize that any of these different functions can itself be realized differently. And that insight allows us to understand that animals with quite different neural architectures and physiologies from our own can yet be conscious in one or all of the senses outlined. It is well to remember, though, that in order for a thing to be conscious, it must experience phenomenal representations or aspectualized representations (first- and second-order proposition-like ones). Which animals have one or more of these states is an empirically testable question. At the moment, we lack the tests; but as our understanding of how the functions can be realized even in us increases, tests for determining whether things realize these functions will grow up alongside, and as a result of, that deepening understanding. Most current debates about whether nonhuman animals are conscious or not reflect a failure to understand consciousness.
The theory of consciousness presented here, if successful, will facilitate the sharpening of the debate and facilitate agreement on the means needed for resolving it, in any case in which the debate arises. And, of course, that each type of consciousness has a functional nature allows for the possibility that nonbiological creatures (where "biological" refers to carbon-based, living things) can be conscious. I doubt very much — though it is an empirical matter — that there are any such things known to us now. That is, there is little reason to believe that present-day computers, robots, and so on are conscious in any sense. But it is an empirical matter as to whether they are, and we should be open to the possibility that we might be wrong. Philosophers like Searle (1980) are surely wrong to think that philosophical arguments prove that such devices lack consciousness (and so don't have intentional states). One can note quickly that no intentionality exists without consciousness: Searle's claim is true. But in so far as it is true, the claim does not mean what Searle takes it to mean. It is a much more trivial truth than he intends. In the senses in which he denies that there can be full unconscious intentionality (nonphenomenal, nonapperceptive intentionality), there is, in fact, unconscious intentionality — as the last two chapters have shown. And similarly, an identification of consciousness with awareness is also, in a sense, correct. Every conscious state is a state of awareness, and every state of awareness is a conscious state. But the identification can now be seen to be oversimple. Yet, at the same time, it underwrites our calling each of C1, C2, and CS a state of consciousness: aware things — whatever the kind of awareness — just are different from things that have no awareness whatsoever. Perhaps the most important point to be noted in regard to the functional nature of consciousness is similar in kind to, or perhaps even a result of, the claim that functions may be realized differently: none of these types of functions (C1, C2, CS) needs to be "centered." That is, none of these functions may require an area of the brain devoted to the function. Each function may be distributed, and distributed differently on different occasions. There may be only the appearance of something like a central processing unit without there actually being one. The view of consciousness presented here is, by and large, neutral on this issue.3 It is compatible with there being three modules, one for each of C1, C2, and CS, or with there being none. Further work in neurophysiology, neuroanatomy, and neuropsychology will have to guide us to a resolution of this issue.
Philosophers can help clarify this issue as well, and I will have more to say about it in the next chapter.

III
Finally, I want to note what has happened so far in this book in relation to one very broad, sweeping set of issues. We began the book on the assumption that phenomena are considered to be among the most important experiences in our lives — important for perception, thought, and consciousness. Chapters 1–3 set out to deflate the idea of the importance of phenomena. And while phenomena have received somewhat of a boost over the last three chapters, the general trend has been a deflationary one, though perhaps not so radical a one as the first three chapters threatened.

3 Though if the speculation (of chapter 3) that an apperceptive module is required for pain is correct, then there cannot be a single center for apperception.

Going in pretty much the opposite direction has been the case for apperception. Apperception hardly got mentioned in the first two chapters, but since then its importance has been called more and more to the reader's attention. In chapter 5, it was even described as being essential to ourselves as Lockean persons — as moral agents, and so forth. The star of apperception has brightened as the star of phenomenal states has dimmed. But it has undoubtedly not gone unnoticed just how little has been said about apperception, especially about why it is so important. After all, why do we have apperceptive states at all? Why could we not get by as well if we had only C1 and CS? Actually, some answers to these questions have already been presented (see chapter 3, section III, for instance); but those answers themselves often rested on unargued-for claims, along with a promise to argue for them. Now is the time to turn to a serious examination of apperception and its roles in our lives. The remainder of the book is that undertaking. Success in carrying out that undertaking will increase the credibility of these first two parts, while also defending Cartesian Rationalism against the twentieth-century objections. Some have argued that apperception (C2) is important to an organism because in monitoring other mental states (C1 and CS) of the organism it allows the organism to be more flexible in its responses. I have no wish to quarrel with this claim, but much deeper abilities are also made possible only to organisms that can apperceive.
Chapter 8 begins to examine and develop this last claim; chapters 8, 9, and 10 argue that we would have almost no concepts at all, and so almost no intentional states of any kind, if it were not for apperception. Chapter 11 then traces out implications of those arguments. While we have so far clarified, to a degree, the much greater importance of apperception than it seemed to have when we began our investigations in this book, if the claims of Part Three are correct, then we have so far still greatly understated its importance.
PART THREE
Apperception
8
Apperception

We have now reached a point where I hope I have been successful in showing that Cartesian Rationalists are correct about perception — and so also about higher mental states: they are all proposition-like, cognitive, and constructive. Passive phenomenal states are inadequate for the job British Empiricists envisioned for them. But as said in the Introduction, this century — now almost closing — has seen a new attack on Cartesian Rationalism: a denial of Internalism altogether. To fully defend Cartesian Rationalism, I need to show that anti-Internalist attacks on it also fail. I begin that defense in this chapter by showing that for one set of concepts — that of the propositional attitudes — the content of those concepts is dependent only on the internal states of the organism, a conclusion at least compatible with Internalism, if not constitutive of it. Only one form of anti-Internalism is considered, a view I label, for historical reasons, "Instrumentalism." At the end of the previous chapter, I said that the main topic in this third part of the book would be apperception. However, these two tasks — defending Internalism and further investigating apperception — are closely related. The key to preferring Internalism to Instrumentalism has largely to do with the role apperception plays in our acquiring the concepts in question. Scientific Cartesianism and apperception are closely intertwined. So a major focus of this chapter is on apperceptive consciousness. In the next chapter, I will consider Externalist critiques of Cartesian Rationalism, and consider concept acquisition and content more broadly. The narrower focus of this chapter provides a nice transition to those broader issues. And apperception is the key to resolving both narrower and broader issues. Perhaps the best way of establishing the importance of apperceptive consciousness, and gaining a greater understanding of it, is to begin by contrasting it with a view like Daniel Dennett's.1
1. This chapter is based on Nelkin 1994a.
Dennett is the most schizophrenic of philosophers of mind when it comes to the ordinary propositional attitudes: thoughts, beliefs, desires, hopes, fears, wishes, and so forth. By "propositional attitude" I mean only ordinary, everyday, "folk psychological" attitudes unless the context makes clear otherwise. Moreover, they will be considered to be occurrent states, although it has been questioned whether some of these states can be occurrent (rather than dispositional) at all (Lycan 1986). But there are good reasons to accept all of them as having occurrent status, good reasons to think that our parsing of dispositional propositional-attitude states is parasitic on — or at least derived from — our parsing of occurrent propositional-attitude states. This conclusion is at least implicit in the conclusions of this chapter.

Dennett is schizophrenic in wanting both Instrumentalism and Realism about the attitudes. Of course, he is neither schizophrenic nor irrational: he believes that features of both Instrumentalism and Realism can be held without contradiction. And so do I. Our disagreement is about which features should be selected. Moreover, Dennett's "Realism" is not that of common sense.

Perhaps it will be thought that I am being unfair to Dennett by considering occurrent states, but I don't think so. Either he intends his arguments to apply to occurrent states as well as to dispositional states or he doesn't. In the first case, I hope to show that he is mistaken. In the second case, I hope to show that even if his arguments were to apply to dispositional states, they do not apply to occurrent ones. And since I think that occurrent propositional-attitude states are primary, the dispositional ones being sorted in respect of the way in which occurrent ones are sorted, that conclusion would be satisfactory.
One feature of Dennett's views about the propositional attitudes, the one central to our discussion, is his insistence that psychology must be approached through a third-person point of view only.2 He is wrong
2. "One must start from somewhere, however, and my tactical choice is to begin with the objective, materialistic, third-person world of the physical sciences, and see what can be made from that perspective of the traditional (and deeply intuitive) notions of mind" (Dennett 1988a, 495). Dennett may not be so opposed to the first-person point of view as remarks like this one make him out to be (personal communication); but if he is opposed, he is not alone in wishing to eliminate it from psychology. Compare Fodor's sarcasm: "I take the lack of a rival hypothesis to be a kind of empirical evidence; and there are, thus far, precisely no suggestions about how a child might acquire the apparatus of intentional explanation 'from experience.' (Unless by 'introspection'?!)" (Fodor 1987, 133). I will comment on some of Fodor's views later in the chapter.
to do so. If psychology is to be a science that explains behavior, then the first-person point of view cannot be omitted. One reason philosophers and psychologists have wanted to eschew first-person points of view is the failure of nineteenth-century introspective psychology, both in its subjective form and in the attempts to "objectify" it. However, as section II of this chapter will make clear, nothing in my defense of the first-person point of view requires such a psychology. Indeed, taking account of introspective and other first-person data is compatible with having a third-person science (see Bilgrami 1989, who, for different reasons, makes this same claim). Many also wish to restrict psychology to a third-person point of view because apperceptive states are assumed to require phenomenality and incorrigibility. I will call into question — and have already called into question (see chapters 5 and 6) — whether apperception in fact requires these properties.

I want to emphasize that it is the importance of apperception to psychology, and not the existence of the propositional attitudes per se, that I wish to defend. My real interest is more in the fact that we have concepts of the propositional attitudes than in the attitudes themselves.

The first section of this chapter is devoted to a discussion of Dennett's view and its shortcomings. The second section contains my own diagnosis of the problems raised and my proposed cure for them. The third section further explains my view and clarifies its differences from Dennett's.
1. Steering a course between the Scylla of Realism and the Charybdis of Instrumentalism, or synthesizing such deeply opposed positions, is not so simply done. Dennett's course takes him nearer to Instrumentalism; mine, to Realism. One motivation behind both Dennett's view and a more robust Instrumentalism is easily understandable: frustration at trying to make sense of commonsense Realism while being a physicalist. But just as Instrumentalism was an unconvincing view about atomic particles, it is also an unconvincing view about the propositional attitudes.

Instrumentalism, as I intend it, is a three-part view. First, it is anti-Realist: according to Instrumentalists, no propositional-attitude states actually exist. Second, Instrumentalism maintains that propositional
attitudes are ascribed to persons only as tools for prediction, but no laws govern these ascriptions. Ascribing attitudes is merely a pragmatic device. Third, Instrumentalism claims that insofar as we ascribe propositional-attitude states to persons at all, the whole person is the smallest unit to which it makes sense to ascribe them. Instrumentalists, then, are what I will call "Wholists." Opposed to Wholists are Partists, who believe that mental states are ascribable to persons in virtue of being ascribable to a part of the person (the brain, or the mind).

Dennett's strategy is to divide Instrumentalism at its joints, accepting two of its tenets, while rejecting — or at least putting a new spin on — the other. He does accept Wholism: No deep fact about a person's particular brain state or other internal state definitively answers whether the person has a thought or not. No internal state is (either type or token) identical to having a thought. Thoughts are ascribable only to whole organisms, or, even more holistically, only to whole organisms in particular background conditions. No brain in a vat, no brain in an intact head, no mind, nor any other part of an organism has thoughts.

A second component of Instrumentalism Dennett accepts is that propositional-attitude talk provides only a convenient, approximative way of dealing with the behaviors of certain sorts of beings. Any nomic, scientific attempt to predict and explain behaviors is at the moment beyond us; and even if such a science existed, it would probably be too cumbersome to use for the real-time predictions and explanations we need to get on with our lives. There are certain parallels here between, say, "belief" and "table." Neither names a scientifically natural kind. But like "table," "belief" is convenient to use in everyday discourse.
In everyday life it would be virtually impossible — even for physicists — to replace talk of tables by particle physics talk; similarly, in everyday life, it will almost certainly turn out to be impossible for us to replace propositional-attitude talk with neuron talk. Another important parallel between tables and propositional attitudes is that just as in certain cases no fact in the world can decide whether something is "really" a table or not, no fact in the world may be able to decide the exact description of one's propositional attitude or whether one "really" has a particular propositional attitude or
doesn't. Does a frog believe that a fly is moving in its view or only that something small and edible is? Or believe anything at all? No fact in the world, at least no fact concerning a state of a frog's brain or mind (if the latter be different from the former), can decide either of those questions.3

This last claim reveals a difference for Instrumentalism between tables and beliefs, and re-emphasizes its Wholism: While anything called a table is token identical at any particular time to a state of particles (at any level: molecular, atomic, or subatomic), propositional attitudes are not like that. Even if we agree that someone is thinking that Clinton is President (or that a frog is thinking that a fly is in its view), no brain (or mind) state is that person's (frog's) thinking so. We cannot find in one compartment of the person's brain (or mind) the belief that Clinton is President and in another compartment the wish that Clinton be a better president than his predecessor. Brains (or minds) are the wrong sorts of places to look for beliefs and wishes. In fact, there are no places, short of the whole organism, to look for beliefs and wishes. Propositional attitudes are not states or events in the organism. They should be considered, according to Instrumentalists, as states or events of the organism, like "being amiable."

2. Dennett, unlike many Instrumentalists, is quite sensitive to the question of why propositional-attitude explanations and predictions have been so successful. To say that propositional-attitude concepts are only pragmatic instruments is one thing. To understand why they have been used as instruments, and used so successfully, is another thing — especially if the propositional attitudes are, in Dennett's words, abstracta rather than illata.
And here Dennett completes his strategy by accepting a sort of Realism: Although no states in organisms (no illata) are the propositional attitudes, there are patterns in human affairs, real patterns — patterns that would be missed at a physical level of description, explanation, and prediction:
3. The dispute between Instrumentalism and Realism discussed here is different from the Externalism/Internalism dispute. Externalists can be Partists. In this latter case, their disagreements with Internalists are over the criteria of identity for the relevant internal state, especially over how the content of the state gets specified. Instrumentalists are claiming that no internal state is a propositional-attitude state. The Externalist/Internalist dispute is the focus of the next chapter.
There are patterns in human affairs that impose themselves, not quite inexorably but with great vigor, absorbing physical perturbations and variations that might well be considered random . . . (Dennett 1987, 27)4

These patterns ground propositional-attitude ascriptions, explanations, and predictions. Ascribing a propositional attitude is a kind of shorthand acknowledgement of recognition of a given pattern. Insofar as the patterns are real, one may also think of the propositional attitudes as real — just as a person's amiableness is real, or a center of gravity is real. Moreover, because these patterns are real, psychology can — and should — stand alongside neurobiology when it comes to understanding human beings. These patterns, of necessity overlooked by neurobiology, are exactly the subject matter for psychology (Dennett [1978b] assigns another role to psychology as well: to show how high-level intentional states can ultimately be cashed out, first at the design level, then at the physical level).

While Dennett claims to be a Realist, the Realism he defends is not that of common sense: Dennett denies that propositional attitudes are ascribed to persons as internal states.5 Rather, he claims that propositional attitudes are ascribed as "names" of various patterns of a person's activity. Moreover, he denies that the propositional attitudes are natural kinds, i.e., the kinds of states a scientific psychology would quantify over. Psychology, according to Dennett, insofar as it is about the propositional attitudes, is not quite a science. We can draw on the previous analogies once more. Just as no facts in the world rationally
4. Also compare Dennett (1987, 235): " . . . [W]e already know that there are reliable, robust patterns in which all behaviorally normal people participate — the patterns we traditionally describe in terms of belief and desire and the other terms of folk psychology." And compare Dennett (1987, 25): "Our imagined Martians might be able to predict the future of the human race by Laplacean methods, but if they did not also see us as intentional systems, they would be missing something perfectly objective: the patterns in human behavior that are describable from the intentional stance, and only from that stance, and that support generalizations and predictions."
5. It has sometimes been claimed that common sense does not consider beliefs and so on to be internal states. Instead, beliefs are ascribed to a whole organism, i.e., commonsense psychology is claimed to be Wholist about the propositional attitudes. Frankly, such a claim is extremely doubtful. We also assign states like pains to a whole organism ("It's in bad pain"), but we also localize pain. Surely, it is because we commonsensically believe that thoughts are in the head (or mind) that expressions such as, "I can't really know what she thinks about me; I can't get into her mind," are so common. Although little experimental work has been done in this area, what there is supports my intuition about common sense (see, for example, Stanovich 1993, 81 — who, incidentally, is no friend of the commonsense view).
compel us to say that something is really a table or someone is really amiable (and I take this to be the force of Dennett's saying, "not quite inexorably"), no facts in the world rationally compel us to say that this pattern really is the belief that Clinton is President. Tables, amiability, and the patterns underlying propositional-attitude concepts are not natural kinds. Still, there is Realism of a sort. It would seem mad to deny that tables exist or people are amiable just because the concepts "table" and "amiability" have no necessary and sufficient conditions of application and are not natural-kind terms in any genuine science. Similarly, it would be as wrong to deny that beliefs that Clinton is President exist just because no necessary and sufficient conditions determine whether a particular pattern token belongs to this pattern type or what the pattern type is. For certain kinds of not-quite-scientific explanations, the propositional attitudes (and psychology) are useful; and beliefs are as real as tables — though in a different way and at a different level of abstraction.

3. In The Intentional Stance, Dennett never tells us much about what these patterns are patterns of. He says (though only once [1987, 25]) that by "patterns" he means "patterns in human behavior." More recently, Dennett (1991a) has made more explicit that he is talking about patterns of behavior. But how are we to understand this notion of "behavior"? Dennett recognizes that "behavior," in the relevant sense, cannot mean mere bodily movements, at least as physically described. That is why he says, "absorbing physical perturbations and variations that might well be considered random." Dennett is no diehard Behaviorist. Or, rather, he is; but he realizes that in its simple, physically described, bodily-movement dress, Behaviorism is indefensible.
A connection between occurrently believing that Clinton is President and any particular set of physically described bodily movements is pretty much nonexistent (hence: "that might well be considered random"). There is no typical "believing that Clinton is President" behavior in a bodily movement sense. So what is this patterned "behavior"? How are we to understand it? If physics is correct, there is a sense in which the world consists of nothing but bodily movements, where "body" includes all physical objects, not just human or other animal bodies. Yet, the relevant concept of behavior cross-categorizes these movements — and only for
some bodies. So where did we acquire this concept? The only sensible interpretation of "behavior" in this context seems to depend on the distinction action theorists have drawn between bodily movements and actions, or between happenings and doings. But that concept of action presupposes goals, purposes, desires, beliefs, and so on. In short, action presupposes the propositional attitudes.6 For us to discern the behavioral patterns we call "actions" — and surely those are the ones Dennett is referring to — we must already come armed with propositional-attitude concepts. Only because we are already operating with them do we sort random bodily movements into meaningful patterns.7

Of course, in a sense, until we have a concept of anything, X, we cannot sort instances under X. But here the claim is stronger. For instance, presumably, experiencing cats is relevant to our acquiring the concept "cat". Only because we perceive token cats, patterned as cats, are we able to acquire the concept "cat". However, the claim in question states that we cannot even discern token belief-patterns (as patterns) until after we possess propositional-attitude concepts. If so, the existence of the patterns can hardly give rise to our propositional-attitude concepts. To claim that the concepts originate from observing the patterns would have it upside down. This sort of Neo-Behaviorist account of the attitudes would be no more successful than bodily-movement Behaviorism. Only because we already possess propositional-attitude concepts are these patterns revealed to us at all.

Dretske (1988) presents a more generalized notion of behavior that includes actions as a subset only. Dennett may well find Dretske's definition congenial. However, even if one accepts Dretske's definition, the only problems presented for this chapter would be a complication in exposition. As Dretske himself sees, the story needed for actions will be different from that needed for "lesser" behaviors.
To keep the exposition simple, the identification of behavior with action will be retained.
6. Or states that operate like propositional attitudes in relevant respects.
7. One might claim, with some justification, both that action is what is at stake (i.e., the action/happens-to distinction is what is at stake) but also that an organism can distinguish action without any acceptance, or even knowledge, of the propositional attitudes. If a propositional attitude involves something like a linguistic representation, then propositional attitudes may not be required for actions. But some sort of aspectualized representation that does much of the work propositional attitudes are thought to do is necessary. If the organism makes the action/happens-to distinction, it must represent this distinction, have a cognitively favorable attitude (i.e., something like a belief) toward it, and so forth. Further, if the organism takes itself to act on the world, it must conceive the world and have a cognitively favorable attitude toward it. It would be difficult to understand ascribing action to an organism unless we were also prepared to ascribe such representations and attitudes to it.
4. If finding these patterns in movements presupposes already possessing propositional-attitude concepts, then the concepts must be innate or acquired very early on, for toddlers distinguish actions from bodily movements.8 How do we come to have these concepts? Granted, we learn the words for thoughts, beliefs, desires, and the rest from our parents; but how did propositional attitudes enter the conceptual world of human beings in the first place? If the patterns do not give rise to the concepts — but presuppose the concepts — the only possible answers to our question appear to be (1) the concepts just popped into someone's head and that person passed them on or (2) the concepts are innate.9 Since (1) is no more satisfying than (2), and since perhaps (2) will be more easily made compatible with an evolutionary account, then given just these two possibilities, the second alternative seems preferable. The idea of concepts that play such an important role in our lives and are so universally held just popping into someone's head is too much to believe.

Dennett himself seems to agree with most of what has been said so far. He sometimes implies (see second quotation, footnote 4), and sometimes says explicitly (1988a, 496; 1991a), that only by taking the intentional stance do we discover these patterns. But others may find it doubtful that propositional-attitude concepts are innate.10 Instead, one might argue that we acquire the concepts of propositional attitudes, which are concepts of high-level functional states, by first acquiring concepts of lower-level functional states, like eating. Consideration of this important claim will be postponed until section II when more of the groundwork necessary for considering it will have
8. For instance, see Leslie 1987. While "believe" seems to be fully grasped only around the age of four (Wimmer and Perner 1983 — though see Wellman 1992 for arguments that by age three [and maybe younger] children grasp the essentials of this concept), perception terms ("see," for instance), "think," and "desire" are evidenced almost as soon as speech is mastered (see Gopnik 1993). Thoughts are clearly occurrent propositional attitudes. The desires in question are expressed desires (i.e., also occurrent propositional attitudes). And perception involves a judgment, i.e., an occurrent propositional attitude (chapters 1 and 2). This early mastery of propositional-attitude concepts supports the claims made in the text.
9. Fodor (1981b) takes this latter line, and I think he flirted with a notion of "innate" much stronger than he later opts for (1981c). Of course, this claim about Fodor is mere speculation on my part, and nothing rides on it. Shortly, we will discover at least two other possible answers to the question of how we come to possess the propositional-attitude concepts.
10. Dennett himself, perhaps surprisingly, agrees that the concepts may well be innate (personal communication).
been laid. Besides, Dennett would surely insist that the propositional-attitude concepts being innate would not entail that beliefs, desires, and so forth are internal states. The proper locus of application of the concepts, he would say, is patterns of bodily movements. Dennett's claim, though possibly true, requires justification. The simplest explanation of our possessing the commonsense concepts of propositional attitudes is that propositional-attitude states exist as common sense takes them to exist: as internal states. If this simple explanation is wrong, then it needs to be shown why it is wrong.11

Moreover, what patterns are we supposed to discover when taking the intentional stance? If we discover these patterns, they must already exist, awaiting discovery. But then we are back to the original questions: What are these patterns patterns of? Do these patterns already exist in bodily movements, and are propositional-attitude concepts something like special lenses that allow us to "see" these patterns? Dennett (1991a) does seem to adopt a special-lenses view. If he were correct, the relation between bodily movements and intentional stance categorizations would not be the random one he (1987) so plausibly and persuasively portrays — and that seems so clearly correct. If there are "visible" patterns of bodily movements that constitute believing that Clinton is President, I, for one, am totally ignorant of what they are. It is true that I ascribe this belief (and others) to people on the basis of what they say and do; but if it is on account of seeing a pattern, I am unaware of what that pattern is, or that I see it, or even what sort of pattern it could possibly be.12

However, even if Dennett were correct, two questions remain unanswered: how did those patterns get there, and how did we ever come to take the intentional stance in the first place? Dennett's answer to the first question is, I suspect, a neurological-evolutionary one.13 How he would answer the second question is less clear. If he would say that taking the intentional stance is hard-wired into us, then it would follow
Dennett s answer to the first question is, I suspect, a neurological-evolutionary one.13 How he would answer the second question is less clear. If he would say that taking the intentional stance is hard-wired into us, then it would follow 11 12
11. This is the simplest explanation for the trivial reason that the simplest explanation of why most people believe p to be true is that p is true.
12. I stress "see" because the type of discerning relevant to "seeing" belief-patterns in bodily movements is nothing like seeing patterns on a chess board. The analogies Dennett uses in trying to establish his claims about patterns of movements (1991a) are weak for just this reason.
13. Actually various neurological stories combined with an evolutionary one. That is, different neurological states (across species, within species, even within a given individual at different times), given evolution, cause similar patterns of movement.
that the propositional-attitude concepts are also innate. In section II, I answer the second question in a way that does not imply that the concepts are innate and, at the same time, I supply an answer to the first question (why the patterns occur at all).

Whatever Dennett's answers are, they will likely differ from those of a commonsense Realist. Commonsense Realists do not think we discover pre-existing patterns in bodily movements, patterns that are the propositional attitudes. Rather, people create patterns in their bodily movements because they have thoughts, beliefs, desires, hopes, fears, and so on as internal states. We impose such patterns through purposive action, rather than merely discover them. Though, of course, we can then go on to discover these created patterns; but this act of discovery is different from the one Dennett envisions and is not based on "visible" patterns (this claim will be further discussed in section II). More full-blown Realists, like me, take the patterns of external movements to be even more abstract than abstract-but-still-perceivable ones like positions on a chess board.

5. Instrumentalists purer than Dennett face additional difficulties. If propositional attitudes are radical fictions, it is a deep mystery why propositional-attitude talk succeeds in being as useful as it is, why explanations involving it seem so reasonable, why predictions made in its terms are successful. Unlike Dennett (1987, 234), I don't think a new scientific theory of human beings would have to explain why propositional-attitude theory has been so successful. If a new theory were to get accepted, scientists would just stop talking about propositional attitudes. However, nothing suggests that such a theory is in the offing. And philosophers should want an explanation of why propositional-attitude theory is so successful.
Only Dennett among the Instrumentalists has taken this task seriously; and if his own explanation fails, we need to understand why it fails. Moreover, if Instrumentalists maintain that these patterns are — at every level of abstraction — nonexistent, then they seem to be saying that there is no behavior. And if there is no behavior, then nothing exists for psychology to explain. And if nothing exists for psychology to explain, then no psychology. But, then, what is the Instrumentalists' alternative? If all we want is a science explaining how human beings work physically — one that brackets the notion of behavior — we already have one: physics. But, surely, we want questions answered
that we think physics alone cannot answer. Not only will the explananda needed to answer these questions be natural kinds other than those of physics (other than that of bodily movements), but the kinds that will appear in the explanans will also be different. That is, we want the propositional attitudes explained, and most of us also believe that they will figure (because they are causes) as part of explanations of further states (behavior). No wonder most of us remain so doubtful about Instrumentalism — and about Dennett's Behaviorist brand of Realism.

Nevertheless, despite these difficulties, Instrumentalism is partly right. Dennett is right as well when he says that patterns underlie our concept of "behavior." The next section of this chapter makes explicit what is right about Instrumentalism and also explains how Dennett's patterns are to be preserved. However, the argument also shows why Realism is nearer the truth than Instrumentalism. Dennett is looking for the relevant patterns in the wrong place. They are not observable patterns of bodily movements. They are apperceivable patterns of internal states. Because they are, propositional attitudes are intended as illata, not as abstracta.

II
6. Dennett denies that we have direct, apperceptive access to propositional-attitude states, in part because the Instrumentalist in him says that no inner propositional-attitude states exist, but also in part because of doubts about apperception itself. Two reasons may explain Dennett's doubts about the importance of apperception to psychology (a third reason is discussed in section III). Apperception has been thought to require either incorrigibility or phenomenality. But we are not incorrigible about our propositional-attitude states, and no phenomenal state seems to be a good candidate for being a state that comprehends propositional-attitude states — as Wittgenstein (1953) went a long way toward showing (see also Nelkin 1989b, as well as chapters 5 and 6). So it would seem to follow that we do not apperceive our propositional-attitude states. Hence, one might conclude, as Dennett does, and as Fodor (1987) also does, that commonsense propositional-attitude states are theoretical entities, believed in only inferentially, not directly accessed. Dennett concludes that propositional attitudes (as inner states) do not really exist; Fodor, that they do. But both are
wrong to think that internal propositional-attitude states are theoretical entities. They are close to being right: ascriptions of propositional-attitude states are deeply theoretical. But the states themselves are accessed, apperceptively, i.e., we have direct, noninferential access to them.14

Despite my agreement with Dennett and Fodor that propositional-attitude ascriptions are a matter of theory, deep differences exist between my view and theirs, differences that affect one's view of psychology. And the role given to apperception (that I give it a role at all) is a key to understanding those differences. These claims need careful explication and elucidation. Soon, the consistency of "directly apprehended, yet theoretical" should be more apparent.

7. We can start by noting that the above argument against apperception goes through only on the assumption that apperception requires incorrigibility and phenomenality. But neither is required for apperception (see chapters 5 and 6). However, more needs to be said about the nature of apperception itself. While the account to follow is admittedly far from complete, nothing prevents it from being filled in as psychology progresses; and more is said in section III.

First, apperception does not necessarily involve paying attention. For instance, one can be aware that one is seeing the clock on Parliament Tower even while paying attention to the clock and not to one's seeing it (as opposed to hearing it, say). That awareness of seeing the clock is an instance of apperception. Apperception is a second-order awareness, a judgment, in this case with the content (expressible in English as), "I am seeing the clock on Parliament Tower."

Perhaps the best way to clarify the claim that apperception is direct and noninferential is to compare it with perception. Perception, I have argued (chapter 1), is a representational state, but primarily a judgment state rather than a phenomenal state. Here is a quick story about perception (not to be defended here): Perception results from "scanning" various unaspectualized representational states (see chapter 4). The perceptual "scanning" in question is a "scanning" of internal states of
Here is a quick story about perception (not to be defended here): Perception results from "scanning" various unaspectualized representational states (see chapter 4). The perceptual "scanning" in question is a "scanning" of internal states of 14
The short definition of apperception is that it is a second-order, noninferential, cognitive awareness of first-order representational states (see chapter 6; also Mellor 1978, Rosenthal 1986 - what Mellor calls "insight" and what Rosenthal calls "consciousness" are each, in different ways, similar to what is here called "apperception"; but the differences between their notions and mine are at least equally important as well). A good deal more will be said about the nature of apperception in §7 and in section III.
Apperception
the perceiver, not a sensory scanning of the world. That latter (though temporally prior) scanning results in the unaspectualized, internal, representational states that are then further "scanned." These unaspectualized representations are then "weighed," and the result of this "weighing" (or "inferring") is an aspectualized representation we call "perception." Two disclaimers: "scanning" has quotes around it because the process may be a passive, strictly causal one, rather than involving an activity of the mind. To use Churchland's word, there may be only a kind of vectoring of information encoded in various receptors. Second, one may question whether terms like "information" and "representation" used in describing states of low-level receptors are themselves appropriate. One may maintain that the only representation involved in perception occurs at the end of the perceptual process. The resolution of these issues is irrelevant to what follows (but see chapter 4). Readers, if they have any view on these issues, may read "scan" and "representation" literally or eliminatively, as they wish. In any case, the result of "scanning" (as well as "weighing," etc.) these unaspectualized representations is an aspectualized representation involving a propositional attitude, a judgment.15

Kant (1787/1961), who held a representational view of perception, also held a representational view of apperception. And in a sense he was right. Apperception is itself a representation, a judgment (see Part Two; Mellor 1978; Rosenthal 1986, 1993). But an apperceptive representation is a representation either of another fully representational state (another aspectualized representational state) or of a phenomenal state, (either of) which directly causes the apperceptive representation to emerge. For instance, one's seeing the clock, an aspectualized representational state, causes the apperceptive representation having the content (in English), "I am seeing the clock."
Thus, the "scanning" leading to apperception differs from the scanning of perception in important ways: the "object" of apperception is itself an aspectualized

15. See chapters 1 and 2. Gibsonians would deny the representational nature of perceptual processing. I am quite sympathetic to their view but believe it is not quite right. I have argued (chapters 1 and 2) that perception is primarily a state of cognitive awareness, a judgment, and is only secondarily a phenomenal state. Because judgments (or their replacements in a mature psychology) are by their very nature representational (proposition-like), the end state of perception is representational — whatever the status of the intervening states. I will continue talking here as if the processing that leads to this end state is itself representational; but Gibson's being right about perception, if I understand him correctly, would not alter the main thrust of what I have to say in this chapter. For a fuller discussion of Gibson's views, see chapter 4.
(or phenomenal) representation, not an unaspectualized (or nonphenomenal) one. Second, only one representation is "scanned" in the process leading to apperception, while in the process leading to perception many representations are "scanned." Finally, the aspectualized (or phenomenal) representation apperceived directly causes the apperceptive representation. There is no "weighing" (or vectoring) of many states. Because of these differences, apperception, although a representation, can be considered direct and noninferential.16 Apperception has much in common with Baars's (1987) notion of a global workspace (and with closely related bulletin-board and blackboard theories). The idea is that information contained in a relatively modular system of the brain becomes available for "broadcast" to other modular systems of the brain, otherwise cut off from the encapsulated information, but able to make use of that information if it be available to them. The discussion of apperception will be continued in section III. Enough has been said so far to allow furthering the theory.

8. How can we have direct, noninferential access to our own inner states and yet propositional-attitude ascriptions be a matter of theory? Perhaps we should ask: What do we have apperceptive access to? The answer to that question has a kind of obviousness about it: token mental states. Yet the obviousness of that answer conceals its complexity: to label a token mental state a "thought" (or a "desire," and so on) is also to type, to categorize, that token. Instrumentalism is right in recognizing that token mental states do not wear their types on their sleeves. Just as categorizing objects in the external world is underdetermined by information we receive from that world, so the categorizing of inner states is similarly underdetermined. And just as our only approach to categorizing external objects is by means of theory, our only approach to categorizing inner states is also by means of theory.17
16. Apperception is unlike perception in another way as well: the process leading to apperception does not begin with sensory scanning. Most probably, no sensory-like organ of apperception exists.
17. A quick gloss on "theory": Theories are attempts to bring experiences under a rule-governed scheme, i.e., to organize experiences, and what we experience, in order to provide ourselves with useful explanations and predictions. I take bringing instances under categories to constitute an elementary kind of theorizing. Such a use of "theory" agrees with the use recognized by many developmental psychologists (for instance, see Keil 1981, 1991; Gopnik and Wellman 1993; Gopnik 1993; Wellman 1992) and has the virtue of allowing us to see how we come to have theories on a much grander scale (scientific theories): the latter develop out of the former, utilizing the same (innate) skills.
Unlike tokens of the external world, however, tokens of our inner world are sometimes directly accessed by us — through apperception. The "entities" are not theoretical, but our categorizing them is a matter of theory.18 However, the tokens apperceived constrain the categories we can gather them under. There are discoverable patterns all right; but they are patterns of our inner life, of our internal states, not patterns of external movements. And these internal patterns ground and constrain propositional-attitude concepts and later applications of those concepts. Propositional-attitude categories constitute a theory that helps us account for these apperceived patterns. The discovered patterns are of our internal states; the patterns of bodily movements we think of as behavior are imposed patterns, the recognition of which is possible only because we already possess propositional-attitude concepts.19

9. If we assume that we apperceptively access at least some nonphenomenal mental states,20 and these states provide us with information about the world and about our own goals, several otherwise obscure points begin to be clarified. For one thing, we can understand
18. My view on these issues is, if I read him correctly, close to that of Churchland (1989). However, I believe that commonsense categorizations may be closer to being correct than does Churchland. I stand somewhere between Fodor and Churchland on this latter issue. See below.
19. Dennett has recently come very close to this position. Consider, for instance, the following: The sense the subjects reported of not quite having had time to "veto" the initiated button-push when they "saw the slide was already changing" is a natural interpretation for the brain to settle on (eventually) of the various contents made available at various times for incorporation into the narrative. (Dennett 1991b, 168). Even where Dennett (1991b) is talking about the robot Shakey (especially 92-98), he ascribes to it a theory of its own internal states. But, then, for unexplained — and perhaps inexplicable — reasons he identifies "theory" with "fiction." For other places where Dennett comes close to the present theory, see 1991b, 127-28, 165, 168, 215, 293, 324-25, 354, 427. At least three considerations keep Dennett from accepting, or even seeing, the very theory he comes so close to: (1) Dennett (1991b) fails, as so many others have failed, to see that we need to tease apart various strands (especially phenomenality and apperception) that are each labeled "consciousness," but which are, in fact, dissociable (see chapters 5 and 6); (2) Dennett's Behaviorism, which he inherited from Ryle; and (3) the fear that apperception requires an apperceiver, an internal Cartesian theater. For more on this last issue, see section III.
20. And nonphenomenological — if being phenomenological involves feeling some way or other in a sense of "feel" that is not phenomenal but, for want of a better term, quasi-phenomenal. As my earlier chapters make clear, I doubt that any such property exists.
how propositional-attitude concepts originate: they arise because we have apperceptive access to internal states. These concepts need not be thought of as merely popping into use, nor need they be thought of as innate. Instead, experience plays a large role in our acquiring such concepts, somewhat similarly to how experiencing cats allows us to acquire the concept of cats. Of course, experience cannot be the whole story of concept acquisition in either case; but it is an integral part of the story. When Fodor (1987) says that the propositional attitudes are theoretical entities yet innately specified, we have the very peculiar implication that hard-wired into us are concepts that resemble "electron" in that the "entities" categorized are themselves inaccessible. That such deeply theoretical entities would be innately specified is a good deal to swallow. Surely, even nature, known as it is for its evolutionary "clunkers," wouldn't come up with such a bizarre — yet extraordinarily sophisticated — "solution" as that.

We also understand why the patterns of movement Dennett considers exist (though at a very high, nonvisible level of abstraction): we realize, from an apperceptive understanding of our own internal states, that we impose these patterns on our bodily movements because we have beliefs, desires, and the like. The patterns are not already out there, awaiting discovery. We create those patterns because we are the kinds of beings we are, because we have the kinds of internal states we have, because we are capable of acting on the world. Indeed, we would not discover these patterns in our movements if we did not take the intentional stance; but that is because the patterns would not even exist if we didn't have the aspectualized internal states we have. We discern the patterns "out there" because we create those patterns. And we recognize patterns in the movements of others only because we ascribe to others internal states similar to our own.
It is not because we recognize patterns — at any level of abstraction — in others' movements that we ascribe behaviors to them. Rather, it is because we ascribe to others propositional-attitude states of a kind we ascribe to our own self that we recognize patterns in their movements (for an account, rooted in the evidence from developmental psychology, of how we ascribe propositional-attitude states first to our self and then to others, see the next chapter). But one might here object that there are times when we infer another's propositional attitudes from his or her activities. Wouldn't such cases be clear-cut contradictions of my claim? Yes — and no. We
do sometimes infer a particular propositional attitude (a belief, for instance) from a person's activities, but only in a context. And that context includes our ascribing to the person, prior to the inference, certain "standard," or otherwise known, propositional attitudes of other kinds for that context — desires, hopes, fears, and so on. Moreover, the context itself is read as the context it is only because we do make these prior ascriptions, ascriptions made possible because we ascribe — on the basis of apperception — propositional attitudes to ourselves. Given a propositional-attitude-defined context of desires and so forth, we can then infer a person's beliefs from the activities as understood in that context. Such "activities" are not, then, mere movements: they are behaviors. And these inferences, rather than belying my claims, are evidence for them. Patterns are relevant to our possessing propositional-attitude concepts and to our ascribing propositional attitudes to others, but it is the patterns of our apperceived internal states that make categorizing under propositional-attitude types possible.21

It is true that we don't feel intentional states when we apperceive them in the way we feel pain phenomena or other qualia. But that only shows that these kinds of internal states — those that give rise to propositional-attitude concepts — neither reduce to phenomenal states nor possess phenomenal properties in an essential manner. Not only does apperception not possess phenomenal properties, but neither do many of the states we apperceive. The belief that phenomenal qualities are required for apperception, or for intentionality, is a legacy of British Empiricism that we have not considered closely enough (see Part Two). Some gift horses really do need dental repair. Nor does apperception require incorrigibility. We can be mistaken about our internal states.
In fact, though direct, the kind of access we have to our inner states allows only for theoretical organization of them

21. It is probably true that our categorizing is constrained by innate structures. That fact would help explain why categorizations of mental states are so universally similar, though how similar and how universal are admittedly understudied questions. J. van Brakel (personal communication) tells me that Dutch, for instance, has no word for "belief," or, rather, the Dutch word has more of the connotations of "belief in," with its accompanying notions of "trust," and so on. Moreover, although Dutch grammatically allows for the plural "beliefs," van Brakel tells me that he has never heard this form used. I am claiming only that many of the token mental states are not themselves theoretical entities and, as apperceived, also contribute constraints to our categorizations of them. The claims of this subsection form the core of what is correct about simulation theories of mental ascriptions (see, for instance, Gordon 1992; Goldman 1992). Other claims of simulation theory are more doubtful. See below.
under concepts. Commonsense propositional-attitude categorizations constitute such a theory, and almost certainly not a very refined one. There is no reason to think that the internal representation system need be, as Fodor has claimed, linguistic-like. The truth is, we do not know what it is like exactly. Commonsense propositional-attitude theory does claim that the representations are like sentences; but of course, as with any theory, no guarantee exists that the propositional-attitude theory of inner states is correct. We could turn out to be misled about our representational scheme and about how to categorize its instances. Our theory may turn out to be false (that is the bet of the Churchlands, Stich, and Dreyfus, and of many of the connectionists). I do not mean to claim a priori that the internal representational scheme is not linguistic. I think the theory is closer to being correct than incorrect. And until a better theory comes along, refining the commonsense theory rather than abandoning it is an extremely reasonable endeavor. Fodor's requirements that the propositional attitudes not be fused and that representations be compositional and productive are reasonable. And like linguistic representations (sentences), these representations would seem (as Searle [1983] also argues) to have to be aspectualized. Whether only linguistic-like representation can meet these criteria is an open question. Fodor's own (1987) conception of the language of thought, wherein narrow content is represented, is already quite removed from any ordinary notion of "language." Moreover, we have evidence from connectionism (Rumelhart and McClelland 1986) and Neural Darwinism (Edelman 1987) that alternative representational schemes are available. Even if these views are ultimately wrong, they make us realize that genuine alternatives to linguistic representation may exist.
And one goal of a scientific psychology is to continue searching for the internal representational scheme (or schemes, if there be — as is likely — more than one). Given all the difficulties inherent in ordinary propositional-attitude talk, it is certainly possible that concepts like "belief" and "desire," as they stand, will fail to provide a basis for a scientific theory of human behavior.22

10. Although a propositional-attitude theory may turn out to be scientifically inadequate, it is approximative. We do have apperceptive

22. Fodor himself says that ordinary concepts will need to be modified and constrained, and no disagreement may exist between us here. I suspect, though, that I would allow for much more modification than Fodor would.
access to internal states, and propositional-attitude theory attempts to describe and categorize what we apperceive. That our descriptions and categorizations are so similar throughout different eras and different cultures is surely because our concepts are rooted in similar experiences: we do represent the world to ourselves and represent our own goals in that world. Only because apperception yields at least approximative descriptions and categorizations, and supplies some constraints, is there much reason to believe in it at all.

11. But why should anyone believe that we have nonphenomenal, corrigible, but direct, noninferential access to real inner states, an access that yields the at least approximative set of descriptions and categorizations contained in propositional-attitude theory? There are several reasons. First, as the previous sections have shown, what were problems for Dennett are resolved if we do sometimes have apperceptive access to inner representational states. Compared to Dennett's view, Apperceptionism (as I will call this view) better accounts for our possessing propositional-attitude concepts at all.23 We are able to understand how diverse bodily movements can nevertheless be categorized as instances of the same behavior. Yet, if propositional-attitude theory provides only approximative descriptions and categorizations of internal representational states, we can understand both the force of the Instrumentalists' position and at the same time dispel that force. And so we are able to grasp what is right in Dennett, and in Instrumentalism as well: a propositional-attitude scheme is only a crude theory — it does run into serious, perhaps unresolvable, problems; yet there are patterns, but these are first — and foremost — patterns of internal, contentful, representational states to which we have apperceptive access.

12. At this point, we need to consider the possibility raised in §4.
Why couldn't we have acquired the concept of behavior, not from apperceiving mental states, but from observing lower-level functional states, like eating, and first acquiring their concepts? Having acquired concepts of lower-level functional states, we then extrapolated to acquire the higher-level ones.24 Two replies are in order. Both, I
23. Apperceptionism should not be confused with the view (presented in chapter 5) called "Apperceptionalism." Apperceptionalism is a (mistaken) view about consciousness. Apperceptionism is a view about concept acquisition.
24. It is crucial here that "eating" not be considered an intentionally characterized act but refer only to the "physical" process. "Digestion" might be a better candidate for a lower-level
believe, are correct. They complement each other. The second is the deeper. The differences between lower-level functional concepts, like "eating," and higher-level ones, like "believing," do not seem to be merely matters of degree, but of kind (see, for instance, Bennett 1988, Norman 1986). No obvious incremental steps get us from one sort of concept to the other. One way of perceiving this "gap" is to focus on the relation of the relevant states to bodily movements. In the case of the lower-level states, the relation is fairly tight. Although eating consists of different movements at different times, the kinds of movements are fairly strictly circumscribed and constrained. But to use Dennett's own word, there seems to be a kind of random relation between thinking that Clinton is President and bodily movements. And to borrow a point from Dennett (1978d, 65), we refuse to ascribe beliefs about safety to a sphex wasp because of its stereotypical movements. It is interesting to note that attempts to generate acquisition of high-level functional concepts from those of lower-level ones founder at just this gap.25 Perhaps the failure to build a bridge across the gap is no mere coincidence: the gap is simply too wide.

The other reply cuts deeper and complements this first reply. How did we acquire concepts of lower-level functional states in the first place? If physics is right and all is just bodily movements, how did we acquire concepts that cross-cut the natural-kind categorizations of physics? Surely, the lower-level functional concepts presuppose concepts like "goal," "information," and the like. But the most natural explanation of how we obtained them is from our understanding of the concepts of the propositional attitudes. So the best explanation of how we acquired concepts of lower-level functional states is that we already possessed concepts of higher-level ones. We possess these latter concepts because our representational
function because it would avoid this ambiguity in "eating," but it is hardly likely that we acquired the concept of digestion prior to acquiring concepts of the attitudes.
25. See, for instance, Dretske 1981, Millikan 1989. Dretske (1988) attempts to close this gap. But while he has made progress here, there is still much hand-waving at the crucial gap. Laying out the arguments in support of my claim would require another chapter (for good arguments, see Slater 1994). In any case, even if Dretske were right, he is not trying to answer the questions of this chapter: How did we acquire the intentional stance, and what is the origin of those patterns that are discerned when taking that stance? Like Dennett, Dretske thinks belief behavior is a high-level abstraction from bodily movements, but he does not try to answer how we are capable of such abstraction in the first place.
states and our attitudes towards them enable us to modify the world; and we have apperceptive access to these representational states and to the attitudes, and thereby to notions like "goals" and "information." The developmental evidence (see, for instance, Carey 1985) is that concepts like "eating" and "thinking" are evidenced virtually simultaneously, and toddlers ascribe the one only to those organisms to which they ascribe the other.26 It is not that we eat (or digest) only because we first have concepts of thoughts, desires, and so forth. It is not a question of when or how we acquire functional states. It is a question of when and how we come to categorize them under the concepts we do. It makes most sense to think that we categorize under the concepts of low-level functional states because we first, temporally and logically, categorize our inner states under the concepts of the propositional attitudes by apperceiving inner states and understanding that we act on the world. Understanding our actions as actions allows us to conceive of the broader range of functions.

13. There are several additional reasons for accepting that apperception and its data deserve an important role in psychology. Fodor himself gives two reasons weighing in their favor: the apparent universality and success of propositional-attitude theory. It would be odd that every culture and every language incorporated propositional-attitude concepts, and such similar ones (yet, apparently, not exactly similar), unless people were constrained by states to which they had direct, noninferential, though not necessarily incorrigible access. One could claim that the categories are wholly innate. But, then, categories of what? One has to claim either that they are categories of bodily movements (Dennett) or posit internal states as theoretical entities to which the categories apply. The first alternative was discussed and rejected in section I.
The second — that both the categorizations and the entities they apply to and explain are theoretical posits — asks too much. That a belief in theoretical entities is hardwired into us demands more from evolution than we have any reason to expect. The success of propositional-attitude explanations and predictions provides similar evidence for a similar constraint. If propositional-attitude talk is such a mess, if it contains even contradictions (Stich 1983),

26. And it is probably the case that "eating" is already an intentional concept for the children, not merely a physical-movement one.
how are we able to use it so successfully? Why don't we make just any old explanations, any old predictions (anything follows from a contradiction)? How are we able to get around in the world with propositional-attitude concepts? Surely, a plausible answer is that although propositional-attitude theory is messy, it is constrained by the states that we categorize. Whatever foolishness exists, exists in our theory; but the states we are theorizing about constrain us from too much foolishness in action, or explanation, or prediction.
III

14. Having said these things in favor of Apperceptionism, a serious challenge to it is apropos: The idea behind Apperceptionism is that by "scanning" its own internal states an organism comes to detect patterns in those internal states; and those patterns enable the organism to conceptualize the propositional-attitude states. Perhaps this view is correct, but it would be considerably more convincing if more could be said about the patterns that are supposed to be detected. This challenge is daunting and cannot be met fully. But there are good reasons why it cannot. However, the challenge can be met at least to a degree; and the failure to answer it completely can be explained. These two tasks form the project of the remainder of this chapter. The project involves three fairly major digressions: (1) a brief account of what I envision as the role for philosophy of mind; (2) a somewhat fuller account of Dennett's (1991b) theory of consciousness; and (3) a reminder about what the first two parts of this book have shown. Only after digressing in these ways will I be in a position to meet the challenge. But even before digressing in these ways, two small but important points need to be stressed. First, no other view has ever met an analogous challenge. Behaviorism certainly failed to. Classical Empiricism certainly failed to. Those who believe that we obtain the concepts of the propositional attitudes from below, as discussed in the previous section, have no such story for us. And certainly those who believe the concepts to be innate never attempt to tell us much about how they are instantiated. Clearly, patterns of some kind or other — in our physical movements, in our phenomenological experience, or in our brains — must ground the concepts (even if they be innate), so Apperceptionism owes no more of a
story than any other position. Nevertheless, a story can be sketched for Apperceptionism; and perhaps the sketch will give Apperceptionism a leg up on its rivals. Second, the focus in this chapter is on the concepts of the attitudes themselves (i.e., thinking, believing, desiring, and so on) and not so much on particular instances of them (that is, for example, on "thinking" rather than on "thinking that Clinton is president"). Undoubtedly, for a full account of the attitudes, a good deal would have to be said about their contents. But attitude and content can be separated, and my focus is on the attitudes. (One can have different attitudes toward the same content and the same attitude toward different contents.)27

15. Philosophy of mind I take to be a kind of propaedeutic to a scientific psychology. What is clear is that at the moment no universally accepted paradigm for a scientific psychology exists. It is exactly in this kind of circumstance that philosophers can be helpful to empirical scientists. The task for philosophers of mind in the present context is to consider the empirical data available and to try to form a generalized, coherent way of looking at those data that will guide further empirical research, i.e., philosophers can provide a highly schematized model that will structure that research. And the resulting research will, in turn, help bring about refinements of the schematized theory, with the ultimate hope being that a closely honed, viable, scientific theory (one wherein investigators agree on the questions and on the methods to be used to answer them) will emerge. In these respects, philosophical theories of mind, though concerned with current empirical data, are too general in respect of the data to be scientific theories. Moreover, philosophical theories are aimed primarily at a body of accepted data. As such, philosophical theories merely give a "picture" of those data.
Scientific theories not only have to deal with the given data but also have to make predictions (and postdictions) about unknown data, predictions (and postdictions) that can be gleaned from the theory together with accepted data. This removal to unknown data is what forms the empirical basis of a scientific theory and allows it to be justified in a way quite distinct from the way in which philosophical theories are justifiable. Philosophical theories are only schemata, coherent "pictures" of the accepted data, only pointers toward empirical theory (and as the history of philosophy makes manifest, usually unsuccessful ones — though I do not think this lack of success is any kind of fault: these are difficult tasks).28

27. I take up the problem of content in the next two chapters.

And the relevance of this discussion to the challenge issued should now be apparent: the challenge will be fully met only by empirical theory and research. My aim is to point that research toward looking for patterns in the brain (at any level: structural, physiological, or functional) that we grasp in apperception and that give rise thereby to our concepts of the attitudes, and toward looking for a mechanism (or mechanisms) by which apperception itself is realized. Until such research be well under way, only a very generalized answer to the challenge can be offered. But this generalized answer is worth presenting. In order to present it convincingly, Dennett's (1991b) views need to be more fully considered.

16. I cannot do full justice to Dennett's views here, and the following is only a synopsis useful for the further purposes of this chapter. In discussing phi phenomena, Dennett (1991b, 114-15) says that people offer either an Orwellian or a Stalinesque interpretation of the data.29 (1) Orwellian: We consciously see the second light but repress our memory of having seen it. That is, the light flashes at B, we consciously see it, repress the memory of seeing it, and construct a "memory" of seeing the light passing in uniform motion from A to B. (2) Stalinesque: We never consciously see the second light flash but "construct" a false percept of the moving light. When asked what we saw, our memory is accurate. In Orwellian cases, the perception is correct but memory faulty. In Stalinesque cases, truth and error are reversed. While Orwellian vs. Stalinesque disputes can be settled for many events, Dennett claims that for the time frames involved in
29
And it is not only professional philosophers who take them on - nor need it be. The kind of interchange now taking place between philosophers of mind and cognitive neuroscientists parallels the kind of interchange that took place in natural philosophy in the hundred years between Galileo and Newton. If we are lucky, things will turn out as well. Phi phenomena are illustrated in the following experiment: If there are two points, A and B, and first a light is flashed and extinguished at A, and then a light is flashed and extinguished at B, in most cases that is exactly what people say they see. However, if the temporal interval between flashes and the spatial interval between A and B are calibrated in just the right way, people instead claim to perceive a single continuous flash going from A to B. For instance, people take themselves to have seen the light at A, to have watched as it moved through the midpoint C, between A and B, then to have seen it approach B from C, and finally to have arrived at B. But in order to "see" the light at C before it arrives at B, one must have visually taken in the information that the light has already flashed at B.
217
Apperception
phi-phenomena experiences no solution exists, not because decisive experiment is hard to come by, but because no fact of the matter exists. Dennett's reasons for this conclusion are surprisingly difficult to discover in the relevant chapter.30 But, nevertheless (and unlike the reviewers mentioned), I think they can be discovered there; and Dennett provides additional arguments for his conclusion in other chapters. Dennett actually offers two independent reasons (I am not sure he sees them as independent). In his first chapter, Dennett praises Neisser's information-processing view of visual perception (12; see also 111-12 and 118-19). In Neisser's view, a visual perception is the result of a retinal stimulus on each eye in conjunction with a comparative analysis of the stimuli with each other and with stimuli from each of the other sensory systems, and all of that information in comparison with stored knowledge (memories) — of the context, of immediately past perceptions, of items in long-term memory, and so on. That is, in any occurrent visual experience, a combination of immediate stimulus information and memory exists. Thus, for the very short time frames necessary for a given percept to occur, no distinction between memory and perception exists: perception is an experience that involves memory. Bottom-up and top-down processes are simultaneously at work. So there is no answer to the question: Was the information perceived or only remembered? That is one kind of reason Dennett gives for his conclusion. But this argument invites the reply that while Dennett may well be correct about the complex cause of any percept, the question remains whether the percept of the B state consciously occurred, a question that is often read as asking whether a phenomenal representation of the B state occurred. If it did, then on this reading of the question the Orwellian view is supposed to be right; if it did not, then the Stalinesque one is supposed to win.
Even if we cannot test for its occurrence, the phenomenal state either occurred or it did not. So it is claimed that resolution of the dispute exists in principle, and the problem is merely an epistemological one.31

30. Chapter 5. Block (1993) and Strawson (1992), in their reviews of Dennett's book, claim to find no arguments for his conclusion. (From now on, numbers in parentheses in the text refer to pages in Dennett 1991b.)
31. Block (1993) says something almost exactly like this reply, and the same idea is at least implicit in Strawson (1992). Neither offers the reply in answer to the argument I have ascribed to Dennett, since both claim to find no argument at all for Dennett's position.
Constructing Dennett's response to this reply (for, in fact, he anticipates it) allows us to discover his second argument. He basically rejects the reading of the question as being about phenomena. Consciousness, for Dennett, is instead embodied in a stream of consciousness. And every momentary state in the stream is about something or other (7, 132, 189). That is, we cannot be conscious without being conscious of something or other. And if we are conscious of, when conscious, then consciousness must consist of the sort of aspectualized states Searle (1983) calls intentional states. But phenomenal states (qualia) have no intentionality (are not aspectualized representations). Therefore, no phenomenal state is in itself a conscious state, and so whether a phenomenal state occurred or not is irrelevant and cannot decide whether experimental subjects consciously saw the light flash at B. Obviously, a key step in this argument is the last premise (that phenomenal states have no intentionality). While Dennett says a few things in its favor in chapter 5 (126-34), his main arguments for this premise do not appear until chapter 12. Rehearsing his arguments for this conclusion is unnecessary for my purposes, but Dennett is not making merely dogmatic claims. Pace his reviewers, he does argue for his position.32

32. Besides chapter 12 (1991b), see also Dennett 1988b, 1991c. For arguments supporting the last premise (I would reject some of the earlier premises), see Part One of this book.

Dennett's problem, as he sees it, is how to tell a story of a stream of consciousness without there being a central area where it all comes together, without any Cartesian theater in which it is all rerepresented. His opposition to a Cartesian theater is both a priori and empirical. The a priori objection is that its all coming together in a theater would require a viewer, an inner interpreter of the rerepresentations; but an inner interpreter would not solve any problems, just push them back — ad infinitum (39, 102-03). The empirical objection is that no evidence for a neural theater exists. No cortical area receives inputs from and makes outputs to every other area (39, 134, 165). Even in a restricted system such as vision, information appears to be channeled: color, shape, spatial position, spatial location, movement all seem to be processed independently of each other (as discovered from brain-lesion cases), with no place where even all visual information comes together. Each of the six layers of cells in the ganglia of the retina, for instance, seems to carry a unique sort of information; and each layer connects
almost entirely to like layers in other areas of the ascending optical hierarchy, with virtually no vertical connections among the layers. And this compartmentalization exists all the way to the visual, striate cortex (area V1); and from V1, the layers each connect to different poststriatal areas. That is, the subsystems of vision seem themselves to be quite modular. Nor does all this visual information seem to go anywhere "higher" where it is unified. And similar evidence exists for a like compartmentalization of information in other perceptual systems. Dennett's solution to the problem he raises for himself is that information is represented modularly, massively in parallel, and only once. No need for a second representation of the same information arises. Yet, if representations are so modular and quite encapsulated from other modules, how does the organism make use of that information, that is, how is coordinated behavior possible? Dennett's answer is that a kind of vectoring takes place, leading to bodily control (188, 228). And this control gives the appearance of interaction among the modules when there is none. But what, then, is consciousness, why should any organisms have it, and what is its role? And here emerges what Dennett takes to be his most significant contribution to explaining consciousness: The neurons that play a role in modular processes throughout the brain also play a role in creating — and constituting — short-lived, virtual, (largely) brain-wide, serial "computing machines" (190, 210-11, 218-26, 228, 253-54, 281). Evidence for such brain-wide, virtual machines comes from cases of attention, orienting responses, and the like, in which the whole brain seems to be activated all at once in a united fashion.
Modules "compete" among themselves in creating these virtual machines, causing ordinary-language (in our case, English) sounds (and syntax) — which is the language of the virtual machines — to be emitted by the organism (no deeper language of thought exists — 302). Only language appears to possess intentionality (aspectuality — 17, 135, 220, 275, 302, 365, 417; cf. Fodor 1975).33

33. Though Dennett seems to allow for the possibility that purposely drawn pictures also have intentionality (1991b, 59, 197, 275, 316).

The resulting utterances — aloud, or, later, sotto voce, or, later still, written — the ones that win out, are loose descriptions, abstracted descriptions, of the representations in the modules, acquiring their meanings from how they hook onto the world, especially onto the behavior of the organism and most especially
onto its interactions with things in the world, including other organisms (278, 354, 365). That is, Dennett is Externalist about meanings and maintains that the "language of thought" is parasitic on semantic (sentence) meaning (compare Wittgenstein 1953), rather than — as once traditionally maintained (compare Wittgenstein 1921/1961) — vice versa (57, 237-42, 279, 304, 429). Utterances of the virtual machine thus provide a kind of running commentary on the representational processes of the modules; but these accounts are at an abstracted level and, so, literally fictional. Yet, they serve two purposes. (1) In "broadcasting" these commentaries, they make available information about one module that would not otherwise be available to other modules encapsulated from that module (195-97, 253-54, 257, 272-73, 278). (2) The utterances, when spoken aloud or written, cause similarly uttered responses in other organisms; and these latter noises in their turn carry additional information, which will once more be modularly processed, and so forth. I have not supplied any details as to how the virtual machines are created, or as to how they are able to abstractly describe modular representations, or as to how other modules can utilize this abstract (and fictional) information; but the failure of exposition here is Dennett's, not simply mine. And Dennett would justify himself, I believe, in ways similar to my remarks in §15 above (41, 257, 262-63, 267-68, 455). Finally, for Dennett, virtual serial machines comprise the stream of consciousness. Their "commentary" alone has intentionality. Consciousness, then, for Dennett, is a Johnny-come-lately state, arising only with language use (194, 219 — or perhaps with a kind of purposeful picturing, if it precedes, or can exist independent of, language use).

17. We will return to Dennett's views shortly; but first, I want to shift the focus.
Suppose, contrary to Dennett's — and my own — view, that Classical Empiricism is correct, that phenomenal states are intentional states, and, most especially, that differences among propositional-attitude states are phenomenal differences: beliefs just feel different from hopes, desires, and so forth (see Hume 1739/1967; Russell 1948; Goldman 1993). Although such a view is incorrect,34 it — surprisingly — offers a route to meeting the challenge of this section, to allowing me to say something about the relevant apperceived patterns, and to allowing me to further distinguish my view from Dennett's.

34. From the synopsis, Dennett's opposition to this view can be recognized. My own views on this issue are most fully expressed in 1989b, as well as throughout this book.

According to British Empiricists, phenomenal differences among propositional-attitude states allow them to be sorted into thoughts, beliefs, hopes, and so on. Similarity of feeling marks off those states we call "beliefs" and differentiates beliefs from the similarity classes underlying each of the other propositional attitudes. Now make a physicalist assumption (and like Dennett [33-37], I am a physicalist) that a type-type identity theory is correct about phenomena (I think it probably is correct). This supposition is justifiable here to the degree that no one has ever told a better physicalist story about phenomena. Given this sort of identity theory, one will believe that classes of neural processes (at some level of description) constitute the phenomenal classes. And since phenomena are neural states, and since apperception accesses phenomenal states, apperception, in accessing phenomena, accesses these neural patterns. And in so far as phenomena are neural patterns, neural patterns differentiate one propositional-attitude state (type) from another. So if I were a British Empiricist, I would have a fairly reasonable answer to the challenge that initiated this part of the chapter. However, British Empiricism is almost certainly mistaken; as Reid (1785/1969) argued long ago, feelings do not distinguish among the propositional attitudes (see also Part Two). So I cannot make use of this road. Yet, the road does not run entirely in the wrong direction; and we can pick up a clue to the right road from it. Suppose for just another moment that British Empiricism is correct. And suppose that the only descriptions of the relevant neural states we are able to provide are of them as qualitative states.
It wouldn't follow from the second supposition that the description under which we talk about neural states is actually the basis used in the discrimination. We might just be ignorant of the relevant description and be misled into thinking that the only description we know is the relevant one. We might, that is, discriminate propositional attitudes without really understanding exactly how we do it. And once we realize that neural patterns, quite independently of how we might describe them, can underlie categorizing the propositional attitudes, a road is open for those of us who believe phenomena to be irrelevant to propositional-attitude categorizations. Apperception accesses neural patterns, which form similarity classes; and these patterns lead us to categorize beliefs as different from hopes, desires, suppositions, and so on. These neural patterns are not phenomena, but nonphenomenal neural states conjoined to aspectualized representations (which are themselves neural patterns). The relevant states, then, are structures of neurons, or perhaps abstract functional states of neurons. I cannot identify the relevant neural patterns (but of course, neither can a British Empiricist/Identity Theorist), but the task of identifying them truly is an empirical project.35 A major difference between my view and the physicalized British Empiricist view may give rise to an objection: "At least on the British Empiricist view we are supposed to apperceive the patterns themselves, the phenomenal patterns, even if we do not know the underlying neural patterns that ground the phenomenal patterns. But on your view no patterns of anything exist in apperceptive consciousness. At best, only the results of 'scanning' the patterns exist there: i.e., the distinctions drawn are in apperception, but the bases for those distinctions in no way show themselves."36 But apperception is a judgment that we are in other states. It is not a judgment about its own bases. And as we have seen, the British Empiricist has no guarantee that what "shows itself" is the actual basis for discrimination — at least not under that description. And certainly introspection cannot settle this issue for the British Empiricist (or for anyone else). To further pursue our reply here, let us return to Dennett's position. For him, consciousness exists in order to "broadcast" information to otherwise encapsulated modules.37 I would agree almost completely, though substituting the word "apperception" for "consciousness" (apperception is only one kind of consciousness — see chapter 6). As such, the bases on which apperception categorizes the propositional attitudes are unimportant for broadcast.
All that matters to the encapsulated modules is the information contained in the categorized judgments — in the "broadcast" messages. Thus, while the propositional-attitude categorizations of apperception are based on "scanning" (see §7 for the reasons for these quotes) neural patterns, the neural patterns are not themselves displayed in apperceptive consciousness; only the results of categorizing them are. That is, the patterns are direct causes of apperceptive judgments, but form their contents only in so far as they are categorized.38

35. It is unlikely that similarity relations are ever fully adequate to account for categorizations (see Keil 1992). If they are not, my account would need fortifying in relevant ways. But I believe these issues not to be pertinent to the one at hand, and that a more simplified account is not overly misleading.
36. Goldman (1993) is troubled by this sort of question.
37. See also Baars 1987, from whose views Dennett borrows (257), though I believe Baars's views are closer to mine than to Dennett's.
38. And apperception plays a second important role as well: once it has categorized the propositional attitudes, it can then "recognize" token instances as falling under the appropriate categorization. My general silence on this function of apperception does not mean that I do not think it an important function. To the contrary. That the bases of apperception are not in apperceptive consciousness does not mean (pace Goldman 1993) that cognitive neuroscience cannot have anything to say about them. It is difficult to understand why Goldman draws such a negative conclusion. Note that Gopnik (1993) and Goldman (1993) present themselves as opposing each other. If I am correct, they are, instead, both right — and both wrong. Gopnik is right that categorization of the propositional attitudes is a matter of theory but wrong in thinking that apperception isn't crucial to this theorizing. (Actually, at times, she comes close to seeing its importance [1993, 10].) Goldman is right to think that apperception is important but wrong on two counts: (1) Its importance does not preclude our categorizations' being a matter of theory, and (2) its importance does not mean that phenomenology is also important.

18. How, then, does my view differ from Dennett's? In several ways. First, I am claiming that categorizations of propositional-attitude states are based on "scanning" actual neural similarity sets — patterns — and, as such, these categorizations, like any other categorizations, form a theory of the relevant neural states. The categorizations are not fictions (except, if false — which is possible — the theories are fictions in the same sense as any false theory, but only in that sense). Second, apperception, on my view, does involve a kind of rerepresentation of information in the modules, a point I will return to shortly. But for the moment, let me answer Dennett's question as to the point of rerepresentation. Two answers are congenial to my view: (1) The virtual machine rerepresents because the language of the one module is useless to other modules and has to be translated into a language accessible to other modules; or (2) although the language of any module is "understandable" to another, information encoded in one module is anatomically cut off from the others and can reach them only through the intermediation of the virtual machine. On the first reading, the virtual machine is both translator and translatee. Its "language" must be of a kind nearly universally intertranslatable with the "languages" of the modules. On the second reading, the virtual machine rerepresents because somehow its information is anatomically available to other modules and theirs to it.
Third, I take apperception to have a "language" (aspectualized representations), though not English (or any other ordinary language). While ordinary language is a late arrival, apperceptive consciousness is not. My grounds for these differences with Dennett have, for the most part, already been laid out; and I cannot do more than summarize the results here (the third digression). I have argued (chapters 1 and 2) that all perception involves judgment (an aspectualized representation), a conclusion Dennett (128, 134, 150, 335, 355, 364-65, 366, 431) appears sympathetic towards. However, I have also argued that perception is dissociable from apperception (chapters 5 and 6). If both these conclusions are correct, then intentional states (perceptual judgments) exist independently (in unapperceived perception) of what Dennett calls consciousness. If so, even on Dennett's account, the "language" of the perceptual modules would not be ordinary language. And once one admits that there are intentional states (aspectualized representations) that are not ordinary-language representations, then it is more reasonable to think that the language of apperception is itself not ordinary language. And several sorts of reasons support its not being so. First, if the information "broadcast" is to be useful to the modules, it must have a neural basis — even if it were ordinary language — and must be able to be processed by the modules. It would be remarkable if the modules spoke English (or any other ordinary language), remarkable if sounds of spoken English could directly bring about exactly the same modular processing as very different sounds uttered in Chinese. And it would be more remarkable if the modules had no means of communicating their information to each other prior to the introduction of ordinary language. Much nonhuman animal behavior, given its complexity, would be truly wondrous.
Neither of these remarkable possibilities is impossible; and as Dennett notes (37, 367), we should not expect a correct account of how we work to be unremarkable to common sense. But good reasons exist for thinking that apperception is required for distinguishing oneself, an external world, and the objects in it (see chapter 9). Since many animals "down" the phylogenetic scale are able to make these distinctions and these animals do not speak human ordinary languages, apperception cannot require ordinary language. In chapter 3, I argued that even pain requires apperception. And since most of us would take many nonhuman animal species to be capable of feeling pain, the same conclusion would follow.
The challenge was how my view could say anything at all about the patterns underlying our categorizations of propositional attitudes. And that challenge now seems to me to have been met. Notice that I agree with Dennett on many things: Apperception may well be constituted by a series of virtual, serial machines;39 its purpose is to broadcast otherwise encapsulated information to the modules; and so on. However, I have put much more of a Realist interpretation on these claims than Dennett does. And that Realist interpretation is also at least compatible with, perhaps constitutive of, an Internalist position on the meanings of propositional-attitude concepts.

39. Though Baars's (1987) identification of apperception (the global workspace) with a real machine housed in the reticular-thalamic pathways is an alternative worth considering, as is a view somewhere between the two (see chapter 10).

19. One final piece of work remains in replying to the objection that began this section: how do I answer Dennett's objections, a priori and empirical, to a Realist reading of apperception? Actually, the answers have already been presented; but before closing, I want to emphasize them and add new support for my view. Let us start with the empirical objection. What was presented earlier was only part of the evidence. That which was omitted is less favorable to Dennett's view. While there appears to be a good deal of modularity in the brain, at most seven synapses separate any one neuron from any other (Baars 1987, 214). Second, while the channeling of the visual system is much as described in §16, important differences emerge as the visual hierarchy is traversed. It is a hierarchy (Van Essen and Maunsell 1983). For instance, cells get progressively larger from the retina to V1, and from V1 to poststriatal areas (another hierarchy); and these larger cells appear to be sensitive to more — and more kinds of — information than are the smaller cells below them (Zeki 1992; Stoerig and Brandt 1993). Third, and more important, while the description of the strict channeling of the visual system presented in §16 is mostly correct for forward propagation through the system, this rigid hierarchy breaks down in back connections from the poststriatal areas to V1. Here, cells in one area relay back to cells in a variety of layers of V1. These descending connections are poorly understood and may play a large role in vision all coming together, or even in our apperceiving vision (Baars 1987;
Zeki 1992; Stoerig and Brandt 1993). Finally, Baars's (1987, 223-29) speculation is that the global workspace (apperception) is not cortical at all. The cortex contains the information modules. But the global workspace is located in the reticular-thalamic system. And two important points concerning this system are of relevance here: first, the reticular-thalamic system connects to virtually all motor and sensory areas of the cortex; and, second, unlike the cortex, it is virtually unchanged from species to species in its evolutionary history. Obviously, if Baars's speculation is right, there is a place where it all comes together — or at least relevant information does.40 So I can accept Dennett's ideas of virtual machines, which do not require it all coming together, or Baars's view, which would make sense of its doing so.41 But suppose Baars is right. We would still face Dennett's a priori objection against a Cartesian theater and its rerepresentations. However, that objection has clearly already been met: the pieces of information derived from one encapsulated module come together (and perhaps come together at the same time with information from other modules) in order to be broadcast to other encapsulated modules, which can make use of this otherwise unavailable information. The "viewers" are other modular systems of the brain, just as Dennett himself maintains. Dennett is nearly right: the most notable error is his unexplained shift from "theory" to "fiction" (in his chapter 4). But it is a major mistake.

40. Indirect evidence for Baars's view comes from two recent articles (Andreasen et al. 1994; Middleton and Strick 1994). Both of these argue for the influence of subcortical structures on higher cognitive functions, even assigning to the subcortical structures an apperceptive function. Neither, however, argues that there is some one subcortical structure that is the apperceptive module. But neither argues against that position either.
41. My speculation, in chapter 3, that the apperception involved in pain is modular probably puts me on the side of distributed apperceptive processing. Whether distributed apperceptive processing entails a virtual machine is another question. I doubt that it does. If not, a third, more Realist alternative — ontologically somewhere between Dennett's and Baars's — may be required.
Selves

In the previous chapter, I argued that apperception of token representational states is the most likely method by which we acquire propositional-attitude concepts. In this and the next chapter, I trace out ways in which apperception enables us to acquire concepts of our self, of other selves, of external objects — indeed, of all other content. This chapter concentrates on how we sort out our self from the rest of the world, i.e., come to conceive our very self, and on how we sort out other persons as also being thinking, feeling things. The approach to these problems is through an old philosophical chestnut: the problem of other minds.1 The previous chapter had Dennett as its target; this one, Wittgenstein, whose later works pose the greatest challenge to Scientific Cartesianism. The previous chapter had Instrumentalism as its target; this one, Externalism, a view — in its various versions — that carries the greatest threat to Scientific Cartesianism.

1. This chapter is based largely on Forthcoming-a, but the material has been greatly reworked and added to. Moreover, the additions, I believe, are significant, with the result being that this chapter, rather than the paper it is based on, is the more definitive statement of my views.

1. The Argument from Analogy for the existence of other minds came under attack in the middle part of this century (for example, Wittgenstein 1953; Strawson 1963; Malcolm 1963), and while the argument once spawned a sizable literature, it is seldom discussed any more. It has been thought dead, along with the Cartesian view of mind that made it seem necessary. My intention is to resuscitate the argument and to show that there is a perfectly reasonable version of it that avoids the objections and is also compatible with current theories in developmental psychology. The Argument from Analogy,
however, provides only a gateway to the main aim, which is to resuscitate Cartesian Rationalism and its theory of mind. Although many elements of Descartes' theory, especially his dualism, need to be rejected, nevertheless major elements of his theory survive in my own. So I honorifically call the theory of this chapter the "Cartesian theory of mind." But the name should not keep the reader from seeing the differences between the view laid out here and Descartes' own. Resuscitating a Cartesian view does not require showing that it is true: it requires showing only that it is not obviously false, that it can be defended against many of the criticisms it has been subjected to (and that no devastating ones remain), and that rival positions face difficulties at least as great as it faces. Perhaps the best way to clarify the notion of a Cartesian theory of mind (alternatively: Psychological Solipsism) is to spell out a set of contrasting distinctions relevant to the nature of mind (I will coin names for these distinctions):

(1) Realist versus Instrumentalist

Realists take mental states to exist, whereas Instrumentalists claim that they do not actually exist.2 For Instrumentalists, mental-state talk is just a convenient way of organizing the world; but according to Instrumentalists, there are no mental states — only neurological ones, or behavioral ones, say.

(2) Partist versus Wholist

For Partists, mental states are states of persons (or other organisms) only in virtue of being states of parts of the organism — brains (or minds). Wholists, on the other hand, maintain that mental states are states of persons simpliciter, the whole person being the smallest unit for which one can predicate mental states.
2. By mental states, I mean aspectualized representational states, phenomenal states (qualitative states), emotions, affects (positive and negative attitudes, among others), and moods — with no guarantee that this list is either exhaustive or correct. Parsing mental states is itself a matter of theory (see, for instance, Churchland 1989, who, if I read him correctly, is sympathetic with much of what I say; and see the previous chapter).

(3) Individualist versus Anti-Individualist

Anti-Individualists hold that the contents of an individual's mental states, including his or her concepts, are determined only by adverting to (see Introduction, footnote 2, for the use of "adverting to") interactions among the members of a community of conceivers. Individualists deny this claim. They claim that a single
person, all alone — a born Robinson Crusoe, as it were — can in principle acquire and possess concepts.

(4) Internalist versus Externalist

Internalists hold that the content of a mental state is determined primarily and wholly by adverting to states (processes, etc.) within the organism (either by the content being an intrinsic property of the mental state [for instance, see Searle 1980, 1983] or by the content being determined by the relation of the mental state in question to other internal states of the organism [as on many Functionalist accounts]).3 Externalists, in contrast, hold that the content of a mental state is constituted in part at least by the mental state's adverting to things, properties, or complex interactions in the external world.4 Crudely, and even then as applying only to paradigmatic Externalism (even a crude story for Anti-Individualism, for instance, would be more complicated), the difference between Internalists and Externalists can be brought out in the following way: Suppose a brain state D has been first brought about by repeated interaction with dogs. Internalists and many Externalists might agree that state D is the concept ⌜dog⌝. But now suppose, contrary to fact, that D had been brought about, instead, by repeated interaction with tables. For those Externalists, D would then be the concept ⌜table⌝, while for Internalists, D would still be the concept ⌜dog⌝, though perhaps illusionistically applied to tables. To repeat, this story is crude and oversimplified, and in ways misleading; but it gives some flavor to the distinction, Internalist/Externalist, I am after.
Scientific Cartesianism is defined to be Realist, Partist, Individualist, and Internalist.5 It holds that all any of us have immediate epistemic access to are our own mental states; and it holds that in perception we construct a schema of a world, as a theoretical construct (since our schema is underdetermined — and overdetermined — by the sensory input [see section IV ff.]). That is, we employ perception to represent
3. See the account below, which does not quite fall into either of these categories.
4. It is important to see that Externalists can be either Partists or Wholists, either Individualists or Anti-Individualists. Anti-Individualism is a species of Externalism. And Externalists of all kinds are Realists.
5. Versus Wittgenstein's views, which, while also Realist, are - to the contrary - Wholist, Anti-Individualist, and Externalist.
a world.6 While we have no direct epistemic access to anything outside our own mental representations, we do have access to some of our mental states through apperception.7 In this psychological sense (though not in any ontological sense), Cartesianism is solipsistic.8
6. To call perception "representational" may be misleading. Talk of representations presupposes that there is a way the world is, independently of the categories we impose on it, since representations can be more or less accurate. And accuracy of this kind presupposes a world given with its categories. Some - I will call them "Categorical Idealists" — believe that we impose categories on the world, while others - Categorical Realists - believe we discover (at least many of) our categories in the world. I cannot begin to attempt to settle this dispute here, though I think that both positions harbor important truths. My only suggestion is that Categorical Idealists read "representation" throughout as possessing shudder quotes. The issues discussed here - and the position taken - are more or less neutral vis-à-vis the dispute in question. Actually, the intrinsic notion of representation (see footnote 8) - which is the focus of this chapter - can be discussed independently of these issues, since it is neutral between the sides of the dispute.
7. For a fuller statement of the direct/indirect and inferential/noninferential distinctions, see the previous chapter.
8. Because the Internalist/Externalist dispute is the main focus of this chapter, it is worth noting another way to see the distinction. If the spotlight is put on the notion of representation, then Internalists of the sort I am sympathetic to take there to be two notions of representation (i.e., they recognize a further dichotomy, different from the information/content one discussed in chapter 4). The primary notion is something like a one-place predicate that has for its range mental states. For instance, where "m" is a mental state, we get something like, "m is a representation of a dog" (Dm), where "is a representation of a dog" is a one-place predicate and m is generally picked out demonstratively (i.e., "This is a representation of a dog").
And this one-place-predicate use is a partial determiner of the second notion of representation, a two-place predicate: "Dm is a representation of Fido" (where Dm is thought of as an "object" - a data structure, say). That is, for Internalists, meaning (intentionality) determines something like denotation. A large problem for Internalists is how the one-place predicate helps to determine the two-place predicate - i.e., how our internal representations can apply to actual things. But for Internalists, a one-place-predicate representation is a representation (has meaning, has intentionality) whether it applies to anything actual or not. That is, the question for them is what more has to be added to the intrinsic representation to turn it into a subject term of an extrinsic one. About this problem I will have little to say, though I think that this chapter offers the beginning of an answer. Externalists can accept both these notions of representation. But they hold that a relational notion of representation (a two-place-predicate representation) is the prior notion. One form of Externalism would hold, for instance, that because m and Fido are correlated (probably several times so) in an appropriate manner (by the former being an effect of the latter, or by the former being mappable onto the latter, say), m is a representation of Fido. And because this relation holds, one may also think of m, in itself, as being a representation of a dog. That is, for Externalists, the mapping determines meaning. The aim of this chapter is to show that Internalists are right as to which notion of representation is the prior one. While I do not sell short the Internalists' difficulties in explaining how an intrinsic representation can help determine the extrinsic one, I argue, in this chapter, that this problem is no greater than those faced by Externalism, and probably a lesser problem. So I remain confident that there is a solution.
2. Three points noted in previous chapters need to be stressed here. (1) Apperceptive access does not guarantee incorrigibility. We can make mistakes about what states we are experiencing. (2) The categories by which we type our mental states are not given transparently in experience, i.e., the organization of our internal states into types is theory-laden. (3) Apperception is not a phenomenal state. As we saw in Part Two, many of the mental states to which we have apperceptive access are, like apperception, themselves not phenomenal states. To repeat Descartes' own example (1642/1986, 50-51), when one is apperceptively aware of thinking of a thousand-sided figure, no phenomenology is that apperception or that thinking. Phenomenal states may accompany, even cause, thinking and apperception; but phenomenology constitutes neither of these and is, in principle, dispensable. Even that arch anti-Cartesian, Wittgenstein (1953), agreed with Descartes on this point. One should not confuse a mental state's being representational with its being phenomenal. As we have seen, there are nonphenomenal representations (proposition-like ones, for instance). As Descartes argued, and as argued in Part One, even perception is constituted largely by its judgment element rather than by qualitative properties.

The aim of this chapter is partially conditional: if the Cartesian view of mind is correct, then a quite reasonable version of the Argument from Analogy exists. What makes this claim interesting is that there is no reason to think that Cartesianism is any less plausible a view than its rivals, i.e., the antecedent of the conditional is possibly (even plausibly) true. Indeed, if my arguments — in sections IV-VI — are as compelling as I believe them to be, rather more is established: Scientific Cartesianism is the best view of mind that we have (i.e., it is reasonable to believe that the antecedent of the conditional is true).

II
3. Arguments from Analogy have been used to answer two quite distinct, though not always clearly separated, questions. One is a philosophy-of-mind question: If Cartesianism is correct, how does anyone come to believe in other minds? A version of the Argument from Analogy looks as if it is needed to answer this question because of the Cartesian insistence that one has noninferential, direct access only to the contents of one's own mind and so one needs an inference-like
procedure to acquire a belief in other minds — indeed, to acquire a belief in anything external to states of one's own mind. A few philosophers have accepted a Cartesian view of the mind but rejected the need for the Argument from Analogy on the supposed grounds that one cannot conceive of one's own mind without being aware of other minds (Strawson [1963a] argues for something akin to this claim). However, present evidence from developmental psychology supports the unrevised Cartesian view that one conceives of one's own mental states first. For instance, children first apply mental predicates to themselves, and only about a half year later apply them to others.9 I intend to show that there is a plausible version of the Argument from Analogy that fits with both Cartesianism and current developmental psychology.

The second question Arguments from Analogy are used to answer is epistemological: Are we justified in believing in other minds? (Or: Can we know that there are other minds?) My emphasis will be on the philosophy-of-mind question, though in section VII, I briefly defend an epistemological version of the argument. For if Psychological Solipsism led to Ontological Solipsism, or even to a fairly strong form of skepticism, I would be more reluctant to defend it.10 However, Psychological Solipsism, while compatible with Ontological Solipsism, in no way entails it.11 My primary desire, though, is to show that even if Ontological Solipsism were true, one could, in principle, believe that there exist others who have mental states.12 If a philosophy-of-mind version of the Argument from Analogy cannot be established — if it cannot be explained from a Cartesian perspective how people ever come to believe in other minds in the first place — then Cartesianism itself would be deeply flawed.
9. See, for instance, Huttenlocher and Smiley 1990, 284, 288, 290; Wimmer and Perner 1983, 105; Olson et al. 1988, 7; Yaniv and Shatz 1988, 95; Perner 1988, 145; Wimmer et al. 1988, 173; Forguson and Gopnik 1988, 232-34; Flavell 1988, 245, 248; Chandler 1988, 397-98; Fischer and Bidell 1991, 228-29; Siegal and Beattie 1991, passim.
10. Ontological Solipsism is the view that all that exists are (oneself and) one's mental states. I suspect that associating Cartesianism with skepticism supplied one source of Cartesianism's current unpopularity.
11. I am always suspicious of arguments purporting to show that philosophical positions like Ontological Solipsism are impossible rather than merely false; and the fact that many arguments against the Argument from Analogy are intended to show Ontological Solipsism to be impossible led me to take a second look at these issues.
12. The switch from "other minds" to "others who have mental states" is explained in section VI below. Until the distinction becomes salient, I revert to writing the shorter expression; but the shorter is intended only as an abbreviation of the longer.
Since the topic is other minds, I should also say what I take a mind to be. In general, a mind is a thing capable of functioning in one of three ways that may or may not be so interrelated that no one way exists without the others: (1) Minds categorize the world, i.e., they bring instances under concepts; (2) minds make judgments, and judgments are intentional states, i.e., states having proposition-like contents; (3) minds have phenomenal states.13 To believe in other minds is to believe that there exist other things that categorize, have intentional states, or have phenomenal states.14 The view of the mind contained in what I am calling Scientific Cartesianism, unlike Descartes' own view, is committed to no position about the nature of that which thinks and, in particular, is not committed to dualism, though in principle compatible with dualism (while the larger theory, Scientific Cartesianism, is itself physicalist and anti-dualistic).

III
4. The structure of the Argument from Analogy is worthy of note. If one has direct, noninferential access only to one's own mental states, then a belief in other minds requires, for Cartesians, a kind of double inference. The belief in other minds is itself rooted in bodily behaviors; and so one must first infer (or theorize) the existence of bodies and then, on that basis, infer (or theorize) the existence of other minds. Every version of the Argument from Analogy presupposes that a belief in the existence of bodies has been established.

Russell (1948, 482-86), for instance, claims that we come to have an idea of our own mind because we have direct access to its contents. Once we have the concept of an external world, we discover that among external bodies there exists a "privileged" body. This body is privileged in the sense that our mental states are often affected when this body is affected, and other bodies affect our minds only in so far as this body is first affected by them. For instance, unless its eyes are open, we do not see other bodies; only if this body is harmed do we feel certain kinds of pain; and so on. Moreover, we discover that the
13. If emotions, moods, and affective states are not reducible to these three nor to each other (see, for instance, Morillo 1990), they should be included as well. For present purposes, I consider the commonsense list of mental states as the theoretically best list (but see the previous chapter).
14. The "or" is inclusive, but it is disjunction, and not conjunction, that I intend.
causal processes run the other way as well: we can affect other bodies only in so far as we can affect this body. Our mental states directly affect only this body. So we come to think of this "privileged" body as our body. Next, we come to notice that other bodies behave much as our own does. Given similar inputs, other bodies output behaviors similar to our own. In our own case, the causal sequence runs INPUT -> internal mental state -> OUTPUT. In regard to another body's behavior, one perceives only the sequence INPUT -> OUTPUT. Given our own experience, we come to believe that a mind intervenes between input and output in the cases of other bodies as well. The argument thus far comprises Russell's version of the philosophy-of-mind argument.

The need for an epistemological argument arises with the question of whether we are justified in believing that a mental state intervenes between the input and the output in these cases as it does in our own case. Russell maintains that we are justified and offers the following rule as rational: Whenever one has access to an event e and also to its cause c, and the cause in these circumstances is always the same, then in those circumstances where one has access to e but not to its cause, it is reasonable to conclude that c is the cause of e in those cases as well. This rule, the scope of which is perfectly universal, leads us to accept that there are other minds, i.e., other mental events, themselves "attached" to their privileged bodies, which intervene between input and output, as effect of the former and cause of the latter.

5. Russell's version of the Argument from Analogy, whether in its epistemological or philosophy-of-mind version (see Mill 1889 and Price 1938, among others, for closely related variants), is subject to several criticisms. For instance, Russell is apparently committed to an induction from a single case.15 However, most of the criticisms can be defused quite readily.
But two criticisms of the argument are more difficult to defuse. Each focuses on the philosophy-of-mind argument, which is seen (correctly) as underlying the epistemological one. The idea is that if the former is mistaken and if it grounds the latter, then the latter is unfounded, if not also mistaken. Both criticisms take the
15. "If I say to myself that it is only from my own case that I know what the word 'pain' means - must I not say the same of other people too? And how can I generalize the one case so irresponsibly?" (Wittgenstein 1953, 100e, §293).
following form: To decide whether it is rational to believe in other minds, one must first understand the sentence, "There are other minds" (or its equivalents). Consider, e.g., the sentence: "There are other pains" (i.e., pains that only others feel). To understand this sentence, one must grasp the concepts involved: ⌜exist⌝, ⌜other⌝, ⌜pain⌝. Each criticism contends that Russell's Cartesian theory of mind cannot account for how one acquires the concept ⌜pain⌝.

6. The first criticism: If the only states Russell is aware of are his own mental states, then Russell should find it unimaginable that these states could be "owned" by another. Yet, surely, the only concepts of inner states that Russell acquires by experiencing his own inner states are not concepts like ⌜pain⌝, ⌜thought⌝, ⌜hope⌝, and so forth, but only concepts like ⌜Russell-pain⌝, ⌜Russell-thought⌝, ⌜Russell-hope⌝, and so on. Just as one could have a concept of the flower, Jack-in-the-pulpit, without having a concept of a pulpit, Russell could have a concept of Russell-pain without having a concept of pain. To make the Argument from Analogy work, Russell needs to explain how from merely having the fused concept ⌜Russell-pain⌝ he can acquire the concept ⌜pain⌝. Possession of the latter concept is necessary for understanding the sentence, "There are other pains." (It is incoherent that anyone else could have Russell-pains.) As Wittgenstein (1953, 101e, §302) says, "If one has to imagine someone else's pain on the model of one's own, this is none too easy a thing to do: for I have to imagine pain which I do not feel on the model of the pain which I do feel." One might conclude that since we do possess the concept ⌜pain⌝ and since Russell cannot account for his possessing this concept in addition to ⌜Russell-pain⌝, the Cartesian theory of mind, which leads Russell to this impasse, is itself deeply flawed.

7.
The second criticism maintains that Russell could not even acquire the concept ⌜Russell-pain⌝. This criticism is the private-language argument (Wittgenstein 1953; Malcolm 1963).16 Its underlying claim is that possessing a concept presupposes an ability to make two crucial distinctions. One must be able to distinguish having the concept from merely thinking one has the concept, and one must be
16. Any exegesis of Wittgenstein's views is controversial. I cannot swear that my interpretation is correct. I do try to support it with text. But even if the interpretation is wrong, I find the resulting position of sufficient interest to make it the focus of this chapter, for that position, Wittgenstein's or not, threatens Scientific Cartesianism.
able to distinguish having the concept but making a mistake as to whether something falls under the concept from not having the concept at all: "And hence also 'obeying a rule' is a practice. And to think one is obeying a rule is not to obey a rule. Hence it is not possible to obey a rule 'privately': otherwise thinking one was obeying a rule would be the same thing as obeying it" (Wittgenstein 1953, 81e, §202).17 The private-language argument means to show that mental-state concepts cannot be acquired simply by having mental states in a private experience (epistemically accessible to oneself only), because the distinctions necessary for having concepts would not be realized in such cases.

If one acquired the concept of pain merely from experiencing pain, what would constitute the boundaries of the concept? That is, what would determine whether a state correctly fell under the concept or not? To borrow Wittgenstein's (1953, 92e, §258) example, suppose I experienced a state one day and called it "pain." Suppose further that a few days later I experienced another state and also called it "pain." What criteria determine whether the second use of the word expresses the same concept as the first? For both states to instantiate the same concept, they have to be relevantly similar to one another. One might reply that one just remembered that the second experience is relevantly similar to the first, and so of the same type. And since one does remember correctly, the same concept is used. But the difficulty is not with memory. It is with the criteria for "relevant similarity." What constitutes one state being relevantly similar to another?
In the case of pain, this question is salient, especially if we were to take pain to be a qualitative state.18 We need only be reminded of the noticeably different-feeling qualitative states that are experienced by a person in pain (when having a headache, being given a hypodermic injection, burning oneself, having a sensitive tooth probed, and so on). Why are these included in the same category? If no public criteria determine the use of the concept, it looks as if ⌜pain⌝ has as its instantiations whatever one says it has. Relevant similarity is determined merely by whatever
17. Also cf. 94e, §269: "Let us remember that there are certain criteria in a man's behaviour for the fact that he does not understand a word: that it means nothing to him, that he can do nothing with it. And criteria for his 'thinking he understands', attaching some meaning to the word, but not the right one. And, lastly, criteria for his understanding the word right."
18. But we should not take pain to be a qualitative state simpliciter (see chapter 3).
one asserts to be or not to be relevantly similar. But in that case no distinction would exist between possessing a concept and thinking one possesses it. The concept would be "determined" willy-nilly. "Relevant similarity" in such a case just becomes whatever one says it is. Yet if a toddler learning a language says "dog" when pointing to a dog, a top, a desk, the sky, a shoe, and so on, the reasonable conclusion would surely be that the utterance "dog" represented no concept at all to the toddler. But if whatever one asserts to fall under the concept ⌜pain⌝ is allowed to fall under it - and merely for that reason - surely that liberality of selection precludes the utterance, "pain," from expressing a genuine concept, just as it does for the toddler's utterance, "dog." Nor can the other necessary distinction be made: if on reflection one decides that a use of "pain" was an error, what would make it an error - other than one's decision to count it as one? Again, if it is "whatever one says goes," the conditions are far too liberal to ground concept formation and possession.19

The only possibility for constraining the notion of "relevant similarity" is in a community of conceivers. Only in a community, where being corrected by others is possible, can there exist the conditions required for the needed stability in the notion of "relevant similarity." Only where public criteria exist can anyone form and possess concepts. Here are Wittgenstein's own words on this issue:

"Then can whatever I do be brought into accord with the rule?" Let me ask this: what has the expression of a rule — say a sign-post — got to do with my actions? What sort of connexion is there here? — Well, perhaps this one: I have been trained to react to this sign in a particular way, and now I do so react to it.
19. "Let us imagine the following case. I want to keep a diary about the recurrence of a certain sensation. To this end I associate it with the sign 'S' and write this sign in a calendar for every day on which I have the sensation — I will remark first of all that a definition of the sign cannot be formulated. - But still I can give myself a kind of ostensive definition. How? Can I point to the sensation? Not in the ordinary sense. But I speak, or write the sign down, and at the same time I concentrate my attention on the sensation — and, as it were, point inwardly. — But what is this ceremony for? for that is all it seems to be! A definition surely serves to establish the meaning of a sign. - Well, that is done precisely by the concentrating of my attention; for in this way I impress on myself the connexion between the sign and the sensation. — But 'I impress it on myself' can only mean: this process brings it about that I remember the connexion right in the future. But in the present case I have no criterion of correctness. One would like to say: whatever is going to seem right to me is right. And that only means that here we can't talk about 'right'" (Wittgenstein 1953, 92e, §258).
But that is only to give a causal connexion; to tell how it has come about that we now go by the sign-post; not what this going-by-the-sign really consists in. On the contrary; I have further indicated that a person goes by a sign-post only in so far as there exists a regular use of sign-posts, a custom. (Wittgenstein 1953, 80e, §198)20

Thus, possessing the concepts needed for being able to understand the sentence, "There are other minds," already presupposes the existence of other minds; so one does not even need the epistemological version of the Argument from Analogy. In fact, one should conclude that something is fundamentally wrong with a view of mind that would make concept formation an inside-out affair, rather than the outside-in affair it must be. The epistemological version of the Argument from Analogy is redundant. The very possibility of stating the epistemological problem of other minds makes the problem dissolve. The conditions for being able to say meaningfully, "There might not be other minds," entail that this conjecture is false.21

IV
8. Since I believe a Cartesian view of mind to be correct, the conclusions drawn in each of these last two arguments must, if false, show either that their premises are mistaken or that the conclusions themselves are illegitimately drawn, the actual — valid — conclusions being
20. Compare also the following quotes: "Is what we call 'obeying a rule' something that it would be possible for only one man to do, and to do only once in his life?" (80e, §199); "And hence also 'obeying a rule' is a practice [my italics]. And to think one is obeying a rule is not to obey a rule. Hence it is not possible to obey a rule 'privately'; otherwise thinking one was obeying a rule would be the same thing as obeying it" (81e, §202); "The word 'agreement' and the word 'rule' are related to one another, they are cousins. If I teach anyone the use of the one word, he learns the use of the other with it" (86e, §224); "'So you are saying that human agreement decides what is true and what is false?' - It is what human beings say that is true and false; and they agree in the language they use. That is not agreement in opinions but in form of life" (88e, §241); "I could not apply any rules to a private transition from what is seen to words. Here the rules really would hang in the air; for the institution [my italics] of their use is lacking" (117e, §380; also cf. §§206, 208, 234, 235, and 242).
21. See Malcolm (1963), as well as Wittgenstein, for this argument. Malcolm applies the private-language argument directly to the epistemological problem of other minds.
less threatening to a Cartesian theory of mind. The latter disjunct is nearer the truth. Even if the private-language argument succeeds at showing that more than experiencing phenomenal states is needed to possess the concept ⌜pain⌝ (as I think it does [see chapter 3]), it is not yet demonstrated that the "more" is not to be parsed in terms of relations to other mental states, such as judgments, beliefs, desires, affects, and so forth. A Wittgensteinian might object that including other mental states begs the question, since these other mental states, in having contents, presuppose the possession of concepts, which is exactly what is at stake. My plan, in defense of Cartesianism, is not only to attack the private-language argument directly but also to argue that Cartesianism is no less rational than its rivals, and so to conclude that in the present circumstances there is no reason to believe that the conclusions drawn from the private-language argument by Malcolm and Wittgenstein are justified.

9. Data from perceptual science constitute the beginning point for supporting a Cartesian theory of mind because any theory of mind should be compatible with these data. Consider the fact that we have two eyes, each of which receives different information. One just needs to look at one's finger held in front of one's face while closing one eye at a time to agree that this claim is true. Yet, when we see binocularly, we see only one view, which is neither of the views seen with one eye closed. Or consider the fact that the retina contains approximately 120 million rods and 7 million cones, with most of the latter concentrated near the fovea, the central focusing area. Yet the optic nerve, which carries the information from the retinal cells into the other visual processing areas of the brain, contains only one million fibers.
Apparently, much processing and distilling of the luminal information on the retina must already take place before that information ever leaves the eye. An animal, such as a horse, with laterally (rather than frontally) placed eyes, has two foveae in each eye. Surely this fact suggests that no one-to-one correspondence exists between what is foveal and what is seen.
We can be made aware of the fact that a blind spot exists where the optic nerve leaves the eye. If an object of the right size is held in front
of the eye at the point of exit, the object will not be seen, even though it would be visible if held anywhere else in front of the eye. Yet, we do not experience a hole (or, since we possess two blind spots, two holes). Nor does one eye fill in the missing information for the other, since even with one eye closed we do not experience the blind spot. The design of the eye is especially odd in that for light to reach the retina it has to traverse the entire thickness of the eye, for the retina is at the back of the eye. Moreover, the eye itself is crisscrossed with blood vessels and other internal structures. Yet, we do not see these blood vessels or other internal structures.22

Or consider the phi phenomenon discussed in the previous chapter. If there are two points, A and B, and first a light is flashed and extinguished at A, and then a light is flashed and extinguished at B, in most cases that is exactly what we claim to see. However, if the temporal interval between flashes and the spatial interval between A and B are calibrated in just the right way, we instead take ourselves to perceive a single continuous flash going from A to B. For instance, we might describe ourselves as seeing the light at A, watching as it moves through the midpoint C, between A and B, then seeing it approach B from C, and finally arrive at B. But in order to see the light at C before it arrives at B, we must have visually taken in the information that the light has already flashed at B.

If two objects of similar shape but of different sizes are held at different distances from the eyes, the retinal images can be made identical. Yet, even in those circumstances we are generally very good at discriminating the larger from the smaller; indeed, we see it to be larger, though farther away. On the other hand, the very same object can sometimes take exactly the same retinal space yet look larger or smaller than on another occasion (as in the moon illusion, for instance).
Similar to these facts about size perception are ones concerning motion perception. If one moves a pencil in front of one's face, one sees a moving pencil. If, instead, one holds the pencil steady but moves one's head back and forth at a like speed while keeping one's eyes focused on the pencil, one perceives the pencil to be still. Yet, the series of retinal images can be exactly similar in the two cases. Moreover, our eyes themselves are in nearly constant motion. Even
22. Though under some unusual perceptual conditions, as when an ophthalmologist shines a strong light in one's eye, one can see the blood vessels.
when we are attending to an object itself at rest, our eyes make small, jerky movements called "saccades." But we perceive the attended object as still, not as being jerkily in motion.

If one considers color — and no one sees a world without color (though a very few apparently see a world without hue) — there is good reason to think that vision is representational. In fact, to whatever degree there is reason to think that hue does not exist in the external world (Hardin 1988; Boghossian and Velleman 1991), there is reason to think that we represent the world as having a property that it does not have.

Given such facts, which constitute only a small subset of the relevant sorts, visual perception is apparently both underdetermined and overdetermined by the luminal effects on our retinas. For many, these facts make it prima facie plausible that visual perception is representational. That is, it is plausible that we passively or actively construct some sort of structure, either phenomenal or proposition-like, out of this diversity of information; and this structure is both a richer and poorer "picture" of the world than the incoming information that goes into its construction. That is, this perceptual structure is a representation. That we make perceptual error would be in part explained by this representational effort and by the fact that the end result of this effort, the percept, is a representation. Certainly, perception's being representational is at least compatible with these facts. Since similar facts can be assembled for each of the other senses, we can conclude that it is probable that all perception is representational, if visual perception is.23 And if concept formation based on perception precedes all other concept formation, and if perception is representational, and if Cartesianism provides the best account of perceptual representation, then it would be reasonable to believe that concept formation itself must at least initially be an inside-out affair.
The sequel constitutes a defense of the antecedents of each of these conditionals against objections to them, and it also contains arguments in favor of the truth of these same antecedents.
10. The conclusion drawn from the private-language argument is Anti-Individualist; but Anti-Individualism and other Externalist alternatives, although somewhat compatible with scientific evidence, are no better — I would say, worse — theories than Cartesianism. What I intend to show in this section is (1) that Cartesianism is not eliminable on a priori grounds, although most Externalist arguments, including the private-language argument, are taken to provide such grounds, and (2) that Externalism has plenty of difficulties of its own. If I can establish these points — on the basis of section IV and the arguments in this section — then in section VI I will be able to show that Cartesianism can account for our belief in other minds, and in ways fitting the evidence from developmental psychology. In fact, the fit of Cartesianism with results from developmental psychology provides yet a further reason to think that Cartesianism is the best theory of mind; for none of the alternatives fits as well with these empirical data. What follows is a series of arguments, some against Anti-Individualism, some against other forms of Externalism, and some against Externalism of any form. While perhaps no one argument is sufficient to convert the reader to Cartesianism, the weight of them all is sufficient to warrant believing that Cartesianism is no less reasonable than its Externalist alternatives — and maybe a good deal more than that. Each argument could be expanded and taken in greater depth; but since a part of this work has been done elsewhere by others (Patterson 1991; Churchland 1989; Butler In Preparation), I will say only enough for the reader to feel the force of the arguments.24

11. It is worth our while to begin by taking another look at the private-language argument.25 The argument begins with a presumption that concept possession is a rule-governed activity: Bringing instances under concepts is to apply a rule — and so demands criteria of correctness.

23. Some psychologists, notably Gibson (1966, 1979), are aware of these facts but deny the representational nature of perception. But see the discussion of Gibson's view in chapter 4.
As we have seen, Wittgenstein's conclusion is that only in a community of conceivers can criteria exist that distinguish actually possessing a concept from merely thinking one possesses it — i.e., only in a community can there be criteria for correctly following a rule, as opposed merely to thinking one is, or as opposed just to appearing to follow a rule, or as opposed just to behaving in accordance with a rule (as Venus goes around the sun in accordance with a rule). Sensations like pain cause the greatest doubts about Wittgenstein's community claim because one thinks: "Surely I could distinguish pains, and thereby come to possess a concept of pain, even if I lived outside a community of conceivers." Wittgenstein then introduces the private-language argument, building it on top of previous arguments about rule-following, in order to establish that the general truths about rule-following do apply even in cases of sensations like pain. But consider the examples of mental-state concepts Wittgenstein uses to support his conclusion: ⌜understanding⌝, ⌜reading⌝, and ⌜pain⌝ itself. All, in fact, pick out quite odd "mental states." None is paradigmatic. "Understanding" and "reading" are "achievement verbs" (Ryle 1949, 130). It is because they are achievement verbs that Wittgenstein's Anti-Individualism — or any Externalism — seems so plausible for them. As Wittgenstein realizes, a difficulty for his view is that we sometimes seem suddenly to understand a concept (equivalently for Wittgenstein: understand the meaning of a word) — ⌜red⌝, for example. When we say we suddenly understand, it seems to us as if we are reporting a mental state that has suddenly popped up. But Wittgenstein argues that we could have the same mental occurrence and not really understand. Later events may make others dispute our claim to understand, or even cause us to withdraw our claim to have understood (1953, 53e, §§138-155). This argument is plausible, but it is plausible exactly because "understand" is an achievement verb. Replace ⌜understand⌝ with the mental component of ⌜understand⌝, something like ⌜occurrently believe⌝, and the argument loses its plausibility. And, of course, similarly for his ⌜reading⌝ example.

24. The best detailed criticisms, in my opinion, of recent defenses of Externalism are laid out in Butler In Preparation.

25. Ascribing this argument — or any argument — to Wittgenstein is controversial since he claims only to be "assembling reminders," not to be presenting arguments or theories at all (1953, 50e, §§127-28). But I am quite confident that despite his protestations he both argues and theorizes. Richard Rorty, who is perhaps Wittgenstein's most faithful and most interesting successor, is well aware of this tension, both in Wittgenstein's work and in his own (Rorty 1982, "Introduction").
Moreover, pain is also a quite odd mental state, and not a paradigm of mental (or conscious) states. Pain (see chapter 3) is one of the few mental states (along with bodily pleasures) that require that they be apperceived (because pains consist of a phenomenal state and an apperceptive evaluation of that phenomenal state). All other mental states can exist without being apperceived — and so, in that sense, unconsciously. Because pain is an odd mental state, we should not expect that claims about it, even if true, generalize to other mental states. One motivation behind the Anti-Individualism of the private-language argument is to locate the normativity in our concepts: Rules can be followed correctly or can fail to be followed. And the Wittgensteinian claim is, first, that Internalism cannot account for this normativity and, second, that it can be found only in a community of conceivers, where correction by others is possible. However, a community of conceivers does not seem to be the only possible locus of normativity. If one's goal in conceptualizing — creating categories — is a kind of theorizing that will allow one to get about successfully in the world, then surely the world itself, in interaction with that organism, can supply the needed normativity. If one categorizes a car as a feather, the world will devastatingly correct one's act of "bringing under a rule." That is, for an Individualist Externalist, there seems to be available all the normativity that is needed. But what about for the Cartesian? The obvious suggestion is to replace "world" in the paragraph above by "future experiences." If we take prediction of future experiences to be of premium value (for avoiding pain, and the like), then future experiences (undoubtedly, in fact, caused in part by the world's being the way it is) will supply any needed normativity — even if there were no world out there. So Psychological Solipsists can also find all the normativity one needs. Actually, though, this reply is beside the point, because the whole normativity issue is a red herring, the result of conflating two importantly different sorts of tasks concerning rule-governed activities: making rules and following rules. Wittgenstein, and others since, run these two tasks together: If conceiving be thought of as a rule-governed activity, then concept formation must be making the rule (theorizing), while assigning tokens to types, bringing instances under concepts, must be rule following (applying the rules of the theory). Normativity is relevant only to the second activity, not to the first. No rules exist for rule making itself. Lewis Carroll said that.

12.
Besides these failings in the private-language argument itself, there are other reasons to doubt the argument's Anti-Individualist conclusion, or any Anti-Individualist conclusion. One is that nonhuman animals seem to make categorizations, to have concepts; but it is difficult to believe that nonhuman animals require a community of conceivers in order for them to conceive at all. Of course, one might claim that nonhuman animals have the concepts they have only because there are public criteria for their concepts set by the community of nonhuman animals to which they belong. (Pigs have pig-community concepts.) Or one might hold that nonhuman animals do not have concepts at all. They merely behave in ways that allow us to describe their behaviors according to our concepts, to describe their behavior as if they had concepts. Either view seems unlikely enough: the former, because many animals are by and large on their own virtually from birth and never reside in a community of their peers; the latter, on evolutionary, as well as behavioral, grounds — it would require too great a saltation for concepts to arise only with the emergence of human beings.26 If Individualism is true of nonhuman animals, it is likely (again, on evolutionary grounds) to be right about us as well. Since a defense of Individualism rooted in developmental psychology has been made elsewhere (Patterson 1991), let me only sketch out such an argument. Studies done on very young children who make what appears to be a living/nonliving distinction seem to show that these children systematically apply ⌜alive⌝ to objects adults do not and refrain from applying it to other objects adults do apply it to (plants, for instance). Granting that the word "alive" is a word in our English-speaking community and granting that we are trying to teach children the use of this word, how are we to describe the concept they represent by this word? There are several possible answers. One is that these children have no concept at all. But given the systematicity of their use of the word and given that we adults can recognize this systematicity — even if it is other than our own — that answer is highly suspect. So we are left with two others: either the children share our concept ⌜alive⌝ but make numerous errors and have many false beliefs or the children have a different, but overlapping, concept and have mostly true beliefs.
Given the systematicity of their word-world mapping and given the fact that we recognize and understand this mapping, it seems perverse to opt for the first alternative.27 The criterion for categorizations' being genuine categorizations seems to be met in the case of the children's use of ⌜alive⌝.28 It is important to remember what science shows us: our concepts are grossly underdetermined by the information reaching our sensory organs. The information we take in from the world is at best taken in at the level of token things and token properties. Yet we bring these tokens under types (concepts), under universals. Conceiving, then, is a kind of theorizing: by bringing instances under "universals," we are claiming a law-like structure in things, allowing those things to be better dealt with, understood, and predicted. The children's use of ⌜alive⌝, when considered as different from ours, can be seen as a theory, also different from ours. Viewing their concept in this way seems to fit the psychology better than the Anti-Individualist way. Children are, then, best understood to possess concepts independent of their linguistic community. And if they possess concepts for which they have words but those concepts are independent of the ones picked out by the community's use of the same-sounding words, it is not unreasonable to believe that children can also possess concepts before they have ordinary-language words for them at all. Surely it makes more sense to believe that creatures think (and thus, possess concepts) before they speak, rather than vice versa. Otherwise, we are back to the claim that nonhuman animals (and human infants) have no thoughts at all, a position earlier rejected. Whether theories are successful is surely a matter of the way the world is. But their being successful or not is a different matter from what their content is, and these two issues need to be kept separate. The truth is, communities of conceivers cannot correct us unless the members of the community themselves individually already possess the concept.

13. Turning to other Externalist claims, it is important to recall from the previous section just how deeply underdetermined perception appears to be by the outside world. Many Externalist accounts of concept content are formulated in ways that presume an easy, straightforward, associative relation between things and concepts.

26. The question of whether nonhuman animals have concepts is further discussed in §17.

27. Here the mapping is evidence for the meaning of the toddler's concept; the mapping is not constitutive of that content.

28. For this case, see Carey 1985, 165; also, for further evidence in favor of this claim, see Keil 1981, 209; Fischer and Bidell 1991, 210; Carey 1991, passim.
For instance: Horses exist; we sense them; and so we acquire the concept ⌜horse⌝. But the actual relation between things and concepts is much more complicated than made to seem in these accounts. We have not taken seriously enough just how different physics tells us the world is from our commonsense conception of it. Evidence for these complications comes not only from the neuroanatomical and neurophysiological data discussed above, but also from data in developmental psychology relating to concept formation (see Keil 1981, 281; Gallistel et al. 1991, passim, esp. 28-29; Keil 1991, passim, esp. 245-46; Gelman 1991, passim, esp. 314).29 As Keil (1991, 237) points out, features of the external world are often enhanced or ignored relative to the content of our concepts: enhancement and ignoring involve cases where concepts have aspects not present in the external world or where actual, salient external-world correlations are ignored in our concepts.30 Given the psychological data, no one can doubt that there is a large amount of internal input to the content of our concepts. And the more our concepts are shown to be removed from the way the world is, the more reason we have to doubt the idea that adverting to the external world is necessary to a determination of conceptual content. It is best to think that the world provides us the referents and tokens to which we ascribe our concepts, but not, in as direct a way, the conceptual content. If concepts require adverting to the external world for their content, then perhaps the right conclusion is that we have virtually no kind concepts. For our scientific theories tell us the world is made up of "particles" with a nature most of us cannot even imagine, that there is mostly "empty space" between these "particles," that these "particles" are in rapid movement. Yet, our concept of an object like a tree is, of course, not like that at all. We take the tree to have no empty space within it, that nothing within it is moving, and so on. Surely, our commonsense notion is a theory, a useful one, about the external world, just as the scientific one is. I have purposely chosen ⌜tree⌝ for another reason: if we take science to determine what counts as a natural kind, then it is a fact that no science includes tree as a natural kind. If tree is not a natural kind, then how does adverting to the world, as it is in itself, help determine our concept? This case is not like one where we make mistakes about a natural kind that our scientific theories say does exist — gold, for instance. In the case of trees, there is no such natural kind. So how did we come to possess the concept as a natural-kind concept? Or does the concept ⌜tree⌝ really pick out a natural kind — we know not what — that does exist, only we are making bad mistakes about it? Or did we never mean anything by ⌜tree⌝? I find any of these alternatives hard to believe. Once more, categorizing tokens is a matter of theory; and common sense itself involves such theorizing. There seems to be no way the world is (unless at the level of particle physics) that requires our categorizing it in just the way we do, and in no other (see Churchland 1989 for a similar view).31 While Anti-Individualists may not be much moved by these considerations — indeed, they may use them to support their own views — other Externalists should be disturbed by them. But given the previous arguments against Anti-Individualism, all forms of Externalism should now be suspect.

29. One might claim that here, and throughout the chapter, I have selected only those psychological data and views congenial to my position. There is probably some truth in that charge. But the point is that these views and the data they are based on are prominent in psychology, even prevailing ones. My aim in this and the next chapter is to show that Psychological Solipsism is a reasonable position that fits with mainstream developmental — and perceptual — psychology. If psychological views other than those I rely on win out in the end, that will be a matter of science, not philosophy. If Cartesianism goes out with these "losing" theories, then Cartesianism, too, will fail for scientific, rather than philosophical, reasons. And that is all I ask the reader to allow.

30. "Concepts cannot be represented merely in terms of probabilistic distributions of features or as passive reflections of feature frequencies and correlations in the world. Some of the most compelling demonstrations involve illusory correlations where prior theories cause people to create or enhance correlations that are central to their theories and ignore or discount equally strong correlations that are more peripheral to that theory . . . There are many other problems with mere probabilistic models, such as demonstrations that equally typical (i.e., equally probabilistically associated) features may be dramatically different in how they affect judgments about the goodness of exemplars. Thus, Medin and Shoben (1988) have shown that, although curvedness is judged to be equally typical of bananas and boomerangs, straight boomerangs are considered to be much more anomalous members of the boomerang family than straight bananas in their family, because curvedness is seen as theoretically more central, that is, causally more critical to the 'essence' of boomerangs. This finding is also further evidence against real-world correlations exclusively driving concept structure because, empirically, there are, in fact, some straight boomerangs and no straight bananas" (Keil 1991, 237). See also Keil 1992.

14. Another reason to doubt Anti-Individualism, or any sort of Externalism, is what I call the natural-clone case. Consider my natural clone, NC. NC just comes into existence by a rare combination of natural forces. NC is not modeled on me; but by a stroke of nature, he is neuron for neuron, molecule for molecule, in one-to-one correspondence with me at this moment.
Would NC have thoughts that he is in front of a word-processor just like this one, composing a book just like this one? Would NC have thoughts at all? I am torn in several directions about how to answer these questions. But one of my "sides" is struck by the following apparent possibility: I am not Norton Nelkin. I am NC. If so, I have already answered these questions for myself; and nothing can change my mind: of course, I have thoughts; and they are about composing a book at a word-processor (even if they are illusory thoughts). Moreover, these intentional states do not acquire their intentionality from any special way I obtained them, nor is their content dependent on any future behaviors I might undertake. Most especially, these thoughts are not made meaningless because I have never lived in a community of conceivers. I understand perfectly well what I am thinking (even granting that one is not always infallible about one's own mental states). This thought experiment does not constitute an argument against Externalism. To borrow Dennett's metaphor, it is only an intuition pump. But some pumps are notoriously difficult to turn off. Moreover, if the natural-clone case is "merely" an intuition pump, then so are the "arguments" most recently used against Cartesianism: Putnam's (1975, 1981) twin-earth example and Burge's (1979, 1986) arthritis example. These have whatever status the natural-clone example has. If one is a "mere" intuition pump, then all are mere intuition pumps. If any is a genuine argument, then all are. I don't want to defend the natural-clone case as being an argument, but I do want to point out that our being moved by any of these examples is going to be for theoretical and conceptual reasons beyond the examples themselves. Twin-earth and arthritis cases are no more arguments, in themselves, against Cartesianism than the natural-clone case is an argument against Externalism. How do we have concepts of objects at all? That is an old question. Descartes (1642/1986, 20-23) raises (and attempts to answer) it with his Second Meditation discussion of the wax (and Kant [1787/1961] focuses his entire metaphysics on answering that question). Besides the fact that different sense modalities are to be integrated, there is good evidence that even within the single sense modality, vision, spatial properties such as shape and location are processed through different channels from each other (Kosslyn 1987; Kosslyn et al. 1990; Van Essen 1985; Van Essen and Maunsell 1983; Zeki 1992; Stoerig and Brandt 1993). Given that we do not in any simple way perceptually process an external object whole but instead receive constantly changing sensory input, from different sense modalities, and even from different channels within the same modality, how do we perceive, and conceive of, things at all? Yet we do. Answers to such questions are revealing.32 We know, for instance, that certain cells respond to "edges," and "edges" in certain spatial orientations, some only to vertical "edges" and some only to horizontal "edges," for instance. But which "edges" demarcate an object and which do not? For example, if one looks at an open rolltop desk, one sees many edges; but not all the edges define an object. Most are internal to the desk itself, not defining of objects separate from the desk. So why do we see a desk instead of many desk parts? It is likely that a causal theory about how the parts operate together plays a role (see Keil 1991). And causal theories are theories that we apparently incorporate into our very perceptual discriminations and resulting perceptual concepts. So the answer points to the theorizer as much as, or more than, to the world. Evolution may explain why human theorizers are the way they are, but the content (expressed in English as "This is an object") seems to be determined wholly inside the theorizer. And the word "edges" used in these psychophysiological explanations has been placed within shudder quotes for good reason. If "edge" means unbroken boundary line, then physics tells us there are no such things. ⌜Edge⌝ itself is a concept, a theory of the world. Objects, science tells us (contrary to our commonsense beliefs), interpenetrate. There are no sharp boundaries.33 And why do we have a concept of the world as external at all? Externalists are silent on that question. Given the underdetermination of perception by the world, the answer cannot be simply because a world exists.34 As Descartes claimed with the wax example, we seem to "form" objects by a synthetic (and nonsensory) act of the mind. But why are those objects taken to be external, and external to what? Externalists of virtually all shades are also silent on this question. There must be a way to distinguish oneself from the world and from others in the world.

31. To be a Realist about categorizations (see footnote 6), one need hold only that the world supports our categorizations, such that they can be right or wrong, not that only one way of categorizing it is correct.

32. An evolutionary answer to this question is beside the point in the present context. It might tell us why we perceive things whole, but it will not answer the question of how we do. That is, it will not distinguish between Internalism and Externalism.
But if all the epistemic evidence is between the walls of one's skull — as the scientific evidence suggests — how do we do it? No one has yet given an adequate account of how it is done. Anti-Cartesians just assume the distinctions. They seem to take our concepts of the external world and of our self as themselves unproblematic. But accounting for these distinctions is exactly the raison d'être of the Argument from Analogy, at least in its philosophy-of-mind version. That is, Scientific Cartesianism has the virtue of being able to bring under a single theory both concept formation and concept content, and to explain the self/world distinction. I propose to sketch such a Cartesian theory.

33. Compare the fact that we have a geometric concept ⌜square⌝ even though most probably no squares exist in the external world.

34. As a Realist about the external world, I think that in the sense that the world is a part-cause of our coming to have the concepts we have, such an answer is correct. But given the same effects, even had there been different causes, the contents of our concepts would be the same. See the next chapter.

Before moving on to that account, first consider a common line of thought that should be rejected. It goes like this: (1) Sentences and thoughts have meaning. (2) Sentences cannot have meanings intrinsically because different sounds or marks can have the same meaning and the same sounds or marks can have different meanings. (3) But thoughts cannot have intrinsic intentionality either because neurophysiological processes, which thoughts are, cannot have meanings in themselves. So (4) both sentences and neural processes (thoughts) have meanings only by being mapped onto things that do have meanings (propositions or the like) or are bequeathed meaning by being mapped onto the world (for a version of this argument, see Stich 1990, chapter 5). But why should one accept the third premise? Like the nondissociation theory discussed in chapter 6, this premise is most often simply assumed, not argued for. And this claim sounds suspiciously like similar, now rejected, ones: Mere physical things cannot move themselves, and mere physical things cannot be alive. At this point of neuroscientific understanding, we just have no idea whether this third premise is true or not. It is worth noting that mapping claims are usually illustrated with one's trying to understand another organism's neural states. We don't map our own prior to understanding their contents. How could we?
When it comes to first-person understanding of contents, the conceiver seems to be in a privileged position, and Externalism has an extremely difficult time accounting for this privileged position (see Butler In Preparation, especially chapter 1, for an excellent critique of Externalist attempts to explain this privileged position). (And one might add, in regard to the other alternative, that the means by which abstract objects like propositions can possess meaning are surely no less mysterious than the means by which brain states can.) Despite the fact that this third premise has usually been merely assumed, or urged merely on the basis of someone's intuitions, it does seem to be the flywheel in the engine powering the attempt to collapse the important idea of mapping meanings onto the world into the much more suspect idea that these mappings determine the meanings.

15. We can now sum up the results of this section. Many questions have been raised for which Externalists of all stripes have no good answers. The difficulties for Externalism are at least as great as — I believe a good deal greater than — those which face Cartesianism. All positions have difficulty in fully explaining concept formation and content. But the problems for Cartesianism are no worse than those for its rivals. If anything, the world's underdetermination of our perception and of our perceptual concepts better supports Cartesianism. But if Cartesianism cannot account for the belief in other minds, that would be a serious blow against its plausibility. What follows are sketches of two Cartesian accounts of the origin of the belief in other minds. Both share a similar basis. Their major differences are over timing in regard to concept acquisition and over the nature of concepts. Not enough data are in to settle the issue between these two views, but either supports the Cartesian drift of this book.

VI
16. Both accounts to follow are highly speculative. Although the data are few, the accounts are consistent with them. If the accounts are at least as plausible as their anti-Cartesian rivals (given the evidence we do have), then the hope is that either account's influence will be that, not of a scientific theory, but of a "proto-scientific" theory (a philosophical one). That is, the hope is that each account provides a way of looking at the data that will spawn appropriate psychological investigation in order to see if it cannot be turned into a fuller, more clearly delineated, empirically grounded scientific theory. I have called the theory of mind underlying the following accounts "Cartesian"; but I also could have called it "Kantian," or as James Russell (1989) does, "Piagetian."35

35. Russell's article influenced my thinking greatly. Piaget owes his own inspiration to Kant, and I think it is fair to say that Kant owes much of his inspiration to Descartes. In saying the latter, I do not mean to imply that Kant did not go beyond Descartes. In my view, Kant made the greatest leap forward any philosopher has ever made. I call the view "Cartesian" because Descartes turned the philosophical world upside down with it when he presented it (see Nelkin In Preparation-a). Of course, neither Descartes, nor Kant, nor Piaget (nor James Russell, for that matter) would agree with all I say.

The first account: In the beginning, there is the newborn infant.36 The infant has experiences and is second-order aware of (some of) these experiences (apperceives these experiences), although these experiences compose an uncategorized stream for the infant. By saying the stream is uncategorized, I mean that the stream does not wear its categories on its surface: that the stream is initially undifferentiated for the infant is perfectly compatible with the stream's possessing a great deal of structure that facilitates later differentiation and categorization. Nevertheless, the categories of experience are not transparent, even to one who can apperceive his or her own experiential stream. Yet, it would be too much to say that the infant is aware only of token experiences. That the stream can be divided into tokens, I want to suggest, is itself a learned division. The stream precedes both tokens and types. In earlier chapters, I stressed that apperception is a kind of judgment. But judgments seem to presuppose concepts. So the question arises: In what sense of "apperception" does the infant apperceive its own unbroken stream of experience, since, by hypothesis, the infant lacks concepts and so lacks the ability to judge? As we will see, I do not think the infant totally lacks concepts: there are at least three innate ones. But even more to the point, the question is misdirected. All apperception, whether in these primitive cases or in more sophisticated ones, is primarily a kind of demonstrative, existential judgment of a form similar to "That is occurring" (though, of course, not in English or any other natural language). Insofar as apperceptive judgments are fuller judgments, with a compositional content, they inherit that fuller content from the states they pick out. Thus, it is a C1 state that is the primary carrier of content.
Apperception (C2) inherits most of its content from the C1 state (though apperception can add to that content, for instance, in specifying the source of the C1 content, as in a judgment similar to the ordinary-language one, "I am seeing the clock on Parliament Tower"). But this reply only refocuses the original question. For it can now be seen that that question's worry is not really over apperception itself but over the state that is picked out in these primitive occurrences by
36. I say "newborn," but it would be wrong to be too dogmatic on this issue: distinctions I speak of as occurring only after birth may actually already begin to be made in the womb. For instance, fetuses appear to initiate movement in the womb (Bremner 1988, especially chapter 2). On the other hand, some perceptual systems (vision, in particular) are virtually shut down as long as the child is still in the womb (Bremner 1988, 30).
the mental equivalent of the English word "that" in the judgment, "That is occurring." This state cannot itself be a C1 state, for C1 judgments are full judgments: conceptual, compositional, and aspectual. So what sort of state is picked out by the "that"? It is a state with none of these properties. Instead, it must be a state that is singular and where any information it contains is fused and unstructured for the infant (though the state in itself may have plenty of structure in it to one who can recognize it). But what sort of "mental" state, accessible to apperception, fits that description? In chapter 3, I argued that pains consist of evaluated mental states and conceded to British Empiricism that phenomena are the best candidates for the states evaluated. And I was able to do so without weakening the overall prominence of Cartesian Rationalism. Here, I make a similar, and much more important, concession to British Empiricism, though once more without reducing the prominence of Cartesian Rationalism: Phenomena are the best candidates for these singular and fused states. We are apperceptively aware of phenomena, as I have argued in prior chapters; and, as put forward in chapter 4, there is reason to think that the information contained in phenomenal states is locked up - fused - in the nonaspectualized image-like representations that they are. That is, phenomena contain representational information but no informational content. The information contained in them is pre-conceptual, or perhaps better described as aconceptual. If phenomena are the states accessed in these primitive cases, then phenomenal states play an extremely important role in cognitive development, much more important than I have so far allowed. Moreover, the importance of this role explains, I believe, the prima facie plausibility of British Empiricism, though this basis for its plausibility is only somewhat overlapping with British Empiricism's own view of phenomenal states.
And recognizing this important role for phenomena is at the same time to recognize the beginnings of a reconciliation between British Empiricism and Cartesian Rationalism. Cartesian Rationalists can make these concessions without in any way changing their minds about the judgment nature of perception or about any other central thesis of Scientific Cartesianism. If perception is of an external world, then perception is a judgment in the fullest compositional, aspectualized sense, for a world consists of objects and their properties. Remember that at the end of chapter 4 I suggested that perception, as so described and defended in the last two theories
of that chapter, may well be something the organism has to develop. And what I was just saying above now helps to explain that remark: Perception arises with our being able to conceive of a world. Nonperceptual information in the form of phenomenal states, although acquired through our sense organs and then apperceptively accessed, may make perception possible; but that state of sense-obtained information is not itself perception, for the reasons spelled out in chapter 4. Perception presupposes a conceptualized world to be perceived, and in these primitive cases, no conceptualized world exists for the experiencer. Phenomenal states are apperceived prior to there being any C1 states, prior to full-blown compositional, aspectualized judgments, prior to any concepts of self or world. Apperceived phenomenal states, as we will see, are the beginning points of cognitive development, making these cognitive states possible, even explaining their emergence. Cognitive development can thus be understood as a path from unaspectualized phenomenal states to the coming about of aspectualized C1 states. Apperception does not change its nature as this path is traversed, but is the necessary catalyst for getting from the unaspectualized to the aspectualized. It is this emergent path that I intend to track in this section of the chapter. To continue, then, another point to be stressed is that the infant's being apperceptively aware of its experiential stream in no way entails that the infant is aware of the stream as its stream of experience. That is, the infant is in no way aware of itself as an "owner" of the stream (nor, for that matter, of the stream as a stream of experience).
Yet, within this stream there is a distinction the infant is aware of: some experiences appear to be within the infant's control, in the sense that for those experiences what the infant seems to do effects and affects them; for others, their existence and quality are independent of what the infant seems to do.37 The brief paragraph above holds the key to the account to follow, and several remarks are in order about just what is, and what is not, being claimed. (1) It is true that infants can do very little. But they can cry (and thereby cause themselves to have new aural experiences — as
37. Poulin-Dubois and Shultz (1988, 120-21), Fischer and Bidell (1991, passim), and Russell (1989, 174) are among the many developmental psychologists who hold this same view, though none of them may approve of the words "seems to" here.
well as kinaesthetic or haptic ones from the vibrations); they can turn their heads, wave their arms and legs a little, open and close their eyelids; they can focus on an object with their eyes and visually track it (though in a herky-jerky fashion) as it moves (though, of course, they don't yet categorize their actions in these ways — indeed, in any ways at all). Each of these bodily movements brings on changes in the infant's experiences. (2) What I claim as a primitive in experience is the infant's awareness of what it seems to do (as opposed to what merely seems to happen to it) in terms of altering and effecting experiences. Without this difference in experiences, and the infant's awareness of it, the infant would have no further concepts whatever.38 The basic differentiation in experience, the one underlying all others and making all others possible, the one to which we must be innately tuned in our awareness of our own experiences, is that sometimes experiences seem to be in our control, sometimes not. The sense of control is primitive. And so is our apperceptive awareness of it. ⌜In-control⌝ and ⌜not-in-control⌝ are innate concepts. Just as one might be concerned that "apperception" means something different when involving pre-conceptual states, one might claim that the notion of in-control must be quite different from that of will — and for quite similar reasons. Willing always involves a full-blown judgment, while this notion of in-control could not. But I would respond exactly as I have about apperception: What we are in-control (or not-in-control) of in these pre-conceptual conditions are phenomenal states. To be in-control is to be able to bring about a desired phenomenal state, but those phenomenal states in being image-like have their information locked up in them in a nonaspectualized, preconceptual way.
Full-blown states of the will do involve aspectualized judgment, but these latter states emerge out of and merge into these more primitive will-like states. (3) Infants possess yet another innate concept: that of causation, or a notion closely related to it. While no conclusive evidence I know of exists for this claim, evidence does exist that infants as young as
38. Since I am being both speculative and bold, I might as well say that I am predicting that this account of cognitive development in human infants equally applies to any other animal having an ability to acquire concepts. Of course, an account of human beings may take us beyond the capacities of other animals. Where the account stops for other animals is an empirical question.
thirteen months old remember events connected by an "enabling relation" (causal or logical) better than otherwise related events (Bauer 1993). There are also studies that lower the age of this awareness to only a few months (Leslie 1988; Spelke 1988; Baillargeon 1993). "Being in-control" is already a causal notion. (4) It is easy to see how the in-control/not-in-control distinction allows us to begin tokening experiences. The stream begins to be broken up into segments by such differentiation. The demonstrative "thats" are distinguished on this basis. (5) When I say that an infant is aware of its willed versus nonwilled experiences, I do not mean that the infant is aware of its will as its will.39 The sense of control itself is primitive in experiences, not the self whose will it is. To say that the infant is aware of those experiences that seem to be in its control is, therefore, to speak loosely. Words do little enough justice here; but the distinction, in-control/not-in-control, is prior to one's sense of oneself as a subject of control. We will see how the notion of a self unfolds from the primitive distinction. (6) This notion of will is perfectly compatible with (hard or soft) determinism's being true (and it is, of course, compatible with indeterminism as well). All that is claimed is that beings such as human infants have a sense of "being in-control" of some of their experiences. In fact, this developmental story's being correct would account for the fact that the will means so much to us, for why we are so afraid of determinism's being true, for why the notions ⌜will⌝ and ⌜self⌝ are so closely tied together in our thinking. As we shall see, our belief in being in-control makes it possible for us to differentiate our self, our world, the different objects in it, our other mental states, and so on.
Explanations about why people need to be in-control for their lives to be healthy and happy also begin to come clear when we realize what a basic role the will plays in our lives. It would take us too far afield to detail in this chapter these roles of the sense of control in our lives, but I hope enough has been said to lend psychological plausibility to the status of the sense of control as basic.40 The in-control/not-in-control distinction provides the beginnings of a self/not-self distinction. But only a beginning. Not enough has been said — nor enough yet differentiated by the infant — for there to
39. Here and throughout the remainder of this section, I use "will" to include the primitive in-control states.
40. These issues are developed in chapter 11.
be a genuine concept of the self at this point. One may think of it as a barely formed concept of the self, though it is better to think of what has been differentiated as a proto-self; this proto-self is whatever is in control of some experiences, while the proto-not-self is whatever is in control of the not-in-control (support for a proto-self stage is found in Huttenlocher and Smiley 1990, 289).41 The prefix "proto" is apt in that neither proto-self nor proto-not-self is yet conceived as a thing, as an individual.
To see how the infant gets from concepts of proto-self and proto-not-self to concepts of self and not-self, that is, to concepts of genuine individuals, it is best to focus on the proto-not-self. For the recognition of the proto-not-self as the controller of the not-in-control is, in essence, the basis of our acquiring a concept of the outer, of the external. Indeed, initially, ⌜not-self⌝, ⌜outer⌝, and ⌜external⌝ all amount to the same concept. In this proto-self/proto-not-self distinction is the beginning of our notion of an external world (and we are strongly Realist about the not-self: see, for instance, Perner 1988, 145; Flavell 1988, 259; Wellman 1992, especially chapter 9). It is important to understand and recognize that this distinction of the proto-external is based solely on the infant's internal, apperceivable experiences (though the infant doesn't know that this is the basis), for those internal experiences are all the infant has to go on, as Psychological Solipsism implies. Thus, at least a distinction between a proto-self and a proto-outside-world can be understood on the basis of — in the Cartesian sense — having only one's own experiences to go on. So far the story is Cartesian; but of course, we have not yet got to other minds — not even to one's own mind. But I have shown how two concepts are formed on the basis of experience, whether there be any actual external world or not. However, I have not yet shown how we move from the proto-concepts to the concepts simpliciter. Before taking the account further, let me re-emphasize one other aspect (besides the claim that will is basic) that runs counter to much current thought in philosophy of mind: having apperceptive access to internal states is absolutely essential for forming concepts of an external world, let alone of a self in that world.
Philosophers like Fodor (1987) and Dennett (1988a), who disdain apperception, will
41. Perhaps rather than "proto-self/proto-not-self," a better labeling is "agency/nonagency." But it must be remembered that the concept ⌜agency⌝ is prior to that of ⌜agent⌝.
of necessity (if this account is correct) have incomplete and misleading accounts of human cognition.42 But as stressed in the previous chapter, only a minimal apperceptive faculty is required for cognition. Human apperception may often go beyond this stripped-down sort, but concept acquisition of a kind needed for having concepts of a self and of an external world requires only a minimal sort. At this point, we have arrived at a Cartesian account of how one acquires concepts of a proto-self and a proto-not-self. But we need to take the story further if we are to deal with the problem of other minds. Granted that the account, even as far as it has gone, is highly speculative; nevertheless, given the sense of control as primitive, the account has gone along pretty well. And a surprising number of important distinctions have been shown to fall out from an otherwise undifferentiated stream of apperceived phenomenal experience. Earlier, I warned that the empirical data are few. And now we reach the largest gap (the most vigorous hand-waving is required just here). However, the gap I need to leap is one for which, as noted in the previous section, nobody has a very good story. Somehow the ⌜proto-not-self⌝ (⌜external⌝, ⌜outside⌝) gets divided into spatial, temporal objects. That is, the proto-not-self becomes a not-self, a world of individual objects. Somehow a piece of information in phenomenal imagery gets unlocked to become part of an aspectual judgment. All three of "space," "time," and "object" are worthy of emphasis. Kant's Critique of Pure Reason attempted to explain how we might possess these concepts (⌜space⌝, ⌜time⌝, ⌜object⌝); and until recently, there has been little improvement.
But given the arguments and scientific data of the previous sections and lacking a compelling argument to show that Psychological Solipsism is wrong, it is fair to say that there is as likely to be an account of these matters that is essentially Cartesian as there is to be one that is not. Furthermore, no account of any of space, time, or objectness is both independent of Psychological Solipsism and better than a Kantian story (or than having no story at all). Finally, what recent evidence we do have — for instance,
42. See chapters 3 and 8 for further defense of the early and needed use of apperception. Clark and Karmiloff-Smith (1993) present other important reasons for the importance of apperception in cognition. See also Karmiloff-Smith 1991. That apperception is present very early on, if not at birth, is evidenced in Keil 1981, 202; Leslie 1987, 416; 1988, 31; Johnson 1988, 48; Russell 1989, 166. That it is present so early on is evidence that it is there from birth, as I claim. Of course, if the speculations of this section of the chapter are correct, it must be there from birth. For further arguments, see §14.
that there are certain constants in perceptual experience to which the brain is attuned (as well as some plausible connectionist and Neural Darwinist speculations about how the brain works) and that the brain itself contains several spatial "maps" (see Kosslyn 1980, 1987) — is compatible with a Cartesian theory of mind. In fact, by and large, this kind of evidence may well presuppose Psychological Solipsism (see the next chapter for support for this claim); so it may even be true that the best scientific attempts to account for acquisition of the concepts ⌜space⌝, ⌜time⌝, and ⌜object⌝ are themselves likely to be Cartesian ones, though I have not supported that claim here.43 The Cartesian account is at least consistent with the directions in which experimentation points. After all, all these "maps" and the like are internal states of the organism. It would seem that a creature possessed of these neural "maps" could also have phenomenally imaged maps that would allow it to come to have an idea of a spatial world even if none existed.44 Having leaped the gap between ⌜proto-not-self⌝ and ⌜not-self⌝ (by waving my hands very hard), the trail through the story resumes more smoothly once again. In the world, then, one distinguishes spatiotemporal objects — bodies. Among the bodies distinguished in the world is this body, one that not only has size, shape, and the like, but also thinks and feels — in general, experiences the world. When this body's eyes are shut, visual experience stops, and so forth. It is this body that one identifies as sometimes being in-control. One has discovered one's self. This brief paragraph also contains much of importance. First, there is the claim that one distinguishes bodies before distinguishing oneself as a self, as a thing. Distinguishing bodies enables one to go from a concept of a proto-self to a fuller concept of a self: a self as an individual thing.
In distinguishing this embodied self as that which is in-control, one conceives of one's agent self in terms of being a self-mover. These speculative claims fit the experimental evidence.45
43. Of course, that the scientific attempts to understand these matters are consonant with Cartesianism may be the result of the scientists themselves being (closet or overt) Cartesians. Perhaps so, but maybe that is because, like me, they find that the scientific data laid out in section IV make any other position terribly unlikely.
44. In the next chapter, I discuss in more detail the scientific evidence for our coming to have concepts of objects in the external world.
45. The realization that one oneself has a mind seems to be present in the second year of life (see citations in footnote 9). On the other hand, the concept of bodies (at least as continuous, solid things) is evidenced in the first half of the first year of life: see Diamond 1991; Spelke 1991; and Fischer and Bidell 1991.
Second, and close to the spirit of Strawson — though in important ways different from Strawson — it is being claimed that an infant, when it comes to distinguish itself as a thing, identifies itself as one among the bodies in the outside world. The self is one of the same kind of thing that makes up the not-self. But it is different in being a self-mover. That we first identify the self as a bodily thing is also consonant with the experimental data: that which eats is for toddlers also that which thinks (Carey 1985, especially chapter 3). In fact, infants do not initially seem to have a concept of the immaterial at all (Huttenlocher and Smiley 1990; also Carey 1985, 170; Keil 1981, 209; Johnson 1988, 57). The account so far, while Cartesian, deviates considerably from those given by Bertrand Russell, Mill, and Price. In their accounts, one first distinguishes oneself as a mind. Then one discovers bodies. On this account, one first distinguishes bodies and then distinguishes oneself as a body in the world — a self-moving agent body. Nothing has been said about a "mind" as a thing at all. In fact, I would claim that a genuine mind/body distinction is quite a late one, and not always drawn (and there is evidence that I am correct; see, for instance, Chandler 1988; Wilkes 1988). In section VIII, I briefly extend the story to account for this distinction. However, my present point is that the mind/body distinction postdates the determination of a perceiving, feeling, desiring, agent self. The self so identified is a body that thinks, feels, and is in-control of itself. And quite early on, the infant comes to recognize other bodies as self-movers — as agents. These bodies move when there is no perceived cause of their moving. The class of agents may be larger than the class of thinking, feeling beings, for all self-movers — battery operated cars, for instance — appear to be initially included (see Wellman 1992, 233). 
But this separation of agency from thinking/feeling may be only apparent: it may be that the infants take the car to think and feel because it self-moves — and so have to learn that their categorization is an error. Empirical research will have to decide which possibility (""agency"1 and thinking/feeling are always linked for toddlers, or •"agency"1 and thinking/feeling only overlap for them) is correct. Well, the end of the road is in sight. Having distinguished oneself as a body in the world, one is aware that not only is this body a body that has shape, size, moves, and so on, but also a body that feels pain, feels pleasure, perceives, desires, moves itself, and so on. That is, one conceives oneself as a body that perceives, desires, thinks, feels, and is in262
Selves
control. And one also becomes aware of other self-moving bodies as agents by having first been aware of oneself as a bodily agent. And in so far as the child has developed into a categorizer, and on whatever grounds it divides the world into categories of objects at all, the child comes to understand itself as a token of a more restricted type. Others of that type are then also thought of as agents, thinkers, feelers, perceivers, and desirers because they are of the same type. Indeed, having recognized oneself as a token of a restricted type, it would be perverse to think otherwise. But of course, others of one s type are simply other human beings. Thus, it follows that in having distinguished oneself, one distinguishes oneself as a body in the world and one distinguishes one's token body as a member of a kind, and in doing so, one quite naturally and reasonably comes to believe in others being thinking, desiring, feeling agents as easily and readily as one comes to understand oneself as an agent that thinks, desires, and feels.46 Most relevant to the present point is the evidence that children learn names for bodily parts of others, then for themselves; but mental predication is reversed (and later): it is applied to themselves first and then to others. It is not surprising that by the time one possesses a more fully developed concept of belief, say at age four, one can ascribe it as easily to others as to oneself, for one has long before recognized oneself as an instance of a type. But, nevertheless, one's first understanding of •""belief-"1 is from one's own states. One's belief in other thinking, feeling selves is simply a natural outcome of the kinds of internal states one experiences. 
In saying that one identifies one's self as a body, we give nothing away to anti-Cartesians; for even a brain in a vat might, in principle, have these concepts of ⌜outer⌝, ⌜space⌝, ⌜time⌝, ⌜object⌝, ⌜self⌝, ⌜thinking⌝, and so on that we in the actual physical world possess. A Cartesian theory of mind, thankfully, does not entail Ontological Solipsism; but it is compatible with it. Most importantly, we have presented a version of the Argument from Analogy — an argument that goes from ourselves as thinking selves to others as thinking selves, at least in the philosophy-of-mind version of the Argument
46. It is interesting, and relevant, that children apply ⌜thinks⌝ to just those things to which they apply ⌜eats⌝ (see Carey 1985, especially chapter 3). There is also evidence that properties, including mental ones, applied to people are also applied to other animals (i.e., other agents), but that properties applied first to some other animals are not generalized to all animals or to persons (see Carey 1985, 130, 166). This evidence perhaps favors the possibility that agency is linked to mental states from the beginning.
from Analogy — that both is Cartesian and also plausibly explains how we come to have the concept ⌜other thinking selves⌝, if not yet ⌜other minds⌝.47

17. The second account: Like the first, this one is essentially Piagetian, perhaps even closer to Piaget's own theory (Piaget 1954; Piaget and Inhelder 1969) than the first, which is Piagetian primarily in spirit.48 Differences on two main issues mark the accounts: the age claimed for when children begin forming concepts and the nature of concepts themselves. In the first account (P1 — for "Piaget 1"), discriminating into types (categorization) and possessing concepts are pretty much taken to be identical; but this second account (P2) denies that identity. P2 takes concept possession to require an apperceptive state that is aware of its discriminations as discriminations. Such an apperceptive state is far richer in content than the apperceptive states required for merely discriminating. According to P2, only when the more complex apperception is present is there a full intentionality present. Prior to apperception with this richer content, there is only "intentionality" (or quasi-intentionality). On this view, concepts of self, other, object — concepts of any kind — begin to emerge only in the latter half of the second year of life. It is no accident, on this view, that language begins to emerge at the same time, for this late-developing apperceptive ability is the underlying basis for semantic language acquisition (necessary, but not sufficient). Even when a one-year-old child utters, "ball," in appropriate circumstances, it lacks, on this view, the concept ⌜ball⌝. It does discriminate balls from other things, but its label "ball" is simply an uttered response that results from its discrimination. "Ball" is not yet a word, not yet a label for a concept.
In addition to an "intentionality"/intentionality distinction, P2 also makes something like a "judgment"/judgment distinction. It can grant that the discriminated categories play a role in perception; and something like a judgment — a "judgment" — occurs in perception (which is a conclusion consonant with the results of chapters 1 and 2). But the components of "judgments" are not fully intentional, and so "judgments" are not fully judgments. Real judgments begin to occur along with real concepts — in the second half of the second year of life. Just as P2 can allow for apperceptive abilities at birth — it only denies that the relevant apperceptive abilities occur then — P2 has no wish to deny an innate in-control/not-in-control distinction as underlying perceptual and other pre-conceptual discriminations. Even more salient, P2 agrees that awareness of this distinction underlies a conceptual self/not-self distinction. But P2 claims that such a distinction is not possible until one is apperceptively aware of being aware of an in-control/not-in-control distinction, of having discriminated oneself from others — once more, a more sophisticated awareness than P1 requires for a conceptual self/not-self distinction. Defenders of P2 would say that prior to about eighteen months, while children discriminate themselves from others, they lack a concept of self (or of other). Besides their differences over whether very young children possess concepts and make judgments, P1 and P2 will also differ over whether nonhuman animals possess concepts. Defenders of P1 are much likelier to ascribe concept-possession to nonhuman animals — and well "down" the phylogenetic scale. However, defenders of P2 are not barred from ascribing concept-possession to nonhuman animals; and many would ascribe concept-possession at least to our near relatives, chimpanzees and orangutans (Povinelli and Godfrey 1993; Premack 1988).49 While P2 holds concept acquisition to be necessary for human language development, it does not require that it be sufficient. While P2 and P1 have their differences, they agree that apperception and an in-control/not-in-control distinction are the bases of concept formation.
47. This account lacks a good deal of detail (though, I hope, not detail relevant to the argument), and that lack may make it appear to be in conflict with Wellman's (1992) developmental account, an account I read only after this chapter had been many times drafted and an account I much admire. There may be ultimate disagreements between our views; but, if so, they are more subtle than this section, with its lack of detail, might suggest.
48. This section owes both its form and its existence to conversations I had with Danny Povinelli.
However, a more radical position, somewhat resembling P2, is perhaps also imaginable. On this view, apperception itself does not even occur until the middle of the second year. Up to that point, children have no genuine mental states at all. They only seem to. I will call this view the Machine View (MV) and develop it a little.
49. Povinelli himself (personal communication) has become more skeptical about whether apes have concepts.
To make the exposition simpler, I will reserve the term "infant" for children prior to about eighteen months. On the Machine View, infants do not perceive, if "perceive" is taken in the sense required by chapter 1. Infants have no intentional states — full or less full — and so make neither judgments nor "judgments." Their visual categorizing, say, is like machine "perception." If we think in terms of connectionist machines,50 visual input causes various relaxations of their neural nets; and eventually, similar inputs cause similar relaxations. Or perhaps, similar neural nets define the inputs as similar. And similar neural-net structures cause behaviors to be similar (including utterances like "ball" on appropriate occasions). Apperception of no kind, intentionality of no kind (and phenomenality of no kind!) play a role in early categorizations. Unlike P2, MV would threaten several of the central tenets of Scientific Cartesianism, including its views on perception, pain, and the importance of apperception for conceptualization. Fortunately, while P1 and P2 are compatible with the known data, those same data provide reasons for rejecting MV. In fact, there is a sizable number of reasons to reject it. First, there is MV's assumption that a relaxation of neural nets is incompatible with intentionality. But there are good reasons to think that many such relaxations are intentional states (see Butler 1991, 1995a, 1995c for the arguments). Two defenses of MV are possible here. (1) Defenders of MV may point out that we can make connectionist machines (or symbolic machines — this difference continues to be irrelevant so far) that discriminate in these ways, in which case either I am committed to the machines having intentional states or I must admit that similar discriminations can be achieved by neural relaxations that are not intentional states.
I have no doubt that one could make a machine that reacted only to the presence of squares (perhaps by typing out "square") and only because its nets relaxed in a certain way. Are such net relaxations intentional states? My intuition is to deny that they are. (Though I think the question is an empirical/theoretical one, and not to be answered a priori — good scientific reasons could make me reject my intuition.) Suppose my intuition is correct. In that case, the counterexample shows that there is more to intentional states than mere discriminations. One attempt to say what more is needed says that a label, a symbol, standing for the net must be able to interact compositionally with other such labels to form judgments, and a capacity for those interactions is missing in the imagined case. While judgments are composed of concepts, concepts are concepts only because they are represented by symbols that play a role in judgments. So the "square" discrimination of the simple connectionist machine described is not an intentional state, just as MV claims; but that is because the interactive capacity required for turning it into a concept — into a compositional component of an intentional state — is lacking. Of course, this is a kind of language-of-thought requirement for intentionality. But it is certainly one that Scientific Cartesianism can tolerate. (2) But even if defenders of MV would admit that perceptual discriminations are intentional states (or "intentional" states), and so admit their difference from simple machine discriminations, the Cartesian view is still threatened; for apperception would seem to be unnecessary for categorizations. The problem is that there seem to be current electronic computing machines that meet the sort of minimal compositional requirement expressed in (1) above. There are, but in this regard current symbolic machines have it all over current connectionist ones (though this need not be true in principle [see Butler 1995b], and the same considerations to be expressed would apply to the imagined connectionist machines). But symbolic machines' judgment-forming powers are heavily dependent on the role that a CPU plays. That is, symbolic machines have something at least analogous to apperceptive states playing a vital role. Does this conclusion mean that such machines possess concepts?

50. Nothing much is meant to ride on this assumption.
At worst, any inclination to deny concepts to the machines is an argument in favor of P2 over P1 (such machines lack the relevant apperceptive states), but not an argument in favor of MV unless one also denies that such machines have apperceptive states of any kind. Do such machines really have apperceptive states, or do they only simulate apperceptive states? I find this question very difficult to answer. Developed theory will have to guide me. What I am sure of is that if they do not, then they do not make judgments either — and so have no concepts, even according to defenders of P1.[51] But my certainty hardly constitutes an argument. So let me argue instead that at least human infants and many other living animals employ apperceptive states. If I can establish this weaker claim, I will not need to resolve the issues surrounding current inorganic computing machines. The first argument is a rather weird argument — only in part because it appears at first to support MV. The magician, the Great Randi, has offered $10,000 to anyone who can perform a "supernatural" act he cannot reproduce. Although challenged a few times, he has never given away any money. Of course, that he can reproduce these acts doesn't — as he himself admits — prove conclusively that the same acts weren't brought about by supernatural means. Still, most of us think his reproductions are adequate reasons for our not ascribing supernatural means to the others who perform the same acts. This conclusion results from a use of something like Occam's Razor. Defenders of MV might argue analogously that since discriminations like those of infants and nonhuman animals can be reproduced (by machines) without apperception (and without phenomenal states), a similar use of Occam's Razor compels us to withhold ascriptions of apperception (and phenomenality) from infants and nonhuman animals. And now comes the twist in the Weird Argument: the only problem with their argument is that noninfant human beings make these discriminations because they do have apperceptive (and phenomenal) states. And so the "Great Randi Argument" doesn't apply to noninfant human beings. Thus, the Weird Argument concludes that a more judicious use of an Occam's Razor-like rule would have us infer instead that infants and many nonhuman animals — so clearly more like us in their neurophysiology than are current electronic computing machines — are like us in this respect as well.
What I have called the Weird Argument would not carry a lot of weight unless there were independent reasons to think infants and nonhuman animals are more like noninfant human beings than like current electronic computing machines. Besides the obvious biological similarities, at least two other sorts of consideration — resulting from earlier chapters of this book — lend support to the Weird Argument. While many hesitate to ascribe sophisticated cognitive states to infants and nonhuman animals, most people — including most cognitive neuroscientists — have no qualms about ascribing pain to them. But as chapter 3 has shown, pain involves apperceptive (and phenomenal) states in an essential way. So however far "down" the phylogenetic scale one is willing to go in ascribing pain, one is, pace MV, already ascribing apperceptive (and phenomenal) states to infants and to a number of nonhuman animals. Of course, the theory of pain presented may not itself be correct, but unless a better theory is proffered, it gives us a reason to deny MV and supports the Weird Argument. Next, suppose that the "broadcasting" nature of apperception limned in the final section of the previous chapter is correctly described. If so, then the following account makes a good deal of sense. Suppose — what seems reasonable — that brains, unlike current electronic computing machines (which were design engineered), evolved module by module. In simple organisms with brains of very few modules, no apperceptive states would be necessary for their survival. Their bodily movements would be determined by a straightforward vectoring among the "demands" of the modules, and those obeying the most adaptive vectoring rules for their environments would survive. But brains couldn't grow very large under these conditions, for mere vectoring would be overwhelmed by the sheer number of modules, leading to chaotic movements.

51. If such machines do apperceive, does that mean that they are conscious? Here, it seems to me that two answers are possible. One would be to require the kind of apperception demanded by P2 as a minimum for apperceptive consciousness. The other would be to agree that they are minimally conscious. This reply seems bizarre only if we confuse types of consciousness with each other. Ascribing a minimal apperceptive consciousness to such machines would not be ascribing phenomenal consciousness to them.
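One way to picture this vectoring of module "demands" — and the translating "broadcast" module the account goes on to posit — is a toy simulation. Every module name, message format, and behavior below is invented purely for illustration.

```python
# A cartoon of the evolutionary story: first-order modules issue "demands"
# in private, mutually unintelligible formats; a broadcast module (here
# called mA) translates them into a shared code so that one module's
# information can change another module's output.

def vision_module(scene):
    # Private format: a bare string this module alone "understands".
    return "predator-left" if scene.get("predator") == "left" else "clear"

def hunger_module(energy):
    # Private format: a signed number (positive = demand to feed).
    return 10 - energy

def mA_broadcast(vision_out, hunger_out):
    # mA translates each module's private code into a common "language".
    return {
        "threat": vision_out != "clear",
        "flee_direction": "right" if vision_out == "predator-left" else None,
        "feed_urgency": max(0, hunger_out),
    }

def motor_module(messages):
    # The motor module can now weigh another module's information
    # against its own demands, instead of blind vectoring.
    if messages["threat"]:
        return "flee-" + messages["flee_direction"]
    if messages["feed_urgency"] > 0:
        return "forage"
    return "rest"

behavior = motor_module(
    mA_broadcast(vision_module({"predator": "left"}), hunger_module(3)))
```

In the sketch, removing `mA_broadcast` leaves the motor module with two mutually unintelligible signals — the analogue of the claim that without a translating module, adding modules yields only chaotic vectoring.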
Suppose, then, that a "module" (mA) evolved (it had to evolve sometime, since apperception does exist) that allowed one module (m1) to relay its information to another (m2 — and perhaps to receive information from that other) because the new "module" mA could translate m1's information into a "language" m2 could "understand" (and possibly vice versa — perhaps this first mA was the apperception "module" involved in pain [see chapter 3]). Then m2 could more instantly change its own states (have its own states changed) for the behavioral benefit of the organism. With this advent of broadcast "modules" like mA, brains could add more modules; and the more modules an mA "module" could relay information to/from, the more first-order modules a brain could consist of and the better the organism could survive, even (at a high level of sophistication) learning to adapt to its environment. If "broadcasting" is apperception's main function, then the upshot of this evolutionary account is that we should expect animals with brains of any complexity to have apperceptive abilities. Since this account fits with and gives insight into the evolution of our own adult abilities, it is a reasonable account. And infants and many nonhuman animals have brains of considerable complexity. So, once more, there is reason to accept the Weird Argument and its conclusion that infants and most nonhuman animals have apperceptive capabilities — i.e., that MV is mistaken. In the end, the best reason for opting for the view defended in this book — Scientific Cartesianism — with its reliance on a view like P1 or P2, will be that it proves more scientifically fruitful than views based on MV. Only the future will decide. But which of P1 and P2 should be accepted? At the moment, I do not feel in a position to answer that question, though I incline towards P1. At times, I think the stakes of choosing one over the other to be considerably less than first appear. Much seems to depend on whether one is struck by the similarities between infant abilities and post-infant abilities or by their differences. If one focuses on the similarities, one will understand the post-infant abilities as a burgeoning based on a critical accumulation of skills made possible by the same abilities writ small. If one focuses on the differences instead, one will probably view the post-infant abilities as of an altogether different sort from those possessed prior to that stage. I would hope that there are experimental ways of deciding between P1 and P2. And, of course, that we cannot now think of the appropriate experiments or observations does not mean that none exists.
The important thing for the purposes of this book is to emphasize that either provides a Cartesian theory of mind with the tools to construct a philosophy-of-mind version of the Argument from Analogy.

VII
18. Perhaps now is a good time to say something, briefly, about the epistemological version of the Argument from Analogy. The defense of Cartesianism is rooted in scientific data: the perceptual data presented in section IV, the developmental data presented in the last two sections, and indeed, the data of nuclear and particle physics. Because Scientific Cartesianism has the best fit with these data, I accept its Psychological Solipsism. And since there seem to be no good reasons for believing in Ontological Solipsism, other than Psychological Solipsism itself, there is no good reason to believe in Ontological Solipsism. Let me clarify a little. Support for Psychological Solipsism is rooted in the scientific data. But science presupposes the existence of an external world. Thus, having good reasons for believing in Psychological Solipsism presupposes the existence of an external world.[52] And so it follows that insofar as one has good reasons for believing in Psychological Solipsism, one also has good reasons for believing Ontological Solipsism to be false. So the upshot is that one is justified in believing in other minds, as one does anyway, because there is every reason to believe that there are and none not to. Skepticism is false, but not incoherent.[53]

VIII
19. If the account has been mostly correct, how did we ever get the mind/body problem itself? How did Ontological Solipsism ever come to rear its head? Again, my answers to these questions are speculative, but constrained by the data. In order to answer them, we need to return to that basic element in experiences, the sense of control. Over time, one learns that the body is not entirely under one's control. One's arm falls asleep, and one cannot move it; one freezes with fear despite telling oneself to run; disease strikes one, and one cannot control oneself. At such times the body seems alien, a genuine "other." The sense of control is the basis of one's notion of oneself as a self. The apparent alienation of one's body from that sense of control is the beginning of the separation of that body from one's self. If the body and one's sense of control come apart, then one begins to think of the body as part of the not-self, and only as part of the not-self, whereas previously one thought of the body as the self and of the self as a being similar to the objects of the not-self. I would suggest that this belief that one is not always in control of one's body is a major source of the mind/body distinction. Because the self might be thought of as that which must be in-control, one can easily come to believe that the self, the controller, is not the body nor any part of it (and, of course, this is what Descartes himself did). I claimed earlier that all the distinctions we make could be mere illusion, that Ontological Solipsism could even be correct.[54] However, I think that this last distinction — the mind/body distinction — unlike the others, really is an illusion. I am able to reject Ontological Solipsism and, therefore, able to believe in an external world as a nonillusion because I ground my belief in Psychological Solipsism in the scientific data of the sorts presented in sections IV–VI. And my belief that these data are genuinely data precludes my believing Ontological Solipsism to be correct. Science presupposes an external world. But there are no analogous reasons for thinking that the mind/body distinction is aught but an illusion. The "alienation" leading to the mind/body distinction simply tells me that I do not control many things about myself (even though taking myself to be in control of some was necessary for my conceiving of my self at all). My thoughts and will do sometimes come in conflict with other states of me, of the body that is me. But that is all. That this illusion is merely an illusion is supported by an even later distinction philosophers sometimes make. Call it the "mind/mind" distinction. We discover in our experience that our thoughts, desires, and so forth are themselves sometimes not in our control.

52. One might disagree with this claim, pointing out that Berkeleyans, for instance, can have a science. But two replies are in order: (1) Berkeley himself does accept the existence of an external world in a fairly strong sense. He disagrees with Realists only about how to analyze that externality. Of course, this reply may not apply to all Berkeleyan Idealists. (2) But then, I would argue that more radical forms of Berkeleyan Idealism cannot have a science at all. That is one of their major defects. Laying out the arguments for my claims would take us too far afield from the present topic to make such a project worthwhile in this book.

53. The skepticism in question is that which says we have no good reason to believe in other minds — in contrast to that which says we cannot know that there are other minds. About the latter form of skepticism, I have said nothing.
We cannot help thinking about Mary, or wanting to be friends with Miles even though he is so shallow, or feeling depressed for no apparent reason — the tears just flood. In cases like these, we once more feel alienated, this time from our very thoughts and desires. Even our minds are not in our control. If one has already made the mind/body distinction, then at this point one might begin to feel as if there is no self at all, for the mind isn't in-control either. The "solution" to all these states of alienation is to realize that much about ourselves is not in our control. But to admit this lack of control is simply to admit that one's self is not entirely in one's control. And nothing more. We do not have to take these facts as showing that we, our real selves, are neither our bodies nor our minds (as I suspect Kant did). The basis on which we come to distinguish our self, being in-control, need not itself be an essential property of our self. That kind of mistake was made by Descartes. Of course, "and nothing more" is overly quick, for the realization of how much of oneself is out of one's own control makes one begin to doubt the entire notion of in-control, leading to the problem of free will (see Nagel 1979c; 1986, 110–24). And to close the circle, I suggest that the free-will problem appears so deep to us because the sense of willed action that we possess is itself the very basis of our having a concept of the self at all — of our having acquired any concepts whatsoever. If a notion of free will (of being in-control) underlies all of our acquired conceptual distinctions and if that free will is called into question, then it appears that we really are in a mess.[55]

54. Kant would claim them to form only a transcendental illusion, but I think he is wrong for two reasons. First, I do not think they are illusions of any kind at all. I am saying only that it is possible that they are illusions. Second, if illusions, they are first-order illusions, not transcendental ones.

IX
20. In §16, a reconciliation between British Empiricism and Cartesian Rationalism was at least partially effected. Can a similar reconciliation be effected between Cartesian Rationalism and its twentieth-century anti-Internalist critics? I don't think so. But we can go some way toward further understanding the attraction of the critics' position. If Scientific Cartesianism is correct, then concepts are theories; moreover, they are very local — each to a person — theories. But this view of idiosyncratic theories seems to conflict with the empirical data: we communicate quite well with each other (there wouldn't be much point in writing this book if we didn't). Especially if one considers language, meanings (contents) seem to be communal, not personal. And insofar as we take natural-language words to express concepts, then concepts seem to have communal meanings, not personal ones. The inconsistency of Scientific Cartesianism with this empirical evidence is only apparent, however. The Cartesian account is strongly Internalist. According to it, meanings are "in the head." Because meanings are in the head, they are idiosyncratic to a large degree. One person's concepts — and so entire conceptual scheme — will be somewhat different from another's. Thus, one person's understanding of another's words and thoughts will consist of a series of hypotheses about what the other is saying and thinking. But three factors keep this privacy of concepts and conceptual schemes from being an insurmountable stumbling block to our living in the world and living together. First, our concepts run up against the world. Imposing concepts — identifying kinds and bringing tokens under, and recognizing them as instances of, those kinds — is, as argued earlier in this chapter, a species of theorizing. Because concepts organize our experiences and the world we experience, they allow us an understanding of that world and provide the basis for yet further understanding of that world, i.e., we can explain, more or less, the world we live in by categorizing it. And concepts provide us an understanding that enables us to make predictions about future experience — among other things, good concepts better enable us to avoid pain and provide pleasure for ourselves and for those we care about. And because concepts function in these ways, and because most of our concepts involve ways of organizing our understanding of an external world, we cannot use just any old concepts. We can get our concepts right or wrong. Having wrong concepts may lead to our being severely "punished" by the world. So while concepts are internal and somewhat idiosyncratic, the world constrains each of our conceptual schemes, making it likely that one person's concepts and conceptual scheme will not be all that different from another's. Concepts are constrained by the world. The second factor closing the gap between one person's concepts and another's is that people are results of evolutionary processes.

55. For a more thorough discussion of the issues — especially the problem of free will — in the context of the concept-formation theory presented here, see chapter 11.
Because we belong to a species that generally relies on cooperation for survival, we have probably evolved with built-in genetic (and, so, probably neural) constraints on the sorts of concepts we can form. These built-in constraints again make it likely that one person's concepts will be closely similar to another's. These truths about the ways in which the world and our species history constrain our concept-forming and concept-keeping constitute some of the motivation behind Externalist accounts of content. But these same truths are perfectly compatible with Cartesian Internalism. It is likely, because of these two factors, that one person's concepts are closely similar to another's. But each person's concepts remain private, and it is only an hypothesis for each person that another's concepts are like hers or his. We do get about in the world. And much of that ability to do so is based on our ways of conceiving the world. That ability requires explanation. But the explanation does not require that adverting to the world is necessary for providing the content of our concepts. We have concepts, which already have content, before we map those concepts onto the world. Externalism has the processes in reverse order. Moreover, as we have just seen, Internalism is perfectly compatible with the truth that Externalists point to: our concepts closely resemble one another's; otherwise we could not get along in the world. The third factor is that we need to cooperate with others, and so need to understand them and have them understand us, if we are to get along in the world. So each of us constructs hypotheses about what others' concepts are, and we alter our own, if need be, to conform more closely to those of other persons. That is, each individual strives to make his or her concepts conform to something like the collective's concepts. Language undoubtedly greatly enhances the human capacity for such fine tuning. We learn language from our parents and are rewarded or punished insofar as our concepts seem to resemble or differ from those of the persons at whose knees we learn them. How does the concept-teacher decide when the concept-learner's concepts are close enough? Almost certainly, from the behavior, both linguistic and otherwise, of the learner. And the teacher rewards and punishes the learner until the learner's behaviors are in conformity with the expectations of the teacher. And the teacher's own concepts were developed in a similar way. As one grows older, one conforms one's concepts to many other "teachers," and one becomes a teacher oneself.
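The reward-and-punishment loop just described can be sketched as a toy simulation. The items, extensions, and correction rule below are all invented for illustration (loosely echoing the Carey data discussed next), not a model of actual concept learning.

```python
# A minimal sketch of concept conformity: a learner's idiosyncratic
# extension for the word "alive" is pushed toward the teacher's by nothing
# richer than corrections ("punishments") on individual classifications.

TEACHER_ALIVE = {"dog", "plant", "baby", "tree"}
ITEMS = ["dog", "plant", "baby", "tree", "car", "robot", "rock"]

# The learner's initial concept: self-movers count as alive, plants do not.
learner_alive = {"dog", "baby", "car", "robot"}

corrections = 0
for item in ITEMS:
    learner_says = item in learner_alive
    teacher_says = item in TEACHER_ALIVE
    if learner_says != teacher_says:
        # A correction: the learner revises this one classification.
        corrections += 1
        if teacher_says:
            learner_alive.add(item)
        else:
            learner_alive.discard(item)
```

Note what the teacher sees: only the learner's classifications, i.e., behavior. Even when every observed classification conforms, nothing in the loop checks — or could check — that the learner's underlying concept content duplicates the teacher's, which is the chapter's point.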
Thus, we can say, without being overly misleading, that there is a kind of dovetailing of individual and idiosyncratic concepts toward communal concepts. A nice empirical example of this dovetailing is provided by Carey (1985) in her discussion of children's understanding of the concept ⌜alive⌝. Children's initial concept ⌜alive⌝ most usually is somewhat different from the adults', including items not included in the adults' concept (some self-moving mechanical things, for instance) and excluding others included in the adults' concept (plants, for instance). Over time, the children's concept — as expressed in their use of words like "alive" and "living" — comes to resemble more closely that of the adults. Besides illustrating the dovetailing movement toward a communal concept — the communal meaning of the word "alive," in this case — these facts about ⌜alive⌝ may also illustrate the earlier claims concerning built-in constraints on concept acquisition: it is hard to think of any other reason why children's early concepts ⌜alive⌝ would be so similar to each other's. (Though this similarity of early concepts may not be universal; indeed, it may hold only for a few concepts. The tests for the truth on these issues will be empirical.) This third factor, a dovetailing of concepts toward communal concepts — especially, in the human case, the dovetailing toward the communal meanings of words — constitutes the core of truth behind both Behaviorism and Anti-Individualism. Behavior, with its associated rewards and punishments, leads to attempts to conform one's concepts to those of the persons one interacts with and by whose further behaviors one is rewarded or punished. Furthermore, in conforming concepts to one another's, we each strive for some notion of a communal set of concepts; in the human case, expressed by the communal meanings of words. But, once more, while these truths may help motivate Behaviorism and Anti-Individualism, they are perfectly compatible with Cartesian Internalism. The account provided by Scientific Cartesianism agrees that we do attempt to conform our concepts to those of others, but we have to do so on the basis of hypotheses about what others' concepts are and on our willingness and ability to conform ours to those of others (that willingness may itself be a built-in factor, just as the ability is certain to be). But because we have to hypothesize, we cannot know that we have succeeded.[56] The system of rewards and punishments only improves the likelihood that the behaviors resulting from our concepts are closely enough in conformity to those of others.
But this conformity of behavior does not guarantee exact duplication of another's conceptual content. Moreover, different amounts of closeness may be tolerable for getting along: concepts about different things may tolerate more or less closeness. There is no one standard of closeness — other than that of toleration. Of course, it may be that all, or almost all, of our concepts are exactly like those of everyone else. If so, that achievement is once more something we cannot know. And there is enough difference in behavior, both linguistic and otherwise, to make that possibility unlikely to be actual. Although conceptualizations (typings) are internal to the organism and, to a degree, idiosyncratic, most of them are not intended as categories of our own experience but as categories of that independent world, the not-self, we take ourselves to inhabit. And that world — as far as we are concerned, the world — contains instances of types that both precede us and will outlast us. Having categorized the world as containing dogs, one can believe Fido exists prior to oneself and that Rover outlasts oneself. And similarly, one can think of mountains and stars both preceding and outlasting all conceivers. The objects taken by Internalists to instantiate concepts and thoughts are most often not private objects on the "screen" of one's own mind, but objects out there in the world. If there are no such objects, then Internalists, like Externalists, would find most concepts to have no instantiations. One more extremely important claim about concepts can be extracted from the discussion of concepts that has been presented in this chapter, and now is perhaps a good time to extract it. One might ask questions like, "What is the content of a concept?" and "What determines the best description of concept content?" But juxtaposing these two questions in this way can be misleading. For to possess a concept is not to possess a set of descriptions. Concepts display their contents by their use in thoughts, judgments and the like, and in behavior. But that content that is displayed is describable only post hoc — after we have states like thoughts and judgments. And to possess a concept is not to possess this set of descriptions.

56. Unless externalist theories of knowledge are correct; but externalist theories of knowledge may not have to be closely tied to Externalist theories of content (though, then again, they may be inseparable). So an externalist theory of knowledge's being correct would not much affect things — even on this issue. Among other things, even though we would (possibly) know, we would be unlikely to think that we did.
"Possessing a concept" is a mental primitive; and as theoretical primitives display their content by their use in a theory, mental primitives behave similarly. Ironically, in this view of concepts there is contained both something like Wittgenstein's Tractarian "showing/saying" distinction and his Investigations' insistence on meaning as use. But my claim is not that meaning is use, but that concept contents are displayed in their uses. Concept content is prior to those uses and makes them possible. It is also worthy of note that the internality and individuality of concepts in no way requires a denial of Realism. Our individually arrived-at concepts may converge on kinds that in a sense provide the best description of the world, and in that sense are kinds that really exist in the world. The issue of Realism is one on which Scientific Cartesianism has, in itself, no stance. Scientific Cartesianism does maintain that we can only have theories of the world, for that is what concepts are and that is what the whole superstructure that depends on them is; and barring an externalist theory of knowledge, we may never know whether our theories are true or not (cf. Peirce 1934, #407, 268). Still, having theories is at least compatible with having true theories. And what we can know is that the world supports our categorizations of it to greater or lesser degree — supports in the sense that we can more or less successfully predict future experience, explain and predict future events in the world itself. Unlike Orwell's animals, not all theories are equal; but like Orwell's animals, some are more equal than others.
21. To conclude, let me express a global doubt about Scientific Cartesianism and reply to it. If a disembodied mind, let alone a brain in a vat, could make all these distinctions we make, wouldn't it be an accident of cosmic proportions that our thoughts are, in fact, about a real, external world? The short answer to the question is, "No." Surely, the causal processes of the world, including how the world acts on our senses and thereby on our brains, help determine and constrain the concepts we form; and given the likely truth of evolutionary theory, it would be an accident of cosmic proportions if our internal states were not attuned to the world, at least to the degree of enabling us to anticipate future experiences. Additionally, it is almost certain that our possibilities for concept formation are constrained by innate structures; and they, in combination with the actual world we run up against, help determine the concepts we form.[57] If a natural clone or a brain in a vat had just my internal states, that would be a cosmic accident; but Scientific Cartesianism gives me no good reason, in itself, to believe that I am — or anyone else is — such a cosmic accident.

57. For evidence that such innate constraints exist, see, for example, Keil 1991, Gelman 1991, and Marler 1991, among others.
10 Things Two issues from the previous chapter require attention in this one. First, and perhaps most obvious about the previous chapter, was the defense of a Cartesian theory of mind. Second, and nearly as obvious, was the handwaving when it came to '"space""1, ""time"1, and ""object"1. In this chapter, the gap on these three key concepts will be partially filled in, and in a manner consistent with a Cartesian account. Actually, the aim is not so much to provide a Cartesian account of acquiring and possessing these concepts (for reasons to be explained, an account cannot yet be provided), but more weakly, to show that present scientific accounts of these concepts are compatible with Cartesianism. Thus, this chapter is not itself so much a contribution to this empirical research as it is a meditation on it in light of the Internalist/Externalist debate. The handwaving concerning these three key concepts is closely tied to my handwaving concerning an even larger issue. These three key concepts are the concepts we employ in moving from a concept of the pro to-not-self to a concept of the not-self as a world populated by many individual things. And I suggested in the last chapter that this move from ^proto-not-self"1 to ^not-self"1 involves aspectualizing phenomenal information that is available to the infant s apperception. However, the only plausible account - in fact the only account - 1 gave of how this sort of early aspectualization of phenomena might work was from our having no acquired concepts to our conceiving of the two sides of the protoself/proto-not-self distinction. But I owe an account of how we get from these two concepts to our concepts of self and not-self— a large undertaking. And it is an undertaking I cannot complete — or even much begin — in this book. 
But my inability here does not concern me greatly, for I think it is the result of this task being primarily a task for empirical science (which claim does not imply that philosophers cannot have useful things to say about it — I just don't have many useful things to say).
Apperception
I have some well-formed guesses about how this task might be completed. Moreover, I think cognitive neuroscientists are already working on this task, whether they realize it or not. In the last chapter, and elsewhere, I have indicated that although the information in phenomenal states is locked up in their image-like nature, the states themselves may actually have a great deal of structure that either allows them to be "read-off" on the way to being aspectualized or allows them to cause (without being "read-off") aspectualized states reflecting that structure. Insofar as cognitive neuroscientists are looking for constants ("maps," "cone shapes," and so on) that are constants in our experience, I think it is these structures they are looking for. And these projects are, as said, empirical projects. But it must be remembered that I deny that phenomena have properties like shape. So any explanation must also explain how we get from the analogues of spatial properties to the spatial properties themselves. And this task is one no empirical scientist seems to be at work on. Nor do I have any less generalized suggestions for them about how to proceed. I am merely calling attention to the (very difficult) problem, not offering a solution to it. Section II of this chapter will be primarily devoted to a discussion of spatial concepts (actually just of one sort); but much of what is said about spatial concepts will, at least in spirit, carry over to temporal and object concepts. Before any of these concepts is investigated, section I continues the comparison of Cartesianism with its Externalist rivals.
1. It needs to be recalled, and emphasized, that a Cartesian theory of mind is rooted in scientific results and evidence, and that science presupposes an existent external world. And as pointed out as early as chapter 1, scientific explanations of the world have ontological commitments that a philosophic theory, compatible with and underlying that scientific theory, can and should avoid. The upshot of these facts is that scientific explanations — psychological ones in the present case — from a Cartesian standpoint will resemble many Externalist ones almost exactly. Suppose for the case in point (scientific explanation of concept possession and acquisition) that something like Fodor's early (1987) causal account of concepts were most nearly correct among Externalist alternatives. Then Cartesians might provide an account that would take in and utilize many of the
same data that Fodor's account relies on. That is, as regards psychology, there will be few differences, and perhaps no obvious ones, between Cartesian science and Externalist science. Cartesianism, as section IX of the previous chapter showed, can even allow for interactions among individuals to matter in concept acquisition and possession. (It had better: we learn many of our concepts from others, especially at our parents' knees.) So Cartesianism can even allow for much of the motivation behind Anti-Individualist Externalism. That is, the dispute between Internalism and Externalism is not so much about what the empirical data are, but about how to interpret those data so as to understand the determinants of contents in concepts and thoughts. But much psychology of concept acquisition can be practiced without ever considering this high-level problem of what makes content content. The scientists are more interested in the empirical determinants of how concepts and thoughts, with their contents, come into being. But these sorts of question are prior to, and independent of, the question of what makes content content. And on these scientific questions, the ones present-day empirical scientists are trying to answer, the Internalism/Externalism dispute will matter little, and Internalists and Externalists can agree on similar scientific solutions to these scientific disputes. Moreover, given that the grounds for Cartesianism are themselves based in science, philosophical theories like skepticism are almost certainly false. As said at the end of the last chapter and as indicated in previous chapters, the odds of something like a natural clone existing are staggeringly small, vanishingly close to impossible.
As Wittgenstein (1969) also pointed out, if our scientific beliefs are totally misguided, enormous numbers of our other beliefs would have to be surrendered — to the point of speechlessness.1 But as argued briefly in the previous chapter, and as will be argued further in the next chapter, there is no good reason for surrendering our beliefs. For skepticism to be right, belief after belief would have to be peeled away and rejected. Dennett (1991b, chapter 1) gives a good account of the vast knowledge that would be required to cause a brain in a vat to have perceptual-like experience. Moreover, as seen earlier, there is excellent reason to think that skepticism is false, and none — except for its very possibility — to think it true. So, once more, the rational position for Cartesianism will involve a commitment to a scientific account of concept acquisition and possession that depends on interactions between an organism and its environment, including its interactions with like organisms, and also including built-in genetic and neural constraints on its concept-forming abilities. That is, once more, in its rejection of skepticism, Cartesian science will look simply like science.

1. Though Wittgenstein was not concerned with scientific beliefs per se.

2. But then, why, if Cartesianism makes no difference to science, is the third part of this book dedicated to a defense of Cartesianism? Why not just get on with the work of scientific explanation, letting philosophers who want to, argue among themselves over issues that make no scientific difference? These are not idle questions, and they deserve answers. First, I am a philosopher; and I am interested in philosophical questions. And Cartesianism does make a difference to certain philosophical disputes. For example, both many Externalists and many Cartesians find skepticism to be false, but in different ways. On many Externalist accounts (though perhaps not all), if Externalism were true, skepticism would seem to be a priori false; and its defenders would seem to be making conceptual and logical errors in defending it. For Cartesians, skepticism is just false, a not-very-well-supported theory. Its defenders are mistaken, but in a more understandable and reasonable way. Hume's mistake, for instance, was equating the mental with the phenomenal; but Hume's error lay in an empirically false theory of mental states. He was not making an a priori logical or conceptual error. Moreover, most philosophers who put skepticism forward are not themselves skeptics. Instead, they believe only that the possibility of skepticism sheds a good deal of light on the nature of mind. This belief certainly lies behind Descartes' project in the Meditations. The very possibility of skepticism shows how deeply theoretical is our relation to the world.
Such insight is humbling, occasionally frightening, but wondrous and empowering at the same time. That our relation to the world is deeply theoretical does not mean, however, that some theories are not better than others. For the third time in this chapter, I emphasize that there is no good reason to think skepticism true, and good reason to think it false. Nor does the deeply theoretical nature of our relation to the world mean that there is not a way the world is, that there is no true theory. Of course, it does not mean that Realism is right either. Though, once more, in so far as science has a Realist bent and in so far as the grounds for Cartesianism are rooted in science, a fairly strong notion of Realism survives. Thus, Cartesianism makes a philosophical difference. And to philosophers, at least, this difference is important. But, second, although there is an important distinction between proto-theory (philosophy) and theory (science), the two are intimately connected and the border demarcating the one from the other is probably never a clearly drawn border. So while the Internalism/Externalism debate matters little to present scientific concerns, it is almost certain to matter to future ones. Third, while Cartesianism is going to tell a scientific story involving interactions between organisms and their environment, just as Externalism would, that fact does not mean that Cartesianism makes no difference to the science that both grounds it (epistemologically) and is grounded by it (metaphysically). Two important results for science come quickly to mind - even in regard to present scientific concerns. If Cartesianism is correct, (1) we should expect people's concepts to differ to various degrees from each other's, and a single person's to change over time. There may be a sort of communal concept, that which is the meaning of a shared word, for instance; but the communal concept is the result of a kind of vectoring of individual concepts, represented by each person's use of the word, and not the real meaning of the individual's concept. Communal meaning is a sort of logical construction, an abstraction, having no necessary and sufficient conditions of inclusion. And no fact of the matter may exist, in every case, to decide disputes about what the real public meaning of a term is. And because communal concepts are abstractions, linguistic meaning provides only quite limited insight into the concepts individual people actually possess. Public meanings yield but minimal insight into the psychology of individuals.
Externalists are mistaken to think that truths about public meaning carry over to, and determine, individual concepts. These Cartesian claims are at least compatible with Murphy and Medin's (1985) discovery that people maintain that there are necessary and sufficient conditions for their concepts, even when they are unable to cite them. That is, one way of interpreting the subjects' reactions is to read them as saying that ordinary language is inadequate to capture the content of their concepts. As said near the end of the previous chapter, concept content displays itself; it is not a collection of descriptions.
(2) Cartesianism allows for much greater distance between the way the world is and our concepts of it than do many sorts of Externalism (Fodorian [Fodor 1987] causal ones, for instance). Science, from a Cartesian point of view, can be seen as an attempt to discover concepts that better fit on the world than do our pre-scientific ones.2 We should not expect the content of our pre-scientific concepts of the world to be determined by the way physics now tells us the world really is (except causally determined — recall the discussion of ⌜tree⌝ from the previous chapter). We can no more (but no less) expect our concepts to closely match the world than to closely match each other's. These consequences of Cartesianism matter for psychology and not merely for philosophy. Nor is it likely that these two constitute the only consequences of Cartesianism relevant to present scientific interests. But even if there are no others, these are important enough in their own right to make the time spent defending Cartesianism worthwhile — even for present-day psychology.

3. If Cartesian accounts of the world allow for causal and other interactions between environment and organism, then it should come as no surprise that Cartesian accounts of concept acquisition do so likewise. So when it comes to spatial, temporal, and object concepts, Cartesian-driven scientific accounts will not be much different, if at all different, from Externalist-driven ones. A Cartesian does not need to claim that it is technically possible that brains in a vat or disembodied minds, say, could possess these concepts, only that it is metaphysically possible. The project of the next section is to clarify why scientific accounts of how we acquire and possess these concepts are likely to be compatible with Cartesianism. No attempt will be made to defend the particular scientific data underlying that project. Those data may turn out to be false.
But they are, at present, thought to be true; and as far as my own knowledge goes, I am not selecting only data and scientific theories that are compatible with Cartesianism while omitting those that are incompatible. The data and theories mentioned are representative, in relevant respects, of every relevant scientific theory that, to my knowledge, is trying to understand our possession of spatial, temporal, and object concepts.

2. If I am right about theories (for instance, that categorizations are theories), then in one important sense, there are no nontheoretical concepts, though the distinction between pre-scientific and scientific concepts would remain.
The version of Cartesianism defended in the previous chapter is committed to three innate abilities or faculties: (1) the ability to distinguish the in-control from the not-in-control, (2) the ability to apperceive at least some of our mental states, and (3) the ability to recognize enabling relations (especially causal ones). In the next section, I speculate on how these same three abilities are involved in our acquiring spatial, temporal, and object concepts. This project will mean going beyond present scientific knowledge, though it has to be compatible with that knowledge. Speculation, however, is necessary, and in the very nature of a philosophical project of providing a proto-theory. And for trying to get a picture of all the data now available, only philosophical theories seem possible at present, for we first have to tell a plausible story about the accepted data if scientific theory is ever to be possible. The scramble that is now on — for many of us interested in the cognitive neurosciences — is to tell the most plausible story about the accepted data.

II
4. Spatial concepts include ⌜far⌝, ⌜near⌝, ⌜here⌝, ⌜there⌝, ⌜left⌝, ⌜right⌝, ⌜up⌝, ⌜down⌝, ⌜on⌝, ⌜in⌝, ⌜through⌝, and many others.3 Spatial relations can be abstracted and, thus, captured in maps of various sorts, and so on. How do we come to acquire these concepts (or a subset of them)? Do we acquire them or are they innate? As remarked in the previous chapter, answers to these and many other questions are only beginning to be provided. But everyone agrees that we human beings possess spatial concepts: spatial concepts are obviously expressed in our various languages. And because they are, it follows that we human beings not only possess spatial concepts, we apperceptively possess them. But is apperception necessary for acquiring spatial concepts in the first place? Still another question in need of an answer. No one is in a position to answer these questions a priori. This rather unsurprising claim, nevertheless, distinguishes my view of philosophy from that of many other philosophers; for many philosophers (the most significant of whom is Kant) believe these questions can be settled only in an a priori fashion. Kant's own view of space, however — despite his own claims for it — is only a theory-sketch (a proto-theory). And proto-theories are a lesser species of theory than full-blown scientific theories. But "lesser" does not equal "less important" in every sense. As remarked on several previous occasions, these proto-theories make full-blown ones possible. They make possible the Gestalt shifts that lead to scientific theories.4

3. I am presuming that we possess spatial concepts like these before we possess a concept of space itself. This presumption is based on the belief that our concept of space is a quite sophisticated concept that involves an interconnection among spatial properties. If one wanted to hold that having a concept of a spatial property is to have at least a minimal concept of space itself, I would not strenuously object, since I am not sure that the two views have any important differing consequences. One might also claim that one cannot acquire any of these spatial concepts without acquiring all of them, that is, that ⌜space⌝ is either conceptually and temporally prior to spatial concepts or at least arises simultaneously with them. This view does have consequences different from the view I am presuming, but none that in any important respect affects the discussion to follow.

4. Because of the importance of proto-theories, I think that Kant was probably the greatest of all intellects. Newton is his only rival. But their greatness lies in different kinds of intellectual achievement. Newton created an encompassing scientific theory of grand scale from proto-theory (admixed with previously scattered small-scale scientific theories). It is the scale of Newton's achievement that is so impressive. (Such scale is more typical of proto-theory.) Kant, on the contrary, displayed his genius as a proto-theoretician. The scale of his theory is great, though no greater than that of many other philosophers. But the actual proto-theory he arrived at had an unprecedented paucity of precedents. Kant's leap was so much greater than anyone else's (with the possible exception of Plato's, about whose precedents we know but little) that it defies understanding (at least mine). Kant, in my view, invented cognitive "science" a hundred and fifty years before it was reinvented with all the usual precedents in place. The quotes surround "science" in "cognitive 'science'" because, at present, cognitive science is a set of proto-theories, and maybe a few small-scale, scattered scientific theories. It is not a large-scale scientific theory.

5. Before turning to spatial concepts themselves, a few more general remarks about Cartesianism on space are apropos. One reason for opposing Cartesianism is the following inference: Either spatial concepts acquire their meaning from adverting to actual spatial relations (Externalism) or there exists a private, internal space from which spatial concepts get their meaning, and to which the concepts primarily apply. But no private, internal space exists. Therefore, spatial concepts acquire their meaning from adverting to actual spatial relations (see Brewer 1992 for an instance of this argument). But this valid disjunctive syllogism is weighty only in proportion to the exhaustiveness of the disjunction, and Scientific Cartesianism rejects the exhaustiveness of the disjunction. While Internalist about meanings, Scientific Cartesianism rejects the idea that our concepts are about an internal space; but Scientific Cartesianism also denies that its rejection of this idea is tantamount to accepting Externalism. Defenders of the inference seem to think that Internalism is committed to the idea of a phenomenal space that mirrors, in some sense or other, actual space. But as we have seen, Cartesianism is compatible with any of the six perceptual theories limned in chapter 4, and four of these six maintain that phenomena are, at best, merely epiphenomena of perceptual processes. Moreover, in chapter 2 it was argued that phenomena do not have spatial properties of any kind. So Cartesianism is not by its very nature committed to a phenomenal space. More to be noticed is that even a Cartesian who accepts one of the last two theories of chapter 4 (the revised read-off view and the causal view), which maintain that phenomena have a more integral role in perception, is also not committed to phenomenal space — or to any other internal space. These latter views are committed to phenomenal analogues of space but not to phenomenal space. A can be an analogue of B without having the properties of B. "But if A is an analogue of a spatial relation, isn't A itself that very spatial relation? Surely, only a spatial relation can be an analogue of a spatial relation." The reply is that the claim is false, and the answer to the preceding question is, "No." For some reason, when we think about vision, we are sorely tempted to think of visual phenomena as being spatially laid out. We are, perhaps, less tempted to think of haptic phenomena (including kinaesthetic phenomena) as being themselves spatial, though the temptation is still there. But, surely, the temptation is considerably lessened, or even nonexistent, when it comes to aural phenomena.
There may be intensity differences in sounds, and timing differences, that constitute analogues of spatial relations; but aural phenomena do not themselves have spatial properties. Being an analogue to a spatial relation and being a spatial relation are different from each other. As argued in chapter 2, the theory of this book, while Internalist, holds visual and haptic phenomena to be like aural ones in this regard. There is no literal left–right distinction, say, in visual or haptic phenomena. Analogues of left–right may exist phenomenally — as in aural experience — but not the relations themselves. The mention of aural experience is worthy of emphasis here because aural experience is often completely neglected in discussions of spatial concepts and judgments. Spatial properties are often listed among the "commonsensibles," a term usually meaning "in common to the haptic and visual senses." Yet we aurally identify location, distance, and direction — spatial properties all. So if the phenomenal relations are only analogues — and not the real things — Cartesians can hold that we read off the analogues to acquire our spatial concepts or hold that the analogue relations of phenomena cause, in a noncognitive way, our spatial concepts (see chapter 4). How, though, might we get from analogue relations to the concept of spatial relations? I have no developed answer to that question. The point here, though, is that even if Cartesians accept a large role for phenomena in perception, Cartesians need not be committed to anything like an internal space, phenomenal or otherwise. For Cartesians, spatial concepts are by their nature applicable to an external world (a not-self), not to private internalized structures; but the meanings of those concepts are not bestowed by adverting to that world.5 That is, Scientific Cartesianism rejects both the notion of a private space and Externalism.

6. The case I begin with, in order to establish the point that Cartesianism is compatible with known scientific evidence and theories of spatial cognition, is that of barn owls. This case is especially interesting because it involves a nonhuman animal and because it involves aural processing of space, rather than haptic or visual processing. The spatial property focused on will be direction. Barn owls can detect and catch prey (usually small mammals, such as mice and voles) in nearly total darkness. Moreover, unlike bats, barn owls are usually completely silent in their hunting (like other owls, even the feathering of their wings makes their take-offs and landings more silent than those of other birds).
They do not emit the high-pitched squeaks picked up by a bat's "sonar" sense. That is, barn owls seem to rely on ordinary hearing — very acute hearing, but hearing, nevertheless.6

5. Kant (1787/1961) argued that there is an internal space, but that it is, at one and the same time, external space. It is in this commitment to a phenomenal world where I think Kant most went astray on this issue.

6. Most of the data concerning barn owls in the following discussion appear in Konishi 1993. Some I have learned from many years of birdwatching.
Let me set out a few basic facts concerning a barn owl's directional hearing. Only then will I be in a position to discuss the owl's directional concepts.7

First, it is known, even from human hearing studies, that there are two sources of directional hearing (as reflected in an organism's turning in the direction of a noise); and these sources are useful to the relevant organisms because of the fact that those organisms have two, somewhat separated, ears (much aural information is available to one ear alone, but not directional information): (1) timing differences, i.e., a sound's reaching one ear either simultaneously with, earlier than, or later than the other ear; and (2) intensity differences, i.e., the force of the sound waves, which again can be different for the different ears, with the head causing a loss of force to the ear farther from the source.

Second, there are "space-specific" neurons, neurons that "react only to acoustic stimuli originating from specific receptive fields, or restricted areas in space" (Konishi 1993, 68–69); and these neurons are situated in the external nucleus (part of the auditory area located in the owl's midbrain). These neurons primarily "map" spatial regions on the contralateral side (i.e., neurons in the left external nucleus mostly map spatial regions to the right of the owl, and vice versa for space-specific neurons in the right external nucleus), though an overlap exists.

Third, owl ears, unlike human ones, are offset vertically (the left ear being higher, but pointing downward; the right ear, lower, but pointing upward).

Fourth, the eyes of an owl are so large (making an owl well adapted to night activity) that they cannot move in their sockets. An owl, thus, has to turn its entire head to look in a new direction (it can turn its head approximately 270 degrees), making observing its directional looking easier than with most animals.
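The two aural cues just listed can be put in toy computational form. The sketch below is purely illustrative: it uses the simplest far-field model of interaural timing difference, the constants are made up (the ear-separation figure is not measured owl anatomy), the linear intensity-to-elevation rule is a crude stand-in, and nothing here is a claim about how owl or human brains actually compute direction.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air; standard value
EAR_SEPARATION = 0.05   # m; illustrative figure, not measured owl anatomy

def azimuth_from_itd(itd: float) -> float:
    """Left-right direction (degrees) from an interaural timing difference.

    Toy far-field model: a source at azimuth theta arrives at the two ears
    roughly EAR_SEPARATION * sin(theta) / SPEED_OF_SOUND seconds apart.
    Positive itd = sound reached the right ear first = source to the right.
    """
    x = itd * SPEED_OF_SOUND / EAR_SEPARATION
    x = max(-1.0, min(1.0, x))  # clamp against noisy, out-of-range cues
    return math.degrees(math.asin(x))

def elevation_from_ild(ild_db: float, db_per_degree: float = 0.5) -> float:
    """Up-down direction (degrees) from an interaural intensity difference.

    The barn owl's vertically offset ears make intensity difference an
    elevation cue; modeled here as crudely linear, which real heads are not.
    """
    return ild_db / db_per_degree

# No timing difference maps to straight ahead:
print(azimuth_from_itd(0.0))            # 0.0
# A sound arriving ~0.1 ms earlier at the right ear maps well to the right:
print(azimuth_from_itd(0.0001))
```
The inverse mapping is the point of interest: identical cue values produce identical direction estimates whether or not a real source produced them, which is exactly what the earphone experiments described next exploit.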
By putting tiny earphones into the owls' ears, Konishi and his colleagues found that they could control an owl's directional looking behavior. By offsetting the time a tone is heard in each ear, while keeping intensity constant, they could get an owl to turn in the direction of the ear in which the tone was first played. And the more the second tone was delayed (within limits, of course), the farther the owl turned to that side. Similarly, by offsetting the intensity of the tones heard, making a tone more intense in one ear than in the other, while keeping timing constant, the experimenters could get their owls to look up or down, depending on which ear heard the more intense tone. Finally, by changing timing and intensity in each ear, the Konishi group could get the owls to look in predictable directions of left–right and up–down at the same time.8

7. The distinction I drew between the concept of space and spatial concepts is mirrored closely by a distinction between a concept of direction and directional concepts.

8. The Konishi (1993) article uses the above information only as its starting point rather than as its end point. Its main purpose is to trace the processing from the lowest neurological levels to this high level (the external nucleus). It appears that the processing of timing and of intensity are parallel and independent of each other in early stages, becoming combined only well up the aural ladder (just before reaching the external nucleus, which seems to be the last-stage aural area, resolving phase ambiguities in the data). As with the visual hierarchy, cells further up the aural hierarchy are larger and sensitive to more kinds of information than those cells lower in the hierarchy.

7. Konishi's (1993) data certainly seem compatible with a Cartesian account of the owl's directional looking. In fact, the experiments are such that the owl looks in directions in which there is no sound source. They "map" areas as aural sources even in the absence of external sources. The owl's looking behavior is the result of aural illusions created by the tones being played first through one earphone and then the other. But of course, the real question is what the relevance of the Konishi results is to the issue of spatial concept acquisition and possession. Why does directional turning show anything relevant to the issue of whether barn owls possess spatial concepts or of how they acquire them? Let me begin with the question of concept possession, which itself can be divided into two further questions: (1) Do barn owls possess directional concepts? (2) Could a being utilizing as its sensory information only that available to a barn owl possess directional concepts? These are obviously different questions, though if the answer to the first is "Yes," then so must be the answer to the second. However, the answer to the second could be "Yes" even when the answer to the first is "No." Consider the questions in order.

(1) That owls turn in a given direction according to aural cues does not mean that owls possess directional concepts. A robotics expert could probably construct a relatively simple machine, one most of us would agree has no concepts, that could mimic owls' behavior in these experiments. Whether owls possess directional concepts is an empirical/theoretical question that is resolvable only on grounds additional to their directional head-turning behaviors. As suggested in previous chapters, to have a concept is to possess something like a mental word. A concept resembles a word in at least two respects. First, a concept is word-like in that it has an aspectual nature; but, unlike words, its aspectual nature is not set as a matter of convention. Second, a concept is word-like in that its role in judgments is similar to that of words in the sentences that express the judgments. Just as words have little life of their own outside the sentences they are used in, concepts have little life of their own outside the thoughts, judgments, and so on in which they are embedded. Besides these similarities to words, it must be remembered that a concept is a theory, a law-like generalization, that allows for bringing instances under itself, enabling us to organize what we experience so as to better predict (postdict) and explain. Supposing these conjectures are true, then using the results of chapters 1 and 2, one can conclude that owls possess directional concepts if they perceive directions, for as shown in those chapters, perception is judgment.9 Since I think it plausible that owls perceive directional properties and are not merely caused to turn their heads by a processing stage prior to judgment, I find it plausible that owls possess directional concepts. And as we saw in the last chapter, there are independent reasons to think that owls make judgments. First, if owls feel pain, then given the conclusions of chapter 3, owls make judgments. And since pains are primitive experiences, it would not be surprising if owls made aural perceptual judgments, especially since pains, in being evaluative, require an apperceptive level of judgment in addition to a first-order representation.
Second, if owls conceptually distinguish themselves from others (as they must do if they have material concepts at all, i.e., concepts of an external world), then as the last chapter argued, they are capable of making certain kinds of judgment (being in-control versus being not-in-control, and so on). Third, since owls have complex brains, the "broadcasting" function of apperception leads us to think that they are more like us than like current computing machines. Of course, in the end, it is an empirical question whether owls perceive, feel pain, and conceptually distinguish themselves from others; but I know where I would place my money.10

9. Or "judgement." Throughout the rest of the book, I am going to assume theory P1 from the previous chapter. Using P2 instead would require some adjustments in the text — but none that would be surprising or that would imply a rejection of the main points of this and the next chapter.

(2) But even if my empirical conjectures are false, the question remains whether beings possessing no further sensory information than that derived from an owl's aural experiences could, within realistic empirical limits, possess directional concepts.11 The answer to this question is easier than might at first be imagined. Since human directional hearing appears to work much like the owl's (though our ears are not offset vertically, and so the account of our directional hearing will be somewhat different from that of the owl) and since we do possess directional concepts, the answer must be "Yes." It may reasonably be argued that the previous footnote precludes this "easy" answer since it is not at all certain that a human-being-like creature that only heard direction could acquire a concept of it. But my concerns are unaffected by that worry. What I am wondering is whether a being whose sensory apparatus resembled that of a barn owl — and I gather that barn owls, in fact, also visually and haptically process directional information — could (again, within reasonable empirical limits) come to possess the concept of direction. That I focus here on a single sense is irrelevant — as far as I can see — to the task at hand. The real question underlying all these others, though, is what makes these concepts directional concepts (or what makes the judgments in which the concepts are embedded directional judgments). It will help to answer this question if we speculate about how we acquire directional concepts. As maintained earlier in this chapter, both Cartesians and many Externalists will give a similar kind of account: Actual sounds, at actual directions, with actual intensities, cause the timing and intensity differences that, in turn, cause us to perceive them as being in different directions.
There also, then, needs to be a further story, perhaps involving an awareness of head turning and so on, that accounts for the rise of the aspectualized judgments of directional perception we know ourselves to have. If it were not for the fact that the sound waves reach different actual ears, with different timing and intensity, this second story could not get off the ground. That is, Cartesians are committed in their science to actual noises, actual sound waves, actual ears, actual brains to process the sound waves and the information they contain, as much as Externalists are. Both will agree that neural structures12 are created in this way, and Partist Externalists may well agree that one of these structures is the concept ⌜direction⌝. But relevant Externalists will claim that what makes this particular structure that concept is that the concept adverts to (perhaps by being mapped onto or by being caused by) actual directions. It is this claim Cartesians deny. And so Cartesians must face two further questions: (1) Can they make sense of, for example, a brain in a vat's acquiring (and, thus, possessing) directional concepts — even if there is no real reason to think any brain in a vat does possess such concepts? (2) Supposing Cartesians can show that a brain in a vat could have the same brain structures that in normal brains result from sounds in an actual world, what would make one of those structures into a directional concept? These questions, like the pair previously considered, are not independent of each other. If nothing could make a relevant brain structure into the relevant concept, the answer to the first question would be "No." Since there are two questions here, even though they are not independent of each other, two stages will be used in considering them. First, it will be shown that the same structures could, in principle, emerge in a brain in a vat. This result will answer neither question. But then it will be argued that no known scientific data or acceptable philosophical arguments are incompatible with the idea that among these structures could be a concept of direction. And this argument, while still answering neither question, will at least show that nothing but hard work appears to stand in the way of a Cartesian's providing answers.

10 And of course, it is a theoretical question whether my arguments of the previous chapters are correct. For empirical evidence, albeit about rats rather than barn owls, that nonprimates employ concepts, and don't merely respond, see Dickinson 1988.
11 This is an oversimplified question. It may be that multimodal sensory perception is required for acquiring any spatial concept. Whether it is, is another empirical question. For the present, I ignore the complications resulting from the issues raised by this note.
The Konishi experiments pretty obviously suffice for the first stage of the argument, for the owls turn their heads due to illusions: no sounds emanate from the directions to which they turn. And we can imagine a later Konishi who would bypass playing tones through earphones, opting instead for more direct stimulation of various aural, neural nets. And it is only a further small step to imagining a yet later Konishi stimulating those same circuits in an owl's brain in a vat. Granted that what we imagine would likely be physically very difficult, if not impossible, still it does not seem to be, in principle, impossible.
12 "Neural structure" should be understood loosely here, to include a whole realm of relevant neural structures, states, and processes.
Suppose, then, that these neural nets could, by a further causal route, lead to another net that in normal, intact owls would be a directional concept. What would make that structure, when reproduced in an owl brain in a vat, into a directional concept? I am at present unable to answer this question — either for owls or for owls' brains in vats. Whatever the answer, it will have to allow for relations of that structure to other neural structures, relations that, when understood in their complexity, can only be understood as relations of a concept to judgments containing it, relations of those judgments to other structures, themselves understandable only as judgments, those latter relations being best understood as various sorts of inferences, logical relations of other sorts, evidential relations, and so on. However, the arguments of section V of the last chapter pretty convincingly show that certain Individualist Externalisms are unlikely to provide an understanding of concept content. For one thing, as argued there, the world is just too different from the way we often conceive it to be. The evidence overwhelmingly points to our theorizing the nature of the world. To pursue the same issue through a different example from those presented in chapter 9, suppose the world is, as a matter of fact, Riemannian non-Euclidian, so that Riemannian, non-Euclidian triangles cause us to have our concept ⌜triangle⌝. Does that truth mean that "plane," "triangle," and so forth were always words for non-Euclidian concepts, but we just didn't know it? Have we been conceptually correct all along, although we never understood the content of our concepts? I find it hard to believe the answer is "Yes" to either question. Suppose it is, though. What, then, is the status of Euclidian geometry and of Lobachevskian non-Euclidian geometries?
Are "plane" and "triangle" Riemannian terms, and so these other geometries are not other at all (or are thoroughly mistaken, since, for instance, the angles of a triangle do not sum to 180 degrees)? Or are the sentences of the alternative geometries meaningless because their terms are? If "No" is the correct answer to these questions, as it surely is, then we are owed an explanation of how these terms have meanings in those geometries that do not "match" the world. And if an explanation can be given, it is almost certain to be one that a Cartesian can generalize to all cases. Perhaps Anti-Individualist Externalists can avoid these problems; but for the reasons cited in the last chapter, Anti-Individualism is even a less likely position than its fellow Externalist rivals.

8. I have provided no account of how brain structures are spatial concepts (i.e., have content). Instead, I have continued the theme of the previous chapter: since Externalism is most likely incorrect as an account of concept content, there must be a Cartesian account of it. And I have continued by showing that work in science here to date is not incompatible with Cartesianism. Cartesian science is going to look a great deal like Externalist science. More to the point, it is going to look exactly like science as it has always been practiced. But before closing on space (no pun intended), a further piece of speculation might be useful for clarifying Cartesianism. Is apperception required for an owl to acquire spatial concepts? Let me say right off that, as with most of the other questions raised in this section, I can offer no developed answer. Still, a few things are worth saying. First, if apperception is required, it would not be too surprising. After all, as chapter 3 argued, apperception is a more primitive state of organisms than we might have thought. Second, apperception is necessary for a self/not-self conceptual distinction and, thus, for a conception of a world (the not-self) where spatial dimensions exist. Moreover, as Kant noted, ⌜space⌝, ⌜time⌝, and ⌜object⌝ are unlike most other concepts, being clearly more fundamental. One might even argue that they have to precede the self/not-self distinction, for that distinction is itself rooted in an in-control/not-in-control distinction; but for us to have a sense of being in-control (or not-in-control), there must be an idea in us of something we are in-control (or not-in-control) of. And, surely, the argument might conclude, spatio-temporal objects are the only candidates around for that "something." While initially plausible, this argument is mistaken.
Phenomenal experiences, as argued in the previous chapter, are the best candidates for the "something" — in or out of control — that underlies the self/not-self distinction. So the question remains open as to whether apperception is necessary for acquiring these other fundamental concepts. In the case of spatial and object concepts, my own speculation is that apperception is necessary. I believe that there are certain structures, analogous to spatial and object ones, locked up in the phenomena and that apperception is necessary both for unlocking the information in the phenomena and
for moving from analogues of space/object to spatial and object concepts. But these claims are mere speculations, and they are empirical speculations. A correct account could turn out quite otherwise.13 Even though a self/not-self distinction is primary and precedes and underlies all other acquired concepts — and in that sense apperception is required for having spatial concepts at all — it is possible that dividing the not-self into a spatial (or temporal or object-filled) world itself does not require apperception. If spatial judgments are possible at the level of C2, why not also at the level of C1 alone? But if C1 judgments are possible, then there must be concepts that C1 judgments contain. It could be the case, of course, that the concepts they contain must first be learned through apperceptive consciousness. However, certain C1 judgments are, according to the present theory, acquired very early on. "This experience is in-control" is a C1 judgment. C2 is an awareness of this judgment. C2 makes possible the self/not-self distinction, but only because of this C1 judgment. "X causes this experience," or something close to this judgment, is also a C1 judgment we are from the beginning capable of making; and it also plays an important role in acquiring the self/not-self distinction. So if phenomenal states, say, can cause a spatial judgment in C2, why could they not cause a spatial judgment in C1 directly, at least after the concept of a not-self has been acquired? I know of no a priori arguments against this possibility. Barn owls may acquire spatial concepts and make spatial judgments without being apperceptively aware that they possess the first or think the second. However, even if the relevant concepts are first in C1 and not themselves apperceived, it does not follow that apperception of other states isn't necessary for acquiring these concepts.
Furthermore, the complex brain and its need for broadcasting modules may suggest that owls are apperceptively aware of their spatial concepts (which is not the same as also being aware of this higher-order awareness — most apperception is not introspection). Scientific Cartesianism, even with its strong commitment to the importance of apperception, is noncommittal on the question that began this subsection and views it as a question still to be resolved, but by empirical/scientific methods.
13 Despite my earlier disclaimers, I think it may also turn out that multimodal sensory experience is necessary for acquiring spatial and object concepts, that such multimodal experience is required for getting from analogues of these concepts to the concepts themselves. However, my ideas on this issue are so ill-formed at this stage that I cannot present them — even as speculation.
9. In regard to space, only directional concepts have been considered, and only as they might arise through a single sense. But it is at least possible that the points made concerning them generalize to every spatial concept, whether acquired through an aural, visual, or haptic sense. Nothing in the discussion seems to apply to directional properties while excluding other spatial relations in a way relevant to the issues at hand.

10. And in a similar way, the spatial example generalizes to object concepts. Temporal concepts may require a quite different account. As Kant saw, the really hard question about ⌜time⌝ for an Internalist is why we conceive of time, unlike space, as having a single direction. Merely that one experience follows another cannot account by itself for time's directionality. That experience E2 succeeds experience E1 does not mean that what is experienced in E2 occurs later than E1 (for a similar point see Dennett 1991b, 144–53), nor that we should conceive of it as such. I would suspect that, as Kant maintained, the concept of causality underlies our temporal concept's having a unique directionality. And on the Cartesian view presented in this book, human beings and other organisms possess an innate conception of enabling relations (including causality), at least in a rough way (see Leslie 1988; Spelke 1988; Bauer 1993; and Baillargeon 1993 for evidence showing early awareness of causality). The work on our possession and acquisition of object concepts is in as early a stage as that on spatial and temporal ones. But whether we consider the ideas of Biederman (1987), Kosslyn (1987), Spelke (1988), or virtually anyone else,14 it is pretty obvious that our object concepts are deeply theoretical.
Gibsonians would deny the theoretical nature of these concepts and raise the same point about object concepts earlier raised about spatial ones: Even if homologous structures could occur in a brain in a vat — homologous to the various cone representations of Biederman, say — and even if other structures homologous to normal ones occur, what would make those structures object concepts or other structures object judgments? Once more, the answers are contained in the previous chapter. On the one hand, there is the apparent failure of Externalist accounts; and, on the other hand, there is the plausibility (as Spelke
14 Including Marr 1982 — though I realize that this claim is controversial (see Egan 1991, who would side with me on this reading of Marr).
[1988], Kosslyn [1987], and so many others have argued) of a deeply theoretical account of our perceptual concepts. And the terms of a theory cannot be fully dependent for their meanings on mapping the world, or the "theories" would not be theories at all. They would be descriptions. And it needs to be emphasized once more that categorization is itself a basic kind of theorizing.

11. As Wittgenstein was the focus of the previous chapter, Kant has been the focus of this one; and highlighting a difference between his view and mine is a good way to close this chapter. Kant, unlike anyone before him, saw the depth of the problem of our possession of the concepts ⌜space⌝, ⌜time⌝, and ⌜object⌝. And unlike the present author in the present chapter, Kant went into great depth in order to explain how these concepts are possible. Yet Kant's in-depth analysis not only helps constitute his greatness; it reveals a major error on his part as well. Kant thought one could say all that was important about how these concepts are possible in an a priori fashion. To the contrary, Scientific Cartesianism holds that correctly understanding concept acquisition and possession is largely an empirical/scientific question (actually many such questions). And it is because science has only begun to grapple with these questions that the present chapter can provide no extended answers. The Cartesianism of this book is — and should be — called Scientific Cartesianism, and the point of this chapter has been that science and Cartesianism are fully compatible.
11
Will

In chapter 9, the will was said to play a prominent — a pre-eminent — role in our cognitive lives, underlying virtually all concept formation. It was briefly mentioned there that because of this function the will also plays a vital role in our emotional lives — in our psychological well-being and in our psychological ill-being. In this chapter, I wish to expand on those claims, both because the issues are intrinsically interesting and because, by doing so, the place of the will in Scientific Cartesianism will be clarified. These ends are best achieved by considering some familiar problems of the will.
1. The problem of free will has been a vexing one. It is one of those philosophical problems where even trying to state it clearly has proved taxing, and often unconvincing. And it is a problem where none of the proposed solutions — hard determinism, soft determinism (compatibilism), indeterminism — may seem satisfactory. Each may seem to be mistaken on some issue or to overlook important data to be accounted for. Yet, despite the problem's being difficult to state, there really does seem to be a problem. Indeed, many would describe the situation introspectively, not merely by saying that we feel that there is a problem, but that we feel the very problem itself.1
I believe there is at least a partial solution to the free will problem, or at least that one solution is more reasonable than the others and can
1 Though I know philosophers who are puzzled as to why anyone would be much moved by this problem. This is a frame of mind that I myself find puzzling in its turn, although I understand, and I think appreciate, its origin. In fact, all of these people hold the position on the problem of free will that I arrive at in this chapter; however, my reaching that position seems to have been much more psychologically hard won than it was for them. Much of the material for this chapter is drawn from Nelkin In Preparation-b.
be arrived at by understanding salient facts about how human beings form concepts. Yet, there is also a way of interpreting these facts that leads to the conclusion that there is no solution to the free will problem and that a fundamental paradox resides at the basis of existence. But this latter interpretation is, I will argue, stronger than what the data themselves support.

2. Let us, for the moment, focus on a slightly different problem from that of free will itself: that many of us feel the problem of free will so deeply and feel it to be such a deep one. That is, I want to consider how we ever came to be worried by the free will problem in the first place and why this problem seems to matter so much to us. I want to maintain, based on the arguments of chapter 9, that we feel this problem so deeply, not simply because it seems to threaten the distinction between human actions and things that merely happen to human beings, but because it seemingly threatens to undermine the world itself and everything in it, including our very self. I am not the first to concern myself with this aspect of the problem. Thomas Nagel has struggled with it, most especially in his paper "Moral Luck" (1979c); and Nagel is one of those who conclude that the problem of free will is intractable.2 But the results of chapter 9 provide new insights and new explanations for questions such as: Why do we feel the problem so deeply? Why do so many of us, even when unconvinced by indeterminism, feel so strongly pulled in its direction? Why does the will matter to us? And in answering these questions, I hope
2 "I believe that in a sense the problem [of free will] has no solution, because something in the idea of agency is incompatible with actions being events, or people being things" (Nagel 1979c, 37). "We are unable to view ourselves simply as portions of the world, and from inside we have a rough idea of the boundary between what is us and what is not, what we do and what happens to us, what is our personality and what is an accidental handicap" (Nagel 1979c, 37). The significance of this quote will be clarified soon. "The inclusion of consequences in the conception of what we have done is an acknowledgement that we are parts of the world, but the paradoxical character of moral luck which emerges from this acknowledgement shows that we are unable to operate with such a view, for it leaves us with no one to be. The same thing is revealed in the appearance that determinism obliterates responsibility. Once we see an aspect of what we or someone else does as something that happens, we lose our grip on the idea that it has been done and that we can judge the doer and not just the happening. This explains why the absence of determinism is no more hospitable to the concept of agency than is its presence — a point that has been noticed often. Either way the act is viewed externally, as part of the course of events" (Nagel 1979c, 38).
to show why Nagel's skeptical conclusion is both understandable and yet in error.

3. Two sorts of answers are usually given to the above questions: The will matters (1) to morality, and (2) to a self-concept. Rather than focus on the first answer, as have so many, including Nagel, I will focus on the second. Why does the will matter to a self-concept? The answer to this question involves the account of cognitive development argued for in chapter 9. That account, as we saw, is constrained by data obtained from developmental psychology. However, those data, while accumulating rapidly, are still admittedly skimpy, and open to varying interpretations, not just to mine. So the best justification for the speculation to follow, like the earlier one, is that, if right, much is explained by it — including, in this case, why the will matters so much to us. It also enables us to gain insight into some kinds of mental disturbances and to gain a deeper understanding of why subjectivity itself matters. Finally, it enables us to see how Nagel might have arrived at his skeptical conclusion (though the route described is not so much the route he took as it is a clarification of the route he took) and, correlatively, to gain insight into why his route is neither the only, nor the best, route to have taken.

II
4. Before moving on to the "solution" to the psychological problem, let us first take a brief look at the many ways in which the will does matter to our lives. A large literature documents the relation between our feeling in control of our lives and psychological well-being. Let me cite a few documented cases. One of the more interesting is a study done by Watson and Ramey (1972). Infants, eight to ten weeks old, had their heads placed on pressure-sensing pillows in their cribs for ten minutes a day for a two-week period. There were three groups, and for each group head movements were counted during the relevant periods. An infant in the experimental group, by moving its head, put in motion a mobile attached to its crib. The other two groups served as comparisons.3 In
3 I realize that such groups are usually called "control groups," but I am using "comparison groups" here in order to avoid confusion, since the experimental group is, in another, and more relevant, sense of "control," the control group.
one, the mobile moved every so often, though not in response to anything the infant did. In the other, the "mobile" was a stabile: it didn't move at all. The results were that the experimental group of infants moved their heads far more often than did either of the comparison groups (which moved theirs about equally often). Perhaps the only surprise here was the very young age of these children, and these results provide evidence that we recognize an in-control/not-in-control distinction quite early on. Of course, a more straightforward stimulus-response interpretation of these results is possible as well. If the first reading is correct, though, then we can understand the nonaction of the comparison group infants in the following way: since they were not in control of the movements of the mobile, they had no particular reason to move their heads. Even more interesting, and more in favor of the first interpretation, is a further set of results that initially were neither tested for nor solicited, and became more formally surveyed only because of spontaneous comments from several mothers of the experimental babies. The in-control babies experienced a good deal of delight when the mobile manipulations took place, while little delight was reported (with a couple of exceptions) in the other babies, even in cases where the mobile moved every so often. While these last results are more anecdotal than the first set, Watson and Ramey take them seriously enough to devote several pages of their article to them. A second relevant study involves an aspect of one that was considered in chapter 3. Glass et al. (1973) demonstrate how a sense of control in adults appears to lessen stress, even perhaps diminish pain. As discussed earlier, volunteer subjects in these studies were to push a button as soon as possible after being given a six-second, somewhat painful shock (this was at a level that each person had previously identified as painful).
Several shocks were administered in each testing period. The subjects were told that reaction time was being measured. Afterwards, the subjects were divided into two groups. The comparison group was told that the experiment would be the same except that the shocks would last only three seconds. The experimental group was told instead that the shocks would be shortened to three seconds if their reaction time was of a sufficient speed. In fact, each group received the same number of three-second shocks (speed-threshold played no role). Following these sessions, in order for the experimenters to measure
stress, both groups were given a Stroop Color Word Test (where the subject is measured for reaction time in reading color words that are often printed in a different color from what they say — for instance, "red" might be written in blue). The Stroop Test was used because there is an apparently established correlation between amount of stress felt and reaction time on this test. Subjects were also surveyed as to the quality of their pain experiences. The reaction times of the experimental group were significantly more rapid, showing less stress in members of that group; and the only difference that apparently accounts for this enhanced performance and reduced stress seems to be their belief that they had been in control of their shocks (and of course, their beliefs were, in fact, false). Moreover, as noted in chapter 3, the experimental group also reported diminished degrees of pain relative to their pre-test reports at the same level of shock intensity, although their autonomic response measurements presented a like profile to that of their counterparts in the comparison group, who reported no diminution of pain level. Perceiving oneself as in control apparently enables one to manage stress and pain better than when one doesn't.4 In a related kind of experiment, Staub, Tursky, and Schwartz (1971) found that subjects who could self-administer shocks and control their intensity endured longer shocks and reported less discomfort at higher levels of intensity than their paired partners in the experiment who received the shocks passively. Also of interest is that when the in-control group was made passive for the next set of shocks, they were found to be less pain tolerant than the comparisons (see previous footnote).
Along these same lines, cancer patients who are able to self-administer morphine doses for their pain (as opposed to having hospital staff give regular, time- and dose-dependent injections) use less morphine — and use it less often (Melzack and Wall 1983, 275). Even studies on nonhuman animals point to the importance of being in-control, and to the relation of being in-control to pleasure. Rats' brains have been wired with electrodes such that by learning to push a lever, the rat can stimulate activity in the relevant brain area. When this intracranial self-stimulation (ICSS) occurs in the ventral
4 If the subjects, instead, got six-second-long shocks following the second set of instructions, subjects who thought they would be in control did worse on the Stroop Test than those in the comparison group. As remarked in chapter 3, the likely interpretation here is that thinking one is in control and then finding out one isn't, or isn't able to succeed, is psychologically more damaging than not starting with the belief in the first place.
tegmental area (VTA), rats have been found to push the lever over and over again, often to the neglect of all their other bodily functions. That is, they will pass up food, drink, sex, and so on to auto-stimulate their VTA. Yet, when the animals receive "experimenter-administered electrical stimulation (EAS) at rates and parameters for which they had previously self-stimulated" (Porrino 1987, 53), both metabolic and behavioral events show a different profile from ICSS: ". . . [T]he behavioral context of stimulus presentation is a significant factor in determining the neurochemical effects of a variety of stimuli in the brain" (Porrino 1987, 56). For whatever reasons, being in control of their stimulations, even when number and pattern are similar, seems to matter both to the rats' later behaviors and, one supposes, to the pleasure they take from the stimulation. In addition, although Porrino's studies are on rats, they lend support to the in-control interpretation (versus the stimulus-response interpretation) of the Watson and Ramey infant experiments. Many clinical studies report a tie between feelings of loss of control and suicidal tendencies (Lefcourt 1976, 77), as well as depression in general (Lefcourt 1976, 149; Ferster 1973). Depressed individuals often report that they feel overwhelmed by life, that nothing they do matters, and so on. We know that a classical symptom of clinical depression is one's "inability" to get out of bed. The existence of a belief that our will is helpless to effect important actions, a loss of the sense of control, seems to lead to a generalized hopelessness and to an inability to act, even where action seems otherwise possible (like getting out of bed). Consider this quote from a major figure in the psychological literature on control: This book focuses upon research that has been conducted in psychological laboratories and in field settings concerning the effects of an individual's perception of control.
Whether people, or other species for that matter, believe that they are actors and can determine their own fates within limits will be seen to be of critical importance to the way in which they cope with stress and engage in challenges. In other words, what Skinner believes to be an irrelevant illusion will be shown to be a very relevant illusion — one that seems central to man's ability to survive, and, what is more, to enjoy life. (Lefcourt 1976, 2) These studies, reminders, and quotes serve here only to emphasize, and provide evidence for, what most of us already suspected simply
from living our own lives: the sense of control is, in some mysterious way, deeply important to our lives, to the way we experience that life.

III
5. But we are still faced with the question of why the will matters to us as much as it does. A brief summary of the account of concept formation from chapter 9 might be helpful. As newborn infants, we are presented with an unbroken stream of experience.5 This stream, as such, displays neither its types nor its tokens on its surface. Metaphorically — but only metaphorically — the neonate's experience is, to quote William James (1890/1959, vol. I, 488), a "blooming, buzzing confusion." Two factors allow the infant to begin sorting out this undifferentiated stream into tokens and types. One factor is apperception: the infant is second-order aware of at least some of this unbroken stream. The second factor is that in some cases the infant affects (and effects) its experiences, while in other cases it does not. The latter just occur. Thus, an infant finds itself apparently in control of some experiences, but not others. The words "finds itself" are used purposefully, though advisedly. I use the words purposefully, in that the infant apperceives that it is sometimes in-control, sometimes not-in-control. In view of studies such as the one involving the pressure-sensing pillow (described in the previous section), there is reason to believe that we are aware of this distinction very early on. If the theory of chapter 9 is correct, we are capable of recognizing this distinction right from the beginning. The in-control/not-in-control distinction we make is a quite primitive one; and we are apperceptively aware of it, also from early on. Yet I use the words advisedly, because in the beginning the infant has no awareness of itself as itself, i.e., no concept of a self. The in-control/not-in-control distinction is the primitive one, probably innate, and precedes awareness of a self/not-self distinction. The basic distinction neonates are apperceptively aware of, then, is that of in-control/not-in-control.
In addition, this distinction is tied to a primitive concept of causation, or at least to an early apperceptive awareness of it.[6] Combining the two primitive kinds of awareness, we get an either-or dichotomy: either this (kind of) cause of experience (an in-control experience — an action) or that (kind of) cause of experience (a not-in-control experience — not an action). This distinction allows the infant to begin dividing the stream, at first into tokens: This part of the stream is in-control; that part, not-in-control. Moreover, the in-control/not-in-control distinction is also the basis for the first type distinction, the self/not-self distinction, though, as stated in chapter 9, the "self" is not exactly conceived of as an individual thing, a self, but as a proto-self. The not-self, also not yet individualized, can similarly be thought of as the proto-external-world (the proto-out-there). It is crucial to emphasize that the willed/unwilled distinction underlies both one's primitive concept of the proto-self (over and against the proto-not-self) and one's concept of a proto-external-world (over and against the proto-self). Only because one apperceptively takes some behaviors but not others to be in-control can this primitive (and primary) distinction of proto-self and proto-other be made. Without an apperceptive awareness of the willed/unwilled distinction, one would possess neither of these basic concepts. Almost all concept formation begins with this willed/unwilled distinction and our apperceptive awareness of it. We can think of these primitive kinds of apperceptive awareness (of seeming to be in-control or not-in-control and of causation) as constituting a kind of essential subjectivity — because internal to experience — a deep subjectivity that underlies, and makes possible, the ordinary — but derived — subjective/objective (self/not-self) distinction. To continue the account, the focus is turned onto the not-self. The not-self, by whatever conceptual mechanisms, gets broken up into objects: bodies. And those bodies are categorized into types of objects.

Only after bodies are differentiated do we come to recognize our self (as an individual thing): our self is a body inhabiting the same world as all other bodies, yet different from all other bodies. We first identify our individual self as a body, which not only has shape, size, and so forth, but also thinks — especially wills. We then come to perceive that our self, this particular body, is a member of a type: human being. So we come to ascribe thinking to all members of the type, to all human beings (and to many other animal bodies, though to human beings first and foremost). The sense of control, the apperceptive seeming to be in-control, I would claim, is the basis for our belief in free will. And, as said, this essential subjectivity of will experiences is the very foundation of our being able to acquire any other concepts at all.[7]

[5] By experience, I do not mean only phenomenal experience. See Part Two. Only PI is summarized here.

[6] Piaget (Piaget and Inhelder 1969) claimed that the causation we first recognize is personal causation only. Leslie (1988) has given results disputing that claim. Actually, both sorts of causation recognition have now been shown in quite young infants (Wellman 1992, 233). My bet is that some sort of causation recognition is either innate or acquired extremely early — earlier even than experiments so far seem to have established. For other evidence that causation is recognized quite early on, see Spelke 1988; Bauer 1993; Bailleargeon 1993.

IV
6. What conclusions can be drawn from this account as to why the will matters and, more ultimately, as to the free will problem? The answer depends on how one understands and interprets the theory of chapter 9. In this section, a Nagelian way of interpreting it will be considered. It is an important part of this speculation on the origins of concept formation that we first understand thinking, willing beings — our own self and others — as physical objects: objects in space and time. The mind/body division comes only later, when we come to realize that the body is often outside our control — as in disease, or when a bodily part is asleep, or when we are frozen with fear, and so on: all those cases where actions we normally successfully will to perform do not come off (or those we will not to come off nevertheless do). Only then are we tempted to think of the body merely as something belonging to us, and only then do we begin to identify our self with our mental activities alone, especially with our will. Inherent in these loss-of-control experiences is an increased recognition of how much is out of our control — of how even what we think is in our control is instead at the mercy of external forces. Further along, some confront the mind/mind problem: we come to realize that our very thoughts, desires, and other mental states are often out of our control. Despite our best efforts, we cannot stop thinking about someone or something; despite our strong wishes not to, we cannot help eating a second pastry; and so forth. In fact, we come to discover that our beliefs, desires, emotions — those causes of nearly all, if not all, our actions — are almost never in our control. At this point we might begin to suspect that our very self, however we identify it, is not in our control, that we are always in subjection to the not-self, that our sense of being in-control is nought but illusion and never has been anything else. In initially identifying our self, we relied on a sense of control that makes it appear that our actions originate entirely within ourselves. This sense is not one of omnipotence (as Freud thought): without a recognition of the not-in-control, we would not have a concept of self at all. Rather, it is a sense of total freedom in those actions over which we take ourselves to have control, those actions of which we are the cause. The sense of autonomy, not of omnipotence or megalomania, underlies all concept formation. It was in these seemingly autonomous actions that we apperceptively found ourselves acting on the world, rather than being acted on by it. But now we discover that we are not the sole and unfettered origin of our own behaviors. Apperception misled us.

[7] It is important to note that the notion of subjectivity that I have labeled "essential subjectivity" says nothing about phenomenality, about phenomenal experience. A kind of subjectivity may go along with phenomenality; but there is no reason to think that it is to be identified in any way with essential subjectivity, which is based in the sense of control (see Nelkin 1994d). Nagel's "what it is like to be" confuses phenomenality with that deep, and essential, subjectivity that is to be identified with a sense of control. Only the latter is relevant to the free will problem. And because Nagel has conflated these two distinct notions of subjectivity, he is bothered by issues that are extraneous to the free will problem. Essential subjectivity does not feel any way at all, at least not in the sense of phenomenal experience, or in any sense closely analogous to that one. As noted in previous chapters, some want to distinguish "phenomenological" from "phenomenal" (see, for instance, Searle 1983; Leon 1988; Goldman 1993). In the way usually attempted, it is a distinction without a difference (or "phenomenological" has no instantiations). On the other hand, the motivation for the shift to "phenomenological," insofar as it is rooted in the understanding that the phenomenal is not what essential subjectivity consists in, is correct. Essential subjectivity is something much deeper than — and different from — phenomenal, or phenomenological, subjectivity.
The external world hinders/helps our actions, it affects and effects our very thoughts, and it influences or thwarts our very will itself. Our sense of control, our sense of autonomy, begins to look to be mere illusion. The will looks not to be free. And we are frightened and disturbed.

7. And now we can see why. Now we can see why the will matters so much, why the problem of free will is felt to be so deep and so disturbing. Consider what that sense of control, that belief in autonomous action, has enabled us to do. It has allowed us to distinguish a self from the not-self; to have a concept of the external world; to distinguish our self as a thing; to distinguish our own self from other selves, while at the same time recognizing our self as part of a community of selves, as a member of a type. But if the key ingredient — the sense of control — is illusory, what can be said of all the crucial distinctions based on it? Why doesn't illusion just give rise to further illusion? If we can be so wrong about that very foundation for making all distinctions, if our sense of control is not itself based in fact, then how can we be certain — how can we be anything but uncertain — about the resulting distinctions themselves? It all begins to slide away. All the distinctions dependent on our sense of control — all our distinctions — seem threatened: that there is a community we are part of; our very self; our very world. No wonder the problem of free will is so disturbing! There is an essential subjectivity, grounded in a sense of autonomy, underlying all of our objective distinctions. Moreover, yet ironically, objective science — in this case, developmental psychology — provides a basis for believing that this essential subjectivity itself exists; but at the same time, objective science, considered more generally, seems to tell us that such a subjectivity is grounded in illusion: there can be no total cause of the kind autonomy requires. But if this essential subjectivity is itself called into question, then our hold on both the subjective and the objective is loosened, for the path to the objective is by way of the essentially subjective. It is this seeming tension that Nagel, perhaps more than anyone else since Kant, has called to our attention. On this reading of the developmental data and of the theory of concept formation based on them, there appears to be a deep and unresolvable tension in our lives. And that tension appears to be rooted in the subjective versus the objective — or so Nagel claims.
As we reflect on how little, if any, actual control we have over our lives, we may initially think that it is possible that we are no more than mere pushed-and-pulled objects in the external world. But, a Nagelian would claim, what makes the problem so hard, and so disturbing, is that we cannot "rest" there. The problem goes deeper, and in two ways. First, essential subjectivity, a sense of our own autonomy, underlies our very notion of the objective. We would have no concept of the external, of the objective, at all if we did not first have that sense of control that forms the essential subjectivity in our lives. So to question the veridicality of that subjectivity is to raise questions of the deepest sort about the objective itself. Second, the objective is itself the guide to one's belief that there must be such an essential subjectivity. It is the power of scientific (and so objective) data and theory that leads to a rational belief in essential subjectivity as the origin of all concept acquisition. Both perceptual psychology and developmental psychology underlie, and drive us toward, a belief in the existence of an essential subjectivity that takes willed action as basic. Thus, we cannot resolve the tension simply by accepting our objectness. The contradiction is contained within the objective itself: it gives us reasons for believing that autonomy exists as a primitive in concept formation and that it is responsible for our ability to conceive the objective (that which is outside the autonomous), but the objective also tells us that there cannot be any such autonomy. Nor can we withdraw into some pure essential subjectivity, whatever that may be, without losing our very selves — and all other distinctions. Essential subjectivity leads us ineluctably to the objective, and that objective world leads directly, but in contradictory ways, back to essential subjectivity. There seems to be no remainder to rest in. We are really — and deeply — stuck. All of our distinctions seem to result from a profound falsehood: that there are actions we are autonomously in control of. And yet that falsehood appears false only from an objective perspective made possible only on the basis of that supposed falsehood. If we were to accept these Nagelian conclusions, we could also claim to see why none of the "solutions" to the free will problem is satisfactory. Determinism claims that the sense of control is illusory; but if it is, so may well be the belief in an objective world, the very origin of which belief lies in that subjectivity. Yet indeterminism also leads directly to the contradiction: in affirming autonomous experience, we are led inevitably to the objective and again back to all the problems.
Let us remember also that those earliest of distinctions, discussed in section III, are made on the basis of a dichotomy: either the source of action is "outside" or it is "inside"; either the source is not autonomous or it is. There is no third alternative. The inside/outside distinction can itself be made only in the belief that the origin of some experiences is wholly other than that of those that merely happen. This fundamental notion of autonomy accounts for the fact that we are so disturbed when we run up against cases where the sense of control turns out to be merely that — a sense of control — and not the real thing. The belief in autonomy underlies all acquired distinctions. And this belief comes to seem false, or paradoxical. The problem of free will is a deep and disturbing one. The will matters. If it is correct that a sense of autonomy makes possible all conceptualization, then we can understand why the problem is so disturbing: when our autonomy is questioned (as it surely must eventually be for any thinking being), everything — all thought, all world, our very self — is at risk. And once we grasp the nature of the danger, there appears to be no safe haven — except forgetfulness. Backgammon, anyone?
8. But are we left with this bleak picture? Is there no solution to the problems raised by this Nagelian view? Is there a fundamental and unresolvable paradox at the very basis of existence? There have certainly been attempts to resolve the problems, in one way or another. One attempt to escape the paradox is made by Kant, to whom this sort of developmental account of concept formation owes many debts.[8] Kant — faced with the need for in-control essential subjectivity to underlie all concept formation, yet also faced with the seeming paradox at the heart of it all — proposed to solve the dilemma by maintaining that the essential, autonomous subject is outside the world of things, in a transcendent world, and so not in subjection to its laws (instead, the world of things — and its laws — are thought by him to be in subjection to the laws of that purely autonomous essential subjectivity). In the world of things, according to Kant, resides only an empirical self, a derived subjectivity, half of the subjective/objective distinction — which distinction essential subjectivity made possible; and the empirical self, according to Kant, is in subjection to the laws of nature. But, I will argue, Kant's "solution" is to a problem that doesn't really arise. Both he and Nagel mistakenly accept the seeming paradox of existence described in the previous section. But the "paradox" results from an unnatural and over-interpreted reading of the developmental theory of chapter 9.[9]

[8] Piaget, for instance, a more recent progenitor of this type of account, was much influenced by Kant.

[9] From conversations I have had with my colleague, Jim Stone, who is quite knowledgeable about these matters, it is my understanding that Buddhists, while in a sense accepting the paradox, claim that we can escape it through meditation, which, if done correctly, allows us to retreat into that essential subjectivity that precedes all distinctions, including that of the subjective/objective. The state I have called essential subjectivity is itself selfless (our concept of agency precedes our concept of agent). Perhaps reaching back to such a state is possible, but it is not obvious why there would be any virtue in reaching it. If the motivation behind Buddhism has been correctly described, then, as will be argued, Buddhism's goal — its hope — is based on erroneously accepting the alleged existential paradox in the first place.
9. One route to that paradox, as we have seen, is by treating in-control subjectivity as a sense of total autonomy. But to identify essential subjectivity with such an autonomy depends on a far stronger reading of the sense of control than is required for the concept-formation account of chapter 9. I earlier said that the sense of control, the recognition of the in-control/not-in-control distinction, is primitive. I mean this "primitiveness" in two ways. First, the recognition of the distinction and its accompanying sense of control are temporally and causally primitive: they are necessary for forming any further concepts. Second, "in-control" is a theoretical primitive, and so an undefinable.[10] Still, theoretical primitives, while undefined, acquire meaning from how they are used in the theory (compare Newton's "mass," "force," and "acceleration"). When we see how the notion of in-control leads, in the theory spelled out in chapter 9, to concept formation — including the concepts of self and of not-self and of things in the external world (in the not-self) — then one can also see that considerably weaker notions of autonomy and free will are required than those called for by Nagel and Kant. And so we get a different kind of solution to the problems raised by a Nagelian interpretation of the developmental theory of concept formation — or rather a dissolution of those problems — and it is one of the standard responses to such interpretations: compatibilism (soft determinism). Compatibilists claim that the paradox doesn't really arise. One is in-control — and therefore exhibiting free will — when one's beliefs and desires (and will) play an appropriate causal role in one's behavior.[11] The initial sense of in-control need not be (and almost certainly is not) questioned by the infant as to the origins of that control itself. The baby who moves its head to get the mobile moving does not wonder what caused that willing. It just distinguishes experiences where that willing played a role from those where it did not. The agency/nonagency distinction is a forward-looking distinction, moving from will to act, and not a distinction that looks backward from the will. And once a self as agent (as that which seems to be in-control) is distinguished, the infant does not then start to wonder why it is willing. Those kinds of questions are highly sophisticated ones that only some people ever worry about. There is no sense of total autonomy in the infant. Such a notion is way too sophisticated. The processes that lead to concept formation require only a sense of in-control for the infant that is weaker than this sophisticated sort, a sense that is no stronger than compatibilists require.

[10] I owe my colleague, Ed Johnson, thanks for helping me see that I mean both sorts of primitiveness.

[11] There are several difficulties with this move: among them, how to spell out the "appropriate" conditions and how to justify a reliance on belief-desire theory, which itself may not hold up to analysis. However, these are not problems for compatibilists alone; so they will be ignored in what follows.

10. This weaker notion of in-control — the actual one we early on distinguish from the not-in-control — is adequate for giving us a notion of our self, adequate for explaining why the will matters to us, adequate for explaining the importance to us of things we do as opposed to things that happen to us, adequate for explaining why being in-control means so much to our psychological well-being, adequate for explaining the source of our notion of responsibility. And those results seem to be important. The will matters to us because, as on the Nagelian interpretation, its use allows us to distinguish our self, our world, and our place in that world. But what allows us to make those distinctions is that the will causes actions. The thought of what causes the will itself is irrelevant to the infant's making these key distinctions, acquiring these key concepts. Similarly, the things we do matter to us because they help identify our self as a self. Doing things brings us more pleasure than having things happen to us — even when the things are the same (as in the pressure-sensitive pillow and rat ICSS cases) — undoubtedly because evolution has "seen to it" that such pleasure enhances the likelihood of our making the self/not-self distinction. And that connection of pleasure with action in turn undoubtedly helps entrench and make important the notion of self to us. And because loss of control means loss of self, loss of control is scary for us.
But the notion of loss-of-control here is no more than that our willing fails to bring about our desired actions. The relevant sense of control is not the backward-looking one that our desires, beliefs, and will are themselves in our control. As said, such thoughts as these last are sophisticated — much more so than we have any right to think the infant, or child, possesses. And, finally and similarly, the child learns that it is responsible (by being praised and punished, among other things) for actions it does — but, once more, only in this compatibilist, forward-looking sense of control. The compatibilist story is by no means new; but seeing it as adequate to a developmental account of how we make these key distinctions, acquire these key concepts, does throw a new light on it, does make it appear considerably more substantial. To many, there seems to be something ad hoc about compatibilism; but that ad-hoc-ness disappears when we realize the actual long and deep historical — developmental — roots of a forward-looking notion of in-control in our lives. It may be that my silence on the relation of in-control to issues in morality, especially to the issue of moral responsibility, reflects the fact that this account gets us nowhere towards understanding a notion of free will as it connects to morality. That is, the responsibility explained by this theory is not moral responsibility. But even if "moral responsibility" be a separate sort of responsibility from the responsibility underlying selfhood (that which arises from a forward-looking in-control/not-in-control distinction), it is still important to realize that our first understanding of responsibility of any kind is derived from the one that underpins our cognitive development. Moreover, if there is a sense of "moral responsibility" in which the notion of "responsibility" is other than that of what I will call "developmental responsibility," it would not be too surprising if moral responsibility retained developmental responsibility as a necessary condition. And if there is a sense of "moral responsibility" in which "responsibility" itself means more than developmental responsibility, then we are owed an account of that notion. If the moral notion is tied to a notion of total autonomy, as Kant and Nagel claim, the result is, by Nagel's own account — as section IV points out — at best a paradox. If the only clear, meaningful, nonparadoxical notion of "responsibility" we possess is the developmental one, and if it is based in the forward-looking notion of control favored by compatibilists, then it is quite reasonable that, as they have long argued, that is the notion any meaningful morality will require. Of course, additional criteria may be required to turn responsibility simpliciter into moral responsibility; but those criteria, while establishing what counts as being morally responsible, will not change the meaning of the term "responsibility" itself. The word has a univocal sense. Perhaps there are other routes to the paradox than through the developmental account of concept formation. There must be, since neither Nagel nor Kant employs an account exactly like this one. However, both employ accounts closely similar to it (see the quotations from Nagel in footnote 3, especially the second). And this account seems to me to provide the most plausible route to the paradox. But this route arises only as a result of overinterpretation. So if we interpret the developmental theory more judiciously and thereby block the route at the very beginning, we are provided a plausible reason to believe that compatibilism, like it or not, is a genuine solution to the free will problem. The compatibilist notion of free will seems to be all that is needed to play the critical role of in-control in concept formation, and sufficient for our notions of ⌜self⌝, ⌜will⌝, ⌜free will⌝, ⌜autonomy⌝, ⌜responsibility⌝, ⌜praiseworthiness⌝, and ⌜blameworthiness⌝. Since a stronger reading of "free will" does lead to paradox and incoherence, and since our original concept of free will is this weaker one, we are justified in being satisfied with a compatibilist's account.
Concluding remarks

Since the time of John Locke, many have thought that phenomena were collectively the passport to understanding the mind. Indeed, for many, phenomenal states constituted the mind. But we have seen (Parts One and Two) that phenomena, while indeed states of the mind, cannot bear the weight required for understanding those mental states that are most crucial to us as Lockean persons: our cognitive states — those states that make us thinking things. Phenomena may play an important role in perception (see chapter 4); but perception itself is a proposition-like state, the result not of passive processing but of active and constructive processing. Even a state like pain, the most likely candidate for being a purely passive, phenomenal state, has been shown to involve proposition-like cognitive activity. And in Part Two, it has been argued that phenomenal consciousness, while indeed a type of consciousness, is only one type of consciousness — and the least important to our sense of ourselves as Lockean persons. As these first two parts were developed, it became more and more evident that another type of consciousness, apperceptive consciousness, is the state that is essential to our being Lockean persons. Part Three describes in more detail its importance. And by doing so, Part Three mounts a defense of Cartesianism/Internalism — of Scientific Cartesianism — against twentieth-century anti-Internalist attacks on it. Considered thought about how we acquire concepts of the propositional attitudes (chapter 8) led to an Internalism about them, and an examination of the developmental data (chapter 9) led to a larger Internalist theory of concepts. From the data, it is not unreasonable to conclude that we have both an innate awareness of an in-control/not-in-control distinction and an innate awareness of causal relations.
"Awareness" is ambiguous here, and I mean it to encompass not only a first-order awareness of these primitive concepts, but also an apperceptive awareness of this first-order awareness. Apperception, if the theory of the book is correct, is, then, exceedingly important to us. For from our apperceptive awareness of these primitive, innate concepts, we are ultimately able to acquire the concepts of our self, of the external world, of our self as a species being (i.e., as a human being) — indeed, all our other concepts. Moreover, these concepts could be acquired in principle (though virtually impossibly in fact) by a brain in a vat. That is, the trail through the developmental data leads in a compelling way to Cartesian Internalism. Since the second quarter of the twentieth century, many philosophers, influenced by the anti-Internalist attacks, have believed that understanding language use would provide its own passport into the mind. But if the Internalism of Part Three of this book is correct, then we see that linguistic meaning (including word meaning) is only an abstraction from the idiosyncratic concepts each of us possesses. It is almost certainly true that organisms tend toward trying to discover "shared" meanings, toward dovetailing their concepts with those of others of their species. And the advent of language has probably by and large improved our human abilities to do so. Studies like Carey's (1985) on children's use of ⌜alive⌝ provide evidence for the evolution of this dovetailing in young children. But it is important to remember that there are no concrete communal concepts. There are only one's — individualized — theories about what that communal meaning might be, and the subsequent internalization of these theories as one's own current concepts. And behavior undoubtedly provides us feedback on how good our theories are. Others' reproaches and praises matter to us, matter to the final "look" of our concepts, as does our success, or lack of it, in being able to predict behaviors of others and of the world.
Behavioral interactions, both with others and with the world, undoubtedly play a large role in causally determining the concepts we ultimately light on and maintain, and explain why there is an appearance of public, shared concepts. These are the truths that underlie Behaviorism, Externalism, and Anti-Individualism. But these views provide, in the end, the wrong origins and status of our concepts. Although neither phenomena nor language use provides a passport to the road to the mind, it is easy to see why both of them were thought to do so. It is so hard to see any reasonable way into the mind. And it is natural to begin with those aspects of mind that are most manifest and most familiar to us. But these ways ultimately lead to dead ends. We need another way into the mind, one that deviates from these familiar roads only a little (see the Wittgenstein quote that is one of the epigraphs of this book). We have to learn to think in a different way about the mind if we want to understand it. The theory of this book is intended as a contribution to that Gestalt shift. And the theory of this entire book, while Internalist, fits with current science, and is meant to serve as a guide to future science. And that is why I have called that theory Scientific Cartesianism.
Bibliography Andreasen, N. C , S. Arndt, V. Swayze II, T. Cizadio, M. Flaum, D. O'Leary, J. C. Ehrhardt, and W. T. C. Yuh. 1994. Thalamic abnormalities in schizophrenia visualized through magnetic resonance image averaging. Science 266:294-98. Armstrong, D. M. 1980. The nature of mind. In Readings in the philosophy of psychology, vol. I, ed. N. Block, 191—99. Cambridge, Mass.: Harvard. Baars, B. J. 1987. Biological implications of a global-workspace theory of consciousness: Evidence, theory, and some phylogenetic speculations. In Cognition, language, consciousness: Integrative issues, eds. G. Greenberg and E.
Tobach, 209-36. Hillsdale, N.J.: Erlbaum. Bailleargeon, R. 1993. Learning from infants' failures: Physical knowledge is not enough. Paper read to the Society for Research in Child Development
(March 27). Barinaga, M. 1992. Unraveling the dark paradox of "Hindsight". Science 258:1438-39. Bauer, P. J. 1993. Application of world knowledge: Examples from research on event memory. Paper read to the Societyfor Research in Child Development
(March 27). Bennett, J. 1988. Thoughtful brutes. Proceedings and Addresses of the American
Philosophical Association 62, Supplement to volume 1:197—210. Berkeley, G. 1713/1965. Three dialogues between Hylas and Philonous. In George Berkeley: Principles, dialogues, and philosophical correspondence, ed. C.
M. Turbayne, 103-211. Indianapolis: Bobbs-Merrill. First published 1713. Biederman, I. 1987. Recognition-by-components: A theory of human image understanding. Psychological Review 94:115-47. Bilgrami, A. 1989. Realism without internalism: A critique of Searle on intentionality. Journal of Philosophy 86:57-72. Block, N. 1993. Daniel Dennett: Consciousness explained. Journal of Philosophy
90:181-93. Forthcoming. Intentional inversion. Block, N., and J. A. Fodor. 1981. What psychological states are not. In Representations, ed. J. A. Fodor, 77-99. Cambridge, Mass.: MIT/Bradford.
Boghossian, P. A., and J. D. Velleman. 1991. Physicalist theories of color. Philosophical Review 100:67-106. Bremner, J. G. 1988. Infancy. Oxford: Blackwell. Brewer, B. 1992. Unilateral neglect and the objectivity of spatial representation. Mind & Language 7:222-39. Broad, C. D. 1960. The mind and its place in nature. London: Littlefield Adams & Co. Brown, R., and R. J. Herrnstein. 1982. Icons and images. In Imagery, ed. N. Block, 19-49. Cambridge, Mass.: MIT Press. Burge, T. 1979. Individualism and the mental. Midwest Studies in Philosophy 4:73-121. 1986. Individualism and psychology. Philosophical Review 95:3-45. Butler, K. 1991. Towards a connectionist cognitive architecture. Mind & Language 6:252-71. 1995a. Compositionality in cognitive models: The real issue. Philosophical Studies 78:125-51. 1995b. Representation and computation in a deflationary assessment of connectionist cognitive science. Synthese 104:71-97. 1995c. Context, content, and cognitive science. Mind & Language 10:3-24. In Preparation. Internal affairs: A defense of psychosemantic Externalism. Campion, J., R. Latto, and Y. M. Smith. 1983. Is blindsight an effect of scattered light, spared cortex, and near-threshold vision? Behavioral and Brain Sciences 6:423-86. Carey, S. 1985. Conceptual change in childhood. Cambridge, Mass.: MIT/Bradford. 1991. Knowledge acquisition: Enrichment or conceptual change? In The epigenesis of mind: Essays on biology and cognition, eds. S. Carey and R. Gelman, 257-92. Hillsdale, N.J.: Erlbaum. Cavonius, C. R., M. Muller, and J. D. Mollon. 1990. Difficulties faced by color-anomalous observers in interpreting color displays. SPIE 1250 (Perceiving, measuring, and using color): 190-95. Chalmers, D. J. Forthcoming. Toward a theory of consciousness. Cambridge, Mass.: MIT/Bradford. Chandler, M. 1988. Doubt and developing theories of mind. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 387-413.
Cambridge: Cambridge University Press. Chisholm, R. 1957. Perceiving: A philosophical study. Ithaca: Cornell University Press. Churchland, P. M. 1989. A neurocomputational perspective: The nature of mind and the structure of science. Cambridge, Mass.: MIT/Bradford. Clark, A., and A. Karmiloff-Smith. 1993. The cognizer's innards: A psychological and philosophical perspective on the development of thought. Mind & Language 8:487-519. Cooper, L. A., and R. N. Shepard. 1984. Turning something over in the mind. Scientific American 251, 6:106-14. Davies, M. Forthcoming. Externalism and experience. In Philosophy and cognitive science: Categories, consciousness and reasoning, eds. A. Clark, J. Ezquerro, and J. M. Larrazabal. Dordrecht: Kluwer. Dennett, D. C. 1978a. Brainstorms: Philosophical essays on mind and psychology. Montgomery, Vermont: Bradford Books. 1978b. Intentional systems. In Brainstorms: Philosophical essays on mind and psychology, 3-22. Montgomery, Vermont: Bradford Books. 1978c. Why you can't make a computer that feels pain. In Brainstorms: Philosophical essays on mind and psychology, 190-229. Montgomery, Vermont: Bradford Books. 1978d. Skinner skinned. In Brainstorms: Philosophical essays on mind and psychology, 53-70. Montgomery, Vermont: Bradford Books. 1987. The intentional stance. Cambridge, Mass.: MIT/Bradford. 1988a. Precis of The intentional stance. Behavioral and Brain Sciences 11:495-505. 1988b. Quining qualia. In Consciousness in contemporary science, eds. A. J. Marcel and E. Bisiach, 42-77. Oxford: Clarendon Press. 1991a. Real patterns. Journal of Philosophy 88:27-51. 1991b. Consciousness explained. Boston: Little Brown & Co. 1991c. Lovely and suspect qualities. In Consciousness, ed. E. Villanueva, 37-43. Atascadero, Cal.: Ridgeview. Descartes, R. 1642/1986. Meditations on first philosophy. In René Descartes: Meditations on first philosophy, with selections from the objections and replies, ed. and trans. J. Cottingham, 1-62. Cambridge: Cambridge University Press. First published 1642. Diamond, A. 1991. Neuropsychological insights into the meaning of object concept development. In The epigenesis of mind: Essays on biology and cognition, eds. S. Carey and R. Gelman, 67-110. Hillsdale, N.J.: Erlbaum. Dickinson, A. 1988. Intentionality in animal conditioning.
In Thought without language, ed. L. Weiskrantz, 305-25. Oxford: Clarendon Press. Dixon, N. F. 1987. Subliminal perception. In The Oxford companion to the mind, ed. R. L. Gregory, 752-55. Oxford: Oxford University Press. Dretske, F. I. 1981. Knowledge and the flow of information. Cambridge, Mass.: MIT/Bradford. 1986. Misrepresentation. In Belief: Form, content and function, ed. R. J. Bogdan, 17-36. Oxford: Oxford University Press. 1988. Explaining behavior: Reasons in a world of causes. Cambridge, Mass.: MIT/Bradford.
Edelman, G. M. 1987. Neural Darwinism: The theory of neuronal group selection.
New York: Basic Books. Egan, F. 1991. Must psychology be individualistic? Philosophical Review 100:179-203. Fendrich, R., C. M. Wessinger, and M. S. Gazzaniga. 1992. Residual vision in scotoma: Implications for blindsight. Science 258:1489—91. Ferster, C. B. 1973. A functional analysis of depression. American Psychologist 28:857-70. Fischbach, G. D. 1992. Mind and brain. Scientific American 267, 3:48-57. Fischer, K. W., and T. Bidell. 1991. Constraining nativist inferences about cognitive capacities. In The epigenesis of mind: Essays on biology and cogni-
tion, eds. S. Carey and R. Gelman, 199-236. Hillsdale, N.J.: Erlbaum. Flavell, J. H. 1988. The development of children's knowledge about the mind: From cognitive connections to mental representations. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 244—67. Cambridge: Cambridge University Press. Fodor, J. A. 1975. The language of thought. Cambridge, Mass.: Harvard University Press. 1981a. Representations: Philosophical essays on the foundations of cognitive science.
Cambridge, Mass.: MIT/Bradford. 1981b. Three cheers for propositional attitudes. In Representations: Philosophical essays on the foundations
of cognitive science, 100—23.
Cambridge, Mass.: MIT/Bradford. 1981c. The present status of the innateness controversy. In Representations: Philosophical essays on the foundations
of cognitive science, 257—316.
Cambridge, Mass.: MIT/Bradford. 1981d. Methodological solipsism considered as a research strategy in cognitive psychology. In Representations: Philosophical essays on the foundations
of cognitive science, 225—53. Cambridge, Mass.: MIT/Bradford. 1983. The modularity of mind. Cambridge, Mass.: MIT/Bradford. 1987. Psychosemantics: The problem of meaning in the philosophy of mind.
Cambridge, Mass.: MIT/Bradford. Forguson, L., and A. Gopnik. 1988. The ontogeny of common sense. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 226-43. Cambridge: Cambridge University Press. Gallistel, C. R., A. L. Brown, S. Carey, R. Gelman, and F. C. Keil. 1991. Lessons from animal learning for the study of cognitive development. In The epigenesis of mind: Essays on biology and cognition, eds. S. Carey and R.
Gelman, 3-36. Hillsdale, N.J.: Erlbaum. Gardner, H. 1985. The mind's new science. New York: Basic Books. Gazzaniga, M. S. 1970. The bisected brain. New York: Appleton-Century-Crofts.
1977. On dividing the self: Speculations for brain research. Excerpta Medica, International Congress Series, 434:
Neurology:233-44.
1985. The social brain. New York: Basic Books. Gazzaniga, M. S., and J. E. LeDoux. 1978. The integrated mind. New York: Plenum Press. Gelman, R. 1991. Epigenetic foundations of knowledge structures: Initial and transcendent constructions. In The epigenesis of mind: Essays on biology and
cognition, eds. S. Carey and R. Gelman, 293-322. Hillsdale, N.J.: Erlbaum. Gibson, J. J. 1966. The senses considered as perceptual systems. Boston: Houghton-Mifflin. 1979. The ecological approach to visual perception. Boston: Houghton-Mifflin.
Glass, D. C., J. E. Singer, H. S. Leonard, D. Krantz, S. Cohen, and H. Cummings. 1973. Perceived control of aversive stimulation and the reduction of stress responses. Journal of Personality 41:577-99. Goldman, A. I. 1992. In defense of the simulation theory. Mind & Language 7:104-19. 1993. The psychology of folk psychology. Behavioral and Brain Sciences 16:15-28. Goldstein, I. 1989. Pleasure and pain: Unconditional intrinsic values. Philosophy and Phenomenological Research 50:255-76.
Gopnik, A. 1993. How we know our minds: The illusion of first-person knowledge of intentionality. Behavioral and Brain Sciences 16:1-14. Gopnik, A., and H. Wellman. 1993. The child's theory of mind. Paper read to the Society for Research in Child Development (March 26).
Gordon, R. M. 1992. The simulation theory: Objections and misconceptions. Mind & Language 7:11—34.
Green, O. H. 1991. The emotions. Dordrecht: Kluwer. Gregory, R. L. 1988. Consciousness in science and philosophy: Conscience and con-science. In Consciousness in contemporary science, eds. A. J. Marcel
and E. Bisiach, 257-72. Oxford: Clarendon Press. Gunderson, K. 1971. Mentality and machines. Garden City: Doubleday/Anchor.
Hardin, C. L. 1988. Color for philosophers: Unweaving the rainbow. Indianapolis:
Hackett. Harman, G. 1982. Conceptual role semantics. Notre Dame Journal of Formal Logic 23:242-56. Hilbert, D. R. 1987. Color and color perception: A study in anthropocentric
realism. Menlo Park, Cal.: Center for the Study of Language and Information. Holender, D. 1986. Semantic activation without conscious identification. Behavioral and Brain Sciences 9:1—23.
Hume, D. 1739/1967. Treatise on human nature. Ed. L. A. Selby-Bigge. Oxford: Oxford University Press. First published 1739. Hurvich, L. M. 1981. Color vision. Sunderland, Mass.: Sinauer and Associates. Huttenlocher, J., and P. Smiley. 1990. Emerging notions of persons. In Psychological and biological approaches to emotion, eds. N. L. Stein, B. Leventhal, and T. Trabasso, 283-95. Hillsdale, N.J.: Erlbaum. Jackson, F. 1977. Perception. Cambridge: Cambridge University Press. James, W. 1890/1959. The principles of psychology. New York: Dover. First published 1890. Johnson, C. N. 1988. Theory of mind and the structure of conscious experience. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 47-63. Cambridge: Cambridge University Press. Kant, I. 1787/1961. Critique of pure reason. Trans. N. Kemp Smith. New York: St. Martin's Press. First published 1787. Karmiloff-Smith, A. 1991. Beyond modularity: Innate constraints and developmental change. In The epigenesis of mind: Essays on biology and cognition, eds. S. Carey and R. Gelman, 171-97. Hillsdale, N.J.: Erlbaum. Kaufman, L. 1974. Sight and mind: An introduction to visual perception. Oxford: Oxford University Press. Keating, E. G. 1979. Rudimentary color vision in the monkey after removal of striate and preoccipital cortex. Brain Research 179:379-84. Keil, F. C. 1981. Constraints on knowledge and cognitive development. Psychological Review 88:197-227. 1991. The emergence of theoretical beliefs as constraints on concepts. In The epigenesis of mind: Essays on biology and cognition, eds. S. Carey and R. Gelman, 237-56. Hillsdale, N.J.: Erlbaum. 1992. Concepts, kinds, and cognitive development. Cambridge, Mass.: MIT/Bradford. Konishi, M. 1993. Listening with two ears. Scientific American 268, 4:66-73. Kosslyn, S. M. 1980. Image and mind. Cambridge, Mass.: Harvard University Press. 1987. Seeing and imaging in the cerebral hemispheres: A computational approach.
Psychological Review 94:148-75. Kosslyn, S. M., R. A. Flynn, J. B. Amsterdam, and G. Wang. 1990. Components of high-level vision: A cognitive neuroscience analysis and accounts of neurological syndromes. Cognition 34:203-77. Leach, P. 1989. Your baby and child: From birth to age five. New York: Alfred A. Knopf. Lefcourt, H. M. 1976. Locus of control: Current trends in theory and research. Hillsdale, N.J.: Erlbaum. Leibniz, G. W. v. 1714/1989. Principles of nature and grace. In G. W. Leibniz:
Philosophical essays, eds. R. Ariew and D. Garber, trans. D. Garber, 206-13. Indianapolis: Hackett. First published 1714. Leon, M. 1988. Characterising the senses. Mind & Language 3:243-70. Leslie, A. M. 1987. Pretense and representation: The origins of "theory of mind". Psychological Review 94:412-26. 1988. Some implications of pretense for mechanisms underlying the child's theory of mind. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 19-46. Cambridge: Cambridge University Press. Leslie, A. M., T. P. German, and F. G. Happe. 1993. Even a theory-theory needs information-processing: ToMM, an alternative theory-theory of the child's theory of mind. Behavioral and Brain Sciences 16:56-57. Locke, J. 1690/1959. An essay concerning human understanding. Ed. A. C. Fraser. New York: Dover. First published 1690. Lycan, W. G. 1986. Tacit belief. In Belief: Form, content and function, ed. R. J. Bogdan, 61-82. Oxford: Oxford University Press. 1987. Consciousness. Cambridge, Mass.: MIT/Bradford. Malcolm, N. 1963. Knowledge of other minds. In Knowledge and certainty, 130-40. Englewood Cliffs, N.J.: Prentice-Hall. Mandler, G. 1987. Emotion. In The Oxford companion to the mind, ed. R. L. Gregory, 219-20. Oxford: Oxford University Press. Marcel, A. J. 1980. Conscious and preconscious recognition of polysemous words: Locating the selective effects of prior verbal context. In Attention and performance, vol. VIII, ed. R. S. Nickerson, 435-57. Hillsdale, N.J.: Erlbaum. Marler, P. 1991. The instinct to learn. In The epigenesis of mind: Essays on biology and cognition, eds. S. Carey and R. Gelman, 37-66. Hillsdale, N.J.: Erlbaum. Marr, D. 1982. Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W. H. Freeman. Marshall, J. C., and P. W. Halligan. 1988. Blindsight and insight in visuospatial neglect. Nature 336:766-67. McGinn, C. 1988. Consciousness and content.
Proceedings of the British Academy 74:219-39. 1989. Can we solve the mind-body problem? Mind 98:349-66. Medin, D. L., and E. J. Shoben. 1988. Context and structure in conceptual combinations. Cognitive Psychology 20:158—90. Mellor, D. H. 1978. Conscious belief. Proceedings of the Aristotelian Society 78:87-101. Melzack, R. 1973. The puzzle of pain. New York: Basic Books. 1975. The McGill pain questionnaire: Major properties and scoring methods. Pain 1:277-99.
325
Melzack, R., and P. D. Wall. 1983. The challenge of pain. New York: Basic Books. Middleton, F. A., and P. L. Strick. 1994. Anatomical evidence for cerebellar and basal ganglia involvement in higher cognitive function. Science 266:458-61. Mill, J. S. 1889. An examination of Sir William Hamilton's philosophy. New York:
Longmans, Green & Co. Miller, J. G. 1942. Unconsciousness. New York: John Wiley. Millikan, R. G. 1989. Biosemantics. Journal of Philosophy 86:281-97. Morillo, C. R. 1990. The reward event and motivation. Journal of Philosophy 87:169-86. 1995. Contingent creatures: A reward event theory of motivation and value.
Lanham, Md.: Littlefield Adams Books. Murphy, G. L., and D. L. Medin. 1985. The roles of theories in conceptual coherence. Psychological Review 92:289-316. Nagel, T. 1974. What is it like to be a bat? Philosophical Review 83:435-50. 1979a. Mortal questions. Cambridge: Cambridge University Press. 1979b. Panpsychism. In Mortal questions, 181-95. Cambridge: Cambridge University Press. 1979c. Moral luck. In Mortal questions, 24-38. Cambridge: Cambridge University Press. 1980. Armstrong on mind. In Readings in the philosophy of psychology, vol. I,
ed. N. Block, 200-06. Cambridge, Mass.: Harvard University Press. 1986. The view from nowhere. Oxford: Oxford University Press. Natsoulas, T. 1983. Concepts of consciousness. Journal of Mind and Behavior 4:13-59. 1989a. From visual sensations to the seen-now and the seen from here. Psychological Research 51:87-92.
1989b. An examination of four objections to self-intimating states of consciousness. Journal of Mind and Behavior 10:63—116. 1989c. The ecological approach to perception: The place of perceptual content. American Journal of Psychology 102:443—76.
1990a. Perspectival appearing and Gibson's theory of visual perception. Psychological Research 52:291-98.
1990b. Reflective seeing: An exploration in the company of Edmund Husserl and James J. Gibson. Journal of Phenomenological Psychology
21:1-31. Nauta, W., and M. Feirtag. 1979. The organization of the brain. Scientific American 241, 3:88-111. Nelkin, N. 1986. Pains and pain sensations. Journal of Philosophy 83:129-48. 1987a. How sensations get their names. Philosophical Studies 51:325-39. 1987b. What is it like to be a person? Mind & Language 3:220-41.
1989a. Unconscious sensations. Philosophical Psychology 2:129-41. 1989b. Propositional attitudes and consciousness. Philosophy and Phenomenological Research 49:413-30.
1989c. Reid's view of sensations vindicated. In The philosophy of Thomas Reid, eds. E. Matthews and M. Dalgarno, 65—77. Dordrecht: Kluwer. 1990. Categorising the senses. Mind & Language 5:149—65. 1993a. The connection between intentionality and consciousness. In Consciousness: Psychological and philosophical essays, eds. M. Davies and G.
W. Humphreys, 224-39. Oxford: Blackwell. 1993b. What is consciousness? Philosophy of Science 60:419-34. 1994a. Patterns. Mind & Language 9:56-87. 1994b. Phenomena and representation. British Journal for the Philosophy of Science 45:419-34. 1994c. Reconsidering pains. Philosophical Psychology 7:325-43. 1994d. Subjectivity. In A companion to the philosophy of mind, ed. S. Guttenplan, 568-75. Oxford: Blackwell. Forthcoming-a. The belief in other minds. In Wittgenstein and the cognitive sciences, eds. D. Gottlieb and S. J. Odell. Forthcoming-b. Searle's argument. Behavioral and Brain Sciences. Forthcoming-c. The dissociation of phenomenal states from apperception. In Consciousness: The current debate, ed. T. Metzinger. Paderborn:
Schoningh. In Preparation-a. Descartes' dream argument. In Preparation-b. Why the will matters. Norman, D. A. 1986. Reflections on cognition and parallel distributed processing. In Parallel distributed processing: Explorations in the microstructure of
cognition, eds. D. E. Rumelhart and J. L. McClelland, 531-46. Cambridge, Mass.: MIT. Olson, D. R., J. W. Astington, and P. L. Harris. 1988. Introduction. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 1-15. Cambridge: Cambridge University Press. Paterson, D. 1980. Is your brain really necessary? World Medicine (May 3):21-24. Patterson, S. 1991. Individualism and semantic development. Philosophy of Science 58:15-35. Peacocke, C. 1983. Sense and content. Oxford: Oxford University Press. Peirce, C. S. 1934. How to make our ideas clear. In Collected papers of Charles Sanders Peirce, eds. C. Hartshorne and P. Weiss, 5:248-71. Cambridge, Mass.: Harvard University Press. Perkins, M. 1983. Sensing the world. Indianapolis: Hackett. Perner, J. 1988. Developing semantics for theories of mind: From propositional attitudes to mental representation. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 141-72. Cambridge: Cambridge University Press. Piaget, J. 1954. The construction of reality in the child. London: Routledge and
Kegan Paul. Piaget, J., and B. Inhelder. 1969. The psychology of the child. New York: Basic Books. Pitcher, G. 1974. A theory of perception. Princeton: Princeton University Press. Porrino, L. J. 1987. Cerebral metabolic changes associated with activation of reward systems. In Brain reward systems and abuse, eds. J. Engel and L. Oreland, 51-60. New York: Raven Press. Poulin-Dubois, D., and T. R. Shultz. 1988. The development of the understanding of human behavior: From agency to intentionality. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 109-25. Cambridge: Cambridge University Press. Povinelli, D. J., and L. R. Godfrey. 1993. The chimpanzee's mind: How noble in reason? How absent of ethics? In Evolutionary ethics, eds. M. H. Nitecki and D. V. Nitecki, 277-324. Albany: State University of New York Press. Premack, D. 1988. Minds with and without language. In Thought without language, ed. L. Weiskrantz, 46-65. Oxford: Clarendon Press. Price, H. H. 1932. Perception. London: Methuen. 1938. Our evidence for the existence of other minds. Philosophy 13:425-56. Putnam, H. 1975. The meaning of "meaning". In Mind, language, and reality: Philosophical papers, vol. I. Cambridge: Cambridge University Press. 1981. Reason, truth and history. Cambridge: Cambridge University Press. Pylyshyn, Z. W. 1981. Imagery and artificial intelligence. In Readings in the philosophy of psychology, vol. II, ed. N. Block, 170-94. Cambridge, Mass.: Harvard University Press. Quine, W. V. O. 1963. Reference and modality. In From a logical point of view, 139-59. New York: Harper and Row. Ramachandran, V. S., D. Rogers-Ramachandran, and M. Stewart. 1992. Perceptual correlates of massive cortical reorganization. Science 258:1159-60. Reid, T. 1785/1969. Essays on the intellectual powers of man. Ed. Baruch Brody.
Cambridge, Mass.: MIT. First published 1785. Reingold, E. M., and P. M. Merikle. 1990. On the inter-relatedness of theory and measurement in the study of unconscious processes. Mind & Language 5:9-28. Rock, I. 1983. The logic of perception. Cambridge, Mass.: MIT/Bradford. Rorty, R. 1982. Consequences of pragmatism. Minneapolis: University of Minnesota Press.
Rosenthal, D. M. 1986. Two concepts of consciousness. Philosophical Studies 49:329-59. 1991. The independence of consciousness and sensory quality. In Consciousness, ed. E. Villanueva, 15-36. Atascadero, Cal.: Ridgeview. 1993. Thinking that one thinks. In Consciousness: Psychological and philosophical essays, eds. M. Davies and G. W. Humphreys, 197-223. Oxford: Blackwell. Rumelhart, D. E., and J. L. McClelland, eds. 1986. Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, Mass.: MIT. Russell, B. 1948. Human knowledge: Its scope and its limits. New York: Simon
and Schuster. Russell, J. 1989. Cognisance and cognitive science. Part 2: Towards an empirical psychology of cognisance. Philosophical Psychology 2:165-201. Ryle, G. 1949. The concept of mind. New York: Barnes and Noble. Sacks, O. 1993. To see and not see. The New Yorker (May 10):59-73. Schacter, D. L. 1989. On the relation between memory and consciousness: Dissociable interactions and conscious experience. In Varieties of memory and consciousness, eds. H. Roediger and F. Craik, 355-89. Hillsdale, N.J.: Erlbaum. Searle, J. R. 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3:417-57. 1983.
Intentionality: An essay in the philosophy of mind. Cambridge:
Cambridge University Press. 1989. Consciousness, unconsciousness, and intentionality. Philosophical Topics 17:193-209. 1990. Consciousness, explanatory inversion, and cognitive science. Behavioral and Brain Sciences 13:585-642. 1992. The rediscovery of the mind. Cambridge, Mass.: Bradford/MIT. Shoemaker, S. 1981. The inverted spectrum. Journal of Philosophy 74:357-81. Siegal, M., and K. Beattie. 1991. Where to look first for children's knowledge of false beliefs. Cognition 38:1-12. Slater, C. 1994. Discrimination without indication: Why Dretske can't lean on learning. Mind & Language 9:163-80. Spelke, E. S. 1988. The origins of physical knowledge. In Thought without language, ed. L. Weiskrantz, 168-84. Oxford: Clarendon Press. 1991. Physical knowledge in infancy: Reflections on Piaget's theory. In The epigenesis of mind: Essays on biology and cognition, eds. S. Carey and R.
Gelman, 133-70. Hillsdale, N.J.: Erlbaum. Stanovich, K. E. 1993. The developmental history of an illusion. Behavioral and Brain Sciences 16:80—81.
Staub, E., B. Tursky, and G. E. Schwartz. 1971. Self-control and predictability: Their effects on reaction to aversive stimuli. Journal of Personality and Social Psychology 18:157-62. Stephens, G. L., and G. Graham. 1987. Minding your Ps and Q's: Pain and sensible qualities. Nous 21:395-405. Sternbach, R. A. 1968. Pain: A psychophysiological analysis. New York: Academic Press. Stich, S. P. 1978. Beliefs and subdoxastic states. Philosophy of Science 45:499-518. 1983. From folk psychology to cognitive science: The case against belief. Cambridge, Mass.: MIT/Bradford. 1990. The fragmentation of reason: Preface to a pragmatic theory of cognitive evaluation. Cambridge, Mass.: MIT/Bradford. Stoerig, P. 1987. Chromaticity and achromaticity: Evidence for a functional differentiation in visual field defects. Brain 110:869-86. Stoerig, P., and S. Brandt. 1993. The visual system and levels of perception: Properties of neuromental organization. Theoretical Medicine 14:117-35. Stoerig, P., and A. Cowey. 1989. Wavelength sensitivity in blindsight. Nature 342:916-18. 1992. Wavelength discrimination in blindsight. Brain 115:425-44. Strawson, G. 1992. Review of Daniel Dennett, Consciousness Explained. TLS (August 21):5. Strawson, P. 1963. Persons. In Individuals: An essay in descriptive metaphysics, 81-113. Garden City, NY: Doubleday/Anchor. Thompson, E., A. Palacios, and F. J. Varela. 1992. Ways of coloring: Comparative color vision as a case study for cognitive science. Behavioral and Brain Sciences 15:1-26. Trigg, R. 1970. Pain and emotion. Oxford: Oxford University Press. Van Essen, D. C. 1985. Functional organization of the primate visual cortex. In Cerebral cortex, vol. III, eds. A. Peters and E. G. Jones, 259-329. New York: Plenum Press. Van Essen, D. C., and J. H. R. Maunsell. 1983. Hierarchical organization and functional streams in the visual cortex. Trends in Neuroscience 6:370-75. Volpe, B. T., J. E. LeDoux, and M. S. Gazzaniga. 1979. Information processing of visual stimuli in an "extinguished field". Nature 282:722-24. Watson, J.
S., and C. T. Ramey. 1972. Reactions to response-contingent stimulation in early infancy. Merrill-Palmer Quarterly of Behavior and Development 18:219-27. Weinberg, S. 1984. The first three minutes: A modern view of the origin of the universe. Toronto: Bantam Books. Weiskrantz, L. 1977. Trying to bridge some neuropsychological gaps between monkey and man. British Journal of Psychology 68:431-45.
1985 (ed.). Animal intelligence. Oxford: Clarendon Press. 1986. Blindsight: A case study and implications. Oxford: Oxford University Press. 1988. Some contributions of neuropsychology of vision and memory to the problem of consciousness. In Consciousness in contemporary science, eds. A. J. Marcel and E. Bisiach, 183-99. Oxford: Clarendon Press. 1990. Outlooks for blindsight: Explicit methodologies for implicit processes (The Ferrier Lecture, 1989). Proceedings of the Royal Society, London B 239:247-78. Wellman, H. M. 1992. The child's theory of mind. Cambridge, Mass.: MIT/Bradford. Wilkes, K. V. 1988. _ , yishi, duh, um, and consciousness. In Consciousness in contemporary science, eds. A. J. Marcel and E. Bisiach, 16-41. Oxford: Clarendon Press. Wimmer, H., and J. Perner. 1983. Beliefs about beliefs: Representation and constraining the function of wrong beliefs in young children's understanding of deception. Cognition 13:103-28. Wimmer, H., J. Hogrefe, and B. Sodian. 1988. A second stage in children's conception of mental life: Understanding informational accesses as origins of knowledge and belief. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 173-92. Cambridge: Cambridge University Press. Wittgenstein, L. 1921/1961. Tractatus logico-philosophicus. Trans. D. F. Pears and B. F. McGuinness. London: Routledge & Kegan Paul. First published 1921. 1953. Philosophical investigations. Trans. G. E. M. Anscombe. London: Macmillan. 1967. Zettel. Eds. G. E. M. Anscombe and G. H. von Wright. Trans. G. E. M. Anscombe. Oxford: Blackwell. 1969. On certainty. Eds. G. E. M. Anscombe and G. H. von Wright. Trans. D. Paul and G. E. M. Anscombe. Oxford: Blackwell. Yaniv, I., and M. Shatz. 1988. Children's understanding of perceptibility. In Developing theories of mind, eds. J. W. Astington, P. L. Harris, and D. R. Olson, 93-108. Cambridge: Cambridge University Press. Young, A., and E. H. F. De Haan. 1990.
Impairments of visual awareness. Mind & Language 5:29—48. Zeki, S. 1992. The visual image in mind and brain. Scientific American 267, 3:69-76.
Index

acquisition of color concepts, 43, 49, 57, 59 of concepts, 10-11, 112, 183, 200, 242ff., 299-300, 305-07, 316-17 of concept of external objects, 11, 228, 250ff., 306, 308, 317 of concept of other minds, 11, 228, 236ff., 263, 307 of concept of self, 11, 228, 251ff., 306, 308, 317 of concepts of senses, 16-35 of object concepts, 280, 284-85, 296n, 297 of propositional attitude concepts, 10, 193ff., 208ff. of spatial concepts, 280, 284ff. of temporal concepts, 280, 284-85, 297 Adverbial View, 37-38, 41n, 58 affects (affective states), 86, 136, 144-46, 161, 229n, 234n see also feelings Alexander, G., 96n Amsterdam, J. B., 324 analogue/non-analogue representations, see representations, analogue/non-analogue Anti-Individualism, see Individualism apperception, see apperceptive consciousness Apperceptionalism, 128-30, 134ff., 145-46, 212n Apperceptionism, 212, 215-16 apperceptive consciousness (C2), 7-10, 23-24, 57n, 81-82, 126ff., 316 and concept formation and possession, 10, 189ff. as direct and non-inferential, 205-07 and introspection, 7n and pain, 81-96 Argument from Analogy, 10, 228, 232ff.
epistemological version, 233, 235ff., 270 philosophy of mind version, 233, 235ff., 252, 263, 270 Armstrong, D. M., 127n, 319 Arndt, S., 319 arthritis case, 250 aspectualized representations, see representations, aspectualized Astington, J. W., 327 attitude, 62, 73ff. attitudinal theory of pain, see pain, attitudinal theory of Baars, B. J., 93-94, 207, 223n, 226-27, 319 Baillargeon, R., 258, 297, 306n, 319 Barinaga, M., 153n, 319 barn owl experiments, 288-96 Barrett, R., 95n bats, 53, 135, 140, 148, 288 Bauer, P. J., 258, 297, 306n, 319 Beattie, K., 233n, 329 beetle-in-the-box example, 61n, 71, 75, 82 Behaviorism, 1-2, 3, 19, 199, 200, 204, 208n, 215, 276, 317 Bennett, J., 213, 319 Berkeley, G., 37n, 38, 53-54, 60n, 271n, 319 Bidell, T., 233n, 246n, 256n, 261n, 322 Biederman, I., 297, 319 Bilgrami, A., 195, 319 blind spot, 240-41 blindsight, 22-24, 27, 29-30, 56-57, 73-75, 80-81, 98, 102, 116, 120, 125, 127-29, 151-56, 166-67, 175-79, 183-84 color and hue discrimination experiments, 22-23, 57, 177-79 semantic priming experiments, 152, 154-55
semi-circle experiments, 151-52 "X"s and "O"s discrimination experiments, 23-24, 73-74, 151, 175-76 Block, N., 42n, 88, 218n, 319 Boghossian, P. A., 15, 177, 242, 320 brain in a vat case, 11, 196, 263, 278, 281, 293-94, 297, 317 Brandt, S., 178, 226-27, 250, 330 Bremner, J. G., 254n, 320 Brewer, B., 286, 320 Broad, C. D., 37n, 320 Brown, A. L., 322 Brown, R., 38n, 320 Buddhism, 311n Burge, T., 250, 320 Butler, K., 100n, 105n, 158n, 243, 252, 266-67, 320
C1, see propositional-attitude consciousness C2, see apperceptive consciousness CN, see Nagel-consciousness CS, see sensation consciousness Campion, J. R., 74n, 124, 131, 133, 320 Carey, S., 214, 246n, 262, 263n, 275, 317, 320, 322 Cartesian Rationalism, 1, 3, 8-9, 189, 193, 229, 255, 273 science, 281, 282, 295 theater, 208n, 219, 227 theory of mind, 229ff., 234, 236, 239-42, 245, 253ff., 267, 270, 279,
280; see also Psychological Solipsism world (turned-around), 136n, 137n, 138-39 Cartesianism, xi-xiii, 4-6, 220, 231, 233, 240, 242-43, 248, 250, 253, 261n, 270, 279ff., 316 see also Scientific Cartesianism categorization of the senses, see criteria of sense recognition; criteria of sense individuation causal position, 36 revised, 117-20 Cavonius, C. R., 41n, 44n, 320 Chalmers, D. J., 15, 137n, 186, 320 Chandler, M., 183, 233, 262, 320 chiliagon, 1n, 8, 159, 179, 182, 232 Chisholm, R., 37, 320 Churchland, P. M., 206, 208n, 211, 229n, 243, 249, 320
Churchland, P. S., 211 Cizadio, T., 320 cocktail party effect, 175 Cohen, S., 323 color, 17n, 21ff., 37ff., 47-50, 54-59, 70, 101, 115, 172, 177-79 see also blindsight, color and hue discrimination experiments; color blindness; concept, of color; natural kinds, and color/hue color blindness, 40-41, 55 anomalous trichromats, 41n, 42, 44n, 102 deuteranopes, 40n, 41-42 protanopes, 40n recovered trichromats (tinted-lens wearers), 28, 41, 44, 65-67 commissurotomy cases, 23, 24n, 29, 57, 74-75, 80-81, 125, 127, 131-33, 152-53, 166, 175-76 Paul (Gazzaniga's patient), 132-33, 142, 144, 166n tachistoscopic experiments, 74-75, 131-32 compatibilism, 299, 312 computational models of perception, see perception, computational models of Computeresers, 52-53 concept of action, 200 of agency, 259n, 262, 300n, 311-13 of agent, 259n, 311n of alive (children's), 246-47, 275-76, 317 of autonomy, 315 of ball, 264 of behavior, 199, 204, 212 of bodies, 92, 261n of cats, 200 of causation, 257, 297, 305 of color, 3, 8-9, 43, 46, 49, 56-57, 59 of direction, 289n, 292-93 of dog, 230 of edges, 251 of electrons, 209 of external (world), 234, 251-52, 256, 259, 260, 261n, 263, 291, 308, 309, 317 of free will, 315 of goal, 213 of higher-level functional states (e.g. believing), 201, 212-13
concept (cont.) of horse, 247 of immateriality, 262 of in-control/not-in-control, 257ff. of information, 213 of (natural) kinds, 248-49 of life, 186 of lower-level functional states (e.g. eating), 201, 212-14 of objects, 11, 228, 250, 260-61, 263, 264, 279, 295, 298 of other minds, 264 of other selves, 11, 228, 264 of pain, 62, 77, 83, 236-38, 240, 244 of praiseworthiness/blameworthiness, 315 of propositional attitudes, 10, 193ff., 228, 263, 316 of proto-self/proto-not-self, 259-61, 279, 306 of reading, 244 of responsibility, 313, 315 of Russell-pain, 236 of self/not-self, 10, 11, 92, 228, 251, 258ff., 271-73, 279ff., 295-96, 301, 305ff., 317 of senses, 16ff., 30-31, 34 of shape, 38-39, 56-57 of space, 260-61, 263, 279, 285, 295, 298
of spatial orientation, 24 of thinking, 214, 263 of time, 260-61, 263, 279, 295, 298 of tree, 248-49 of triangle, 294 of understanding, 244 of will, 315 concepts, 1-3, 10, 112, 189, 200, 236ff., 273-78, 291 communal, 275-76, 283, 317 directional, 289-94, 297 innate, 1-2, 3, 201-03, 209, 214-15, 254, 257, 265, 285, 297, 305, 306n, 316-17 non-Euclidian, 294 object, 280, 284-85, 295, 296n, 297 perceptual, 7, 251, 253, 298 proto-, 259 scientific, 284 spatial, 280, 285ff. as theories, 208, 210-11, 245, 247, 248n, 249, 251, 273, 274, 278, 282, 284n, 291, 294, 297-98, 317
temporal, 280, 284-85, 295, 296n, 297 see also acquisition, of concepts; concept connectionism, 100n, 105, 106n, 154, 183, 211, 258, 261, 266-67 consciousness, 8-9, 32, 58-59, 114, 118-19, 120, 123ff. see also apperceptive consciousness; dissociability; Nagel-consciousness; propositional-attitude consciousness; sensation consciousness content, 1-3 and apperception, 205, 254, 264 bearers of, 1, 8, 20, 99, 101, 111-14, 116, 120, 140, 149n, 234, 255 of concepts, 10-11, 59, 193, 197n, 228ff., 247-48, 251n, 252-53, 275-77, 281, 283-84, 294-95; see also Externalism; Individualism; Internalism and information, 8, 99, 105, 111-14, 231n, 255 and propositional-attitude consciousness, 148, 216 narrow, 211 control, 68n, 89, 256ff., 271-73, 295, 301ff. see also in-control/not-in-control Cooper, L. A., 37n, 164, 172, 321 Cowey, A., 22, 57, 82, 177-78, 330 creative thinking case, 158, 182 criteria of sense individuation, 17, 26-35 judgment criterion, 27ff. organ criterion, 26 phenomenon criterion, 27ff. criteria of sense recognition, 17-25, 34-35 combinations of, 21-25 external property criterion, 17-18 judgment criterion, 20-21 organ criterion, 18-19 phenomenon criterion, 19-20 Cummings, H., 323 Davies, M., 2n, 321 De Haan, E. H. F., 152, 331 Dennett, D. C., 10, 15, 36n, 48n, 58n, 61, 65n, 182-83, 193-213, 215, 217-27, 228, 250, 259, 281, 297, 321 depression, 304 Descartes, R., xi-xii, 4-6, 8-10, 80, 107n, 133, 136n, 148, 159n, 179, 183, 185, 229, 232, 234, 250-51, 253n, 272-73, 282, 321
determinism, 139, 258, 300n, 310 hard, 258, 299 soft, 258, 299, 312 Diamond, A., 261n, 321 Dickinson, A., 292n, 321 dissociability of C1 from C2, 127, 149-61, 182, 225 of C1 from phenomenality, 149-50, 158-61, 182 of C2 from C1, 182 of C2 from phenomenality, 159-60, 182 of CS from C2, 91-92, 172-82 of phenomenality from C1, 153 of phenomenality from C2, 153, 157, 172-80 of types of consciousness, 8-9, 81, 91-92, 147-48, 184, 208n dissociability thesis, 150-51, 161, 173, 181-82 Dixon, N. F., 154-55, 321 Dream Argument, xi, 5 dreams, 34, 124, 148, 185-86 Dretske, F. I., 66n, 105, 113n, 200n, 213n, 321 Dreyfus, H., 211 driving case, 125-29, 131 dualism, xii, 4, 5, 107, 229, 234 eagles, see vision, of eagles Edelman, G. M., 211, 322 Egan, F., 297n, 322 Ehrhardt, C., 319 electric "eyes", 31-32 emotions, 86, 136, 144-46, 161, 229n, 234n, 308 see also feelings Empiricism, 1-2, 7, 35, 107n, 185-89 British, 1-2, 5, 6-9, 32, 193, 210, 222-23, 255, 273 Classical, 215, 221 epiphenomenal position, 36 revised, 114-15, 120, 287 Externalism, 2, 2n, 3, 5, 10-11, 193, 197n, 221, 228-78, 279-85, 292-95, 297, 317 see also Internalism externalist theories of knowledge, see knowledge, externalist theories of evaluative theory of pain, see pain, evaluative theory of evil-genius manipulated world, 138-39
feelings, 16n, 21n, 139, 144-45, 158, 159, 161, 174-76, 222 see also emotions; affects Feirtag, M., 50, 326 Fendrich, R., 153n, 322 Ferster, C. B., 304, 322 first-person point of view, see point of view, first-person Fischbach, G. D., 50n, 322 Fischer, K. W., 233n, 246n, 256n, 261n, 322 Flaum, M., 319 Flavell, J. H., 233n, 259, 322 Flynn, R. A., 324 Fodor, J. A., 15, 20, 34n, 88, 164n, 194n, 201n, 204-05, 208n, 209, 211, 214, 220, 259-60, 280-81, 284, 319, 322 Forguson, L., 233, 322 free will, 11, 273, 299-315 functionalist account of pain, 88 account of phenomena, 65 account of mental state contents, 230
Gallistel, C. R., 247, 322 Gardner, H., 57, 322 Gazzaniga, M. S., 23, 57, 74, 131-33, 142, 152, 166, 176, 322-23, 330 Gelman, R., 247, 278n, 322-23 German, T. P., 325 Gibson, J. J., 32n, 98, 100n, 103-09, 117, 119, 206n, 242n, 297, 323 Gibson, R., 95n Gibsonian models of perception, see perception, Gibsonian models of Glass, D. C., 68, 71, 83, 88n, 92, 95n, 102, 302-03, 323 Godfrey, L. R., 265, 328 Goldman, A. I., 143n, 210n, 221, 223n, 224n, 307n, 323 Goldstein, I., 71n, 158n, 178n, 323 Gopnik, A., 201n, 207n, 224n, 233n, 322, 323 Gordon, R. M., 210n, 323 Graham, G., 83n, 89, 330 Green, O. H., 83n, 90n, 323 Gregory, R. L., 49-50, 53n, 70, 323 Gunderson, K., 65n, 323 HAL, 144 Halligan, P. W., 152, 325 hallucinations, 19, 33-34
Happe, G. G., 325 hard determinism, see determinism, hard Hardin, C. L., 15, 23n, 41n, 42n, 48, 177, 178n, 242, 323 Harman, G., 42n, 323 Harris, P. L., 327 hemineglect with apparent hemianopia, 56 hemispherectomized patients, 178-79 Herrnstein, R. J., 38n, 320 Hilbert, D. R., 47n, 323 Hogrefe, J., 331 Holender, D., 127, 133, 323 Hume, D., 1, 32, 143, 159, 163, 221, 282, 324 hurtfulness, 71-72, 77n, 83 types of, 86 Hurvich, L. M., 28, 41-42, 46, 70n, 324 Huttenlocher, J., 233n, 259, 262, 324 idealism, 54 Berkeleyan, 271 Categorical, 231n Identificationism, 124-28, 131, 134-35, 145-46 identity theories, 186-87 type-type, 222 image-like representations, see representations, image-like in-control/not-in-control, 11, 112, 257-65, 272-73, 285, 291, 295, 302ff., 316 incorrigibility, and apperception, 82, 195, 204-05, 210-11, 232 and phenomena, 72 and propositional attitudes, 204, 214 indeterminism, 139, 258, 299, 300, 310 indicators, 84n, 108 Individualism, 229-30, 242, 244-47, 249, 276, 281, 294-95, 317 information, see content, and information information theories of perception, see perception, information theories of Inhelder, B., 264, 306n, 328 innate abilities, 207n, 285 awareness, 316 color blindness, 41 concepts, see concepts, innate content, 3 knowledge, 106 structures, 210, 278
Instrumentalism, 10, 193-204, 207, 212, 226, 230 intentional stance, 198n, 201-02, 209, 213n intentionality, xiii, 111, 113, 141, 148-49, 150-58, 160, 171-72, 183, 188, 210, 219, 220-21, 231, 250, 264, 266-67 and "intentionality", 264 intrinsic, 252 natural, 54-56 Internalism, 1-3, 5, 10-11, 193, 197n, 226, 230, 231n, 245, 274-77, 279-84, 286-88, 297, 316-18 see also Externalism intracranial self-stimulation (ICSS) experiments, 303-04, 313 intrinsic/extrinsic representations, see representation, intrinsic/extrinsic introspection, 7n, 38, 44n, 72, 77n, 93, 115, 160, 170, 174, 176, 181, 184, 186, 194n, 223, 296 and apperception, 7n, 296 inverted spectrum, 42-46 Jackson, F., 15, 37n, 54, 324 James, W., 305, 324 Johnson, C. N., 260n, 262, 324 Johnson, E., 312n judgments, 10, 256, 260 and apperception, 149n, 164, 205-06, 223, 254, 296 and concepts, 240, 267, 277, 291, 294, 296
and "judgments", 264 role in categorizing senses, 20-35 role in pain, 83n, 85, 88-89, 95-97, 98
role in perception, 1n, 7, 35, 36ff., 98ff., 151, 164, 201n, 205-06, 225, 232, 255, 291-92, 296 Kant, I., xi-xii, 10-11, 206, 250, 253, 260, 272n, 273, 286, 288n, 295, 297-98, 309, 311-12, 314, 324 Karmiloff-Smith, A., 260n, 320, 324 Kaufman, L., 40n, 41n, 53n, 324 Keating, E. G., 82, 177, 324 Keil, F. C., 207n, 223n, 246n, 247-48, 251, 260, 262, 278n, 322, 324 knowledge, 1, 2, 91, 157, 218, 281 externalist theories of, 276n, 278 Konishi, M., 288n, 289-90, 293, 324
Kosslyn, S. M., 101-02, 118, 164, 165n, 172, 250, 261, 297-98, 324 Krantz, D., 323
language of thought, 8, 211, 220-21 Latto, R., 320 Leach, P., 77, 90, 324 Le Doux, J. E., 23, 57, 74n, 131, 166, 176, 323, 324, 330 Lefcourt, H. M., 304, 324 Leibniz, G. W., 7n, 9, 185-86, 324-25 Leon, M., 20n, 32, 307n, 325 Leonard, H. S., 323 Leslie, A. M., 143n, 201n, 258, 260n, 297, 306n, 325 lobotomized patients, 61-62, 86-87, 89 Locke, J., 1, 37n, 41, 53, 153, 316 Lockean persons, 10, 123, 135-37, 139, 141, 148n, 161, 189, 316 Lycan, W. G., 65n, 194, 325
Machine View (MV), 265-70 Malcolm, N., 140n, 228, 236, 239n, 240, 325 Mandler, G., 144, 325 Marcel, A. J., 152, 154-55, 325 Marler, P., 278n, 325 Marr, D., 56, 101, 297n, 325 Marshall, H. R., 71n Marshall, J. C., 152, 325 Maunsell, J. C., 152, 325 McClelland, J. L., 211, 329 McGinn, C., 9, 41, 91, 123, 139, 150, 159, 171-72, 181n, 325 meaning, 1-2, 112, 134, 148-49, 220-21, 231, 252-53, 273-74, 276-77, 283, 286-88, 294, 298, 312, 317 Medin, D. L., 248n, 283, 325, 326 Mediterranean/Nordic pain threshold experiments, see pain, Mediterranean/Nordic threshold experiments Mellor, D. H., 205n, 206, 325 Melzack, R., 68n, 69, 71n, 78, 86-89, 93-94, 95n, 303, 325-26 Melzack-Wall Gate Theory of Pain, see pain, Melzack-Wall Gate Theory of Merikle, P. M., 56, 328 methodological solipsism, 34n Middleton, F. A., 227n, 326 Mill, J. S., 235, 262, 326 Miller, J. G., 148n, 326
Millikan, R. G., 213n, 326 mind (definition of), 345 mind/body distinction, 262, 272, 307 problem, 271-72 mind/mind distinction, 272-73 problem, 308 modularity of mind/brain, 85-86, 89, 92-93, 95-97, 101, 188, 207, 220-21, 223-27, 269-70, 296 Mollon, J. D., 320 moods, 229n, 234n morphine-dosed patients, 61-62, 86-87, 89 self-administered dosage experiments, 303 Morillo, C. R., 71, 81n, 82, 234n, 326 Muller, M., 320 Murphy, G. L., 283, 326
Nagel, T., 9, 15, 82, 91, 123-24, 135-42, 143n, 144, 148, 150, 159, 181n, 273, 300-01, 307n, 309ff., 326 Nagel-consciousness (CN), 124, 129-46, 147-70 revised definition of, 165 Natsoulas, T., 9, 91, 107n, 123, 147, 148n, 152n, 164, 165n, 168n, 171, 180-81, 326 natural-clone case, 249-50, 278, 281 natural kinds and color/hue, 23n, 47-49 and color phenomena, 49-50 concept of, 249 and pain phenomena, 7, 61ff., 98 and propositional attitudes, 196-99, 204, 213 and the senses, 34 and visual (and other types of) phenomena, 63ff., 98-99 Nauta, W., 50, 326 neglect, 152, 166 Neisser, U., 218 Nelkin, N., 15n, 32n, 36n, 47n, 58n, 61n, 81n, 82n, 123n, 148n, 151n, 152n, 157n, 193n, 204, 228n, 249, 253n, 299n, 307n, 326-27 Neural Darwinism, 211, 261 nondissociability thesis, 151, 161, 172, 173, 174, 177, 179n, 180, 181, 183 Norman, D. A., 213, 327
O'Leary, D., 330 Olson, D. R., 233n, 327 Ontological Solipsism, 233, 263, 271-72 opacity, 111-12
PA-awareness, see propositional-attitude consciousness
pain, 7, 16n, 24n, 60-97, 98-99, 119, 144n, 161, 165, 188n, 198n, 210, 234, 244, 255, 262, 266, 269, 291, 316 and apperception, 225, 227n, 269, 291 attitudinal theory of, 72-81, 86, 96 causalgia, 67 chronic, 67 dental, 68 evaluative theory of, 80-97, 98-99, 255, 269, 291 functional analysis of, 88 hypochondria, 89 and lobotomized patients, 87-88 masochism, 87 Mediterranean/Nordic threshold experiments, 71, 78, 88n Melzack-Wall Gate Theory of, 78, 88 and moral considerations, 79-80, 83 and morphine, 61, 87, 89, 303 and nausea, 76-77 neuralgia, 67 people who never feel, 76 phantom-limb, 67, 85, 89-90, 108n phenomena, 61ff., 98, 210 phenomenal-identity theory of, 62-72 referred, 90 shock experiments, 68, 71, 88n, 302-03 two-phenomena theory of, 71-72, 74
Palacios, A., 330 Partism, 196, 197n, 229-30, 239 see also Wholism Paterson, D., 66n, 327 patterns, 197-98, 213n, 215, 217, 226 of behavior, 199-204, 208-09, 215 of internal states, 204, 208, 210, 212, 215, 222 of neural states, 222-24 Patterson, S., 243, 246, 327 Paul (Gazzaniga's patient), see commissurotomy cases Pavlov, I., 94, 95n Peacocke, C., 15, 37n, 165n, 327 Peirce, C. S., 278, 327
perception, 1-2, 7, 15, 34-35, 36-59, 60, 98-122, 193, 316 computational models of, 98, 100-05, 108, 109 Gibsonian models of, 32n, 98, 100, 103-08, 206n information theories of, 100-08, 114-20 Reid's model of, 98, 107-08 sense-data views of, 103
Perkins, M., 15, 37n, 54, 327 Perner, J., 201n, 233n, 259, 327, 331
phenomena/phenomenal states, 8, 15ff., 123, 221-23, 229n, 234, 282 and apperception, 195, 204-10, 232 as qualitative, 7, 15-16, 32n, 88n, 98-99, 114n, 115-16, 118, 141, 165, 168-69, 171-72, 186 as representational, 7-8, 84-85, 99, 114ff., 141, 165, 168, 171-72, 242, 255, 279-80 role in conceptual development, 255ff., 279-80, 295-96 roles in lives, 2, 5-6, 58, 98, 120, 255, 316-17 role in perception, 1, 7, 15-16, 19-22, 25, 27-35, 36-59, 98-99, 101-02, 103, 104, 107-20, 193, 255-56, 287, 316
Phenomenal View, 37-58 phenomenal-identity theory of pain, see pain, phenomenal-identity theory of Phenomenalism, 37n phenomenality, 9, 82, 112, 147ff., 195, 204-05, 208n, 266, 268, 307n see also dissociability phenomenologicality, 20n, 32, 124, 135n, 142-43, 159-60, 171, 182, 208n, 215, 307n phi phenomenon, 217-19, 241 philosophy, xii, 216-17, 285-86 and psychology, 3, 6, 216-17, 284 see also theories, philosophical physicalism, xi, xii, 4, 66, 195, 222, 234 Piaget, J., 253, 264, 306n, 311n, 328 Piaget 1 (P1), 254-68, 270, 291n, 305-07 Piaget 2 (P2), 264-68, 270, 291n Pitcher, G., 60n, 328 point of view, 156-57 first-person, 195 third-person, 194-95 Porrino, L. J., 304, 328 Poulin-Dubois, D., 256n, 328 Povinelli, D. J., 264n, 265, 328 Premack, D., 265, 328
pressure-sensitive pillow experiments, 301-02, 313 Price, H. H., 37n, 235, 262, 328 primary qualities, 17-18, 20, 23-24, 37n, 40, 51-53 Private Language Argument, 61n, 236-40, 242-45 proposition-like representations, see representations, proposition-like propositional-attitude consciousness (C1, PA-awareness), 8-9, 81-82, 85n, 126-35, 145-46, 147-70, 181-84, 185-89, 254-56, 296 propositional attitudes, 8, 29, 194 categorization as theoretical, 205, 207-08, 211-12, 215, 224, 229n, 232, 278 and instrumentalism/realism debate, 195ff. and phenomenality, 139-40, 142, 160 see also acquisition of concept of; concept of prosopagnosia, 152, 166 proto-self/proto-not-self, 259-61, 279ff., 306 proto-theories, see theories, proto- Psychological Solipsism, 229, 231, 233, 245, 248n, 259-61, 271-72 see also Cartesian theory of mind psychology, 29, 34, 79, 194-95, 198-99, 203-05 Putnam, H., 250, 328 Pylyshyn, Z. W., 38n, 164, 328 qualia, 15n, 23n, 99, 114, 119, 156, 168, 210, 219 Qualism, 135-46 Quine, W. V., 111, 328 Ramachandran, V. S., 85, 89, 328 Ramey, C. T., 301-02, 304 Rationalism, xiii, 1, 2, 3, 10, 185 see also Cartesian Rationalism "read-off" position, 36-58 revised, 115-20, 287 realism, 20, 251n, 271n, 277, 282-83 categorical, 231n reference, 3 Reid, T., 39n, 98, 107-08, 222, 328 Reid's model of perception, see perception, Reid's model of Reingold, E. M., 56, 328
representations analogue/non-analogue, 110, 113, 165, 287-88 aspectualized, 8, 85n, 111-20, 135, 141, 151, 155-58, 187, 200n, 205-07, 209, 211, 219, 220, 223, 225, 229n, 255-57, 260, 279-80, 291-92 digital, 110n image-like, 7-8, 37n, 38n, 51n, 52, 99, 101, 108-19, 149, 164, 171-72, 180-81, 241, 255, 257, 280 intrinsic/extrinsic, 172n, 231n proposition-like, 7-8, 99, 101, 105, 108-13, 117, 119, 129, 141n, 145, 149, 161-62, 167, 169-71, 181, 183, 185, 187, 193, 206n, 232, 234, 242 responsibility, 300n, 313-14 developmental, 314 moral, 314 Riesen, A. H., 53n Rock, I., 37n, 328 Rogers-Ramachandran, D., 328 Rollins, M., 90n Rorty, R., 243n, 328 Rosenthal, D. M., 115n, 128n, 130, 146n, 165n, 168, 205n, 206, 328-29 Rumelhart, D. E., 211, 329 Russell, B., 37n, 221, 234ff., 262, 329 Russell, J., 253, 256n, 260n, 329 Ryle, G., 208n, 244, 329 saccades, 242 Sacks, O., 53n, 329 Schacter, D. L., xiii, 109n, 329 Schwartz, G. E., 303, 329 science, see theories, scientific Scientific Cartesianism, xi-xii, 3, 4, 9-11, 193, 228-34, 236n, 252, 255, 266, 267, 270-71, 273, 276, 278, 286-87, 296, 298 Searle, J. R., 9, 20n, 91, 111-13, 123,
134n, 135, 137n, 139, 147n, 148, 150-60, 171, 181n, 188, 211, 219,
230, 307n, 329 secondary qualities, 17-18, 37n, 40ff., 53 self, 92, 209, 228, 251ff., 279, 295-96, 300-01, 305ff. /not-self distinction, 258ff., 265, 295-96, 305-06, 313 see also concept of self/not-self
self-reflective states, 9, 147-48, 150, 152n, 164, 168-69, 171, 181 view of consciousness, 9, 147-48, 150, 152n, 164, 171 semantic priming experiments, 152, 154-55 sensation consciousness (CS), 81, 165-84, 185-89 and pain, 84-96 sensations, xi, 6-7, 39n, 76, 81, 84, 89, 161, 165-70 more formally introduced, 165 and the Private Language Argument, 238n, 244 sense-data views of perception, see perception, sense-data views of senses, 15-34 see also criteria of sense individuation; criteria of sense recognition; vision Shakey the robot, 208n Shatz, M., 233n, 331 Shepard, R. N., 37n, 164, 172, 321 Shoben, E. J., 248n, 325 shock experiments Glass et al., 68, 71, 88n, 302-03 self-administered, 303 Shoemaker, S., 42n, 329 Shultz, T. R., 256n, 328 Siegal, M., 233n, 329 sign/symbol distinction, 109 skepticism, xii, 6, 11, 19-20, 22, 33, 38n, 138n, 233, 271, 281-82 types of, 6n, 53n, 271n Slater, C., 95n, 213n, 329 Smiley, P., 233n, 259, 262, 324 Smith, Y. M., 320 Sodian, B., 331 soft determinism, see determinism, soft Solipsism, 3 see also Psychological Solipsism and Ontological Solipsism spatial "maps", 101-02, 118-19, 261, 280 Spelke, E. S., 258, 261n, 297, 306n, 329 sphex wasp, 139, 213 split-brain cases, see commissurotomy cases Stanovich, K. E., 198n, 329 Staub, E., 303, 329 Stephens, G. L., 83n, 89, 330 Sternbach, R. A., 69, 93, 330 Stewart, M., 328 Stich, S. P., 211, 214, 252, 330
Stoerig, P., 22, 57, 82, 153n, 177-79, 226-27, 250, 330 Stone, J., 311n Strawson, G., 218n, 330 Strawson, P., 228, 233, 262, 330 Strick, P. L., 227n, 326 Stroop color word test, 303 subjective/objective distinction, 306, 311 subjectivity, 44n, 112, 143n, 159, 301 essential, 306-07, 309-12 subliminal perception experiments, 29, 125, 127, 154-55 Swayze II, V., 319 synaesthesia case, 49, 70 theories, 3, 6n, 44n, 205, 207-08, 227 philosophical, 3, 184, 216, 253, 280, 281, 285 proto-, 3-4, 216-17, 283, 286 proto-scientific, 253 scientific, 3-4, 11, 184, 207n, 211, 216, 253, 280, 285, 286 thermostats, 32n, 112, 119, 134, 136, 162 "thin-brain" cases, 66-67, 71, 75-76 third-person point of view, see point of view, third-person Thompson, E., 48n, 330 thousand-sided figure, see chiliagon tinted-lens wearers case, see color blindness, recovered trichromats Torjussen, T., 151-52 Trigg, R., 76-77, 330 turned-around Cartesian world example, see Cartesian, world (turned-around) Tursky, B., 303, 329 twin-earth case, 250 two-phenomena theory of pain, see pain, two-phenomena theory of Van Essen, D. C., 118, 178, 226, 250, 330 Van Gulick, R., 155n, 160 Varela, F. J., 330 Velleman, J. D., 15, 177, 242, 320 Verbalizationalism, 131-34, 145 virtual machines, 220-21, 224, 227 vision, 40, 219-20, 226, 240-42, 250, 254n, 257, 261, 266, 287-88, 290n categorizing of visual sense, 17ff. compared to pain, 63-75, 78, 88, 96 computational view of, 100-02 of eagles, 63-67, 69-70, 75 Gibsonian view of, 104-06
Marr's theory of, 101 Neisser's view of, 218 phenomena, 24, 28, 52-53, 55, 63ff., 118, 129n, 287; see also natural kinds and visual phenomena see also blindsight, color visual cortex ablated (VCA) cats and monkeys, 24, 30 visual extinction, 56, 152, 175-76 Volpe, B. T. J., 56, 152, 176, 330 Von Senden, M., 53n Wall, P. D., 68n, 69, 78, 86, 88-89, 94, 95n, 303, 326 Wallace, J. G., 53n Wang, G., 324 Watson, J. S., 301-02, 304, 330 wax passage, 1, 250-51 Weinberg, S., xiii, 330 Weird Argument, 268-70 Weiskrantz, L., viii, 23-24, 38n, 56-57, 73-74, 93, 123, 151-53, 166, 175-77, 330-31 Wellman, H. M., 201n, 207n, 259, 262, 264n, 306n, 323, 331
Wessinger, C. M., 322 Wholism, 10, 196-97, 198n, 229, 230 see also Partism Wilkes, K. V., 148n, 262, 331 Wimmer, H., 201n, 233n, 331 Wittgenstein, L., viii, xi-xiii, 4n, 5, 10, 32, 61n, 71, 75, 81n, 82, 83n, 136, 141n, 143, 144, 159n, 180, 204, 221, 228, 230n, 232, 235n, 236ff., 277, 281, 298, 318, 331 words and concepts, 1n, 264, 273, 275-76, 283, 291, 294 meaning of, 31 and non-analogue representations, 110 and Private Language Argument, 235n, 237, 239n, 244, 246-47 and semantic priming experiments, 154-55 Yaniv, I., 233n, 331 Young, A., 152, 331 Yuh, W. T. C., 319 Zeki, S., 118, 226-27, 250, 331