OXFORD UNIVERSITY PRESS
Great Clarendon Street, Oxford OX2 6DP

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide in

Oxford New York
Auckland Bangkok Buenos Aires Cape Town Chennai Dar es Salaam Delhi Hong Kong Istanbul Karachi Kolkata Kuala Lumpur Madrid Melbourne Mexico City Mumbai Nairobi Sao Paulo Shanghai Taipei Tokyo Toronto

Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries

Published in the United States by Oxford University Press Inc., New York

© the several contributors, 2002
The moral rights of the authors have been asserted

Database right Oxford University Press (maker)

First published 2002

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above

You must not circulate this book in any other binding or cover and you must impose this same condition on any acquirer

British Library Cataloguing in Publication Data
Data available

Library of Congress Cataloging in Publication Data
Reason and nature: essays in the theory of rationality / edited by Jose Luis Bermudez and Alan Millar.
p. cm.-(Mind Association occasional series)
Based on 2 conferences held in 1997 and 1998 at the University of Stirling.
Includes index.
1. Rationalism-Congresses. I. Bermudez, Jose Luis. II. Millar, Alan, Ph.D. III. Series.
B833 .R46 2002 128'.33-dc21 2002070153

ISBN 0-19-925683-7 (hbk)

1 3 5 7 9 10 8 6 4 2
Typeset by Newgen Imaging Systems (P) Ltd., Chennai, India Printed in Great Britain on acid-free paper by T. J. International Ltd Padstow, Cornwall
1
Introduction ALAN MILLAR AND JOSE LUIS BERMUDEZ
THE LEADING THEMES
It is hardly a matter of dispute that there is a normative dimension to reason and rationality. Actions, beliefs, and inferences can be reasonable or unreasonable. Reasons for belief or action can be good or bad. A body of beliefs can be rationally coherent or incoherent. A person's behaviour can be rational or irrational. Presumably, then, there are norms or standards which contribute to determining such matters. The essays in this collection deal with the normative dimension of reason and rationality and with how it should be understood as relating to the natural world. Despite a wide variation in approach and style the collection as a whole addresses three leading themes:

(i) The status of norms of rationality
(ii) The shape of norms of rationality
(iii) The role of norms of rationality in the psychological explanation of belief and action.

(i) and (ii) are the main themes of Part I, which includes the essays of Paul Boghossian, Crispin Wright, John Broome, and Alan Millar. (iii) is the principal concern of Part II, which includes the essays of Nick Chater and Michael Oaksford, Jonathan Lowe, David Over, Jose Luis Bermudez, Isaac Levi, and Allan Gibbard. Inevitably, the essays in Part II address issues of status and shape as well. The place of reason in nature, or at least in a world which is exhaustively describable in non-normative terms, is a major, but by no means the only, preoccupation in the collection as a whole.
THE STATUS OF NORMS OF RATIONALITY
The norms of rationality, whatever form they take, are not purely descriptive of how things are. They concern the normative characteristics of beliefs
or actions, such as the characteristic of being held, or done, for good reasons. In the light of this it is possible to envisage a debate about the status of normative judgements relating to rationality which would address considerations parallel to those which J. L. Mackie advanced in connection with moral judgement (Mackie 1977). Mackie drew attention to three reasons for thinking that moral judgements do not track objective moral truths.

(a) Moral norms vary from culture to culture. The differences seem to be explicable on the assumption that moral norms reflect ways of life. There is no need to assume that they are objectively authoritative. On the contrary, it is plausible that correctness of moral judgement is culturally relative.

(b) The facts which would need to be posited as truth-makers for objective moral truths would be metaphysically 'queer', as Mackie put it. That Fred makes a cutting remark directed at Bill is a fact of a fairly straightforward kind. But what kind of fact could make it true that Fred ought to apologize to Bill, or make it true that Fred ought to apologize to Bill because he made the remark?

(c) We have no convincing way of explaining how objective moral truths could be known. Moral truths must be prescriptive or have prescriptive implications. It is hard to see how the objectivist can respond without appealing to 'a special kind of intuition'.

If (b) and (c) present challenges to the objectivity of moral judgement, then analogous considerations also present challenges to the objectivity of normative judgements concerning rationality and to the norms which govern such judgements. Nothing about the metaphysical or epistemological issues to which (b) and (c) allude turns on the moral content of the judgements and the related norms. It is the normativity of the judgements and norms, not their moral content, which seems to require the positing of strange facts and strange modes of apprehension.
On the face of it, the matter is different with (a). Even if we happily endorse the idea that differences in moral norms are best explained in terms of cultural differences, we are liable to baulk at the parallel claim directed against the objectivity of norms of rationality. Norms of rationality, we are apt to think, provide a framework in terms of which sensible intellectual debate and enquiry can take place. Even so, there is surely a question concerning how we might support the claim that such norms have objective authority. This is precisely the issue tackled by Boghossian and Wright. Their essays may usefully be contrasted with that of Gibbard. The main aim of Gibbard's contribution concerns the character of psychological explanation-hence its location in Part II. But according to the expressivist theory of judgements about rationality
(Gibbard 1990) on which the essay relies there need be no norms which are authoritative for all thinkers. Boghossian aims to resist relativism about the epistemic principles governing the justification of beliefs. (He also discusses and rejects Gibbard's expressivism.) These principles fall within the spectrum of what we have been calling norms of rationality, since they determine what would count as a good reason for believing this or that. In his construction of the anti-objectivist dialectic the challenge is to show how knowledge of objective epistemic norms is possible in the face of a pressing problem. The problem arises for those principles which might seem to have the best possible claim to be objective-those modelled on logical rules of inference. Consider a putative epistemic principle, say, a principle modelled on modus ponens to the effect that if S is justified in believing p and is justified in believing if p then q, and S infers q from these premisses, then S is prima facie justified in believing q (EP2). If we are justified in believing that the principle is true, we must be justified in believing that it is true that p and p → q imply q (MPP). But what could justify us in believing something as basic as this? The suggestion that we could have a non-inferential justification, Boghossian argues, does not look promising. The problem for thinking that we could have an inferential justification is that any such justification would seem to be rule-circular-the justification would have to rely on MPP. That raises the problem of explaining how a justification which is rule-circular can be any kind of justification at all. Boghossian's response relies on the idea that 'if fundamental inferential dispositions fix what we mean by words, then, ... we are entitled to act on those dispositions prior to and independently of having supplied an explicit justification for them' (p. 39).
The idea here is that since certain basic inferential dispositions are meaning-fixing the objective authority of the corresponding epistemic principles is no longer in question. The central preoccupation of Wright's essay-a direct response to Boghossian-is how Boghossian's proposal relates to issues concerning internalism and externalism about justification. Boghossian is anxious to avoid an internalist conception of justified belief which implies that for a subject to be justified in believing the conclusion of a modus ponens inference the subject must know that the premisses entail the conclusion. It is this conception which leads to rule-circularity. He is equally anxious to avoid an externalist conception for cases of this kind, according to which it suffices for the transmission of justification from belief in the premisses to belief in the conclusion that the rule implemented should be truth-preserving. The problem here, he suggests, is that the 'mere fact that a particular inference is truth-preserving bears no intuitive link to the thinker's entitlement to it' (p. 38). At the heart of a plausible internalism is the idea that for a subject to be justified in believing something the formation of the belief must not have been epistemically irresponsible. Boghossian thinks that this condition
is met by his proposal: [I]f it is really true that someone's being disposed to reason according to modus ponens is a necessary condition of their having any logical concepts at all, and so of being able to reason in any shape, manner or form, there can be no intuitive sense in which their disposition to reason according to modus ponens can be held to be irresponsible, even in the absence of a reflectively appreciable warrant that justifies it. (p. 41)
In response Wright asks why lack of irresponsibility should be thought to suffice for warrant. It is surely possible, he suggests, that a community should employ bad inference rules yet not display epistemic irresponsibility. (The envisaged scenario is one in which the considerations which would show the rules to be bad are at least for the time being beyond epistemic reach.) Although such people would not be irresponsible, they would lack entitlement to believe the conclusions they reach via the bad rules. What is needed for entitlement, Wright argues, is not only that the rules can be followed without irresponsibility, but in addition that they should be actually sound. Wright doubts that the soundness of basic rules is guaranteed by Boghossian's thesis that they are meaning-constituting. Mackie drew attention to the unsatisfactoriness of invoking a 'special kind of intuition' to explain how it is possible to have moral knowledge. Boghossian considers and rejects a similar move in that part of his discussion which addresses the possibility that our knowledge that modus ponens is truth-preserving might be non-inferential (pp. 20-1). Wright is more sanguine about the prospects for non-inferentialist justification. He canvasses support for a kind of internalism about justification according to which 'recognition of the validity of a specific inference whose premises are known provides a warrant to accept its conclusion ... in a direct manner' (p. 82). This, he thinks, is an idea to which the non-inferentialist will in any case have to appeal to make the position coherent.
THE SHAPE OF NORMS OF RATIONALITY
The exchange between Boghossian and Wright raises issues about the shape of norms of rationality. On Boghossian's picture there are epistemic principles specifying conditions under which beliefs of a certain type are prima facie justified. Some such principles are modelled on rules of inference. An alternative way of representing such principles might invoke the idea that holding certain beliefs can require or commit one to accepting others. Being required or committed to doing something in virtue of satisfying a certain condition is a matter of there being something wrong with satisfying the condition yet not doing the thing in question. It does not follow that the subject
ought to do the thing in question; the correct way to put things right may be to alter the requirement-imposing (commitment-incurring) condition rather than do the thing in question. In cases of the sort on which we have been focusing this would involve giving up one or more of the premise-beliefs, rather than accepting the conclusion. These considerations take us into the territory marked out by the essays of John Broome and Alan Millar, though the primary focus of these essays is on instrumental practical reasoning. In discussions within a broadly naturalistic perspective, means-end reasoning is often taken to be relatively unproblematic and to be helpful in explaining norms governing the formation of belief and the performance of action. If belief aspires to truth, then what is normative for belief-formation is dictated by the best means for arriving at truth. If action is aimed at utility, then what is normative for action might be what maximizes utility or at least provides the agent with enough utility to satisfice. If Broome and Millar are right, then the shape of instrumental reasoning is more problematic and the supposed parallels between belief and action need closer scrutiny. On Broome's account practical reasoning is intention-reasoning-it takes us from intentions and associated beliefs about means to further intentions. As such it contrasts with belief-reasoning which takes us from beliefs to beliefs. Broome claims that neither intention-reasoning nor belief-reasoning is ought-giving or reason-giving. In Broome's terminology, where q is a logical consequence of p, believing p (normatively) requires you to believe q. But, crucially, it does not follow that believing p gives you a reason to believe q. Whereas having a reason to believe q is a matter of it being the case that you ought, at least pro tanto, to believe q, you may have no such reason even if you satisfy some condition such that you are required to believe q.
Parallel considerations apply to practical reasoning. My intention to buy a boat, and my belief that to do so I shall have to borrow money, require me to borrow money, yet do not give me a reason to borrow money. Broome extends his general approach to accommodate intention-reasoning where the associated belief about means does not represent the means as being necessary. Reflection on decision theory suggests that the content of the belief about means might have the form of a conditionalized proposition, 'Conditional on X's
Nonetheless, the excursion through decision theory suggests to Broome that what is needed in place of the proposed conditionalized propositions are propositions which are explicitly about best means, for example, 'The best way for me to buy a boat is to borrow money.' The focus of Millar's discussion, which has a close affinity with Broome's, is on the character of normative reasons for action-reasons which in some way favour or recommend an action. In the practical reasoning literature such reasons are commonly understood to supply the agent with a justification for the action in question. Millar argues that there are normative reasons for action which merely confer a point on an action without justifying it. To use a variant of Broome's example, the fact that I intend to buy a boat and that borrowing money is necessary if I am to buy a boat gives me a reason to borrow money. Speaking to my accountant I might ask, 'Do I have any reason to borrow money right now?' and receive the truthful reply, 'Yes, your buying a boat.' A reason of this kind recommends an action to the extent that it shows that the action would have a point. But it does not provide a justification for the action. To this extent Millar is in agreement with Broome, but he thinks that the class of normative reasons for action is broader than the class of reasons which supply justification for an action.
THE ROLE OF NORMS OF RATIONALITY IN THE PSYCHOLOGICAL EXPLANATION OF BELIEF AND ACTION
If the norms of rationality are to be binding on theoretical reasoning and practical deliberation, then, arguably, they must be psychologically realistic. An account of the norms of rationality is open to a prima-facie objection if it entails that there is widespread irrationality and very little rationality in the world. The issue here is in part one of 'ought' implying 'can'. Widespread irrationality calls into question whether the norms to which we are subject are in practice attainable. But there is also a difficulty posed by the fact that we use the normative principles of rationality as regulative principles governing psychological explanation (if one widely held philosophical orthodoxy is to be accepted). This practice works because the explanations we offer of a particular action in some sense track the reasoning which gave rise to it. There are, of course, many different ways of interpreting this tracking requirement, but each of them demands a certain harmony between the norms of rationality and the psychology of reasoning. The problem of the psychological reality of norms, and thus of their role in explanation, comes particularly to the fore because of the well-documented evidence that human reasoning strategies consistently fail to respect the
canons of deductive and inductive logic. The tension between the practice of reasoning and the norms that govern that practice is the focus of several of the papers in Part II of this collection. Nick Chater and Mike Oaksford provide a wide-ranging survey of the experimental and theoretical literature on rationality and offer an alternative to the two principal ways of interpreting the awkward experimental results. Some philosophers have concluded from these results that humans are fundamentally irrational (Stich 1990). Others have contested this by arguing that the results reflect reasoning performance rather than underlying competence (Cohen 1981). The suggestion that Chater and Oaksford develop is, in effect, that both these approaches are evaluating performance on reasoning tasks with respect to the wrong set of norms. The norms of rationality that we deploy to assess everyday reasoning must reflect the fact that practical decision-making is decision-making under uncertainty, whereas the formal canons of deductive reasoning are only appropriate to decision-making in conditions where the outcomes are certain. With particular reference to Wason's selection task, the canonical reasoning task in the experimental literature, and ostensibly a test of deductive rationality, Chater and Oaksford argue that many seeming paradigm cases of irrationality come out as perfectly rational when interpreted according to the formal norms of probabilistic rational analysis. The standard errors on the selection task, for example, come out as sensible strategies relative to an overall information-theoretic objective of reducing uncertainty. Jonathan Lowe offers a critical discussion of the proposals made by Chater and Oaksford, with respect both to the details of their analysis of the selection task and, more generally, to their conception of the relation between a priori reflection on the norms of rationality and empirical investigation of reasoning.
He finds their proposal to understand the norms of rationality in terms of formal principles of probabilistic rational analysis psychologically implausible. For Lowe our understanding of formal principles is parasitic on the deliverances of our 'rational intuition' which yields judgements of validity in the case of ordinary everyday inferences. There is no sense in which formal principles can be the ultimate arbiters of rationality. Lowe has further objections to the specific claim that the formal principles of rational analysis should guide our response to the Wason selection task. Not only are there other sets of norms which yield conflicting and competing verdicts on the rationality of the subjects in the tasks; there are further sets of norms which make the subjects come out as rational but with different interpretations of their reasoning. What could justify the choice of one set of norms over another? The psychology of reasoning is directly the concern of David Over's essay. Over criticizes the view, currently influential among evolutionary psychologists, that what we take to be abstract and content-independent
domain-general reasoning abilities should really be understood in terms of the operation of dedicated and content-sensitive modules that have evolved to deal with relatively circumscribed problems confronted by our hominid ancestors. Evolutionary psychology shares with Chater and Oaksford's programme of rational analysis the desire to provide a more psychologically realistic account of human reasoning. Unlike the programme of rational analysis, however, evolutionary psychology leaves no room for a formal analysis of the norms of rationality (of the sort one might expect to govern abstract and content-independent reasoning). Over criticizes the evolutionary psychology hypothesis, both in terms of its internal coherence and in terms of how well it is supported by the experimental evidence that is cited in its favour. The problem of psychological reality is also to the fore in Levi's essay. Levi starts from two ideas. The first is that we are responsible for our thinking, and thus for the beliefs we hold. The second is the pragmatist idea that rationality has to do with changes of view and, specifically, with which changes are best for the purposes of promoting the goals of enquiry. The first idea generates a problem for naturalistic accounts of change in belief (of the sort that we find, for example, in Quine). Levi claims that such accounts are committed to a dispositional analysis of belief which is hard put to do justice to the degree of our responsibility for our beliefs. In addressing these matters he distinguishes between changes in doxastic commitments and changes in doxastic performance. Doxastic commitments are commitments of a normative kind which may or may not be fulfilled. Failure to fulfil a commitment is what Levi calls a failure of performance. Failures of doxastic performance 'call for changes that improve performance rather than change in commitment' (p. 212).
As Levi sees it, whereas changes in commitment are 'subject to control by the inquiring agent', changes in performance 'call for therapy, training or the use of prosthetic devices such as computers, the printing press or paper and pencil' (p. 212). Levi elaborates his approach to change of commitment in terms of a Boolean algebra designed to characterize those states of full belief which an enquirer in a given state of full belief is 'conceptually capable of moving to' (p. 213). Against this background, Levi argues that the norms of rationality lack predictive and explanatory usefulness. If they were to be useful in these ways, the norms would need to be ones to which there is a high degree of conformity. But they are not. As Levi puts it, states of rational health 'are rarely if ever attainable by flesh and blood' (p. 217). The normative theory of rationality should be viewed as providing prescriptive principles which rational agents can reflectively employ to control and police their own deliberations, even though it is generally not the case that those very same agents employ the same prescriptive principles in their non-reflective decision-making. Bermudez agrees with Levi that if there are genuinely informative psychological explanations invoking rationality, then our actual thinking must
exhibit a high degree of conformity with the norms. But where Levi applies modus tollens to the conditional, Bermudez affirms the antecedent and applies modus ponens. He maintains that psychological explanations constrained by considerations of rationality are indispensable, not simply for understanding the behaviour of language-using adult humans, but also for many of the explanations applied to non-linguistic creatures by developmental psychologists, cognitive archaeologists, and cognitive ethologists. This gives rise to the problem of developing notions of rationality and reasoning that are both applicable to non-language-using creatures and sufficiently robust to underwrite the practice of giving psychological explanations of the behaviour of non-linguistic creatures. To this end he distinguishes three different types of norm-governed psychological explanation, and concomitant conceptions of practical decision-making, which might be applied in the non-linguistic realm. The first, level-0 rationality, deals with tropistic behaviours. At this level 'rational' amounts to no more than 'adaptive' and applies only to behaviour-types, not to what animals do on particular occasions. Level-1 rationality applies to creatures who, though incapable of anything properly describable as decision-making, are nonetheless capable of selecting an appropriate course of action from among a range of alternatives available at a specific time. Level-2 rationality is where decision-making enters the picture. Much of the work of Bermudez's essay goes into explaining how there can be genuine decision-making at the non-linguistic level. Gibbard aims to solve the problem of how rationality can be accommodated within a naturalistic framework by drawing upon the assumption that while the concept of being rational is normative, the property of being rational is a straightforward naturalistic one.
To speak of the property is a little misleading in view of Gibbard's expressivist account of judgements about rationality (Gibbard 1990). On this account if, for example, you condemn someone's reasoning as irrational you are saying, in effect, 'Let me not reason that way if in his shoes'. Two people could agree on the naturalistic properties possessed by a thinker but disagree on whether that thinker is rational. What they would be disagreeing about is how to reason in that thinker's shoes. But the disagreement may not be one which can be resolved by appeal to norms which are treated as objectively authoritative by everyone. An implication of Gibbard's theory is that there may be no single naturalistic property constituting rationality. The naturalistic property an agent would need to possess in virtue of being rational on one way of thinking about that matter might differ from the property an agent would need to possess in virtue of being rational on some other way of thinking. This raises the question, 'What determines the property which constitutes being rational, for a given way of thinking about rationality?' By way of response to this question Gibbard introduces the concept of a hyper-decided thinker-a thinker
who has a view on every matter of fact and a contingency-plan for every conceivable hypothetical situation. This concept makes possible a variant of possible world semantics. For example, a valid argument turns out to be one such that any hyper-decided thinker who accepts the premisses accepts the conclusion. Suppose that a hyper-decided thinker thinks that a certain agent acted rationally in carrying out a certain plan of action. Such a thinker is committed to thinking that she would have done likewise if in this agent's shoes. To have done likewise would have been to possess a property which can be specified exclusively in non-normative terms and which relates to what was done by way of carrying out the plan. The specification of this property would allude to the course of action taken by the agent and what alternatives were open to him at each stage. Gibbard, if we understand him aright, thinks that it is possession of this property which constitutes the agent's having been rational in carrying out the plan according to the norms adopted by this hyper-decided thinker. The effect of all this is to take X's being rational relative to the norms of some hyper-decided thinker to be a matter of possessing the non-normatively specifiable property which that hyper-decided thinker would bring it about that she possessed if she were in X's shoes. There is no requirement, however, that this property be the same for every hyper-decided thinker. How then do the judgements and commitments of a hyper-decided thinker relate to what we who are not hyper-decided are committed to? As we understand it, Gibbard thinks that when we judge that an agent's course of action is rational we commit ourselves to thinking that the agent has that property which certain hyper-decided thinkers would ensure that they possess were they in the agent's shoes. Which hyper-decided thinkers? Those which are in any hyper-decided state which we could reach without changing our minds about anything.
The upshot is that wherever there is a rationality-based explanation of behaviour there will also be a naturalistic explanation (even though the two types of explanation are fundamentally different). For example, it might be claimed that an attack failed because a commander blundered. The situation described will be one in which there is a naturalistic story about what caused what. Suppose there are no disagreements about that story. Still, there could be a disagreement about whether the attack failed because the commander blundered. But in that case the issue is not about causal-explanatory matters, but about the normative characterization of the commander's thinking. As this collection testifies, current work in the theory of rationality is subject to very diverse influences ranging from experimental and theoretical psychology, through philosophy of logic and language, to meta-ethics and the theory of practical reasoning. This work is pursued in various philosophical styles and with various orientations. Straight-down-the-line analytical, and
largely a priori, enquiry contrasts with high-level theorizing with a close eye on experimental evidence. A focus on human rationality contrasts with a focus on rationality in the wider natural world. As things stand, work in one style often proceeds in isolation from work in others. Our view is that if progress is to be made on rationality, philosophers will need to range widely. Our hope is that this collection will provide a stimulus to that.
REFERENCES

Cohen, Jonathan L. (1981), 'Can Human Irrationality be Experimentally Demonstrated?', Behavioural and Brain Sciences, 4, 317-70.
Gibbard, Allan (1990), Wise Choices, Apt Feelings: A Theory of Normative Judgement (Oxford: Oxford University Press).
Mackie, J. L. (1977), Ethics: Inventing Right and Wrong (Harmondsworth: Penguin Books).
Stich, Stephen (1990), The Fragmentation of Reason (Cambridge, Mass.: MIT Press).
I
OBJECTIVITY AND NORMATIVITY
2
How Are Objective Epistemic Reasons Possible?* PAUL BOGHOSSIAN
Epistemic relativism has the contemporary academy in its grip. Not merely in the United States, but seemingly everywhere, most scholars working in the humanities and the social sciences seem to subscribe to some form of it. Even where the label is repudiated, the view is embraced. Sometimes the relativism in question concerns truth, sometimes justification. The core impulse appears to be a relativism about knowledge. The suspicion is widespread that what counts as knowledge in one cultural, or broadly ideological, setting need not count as knowledge in another. While it is true that these views are often very poorly laid out and argued for, I found myself surprised, on reflection, at the extent to which a relativism about justification-as opposed to one concerning truth-may be seen to be a natural, if ultimately ill-advised, response to a real problem. For there is a serious difficulty seeing how there could be objectively valid reasons for belief, a difficulty that has perhaps not been adequately faced up to in the analytic tradition. In this essay, I aim to explain what the problem is; to say why relativism, and its sophisticated cousin, non-factualism, are unpalatable solutions to it; and to try to point the way forward.
THE PROBLEM
I take it for granted that we aim to have true beliefs and that we attempt to satisfy that aim by having justified beliefs. Let us represent a thinker as possessing a certain set of beliefs and a certain set of rules-epistemic rules-that specify how to modify those beliefs in response to incoming evidence. An example of such a rule may be:

(ER1) If lighting conditions are good, etc., and it visually seems to you as if there is a cat in front of you, then believe that there is a cat in front of you.

* For helpful comments, I would like to thank Christopher Peacocke, Stephen Schiffer, Josh Schechter, and the audience at the Pacific APA meetings in Albuquerque, N. Mex., in April 2000. I am especially grateful to Crispin Wright for agreeing to comment on that occasion and for numerous stimulating conversations on this and related topics. Some sections of the present paper overlap with parts of my 'Knowledge of Logic' (Boghossian 2000).
Another example:

(ER2) If you are justified in believing that p, and justified in believing that 'If p, then q', then believe q or give up one of the other beliefs.
By saying that a thinker has and operates according to these rules I don't mean that the thinker grasps these rules as propositions. I mean that he follows these rules, and that this shows up in his behaviour, however exactly that is to be analysed. It will do no harm, for present purposes, to think of rule-following as a disposition to rule-conform under appropriately idealized circumstances.1 Now, epistemic rules are rules and as such make no claims. They are rules of obligation governing belief. But the point of saying that we aim to have justified beliefs is to say that their function is to so modify belief that what results from their application is always a justified belief in the circumstances. In adopting (ER1) as our rule of belief modification, in other words, we are implicitly committed to the truth of a corresponding epistemic principle:

(EP1) If S is in good lighting conditions and etc., then if it visually appears to S that there is an x in front of him, then S would be prima facie justified in believing that there is an x in front of him.2

1 For discussion, though, see Boghossian (1989).
2 As James van Cleve points out, any epistemic principle has the following form: (EP) If a belief of type B is based on a reason of type R, then the belief is justified. On a foundationalist view (and adopting van Cleve's terminology), such principles will include both generation principles and transmission principles. Generation principles specify circumstances under which a belief is justified independently of its logical relations to other beliefs; transmission principles specify under what circumstances the warrant for a given belief transmits to other beliefs. On a coherentist view, epistemic principles will largely consist of some sort of hybrid of these two, assuming the form: If P coheres with the system of propositions accepted by S, then P is justified for S. This is analogous to a generation principle in that its antecedent does not mention any term of epistemic appraisal, but analogous to a transmission principle in that its antecedent specifies relations to other propositions. See Van Cleve (1979). For the sake of concreteness, in this essay I will assume that epistemic principles always take the form characteristic of foundationalism; but the arguments will apply to either type of epistemic system.
Similarly for (ER2):

(EP2) If S is justified in believing p and is justified in believing 'If p then q', and S infers q from those premisses, then S is prima facie justified in believing q.

Against this backdrop, the thesis of the objectivity of reasons can be stated as the claim that there is an objective fact of the matter which epistemic principles are true, and, consequently, which sets of rules a thinker ought to employ to shape his beliefs, if he is to arrive at beliefs that are genuinely justified. We certainly act as though we believe in the objectivity of reasons. We don't behave as though anything goes in the way of belief, suggesting that we operate with a specific set of epistemic rules. And we don't hold that others are at liberty to operate with whatever epistemic rules they like. The problem for the objectivity of reasons can now be stated succinctly, in the form of the following argument:

1. If there are objective facts about which epistemic principles are true, these facts should be knowable: it ought to be possible to arrive at justified beliefs about them.
2. It is not possible to know which epistemic principles are objectively true.

Therefore,

3. There are no objective facts about which epistemic principles are true.
The remainder of this first part is devoted to a defence of the first and second premisses.
THE FIRST PREMISS
This is not the strong and implausible claim that, if S is to know anything, he must know the underlying epistemic principles to which he is committed. To learn by observation that there is a cat in front of me does not require me first to know that observation justifies perceptual beliefs. It is rather the much weaker claim that, if there are objective facts about which epistemic principles are true, there should be humanly accessible circumstances under which those facts can be known. And this much weaker claim seems to me almost not to require argument. If there are such facts, why should they be in principle unknowable? My claim here does not stem from a generalized verificationism: for any fact, if it is to obtain, it must be knowable. I am perfectly happy to admit that there might be facts about the world that are not accessible to creatures such as ourselves. What I don't see is how this could apply to the sort of epistemic fact currently under consideration. There is no intuitive sense in
18
Paul Boghossian
which such epistemic facts are analogous to evidence-transcendent or undecidable facts of a more familiar variety-those four consecutive sevens in the decimal expansion of pi, for example. Rather, what is at issue are facts of the form encoded in (EP1) and (EP2). It would be peculiar, to say the least, if truths of this type were in principle unknowable. It would certainly be peculiar for us to suppose that they are unknowable. For in what could our confidence that there are such facts consist, if we simultaneously take it that we cannot know what they are? Prima facie, indeed, a much stronger claim seems plausible: not merely that these facts are knowable, but that they are, at least in large measure, known. For are we seriously to suppose that we don't know what it takes to justify a belief or a claim? If we don't, should we not be far more diffident about putting forward any claim, including the claim that we don't know which epistemic principles are true? Although, as I have conceded, it does not logically follow from S's knowing something that he knows which epistemic principles are true, it does seem to be true that, if S is a sufficiently self-conscious knower, he must assume that he knows which epistemic principles are true. So there is at least something pragmatically problematic about claiming that we don't know, and can't know, which epistemic principles are true. A further source of support for the claim that we know comes from the nearly universal agreement about which epistemic principles are true. With the exception of certain postmodern thinkers-and in their case they merely pretend to believe otherwise-nearly everyone agrees that observation generates justification for certain sorts of belief and that deductively valid inferences transmit the justification attaching to their premisses to their conclusions.
What better explanation could there be for this practically universal agreement than that there are objective facts about what the correct principles are and that these facts are relatively obvious? If we wished we could go further and plausibly claim not only that these facts are known, but that they are known a priori. For we don't seem to have learnt from experience that deductively valid arguments transmit justification, nor, it seems, could we have.3 Nevertheless, for the purposes of this argument, I will rely only on the weaker claim that, if there are correct epistemic principles, they are knowable.

3 For discussion, see Boghossian (2000).

THE SECOND PREMISS

Why should there be any difficulty in knowing which they are? Having emphasized how widely known they seem to be, how do we now contrive a difficulty about knowing them in the first place? Unfortunately, it is not too difficult to say what the problem is. Let's concentrate, for now, on deductive reasoning. As I said, we modify our beliefs according to:

(ER2) If you are justified in believing that p, and justified in believing that 'If p, then q', then you should either believe q or give up one of the other beliefs.
In subscribing to ER2, we are evincing our acceptance of the rule of inference modus ponens:

(MPPR) p, p → q / q.
Some might choose to regard acceptance of MPPR as simply consisting in an acceptance of ER2; others might prefer to regard MPPR as the more basic and hence as leading to an acceptance of ER2. It won't matter for my purposes how the relation between these rules is conceived. In the interests of keeping matters as simple as possible, let us restrict ourselves to propositional logic and let us suppose that we are working within a system in which MPPR is the only fundamental, underived, rule of inference. In that case, S's fundamental transmission principle becomes our familiar:

(EP2) If S is justified in believing p and is justified in believing 'If p then q', and S infers q from those premisses, then S is prima facie justified in believing q.

And this principle will in turn be true provided that a certain logical fact obtains, namely:

(MPP) p, p → q imply q.

Now, if S is to know that his fundamental transmission principle is true, he must, at a minimum, be justified in believing that MPP is true. So our question about the knowability of epistemic principles becomes: Is it possible for S to be justified in believing that all arguments of the form modus ponens are necessarily truth-preserving?4 (I am not at the moment concerned with how thinkers such as ourselves are actually justified, but only with whether it makes sense to suppose that we could be.) When we look at the available options, however, it seems hard to see how we could be justified in believing something as basic as MPP. For in what could such a justification consist? It would have to be either inferential or non-inferential. And there look to be serious problems of principle standing in the way of either option.

4 Some philosophers distinguish between the activity of giving a justification and the property of being justified. My question involves the latter, more basic, notion: Is it possible for our logical beliefs to have the property of being justified?
NON-INFERENTIAL JUSTIFICATION
For us to be non-inferentially justified in believing something we would have to be justified in believing it either on the basis of some sort of observation or on the basis of nothing. But what sort of observation could possibly serve as the basis for the belief that all arguments of the form MPP are truth-preserving? Henry Kyburg has given voice to a temptation that we must all have felt at some point:

I think that in some sense ... our justification of deductive rules must ultimately rest, in part, on an element of deductive intuition: we see that modus ponens is truth-preserving-that is simply the same as to reflect on it and fail to see how it can lead us astray. (Kyburg 1965, cited in Van Cleve 1984)
It is possible to discern two distinct thoughts in this short passage, although Kyburg seems to want to equate them. One is that we can simply see that MPP is truth-preserving. The other is that, try as we might, we cannot see any way in which it could fail us. Neither thought seems particularly helpful.

As for the first thought, in any sense of 'see' that I can make sense of, we cannot just see that MPP is valid. To be sure, the idea that we possess a quasi-perceptual faculty-going by the name of 'rational intuition'-the exercise of which is supposed to give us direct insight into necessary truths has been historically influential. It would be fair to say, however, that no one has succeeded in saying what this faculty really is nor how it manages to yield the relevant knowledge. 'Intuition', or 'clear and distinct perception', seem like names for the mystery we are addressing, rather than solutions to it.5

As for the second thought, when we say that we cannot see or conceive a counter-example to some general claim-for example, to the claim that all arguments of the form modus ponens are truth-preserving-we cannot plausibly mean that we have some direct, non-ratiocinative ability to detect whether such an example exists. The only thing we can legitimately mean is that a more or less elementary piece of reasoning shows that there cannot be any such counter-example. The 'reflecting' on the matter that Kyburg mentions is mediated by reasoning. We think: A conditional statement is true provided that if its antecedent is true so is its consequent. Suppose, then, that a particular conditional statement is true and that so is its antecedent. Then it simply has to be the case that its consequent is true. Hence, there can be no counter-example. Talk of 'conceiving' and 'seeing' here are just thin disguises for a certain familiar style of logical reasoning. This is not, of course, to condemn it. But it is to emphasize that its acceptability as an epistemology for logic turns on the acceptability of an inferential account more generally.

5 Laurence Bonjour attempts to defend a 'rational insight' view of the a priori (Bonjour 1998). For a critical discussion, see Boghossian (forthcoming).
DEFAULT REASONABLE BELIEFS
But perhaps it is a mistake to think that some positive act of observation or imagining is required, if a belief is to be justified non-inferentially. According to an increasingly influential line of thought, certain beliefs are simply 'default reasonable', reasonable in and of themselves, without any supporting justification from either observation or argument. In particular, the fundamental logical beliefs have this feature.6 It is reasonable to believe them, but not because there is some positive ground by virtue of which they are reasonable. If believed, they are reasonably believed, period.

I am not implacably opposed to the idea that there might be beliefs that are reasonable on the basis of nothing, especially if this is understood to mean simply that they are beliefs that are presumptively but defeasibly justified. It is possible that this will prove to be the best description of the epistemology of our first-person knowledge of the contents of our own minds. What I don't see, however, is how this idea could plausibly apply to the case at hand, to the generalization that all inferences of a certain form are necessarily truth-preserving. If the notion of default reasonableness is to play a significant role in the theory of knowledge, there has to be some principled way of saying which beliefs are default reasonable and why. What is needed, in other words, is a criterion for determining whether a belief qualifies for that status and an explanation for why satisfaction of that criterion is sufficient for it. Which beliefs are default reasonable and what is it about them that gives them this special standing? This insistence does not contravene the root idea that, in the case of a default reasonable belief, there is no ground that makes it reasonable; for it is consistent with a belief's having that status that there be a criterion by virtue of which it has that status and an explanation for why it has it.

The trouble with default reasonable beliefs is that there do not seem to be very many plausible answers to these questions: it is hard to see what condition could plausibly qualify a belief for default reasonable status.

6 See Field (2000).
One idea that might seem initially promising concerns the class of self-fulfilling beliefs: beliefs that are such that having them guarantees that they are true. Surely these beliefs count as default reasonable if any do. Tyler Burge has discussed such beliefs in connection with the phenomenon of authoritative self-knowledge. For example, the belief-With this very thought I am thinking that water is wet-looks to be self-fulfilling: thinking it logically guarantees its truth.7 It would, however, be a mistake to think that being logically self-confirming is sufficient for default reasonableness. A guarantee of truth is not in itself a guarantee of reasonableness; and it is reasonableness that's at issue. What is missing from a merely self-confirming thought is some knowledge, however trivial, on the part of the thinker that the thought is self-confirming. But such knowledge would transform the source of the reasonableness to an inference based on that knowledge and we would now no longer have anything that is reasonable by default.

A second thought, more directly connected to the thinker's justification, has it that a default reasonable belief is any belief which, by virtue of being presupposed in any justification that a thinker might give, is neither justifiable nor refutable for that thinker. But this suggestion has two implausible consequences. First, it entails that what is default reasonable has to be relativized to individual thinkers, for different thinkers may build their epistemic systems around different claims. Second, it has the consequence that some very implausible claims would come out as default reasonable for someone if they happened to be presupposed by that person's epistemic system. For example, suppose that someone takes as basic the negation of the law of non-contradiction; on this view, we would have to say that the negation of that law is default reasonable for him, because, by assumption, it will be neither justifiable nor refutable for that person.

A third suggestion has it that the beliefs that are default reasonable are those beliefs that a thinker finds 'self-evident'-that is, that he is disposed to find plausible simply on the basis of understanding them and without any further support or warrant. But this proposal, too, would seem to be subject to the previous two objections. Once again, it is entirely possible that two people will find very different propositions 'self-evident', and that some of those will include propositions that are intuitively highly implausible. Nor would it help to strengthen the requirement so that it concerns those beliefs that actually are self-evident, as opposed to those that merely seem self-evident. Here the problem is that no one seems to me to have shown how this notion is to be spelled out. In particular, no one has supplied a criterion for distinguishing those propositions that are self-evident from those that-like the parallel postulate in Euclidean geometry or the proposition that life cannot be reduced to anything biological-merely seemed self-evident to many people for a very long time.

7 See Burge (1988).

By contrast, there is one form of explanation that seems to me to have some promise. There may be beliefs that are such that having those beliefs is a condition for having one of the concepts ingredient in them. Thus, Christopher Peacocke has written of a special case

in which it is written into the possession conditions for one or more concepts in [a] given principle that to possess those concepts, the thinker must be willing to accept the principle, by reaching it in [a particular] way. (Peacocke 2000)
The special case that Peacocke has in mind concerns our belief in the validity of the basic truths of deduction. Under the terms of our assumptions, then, the idea would be that it is written into the possession conditions for conditional, that to possess it a thinker would have to believe that all arguments of the form MPP are truth-preserving. If this were true, then, it seems to me, it would be correct to say that the belief that MPP is truth-preserving is default reasonable. For if it really were part of the possession condition for a given concept that to possess it one had to believe a certain proposition containing it, then that would explain why belief in that proposition is presumptively but defeasibly justified. If it really were a precondition for being able to so much as entertain any thought involving the concept phlogiston that one believe that phlogiston is a substance, then it would seem to me right to say that the belief that phlogiston is a substance is presumptively reasonable, subject to defeat by other considerations. It seems wrong to call the belief in question unreasonable when having it is a precondition for having any thoughts about it, including thoughts about its reasonableness. Unfortunately, it is not remotely plausible that anyone possessing the concept of conditional would have to have the belief that MPP is valid. One can have and reason with conditional without so much as having the concept of logical implication. At most what the theory of concept possession would license is that inferring according to MPPR is part of the possession condition for conditional, not the belief that MPP is valid. But what we are after now is the justification for the belief. (Henceforth, to avoid unnecessary prolixity, I will drop the distinction between the labels 'MPP' and 'MPPR'. I will talk simply about the difference between believing that MPP is valid and reasoning according to MPP.)
INFERENTIAL JUSTIFICATION: RULE-CIRCULARITY
This brings us, then, to the inferential path. Here there are a number of distinct possibilities, but they would all seem to suffer from the same master difficulty: in being inferential, they would have to be rule-circular. If MPP is the only underived rule of inference, then any inferential argument for MPP would either have to use MPP or use some other rule whose justification depends on MPP. And many philosophers have worried, legitimately, that a rule-circular justification of a rule of inference is no justification at all.

Thus, it is tempting to suppose that we can give an a priori justification for modus ponens on the basis of our knowledge of the truth-table for 'if, then'. Suppose that p is true and that 'if p, then q' is also true. By the truth-table for 'if, then', if p is true and if 'if p, then q' is true, then q is true. So q must be true, too. As is clear, however, this justification for MPP must itself take at least one step in accord with MPP.

But why should this be considered a problem? While it may be immediately obvious that a grossly circular justification-one that includes among its premisses that which it is attempting to prove-is worthless, it is not equally obvious that the same is true of a merely rule-circular justification. What intuitive constraint on justification does a rule-circular justification violate? It will be useful to approach this question by looking first at what is wrong with a grossly circular justification and to examine subsequently to what extent these problems afflict mere rule-circularity as well.

There are at least two things wrong with a grossly circular argument. First, it assumes that which it is trying to prove and that, quite independently of any further consequences, seems wrong. An argument is put forward with the intent of justifying-earning the right to believe-a certain claim. But it will only do so if it proceeds from premisses that are justified. If, however, the premiss is also the conclusion, then it is simply helping itself to the claim that the conclusion is justified, instead of earning the right to it.
And this manoeuvre offends against the very idea of proving something or arguing for it. As we are prone to say, it begs the question. A second problem is that by allowing itself the liberty of assuming that which it is trying to prove, a grossly circular argument is able to prove absolutely anything, however intuitively unjustifiable. Let us call the first problem the problem of 'begging the question' and the second that of 'bad company'.8

Is a merely rule-circular justification subject to the same or analogous worries? It is not obvious that a rule-circular argument begs the question, for what we have is an argument that is circular only in the sense that, in purporting to prove the validity of a given logical law, it must take at least one step in accordance with that law. And it is not immediately clear that we should say that an argument relies on its implicated rule of inference in the same way as we say that it relies on its premisses.
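For concreteness, the truth-table justification of modus ponens sketched above can be set out as a numbered derivation, in the same style as the 'tonk' derivation given below (the numbering and step labels here are mine, added for illustration):

1. p (Assumption)
2. 'If p, then q' (Assumption)
3. If 'p' is true and 'if p, then q' is true, then 'q' is true (Truth-table for 'if, then')
4. 'p' is true and 'if p, then q' is true (1, 2, T-scheme)
5. 'q' is true (3, 4, MPP)
6. q (5, T-scheme)

The step from lines 3 and 4 to line 5 is itself an inference in accord with MPP, which is just the rule-circularity at issue.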
8 I owe the term 'bad company' to Crispin Wright.
Well, perhaps not in the same way, but it is not difficult to motivate a worry on this score. One clear way of doing so is to look at the role that a rule-circular argument might play in a dialectical context in which it is being used to silence a sceptic's doubt about its conclusion. Suppose that you doubt some claim C and I am trying to persuade you that it's true. I offer you an argument A in its support. In general, in such a context, you could question A's cogency either by questioning one of its premisses or by questioning the implicated rule of inference R. If you were to proceed by challenging R, then I would have to defend R and my only option would appear to be to try to defend my belief that R is truth-preserving. Now suppose that the context in question is the special case where C is the proposition that R is truth-preserving and my argument for C is rule-circular in that it employs R in one of its steps. Here it very much looks as if I have begged the question: I have certainly begged your question. You doubt MPP. I give you an argument in support of MPP that uses MPP. Alert enough to notice that fact, you question my argument by reiterating your doubts about MPP. I defend my argument by asserting that MPP is truth-preserving. In this dialectical sense, a rule-circular argument might be said to beg the question. At a minimum, then, the sceptical context discloses that a rule-circular argument for MPP would beg a sceptic's question about MPP and would, therefore, be powerless to quell his doubts about it. In doing this, however, it reveals yet another sense in which a worry might arise about a rule-circular argument. An argument relies on a rule of inference. As the sceptical scenario highlights, one's reliance on such a rule might be questioned. But, quite apart from whether it is questioned, in what does one's entitlement to rely on that rule consist, if not in one's entitlement to the belief that the rule is truth-preserving?
And if it does consist in that, how can a rule-circular argument in support of belief in MPP confer warrant on its conclusion? In relying on a step in accord with MPP, in the course of an argument for MPP, one would be leaning on the very conclusion one is allegedly trying to prove. Under the general heading of a worry about begging the question, then, I want to distinguish two problems. First, to say in what the entitlement to use a rule of inference consists, if not in one's justified belief that that rule is truth-preserving. Second, to say how a rule-circular argument can confer warrant on its conclusion even if it is powerless to move the relevant sceptic.

What about the problem of bad company? Prima facie, anyway, there looks to be a big difference between a grossly circular argument, on the one hand, and a rule-circular argument on the other, so far as their potential to positively rationalize belief is concerned. A grossly circular argument is guaranteed to succeed, no matter what proposition it is attempting to rationalize. A similar charge could not be made against a merely rule-circular argument: the mere licence to use an inferential step in accord with modus ponens, for example, does not in and of itself guarantee that a given argument will succeed in demonstrating the validity of modus ponens. Appropriate premisses from which, by (as it might be) a single application of MPP, we can get the general conclusion that MPP is truth-preserving, may simply not exist. In general, it is a non-trivial fact that a given rule of inference is self-supporting in this way.

While this point is strictly correct, however, the fact is that unless constraints are placed on the acceptability of rule-circular arguments, it will nevertheless be true that we will be able to justify all manner of absurd rules of inference. We must confront the charge that unconstrained rule-circular justifications keep bad company. Consider someone who has somehow come to adopt the unreflective practice of inferring according to Prior's introduction and elimination rules for the 'tonk' connective:

(I) A / A tonk B; (E) A tonk B / B
If we suppose that we are allowed to use inferences in accord with these rules in mounting a justification for them, then it would seem that we could justify them as follows:9

1. 'P tonk Q' is true iff 'P' is true tonk 'Q' is true (Meaning Postulate)
2. P (Assumption)
3. 'P' is true (2, T-scheme)
4. 'P' is true tonk 'Q' is true (3, tonk-introduction)
5. 'P tonk Q' is true (4, 1, biconditional-elimination)
6. P tonk Q (5, T-scheme)
7. If P, then P tonk Q (6, logic)
Here line 7 expresses a canonical statement of tonk-introduction dependent just on the meaning postulate in line 1. So this template is available to explain how someone for whom inference in accordance with tonk-introduction was already part of their unreflective practice could arrive at an explicit justification for it. And an exactly corresponding example could be constructed to yield a 'justification' for the principle of tonk-elimination.

Or consider the following example.10 Let R* be the rule that, for any P: P, therefore All snow is white. Now, we seem to be in a position to mount a justification for it along the following lines. Pick any proposition P:

1. P (Assumption)
2. All snow is white (1, R*)
3. If P, then All snow is white (Conditional Weakening)

9 The example is Crispin Wright's, drawn from his commentary on a related paper at the Stirling Conference on Naturalism, April 1997. 'Tonk' is discussed in greater detail in Section III.
10 Due to Marcus Giaquinto.
Therefore, the inference from P to 'All snow is white' is truth-preserving. Since this is independent of the particular proposition P that is chosen, then, for any proposition P, the inference from P to 'All snow is white' is truth-preserving, i.e. R* is valid. Prima facie, then, there look to be serious objections to supposing that a rule-circular justification can confer any sort of warrant on its conclusion.
HOW TO RESPOND?
If the preceding considerations are correct, it's a serious question how there could be objectively correct deductive transmission principles. And this result by itself would deal a powerful blow to the objectivist pretensions of the concept of knowledge. If there are no objectively correct facts about how one ought to reason deductively, much of what we take to be knowledge would not be binding on those who would prefer to reason differently. But the true situation is probably worse even than this. For given the inevitable involvement of deductive reasoning in any account of how we might know the correctness of non-deductive epistemic principles, the problem is likely to be global: it will be difficult to see how there could be objectively correct epistemic principles of any sort.

I don't have the space to argue for this general claim in detail here. In outline, this is how the argument would go. All the points about the inadequacy of observational or default reasonableness accounts would carry over to the non-deductive case. That means that any justification for the principles governing non-deductive reasoning would have to be inferential. As inferential, they would either have to be non-deductive or deductive, or a mixture of the two. If non-deductive, then the justification would be rule-circular and so subject to a version of the worries just outlined. If deductive, then ditto. If a mixture, then ditto. To put matters another way, it seems to me that all we really need, in order to raise a serious problem about the possibility of objectively correct epistemic principles, is the simple and seemingly inescapable claim that reasoning of some sort will be involved in any putative knowledge that we might have of any high-level epistemic claim. Once that simple thought is in place, seemingly insuperable problems are upon us virtually immediately. How should we respond?
For obvious reasons, we can't just say that there are no objectively correct principles and leave it at that. We cannot but think that some beliefs are more justified than others, and that fact entails that we cannot but think that some epistemic principles are preferable to others. But how are we to make sense of this preference, if we are not allowed to think that some principles are objectively correct and others aren't?
Paul Boghossian
There look to be two options: we can either treat judgments about justification as capable only of relative truth, or we can treat them expressively, as not expressing genuinely truth-evaluable propositions in the first place. On the first view, we accommodate the result that there are no objective facts about justification by appealing only to relative facts about it; on the second view, we accommodate it by not appealing to any facts at all. I will start with a discussion of the relativist option.
II RELATIVISM
Against the backdrop of the problem for objectivism just outlined, a relativism about justification can seem almost forced. There appears to be no way to justify one set of epistemic principles over another except by the use of those very epistemic principles. However, depending on what principles we begin with, distinct sets of principles will come out looking correct. In response, it seems very natural to say that there can be no such thing as the objectively correct epistemic principles. There is just where we start, and how we find it natural to reason. 12 There are, of course, a number of ways in which such a relativism about justification might be elaborated, but the core idea is this: whether, under the appropriate circumstances, a given body of information supports a particular belief isn't some absolute relation between the information and the belief but is rather to be understood as obtaining only relative to some further parameter, the epistemic principles accepted by a community: whereas the objectivist thinks that some proposition P can simply be justified (by the evidence, under the appropriate conditions, all this henceforth suppressed), the relativist thinks that we can only cogently talk about P's being justified relative to a communal epistemic practice C, for variable C. On the relativist's view, in other words, there is, in our usual use of attributions of justification, a hidden reference to a relation that obtains between the claim being put forward and the speaker's own community, a reference that his analysis purports to reveal, in just the way that Russell's famous 11 Why not consider instead a relativism or non-factualism about logic itself, rather than about justification? The reason is that these views are well-known to be hopeless. A relativism about logic is just a version of a conventionalism about it, a view decisively defeated in Quine (1966).
And as I have argued in Boghossian (2000), those objections carry over straightforwardly to a non-factualist construal of logic. 12 Some may find this thought expressed in Wittgenstein's remark: 'If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: "This is simply what I do"' (Wittgenstein 1953, §217).
Objective Epistemic Reasons
analysis of definite descriptions purported to disclose a hidden reliance on existential quantification:

(J) For any speaker S asserting that P is justified: S is making a judgment of the form: P is justified relative to S's communal principles C. 13
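For comparison, the two analyses can be set side by side. The notation below is my own illustration, not the text's: Russell's analysis of 'The F is G', and the relativist's (J) read as uncovering a hidden communal parameter.

```latex
% Russell: 'The F is G' conceals an existential quantifier
\exists x\,\bigl(Fx \land \forall y\,(Fy \to y = x) \land Gx\bigr)

% Relativist (J): 'P is justified' conceals a relational parameter
\mathrm{Justified}(P) \;\equiv\; \mathrm{Justified}_{C}(P),
\quad \text{where $C$ is the speaker's communal epistemic principles}
```

In both cases the surface form of the sentence is claimed to mislead: an apparently one-place predicate is analysed as relational.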
As for the communal norms themselves, there can be no question of their being justified or correct. They are what they are. Now, philosophical tradition has it that relativism so understood is subject to a decisive dilemma. Either the relativist is putting forward his view as objectively justified or only as relatively justified, justified for his community. If it's the former, then the view refutes itself, for there would then be, by its own admission, at least one proposition that is objectively justified (and if there is that one, is it really plausible that there shouldn't be others?). If, however, he insists that it is meant to be justified only in a relative sense, only justified for his community, then why do we as non-relativists need to worry about it? The argument I have just presented is nearly as old as philosophy itself. 14 Objectivists seem to find it decisive, whereas relativists are prone to dismiss it as worthless, a clever bit of logical trickery that has no real bearing on the issues at hand. It's hard to see how the relativist's attitude here is to be vindicated: there is absolutely nothing illicit about self-refutations of this sort. In fairness, however, it is important to note that this famous argument does suffer from three significant weaknesses. First, the argument is a self-refutation argument of a pragmatic variety, and the point is that such an argument proceeds not by uncovering a genuine contradiction in the target view, but by uncovering a contradiction between asserting the view and the view's content. It follows, therefore, that we cannot say, merely on the basis of the argument, that we have demonstrated the falsity of the claim that all justification is relative, but only that such a claim would not be assertible or believable. 
Another limitation that self-refutation arguments of a pragmatic variety are subject to is that they depend on a particular vocabulary for describing the activity of knowledge, for example, on the propriety of describing the activity of knowledge in terms of the notions of assertion and belief. But perhaps these are not the right concepts for the description of cognitive activity, as some eliminativists have claimed. 15 Perhaps this whole way of describing what we do, when we seek knowledge, will be replaced by 13 The relativist could also be understood as arguing not that we already speak this way, but that we ought to, if we are to speak cogently. 14 It can be found in Plato's Theaetetus, and in Nagel (1996). 15 See e.g. Churchland (1983).
some other set of terms. What would be the value of our pragmatic refutation then? Obviously, if the notion of asserting something, or of believing it, were replaced by some other way of thinking about knowledge (shmasserting and shmelieving, for example), it would be irrelevant that there is an inconsistency between asserting or believing that all justification is relative and the claim that it is. What we would need to do is find an inconsistency between shmasserting that all justification is relative and the claim that it is. A third problem for the argument is perhaps the most serious. If the relativist opts for the horn of saying that J is meant to be justified only relative to his community, he has not yet committed himself to the view that that community is identical with the community of relativists. For all we are entitled to assume, he may mean that J is justified for a community that includes non-relativists and, hence, that it is equally justified for them. So we are not immediately entitled to say that, if he adopts that horn, we are entitled to ignore him. For instance, the anti-objectivist argument that I presented in Section I relies only on ordinary and widely accepted epistemic norms. If the relativist motivates his view by appealing to that argument, we can hardly dismiss him by saying that his view is justified only relative to relativists. His view would appear to have been motivated for all of us. There are certainly things that can be said in reply to these objections. In response to the first objection we may point out that it is highly significant that a view is not coherently assertible or believable. If we know a view to be not coherently believable, we know that we cannot take it seriously as a possible candidate for truth. In response to the second objection, one can say two things.
The first is that no one has come close to saying what an alternative to classical epistemology would look like; no one has provided the slightest guidance as to how we are to think of our basic cognitive activities if not in terms of the notions of asserting, claiming, saying, believing, and the like. And the second is that it's very hard to see how any putative replacement would be able to evade the sorts of consideration that the pragmatic refutation employs, given how austere those considerations actually are. Surely, any replacement epistemology will have to have some notion that plays the same role as our notion of a reason for believing something; and for any such notion we will be able to run a version of the argument that we deployed above. But it is hard to see what to say in response to the third objection, it seems to me. The epistemic norms that are relied upon in the anti-objectivist argument of Section I are ordinary norms that beg no question against the objectivist. With what right, then, does the objectivist claim the freedom simply to ignore the relativism that they seem to motivate?
At least as traditionally formulated, then, the classical pragmatic refutation of relativism seems to me to be far from decisive. Unfortunately for the relativist, however, there is a different way of formulating the objection to his view that evades these difficulties. To bring it out, let me introduce a notion, that of being 'epistemically blameless'. If someone is epistemically blameless in believing something, then it makes no sense to criticize him for believing it. I intend this to be an absolute notion, by contrast with the relativist's relative notion of justification. Consider next a community C, and a given state of information I that C finds itself in. If justificatory relativism is true, then, even while keeping the state of information I fixed, it is possible for C to believe any proposition P that it wants, and be blameless. All C has to do is adopt whatever epistemic norm sanctions P under I. Since, according to the relativist, there can be no higher facts about which epistemic principles it would be correct to adopt, C can adopt any epistemic principle it wants and be blameless. Since, for any P, there will be some set of principles that will sanction believing it, any state of information is consistent with blameless belief in any proposition, if relativism is true. In particular, C can blamelessly adopt epistemic norms that prohibit a relativism about justification. Indeed, because it can adopt whatever epistemic norms it wants, it can keep most of the ordinary norms in place and simply accept certain exceptions to them, whatever it takes to selectively prohibit whatever view it doesn't like, including relativism. By the relativist's own lights, there can be no objection to this manoeuvre. The original hunch behind the classical pragmatic anti-relativist argument is that relativism may be blamelessly rejected. That hunch is now vindicated by our reformulated anti-relativist argument.
But more than that, we see that, at least on this straightforward way of formulating a relativism about justified belief, relativism does indeed lead to an unacceptable form of 'anything goes'. On its own terms, any state of information is consistent with blameless belief in any proposition, given only appropriate (and guaranteed to be blameless) adjustments in the epistemic system.
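The 'anything goes' upshot has the shape of a short derivation. The following Lean sketch is my own formalization, with hypothetical names, not anything in the text: if every proposition is sanctioned by some adoptable system, and (as the relativist must grant) sanctioned belief is blameless, then blameless belief in anything follows.

```lean
-- Hypothetical formalization of the 'anything goes' argument.
-- `System` ranges over epistemic systems a community may adopt,
-- `P` over propositions it might believe.
example (System P : Type)
    (sanctions : System → P → Prop)   -- system E sanctions believing p
    (blameless : P → Prop)            -- believing p is blameless
    -- relativist premise: no system is more correct than any other,
    -- so belief sanctioned by some adopted system is blameless
    (h_blame : ∀ E p, sanctions E p → blameless p)
    -- for any proposition, some adoptable system sanctions believing it
    (h_exists : ∀ p, ∃ E, sanctions E p) :
    ∀ p, blameless p :=
  fun p => (h_exists p).elim fun E hE => h_blame E p hE
```

The work is all done by the two premises; the conclusion is immediate, which is the point of the reformulated objection.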
NON-FACTUALISM ABOUT JUSTIFICATION
The question is: Would we do better if we accommodated an anti-objectivism about justification not in relativist terms but in expressivist ones? Allan Gibbard has developed just such an expressivist theory of judgments of rationality; adapted to the present case it would yield something like the following view: When someone says that 'x is a justified belief' they are not attributing any sort of property to it at all, relational or otherwise;
rather, they are expressing their acceptance of a system of norms that permits that belief under those circumstances. 16 Now, it might appear at first glance that this is a considerable improvement over a relativist construal of justification. Since, in saying that a belief is justified, we are not attributing any sort of property to it, but merely expressing our acceptance of a system of norms that permits it, and since we don't as a matter of fact accept epistemic norms that permit believing anything, it looks as though the consequence that one can believe anything one likes and be blameless is blocked. Unfortunately, I shall argue that this appearance is illusory and that a non-factualism about justification is subject to much the same sort of objection as an outright relativism about it. To see why, let's imagine that I come across someone, call him AR, who holds a view I consider utterly unjustified: for example, that there is a spaceship trailing the comet Hale-Bopp that is going to come down and swoop him away. What can be my attitude towards such a person, given a Gibbard-style expressivism? I can express my acceptance of a system of norms that forbids that belief, all right, but that seems to leave something important out. If I tell AR that his belief that p is irrational and unjustified, I am not merely expressing my acceptance of a system of norms that forbids it; I am claiming to see something that he is not, namely, that p ought not to be believed, given the available evidence. I am saying (roughly): I do not believe p; you should not either. Gibbard tries to account for the normativity of such judgments by invoking a classic expressivist resource: the conversational demand. In saying that x is unjustified, he says, I am expressing my acceptance of a system of norms that forbids x and adding: Do so as well!
In and of itself, however, this does not capture the claim that I appear to be making when I claim that I am justified and AR isn't, for even someone who is simply browbeating his interlocutor can issue a conversational demand. To browbeat someone is to issue a conversational demand whilst knowing that one is not entitled to do so. So the question is: with what right do I insist that someone accept my view and abandon his, on non-factualist views of justification? Could not AR insist, with equal right, that I abandon my view in favour of his? Indeed, as a non-factualist, would not I have to recognize that our claims to normative authority here are perfectly symmetrical, thereby undermining any hold I might have had on the thought that I am justified and he is not? And is not this a version of the sort of relativism expressivism was supposed to avoid? Now, AR's belief about alien spaceships may arise in a number of different ways. He may share all my epistemic norms on the fixation of belief and he may be very good at reasoning from those norms and the available evidence to the relevant conclusions. He may simply not be aware that there is not a scintilla of evidence that there is a spaceship trailing Hale-Bopp. 16
Gibbard (1990).
In that case, there is no difficulty accounting for my demand that he give up his view in favour of mine. Knowing that his problem stems simply from an ignorance of the relevant facts, I can coherently ask that he take my reasoning as proxy for his own. And he, for his part, would be entirely reasonable in taking me up on my invitation. Then, again, AR's curious belief may derive not from his ignorance of any item of evidence but from his poor abilities at reasoning: he may be bad at moving from the epistemic norms that we share and the evidence to the appropriate conclusions. Here, again, there is no difficulty accounting for the normative authority that I claim. Given that we share the relevant norms, I can again ask him to take my reasoning as proxy for his own. But suppose that the difference between AR's beliefs and mine stems not from such mundane sources but rather from a deep-seated difference in the fundamental epistemic norms to which we subscribe, norms for the fixation of belief that are not derived from any others. In calling his view irrational, then, I am in effect demanding that he give up his fundamental epistemic norms in favour of the ones that I employ. And the question I am asking is: With what right do I do this, on a non-factualist view? As an objectivist, I would have no trouble explaining my attitude here. Since, as an objectivist, I take there to be a fact of the matter which fundamental norms are correct, and since I take myself to know what they are, I can easily explain why I am insisting that my interlocutor give up his norms in favour of mine. Of course, my interlocutor, convinced of the correctness of his own norms, may make a similar demand on me. If the norms are fundamental, this may well result in an impasse, a disagreement from which neither of us can be budged by argument.
But it would at least make sense that there is a disagreement here and that we should be issuing (potentially ineffective) conversational demands on each other. But what explanation can the non-factualist offer of these matters? The non-factualist may reply that there is no difficulty here. After all, he will say, the epistemic norms that I accept are unconditional: they apply to someone whether or not that person is inclined to accept them. There seem to me to be two problems with this reply, however, one with the assumption that I accept unconditional norms in the first place, the other with my insistence that someone else also accept them. First, if a non-factualism about justification is correct, with what right do I accept epistemic norms that are unconditional, so that they apply to someone whether or not they accept them?17 If there really are no perspective-independent facts about which epistemic norms are correct, with what right do I accept norms that apply to people whether or not they accept them? Should not an appropriate sensitivity to the fact that there is nothing that makes my norms more correct than anyone else's result in my being hesitant 17
David Velleman has emphasized this point to me.
about accepting norms that apply to others regardless of whether they are also inclined to accept them? Second, and putting this first problem to one side, on what basis do I insist that AR give up his unconditional norms in favour of mine? I accept a particular set of fundamental norms, he accepts another. By assumption, the norms in dispute are fundamental, so there is no neutral territory on which the disagreement can be adjudicated. Furthermore, on the non-factualist view, there are no facts about which fundamental epistemic norms are correct and which ones are not. So, on what basis do I insist that he give up his norms in favour of mine? The expressivist thinks he can evade the clutches of an unpalatable relativism by claiming that talk about a belief's being justified expresses a state of mind rather than stating anything. But this stratagem does not long conceal the view's inevitable relativistic upshot, which can now be restated in terms of the problem of normative authority. If no evidential system is more correct than any other, then I cannot coherently think that a particular belief is blameworthy, no matter how crazy it may be, so long as that belief is grounded in a set of fundamental epistemic norms that permit it, no matter how crazy they may be. To repeat: the point here is not about suasive effectiveness. I do not mean that the realist about justification will have an easier time persuading anyone of anything. In fact, it is quite clear that there are lots of extreme positions from which no one can be dislodged by argument, whether confronted by a realist or an expressivist (this is a point to which we will have occasion to return). The issue is rather about having the resources with which to think certain thoughts coherently. By virtue of believing that there are objective facts about what justifies what, the realist can coherently think that a particular epistemic system is mistaken. The non-factualist, however, cannot.
In a sense, the difficulty should have been evident from the start. For the root problem is with the claim with which the expressivist about justification must begin, that there is nothing that epistemically privileges one set of epistemic principles over another. Once that basic thought is in place, it becomes impossible to evade some sort of relativistic upshot. It doesn't matter whether the basic thought is embedded in an expressivist or a non-expressivist framework.
III VINDICATING RULE-CIRCULARITY: WARRANT TRANSFER
Where do we stand? In Section I, we saw that there are powerful considerations in favour of thinking that there could not be objectively valid epistemic
reasons. In Section II, on the other hand, we saw that there appears to be no palatable way to accommodate this result. Unless we are to be mired in paradox, then, we have to find some way of vindicating the claim that we can know what the correct epistemic principles are. And if we are to do that we have to find some way of vindicating rule-circular justifications, of defending them from the objections that they beg the question and keep bad company, for so far as I can see, that can be our only route to knowing them. That is the idea I propose to explore in this third and final section. If rule-circular arguments are in fact capable of transferring warrant from their premisses to their conclusions, we should expect this result to flow in some natural way from the conditions that govern warrant transfer quite generally. So let's begin with the general question: Under what conditions does an argument transmit the warrant for its premisses to its conclusion? One condition seems clear enough: the thinker, S, must be justified in believing the premisses p. Beyond that, however, matters get less straightforward. It will be instructive to start with an incorrect, overly rich account of what is required in order to try to converge on something more plausible. In his article, 'Epistemic Circularity', 18 William Alston considers, without fully endorsing, a version of the following account (I have modified it in small ways). S's belief that p confers warrant on his belief that q just in case:

(A) S is justified in believing the premisses, p.
(B) p and q are logically related in such a way that if p is true, that is a good reason for supposing that q is at least likely to be true.
(C) S knows, or is justified in believing, that the logical relation between p and q is as specified in (B).
(D) S infers q from p because of his belief specified in (C).

The conditions are intended to be singly necessary and jointly sufficient for the inference to warrant the conclusion.
Now, one problem that I wish to set aside concerns the sufficiency of these conditions. Crispin Wright and others have remarked that there are important cases where one's knowledge that p depends on one's prior knowledge that q, and in those cases it would be wrong to claim transfer of warrant from premisses to conclusion. 19 We may assume, however, that this problem has been accommodated by the stipulation that knowledge of the premisses be suitably independent. The problem I will be interested in concerns the necessity of these conditions, specifically that of C. It is easy to appreciate why. If C were a correct necessary condition on warrant transfer, then it would follow immediately that there could be no such thing as a rule-circular 18
Alston (19 86).
19
See Wright (forthcoming) and Wright
(2000).
justification. For C requires that, in order to use an argument employing a given rule to support the claim that that rule is truth-preserving, one already has to know that that rule is truth-preserving. And that would make the rule-circular justification otiose: the knowledge arrived at would already be presupposed. Fortunately, however, it can readily be seen that C is intuitively too strong. One problem with it we have already had occasion to note in connection with the passage cited from Peacocke above: it is far too sophisticated a requirement. A child who reasoned: • If he were hiding behind that tree, he wouldn't have left his bicycle leaning on it • But it is leaning on it • So, he must be hiding behind some other tree would, other conditions permitting, have reasoned his way to a justified conclusion. But such a child would not have beliefs about logical entailment. He wouldn't even have the ingredient (meta-)logical concepts. A second, more severe, problem is suggested by Lewis Carroll's observations in his note 'What the Tortoise Said to Achilles'.20 There are a number of ways of reading that famous argument, of course, and it is not clear which, if any of them, Carroll actually had in mind. But on one suggestive reading, its moral is precisely that condition (C) is too strong if there is to be any such thing as transfer of warrant by argument. 21 According to the propositional picture, one can only be justified in inferring a given conclusion from a given premiss according to a given rule R, if one knows that R has a particular logical property, say that it is truth-preserving. So, for example, no one simply reasoning from the particular proposition p and the particular proposition 'if p, then q' to the proposition q could ever be justified in drawing the conclusion q; in addition, the thinker would have to know that his premisses necessitate his conclusion. 
Let us suppose that the thinker does know this, whether this be through some act of rational insight or otherwise. How should we represent this knowledge? We could try:

(1) Necessarily: p → ((p → q) → q)

Some may feel it more appropriate to represent it meta-logically, thus:

(2) p, p → q logically imply q
20 Carroll (1895). 21 James van Cleve also suggests this as the moral of the Lewis Carroll argument; but the argument he outlines is distinct from the one I shall present. See Van Cleve (1984).
The question is: However the knowledge in question is represented, how does it help justify the thinker in drawing the conclusion q from the premisses with which he began? The answer might seem quite simple. Consider (1). Doesn't knowledge of (1) allow him to appreciate that the proposition that q follows logically from the premisses, and so that the inference to q is truth-preserving and so justified? In a sense, the answer is obviously 'Yes', knowledge of (1) does enable an appreciation of just that fact. But it doesn't do so automatically, but only via a transition, a transition, moreover, that is of a piece with the very sort of transition it is attempting to justify.

(1) p → ((p → q) → q)
(2) p
(3) (p → q) → q
(4) p → q
(5) Therefore, q

As is transparent, any such reasoning would itself involve at least one step in accord with modus ponens. What about representing the knowledge in question as in (2)? The problem recurs. To know that p and p → q logically imply q is just to know that if p and p → q are true, then q must be true. Once more, there is an easy transition from this knowledge to the knowledge that q must be true, given that p is true and that p → q is true. But the facility of this transition should not obscure the fact that it is there and that it is of the same kind as the transition that it is attempting to shore up. If, therefore, we insist that the original inference from p and p → q to q was unjustified unless supported by the propositional knowledge represented either by (1) or by (2), then we commit ourselves to launching an unstoppable regress. Bringing any such knowledge to bear on the justifiability of the inference would itself require justified use of the very same sort of inference whose justifiability the general knowledge was supposed to secure. What this Lewis Carroll-inspired argument shows, it seems to me, is that at some point it must be possible to use a rule in reasoning in order to arrive at a justified conclusion, without this use needing to be supported by some knowledge about the rule that one is relying on. It must be possible simply to move between thoughts in a way that generates justified belief, without this movement being grounded in the thinker's justified belief about the rule used in the reasoning. Condition (C), we are agreed then, must go. But do we simply scratch it out and remain content with the external condition mentioned in (B)? And what, exactly, should that external condition be?
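The regress can be made concrete in a proof assistant. The following Lean sketch is my illustration, not anything in the text: even after the 'justifying' premise (1) is added, discharging it requires the very modus ponens transitions it was meant to underwrite.

```lean
-- Plain modus ponens: the bare transition from p and p → q to q.
example (p q : Prop) (hp : p) (hpq : p → q) : q :=
  hpq hp

-- Adding premise (1), p → ((p → q) → q), does not eliminate the
-- transition: using it takes two further modus ponens applications,
-- first to obtain (p → q) → q, then to obtain q.
example (p q : Prop) (hp : p) (hpq : p → q)
    (h1 : p → ((p → q) → q)) : q :=
  (h1 hp) hpq
```

The second proof is strictly longer than the first, which is the regress in miniature: each attempt to ground the inference in propositional knowledge calls for more inferences of the same kind.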
If we look at the external condition described by Alston, we notice something striking: it is thoroughly unhelpful. It says: If inferring q from p is to provide a good reason for believing q, then p and q must be so related that p's being true is a good reason for believing q to be true. That verges on the platitudinous. Can we do better? What about the suggestion that p and q be so related that the inference from p to q is reliably truth-preserving? That won't do, for it leaves out the inductive case. What about saying that p and q be so related that the probability of q given p be reliably high? The trouble with this suggestion is that it is not clear that we have a grip on this that is other than in terms of subjective probability. And if that is right, then the suggestion collapses back into the unhelpful proposal just considered. However, even if there were an external condition that was both helpful and general enough to cover the requisite range of cases, it's clear, it seems to me, that it would not be sufficient to explain under what conditions arguments transfer warrant. Henceforth, and for the remainder of this essay, I shall concentrate on the deductive case, leaving a treatment of induction for another occasion. (The extension of the ideas of this paper to the inductive case involves questions to which I currently have no settled answers.) The reason why not is familiar from discussions of reliabilist conceptions of justification more generally. The mere fact that a particular inference is truth-preserving bears no intuitive link to the thinker's entitlement to it. There are infinitely many hopelessly complicated truth-preserving inferences that it would be absurd to suppose are justifiably performed just because they are truth-preserving. For example, any inference of the form

If x, y, z, and n are positive integers and n is greater than 2, then xⁿ + yⁿ is not equal to zⁿ

is, as we now know, reliably truth-preserving. But it would be absurd to suppose that anyone making that inference, whether or not they knew anything about Andrew Wiles's proof of Fermat's last theorem, would be drawing a justified conclusion. Someone may object: 'Of course that would not be enough. It is only in the fundamental cases, where the inference cannot be broken down into further steps, that mere truth-preservation is sufficient for warrant transfer. In non-fundamental or derived cases, actual recognition that the rule is truth-preserving may well be required.' It is difficult to see, however, how this qualification is to be motivated. Why should it matter, one way or the other, whether the inference is fundamental
or not? How do we explain why it is only in cases that are fundamental that truth-preservation is sufficient for justification? The missing intuitive link between the external condition and the entitlement may be especially vivid in cases where the inferred conclusion is one that the thinker is not already entitled to; but the point would appear to hold quite generally. We find ourselves in a familiar philosophical predicament, looking for a satisfying intermediate position between two unpalatable extremes. We cannot say that all that's required for a deductive inference to be justified is that it be truth-preserving. But we cannot supply the missing ingredient, on pain of regress, by requiring that the thinker know that his inference is of a truth-preserving sort. So what are we to do? Can we make sense of the idea that a thinker is entitled to reason in a particular way, without this involving, incoherently, that the thinker know something about the rule involved in his reasoning? We can, I think, if a natural, indeed virtually inevitable, suggestion is true: namely, that our logical words (in the language of thought) mean what they do by virtue of their inferential role, that 'if, then', for example (or more precisely, its mentalese equivalent), means what it does by virtue of participating in some inferences and not in others. If this is correct, and if, as is overwhelmingly plausible, it is by virtue of its role in fundamental (i.e. underived) inference that the conditional means what it does, then we have an immediately compelling answer to the question: how could someone be entitled to reason according to MPP without having a positive belief that entitles him to it? If fundamental inferential dispositions fix what we mean by our words, then, as I shall now try to show, we are entitled to act on those dispositions prior to and independently of having supplied an explicit justification for them.
The satisfying intermediate position concerning warrant transfer, I therefore want to propose, is that in the case of fundamental inference the implicated rule must be meaning-constituting. Unlike the purely external requirement of truth-preservation, this view explains why the thinker is entitled to the rule; and yet unlike the impossible internalism, it does so without requiring that the thinker know that the rule is truth-preserving.
EXTERNALISM, INTERNALISM, INFERENCE, JUSTIFICATION
It is beyond the scope of this essay to defend the correctness of this account of inferential warrant in the detail that it requires. But in order to begin to get a sense of why it might be on the right track, it will be necessary to look briefly at the notion of justification more generally, and at the controversy about 'externalist' versus 'internalist' construals of it.
Paul Boghossian
The issue can be usefully approached by considering an objection to the view I'm proposing that was put to me by Crispin Wright:

Boghossian's reaction to the simple externalist account betrays an interest in reflectively appreciable warrant-warrant that makes a phenomenologically appreciable impact, as it were. But he does not connect his own proposal with such impacts; and it is not clear how the connection might be made. If it cannot be, one might as well stick with simple externalism. 22
Wright's point can be restated in the form of a dilemma. Either the account is trying to reconstruct an externalist warrant, or an internalist one. If an externalist one, then the account in terms of meaning-constitution has no obvious advantage over simple truth-preservation; if an internalist one, then it is not clear that the demand is satisfied: a rule's being meaning-constituting does not necessarily have any 'appreciable phenomenological impact'. I intend my account to capture a broadly internalist notion of warrant, and so I embrace the second horn of the bruited dilemma. To see why that characterization is correct, however, and why my account does satisfy the constraints appropriate to an internalist notion, we have to look at how that distinction is properly conceived. Start with a crude externalism about justified belief (and put aside worries about how reliability is to be defined): a belief is justified just in case it is produced by a reliable belief-forming mechanism. If I reject a crude externalism-and I do-it is because I was convinced by some familiar examples-Bonjour's Samantha, Casper, Maud and Norman and Lehrer's TrueTemp-that it is false. These examples show conclusively, I think, that mere reliability is not sufficient for justification. 23 If we look at these examples, we find their structure to be this: a subject's belief that p is produced by a reliable mechanism but the belief is, nevertheless, in some strongly intuitive sense, epistemically irresponsible. And our response to such cases is that, under those circumstances, the subject cannot count as justified. It appears to be a condition on someone's being epistemically justified that they not be epistemically irresponsible in forming their belief. What makes a belief epistemically irresponsible?
An inspection of the examples seems to suggest a uniform answer: the absence of a reflectively appreciable warrant for the belief (which can sometimes assume the form of the presence of a reflectively appreciable warrant for its negation). A steady diet of such examples has encouraged philosophers simply to identify possession of an internalist warrant-and hence warrant as such-with the possession of a reflectively appreciable item of information that justifies
22 The objection is from a draft of a commentary on an earlier version of this paper delivered at the Pacific Division meetings of the APA, Albuquerque, NM, in April of 2000.
23 Bonjour (1998), Lehrer (1990).
the belief, and that is in effect what Wright does in his formulation of the dilemma. Understandable as this identification may be, it is not justified by the considerations that have been adduced by internalists. For all that the examples show, it is possible that there is some other way in which a belief might be responsibly held-or at least held not irresponsibly-other than by being supported by some reflectively appreciable warrant. All that the examples actually teach us is that being justified cannot coexist with being epistemically irresponsible. They don't-and can't-teach us that the only way to avoid epistemic irresponsibility requires support from a reflectively appreciable warrant. As I have already indicated in my discussion of default reasonableness, I think that beliefs that are meaning- or concept-constituting can be held responsibly even in the absence of a reflectively appreciable warrant for them. This is particularly compelling in the case that is the focus of the present discussion-justifiable inference. There are two key related points. First, if it is really true that someone's being disposed to reason according to modus ponens is a necessary condition of their having any logical concepts at all, and so of being able to reason in any shape, manner or form, there can be no intuitive sense in which their disposition to reason according to modus ponens can be held to be irresponsible, even in the absence of a reflectively appreciable warrant that justifies it. If you doubt that this is true, try to construct a Bonjour-style case that will make it seem intuitively irresponsible for someone to reason according to modus ponens, without first having satisfied themselves that the inference form is truth-preserving, when their doing so is a precondition of their being able to engage in any reasoning whatsoever. 
Second, if my Lewis Carroll-inspired argument is correct, we know that, at least in the most basic cases, no richer warrant-nothing that would count as a phenomenologically appreciable belief about the rule, for example-could so much as be coherent. At some point, as that argument shows, it must be possible simply to move to a justified conclusion. But that fact should not be taken to imply that in that range of cases we have to settle for a merely externalist warrant. The core distinction between externalism and internalism in the theory of justification is properly characterized in terms of the notion of epistemic responsibility. Fundamental inferences that are meaning-constituting are not epistemically culpable, even if they are not supported by reflectively appreciable warrants. To demand more of a thinker is to demand the provably impossible.
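The Carroll-inspired point, that at some point one must simply move to a justified conclusion, has a concrete counterpart in modern proof assistants, where modus ponens is not a premise that proofs consume but the primitive act of applying a proof of a conditional to a proof of its antecedent. A sketch in Lean 4 (an outside illustration added here, not part of the text):

```lean
-- Modus ponens as function application: from a proof h of P → Q and a
-- proof p of P, the term `h p` is a proof of Q. Adding a further
-- hypothesis "MPP is valid" would itself have to be applied -- by
-- modus ponens -- so the regress halts only by taking application as
-- basic, exactly the Carroll moral.
theorem mpp {P Q : Prop} (h : P → Q) (p : P) : Q := h p
```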
BEGGING THE QUESTION (2): REPLY
So far in this section of the paper, we have been looking at what we should say to the general question: Under what conditions is an argument
warrant-transferring? And the answer, based purely on general considerations, has been that, in the most basic cases, the relied upon rule should be meaning-constituting. If this answer is correct, though, it points the way forward with our question concerning the legitimacy of rule-circular justifications. For it answers the second part of the problem about begging the question: how could we be entitled to use a particular rule of inference independently of being entitled to believe that that rule is valid? So long as the rule-circular justification at issue involves a meaning-constituting rule, there can be no question of our entitlement to reason in accordance with it, even in the absence of a reflectively appreciable belief that justifies it. What about the problem of bad company? To see how the very same resources can supply a solution to this further set of problems, we need to further explore the idea of a conceptual role semantics.

CONCEPTUAL ROLE SEMANTICS
When we say that meaning is determined by conceptual role, how exactly should this be understood? On one view, every possible conceptual role determines some meaning or other. But we know this not to be a plausible view, on purely meaning-theoretic grounds. That is the minimal lesson of Arthur Prior's 'tonk' example (Prior 1960-1). Prior imagined a connective governed by the following introduction and elimination rules:
A / A tonk B     A tonk B / B

The specification defines a conceptual role; but what meaning does it determine? If we said that there is one, then we would have to hold that there is a thinkable proposition expressed by sentences of the form 'A tonk B'. If there were such a thinkable proposition, then there would have to be a way the world is when the proposition is true. How, though, must the world be if 'A tonk B' is to be true? Since the sentence is compound, its truth value will depend on the truth values of its ingredient sentences A and B. But we can readily see that there can be no consistent assignment of truth value to sentences of the form 'A tonk B' given the introduction and elimination rules for 'tonk'. Given those rules, both

A → A tonk B

and

A tonk B → B
have to come out tautologous, for any A or B. It is impossible to satisfy that demand. Pick an A that's true and a B that's false. Then, for the first conditional to come out true, 'A tonk B' has to be true. However, given that B is false, 'A tonk B' has to be false if the second conditional is to come out true. So there can be no determinate way the world has to be, if 'A tonk B' is to come out true. 24 But we don't need actual inconsistency to make the point that not every conceptual role determines a meaning. Consider, for example, the following connective 'shmand'. Its introduction and elimination rules are exactly like those for conjunction, except that the sentence that occupies the 'A' position is restricted to a length of fewer than twenty-five letters.

A (<25), B / A shmand B     A (<25) shmand B / B

What proposition would be expressed by sentences of the form 'A shmand B'? How would the world have to be if this sort of proposition is to be true? Clearly, there is no determinate answer to that question. 25 For purely meaning-theoretic reasons, then, we should deny that every conceptual role determines a meaning. We should insist that a conceptual role determines a meaning for an expression only if it manages to contribute in some determinate way to determining how the world would have to be if sentences involving the expression are to be true. Put in other words, the way to understand a conceptual role theory of the logical constants is to see them as subject in part to the implicit stipulation: Let x express that meaning, if any, whose semantic value makes a particular class of inferences truth-preserving. If there is no such value, then there is no such meaning. 26
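The truth-table argument against 'tonk' can be checked mechanically. A minimal Python sketch (my own illustration; the names and the encoding of the rules are not from the text) runs through all sixteen candidate binary truth-functions and confirms that none makes both tonk rules truth-preserving, while modus ponens passes the same external test:

```python
# No classical truth-function can serve as a meaning for Prior's 'tonk'.
# Truth-preservation of the introduction rule (A / A tonk B) requires
# A -> (A tonk B) to be tautologous; the elimination rule (A tonk B / B)
# requires (A tonk B) -> B to be tautologous. The two demands conflict
# at A = True, B = False, exactly as in the text.
from itertools import product

def implies(p, q):
    return (not p) or q

pairs = list(product([False, True], repeat=2))

# Each candidate meaning for 'tonk' is one of the 16 truth tables.
survivors = [
    table
    for values in product([False, True], repeat=4)
    for table in [dict(zip(pairs, values))]
    if all(implies(a, table[(a, b)]) for a, b in pairs)   # intro rule
    and all(implies(table[(a, b)], b) for a, b in pairs)  # elim rule
]
print(len(survivors))  # 0 -- 'tonk' determines no meaning

# By contrast, modus ponens meets the external condition:
# (A and (A -> B)) -> B is a classical tautology.
mpp_truth_preserving = all(implies(a and implies(a, b), b) for a, b in pairs)
print(mpp_truth_preserving)  # True
```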
BAD COMPANY: REPLY
Now, if that is the correct way to think of a conceptual role semantics, then the problem of bad company takes care of itself. If, in a given fundamental rule-circular justification, there is a meaningful inference to begin with, then it is guaranteed to be truth-preserving, for a rule of inference doesn't get to determine a meaning unless it is truth-preserving. As a result, the problem of bad company does not arise: it is impossible, intelligibly, to justify a non-truth-preserving rule, such as the tonk rules or R*. The key insight is that, just as there are objective constraints on what is true, so there are objective constraints on what we can mean. This is something that we have reason to accept entirely independently of our epistemological investments. A conceptual role semantics, by virtue of its ties to the notion of justification, transforms this constraint on meaning into a constraint on justification that simultaneously vindicates the possibility of rule-circular justifications while staving off the threat of an unpalatable relativism. 27

24 Cf. Peacocke (1992).
25 Christopher Peacocke gives a similar example in Peacocke (1993). Peacocke has long urged that a conceptual role semantics be understood in this restrictive truth-theoretic way. Although my route into these issues is distinct from his, I find myself in agreement with much of what Peacocke has to say about the logical constants and the role of meaning in justification.
26 See Boghossian (1997).

BEGGING THE SCEPTIC'S QUESTION
It is time now to turn to the final problem I outlined for a rule-circular justification, its incapacity to move the appropriate sceptic. The point at issue is prefigured in Dummett's discussion when he says that rule-circularity will be damaging only

to a justificatory argument that is addressed to someone who genuinely doubts whether the law is valid, and is intended to persuade him that it is.... If, on the other hand, it is intended to satisfy the philosopher's perplexity about our entitlement to reason in accordance with such a law, it may well do so. The philosopher does not seriously doubt the validity of the law and is therefore prepared to accept an argument in accordance with it. He does not seek to be persuaded of the conclusion; what he is seeking is an explanation of its being true. 28
Before inquiring into the significance of this, let us make sure that we do not underestimate all that a rule-circular justification is capable of accomplishing. First, it is not at all similar to a grossly circular argument in that it is not trivially guaranteed to succeed. For one thing, the relevant premisses from which, by (as it might be) a single application of the rule, the desired conclusion is to follow, may not be available. For another, not all rules are self-supporting. Second, the rule-circular argument for MPP asks in effect that it be granted one application of MPP, and from that it promises to deliver the conclusion that MPP is necessarily truth-preserving, truth-preserving in any possible application. That seems like a significant advance. Finally, this one application will itself be one to which we are entitled if, as seems plausible, MPP is meaning-constituting. For all that, it is nevertheless true that if we were confronted by a sceptic who doubted the validity of MPP in any of its applications, we could not use this argument to rationally persuade him. Doubting the rule, he would rightly reject this particular argument in its favour. Since, by assumption, we

27 I suspect that it is Wittgenstein's failure to appreciate the point that not every conceptual role determines a meaning that led to the relativistic-sounding passages of the Remarks on the Foundations of Mathematics.
28 Dummett (1991).
have no other sort of argument to offer him, it seems that we are powerless to persuade him of the rightness of our position. The question is: What is the epistemic significance of this fact? But could we not say to him: 'Look, MPP is meaning-constituting. If you reject it then you simply mean something different by "if, then" and therefore there is no real disagreement after all.' But if our sceptic were playing his cards right, he would deny that MPP is meaning-constituting. To persuade him otherwise we would have to offer him an argument and that argument would in turn have to use MPP. And then we would be right back where we started, faced with the question: What is the epistemological significance of the fact that we are unable to persuade the sceptic about MPP? In the passage cited above, Dummett seems to think that its significance lies in the way in which it highlights a distinction between two distinct projects: quelling the sceptic's doubts versus explaining to a non-sceptic why MPP is valid. But I do not really understand what it would be to explain why a given logical law is true. What could it mean except something along the lines of a conventionalism about logical truth, an account which really does aspire to explain where logical truth comes from? As any reader of Quine's 'Truth By Convention' will be aware, however, there are decisive objections to conventionalism, objections that probably generalize to any explanatory project of that form. 29 The question that we need to be asking, I think, is rather this: Can we say that something is a real reason for believing that p if it cannot be used to answer a sceptic about p? Is it criterial for my having a genuine reason for believing that p that I be able to use it to persuade someone who doubts whether p?
Well, in fact, we are very drawn to the idea that if I am genuinely justified in believing that p, then, in principle, I ought to be able to bring you around as well-or, at the very least, I ought to be able to take you some distance towards rational belief in p. Of course, you may not understand the warrant that I have; or, being more cautious than I, you may not assign it the same weight that I do. But, prescinding from these and similar considerations, how could I be genuinely justified in believing something and yet be totally unable to have any sway with you? As Thomas Nagel puts it in his recent book The Last Word:

To reason is to think systematically in ways that anyone looking over my shoulder ought to be able to recognize as correct. It is this generality that relativists and subjectivists deny. (Nagel 1996: 5)
Notice how naturally it comes to Nagel to equate the claim that there are objectively valid reasons, reasons that would apply to anyone anywhere, with the epistemic claim that anyone exposed to them ought to be able to recognize them as reasons. There is a principle behind this thought, one that we may call the 'principle of the universal accessibility of reasons': if something is a genuine reason for believing that p, then, subject to the provisos just made, its rationalizing force ought to be accessible from any epistemic standpoint. I think that this principle has played a very large role in our thinking about justification. It is what explains, it seems to me, why the theory of knowledge is so often centred on a refutation of scepticism. We take it to be criterial of our having a genuine warrant for a given proposition that we be in a position to refute a sceptic about p. If my discussion has been on the right track, however, then one of its main lessons is that this principle is false. For consider: We cannot accept the claim that we have no warrant whatsoever for the core logical principles. We cannot conceive what such a warrant could consist in (whether this be a priori or a posteriori) if not in some sort of inference using those very core logical principles. So, there must be genuine warrants that will not carry any sway with a sceptic. Answering the sceptic about modus ponens cannot be criterial for whether we are warranted in believing modus ponens. To put this point another way: we must recognize a distinction between two different sorts of reason-suasive and non-suasive reasons. And we have to reconcile ourselves to the fact that in certain areas of knowledge, logic featuring prominently among them, our warrant can be at most non-suasive, powerless to quell sceptical doubts. It seems to me that this is a conclusion that we have reason to accept entirely independently of our present concern with knowledge of logic, that there are many other compartments of knowledge in which our warrant can be at most non-suasive.

29 See Quine (1966). For further discussion, see Boghossian (2000).
One such area concerns our knowledge of the existence of other minds; another concerns our knowledge of the external world. I think that in both of these areas it is very unlikely that we will be able to provide warrants for our belief that would be usable against a determined and level-headed sceptic. 30 The correct project in epistemology is to show how knowledge is possible. It is not the refutation of arbitrarily extreme sceptics.
CONCLUSION
A central problem for the possibility of objectively valid epistemic principles has to do with explaining how we might know what they are: how could there be any if our only means of access to them is via rule-circular reasoning? I hope to have shown that, if the notion of justification and its transfer across argument is understood correctly, rule-circular justifications can be vindicated. The case is constructed on the basis of several independently plausible elements. First, that a plausible construal of warrant transfer has it that, in the most basic cases, warrant is transferred only across inferences that are meaning-constituting. Second, that if an inferential disposition is meaning-constituting then it is a fortiori reasonable, reasonably used independently of any belief about its properties. Third, that something can be a warrant for something even if it is powerless to bring around a determined sceptic. Putting all this together allows us to say that we are justified in our fundamental epistemic beliefs in spite of the fact that we can produce only rule-circular arguments for them. The price is that we have to admit that we cannot use this form of argument to silence sceptical doubts. It is arguable, however, that with respect to matters that are as basic as logic and principles of justification, that was never in prospect anyway.

30 In related though distinct contexts, similar points are made both in Alston (1986) and Van Cleve (1979).
REFERENCES

Alston, W. (1986), 'Epistemic Circularity', Philosophy and Phenomenological Research, 47, 1-30.
Boghossian, P. (1989), 'The rule-following considerations', Mind, 98, 507-49.
--(1997), 'Analyticity', in Hale and Wright (eds.) (1997).
--(2000), 'Knowledge of logic', in Boghossian and Peacocke (eds.) (2000).
--(forthcoming), 'Inference and insight', Philosophy and Phenomenological Research.
--and Peacocke, C. (eds.) (2000), New Essays on the A Priori (New York: Oxford University Press).
Bonjour, L. (1998), In Defense of Pure Reason (Cambridge: Cambridge University Press).
Burge, T. (1988), 'Individualism and self-knowledge', Journal of Philosophy, 85, 649-63.
Carroll, L. (1895), 'What the Tortoise said to Achilles', Mind, 4, 278-80.
Churchland, P. M. (1983), Matter and Consciousness (Cambridge, Mass.: MIT Press).
Dummett, M. (1991), The Logical Basis of Metaphysics (Cambridge, Mass.: Harvard University Press).
Field, H. (2000), 'Apriority as an evaluative notion', in Boghossian and Peacocke (2000).
Gibbard, A. (1990), Wise Choices, Apt Feelings (Cambridge, Mass.: Harvard University Press).
Haldane, J., and Wright, C. (eds.) (1993), Reality, Representation and Projection (Oxford: Oxford University Press).
Hale, B., and Wright, C. (eds.) (1997), A Companion to the Philosophy of Language (Oxford: Blackwell).
Kyburg, H. (1965), 'Comments on Salmon's "Inductive Evidence"', American Philosophical Quarterly, 2, 274-6.
Lehrer, K. (1990), The Theory of Knowledge (Boulder, Colo.: Westview Press).
Macmanus, D. (ed.) (forthcoming), Wittgenstein and Scepticism (London: Routledge).
Nagel, T. (1996), The Last Word (Oxford: Oxford University Press).
Peacocke, C. (1992), 'Sense and Justification', Mind, 101, 793-816.
--(1993), 'Proof and Truth', in Haldane and Wright (1993).
--(2000), 'Explaining the a priori', in Boghossian and Peacocke (2000).
Prior, A. N. (1960-1), 'The Runabout Inference Ticket', Analysis, 21, 38-9.
Quine, W. V. O. (1966), 'Truth by Convention'. Reprinted in The Ways of Paradox (New York: Random House).
Van Cleve, J. (1979), 'Foundationalism, epistemic principles, and the Cartesian circle', Philosophical Review, 88, 55-91.
--(1984), 'Reliability, justification and induction', Midwest Studies in Philosophy, 9, 555-68.
Wittgenstein, L. (1953), Philosophical Investigations (Oxford: Blackwell).
Wright, C. (forthcoming), 'On the Acquisition of Warrant by Inference', in Macmanus (forthcoming).
--(2000), 'Cogency and Question-Begging: Some Reflections on McKinsey's Paradox and Putnam's Proof', Philosophical Issues, 10, 140-63.
3

On Basic Logical Knowledge: Reflections on Paul Boghossian's 'How are Objective Epistemic Reasons Possible?'*

CRISPIN WRIGHT
•••
§1. Frege wrote that 'There is nothing more objective than the laws of arithmetic.'1 Acceptance of the objectivity of logic, mathematics, and of epistemic norms generally-norms determining when particular beliefs are justified, or knowledgeable, and which beliefs commit one to which others-used to be orthodoxy. Paul Boghossian is concerned to respond to its increasing disesteem among thinkers in the humanities and social sciences. 'The suspicion is widespread', he writes, 'that what counts as knowledge in one cultural, or broadly ideological, setting need not count as knowledge in another.'2 He takes it that, unimpressive though the characteristic statements of and arguments for this relativistic outlook may often be, they nevertheless draw attention to a genuine intellectual challenge. I agree. Nor is the seriousness of the challenge qualified by its contemporary connection with post-modernist or 'post-analytic' orientations. In fact it belongs squarely within the analytic tradition. It comes from the extension to epistemic norms of concerns that have been and continue to be widely debated with respect to norms of other

* Paul Boghossian and I have discussed the present and related issues over a number of years, and it is a pleasure to have the opportunity to record something of the cross-flow of our ideas in print. My paper here is a development of comments made on Boghossian's presentation of the same title at the Pacific Divisional meetings of the American Philosophical Association held at Albuquerque in March 2000. That exchange grew out of an earlier one, on 'Concepts and the A Priori', staged at the Epistemology and Naturalism Conference held at Stirling in May 1997 under the aegis of the Consciousness in the Natural World project. (A version of Boghossian's paper on that occasion has since been published as 'Knowledge of Logic' in Paul Boghossian and Christopher Peacocke (eds.), New Essays on the A Priori (Oxford: Clarendon Press 2000): 229-54.) For helpful criticisms and observations, I would like to record my thanks to participants in the discussions at Stirling and at Albuquerque, to Bob Hale who commented on an earlier draft of my present paper, and to Paul Boghossian.
1 Grundlagen §105.
2 This volume at p. 15.
kinds-par excellence, in ethics. (And the varieties of seriously considered and worked-out forms of anti-objectivism in ethics, of course, go well beyond anything that could usefully be described as 'relativist'.) Boghossian's main concern in his paper is with basic logical knowledge. His response to the challenge is a carefully constructed case that fundamental principles of logic, like modus ponens, allow of a certain kind of justification-'rule-circular' justification-which may, despite unpromising initial appearances, rightly be regarded as intellectually satisfying, provided certain philosophical mistakes and confusions are avoided. This is a fascinating suggestion, and most of what I want to say about it will bear on its credentials as an epistemological thesis. But before getting on to that, I want to flag an issue about how exactly it connects with Boghossian's ultimate target: the question of objectivity. If he is right, we can justify basic rules of inference by inferences involving those very rules. Yet there seems no reason why an anti-objectivist about a discourse should deny that its characteristic claims, even ones that are regarded as in some way fundamental, are justifiable: what she must deny is, rather, that such justifications can be given in purely objective terms (whatever that is taken to mean). True, one form of anti-objectivism about fundamental epistemic principles would be the view that they allow of no intellectually serious form of justification. That position will be defeated if Boghossian's argument succeeds. But it is not the only relevant form of anti-objectivism. An ethical relativist, for instance, should be undaunted if someone confronts her with perfectly sober, impressive-seeming justifications of fundamental ethical principles, but ones which are, so to say, ethically internal-which presuppose strong entrenched ethical commitments and established patterns of moral sentiment.
Rule-circular justifications of basic logical laws-if indeed they are possible-would presumably be similarly undismaying for a logical relativist. I am making the obvious point that there is a distinction between the epistemological question: Is some substantial form of justification in principle possible for a range of basic beliefs that we have-and if so, what is it?-and the metaphysical question: Are those beliefs capable of objective justification? To have a worked-out, positive response to the first need not amount to being in a position to give a positive answer to the second. Boghossian focuses on basic logical beliefs and, having targeted the second question, proceeds to devote most of his work to the first. So there is a question whether he ever succeeds in connecting with the metaphysical issue; and a prima-facie doubt about how his particular proposal-about rule-circularity-could possibly do so. I shall come back to this.
On Basic Logical Knowledge
§2. Not the least of the merits of Boghossian's discussion, however, is that it brings out the intimate connection between the epistemological and metaphysical issues. I myself would characterize this connection a little differently (though the differences may only be ones of emphasis). Many philosophers seem quite ready to suppose that justification of basic rules of inference is neither possible nor-they may go on airily to say-required. 'Justification has to come to an end somewhere.' But so long as we accept that it is an objective matter whether a given set of inference rules permit the derivation from true premisses only of true conclusions, this is a stance of questionable coherence. For in that case to grant that we cannot justify our basic inference rules but can merely go with what we find natural will invite what seems like an intolerable scepticism. It would be to say that, at the most fundamental level of our reasoning, we hold ourselves subject to an objective and rational constraint-that of truth-preservation-while not having, or at any rate not being able to produce the slightest reason for supposing that our basic inferential norms measure up to it. To resist this scepticism while granting the norms' unjustifiability would therefore appear to require some form of surgery-be it relativist or subjectivist-on the conception of 'the logical facts'. In effect, we would need to surrender the idea that the possible patterns of truth-value distribution among the statements which our basic inference rules allow us to link as premisses and conclusions are settled independently of those links. That is exactly what Wittgenstein is getting at when, in the Remarks on the Foundations of Mathematics, he endorses a conception of logic and mathematics as 'antecedent' to truth. It is here that I would locate the intimate relation between the epistemological and metaphysical issues.
It is true, as I stressed above, that even if a justification of our most fundamental epistemic principles-in particular, our basic rules of inference-can be provided, the spectre of relativism and other forms of anti-objectivism will not recede until the detail of that justification has survived a certain kind of scrutiny. But for the would-be objectivist, making out a feasible model of how our acceptance of those principles can rank as justified may still be indispensable work if the spectre is ever to recede, unless we are prepared for a pervasive and paralysing intellectual scepticism. Certainly, the question will still arise whether any possible justification of basic rules of inference could sustain a belief in their objectivity (and at that point we would finally have to give some serious thought to what we mean by that). But once justification is conceded to be impossible, we confront a dilemma. We must either defend against scepticism if what is at stake (the truth-preservingness of our basic rules) is regarded as an objective matter. Or we must find some way to provide an argued repudiation of that objectivity. In this way, the belief, or hope, that logic can rank as objective in some worthwhile sense does indeed demand a review of the possibilities for justification of our basic logical beliefs.
Crispin Wright
§3. Those possibilities can be divided into the inferential and the non-inferential. Boghossian is brisk with the latter. He sees two further sub-possibilities: a non-inferential justification for the belief that a rule of inference is truth-preserving may go via an analogy-no doubt very broad-with perception, or it may try to make out that such a belief is somehow justified by default-in effect, by its merely being held. The (quasi-)perceptual proposal has figured prominently in the thought of some important philosophers. Boghossian makes a brisk case that it is hopeless. In support of his impatience, it may be reflected that someone who is inclined to believe in the objectivity of some contested region of thought always has the option of postulating a special capacity of direct sensitivity to the relevant putative region of special fact. This move does nothing to support objectivism unless the claim that we have such a faculty is rendered appraisable, and that demands some sort of account of how the proposed faculty-in the present case, a quasi-perceptual faculty of logical intuition-goes to work on the relevant subject matter and is indeed conducive to beliefs which keep track of it. While that account is missing, to invoke such a faculty, and the attendant conception of the range of facts which constitute its special province, is simply to pay ourselves an empty compliment. But in the present case no one seems to know how to deliver such an account. A sympathizer with the (quasi-)perceptual proposal has a prima-facie interesting rejoinder, though. The fact is that not every a priori belief implicated in successful deductive inference-more specifically: every a priori belief whose falsity would defeat the claim that a particular inference which we accept is validly drawn-can plausibly (or, one would suppose, coherently) be conceived as admitting only of inferential justification. Justifying the rules involved is one thing.
Verifying that they are correctly applied in a given instance is another. To accomplish the latter we provisionally identify a putative conclusion with the upshot of, say, a modus ponens step and then check whether the given premisses configure a pattern suitable to serve as the basis for a modus ponens step to that particular conclusion. One kind of judgement which fully self-conscious implementation of rules of inference implicates will thus be judgements about logical form. But judgements of this kind-at least in the simplest case-are surely directly recognitional, rather than inferential. How does such recognition work? It would be a mistake to think of it as literally perceptual since such judgements may be actively involved and-as we should like to think-justified in cases where inference is carried out in the medium of pure thought, without any written or spoken physical representation, so that there is no literal object of perception. But basic judgements of logical form are a priori on any good definition of that notion that I can anticipate-they can be justified by pure reflection if any judgements
On Basic Logical Knowledge
can be. So it seems that we do have to make space for a category of basic, a priori, non-inferentially justified beliefs which are necessarily implicated in fully self-conscious inference. The strictures above notwithstanding, then, oughtn't we to take seriously the idea that the phenomenon might extend to beliefs about the validity of basic rules of inference too? Well, the main-and large-obstacle to that suggestion is that beliefs about logical form are particular: they are to the effect that this proposition, or configuration of propositions, has such-and-such a form. Whereas the belief that modus ponens is truth-preserving is the belief that every (possible) instance of it with true premisses has a true conclusion. This generality makes it additionally implausible to think of such rules as having a (quasi-)perceptual justification. One might (quasi-)perceive that a particular object of attention is thus-and-so-but how can one (quasi-)perceive that all objects of a certain kind are thus and so? Doesn't the generality of the content of the alleged (quasi-)perception just give any serious analogy with perception away?3 Still, if we dismiss the quasi-perceptual proposal on the ground of the generality of the knowledge involved, we had better do so with our eyes open. For the point will be bound to return to constrain the attempt at an inferential justification of basic rules of inference. Simply: if the type of justificatory inferences to be proposed are to be deductively compelling, and are to exhibit convincing grounds for knowledge of their conclusions, then the relevant additional generality of content will have somehow to be packed into their premisses. So those premisses will then, it is to be expected, present the same epistemological problem in their turn. In §10 below, I'll review the response to this which I believe Boghossian must make.

§4.
Should we assume, however, that inferential justification will be the only avenue left open if the (quasi-) perceptual model fails? Boghossian dismisses the option of 'default-justification' equally briskly. And he is surely right that this idea, too, is merely hand-waving unless work is done to explain how the situation can arise. How can it happen that a certain class of beliefs may rightly be regarded as justified just in virtue of their being held, without the need for any particular epistemic pedigree, and which
3 It may be countered that there is generality in the apprehension of logical form. In apprehending that a proposition is of a certain form, we do precisely apprehend a generality: one which quantifies over all tokens of the proposition in question. Generality of this kind, however, is not germane. If I perceive that the mug on my desk is blue, I likewise apprehend a generality: that all mugs like that are blue. But the generality apprehended in grasping the validity of a basic rule of inference is not of that (trivial) sort: if it were, one could not identify an instance of the rule except via a judgement about its validity (just as one could not verify that a mug was relevantly like that except via the judgement that it was blue). Whereas the whole point about knowledge of the validity of a rule of inference is that, in tandem with the identification of an instance, it grounds the judgement that the instance is valid.
exactly are the beliefs which are fitted to occupy this peculiar situation? The proposal needs to be developed to a point where it can be properly distinguished from dogma, complacency, and impatience. If our only datum is that we haven't got a clue how to make out a justification for a class of beliefs we'd like to be justified, 'default justification' is just a marker for wishful thinking. What can be done? I think there is one line which is worth some attention. This would have it that a belief is default-justified just in case any attempt to make out that it was unjustified would have to presuppose it. Consider, for example, my belief that I am capable of rational thought. Anything that I might do, for whatever reason, to try to make a case that I have no justification for this belief would have to involve some process of ratiocination. And whatever that process was, I oughtn't to attach any credibility to it unless I take it that I am capable of rational thought. So, in the sense proposed, I am default-justified in holding that I am capable of rational thought. I cannot consistently suppose that any doubt I might entertain about that is rationally grounded. One might well hold out some hope for a case that modus ponens too can be default-justified in this kind of way-that any case against it4 would somehow have to presuppose it.5 Pay attention, though, to the species of 'justification' which is delivered in this kind of case: it is not that I get a justification for thinking, for instance, that, as a matter of real fact, I am capable of rational thought. It is merely that I cannot, self-credibly as it were, take myself to have contrived a doubt about it. In parallel, this kind of default-justification of modus ponens would merely bring out that scepticism about it was self-undermining.
Why, though, should the fact that a rule is so deeply entrenched in our procedures of argument that anything we'd recognize as a case against it would have implicitly to rely on it-why should this fact be supposed to have any tendency to show its objective validity? Default-justification of this kind doesn't seem to be what a defender of the objectivity of logic should be looking for.

4 -or even for agnosticism about it-

5 It is, for instance, arguable that no one could coherently believe that they had a counterexample to modus ponens. For that would involve accepting instances both of P and of If P, then Q while doubting the corresponding instance of Q, whereas it is plausibly constitutive of a grasp of the conditional-and hence a necessary condition of possession of any belief configuring it-that one precisely not be inclined to doubt what immediately follows by modus ponens from others of one's beliefs. Likewise, any demonstration that modus ponens was unsound when used in tandem with other rules would perforce depend on some sort of proof-theory in which to conduct it; and it seems hardly credible that such a proof-theory could avoid reliance on conditionals and their usual associated rules of inference. These thoughts are pursued by Bob Hale in forthcoming work.

§5. Let us turn, then, to Boghossian's suggestions about the possibilities for inferential justification of basic rules of inference. Without loss of generality, we may stipulate that such a justification should consist in a
derivation terminating in an appropriate 'corresponding conditional' schematizing the rule in question. The evident prima-facie difficulty, exactly as Boghossian says, is that of circularity: inferential justification needs an inferential apparatus and, with sufficiently basic rules, there is little prospect of reasoning to a suitable schematic statement of them unless we can use those very rules en route. To illustrate, consider the kind of thing that it would come very naturally to say if someone-a very dull student, perhaps (or maybe a rather clever one)-really did ask for a justification of modus ponens. You'd probably say something like:

Look, a conditional statement is true just provided that if its antecedent is true, so is its consequent. Right? So suppose you're given that a certain statement is true, and that so is a certain conditional statement in which that statement features as the antecedent. Then it follows that the consequent is true. And that will hold no matter which statements you are concerned with. See?
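Made fully explicit, that informal train of thought might be schematized on the Lemmon-style pattern used elsewhere in this paper. The following is a reconstruction for illustration only, with the homophonic truth-conditions of the conditional entered as a Meaning Postulate; the rule-circularity shows at line (viii), where modus ponens is itself applied in the metalanguage:

```latex
\[
\begin{array}{llll}
1   & (\textrm{i})    & \text{`If }P\text{, then }Q\text{' is true iff, if `}P\text{' is true, `}Q\text{' is true} & \text{Meaning Postulate}\\
2   & (\textrm{ii})   & P\text{ and if }P\text{, then }Q                  & \text{Assumption}\\
2   & (\textrm{iii})  & \text{If }P\text{, then }Q                        & \text{(ii), \&-E}\\
2   & (\textrm{iv})   & \text{`If }P\text{, then }Q\text{' is true}       & \text{(iii), T-scheme}\\
1,2 & (\textrm{v})    & \text{If `}P\text{' is true, `}Q\text{' is true}  & \text{(i), (iv)}\\
2   & (\textrm{vi})   & P                                                 & \text{(ii), \&-E}\\
2   & (\textrm{vii})  & \text{`}P\text{' is true}                         & \text{(vi), T-scheme}\\
1,2 & (\textrm{viii}) & \text{`}Q\text{' is true}                         & \text{(v), (vii), Modus Ponens}\\
1,2 & (\textrm{ix})   & Q                                                 & \text{(viii), T-scheme}\\
1   & (\textrm{x})    & \text{If }P\text{ and if }P\text{, then }Q\text{, then }Q & \text{(ii), (ix), Conditional Proof}\\
\end{array}
\]
```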
A fully explicit representation of that train of thought would involve the use of modus ponens itself in the underlying logic. But the fact remains that some such is the natural thing to say. It wouldn't seem nearly so apposite to say 'Well, you just have to see that it's valid', or 'Well, it's just default-justified; that we believe it is enough to justify it'. This point-that the natural response to a request for a justification of modus ponens is to outline reasoning running along something like the indicated lines-is prima-facie supportive of Boghossian's proposed direction. Still, the problem of circularity seems acute. As Boghossian rightly stresses, there are in fact a number of distinct concerns under this heading. One, gestured at by the idea of 'question-begging', concerns the power of a rule-circular argument to induce justified conviction about its conclusion. One way of crystallizing this concern is to ask what prior attitude a recipient has to have to the rule in question if a rule-circular argument is to do that for her. Suppose, for instance, she starts out thinking that there's a doubt about the validity of the rule in question. Then she ought not to be-can hardly be blamed if she is not-persuaded by the argument, any more than by any argument which makes essential use of a suspect rule. Suppose on the other hand that she is already convinced of the rule's validity. Then first, she won't think she needs the argument; and second, if the question is rather the justification for that prior conviction, how can she get reassurance about that by using the very rule in question? If her original conviction is unjustified, won't that strip any argument which makes essential use of the rule of justificatory force, just as happens with any argument that makes essential use of an (undischarged and) unjustified premiss?
Finally, if a recipient starts out leaning neither one way nor the other but is open-minded about the validity of the rule, ought she not to be unpersuaded about the force of the rule-circular
argument too and so remain open-minded about its conclusion? How can one move from open-mindedness to a justified conclusion about something except by considerations which are independent of it? Yet doubt, acceptance, and open-mindedness would seem to exhaust the possible prior attitudes. So who can a rule-circular argument be for? We'll return to the question whether Boghossian can address this concern. But in any case, it's clear in advance that if it is a condition for achieving knowledge by inference that a thinker actually possess prior knowledge that all the rules involved are valid, then rule-circular arguments can never work to confer such knowledge. So the least that the friend of rule-circularity must do in order to provide a satisfactory response to the foregoing trilemma will be to make out that what I'll call the acquisition-condition-the minimum condition that must be met by the rules which mediate a particular inference if it is to subserve a thinker's coming to knowledge of the truth of its conclusion-amounts to something less than the existence of prior knowledge that those rules are valid. It must be possible to use a rule which we have no standing warrant to regard as valid to acquire warrant for a conclusion to which its use leads. Boghossian himself is crystal clear about this. A critical issue for his purposes, then, is how this possibility is to be made out. The other principal problem attending rule-circularity is that of 'Bad Company'-the fact that rule-circular 'justifications' are available for what we (take ourselves to) know are unsound rules of inference: the rules for Prior's connective tonk, for instance. In the Stirling symposium, I observed how a derivation of a canonical statement of tonk-introduction:

If P, then P tonk Q

might flow from a (purported) meaning postulate, homophonically characterizing the truth conditions of 'P tonk Q':

'P tonk Q' is true just provided that 'P' is true tonk 'Q' is true.
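The derivation in question might be reconstructed, for instance, as follows. This is my schematic sketch of the sort of reasoning at issue, not a quotation of Boghossian's own layout; the rule-circularity lies in the use of tonk-introduction in the metalanguage at line (iv):

```latex
\[
\begin{array}{llll}
1   & (\textrm{i})   & \text{`}P\text{ tonk }Q\text{' is true iff `}P\text{' is true tonk `}Q\text{' is true} & \text{Meaning Postulate}\\
2   & (\textrm{ii})  & P                                                   & \text{Assumption}\\
2   & (\textrm{iii}) & \text{`}P\text{' is true}                           & \text{(ii), T-scheme}\\
2   & (\textrm{iv})  & \text{`}P\text{' is true tonk `}Q\text{' is true}   & \text{(iii), tonk-Introduction}\\
1,2 & (\textrm{v})   & \text{`}P\text{ tonk }Q\text{' is true}             & \text{(i), (iv)}\\
1,2 & (\textrm{vi})  & P\text{ tonk }Q                                     & \text{(v), T-scheme}\\
1   & (\textrm{vii}) & \text{If }P\text{, then }P\text{ tonk }Q            & \text{(ii), (vi), Conditional Proof}\\
\end{array}
\]
```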
Boghossian rehearses this derivation,6 but it is worthwhile taking stock of the generality of the template which it illustrates. Starting, for instance, with a homophonic characterization of the truth-conditions of the conditional:

'If P, then Q' is true just provided that if 'P' is true, 'Q' is true,

we can just as easily advance to a rule-circular justification of any rule you like which takes the conditional as a premiss. For instance, the fallacy of Denying the Antecedent may be 'justified' as follows:

1    (i)     'If P, then Q' is true iff, if 'P' is true, 'Q' is true    Meaning Postulate
2    (ii)    Not-P and if P, then Q                                     Assumption
2    (iii)   If P, then Q                                               (ii), &-E
2    (iv)    'If P, then Q' is true                                     (iii), T-scheme
1,2  (v)     If 'P' is true, 'Q' is true                                (i), (iv)
2    (vi)    Not-P                                                      (ii), &-E
2    (vii)   Not: 'P' is true                                           (vi), T-scheme, logic
1,2  (viii)  Not: 'Q' is true                                           (v), (vii), Denying the Antecedent
1,2  (ix)    Not-Q                                                      (viii), T-scheme, logic
1    (x)     If not-P and if P, then Q, then not-Q                      (ii), (ix), Conditional Proof

6 This volume at p. 26.
The situation is thus that a rule-circular 'justification' would seem to be available for any rule whatever, whether it configures some specially defined (putative) concept (like tonk) or merely works with familiar ones.

§6. Boghossian's bold suggestion is that these two problems essentially admit of the same solution. That solution flows from his proposed account of the acquisition-condition. What-rule-circular justification, etc., apart-do we want to say about that condition? Two polar answers spring to mind. The simple internalist proposal is: in order to acquire knowledge by inference, the thinker in question must have warrant to suppose that the rules utilized are valid. The opposed-simple externalist-proposal says: it's enough for the rules to be valid-knowledge that they are so is not required. Boghossian rejects both. He suggests that the simple internalist proposal is at odds with intuitions we have about e.g. children's capacity to learn by inference and that a train of thought suggested by Lewis Carroll's famous 1895 Mind paper shows that proposal to be incoherent in any case. But the simple externalist proposal, Boghossian suggests, seems open to obvious intuitive counter-examples, illustrated by his Fermat's theorem case: if a rule of inference is valid but highly unobviously so-or even perhaps known by nobody to be so-a thinker who has no warrant for its validity cannot, we feel, use it to get knowledge of a proposition by inferring it from other known beliefs in accordance with that rule. If neither the simple internalist nor simple externalist proposals will do, what should we offer instead? In the version of his paper presented at the Albuquerque APA meetings, Boghossian in effect proposed a mix: each proposal is appropriate in some cases.
The simple externalist proposal-that the use of a valid rule (that is, inference in accord with it) is apt to confer warrant for a conclusion even in cases where the subject has no antecedently warranted belief in the validity of that rule-is correct provided the practice of (or the disposition to) inference in accordance with the rule is meaning-constituting: that is, provided some key expression featuring in its characteristic premisses or characteristic conclusion (or both) would not mean what it does were it not for the use of sentences containing
it being subject to the particular inferential discipline imposed by the rule. The simple internalist proposal-that the use of a rule (that is, inference in accord with it) is apt to transmit warrant only in cases where the subject has an antecedent warrant for belief in its validity-is acceptable otherwise. The restricted version of the externalist proposal is acceptable, Boghossian reasoned, because-if I understood him correctly-in the relevant special class of cases, where the inferential practice in question is meaning-constituting, the belief that the rule in question is valid could not so much as exist if the disposition to inference in accordance with it did not come first; so the idea that we might first enquire whether belief in the validity of the rule was warranted, and then on that basis decide to go in for inference of the type in question, or not, is incoherent.7 But the internalist proposal had better remain appropriate for other cases, where the question whether the rule in question is or is not warranted can be raised in advance of any disposition to practise in accordance with it. This response was then adapted to the problem of bad company. What is wrong, Boghossian proposed, with a rule-circular justification of one of the tonk-rules, say, is not its circularity but the fact that practice in accordance with such rules fails to constitute any meaning: a practice which allowed that 'A tonk B' may be inferred from either A or B individually, and that both A and B individually may be inferred from it, would establish no meaning for 'tonk'. By contrast, the practice of inference in accordance with modus ponens is part of a meaning-constituting practice: a practice which constitutes the meaning of 'if ... then ...'. And that, ultimately, is why we may in principle justify the belief that modus ponens is sound by a derivation which uses modus ponens in its course.8
7 This is a close relative of the defensive thoughts outlined in note 5 above and, as such, is not-it seems to me-really to Boghossian's larger purpose. As soon as we try to harness it to that purpose-addressing the issue not of our commitment to modus ponens but of its objective validity-we confront a dilemma: is the intelligibility of the question whether modus ponens is valid taken to imply that it is valid, or not? If not, then to suppose that we could not so much as understand the question unless we had the prior practice of inference in accordance with modus ponens carries no implications of its actual validity. On the other hand, if the very intelligibility of the question is taken somehow to imply the validity of the rule, then the issue becomes with what right our practice in accordance with modus ponens (and other rules for the conditional) is assumed to constitute an intelligible meaning for that question. (As will become clear in the next paragraph, it seems that Boghossian was thinking of the situation in terms of the second alternative.)

8 A reader may wonder how this thought would play in relation to the rule-circular derivation of Denying the Antecedent given at the end of §5. Boghossian would claim presumably that, just as the tonk-rules fail to establish any meaning for 'tonk', so Denying the Antecedent, taken as a conditional-elimination rule, fails, when teamed with Conditional Proof as the corresponding introduction rule, to establish any meaning for 'If ..., then ...'. In that case, both derivations may be rejected on the ground that the Meaning Postulates they respectively use as premisses actually have no content and so may not be used to support any conclusion at all. But this could not be the objection to the derivation for Denying the Antecedent in the scenario where a meaning is independently established for 'If ..., then ...'-for instance, by practice in accordance with modus ponens and Conditional Proof. Rather Boghossian's point would then be, I take it, that if a content is established for 'If ..., then ...' independently of any practice in accordance with Denying the Antecedent, then the latter is not a meaning-constituting rule, and the use of it in the bogus derivation thus has to answer to the simple internalist account of the acquisition-condition, and so will confer warrant on the conclusion only if we are already justified in regarding Denying the Antecedent as a valid rule. I confess, though, to some uncertainty about this.

§7. In my comments at Albuquerque, I sought to put some pressure on this mixed account of the acquisition-condition. Why exactly is the simple externalist account unacceptable? What's wrong with the idea that if I start off with a warranted set of premisses and validly infer some conclusion from them, then I thereby acquire-whether or not I know that the inference is valid-a warrant for the conclusion? We can expect an externalist to counter that this position will seem unacceptable only if one takes possession of a warrant to require being in a position reflectively to appreciate that one has one-that is, in effect, possession of an internalist warrant. For the externalist, such a construal is at best a narrow redefinition, at worst an important mistake about the nature of epistemic warrant. It is, familiarly, the distinction between warrant and reflectively appreciable warrant that lets the externalist say that classic sceptical arguments, even in their most seductive presentation, have no tendency to establish that we have no warrant for large classes of propositions which we routinely take ourselves to know. Rather, if the beliefs in question are formed by the exercise of what are in fact appropriate powers, by what are in fact appropriate methods, and in what are in fact appropriately conducive circumstances, that's enough to make them warranted, even if not always enough to put us in a position reflectively to appreciate-I'll henceforward say: to claim-the warrants which we thereby possess. I myself have no particular brief for this externalist line of response to scepticism in general, since it seems really rather obvious that it was at the legitimacy of claims that scepticism addressed its challenge in the first place; no interesting sceptical argument ever denied that we might as a matter of serendipitous fact be dispositionally reliable gatherers of actually true beliefs.

But the reminder of the distinction should cause some rethinking of the issues about the acquisition-condition. Is our interest in when warrant is transmitted to a conclusion, with knowledge the result, or in when a thinker may claim that it is? If we allow this contrast at all, then Fermat-type examples should motivate no objection to the simple externalist account as a response to the first. A thinker who habitually infers via what are in fact valid rules should be accorded warrants for her conclusions-whether or not she can support the rules or has any beliefs about them at all-in just the way in which, for the externalist, warrant accrues, ceteris paribus, to any beliefs which are formed by reliable mechanisms.
Given that he sustains the Fermat-example style of objection, I therefore concluded either that Boghossian didn't admit the contrast, holding that warrant is an essentially internalist notion-that a warrant that cannot be claimed is no warrant at all-or that his interest, anyway, was in what we can claim rather than in what is (externally) warranted. But either way, there seemed to be a consequential problem about how Boghossian's proposal, that it suffices for warrant transmission that the relevant rules encode a meaning-constituting practice, could be germane. Being in a position to claim warrant requires being in a position to make a reflectively appreciable case that one has authority for a certain belief. That is what one cannot do in the type of case illustrated by the Fermat example. But if, when a rule involved in an inference is valid but not meaning-constituting, it does not in general suffice to make such a reflectively appreciable case just to carry out the inference, what determines that the meaning-constituting case is different-that inference to a conclusion by meaning-constituting rules will, ceteris paribus, put one in a position to claim a warrant? Why should the, so to say, mere-external-fact that the rules are meaning-constituting make any reflectively appreciable difference? Boghossian's proposal was supposed to be motivated by the reflection that in the meaning-constituting case, the question of warrant for the belief in validity could not so much as intelligibly be raised unless the practice-or at least the inferential dispositions which give rise to it-were already in place. But how does that help exactly? That just means that to understand the issue being raised, we must already have certain inferential dispositions.
But that fact, as far as I can see, has no tendency to support-indeed seems to have no connection with-the idea that exercise of those dispositions on warranted premisses puts one in position to claim warrant for the conclusion. Thus I arrived at the thought quoted by Boghossian in his present paper:

In sum: Boghossian's reaction to the simple externalist account betrays an interest in reflectively accessible warrant-warrant that makes a phenomenologically certifiable impact, as it were. But he does not connect his own proposal with such impacts; and it is not clear how the connection might be made. If it cannot be, one might as well stick with simple externalism.
Boghossian now offers an interesting response to this objection. The response, in essence, is to further generalize the contrast between internalist and externalist warrant as we have so far understood it; and to generalize it in such a way that, rather than offering a local concession to externalism, as it were, the proposal, that inference in accord solely with meaning-constituting rules suffices-assuming suitable (and independent) warrant for the premisses-for the acquisition of warrant for a conclusion, emerges to the contrary as properly internalist (in spirit). One might suppose that, in order to make that suggestion good, it would be necessary to show that conclusions inferred in accordance with an unreflective
disposition to follow certain meaning-constituting rules would somehow bear a distinctive reflectively appreciable mark, shared with conclusions inferred by reflectively justified rules but missing in other cases of unreflective but valid inference. Thus the sort of case to be made would be, for instance, that unreflective inference by modus ponens-assuming that to be a meaning-constituting rule-would make a 'phenomenologically certifiable impact', of a kind characteristically missing in cases of unreflective inference by, say, transitivity of the conditional (assuming that, as a derived rule, transitivity is not meaning-constituting) or, perhaps, by modus tollens. But that looks an impossible case to make. And it is not what Boghossian does. Rather he asks, in effect, what virtue there is in seeking reflectively appreciable warrants and returns the answer that, by so doing, a thinker is enabled to form beliefs in a fully epistemically responsible fashion. That then opens the way to the thought that a belief is warranted just when it is formed in a fully responsible manner, with the seeking of reflectively appreciable warranting considerations potentially just one way of ensuring that a belief is so formed. Another way will be if the methods used to arrive at the belief are such that, whether or not they have any additional reflective certification, no thinker who uses them is open to a charge of irresponsibility. Since-the next thought is-no thinker who acquires a belief by inference in accordance with meaning-constituting rules is open to such a charge, it follows that such a belief is no less warranted than one arrived at by inference in accordance with reflectively certified rules. Thus the conception of warrant to which internalism, as standardly characterized, implicitly subscribes proves to be such that Boghossian's version of the acquisition-condition is perfectly in keeping with it.
Better characterized, internalism should be the view that what matters as far as warrant is concerned is not the reliability of the methods whereby a belief is formed but whether a thinker who relies on them is thereby properly responsible. If the method in question is inference in accordance with meaning-constituting rules, that condition-in Boghossian's view-is met. There is an overarching idea here whose attractiveness I want to acknowledge immediately. The notion is commonplace that ordinary actions may exhibit two quite different kinds of merit-merit, for want of a better terminology, of virtue and merit of utility. It seems only common sense that belief-formation should be subject to a similar distinction: that the fashion in which a subject arrives at a belief may be open both to appraisal concerning its epistemic virtue-involving consideration of factors, for instance, which the subject is aware of and to which she is properly held responsible for her response-and to appraisal concerning epistemic utility-involving consideration of factors she need not be aware of but which may impinge upon the correctness of beliefs so formed. In the case of ordinary action, both types of quality are genuinely meritorious, and there is no antecedent reason to think that either type of merit is somehow
reducible to the other. That might encourage the thought that there is a similar independence in the epistemic case, and that many 'internalists' and 'externalists' have been busy emphasizing one type of merit at the expense of the other when the truth is that they are equally valid but irreducible forms of doxastic pedigree. However that may be, Boghossian's internalism-generalizing thought may now be naturally viewed as an analogue of the idea that the virtue of an action need not depend solely upon the state of the conscience of the perpetrator: that a man who, as a result of his natural character, acts as bravely, or generously, as one who self-consciously weighs his choices, may act no less well, even if the consequences are the same-indeed even if they are disappointing (unmeritorious in point of utility) in a particular case. Likewise a thinker who, unreflectively but as a result of his natural rationality, forms beliefs in a fashion which would be ratified by critical reflection, may be regarded as forming beliefs that are no less warranted on that account-beliefs to which he is no less entitled-even if on a particular occasion they are false. It is important to see, however, that plausible and potentially useful as this overarching idea may be, it does not get us to exactly where Boghossian wants. Boghossian wants it to be the case that beliefs formed unreflectively-by meaning-constituting principles of inference-may share a virtue with fully reflectively justified beliefs: the virtue of responsible formation. And as such, they may be fully warranted.
However, it is one thing to grant, in accordance with the overarching idea, that unreflectively formed belief may have a type of merit-specifically, merit of responsibility (or anyway, as Boghossian sometimes says, lack of irresponsibility)-contrasted with those on which externalism places emphasis; but it is a further thesis that inference via meaning-constituting principles suffices for (lack of ir)responsibility; and yet another that (lack of ir)responsibility suffices for warrant. I think that, under pressure, these latter two suggestions prove somewhat fragile. Consider the last. Why should responsibility-more specifically, lack of irresponsibility-be deemed sufficient for warrant? Reflect that Boghossian's notion of 'meaning-constitution' is a strong one: a set of habitual, shared inferential dispositions may be what if anything gives meaning to a certain expression and yet may fail to fix any meaning at all. Such would be the situation of an unreflective inferential practice in accordance with the 'tonk' rules. In Boghossian's view, 'A tonk B', as 'disciplined' by such a practice, simply has no content-there is nothing for 'A tonk B', so disciplined, to mean. Nevertheless, if we can imagine a community who engage in such a practice-spared, perhaps, often enough from its more calamitous potential consequences by collateral contingencies (a Good Angel)-then I think there is a wholly intuitive sense in which particular such unreflective inferences could be regarded as epistemically responsible-or at least not irresponsible. After all, someone who inferred that way would simply be using the
On Basic Logical Knowledge
63
language as he had been taught it, following the example of his peers. An action cannot justly be regarded as irresponsibly performed if the agent is merely behaving as he and everyone else in his community has been trained to do, or merely following the example of those who are standardly taken to be competent, or guilty only of failing to undertake a more exacting scrutiny of that action than is standardly undertaken by the great and the good. Of course, the example of 'tonk' is rather far-fetched: only a community of morons, it may be felt, could slip into inferential practices of such systematic incoherence. But we can give a more plausible one. Imagine that Frege had invented the Begriffsschrift as a system of natural deduction, and that, rather than as an axiom, he had gone on to formulate the notorious Basic Law V of Grundgesetze as a pair of rules respectively for the inferential introduction and elimination of contexts containing a course-of-values operator; thus
Course-of-values-I: from (x)(Fx ↔ Gx), infer {x}Fx = {x}Gx

Course-of-values-E: from {x}Fx = {x}Gx, infer (x)(Fx ↔ Gx)
where 'Fx', 'Gx' are any open sentences in one argument expressible in a suitable higher-order language containing the course-of-values operator itself. These rules, like those for 'tonk', are-famously-unsound. But it took a clever man to find Russell's paradox. It might not have been noticed for many years, and generations of students might have been trained in what was erroneously taken to be the elegant and definitive foundational system that Frege had invented. If that had happened, there is the same clear intuitive sense in which their practices would not have been epistemically irresponsible. The immediate conclusion to draw from these examples is that there is a kind of responsibility-or rather, lack of irresponsibility-which does not per se suffice for warrant. Someone who in this sense responsibly but fallaciously reasons to a true conclusion is not entitled to take it-does not know-that it is true. Since the acquisition-condition is that condition on the rule mediating an inference whose satisfaction ensures that, ceteris paribus, one who draws that inference acquires knowledge, it follows that no satisfactory account of the acquisition-condition can proceed purely in terms of this notion of responsibility. But Boghossian seemed to be canvassing an account in terms of some notion of responsibility. If that is not to be the notion just gestured at, what is it? Boghossian's principal thesis, to be sure, is that an inference's being in accord with meaning-constituting rules suffices for it to meet the acquisition-condition. I am not at this point challenging that. The point is rather that he wanted to explain why that is so, and to do so in a broadly internalist
way; and the explanation proposed was that an inference's being in accordance with meaning-constituting rules suffices to deflect any charge of irresponsibility, which in turn would suffice, ceteris paribus, to ensure warrant for the conclusion. The foregoing considerations challenge the latter part of that. Absence of epistemic irresponsibility, ordinarily understood, in the way a belief is formed does not suffice for warrant. It is arguable moreover that meaning-constitution does not suffice for absence of irresponsibility. Consider another pair of natural deduction rules:
N-I: from (∃R)(F 1-1R G), infer Nx:Fx = Nx:Gx

N-E: from Nx:Fx = Nx:Gx, infer (∃R)(F 1-1R G)

where '(∃R)(F 1-1R G)' abbreviates the (higher-order definable) claim that there is a one-to-one correspondence between the Fs and the Gs. Set in a suitable higher-order logic, these two rules are proof-theoretically equivalent to Hume's Principle, that the number of Fs is the same as the number of Gs just in case there is a one-to-one correspondence between the Fs and the Gs, which, as is now well known, provides for derivations of all the basic laws of arithmetic. I and others have argued that Hume's Principle can serve as an implicit definition of the cardinality operator, 'N', and hence can serve as a foundation for arithmetic of much the kind that Frege aimed for.9 That position is tantamount to the view that the two rules, N-I and N-E, serve to constitute a suitable meaning for the operator 'N'. Critics have rejoined that the analogy in structure between Hume's Principle and Basic Law V-equivalently, between the N-rules and those for the course-of-values operator-casts doubt on their suitability to serve as a successful such implicit definition. These critics may be-indeed I believe they are-wrong. But they are at least owed an answer. The mere fact-if it is a fact-that the N-rules do indeed succeed in conferring a coherent meaning on 'N' cannot by itself bring it about that someone who laid them down and made them the foundation of her arithmetical practice would, in the face of the explicit concern about the analogy with Basic Law V, be open to no charge of epistemic irresponsibility. In sum then: the meaning-constituting character-on Boghossian's strong understanding of it, whereby meaning-constitution suffices for soundness-of a set of rules does not suffice for the responsibility of practice in accord with them; and the responsibility of a certain kind of inferential practice does not suffice for it to serve to transmit warrant.
If the foregoing is right, the direction in which it points is obvious: a satisfactory account of the acquisition-condition needs clauses of both kinds: it needs a clause to ensure the soundness of the rules of inference being followed-just as the externalist emphasizes-and it needs a clause to ensure that practice in accordance with those rules is open to no complaint of irresponsibility, just as Boghossian proposes. But if the issue of responsibility may indeed be addressed by reference to the kind of consideration gestured at above-by reference to what passes as good enough and normal by way of training and safeguards in a thinker's intellectual milieu-then the effect is that it now becomes unclear whether the notion of meaning-constitution has any essential part to play in the correct account. Why is it not enough just to require that the rules utilized be sound, and that their use be responsible (or not irresponsible)? Such a suggestion seems to me to be in keeping with the moral suggested by examples like Boghossian's Hide-and-Seek case.10 In that example, the thought that the rules she utilizes are meaning-constituting seems to play no part at all in our willingness to allow that the reasoning child acquires a warrant for her conclusion (that her friend is hiding behind some other tree). Perhaps the best account of what it is for a set of rules to be meaning-constituting would have the result that modus tollens-the main rule involved-is not even a meaning-constituting rule in the first place. I don't know. But that the child is manifesting an ability to learn by inference-and hence that her reasoning meets the acquisition-condition-seems in no way hostage to that issue. Rather it seems to suffice for so regarding her that her inference is indeed sound, and of a kind which comes quite naturally and early to an intelligent child, and which-unlike certain fallacies which can snare even the intelligent-we can conjure no reason to doubt. To avoid misunderstanding, I do not regard any of this as yet essentially opposed to what Boghossian wants to say.

9. For an overview of the issues, see Hale and Wright (2001).
We started out with the two polar-simple externalist and simple internalist-accounts of the acquisition-condition: respectively, that the relevant rules merely be sound, and that they be known to be sound. So far the internalist account is a straightforward strengthening of the externalist. Boghossian then suggested that the internalist proposal is best seen as an instance of something more general. To have knowledge of the soundness of the involved rules of inference is to be immune to any charge of irresponsibility in the use of them, and there are other ways of being so immune. One such is practice in accordance with one's basic rational nature, of a kind fostered and applauded by normal training and explanations. However, a condition based just on that would-unlike the simple internalist account-no longer provide a strengthening of the externalist condition. The Hide-and-Seek example prompts the thought that both conditions are needed: a satisfactory account of the acquisition-condition will need components to ensure both validity and responsibility.

10. This volume at p. 36.
§8. Let us take stock. The following appears to be a fair summary of the situation. (i) If Boghossian is right, a correct account of the acquisition-condition will include a clause which is responsive to the intuition which-in his view-underlies the simple internalist account. The simple internalist account has it that what is both necessary and sufficient is nothing less than that the thinker knows, or warrantedly takes it, that the inference in question is sound. The improved account, however, while it encompasses cases of such knowledge, is more general and need make no reference to the thinker's states of awareness; it is that she not be open to a charge of irresponsibility in making the inference in question. (ii) We have seen, however, that absence of irresponsibility, at least as naturally understood, does not suffice for the acquisition of warrant by inference except in cases where it is achieved precisely in the fashion which simple internalism envisages, viz. by explicit knowledge of the soundness of the relevant inference. In other cases, it is necessary to stipulate in addition that the inference actually be sound. In effect, then, the emergent account of the acquisition-condition involves two clauses: assuming that she has (the right kind of) warrant for her premisses, the thinker who draws a certain conclusion from them obtains warrant for that conclusion just if
(Condition a) her inference is epistemically responsible/not irresponsible; and
(Condition b) her inference is sound.
(iii) Boghossian sometimes seems to suppose-at least, he does not clearly differentiate his view from this-that if the pattern of inference involved is meaning-constituting, this suffices by itself to satisfy the acquisition-condition. But this is not so. While meaning constitution-interpreted as Boghossian intends-ensures soundness, it does not by itself ensure absence of irresponsibility (as witness the example of Hume's Principle). Thus both clauses are needed in some form. However, meaning constitution-whatever exactly it comes to-is arguably too narrow a condition to play the role of Condition b. The Hide-and-Seek example could as well proceed by reference to any pattern of inference which a thinker innocent of any explicitly metalogical conceptual repertoire might unremarkably make or allow on the basis of a normal natural logical intelligence. If, as is not implausible, the latter class of inferences-however exactly they might best be demarcated-coincides exactly with those which a thinker might make in the absence of any explicit knowledge of their validity without provoking any sense-'she's just lucky'-of a valid inference irresponsibly made, then Condition b may, in the presence of Condition a, take just the form above. There is a case, then, for various qualifications, or clarifications, of Boghossian's proposal. But, as I say, I do not think any of this seriously
compromises his intent. If basic logical inferences-the kind of inferences which an ordinary thinker could be expected to make or allow just in virtue of her 'natural logical intelligence'-are all inferences which can be made without irresponsibility, then the acquisition of warrant across these inferences requires only that they actually be sound. Modus ponens, for instance, is presumably such a basic pattern of inference. So it would follow that the way is open-at least in principle-for a rational thinker to acquire warrant to believe a conditional schematizing the modus ponens pattern by means of an inference involving that very rule, even though she lacks the (conceptual resources for the) meta-logical belief that the rule, or the particular inference, is sound-and hence has no knowledge of that. We can now anticipate Boghossian's best response to the trilemma on the question, Who is a rule-circular argument for?, put forward in §5. The trilemma was posed by what appeared to be the exhaustive possibilities of attitude-doubt, acceptance, and agnosticism-to a rule of inference featuring in a rule-circular derivation. The response should be that a rule so featuring can meet the acquisition-condition for a given thinker who is presented with that derivation without her taking any attitude to it.
§9. We should not, however, overestimate the force of the interim conclusion suggested by these considerations. That conclusion is that rule-circular learning-learning e.g. by an inference which makes use of modus ponens that modus ponens is valid-may be a possibility; that it is not to be dismissed out of hand just on the ground of rule-circularity. But how much does the acknowledgement of this possibility do for Boghossian's wider concern? Consider, as an analogue, the situation that arises under the hypothesis-far-fetched perhaps-that Robert Nozick's famous counterfactual account of knowledge is correct.11 That account, as is familiar, allows it to be a possibility that the thinker knows, e.g. that he has a hand, without being in a position to know, e.g. that he is not a brain-in-a-vat, notwithstanding the fact that his having a hand, and therefore being normally embodied, entails that he is indeed not a brain-in-a-vat. If we accept Nozick's account, we can grant the sceptic that we do indeed not know that we are not brains-in-vats-because the detail of our experience is entirely consistent with that possibility-while nevertheless retaining the ordinary knowledge that we have hands, along with a great deal of other ordinary knowledge besides which is normally taken to be threatened by any such concession. Or more accurately, it follows that it is a possibility that we have the latter knowledge while not knowing that we are not brains-in-vats. This relatively congenial possibility obtains when the supposition that I am a brain-in-a-vat involves a much more remote scenario-and hence, under the normal type of semantics for counterfactual conditionals, consideration of a

11. See ch. 3 of Nozick (1981).
much more remote range of possible worlds-than the supposition that I lack a hand. That will indeed be the case if the actual fact is that I have a hand, and (therefore) am not a brain-in-a-vat. But it won't be the case if the actual scenario is one in which I am a brain-in-a-vat. Thus, although Nozick's account entails that the inference the sceptic needs in order to do damage with our concession-that we do not know that we are not brains-in-vats-is not truth-preserving in general, it does not entail that it is not truth-preserving in a scenario which-we are (rashly) conceding-we don't know not to obtain, viz. envathood. So all we are left with is the congenial possibility. It could be that, despite not knowing that I am not a brain-in-a-vat, I do know that I have a hand. But unless I can produce a reason for thinking that I am indeed not a brain-in-a-vat, I have no reason for thinking that this congenial possibility obtains, so cannot claim that knowledge. The situation that has emerged after our review of Boghossian's discussion of the acquisition-condition is broadly analogous. It could be that some rule-circular derivation culminating in a statement of the characteristic conditional for modus ponens is indeed such that the use of modus ponens within it meets the acquisition-condition, so that one who follows the derivation through can acquire knowledge that that conditional is true, and hence that the pattern of inference it schematizes is sound. That's an epistemic possibility. But in order to get reason to think that it obtains, we need first to have reason to think that unreflective inference in accordance with modus ponens does indeed meet the acquisition-condition. That, on the account that has now emerged, has to be reason to think both that such inference is not open to a charge of irresponsibility and-here's the rub-that it is sound.
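For reference, the counterfactual account drawn on above is standardly stated via four conditions; this is the familiar formulation from the literature, not a quotation from the present text:

```latex
% Nozick's 'tracking' analysis: S knows that p just in case
\begin{enumerate}
  \item $p$ is true;
  \item $S$ believes that $p$;
  \item if $p$ were false, $S$ would not believe that $p$;
  \item if $p$ were true (in somewhat different circumstances),
        $S$ would still believe that $p$.
\end{enumerate}
```

It is condition (3) that drives the asymmetry exploited in the text: my belief that I have a hand tracks the facts across nearby worlds, while my belief that I am not a brain-in-a-vat would persist even in the envatted worlds where it is false.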
In other words: it is only in a context in which we take it that we know that inference by modus ponens is sound that we can claim to know that inference by modus ponens meets the acquisition-condition and hence, at least in principle, that a rule-circular model of its basic epistemology is a starter. I am not denying that a substantial result may still be in the offing. It may be a perfectly good and interesting project to address an issue of the form: on the assumption that we do have knowledge of a certain kind, how should we conceive ourselves as in principle empowered to get it? But that was not Boghossian's announced project. Recall that he set himself something more ambitious: to address the challenge of anti-objectivism about fundamental epistemic principles. It is true that in the concluding part of his paper he readily grants that a rule-circular derivation will be powerless to convince a reflective sceptic about the targeted rule. But the present point is not that Boghossian fails to do something which he explicitly had no pretension to do. It is rather that, as we have just seen, in order to be in a position to claim that a rule-circular acquisition of knowledge of the soundness of modus ponens is possible, we need independently to be in a position to
claim that unreflective uses of modus ponens can meet the acquisition-condition, and hence that they meet that condition's requirements-in particular, by Condition b, the requirement that modus ponens is sound! So we are not, on the revised account of the acquisition-condition, yet in a position to claim that someone could learn of the soundness of modus ponens by an appropriate rule-circular derivation-not unless we are already in a position to claim that the rule is indeed sound. This limitation impacts on the issues about objectivity. Any philosopher who supposed that the validity of modus ponens was somehow not an objective issue-the contention which Boghossian set himself to address-would have no difficulty in accepting the merely conditional contention: if modus ponens meets the acquisition-condition, and hence in particular is (objectively) valid, it may in principle be used to acquire warrant for its own characteristic conditional. Boghossian set himself to explain how in principle fundamental logical rules might be known, in a sense of knowledge which would imply objectivity. An argument which explains how the validity of fundamental logical rules could indeed be objectively known, but only on the assumption that we may take it that we do indeed know them to be valid, obviously has no tendency to establish the latter assumption or to show that the knowledge involved is in any relevant way objective.
§10. What, in any case, would the relevant kind of rule-circular derivations look like? In his present paper, Boghossian is inexplicit about that. There is a clue about the kind of thing he might have in mind from the respectful response he accords to the rule-circular derivation of tonk-introduction which he rehearses-which works, recall, with a premiss consisting in a (putative) homophonic meaning-characterization.
But this hint apart, nothing explicit is offered about the nature of the premisses from which he expects effective rule-circular derivations to proceed. In fact it's clear that there are just four cases we need to consider: the premisses for the derivation may be a priori and necessary, a priori and contingent, or a posteriori; or, finally, they may be effectively empty (because discharged in the course of the derivation). We can narrow the range of genuine options just by reviewing this simple taxonomy. It's crucial to be mindful here of the content of the knowledge which the derivation is intended to induce. It is not, or not merely, that the concluding statement-which we will continue to suppose to be a characteristic conditional for the inference rule in question-is true. To grasp the validity of a basic rule of inference is to grasp that it is unconditionally truth-preserving: that it may be relied upon to transmit truth even when the premisses are wildly speculative, even when their holding true would require dramatic changes in the world, including the obtaining of states and laws inconsistent with the actual physical order. Of course that's just another way of saying that what warrant is wanted for in the cases that interest us is warrant for
an absolute (metaphysical) necessity: to know that modus ponens is valid is to know that it transmits not merely actual truth but truth in an arbitrary possible-hypothetical-world. Once that is recognized, there is great difficulty in seeing how premisses in the second or third categories just distinguished-a priori contingents, and statements known a posteriori-could deliver as required. How could a thinker's knowledge that a certain principle holds of absolute necessity be grounded on her recognition that certain contingencies obtain, even if that latter recognition is itself a priori? To be sure, warrant to regard what is in fact a necessary proposition as true could be transmitted from warrant for a contingency-as when, for instance (assuming it is indeed valid), an instance of the Law of Excluded Middle is derived from one of its contingent disjuncts. But in order to recognize not just the truth but the necessity of the conclusion, the thinker will have somehow to acquire the knowledge that it would hold even if her contingent premiss(es) did not. How could that knowledge be acquired just on the basis of recognition of contingent truth(s)?12 Given that many philosophers now accept that at least some types of absolute necessity can be known only a posteriori, the suggestion that warrant for basic logical principles might be acquired on a foundation of a posteriori knowledge cannot confront exactly the difficulty just described. But this much at least is clear: while warrant may be transmitted by inference across valid rules, its epistemological character cannot thereby be altered. A proof that proceeds from premisses whose warrant is a posteriori can confer at best an a posteriori warrant for its conclusion.
Accordingly, if what we are seeking is-as it surely should be-an account of how basic logical rules can be known a priori, the third distinguished possibility, that the premisses for a suitable (if rule-circular) derivation might encode items of a posteriori knowledge, is likewise to be discounted. Neither of the foregoing objections engages the first possibility: that the premisses for an appropriate rule-circular derivation be necessities known a priori. The problem for a proposal along these lines would be to explain, rather, how such premisses may themselves be known. If they are themselves knowable only by inference, then there is an evident threat of infinite regress. If, on the other hand, they may be known non-inferentially, then there must after all be a species of a priori, non-inferential knowledge of necessity lying at the heart of (our best reconstruction of) our knowledge of

12. If this line of thought is compelling, it scotches immediately all prospect of a thinker's coming to learn of necessities by reasoning from meaning postulates-for instance, in the fashion earlier illustrated in the template for tonk-Introduction and Denying the Antecedent. For given that it is contingent what any expression means, any model of that broad kind must proceed from contingent premisses (whether or not they are plausibly knowable a priori) and thus can ground no more than knowledge of the truth (contrast: the necessity) of the conclusion.
basic principles of inference. And now it becomes uncertain whether there could be any good motivation for the rule-circular proposal: if some species of non-inferential a priori knowledge of necessity is going to be required in any case, why should it not be invoked straight away, with, e.g. the characteristic conditional for modus ponens as one possible content of such knowledge? In any case, the situation is enough to set Boghossian a dilemma: if his criticisms of non-inferential proposals are cogent, then presumably he can have no truck with the present model; but if they are not cogent, then the rule-circular proposal is badly motivated. We thus come to the only remaining possibility. If Boghossian holds that our knowledge of fundamental logical principles is a priori knowledge of necessities, and that no non-inferential model of such knowledge is acceptable, then-each of the three possibilities just reviewed proving unsatisfactory-he should propose that fundamental logical knowledge must be based on rule-circular derivations in which all premisses are discharged. So now we can see quite clearly what the fundamental architecture of our knowledge of modus ponens has to be. It has to be something close to this:

1      (i)    P                                        Assumption
2      (ii)   If P, then Q                             Assumption
1,2    (iii)  Q                                        (i), (ii) Modus ponens
1      (iv)   If (if P, then Q), then Q                (ii), (iii) Conditional Proof
       (v)    If P, then if (if P, then Q), then Q     (i), (iv) Conditional Proof
Here the final line schematizes the modus ponens pattern of inference; and-Boghossian should say-the discharge of all assumptions discloses it as a necessity. It is the sufficiency of such a stark derivation which, it seems to me, Boghossian must defend. The ultimate basis of our knowledge of modus ponens is that, using only rules which meet the acquisition-condition (viz. modus ponens itself, and Conditional Proof), one may reason one's way to a conclusion which encodes its characteristic form of transition, and thereby-all assumptions having been discharged-to a warranted conviction of the validity of that form. In this derivation, there is no dependency upon contingency, no input of knowledge a posteriori, and no reliance on non-inferential but a priori knowledge of necessities. One simply moves, using valid rules and in a fashion absolved from any charge of epistemic irresponsibility, to the target item of knowledge. I am not going to argue that this model is not defensible. But I do claim that it has two crucial limitations. First, that if it is the best that can be done, then-to stress-it does no more than to illustrate how, if there are objectively valid basic epistemic principles, they may-in one type of case: that of basic rules of logic-be known. The model offers nothing to establish the conviction that there are indeed such objectively valid basic principles.
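As a sanity check on the derivation (my addition, not part of the text), its final line can be verified mechanically; in Lean, `intro` plays the role of Conditional Proof and function application that of modus ponens:

```lean
-- The characteristic conditional for modus ponens, proved with
-- every assumption discharged, using only Conditional Proof
-- (`intro`) and modus ponens itself (application).
theorem mp_characteristic (P Q : Prop) : P → ((P → Q) → Q) := by
  intro hP        -- assume P                   (line (i))
  intro hPQ       -- assume If P, then Q        (line (ii))
  exact hPQ hP    -- modus ponens on (i), (ii)  (line (iii))
```

That the proof goes through with no axioms beyond the two rules mirrors the point in the text: the derivation relies on no contingency, no a posteriori input, and no non-inferential knowledge of necessity.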
Second, it does nothing to explain how I might get into a position to claim knowledge of modus ponens. Again, it just gives us a conditional: if the use of modus ponens and Conditional Proof made in the derivation meets the acquisition-condition, then modus ponens may indeed be recognized to be valid on the basis of that derivation. But, once again, in order to be in a position to claim the consequent of that conditional holds, we will first need to be in a position to claim that its antecedent does; and since the latter requires, in particular, that we be in a position to claim that modus ponens is valid, the derivation is powerless to confer what the old internalist Adam in each of us really wanted-a self-conscious entitlement to the claim that modus ponens is good.
§11. Boghossian's case for the rule-circularity model crucially depended on his dismantling of the primary obstacle to it: the original simple internalist account of the acquisition-condition. But how convincing, in detail, is his criticism of the simple internalist account? That criticism was constituted by two principal reflections. The first, provoked by cases like the Hide-and-Seek example,13 was the observation that simple internalism gets the extension of the targeted phenomenon wrong, that it overly restricts the range of cases in which subjects are (rightly) regarded as learning by inference. It cannot in general be a necessary condition for acquiring knowledge, or warranted belief, by inference that one knows, or warrantedly believes, that the particular inference is sound if creatures are rightly regarded as capable of so learning on occasions when, so far from knowing that the conclusions they draw are entailed by antecedent items of their knowledge, they are not even capable of believing that they are (since lacking the conceptual repertoire necessary for that belief). It is one thing to have a working grasp of the ordinary logical particles-'if, then', 'not', 'and', 'all', 'some', ...
etc.-and quite another to possess a vocabulary for the general concepts of form and logical consequence involved in beliefs about the validity of particular rules or of particular instances of them. So there is an easy recipe for constructing cases, like the Hide-and-Seek example, which present counter-examples to the simple internalist account of the acquisition-condition. However, there is an issue which Boghossian passes over and which requires to be negotiated before he can appeal to such counter-examples to support the idea of rule-circular justifications. That appeal makes an assumption which stands in need of justification: that if a rule is sometimes at the service of, as it were, blind warrant transmission-as in the Hide-and-Seek and kindred examples-then it continues to be so in contexts where users do have the additional conceptual resources to have it as an object of
13. Chrysippus' Dog provides another.
intellectual contemplation and to consider the question of its validity. Since rule-circular justifications will culminate in a schematic representation of the rule in question, it goes with the territory that someone who is potentially to acquire a warrant by means of such a justification always will have the relevant additional conceptual sophistication. But it is by no means obvious that, contrary to Boghossian's implicit assumption, increased sophistication does not 'up the ante' for warrant acquisition. It is not implausible that standards of warrant may be, to a degree, sophistication-relative (of course, they are subject to relativities of other kinds), so that what is good enough for a child who lacks the conceptual repertoire to confront the issue of the validity of a rule she unthinkingly infers by may not be good enough for a more self-conscious reasoner. Consider Alice, a pre-school child perhaps just beginning to learn to read, whose mother very occasionally-say one morning a month-leaves her with a babysitter but is otherwise a constant presence during her waking day. Because the absences are infrequent, we would not regard it as unreasonable if Alice expects her mother's company on any particular morning. But we might take a different view of her older sister, who has a grasp of the calendar and is in a position to realize that the mother's absences coincide with every second Tuesday in the month (her Bridge morning). In that case it becomes intuitive that the mere infrequency of the absences is no longer warranting per se-that it ought to occur to the older girl to consider whether there is any such pattern in the absences which would make them firmly predictable. And of course an adult who formed expectations about others' behaviour on purely statistical grounds, without any consideration of the kinds of regularity there might be in occurrences which went against the statistical trends, would be regarded as foolish.
More needs to be said, but it is plausible that in a wide class of cases increased sophistication raises the standards of epistemic responsibility. So much is certainly predictable in situations where, for whatever reason, a high degree of epistemic responsibility is at a premium. For in order to exercise the maximum epistemic responsibility in determining whether or not a putative warrant for P is good enough, you ought to scrutinize as many ways as you practicably can in which the relevant evidence might be misleading or flawed. So if, as usual, 'ought' implies 'can', then the less one can actually do in that direction, the less will be required of one. In particular, one is not, presumably, required to look into possibilities of which one currently has no concept. Of course, this could be true in a wide class of cases and yet rule-circular derivations still be possible. What seems to be solid is, first, that increased conceptual sophistication or knowledgeability sometimes makes the acquisition of warrant for a particular proposition a more demanding exercise than it would otherwise be; and, second, that one who is to receive a rule-circular justification must have meta-logical conceptual resources in excess of those of the kind of unreflective reasoner typified by the child in the Hide-and-Seek example. That is not yet to say that sophistication of the latter sort suffices so to 'up the ante' in any particular case that rule-circular justifications are altogether pre-empted. But the issue is a lacuna in Boghossian's treatment; and his neglect of it means that examples like the Hide-and-Seek case cannot, without further discussion, carry their intended significance.

§12. But perhaps they were only meant to be suggestive. Boghossian's second - and, it would seem, principal - ground for dismissing the simple internalist account of the acquisition-condition is provided by his interpretation of Lewis Carroll's enigmatic 'What the Tortoise said to Achilles'.14 The character of his point here has changed significantly in the course of our exchanges. In his presentation at Albuquerque, he advanced the following prima-facie impressive argument. If the simple internalist's account of the acquisition-condition is accepted, then in order warrantedly to infer a certain conclusion, I first need warrant for the belief that it is entailed by the particular premisses from which I propose to infer it. However, since I do not have potentially infinitely many independent warrants for beliefs about the validity of specific inferences, the latter warrant is presumably going to have to be based on something more general. For instance, if the inference in question is an application of modus ponens, then warrant to regard it as valid will have to be based on the general (presumed warranted) belief that modus ponens is valid. But 'based on' here must presumably mean: inferred from.
So now, in order warrantedly to complete the original inference, I must first get warrant for the conclusion of another inference, in which the general validity of modus ponens is a premiss and the conclusion is that the original inference is indeed valid. But simple internalism will grant me a warrant for this conclusion only if I in turn have a prior warrant for taking the second inference to be valid. This too is a specific inference, a belief in whose validity must again presumably be based on a more general belief concerning the validity of the pattern of inference it exemplifies. And once again, 'based on' presumably means: inferred from... If this argument is compelling, we should conclude that simple internalism can offer no coherent account of the acquisition of warrant by inference. In insisting that before a thinker can achieve warrant for a conclusion, she must - whatever other conditions have to be met - acquire warrant to believe that the specific inference is valid, simple internalism transforms the acquisition of warrant by inference into an inferential 'supertask': a task whose accomplishment requires the anterior completion of an infinity of discrete cognitive feats.15

I think this argument is compelling just provided it is granted a key assumption - an assumption which simple internalism's salvation, if it has one, must involve rejecting. That assumption is that warrant to regard specific inferences as valid must be based on - that is, inferred from - warrant to regard as valid the general patterns of inference which they exemplify. Now, we already know that simple internalism cannot regard the latter, general beliefs as warranted by inference. For to do so is to be committed to a model involving either rule-circularity - if the same rule is used in the derivation - or infinite regress - if that is never to happen. Neither
15 Here is Boghossian's original expression of the argument: According to [simple internalism], one can only be justified in inferring a given conclusion from a given premiss according to a given rule R, if one knows, or justifiably believes, that R has a particular logical property, say that it is truth-preserving. Unless you know that R is valid, you cannot use R to derive justified conclusions. So, for example, no one simply reasoning from the particular proposition p and the particular proposition 'if p then q' to the proposition q could ever be justified in drawing the conclusion q. In addition, the thinker would have to know that his premisses logically entail his conclusion. Clearly, however, the intention cannot be that a thinker know this separately for every inference that he is tempted to draw. That would require a thinker to know an impossible number of things. Rather, the idea is that the thinker has the general knowledge that the rule implicated in his reasoning is valid, that any inference of the form MPP is truth-preserving. Let's assume, then, for the purposes of argument that our thinker S has this knowledge: he knows that any argument of the form MPP is truth-preserving. How does this help him to be justified in justifiably [sic] inferring the particular proposition q from the particular premisses p and 'if p then q'? The answer might seem quite simple. The general knowledge about modus ponens allows him to appreciate that the particular inference he is drawing is itself truth-preserving and so justified. But how does this general knowledge help him to appreciate this? Is there any alternative but to picture our thinker as reasoning, however tacitly, as follows:

(a) any inference of the form MPP is truth-preserving.
(b) this inference is of the form MPP.
(c) therefore this inference is truth-preserving.

Any such reasoning, however, would itself involve a step using modus ponens. Here, however, we are on the verge of launching an unstoppable regress.
If the unsupported modus ponens inference could not generate justified belief all by itself, how will backing it up with general knowledge of the validity of modus ponens help? Bringing any such knowledge to bear on the justifiability of the inference would itself require justified use of the very same sort of inference whose justifiability the general knowledge was supposed to secure. What this Lewis Carroll-inspired argument shows, it seems to me, is that at some point it must be possible to use a rule in reasoning in order to arrive at a justified conclusion, without this use needing to be supported by some knowledge about the rules that one is relying on. It must be possible simply to move between thoughts in a way that generates justified belief, without this move being grounded in the thinker's justified belief about the rule used in the reasoning.
alternative is consistent with the simple internalist account of the acquisition-condition. Simple internalism is therefore committed to a non-inferential account of the epistemology of general basic logical knowledge. It must hold, for instance, that we apprehend the validity of modus ponens directly, by a faculty of rational insight, without inferential dependence on anything behind it. I have already expressed some sympathy with Boghossian's scepticism about this idea, but I do not think that scepticism is based on anything more substantial than understandable frustration with its obdurate unclarity. It's obvious enough that not all a priori knowledge - at least not if it spreads as widely as traditionally conceived - is inferentially based, even if Boghossian has done a surprising amount to provide house-room for the idea that basic logical knowledge might be. One way or another, a philosophically satisfying account of the a priori is going to have to account for perhaps a variety of non-inferential subspecies. Most philosophers, before Boghossian's contribution, would have thought it very unlikely that there could be any coherent inferential model of basic logical knowledge. We should not be too dismissive, pari passu, of the prospects of non-inferential models, even if nothing terribly satisfactory has so far been offered in that direction. It may understandably be replied that all this is beside the point: that the above version of Lewis Carroll's argument can take in its stride whatever suggestion may be made about the epistemology of general beliefs about logical validity since - the whole point is - more is needed on the simple internalist account than warrant for such general beliefs. Warrant is needed, in addition, for beliefs concerning the validity of particular, specific inferences, and we have no model of how these beliefs can be warranted except by inference in turn from the relevant general beliefs.
But that is enough to allow the argument to get a grip, no matter what the epistemology of those attendant general beliefs. Quite right. But now the remedy for the simple internalist should seem obvious. She must extend whatever account she proposes of general basic logical knowledge so that it applies directly to beliefs about the validity of specific inferences. If some faculty of rational insight allows me to grasp, without inferential mediation, that any instance of modus ponens is valid, then the simple internalist must insist that the same faculty should be credited with the power to enable me to recognize directly, without inferential mediation, or any basis in more general beliefs, that this particular inference (which happens to be of the form of modus ponens) is valid. Once the simple internalist concedes that all non-inferential a priori knowledge is essentially general, she will be ensnared in the above regress. The remedy must be to lay it down as a constraint that any non-inferentialist account of basic logical knowledge must be such as to embrace the particular case directly.
But simple internalism's troubles are not yet over. In the version of the Lewis Carroll argument offered in his paper in this volume, Boghossian effects a crucial change. It is worth quoting the relevant passage in full: According to [simple internalism], one can only be justified in inferring a given conclusion from a given premiss according to a given rule R, if one knows that R has a particular logical property, say that it is truth-preserving. So, for example, no one simply reasoning from the particular proposition p and the particular proposition 'if p, then q' to the proposition q could ever be justified in drawing the conclusion q; in addition, the thinker would have to know that his premisses necessitate his conclusion. Let us suppose that the thinker does know this, whether this be through some act of rational insight or otherwise. How should we represent this knowledge? We could try:

(1) Necessarily: p → ((p → q) → q)

Some may feel it more appropriate to represent it meta-logically, thus:

(2) p, p → q logically imply q

The question is: However the knowledge in question is represented, how does it help justify the thinker in drawing the conclusion q from the premisses with which he began? The answer might seem quite simple. Consider (1). Doesn't knowledge of (1) allow him to appreciate that the proposition that q follows logically from the premisses, and so that the inference to q is truth-preserving and so justified? In a sense, the answer is obviously 'Yes', knowledge of (1) does enable an appreciation of just that fact. But it doesn't do so automatically, but only via a transition, a transition, moreover, that is of a piece with the very sort of transition it is attempting to justify.

(1) p → ((p → q) → q)
(2) p
(3) (p → q) → q
(4) p → q
(5) Therefore, q

As is transparent, any such reasoning would itself involve at least one step in accord with modus ponens. What about representing the knowledge in question as in (2)? The problem recurs. To know that p and p → q logically imply q is just to know that if p and p → q are true, then q must be true. Once more, there is an easy transition from this knowledge to the knowledge that q must be true, given that p is true and that p → q is true. But the facility of this transition should not obscure the fact that it is there and that it is of the same kind as the transition that it is attempting to shore up. If, therefore, we insist that the original inference from p and p → q to q was unjustified unless supported by the propositional knowledge represented either by (1) or by (2), then we commit ourselves to launching an unstoppable regress. Bringing any such knowledge to bear on the justifiability of the inference would itself require justified use of the very same sort of inference whose justifiability the general knowledge was supposed to secure. What this Lewis Carroll-inspired argument shows, it seems to me, is that at some point it must be possible to use a rule in reasoning in order to arrive at a justified conclusion, without this use needing to be supported by some knowledge about the rule that one is relying on. It must be possible simply to move between thoughts in a way that generates justified belief, without this movement being grounded in the thinker's justified belief about the rule used in the reasoning.16
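Boghossian's point - that a modus ponens step remains to be taken whether the rule is merely applied or additionally cited as a premiss - can be made vivid in a modern proof assistant. The following sketch is my own illustration, not anything in the text: in the first proof the rule is simply applied; in the second, adding (1) as a further hypothesis does not remove the need to apply it (indeed it is now applied twice).

```lean
-- Modus ponens functioning as a rule: the step is made, not cited as a premiss.
example (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp

-- Adding (1), p → ((p → q) → q), as a further premiss does not remove the
-- need to apply the rule: the proof below applies it twice, once to obtain
-- (p → q) → q from h1 and hp, and once more to obtain q from that and hpq.
example (p q : Prop) (hp : p) (hpq : p → q)
    (h1 : p → ((p → q) → q)) : q := (h1 hp) hpq
```

The second proof exhibits exactly the transition Boghossian describes: the conditional (1) only yields q via a further application of the very rule it was meant to underwrite.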
Now the dialectical situation has been significantly altered. The question is no longer how the simple internalist can account for the acquisition of knowledge of the validity of specific inferences, as opposed to general rules. She is being granted that. Rather, the problem now concerns how even such specific knowledge is supposed to justify making the inference in question. This is a new problem. Boghossian is asking how, even granted knowledge that P, that if P then Q, and the collateral knowledge that those two premisses indeed entail Q, a thinker can proceed warrantedly to conclude Q, once we realize - as Boghossian takes it - that such collateral knowledge can bear on the relevant issue only as a premiss for a further inference, whose justification will then require inferential deployment of a yet further collateral belief in turn... Let's go carefully. Suppose a thinker, Hero, knows (1) that P and (2) that if P, then Q. According to simple internalism, in order warrantedly to infer Q, Hero also needs to know (3) that her premisses, P and if P, then Q, jointly entail Q. Boghossian's question is: how precisely is this knowledge, (3), supposed to bear on her warrant to infer Q? He is not denying, of course, that it does so bear. His claim is rather that its bearing can only be conceived as inferential: that once (3) is regarded as needed at all, it is impossible to understand how it can help to justify the proposed inference except as a premiss for a further inference. It provides that the truth of (1) and (2) suffices for that of Q - provides, in effect, a further conditional: If both P and if P, then Q, then Q - but the only way to use that provision to justify the conclusion of Q, so Boghossian is contending, is via a new inference to Q from an enlarged pool of premisses containing it and the original premisses (1) and (2).
And now the simple internalist is committed to holding that warrant for this inference in turn depends upon warrant for the collateral belief that its premisses jointly entail its conclusion, and the regress is started. There is scope for discussion whether this regress is clearly vicious. The issue is difficult, and I reserve one train of thought about it for the footnote
16 This volume, pp. 36-7.
below.17 We should note, though, that Boghossian develops the regress, vicious or not, in a more specific fashion than he strictly needs, and thereby opens himself to a prima-facie compelling simple internalist response. His basic idea is to show that simple internalism is somehow committed to the absurdity that an infinity of warranted information is needed before one has a sufficient basis for the move to any conclusion. But, it may be countered, he engineers that appearance only by making no distinction between the idea that, in order to be justified in making a specific inference, Hero needs warrant for the belief that it is valid and the idea that he needs warrant for

17 The regressive thought is that if the information (3) is needed to justify the original inference at all, then - since that justification itself proceeds by inference - more information, so the simple internalist must concede, will be needed to underwrite the inference involved in the justification; and since that underwriting in turn will be inferential, yet more information will be needed to justify it... and so there is no end to the collateral information necessary to justify any inference. One doubt whether the regress is vicious turns on the reflection that all the successive items of information involved in any particular case are consequences of the first. In the above example, for instance, the successively required items of collateral information, represented as conditionals, look like this:
(3) P → ((P → Q) → Q)
(4) P → ((P → Q) → ((P → ((P → Q) → Q)) → Q))
(5) P → ((P → Q) → ((P → ((P → Q) → Q)) → ((P → ((P → Q) → ((P → ((P → Q) → Q)) → Q))) → Q)))
and so on. Since each nth one of these differs from its predecessor merely by substitution, for the latter's right-most occurrence of 'Q', of a formula of the form '(n-1) → Q' - itself, naturally, entailed by Q - it may seem that Hero doesn't need to be a terribly long-sighted logician to conclude that he can obtain - and therefore, in effect, already has - all these items of information just provided he has the first, and that he is therefore in a position to justify each of the inferences involved in the regress. Which is accordingly harmless. It may be countered that, in order to claim the successive items of information of this kind, Hero will once again have to be in a position to justify inferences to those items of information as conclusions. That will once again call for items of warranting collateral information, this time concerning entailments between earlier and later items in the above series of conditionals. But Hero can reply again that his knowledge of each of the relevant entailments - (3) ⇒ (4), (4) ⇒ (5), etc. - may be obtained by inference from the general reflection about the structure of their antecedents and consequents just outlined; and the obtainability of this knowledge is something he can foresee in advance. Of course, it takes inference to move from that general reflection to particular cases. And the internalist will have to regard the warrant for such inferences as once again depending on collateral information that those inferences are valid. But the reply will be that the requisite items of collateral information - each to the effect that a statement of one of the above entailments, (n) ⇒ (n+1), is itself entailed by an appropriate statement of the general reflection - can be recursively corralled by a single act of intellectual insight.

In short, the merchant of regress charges that the simple internalist cannot explain how any of the relevant inferences in one of these series is justified, since no end of collateral information is presupposed in every case; and the internalist responds that Hero can access the needed information without limit - not indeed by successive plodding inferences but by a single insight that each of them goes through; that he can know that he can do so; and hence that he is fully entitled to make each of the inferences concerned. This situation obviously needs further attention if it is to become clear whether there is a winner.
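For readers who want to see the substitution rule at work, the series can be generated and checked mechanically. The following sketch is my own illustration (the representation and function names are not from the text): it builds each conditional from its predecessor by replacing the right-most occurrence of Q with '(previous conditional) → Q', and confirms by truth-tables that every member of the series is a truth-functional tautology - each item of collateral information is, as the internalist's reply requires, obtainable from logic alone.

```python
from itertools import product

# Formulas are either an atom ('P' or 'Q') or a pair ('->', antecedent, consequent).
P, Q = 'P', 'Q'

def imp(a, b):
    return ('->', a, b)

def subst_rightmost_q(f, replacement):
    """Replace the right-most occurrence of Q in f.
    In these right-nested conditionals the right-most Q is the final
    consequent, so it suffices to recurse down the consequent."""
    if f == Q:
        return replacement
    _, a, b = f
    return imp(a, subst_rightmost_q(b, replacement))

def evaluate(f, env):
    if isinstance(f, str):
        return env[f]
    _, a, b = f
    return (not evaluate(a, env)) or evaluate(b, env)

def is_tautology(f):
    return all(evaluate(f, {'P': vP, 'Q': vQ})
               for vP, vQ in product([False, True], repeat=2))

# (3): P -> ((P -> Q) -> Q); each successor replaces the right-most Q
# of its predecessor with (predecessor -> Q).
c = imp(P, imp(imp(P, Q), Q))
for n in range(3, 8):          # check (3), (4), (5), (6), (7)
    assert is_tautology(c)
    c = subst_rightmost_q(c, imp(c, Q))
```

Running the loop without an assertion error shows that the regress, whatever its epistemological status, generates only logical truths.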
a further premiss to the effect that the inference is valid. And it may be contended that the necessity for such a distinction is exactly the main lesson which Carroll's parable teaches. If a further premiss were needed - if P together with If P, then Q weren't enough to ensure Q without augmentation by: If P and if P, then Q, then Q - then after the augmentation, the question of sufficiency would have to arise again, and a regress would indeed be launched. But a further premiss would be needed - the simple internalist may insist - only if the original argument was enthymematic: only if its listed premisses didn't strictly entail the conclusion. And that is not the situation we are concerned with - nor in any case where the inference in question is known to be valid. What - the simple internalist ought to say - the additional information is needed for is not to get a sufficient set of premisses for the inference in question but rather to get a sufficient basis for knowing that the inference in question is good (i.e. that what is in fact a sufficient such set of premisses is so). Indeed, Lewis Carroll's insight, she may continue, should be viewed simply as being that, when it comes to justifying an inference, a clear distinction must be made between information which is supposed to ensure that the conclusion is true (one's premisses) and information needed to entitle one to draw it (that it does really follow from those premisses). Both kinds of information are indispensable. But if - like Boghossian, or Carroll's Tortoise - one assimilates the second kind to the first, it will seem as if one has none of the second - with inferential paralysis the result. I think this observation is well made. But, as I said above, its force in reply to Boghossian depends upon an inessential feature of his argument.
What any argument to his purpose must assume is that, in order for the additional information - that a given inference is valid - to contribute towards the justification of making that inference, it will have to be deployed among the premisses for an inference. What Boghossian does - following the Tortoise's example - is to deploy it among the premisses for an inference to the same - the original - conclusion. Yet his challenge would have had no less force if it had been merely that, in order to do anything with the collateral information that simple internalism claims to be necessary to justify any inference, Hero will have to make some inference or other from that information as a premiss. One other way in which that might be so would be if it were supposed that knowledge that the inference from P and if P, then Q to Q is valid justifies making that inference not by sustaining, in combination with the other two premisses, another inference to Q but by sustaining, in combination with other premisses, an inference to: the inference to Q is justified. For instance, Hero might reason:

(i) I know that the inference from P and if P, then Q to Q is valid
(ii) If an inference is valid, then one who knows its premisses and knows of its validity is justified in inferring its conclusion
(iii) I know that P and that if P, then Q.
Therefore
(iv) I am justified in inferring Q.

If this were the simple internalist model, it would be no less open to Boghossian's difficulty. For now, in order to have a warrant to suppose that the original inference - to Q - is justified, Hero is represented as having to negotiate this new inference, from (i), (ii), and (iii) to (iv); and this she will be warranted in doing - according to the model - only if she knows that it is valid and deploys that knowledge, in the manner just illustrated, to conclude that she is justified in inferring (iv). But that manner of deployment will involve negotiating a yet further inference:

(i)' I know that the inference from (i), (ii), and (iii) to (iv) is valid
(ii)' If an inference is valid, then one who knows its premisses and knows of its validity is justified in inferring its conclusion
(iii)' I know that (i), (ii), and (iii).
Therefore
(iv)' I am justified in inferring (iv)

... and so on. Again there may be scope for discussion whether the regress is vicious. But it should be clear that the only clean way for simple internalism to defuse all concern of this kind must be to deny Boghossian's leading assumption: that the role of knowledge of validity in justifying an inference must be explicated in terms of some inference from that very knowledge. The manner in which Achilles' knowledge that the inference from P and if P, then Q to Q is valid provides a warrant for his making that inference is to be explained neither in terms of its provision of an additional premiss for that inference nor in terms of its provision of a premiss for some other collateral inference which somehow delivers a warrant for making the first. Rather, the simple internalist should insist, it is not inferential at all. This, at last, is the crux of the issue. Can the required position be made intelligible? Boghossian does not argue that it cannot.
He merely asserts to the contrary, as we saw, that 'Bringing any such knowledge to bear on the justifiability of the inference would itself require justified use of the very same sort of inference whose justifiability [that] knowledge was supposed to secure.' The larger question, however, will prove to be whether it can somehow be argued that simple internalism can make nothing of the notion of non-inferential warrant in general: warrant, that is, conferred just by a thinker's appreciation that circumstances obtain sufficient to justify her holding a particular belief. To explain. One could ask, rather in the manner of Boghossian, how such appreciation could have any bearing on the question of warrant; and - at least for a moment - there would be something like the same temptation to answer by making play with a kind of inferential routine. That answer would be that when I recognize that the obtaining
of conditions C confers on me a warrant to accept P, it is because I accept a correct epistemic norm along the lines:

A thinker who knows that conditions C obtain (and meets circumstantial conditions D) is warranted in accepting P.

And this acceptance helps get me to recognition of a warrant for P because - the temptation is to think - it subserves an inference like:

(i) I am a thinker who knows that conditions C obtain (and meets circumstantial conditions D)
(ii) A thinker who knows that conditions C obtain (and meets circumstantial conditions D) is warranted in accepting P.
Therefore
(iii) I am warranted in accepting P.

Obviously, though, this cannot be right all across the board. If warrant in general is to be reflectively appreciable warrant - as a generalized simple internalism will hold - its appreciation cannot always be the product of an inference from acknowledged epistemic principles and recognized antecedent conditions. For that model, on pain of obvious regress, can provide no explanation of the acquisition of warrant for the claim that the relevant antecedent conditions obtain. At some point, simple internalism has to countenance a range of principled warrants - warrants acquired in a fashion which respects certain epistemic principles - whose acquisition involves no inference from those principles but proceeds directly from the cognitive intake of the thinker. So much is implicit in any generalized internalism about warrant which wants to make space for the notion that some warrants are non-inferential. And making such space doesn't appear to be optional. I do not claim that we can confidently exclude the possibility that a generalized internalism for which all warrant must be reflectively appreciable is indeed in difficulty over the very notion of non-inferential warrant. Maybe it is to that ironic conclusion that this dialectic eventually leads. But for the time being, it is clear how the simple internalist must reply to Boghossian.
To staunch her view against all threat of Carrollian regress, she must insist that recognition of the validity of a specific inference whose premisses are known provides a warrant to accept its conclusion not by providing additional information from which the truth, or warrantedness, of the conclusion may be inferred, but in a direct manner - which she is in any case committed to postulating and explaining by the need to give a coherent account of non-inferential warrant in general. In effect, and perhaps paradoxically, her view must be that warrants acquired by inference are, in a way, a subspecies of non-inferential warrant: that an appreciation that a conclusion follows from warranted premisses confers, when it does, a
warrant for an acceptance of that conclusion in no less direct a fashion than that in which a visual appreciation of the colour of the sky confers a warrant for the belief that it is blue.

§13. We can thus see the shape of an alternative position that emerges, if not unscathed, then at any rate so far undefeated by Boghossian's discussion. It will be a simple internalism, qualified by whatever degree of concession needs to be made to Hide-and-Seek-type examples, but insisting that in the general run of cases, when a thinker has the resources to confront the question whether a particular inference she is proposing is valid, nothing less than warrant to believe that it is valid will suffice to justify the inference. In addition, the view will accept a commitment to a non-inferentialist epistemology of basic logical knowledge both of general rules and, at least in sufficiently simple cases, of the validity of particular inferences. And it will insist, finally and crucially, that the role played by the latter type of knowledge in warranting the drawing of specific conclusions is unmediated by further inference; rather, when warrants are bestowed by inference, an appreciation of the validity of the inference (and of one's knowledge of the premisses) plays a role in warranting the conclusion that is merely a special case of the role played by the appreciation of warrant-conferring circumstances in non-inferential cases in general. So: that's the outline of a possible position. I myself have no particular investment in it. I claim for it only that it is still in play. In a famous footnote, Kant wrote:

It still remains a scandal to philosophy ... that the existence of things outside of us ... must be accepted merely on faith, and that, if anyone thinks good to doubt their existence, we are unable to counter his doubts by any satisfactory proof.18
It is, if anything, a yet greater 'scandal' that we have so far acquired so little understanding of the basic epistemological architecture of logical inference. Notwithstanding the limitations which I have tried to bring out, I think Boghossian's subtle and probing paper teaches us of a possibility which we might otherwise have been likely to overlook. That is no small achievement. But I also think the more traditional kind of view just sketched-more traditional in its foundationalist resonances and the role it accords to some form of direct 'rational insight'-survives his discussion, albeit perhaps in a usefully sharpened perspective concerning its commitments and obligations. More work is needed. For the time being we will have our best chance of quieting this other scandal-if indeed it can be quieted-if we keep all possibilities in view.
REFERENCES

Carroll, L. (1895), 'What the Tortoise said to Achilles', Mind, 4, 278-80.
Hale, B., and Wright, C. (2001), The Reason's Proper Study (Oxford: Clarendon Press).
Kant, I. (1787/1929), The Critique of Pure Reason, 2nd edn., trans. N. Kemp Smith (London: Macmillan).
Nozick, R. (1981), Philosophical Explanations (Oxford: Clarendon Press).
4

Practical Reasoning*

JOHN BROOME
Aristotle took practical reasoning to be reasoning that concludes in an action. But an action-at least a physical one-requires more than reasoning ability; it requires physical ability too. Intending to act is as close to acting as reasoning alone can get us, so we should take practical reasoning to be reasoning that concludes in an intention. Sections 1 and 2 of this paper argue that there is such a thing as genuine practical reasoning, concluding in an intention. It can be correct, valid reasoning, and Section 2 explains how. Section 3 deals with an incidental complication that is caused by a special feature of the concept of intention. Sections 4 and 5 then explore the normativity of practical reasoning. They argue that, although practical reasoning concludes in an intention, it gives the reasoner no reason to have that intention. This paper considers only one type of practical reasoning, namely instrumental reasoning. Moreover, up to the end of Section 5 it considers only instrumental reasoning that proceeds from an end to a means that the reasoner believes is necessary. This one special case is enough to demonstrate that genuine practical reasoning exists. But reasoning of this special type is rare, so the remainder of the paper investigates how successfully the particular conclusions of Sections 1-5 can be extended. It considers instrumental reasoning when the means is not believed to be necessary. Sections 6 and 7

* This paper was written while I was a visiting fellow at the Swedish Collegium for Advanced Study in the Social Sciences. I am extremely grateful to the Collegium for its generous support and hospitality. During the paper's long gestation, I have learnt a great deal about practical reasoning from many people. Some sent me long and helpful written comments, some spent time talking to me, and some simply made inspired remarks. An incomplete list is: Lars Bergstrom, Rudiger Bittner, Ruth Chang, Garrett Cullity, Jonathan Dancy, Sven Danielsson, Stephen Darwall, Jamie Dreier, Christoph Fehige, Berys Gaut, Daniel Hausman, Jane Heal, Kent Hurtig, Nadeem Hussein, Christoph Lumer, Tito Magri, Alan Millar, Adam Morton, Jan Odelstad, Derek Parfit, Ingmar Persson, Philip Pettit, Martin Putnam, Christian Piller, Wlodek Rabinowicz, John Skorupski and Howard Sobel.
John Broome
86
examine and reject the idea that this type of practical reasoning depends on normative reasoning. Sections 8, 9, and 10 examine and reject the idea that decision theory provides a good account of it. After those negative arguments, Section 11 looks for a correct account of instrumental reasoning from an end to a means that is not believed to be necessary.
1. INTENTION REASONING
You might reason like this:

I am going to buy a boat (1a)
and
For me to buy a boat, a necessary means is to borrow money (1b)
so
I shall borrow money. (1c)

In this piece of reasoning, I mean (1a) to express an intention of yours to buy a boat, rather than a belief that you are going to buy a boat. I mean (1b) to express a belief of yours, and I mean (1c) to express a decision you make. Think of this as a process of reasoning you might actually run through in your head. You might do so when you get your bank statement. You intend to buy a boat, you now form the belief that a necessary means of doing so is to borrow money, and then you go through the reasoning set out in (1). It takes you from two of your existing states of mind, an intention and a belief, to a new state of mind, a new intention. To form an intention by reasoning is to make a decision. So this piece of reasoning concludes in a decision. I shall call reasoning that concludes with the forming of an intention 'intention reasoning'. Intention reasoning is practical reasoning; it gets as close to action as reasoning can. Your reasoning process is a particular type of practical reasoning. It is instrumental reasoning, which means it is concerned with taking an appropriate means to an end. This paper considers instrumental reasoning only. There are other sorts of practical reasoning too, but if we want to understand practical reasoning, it is a good idea to start with instrumental reasoning, because it is less controversial than other sorts. Your reasoning is a very special type of instrumental reasoning: it is reasoning from an end to a means that you believe to be necessary. Later in this paper I shall examine instrumental reasoning more generally, starting in Section 6. All this raises the question of whether intention reasoning like (1) is truly reasoning at all. We could call almost any process of thought 'reasoning',
but I shall use this term only for correct reasoning. (I shall reserve the term 'valid' for the content of reasoning, and use 'correct' as the corresponding term applied to the process of reasoning.) Intuitively, (1) expresses correct reasoning, but we need an explanation of why. Section 2 offers one.
2. THE CORRECTNESS OF INTENTION REASONING AND BELIEF REASONING
In explaining why (1) expresses correct reasoning, I shall make a few assumptions. I shall assume that intentions and beliefs are propositional attitudes. That is to say, they are states of mind that have contents, and their contents are propositions. I shall assume your name is 'Chris', and I shall assume that the proposition that Chris will buy a boat is the same as the proposition that you, Chris, would express with the sentence 'I am going to buy a boat'. So the content of your intention expressed in (1a) is the proposition that Chris will buy a boat. Idiomatically, we generally describe a person's intentions using an infinitive rather than a noun clause. For instance, we say, 'You intend tomorrow to be a restful day', rather than, 'You intend that tomorrow will be a restful day.' But I take these two sentences to have the same meaning. In this context, an infinitive and a 'that' clause are alternative ways of expressing a proposition. A sentence expresses a proposition. When you say a sentence to yourself or out loud, you often also express a particular attitude to the proposition that the sentence expresses. Sometimes the nature of this attitude is indicated by an inflexion or mood within the sentence, but sometimes not. In English, a first-person, future-tense, indicative sentence can be used to express either an intention or a belief. In saying 'I am going to buy a boat' you may express either an intention of buying a boat or a belief that you are going to buy a boat. Sometimes, but not always, the difference is idiomatically indicated by subtle inflections involving 'shall', 'will', and 'going to'. I assumed you use (1a) and (1c) to express intentions. So, writing 'I' for 'you intend that' and 'B' for 'you believe that' (both operators on propositions), your reasoning in (1) can be described like this:

I(Chris will buy a boat)
and
B(For Chris to buy a boat, a necessary means is for Chris to borrow money)
so
I(Chris will borrow money). (2)
This is a description of your reasoning, not an inference. From the fact that you intend to buy a boat, and the fact that you believe a necessary
means of doing so is to borrow money, we cannot infer that you intend to borrow money. You may not have that intention, for instance if you are irrational.¹

Compare this piece of intention reasoning with theoretical reasoning or, as I shall call it, 'belief reasoning'. All reasoning, conceived as a process, starts from existing states of mind and concludes in a new state of mind. By 'belief reasoning', I mean reasoning that concludes in a belief. Here is an example of belief reasoning that you might go through:

B(Chris will buy a boat) (3a)
and
B(For Chris to buy a boat, a necessary means is for Chris to borrow money) (3b)
so
B(Chris will borrow money). (3c)
(To distinguish the beliefs in (3) from the intentions in (2), imagine the transactions will be made by your attorney, against your will.) Like (2), (3) describes a process of reasoning. It is not an inference; from (3a) and (3b), we cannot infer (3c). Even if you have the beliefs (3a) and (3b), you might not have the belief (3c), for instance if you are irrational. The content of the reasoning process (3) is:

Chris will buy a boat (4a)
and
For Chris to buy a boat, a necessary means is for Chris to borrow money (4b)
so
Chris will borrow money. (4c)
No doubt you will actually express these propositions to yourself in the first person:

I shall buy a boat
and
For me to buy a boat, a necessary means is to borrow money
so
I shall borrow money.

¹ G. H. von Wright (1978) claims that statements of the form (2) do actually constitute a valid inference. I understand the temptation to think so: if you believe borrowing money is necessary for your buying a boat, but you do not intend to borrow money, it is tempting to think you cannot really intend to buy a boat. But if that were so, there would be no room for practical reasoning; there would be no opportunity for forming an intention by reasoning like (1). If you had the intention expressed in (1a) and the belief expressed in (1b), you would necessarily, even without reasoning, have the intention expressed in (1c). Because people sometimes fail to reason properly, (2) cannot be an inference.
Whether expressed in the first or the third person, the syllogism in (4) constitutes a valid inference. (4b) is stronger than it needs to be to make the inference valid; I have given it a special modality for a reason I shall explain in Section 3. A material conditional would have done. Still, (4) as it stands is certainly valid. The three propositions in (4) stand in a particular relation to each other, a relation such that, if the first two are true, so is the third. This makes the inference valid, which in turn makes the process of reasoning correct. It is correct to proceed from your first two beliefs to the third. The reasoning is correct because its content is a valid inference. The validity of this content plays a part in other sorts of reasoning besides (3), because propositions do not need to be believed for them to play a part in reasoning. For example, the same content might feature in hypothetical reasoning, where you do not believe (4a) or (4b), but are working out what would be true if they were true. The same validity also plays a part in intention reasoning. Intention reasoning (2) has the same content as belief reasoning (3). This content, expressed in the third person, is the valid inference (4). It turns out that the validity of (4) makes (2) correct just as it makes (3) correct. The difference between (2) and (3) is not in their content but in the attitude you take towards their content. For instance, in the belief reasoning (3) your attitude towards the proposition (4a) is to take it as true, whereas in the intention reasoning (2) your attitude is to be set to make this proposition true. In (3) you take (4a) and (4b) as true. Because (4) is a valid inference, if (4a) and (4b) are true, (4c) must also be true. So you cannot rationally take (4a) and (4b) as true without taking (4c) as true. This is why (3) is correct belief reasoning. In (2) you take (4b) as true, and are set to make (4a) true.
Because (4) is a valid inference, if (4a) and (4b) are true, (4c) must also be true. So you cannot rationally be set to make (4a) true, and take (4b) as true, without being set to make (4c) true. That is why (2) is correct intention reasoning. Both (2) and (3) appropriately follow the transmission of truth through the valid inference (4). (2) follows it in a truth-making way and (3) in a truth-taking way. Even if David Hume (1978, bk. 2, pt. 3, sect. 3) was right that reasoning is concerned only with truth, he should still have recognized that reasoning can transmit the truth-making attitude as well as the truth-taking attitude. It can transmit intention as well as belief, so reasoning can be practical. Intention reasoning and belief reasoning both follow truth. Their logic is the same, because belief and intention are both attitudes to propositions rather than features of propositions themselves, and the logic belongs to the propositions.² The attitudes are not part of the content of the reasoning.

To be sure, some propositions are about your attitudes. An example is the proposition that you intend to buy a boat. You may have attitudes of intention or belief towards these propositions too, and they can figure in reasoning. This is a correct piece of belief reasoning, for example:

B(Chris intends to buy a boat)
and
B(If Chris intends to buy a boat, Chris intends to borrow money)
so
B(Chris intends to borrow money).

(In formal statements throughout this paper, 'if' stands for material implication.) No attitudes appear within the content of this reasoning, only propositions about your attitudes. Peter Geach's well-known objection to noncognitivism in ethics (Geach 1965) is no objection to my account of practical reasoning.

² My views about intention and practical reasoning are similar to Bruce Aune's (1977, particularly 155-8). Aune recognizes that intention reasoning and belief reasoning can share the same logic. But he seems not to have recognized that this is because they share the same content. As a result, he does not recognize that belief reasoning and intention reasoning are both concerned with truth, and both track truth in their own way. Instead, he assigns a special sort of valuation to intentions, different from truth and falsity. My views also resemble Richard Hare's in some respects. In several of the papers reprinted in his Practical Inferences (Hare 1971), Hare argues that imperatives share the same logic as indicatives, even though they are expressed in different moods. However, Hare's views and mine differ greatly. I am not concerned with imperatives. Moreover, Hare embeds imperatives within sentences, and that exposes him to the problem raised by Peter Geach (1965).

3. RESTRICTIONS ON THE NOTION OF INTENTION

If you believe the premisses of a valid inference, then if you reason correctly you will believe the conclusion. It would be nice if I could say in parallel that, if you intend some of the premisses of a valid inference, and believe the others, then if you reason correctly you will intend the conclusion. But in fact this is not so; the parallel between belief reasoning and intention reasoning is not complete. To see why, notice first that you may believe the conclusion of the inference is true anyway, without your intending it. You may intend to buy a boat, and believe a necessary means is to borrow money, but not intend to borrow money because you believe you have already done so. Consequently, you see no need to intend to borrow money now. Let us extend the notion of 'being set to make true' to cover propositions that you believe are true anyway, without your making them true. In this example, let us say that you do indeed set yourself to make it true that you borrow money. You happen to have an easy time of it, because you believe nothing is required of you to make it true.
Granted this extension, then whenever you are set to make true some of the premisses of a valid inference, correct reasoning will bring you to set yourself to make the conclusion true. So with the extended notion of 'being set to make true' in place of intention, we have a good parallel with belief reasoning. However, intention reasoning is not parallel, and we can now see why not. It is because our notion of intention does not coincide with the notion of being set to make true, especially now we have extended the latter notion. The notion of intention is narrower than the notion of being set to make true. Several restrictions apply to the former but not the latter. The first is that intending does not cover propositions that you believe are true anyway, without your making them true. We have extended 'being set to make true' to cover these propositions, but 'intend' does not cover them. The second restriction appears in this example. Consider the putative reasoning:

I(Chris will buy a boat)
and
B(If Chris will buy a boat, Chris will find new friends)
so
I(Chris will find new friends).

The content of this reasoning is a valid inference, but the reasoning itself is not correct. You need not intend to find new friends, just because you intend to buy a boat and recognize that this will be the consequence. You may simply take it as a side effect. Certainly, you must adopt the truth-making attitude towards it; in fulfilling your intention of buying a boat you must make this proposition true. But you need not intend it. You need not intend whatever follows from an end that you intend. That is why I formulated my original example with the strong modality 'necessary means' in the major premiss, rather than simply a material conditional.
Although you need not intend whatever follows from an end that you intend, you must intend whatever you believe is a necessary means to an end that you intend, unless (the first restriction) you believe it is true anyway, without your intending it. The third restriction is more controversial. It arises from something Frances Kamm (2000) calls 'triple effect'. Consider this putative reasoning:

I(Chris will have more fun)
and
B(For Chris to have more fun, a necessary means is for Chris to find new friends)
so
I(Chris will find new friends).
This contains the 'necessary means' modality. Nevertheless, Kamm argues it may not be correct reasoning. Suppose you independently intend to buy a boat, and believe that finding new friends will be a side effect of doing so. In that case, Kamm thinks you need not separately intend to find new friends as a means of having more fun. If Kamm is right, this third restriction is a variant of the first. You need not intend the proposition that you will find new friends, because you believe that proposition is true 'anyway', whether or not you intend it. It is true without your intending it, because it is a side effect of a different intention. In summary, not all putative intention reasoning whose content is a valid inference is correct. It must satisfy some further constraints. I incorporated the second constraint in my example, by using the 'necessary means' modality. The first and third constraints require that you should not believe the conclusion of the inference is true anyway, independently of your intending it. Whenever the constraints are satisfied, intention reasoning is genuinely correct reasoning, provided its content is a valid inference. It is a clear paradigm of practical reasoning.
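The constraints of this section can be gathered into a single schema. The symbolization below is my own summary of the surrounding text, not Broome's notation:

```latex
% E: the end; M: the means; I, B as in the text.
% Intention reasoning of the form
\[
I(E),\quad B(M \text{ is a necessary means to } E)
\;\leadsto\; I(M)
\]
% is correct provided that:
% (i)   its content is a valid inference;
% (ii)  the major premiss carries the 'necessary means' modality,
%       not a mere material conditional (the second restriction);
% (iii) you do not believe M is true anyway, independently of your
%       intending it (the first and third restrictions).
```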
4. REASONING IS NOT REASON-GIVING
Because intention reasoning is reasoning, we may say it is normatively guided; precisely what this means will appear by the end of this section. But intention reasoning is normative in no other way. Its content is not normative; it is not about what you ought to do or have a reason to do. (I use the term 'a reason' for a pro tanto reason. If you have a reason to do something, that means you ought to do it unless you also have a contrary reason not to.) Furthermore, intention reasoning is not ought-giving nor even reason-giving; that is what I shall argue in this section and the next. In my example, intention reasoning takes you from your intention of buying a boat and your belief that borrowing money is the only means of doing so, to an intention to borrow money. But it does not determine that you ought to borrow money, nor even that you have a reason to borrow money. I could equally well say that the intention and belief on which the reasoning is premissed are neither ought-giving nor reason-giving. Your intention of buying a boat, and your belief that to do so you must borrow money, together give you no reason to borrow money. Nor do they make it the case that you ought to borrow money. That is what I shall argue. Reasoning in general is neither ought-giving nor reason-giving. Once again, it is easiest to see this by looking at the more familiar example of belief reasoning. Belief reasoning is neither ought-giving nor reason-giving or, to put it another way, beliefs are neither ought-giving nor reason-giving. Suppose you believe some proposition p from which q can be inferred by a
valid inference. It does not follow that you ought to believe q, nor that you have a reason to believe q. This section defends this claim. Section 5 returns to intention reasoning. My claim has nothing to do with the complexity of the inference. Throughout, I shall assume that the inference from p to q is immediate and obvious. From one point of view, a defence scarcely seems needed, because the conclusion is surely obvious. Suppose, for instance, that you ought not to believe p, though you do believe it. Then obviously it may not be the case that you ought to believe q, nor that you have a reason to believe it. Moreover, this conclusion can be supported by a simple argument. The proposition p itself follows from p by an immediate and obvious inference. But from the fact that you believe p it plainly cannot follow that you ought to believe p, or have a reason to believe p. Beliefs do not justify themselves. So it cannot be a general principle that if you believe p you ought to believe its immediate and obvious consequences, nor that you have a reason to do so. Still, however obvious this may be, there is a plausible contrary thought. If you believe p, then surely in some sense or other that gives you a reason to believe its consequence q, when q is different from p itself. No doubt, we would not say that believing p gives you a reason to believe p itself, because you do not need a reason for that; you already believe it. But if you are going to believe a consequence of p that is distinct from p, you do need a reason for that, and surely believing p gives you one, in some sense or other. So two plausible views conflict. The truth lies with the first, but it is easy to explain why the second seems attractive. The truth is that a particular relation holds between your believing p and your believing q: one normatively requires the other, as I shall put it.
It is not the case that believing p gives you a reason to believe q, but it is easy to mistake the relation of giving a reason to for the relation of normatively requiring. In slightly formal notation, the truth is that

Bp requires Bq, (5)

where 'B' still stands for 'you believe that', and 'requires' stands for 'normatively requires you to see to it that'. I have examined the notion of normative requirement more thoroughly in Broome (1999b).³ Here I shall mention only one essential feature: (5) implies

O(Bp ⊃ Bq), (6)

where 'O' stands for 'you ought to see to it that'. But (5) does not imply

Bp ⊃ RBq, (7)

where 'R' stands for 'you have a reason to see to it that'. (5) attaches normativity to the relation between believing p and believing q, not to believing q itself. It does not say literally that if you believe p you have a reason to believe q. On the other hand, (7) attaches normativity to the consequent rather than the relation. This means the consequent in (7) can be detached by modus ponens: from Bp and (7), we can infer RBq. If you believe p, then (7) says you have a reason to believe q. (5) does not allow detachment of that sort.

It is easy to confuse a reason and a normative requirement because they both involve a weakening of the notion of 'ought', and it is easy to muddle the two sorts of weakening. A reason is a weakened sort of ought; it is weakened by being made pro tanto. A normative requirement is also a weakened sort of ought; it is weakened by being made relative. The difference between them can be described like this: a normative requirement is strict but relative; a reason is slack but absolute. A normative requirement is relative in the sense that it is a relation between two propositions. It is the truth of the first (such as your believing p) that requires you to see to the truth of the second (such as your believing q), and the requirement cannot be detached from its antecedent. But a normative requirement is strict because it is strictly a requirement: if you do not satisfy it, you fail in something that is required of you. (6) expresses this strictness. It says you ought to see to it that if you believe p you believe q. So if you believe p but not q, you fail to see to something you ought to see to. On the other hand, a reason is not relative, but it is slack in that it is only pro tanto. If you do not do what you have a reason to do, you may not have failed in any way; you may have performed exactly as you should have. You may have had a better reason not to do this thing, and correctly followed the better reason.

³ To avert confusion: my notion of normative requirement is not what Chisholm (1978) calls requirement.
The relation between believing p and believing q, when q follows from p by an immediate and obvious inference, is plainly strict. That is to say, if you believe p and not q, you are definitely not entirely as you ought to be. If the relation were simply that believing p gave you a reason to believe q, it would be slack; you might believe p and not q, yet still be entirely as you ought to be. This could happen if, say, you had a better reason not to believe q. But actually this is not possible. To be sure, you might have a good reason not to believe q, and an appropriate response might be to stop believing p. That way, you can escape from the requirement that is imposed on you by your belief in p. But if you do not take this way out, and you believe p without believing q, you are not entirely as you ought to be. So the relation between believing p and believing q is strict, and it therefore cannot be the reason-giving relation. It is the relation of normative requirement. To summarize, believing p does not give you a reason to believe its consequence q. That is to say, (7) may be false. On the other hand, believing p does normatively require you to believe q. That is to say, (5) is true.
When one proposition implies another (and the implication is immediate and obvious), believing the one requires you to believe the other. The relation of normative requirement merely reflects, at the level of beliefs, the relation of implication that holds between propositions. At the beginning of this section, I said that reasoning is normatively guided, and this is its only normative feature. The idea of normative requirement makes explicit what this normative guidance amounts to. A final note. One sometimes comes across a third weakening of 'ought', besides a reason and a normative requirement. This is the subjective ought. People sometimes say that if you believe p then you subjectively ought to believe its consequence q. This idea is right in making the connection between believing p and believing q both strict and relative, but wrong in making it relative to the person rather than to her belief. To see its wrongness, imagine you have inconsistent beliefs: you believe p and also r, and q follows from p, but not-q follows from r. Then according to this story, you subjectively ought to believe q, and you subjectively ought to believe not-q. That is hard to make sense of; it sounds contradictory. The right thing to say is that your belief in p requires you to believe q, and your belief in r requires you to believe not-q. That is easy to make sense of, though it reveals a failing on your part. You ought not to have these inconsistent beliefs. They impose inconsistent requirements on you, as one would expect.
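This final point can be set out in the notation of (5) and (6). The formal rendering below is my own gloss on the example, not in the original:

```latex
% Suppose you believe p and also r, where q follows from p
% and not-q follows from r. Then, in the pattern of (5):
\[
Bp \text{ requires } Bq, \qquad Br \text{ requires } B\neg q
\]
% By the analogue of (6), each yields only a wide-scope ought:
\[
O(Bp \supset Bq), \qquad O(Br \supset B\neg q)
\]
% Neither permits detachment of O(Bq) or O(B\neg q) on its own,
% so no contradictory pair of subjective oughts follows; the
% conflict is traced instead to the inconsistent beliefs Bp and Br.
```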
5. INTENTIONS ARE NOT REASON-GIVING
Everything I have said about belief reasoning applies to intention reasoning too. If you intend an end, and believe some act is a necessary means to it, your intention and belief normatively require you to intend the means. Reasoning will bring you to intend it. This is simply to restate the correctness of intention reasoning such as at (2). It does not follow that you ought to intend the means, nor that you have a reason to intend it. For example, if you intend to buy a boat, and believe borrowing money is a necessary means of doing so, it does not follow that you ought to intend to borrow money or have a reason to do so. For instance, if you ought not to intend to buy a boat in the first place, it might be that you ought not to intend to borrow money either. Still, instrumental reasoning will bring you to intend to borrow money. This intention is required by your intention of buying a boat and your belief that borrowing money is a necessary means to do so. You might be tempted to think that your original intention and belief must give you some sort of a reason to intend to borrow money. Similarly, you might be tempted to think that believing a proposition must give you some sort of reason to believe its obvious consequences. But I explained in
Section 4 that the latter view is a mistake. It is an easy mistake to make, because the truth is that believing a proposition normatively requires you to believe its obvious consequence, and it is easy to mistake a normative requirement for a reason. It is easy to make the same mistake with intention reasoning. In Section 4, I applied the test of strictness to distinguish a reason from a normative requirement, and I can apply the same test here. The relation between intending an end and intending what you believe is a necessary means is plainly strict. If you intend an end and do not intend what you believe is a necessary means, you are definitely not entirely as you ought to be. Therefore, this relation must be normative requirement. If intending an end merely gave you a reason to intend what you believe is a necessary means, the relation would be slack. Understanding the relation of normative requirement is essential to understanding how instrumental reasoning is even possible. If you intend an end, you must be able to reason correctly about how to bring it about, and you must be able to do this even if you have no reason to intend the end. Indeed, you must be able to do it even if you actually ought not to intend this end. The possibility of correct instrumental reasoning cannot depend on the rightness of the end. But if you ought not to intend the end, you may have no reason to intend the means. So if instrumental reasoning had to work by giving you a reason to intend the means, instrumental reasoning would not be possible when you ought not to intend the end. Fortunately, it does not work this way. It works through a normative requirement. This point supports an argument of Michael Bratman's (1987, 23-7). Bratman points out that a theory of practical reasoning must not imply 'bootstrapping', as he calls it.
Just because you form the intention of buying a boat, that intention cannot possibly create a reason for you to do so if you do not already have one. An intention cannot justify itself, by providing itself with a reason. Still, your intention must playa role in your reasoning and lead you to other intentions, including the intention of borrowing money. The purpose of Bratman's book is to explain the vital role of intentions in the reasoned planning of our lives. This role creates a puzzle. If your intention leads to other intentions by reasoning, does it create a reason for those other intentions? If it did, that would still be a sort of bootstrapping; a reason would be being pulled into existence out of nothing. We have a solution to this puzzle. There is no bootstrapping. One intention gives rise to another by means of reasoning, but no reasons are involved. There is only the relation of normative requirement, which is given us by the correctness of the reasoning. 4 Christine Korsgaard's 'The normativity of instrumental reason' (1997) is an important discussion of what Korsgaard calls the 'instrumental principle', 4 This argument is developed more thoroughly in Broome (2oora).
Practical Reasoning
the principle 'that practical reason requires us to take the means to our ends'.5 Korsgaard is concerned with how instrumental reasoning is possible. One of the main conclusions she draws from her argument is that: 'Unless there are normative principles directing us to the adoption of certain ends, there can be no requirement to take the means to our ends' (p. 220). She says, in more detail:
For the instrumental principle to provide you with a reason [to take the means to an end], you must think that the fact that you will an end is a reason for the end. It's not exactly that there has to be a further reason; it's just that you must take the act of your own will to be normative for you. And of course this cannot mean merely that you are going to pursue the end. It means that your willing the end gives it a normative status for you, that your willing the end in a sense makes it good. The instrumental principle can only be normative if we take ourselves to be capable of giving laws to ourselves-or, in Kant's own phrase, if we take our own wills to be legislative (pp. 245-6).
Korsgaard would be right that you must have a reason for your end if instrumental reasoning provided you with a reason to take a means. And if instrumental reasoning did this for you, I dare say her other conclusions would follow. But instrumental reasoning does not provide you with a reason to take a means. That is not how it works. Willing (or intending) an end normatively requires you to will whatever you believe is a necessary means to the end. It does not give you a reason to take the means, and it does not need to. So actually Korsgaard's conclusions do not follow. Willing an end need not give the end a normative status for you, for instance.
6. NORMATIVE REASONING AND NORMATIVE ASCENT
The intention reasoning illustrated in (2) is rare. We rarely have the opportunity to engage in reasoning just like that. It is available only when we believe some particular means is necessary to an end of ours, and we rarely encounter a means that we believe to be strictly necessary. Normally we recognize several alternative ways of achieving each of our ends. How does practical reasoning work then? For the sake of argument up to now, I have ignored the fact that you can buy a boat by other means than borrowing money. For example, you can sell your house or rob a bank. These are definitely inferior alternatives to borrowing money, but they are possible. How can you reason about means of buying a boat, bearing this in mind?
5 This definition of the instrumental principle is implicit in Korsgaard (1997, 215). Although I have eventually concluded that Korsgaard's argument is mistaken, her paper was a major stimulus for this paper of mine.
When you do not believe the means is necessary, it is tempting to turn to normative reasoning. Consider the inference that you would express to yourself as:

I intend to buy a boat
and
If I intend to buy a boat, I ought to borrow money,
so
I ought to borrow money.

In neutral terms:

Chris intends to buy a boat   (8b)
and
If Chris intends to buy a boat, Chris ought to borrow money,   (8c)
so
Chris ought to borrow money.   (8d)

Let us assume a cognitivist account of normative statements. Given cognitivism, (8) expresses a valid inference concluding in a normative proposition (8d). So the following describes a correct piece of belief reasoning you might go through:

B(Chris intends to buy a boat)   (9b)
and
B(If Chris intends to buy a boat, Chris ought to borrow money)   (9c)
so
B(Chris ought to borrow money).   (9d)

I shall use the term 'normative reasoning' for belief reasoning that concludes in a belief in a normative proposition. The reasoning from (9b) to (9d) is normative reasoning. We might think that normative reasoning could form a component of a longer process of practical reasoning. Your whole reasoning process might be:

I(Chris will buy a boat)   (9a)
so
B(Chris intends to buy a boat).   (9b)
Also
B(If Chris intends to buy a boat, Chris ought to borrow money)   (9c)
so
B(Chris ought to borrow money),   (9d)
so
I(Chris will borrow money).   (9e)

I shall call this reasoning by 'normative ascent'. From (9a) to (9b) is the step of ascent, from (9b) to (9d) is a process of normative reasoning, and (9d) to (9e) is the descent step from a normative belief to an intention.
However, this process of reasoning has a fatal flaw. The conditional normative proposition contained in (9c) is not in general true. Even if you intend to buy a boat, it may not be the case that you ought to borrow money. Perhaps you ought not to buy a boat in the first place, for example. I said in Section 5 that your intention to buy a boat does not make it the case that you ought to intend to borrow money. Much less does it make it the case that you ought to borrow money. Since Section 5, I have relaxed the assumption that borrowing money is a necessary means of buying a boat, but that makes no difference to this point. Your intention of buying a boat may normatively require you to intend to borrow money. But we cannot detach the conclusion that you ought to intend to borrow money. Much less can we conclude that you ought to borrow money. No detachable normative conclusion is available, and hence no material conditional proposition such as the content of (9c). The content of (9c) might be true by accident, but it is not generally true, so it could not itself be supported by any general process of correct reasoning. Consequently, (9) cannot serve as a model for practical reasoning. We can invent variants of (9), in which the content of (9c) is replaced with some other conditional. For example:

B(If Chris will buy a boat, Chris ought to borrow money)   (9f)
or
B(If Chris intends to buy a boat, Chris ought to intend to borrow money).   (9g)

The rest of the reasoning in (9) would need to be adjusted accordingly. But the same objection applies to all these variants. These conditionals with a normative consequent are not in general true, and could not be supported by a general process of correct reasoning.
7. REASONING AND METAREASONING
What can be done about that? We might try to accommodate the lesson of Section 5 by setting up this different sort of reasoning process, which incorporates normative ascent in a different way:

I(Chris will buy a boat)   (10a)
so
B(Chris intends to buy a boat).   (10b)
Also
B(Chris's intending to buy a boat normatively requires Chris to intend to borrow money)   (10c)
so
I(Chris will borrow money).   (10d)
In appropriate circumstances, the content of (10c) might be true and supported by a general process of correct reasoning. For example, that might be so if you believe that borrowing money is the best means of buying a boat. So in this respect (10) can serve better than (9) as a model for practical reasoning. Nevertheless, (10) is misbegotten. To see why, let us take as an example a set of circumstances in which we know already that (10c) is a true belief. It is a true belief in the case I investigated earlier, where you believe that borrowing money is a necessary means of buying a boat. So let us return to that case for a moment. In that case, we already have correct reasoning that can bring you to an intention of borrowing money. It is described in my original example of practical reasoning, (2). Compared with (2), (10) has at least two dubious features. One arises from the ascent to the normative and the other from the descent from the normative. First, your intention of buying a boat figures in (10) only by supplying you with the belief that you have this intention. This is the ascent step. Your final intention of borrowing money is supposed to derive from beliefs only, the beliefs (10b) and (10c). This is not plausibly the way one of your intentions rationally induces another. It does not do so by means of creating in you the belief that you have it, but more directly. In any case, the step from (10a) to (10b) is not plausibly reasoning. Second, the descent step to the intention (10d) is very hard to fathom. In our earlier formulation (9), the corresponding descent from (9d) to (9e) can easily be understood. It sets out from your belief (9d) in the detached normative proposition that you ought to borrow money. It is plausible that, if you believe you ought to do something, this belief normatively requires you to intend to do it.6 If so, the step from (9d) to (9e) is normatively required.
It is much harder to understand how this intention can be directly derived from (10b) and (10c), without the aid of detachment. And we know that detachment is not available. By now it should be obvious that (10) is malformed. We already have in (2) an accurate description of correct intention reasoning in the special case we are considering. To say that your intention of buying a boat normatively requires you to intend to borrow money (in the circumstances that you believe this is a necessary means of buying a boat), is merely to say that the reasoning in (2) is correct. It is a remark that belongs to metareasoning, not to reasoning. But in (10) it is injected into the reasoning itself through the belief (10c). That is why (10) is a mess; it is a muddle of reasoning and metareasoning. The correct reasoning (2) does not call on either the belief (10b) or the belief (10c), and it concludes directly in the intention (2c) or (10d). So it does not make the worrying steps of ascent and descent. Ascent and descent are only called on
6 This idea is investigated in Broome (2001b).
when metareasoning becomes incorrectly entangled with the reasoning. Normative ascent should be no part of instrumental reasoning. Normative reasoning belongs to metareasoning, not to instrumental reasoning itself. We need metareasoning. We have to determine when reasoning is correct. In Section 2, I argued that one sort of instrumental reasoning, represented by (2), is correct. Its correctness depended on deductive logic. But now that we are dealing with means that are not necessary, deductive logic is not going to be enough. You are not entitled to the belief (2b). We need to discover other correct patterns of instrumental reasoning. There surely are some. When there is a necessary means to your end, the reasoning is clear cut. When there is a means that is not necessary, but still very much the best, reasoning cannot simply leave you in the lurch. By reasoning, you must still be able to arrive at an intention to take this means. Once again, we can look at belief reasoning for an analogy. Correct belief reasoning can rest on deductive logic, like the reasoning described in (3). But there are other bases for correct belief reasoning, too: induction, perhaps, or inference to the best explanation. For the sake of argument, let us suppose this is correct reasoning:

B(The road is wet)
and
B(The best explanation of the road's being wet is that it has been raining)
so
B(It has been raining).

Then this metastatement will be true: believing the road is wet and believing that the best explanation of the road's being wet is that it has been raining normatively requires you to believe it has been raining. The metastatement does not belong to the reasoning; it supports it. Inevitably, principles of inference other than logic will be more controversial than logic, and I do not insist that inference to the best explanation is a valid principle of inference. It is only an example. There must surely be some valid principles of belief reasoning other than deductive logic.
Whatever these non-logical principles are, we must expect them to impose weaker normative requirements than logic does. There are degrees of normative requirement. To infringe logic is a worse offence than to infringe some other principle of inference. We need to find analogous non-logical principles for intention reasoning. We need to identify correct reasoning of the form:

I(Chris will buy a boat)   (11a)
and
B( ... )   (11b)
so
I(Chris will borrow money).   (11c)
Some belief or other must substitute for the dots in (11b). This belief must be such that, together with your intention of buying a boat, it normatively requires you to intend to borrow money. Evidently it must in some way link your buying a boat with your borrowing money. We know already that (11) is correct reasoning when we substitute (4b)-'For Chris to buy a boat, a necessary means is for Chris to borrow money'-for the dots. In that case, (11) comes down to (2). Now that you are no longer entitled to believe (4b), we must find a replacement.
8. DECISION THEORY AND CONDITIONAL GOODNESS
Decision theory is generally supposed to supply an account of instrumental reasoning, so I shall start with decision theory. It will fail in the end, but it will take us a long way, and we can learn from its failure. We want to know how you can reason about how to buy a boat when borrowing money is not the only way of buying one. Let us make the example more testing. Let us suppose that, although you intend to buy a boat, it would be better if you did not, because you cannot afford to. In these circumstances, it may be that you ought not to borrow money. Nevertheless, you must still be able to derive an intention of borrowing money from your intention of buying a boat. Instrumental reasoning must be able to do that for you. As I said in Section 5, instrumental reasoning cannot break down just because your end is one you would be better off without. How can decision theory help you? Most often, decision theory is formulated as a theory about the structure of preferences; it describes what structure your preferences ought to have. But I am not going to use it in this form. The reason is that I cannot see how in this form it can contribute to practical reasoning. I cannot envisage any process of reasoning that could take you from a preference to an intention. A preference can cause an intention, but I do not see how this causal process could be one of reasoning. But that is a long story, which I shall not develop here. Instead I shall simply use decision theory in another form that is more promising. The theory can easily be divorced from preferences, to serve as a theory about the structure of goodness instead.7 That is how I shall use it. One purpose of decision theory is to take account of uncertainty. Uncertainty always intervenes between an act and its results; you never know exactly what the result of any act will be. For each act you might do, there is a range of possible results that may emerge if you do it.
7 See Broome (1991) and the Introduction to Broome (1999a). Jeffrey (1983) treats decision theory as a theory of 'desirability', which I take to be synonymous with goodness.
If you do the
act, each of its possible results will have some probability, and each will also be good to some degree. For the purposes of my argument, it does not matter what is the source or nature of this goodness. It might be impartial universal goodness, or it might be just your own good. We might be valuing the results of your acts from the point of view of the universe, or from your own self-interested point of view, or from some other point of view. We can apply decision theory in any case. Related to a result's goodness is a quantity that decision theorists call its 'utility'. The utility of a result need not be exactly its goodness, but it represents its goodness, which means that one result is better than another if and only if it has more utility. The distinction between utility and goodness makes no difference to my argument here, and it is safe to ignore it. Each act has an expected utility, which is defined as the mathematical expectation of the utilities of its possible results. Utility is defined in such a way that, of all the acts that are available to you, the best is the one that has the highest expected utility. The probabilities of the results of your acts will depend on circumstances. We are supposing that, given the general circumstances of the world, including your financial situation, it would be better for you not to buy a boat. This means that the expected utility of your buying a boat is less than the expected utility of your not doing so. The reason results from the probabilities: if you buy a boat you will probably get into financial difficulties; a high probability is attached to this bad result. Moreover, let us assume that the expected utility of your borrowing money is less than the expected utility of your not doing so. Again, this results from the probabilities: if you borrow money, that too will probably plunge you into financial trouble. 
So when we compare borrowing money with not doing so, given the general circumstances of the world, borrowing money is the worse alternative. However, we are assuming that actually you are going to buy a boat, foolish though that is. This is a further fact to be added to the general facts we have taken into account so far. It makes a difference to the probabilities. We now need to recalculate on the basis of conditional probabilities, conditional on your buying a boat. We must recalculate the expected utilities of your acts, using the new conditional probabilities. Given the new fact that you are going to buy a boat, then if you do not borrow money you will probably end up selling your house or robbing a bank. Consequently, the expected utility of borrowing money is now greater than the expected utility of not doing so. When we compare borrowing money with not doing so, given the new probabilities that arise from the circumstance of your buying a boat, borrowing money is better. Conditional on your buying a boat, it is best to borrow money. That is the conclusion of decision theory.
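The recalculation just described can be set out schematically. The notation below is my own gloss on the text's definitions, not Broome's; the symbols are illustrative only. The expected utility of an act a is the probability-weighted sum of the utilities of its possible results r:

\[
\mathrm{EU}(a) \;=\; \sum_{r} \mathrm{Prob}(r \mid a)\, U(r).
\]

Unconditionally, EU(borrow) < EU(not borrow), because borrowing will probably plunge you into financial trouble. Conditioning every probability on the proposition b, that you buy a boat, gives

\[
\mathrm{EU}_{b}(a) \;=\; \sum_{r} \mathrm{Prob}(r \mid a \,\&\, b)\, U(r),
\]

and now EU_b(borrow) > EU_b(not borrow), since given b the likely alternatives to borrowing are selling your house or robbing a bank. The utilities U(r) are unchanged throughout; only the probabilities are conditionalized, a point that matters in Section 10.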
The suggestion that emerges from all this is that, in place of the dots in (11b), we should put:

Conditional on Chris's buying a boat, it is best for Chris to borrow money.   (12)

We get:

I(Chris will buy a boat)   (13a)
and
B(Conditional on Chris's buying a boat, it is best for Chris to borrow money)   (13b)
so
I(Chris will borrow money).   (13c)

Expressed to yourself, the content is:

I am going to buy a boat
and
Conditional on my buying a boat, it is best for me to borrow money,
so
I shall borrow money.

This is a decision-theoretic account of instrumental reasoning. Is it a good account?
9. MERITS OF DECISION THEORY
How does the belief that it is conditionally best to borrow money fit into this reasoning-how does it help to justify an intention to borrow money? Some assumption of teleology is evidently implicit. Teleology is the theory that, when faced with a choice, you should choose whichever alternative is the best. To support (13), we shall need a version of teleology appropriately tailored to whatever notion of goodness is incorporated in (13b). If it is goodness from the point of view of the universe, we must assume impartial teleology; if it is goodness for you only, self-interested teleology. The appropriate version of teleology allows us to infer from (12) that

Conditional on Chris's buying a boat, Chris ought to borrow money.   (14)

We can plausibly understand this to be saying that your intention of buying a boat normatively requires you to borrow money. So it provides metareasoning to justify the reasoning process (13).
Proposition (14) is not a material conditional, so it can provide no support for reasoning like (9), which I have already rejected. The 'ought' in (14) cannot be detached. (14) does not imply the material conditional:

If Chris is going to buy a boat, Chris ought to borrow money.   (15)

If it did, it would risk contradiction. From (15) and the fact that you are going to buy a boat, we would be able to deduce that you ought to borrow money. But, given teleology, this is false. I assumed earlier that, unconditionally, borrowing money is worse than not doing so. Consequently, teleology implies you ought not to borrow money. In decision theory, conditionalized propositions like (14) do not imply material conditional propositions like (15). Conditionalized propositions stem from conditional probabilities, and propositions about conditional probability do not imply material conditional propositions about probability. Conditional probability does not permit detachment. Here is the demonstration. For some propositions e, f, and g, let x be the probability of e conditional on f:

Prob(e|f) = x.   (16)

Let y be the probability of e conditional on f&g:

Prob(e|f&g) = y.   (17)

Assume x and y are different, as is certainly possible. Now suppose these conditional probabilities did imply the corresponding material conditionals. From (16) we would have:

f ⊃ Prob(e) = x.

Hence, by strengthening the antecedent:

f&g ⊃ Prob(e) = x.

But from (17) we would have:

f&g ⊃ Prob(e) = y.

Since x and y are different, this is a contradiction. Therefore, conditional probabilities do not imply the corresponding material conditionals. So (14) does not imply (15), and is not subject to the risk of contradiction I mentioned. The upshot is that (13) has some satisfactory features as a representation of instrumental reasoning. It seems to fit what Michael Bratman (1987, 33-4) has in mind as the role of intentions in practical reasoning. Bratman says:

Prior intentions and plans ... provide a background framework against which the weighing of ... reasons for and against various options is to take place.
The idea of providing a background framework is well captured by the process of conditioning on prior intentions. In a note, Bratman (1987, 180, note 12) mentions this decision-theoretic interpretation as a possibility, though he does not endorse it:

It may be possible to see expected utility theory as an account of how one is to bring to bear, in decisionmaking, utility and probability assignments concerning those options that are relevant and admissible, given one's background framework of prior plans and flat-out beliefs.
10. DECISION THEORY IS NOT INSTRUMENTAL REASONING
Despite a good start, decision theory's account of instrumental reasoning is unsatisfactory in the end. One unsatisfactory feature is that it depends on teleology, and genuinely correct intention reasoning should not depend on such a specific normative theory as teleology. This might not be disastrous in itself, but it points us to the real problem with the reasoning described in (13). The real problem lies in the notion of goodness incorporated in (12) and (13b). We started with some general notion of good. I call it a general notion, but remember it is not necessarily good from the point of view of the universe; it may be your own good only. From this general good, we determined the comparative goodness of your acts by means of expected utility. We imposed the condition that you are going to buy a boat, and calculated the expected utilities of acts on the basis of conditional probabilities. In the calculation, the probabilities were conditional on your buying a boat, but the notion of goodness was not. It remained as it started: goodness in general, not goodness in any way relative to your buying a boat. We assessed your alternatives of borrowing money or not, according to how good they are, given that you are going to buy a boat. We did not ask how good they are for buying a boat. (12) says that borrowing money is the best thing to do, given that you are going to buy a boat. It does not say that borrowing money is the best way for you to buy a boat. These are quite different matters. The best thing to do, given that you are going to buy a boat, might not even be a means of buying a boat. For example, it could turn out that the best thing to do, given that you are going to buy a boat, is to join a course in seamanship. But this is not a means of buying a boat. That is why we had to call on teleology to support the reasoning in (13). We had to presume you are acting for the sake of goodness, not particularly to buy a boat.
Your pursuit of good is merely constrained by the condition that you are going to buy a boat. Consequently, the reasoning had to call on goodness in general to provide an external objective for you to aim at. The objective of buying a boat was not itself enough to support the reasoning.
Furthermore, once we have identified good as your aim, we can see that something else has gone wrong. If good is your aim, you ought not to condition on buying a boat, and pursue what is best conditional on that. You intend to buy a boat, but you might not succeed. The bank manager might not cooperate, or you might change your mind before you make the purchase. You ought to allow for the possible failure of your intentions. Consequently, if you are pursuing what is best, you should condition on your intention of buying a boat rather than on your actually buying one. But conditioning on intention takes the decision-theoretic reasoning process even further from choosing an appropriate means to your end. Suppose you intend to kill yourself. Appropriate means might be to slit your wrists in the bath or jump off a cliff. But the best thing to do, conditional on your intention of killing yourself, may be to get help with your problems, so that in due course you will no longer have this intention. The truth is that, despite some merits, (13) does not represent instrumental reasoning at all. It is not about how best to buy a boat, but about how to do what is best. I conclude that, whatever else it might be, decision theory does not provide an account of instrumental reasoning. It tells you how to achieve good, constrained by your intentions, not how to fulfil your intentions.

11. CORRECT INTENTION REASONING AGAIN
I think we may take it for granted that a satisfactory replacement for the dots in (11b) will be something like:

For Chris to buy a boat, the best means is for Chris to borrow money
or
The best way for Chris to buy a boat is to borrow money.

From Section 10, we have learnt that the notion of 'best' in these propositions must be in some way relative to your end of buying a boat. Your intention reasoning will go like this, say:

I(Chris will buy a boat)   (18a)
and
B(The best way for Chris to buy a boat is to borrow money)   (18b)
so
I(Chris will borrow money).   (18c)

Its content, expressed to yourself, will be:

I am going to buy a boat
and
The best way for me to buy a boat is to borrow money,
so
I shall borrow money.
This seems intuitively correct, and a natural extension of (2). Another example of intuitively correct intention reasoning is:

I(Chris will commit suicide)
and
B(The best way for Chris to commit suicide is for Chris to jump off the cliff)
so
I(Chris will jump off the cliff).

This reasoning concludes in an intention to jump off the cliff. If the reasoning is correct, this intention is normatively required by your intention of committing suicide and your belief that jumping off the cliff is the best way of doing so. I am not suggesting you ought to jump off the cliff or that you have a reason to do so. I made it clear in Section 4 that no conclusion about what you ought to do or have a reason to do follows from the relation of normative requirement. I have given two examples of intention reasoning that seem intuitively correct. I am sorry to say that is now the best I can do. I can point out what seems intuitively correct, but I have run out of arguments. I have left two tasks undone. One is to explicate the notions of 'the best means' or 'the best way', which figure in the reasoning. The other is to explain why reasoning like this is indeed correct. I recognize these unaccomplished tasks are both major ones. So this paper is really only the beginning of a theory of instrumental reasoning. I hope I have accounted fully for reasoning to a means that you believe is necessary. But about reasoning to a means that you do not believe is necessary, I have not gone beyond negative conclusions. I have argued that normative reasoning is not a correct account of this type of reasoning, and nor is decision theory. A positive account is awaited. I can mention some of the agenda for the two tasks. So far as the first task is concerned, we know already that the idea of goodness contained in 'the best means to the end' must be relative to the end itself. The goodness of a means to an end is not goodness in general. The best means to an end is not the same as what is best, given the end. For this reason, the correctness of (18) does not depend on teleology; it does not call on the external objective of goodness.
The objective is internal to the reasoning. Taking the best means is normatively required by the end itself, not by the pursuit of good. Undoubtedly, one component of goodness for an end must be reliability in achieving the end. Jumping off a cliff is a good way to kill yourself, because it will probably work. Taking an overdose of aspirins is not a good way, because it often fails. Also, it will often be true that external goodness is a component of goodness for an end, although it cannot be the whole of
it. What is good, given the end, in many cases plays a significant part in determining what is the best means to the end. The best means of buying a boat is partly determined by the effort and cost involved. Saving effort and money is good in a general sense; its goodness does not issue from your intention of buying a boat. So we must properly integrate external goodness with goodness specific to the end. That is part of the first task I mentioned. A problem within the second task is this. I started this paper by arguing that intention reasoning and belief reasoning are parallel, and both follow truth. I made that argument in Section 2, for the special case of a necessary means. In this special case, both types of reasoning are supported by deductive logic. What about the more general case? As it happens, belief reasoning parallel to (18) is plausibly correct:

B(Chris will buy a boat)   (19a)
and
B(The best way for Chris to buy a boat is to borrow money)   (19b)
so
B(Chris will borrow money).   (19c)

This reasoning is defensible. It is supported by an inference to the best explanation. If you are going to buy a boat, the best explanation of how this will happen is that you will borrow money. So you might infer that you will borrow money. Any instance of intention reasoning to the best means like (18) will be paralleled by belief reasoning to the best explanation like (19). However, the parallel is not robust. (19) constitutes reasoning to the best explanation only because (18) is reasoning to the best means, and because you may be expected to reason to the best means. If you were irrational, (19) would not be defensible reasoning. I think the truth is that, when we move away from necessary means, intention reasoning and belief reasoning diverge. Both still follow truth, one in a truth-making way and the other in a truth-taking way. But the former is concerned with the best way of making the end true, and the latter with the most likely way the end will be true. When there is only one means, and so only one way the end will be true, these concerns collapse into one, but otherwise they do not. The case when there is just one means is also the case where both types of reasoning are supported by deductive logic. I do not think this should cast doubt on the conclusion of Section 2. I still think both types of reasoning follow truth. Because of this, they are supported by the same deductive logic, when they are supported by deductive logic at all. That is what Section 2 claimed. But part of the second task-giving an account of correctness for intention reasoning in general-will be to check my claim that intention reasoning, like belief reasoning, always follows truth.
12. SUMMARY
Like all reasoning, practical reasoning is a process that takes you from some of your existing mental states to a new one. Theoretical reasoning concludes in a belief; practical reasoning in an intention. If a piece of reasoning is correct, it concludes in a state that is normatively required by the states it is derived from. But it does not follow that you ought to be in the concluding state, or have a reason to be in it, even if you are in the states it is derived from. This paper considered only one sort of practical reasoning: instrumental reasoning. If you intend an end, from this intention together with an appropriate belief, instrumental reasoning leads you to intend a means to the end. The simplest cases of instrumental reasoning are those in which you believe a particular means is necessary to the end. In those cases, the correctness of the reasoning is ensured by the logical validity of its content. More commonly, you will not believe that the means is necessary to the end, but instead that it is the best means to the end. In those cases, your reasoning does not rest on logical validity; it requires some other principle of reasoning. This principle remains to be worked out. But instrumental reasoning is definitely not to be interpreted as normative reasoning. Nor does decision theory supply an appropriate principle of instrumental reasoning.
REFERENCES

Aune, Bruce (1977) Reason and Action (Dordrecht: Reidel).
Bratman, Michael E. (1987) Intention, Plans and Practical Reason (Cambridge, Mass.: Harvard University Press).
Broome, John (1991) Weighing Goods (Oxford: Blackwell).
--(1999a) Ethics Out of Economics (Cambridge: Cambridge University Press).
--(1999b) 'Normative requirements', Ratio, 12, 398-419; reprinted in Jonathan Dancy (ed.), Normativity (Oxford: Blackwell, 2000), 78-99.
--(2001a) 'Are intentions reasons? And how should we cope with incommensurable values?', in Christopher Morris and Arthur Ripstein (eds.), Practical Rationality and Preference: Essays for David Gauthier (Cambridge: Cambridge University Press), 98-120.
--(2001b) 'Normative practical reasoning', Proceedings of the Aristotelian Society, suppl. vol. 75, 175-93.
Chisholm, Roderick (1978) 'Practical reasoning and the logic of requirement', in Joseph Raz (ed.), Practical Reasoning (Oxford: Oxford University Press), 118-27.
Geach, Peter (1965) 'Assertion', Philosophical Review, 74, 449-65.
Hare, R. M. (1971) Practical Inferences (London: Macmillan).
Hume, David (1978) A Treatise of Human Nature, edited by L. A. Selby-Bigge and P. H. Nidditch (Oxford: Oxford University Press).
Jeffrey, Richard C. (1983) The Logic of Decision, 2nd edn. (Chicago: University of Chicago Press).
Kamm, Frances (2000) 'The doctrine of triple effect and why a rational agent need not intend the means to his end', Proceedings of the Aristotelian Society, suppl. vol. 74, 21-39.
Korsgaard, Christine (1997) 'The normativity of instrumental reason', in Garrett Cullity and Berys Gaut (eds.), Ethics and Practical Reason (Oxford: Oxford University Press), 215-54.
Von Wright, G. H. (1978) 'On so-called practical inference', in Joseph Raz (ed.), Practical Reasoning (Oxford: Oxford University Press), 46-62.
5

Reasons for Action and Instrumental Rationality*

ALAN MILLAR
There is a familiar and highly plausible view, deriving from Elizabeth Anscombe,1 that an agent
which figure in the Anscombian Principle are motivating reasons. Normative reasons are commonly explained as being reasons which provide the agent who has them with a justification for the action in question. In an influential work on reasons for action Michael Smith endorses the idea that 'our concept of a reason for action is loosely defined by [the] two dimensions of explanation and justification' (Smith 1994, 95). Indeed, he says, 'we work with two quite different concepts of a reason for action depending on whether we emphasize the explanatory dimension and downplay the justificatory, or vice versa' (Smith 1994, 95). It is the justificatory reasons which are normative. In the same vein, Garrett Cullity and Berys Gaut write, 'Normative reasons are those providing a justification of the actions for which they are reasons' (Cullity and Gaut 1997, 1).2 Whatever else they may be, normative reasons are reasons which an agent might consider in advance of the action for which they provide a reason. What makes them normative is that they are constituted by considerations which recommend or favour some course of action. (This is precisely how the idea of a normative reason is explained in the glossary to Darwall 1998.) It is a further matter whether all normative reasons should be regarded as providing justification. In this paper, I argue that there are normative reasons for action which do not provide an agent with anything which could properly be called justification. These are reasons which recommend an action by showing that there is, or would be, some point in doing it, but which fall short of providing the agent with a justification for the action. For example, the fact that I intend to buy a newspaper now and can do so at the corner shop supplies me with a reason to go to the shop-a reason which would give a point to my doing so. It would be overblown to speak of this reason as giving me a justification to go to the shop.
In a case like this, it might well be that talk of having or lacking justification is not to the point. Theorists have been too ready to think of normative reasons for action on the model of normative reasons for belief. The latter can provide nothing less than a justification or warrant for a belief. But the analogy should not be assumed without further argument. I touch on this matter in Sections V and VI. My resistance to counting all normative reasons for action as justificatory reasons raises a terminological matter. For some, the very idea of a normative reason may be so closely tied to the idea of a reason providing justification that it seems self-contradictory to talk of normative reasons which do not provide a justification. In the light of this it might be thought best to reserve the adjective 'normative' for reasons which provide justification. My view could then be expressed by saying that there is a class of reasons which favour or recommend an action but which are distinct from normative reasons. Against that way of dealing with the terminology is the fact that it is agreed on all sides that normative reasons for action must recommend or favour an action. The reasons which I wish to pick out meet that condition; it is just that they do not provide a justification for the action. If there are such reasons, then the common conception of normative reasons as providers of justification is overly restrictive. To make out the case I need to work with some conception of reasons which provide justification. To this end I shall make use of a distinction between justificatory and justifying reasons. A justifying reason for you to φ is a true consideration which justifies your φing. A justificatory reason for you to φ is a true consideration such that, in the absence of countervailing considerations, you would be justified in φing.3 Countervailing considerations are considerations which override or undermine. Suppose that promising to φ is a justificatory reason to φ. Even so, having promised to meet someone you may not be justified in doing so, either because the promise was extracted under duress (an undermining consideration) or because you are under an overriding obligation to do something else incompatible with keeping the promise (an overriding consideration). The most important point is that, taken on its own, a justificatory reason provides what is sometimes called a pro tanto reason4 for an action. If there is a pro tanto reason for an action, then in the absence of countervailing reasons the action would be justified. My claim is a relatively strong one. It is that not all normative reasons for an agent to φ are so much as pro tanto reasons for the agent to φ.

2 For the distinction between normative reasons and motivating or explanatory reasons, see, apart from the works cited in the text, Baier (1958, ch. 6), Nagel (1970, 14-15), Bond (1983, ch. 2), Darwall (1983, ch. 2; 1997), Schueler (1993, 46 ff.), Scanlon (1998, 18 ff.). Nagel, Darwall, Schueler, Smith, Cullity and Gaut, and Scanlon all take normative reasons to be reasons which supply a justification.
The story so far does not tell us how we are to read 'justified'. A natural suggestion is that if your φing is justified then you ought to φ.5 It is at least arguable, however, that 'ought' is ambiguous and that oughts vary in strength. If that is so, then it may be that we do not pick out the force of 'justified', as opposed to 'justificatory', by saying that if there is a justifying reason for you to

3 I shall take the foregoing claim when fully spelled out to be tantamount to the following more clumsy, but more accurate, formulation: a justificatory reason R for you to φ is a true consideration which is such that were it to be conjoined with the further true consideration that there are no countervailing considerations, then R and that further consideration would in conjunction constitute a justifying reason for you to φ. 4 On pro tanto reasons, see Kagan (1989, 17). The notion is used in this volume by John Broome. 5 See Darwall (1983, 31). Cullity and Gaut (1997, 1) speak of normative reasons as reasons which answer the question, 'Why should the agent do that?'. Smith says, 'To say that someone has a normative reason to φ is to say that there is some normative requirement that she φs, and thus to say that her φ-ing is justified from the perspective of the normative system that generates that requirement' (Smith 1994, 95). 'Required' seems to amount to much the same as 'ought' on at least one reading of the latter word, though Smith goes on to say that normative reasons are truths of the general form 'N's φ-ing is desirable or required'. 'Desirable' is clearly weaker than 'required' but still expresses something stronger than what is true of an action when there is some point in doing it.
I shall assume in what follows that justifying reasons for an agent to φ are reasons in virtue of which the agent ought, all things considered, to φ. The notion of a justificatory reason to φ is to be taken accordingly, as a consideration such that in the absence of countervailing reasons the agent ought, all things considered, to φ. The claim to be defended is that there are normative reasons which are not justificatory in this sense. Maybe those who tie normative reasons to justification never intended to employ such a strong notion of justification. That does not matter for the current project. What does matter is that there is a distinction between normative reasons of the sort in which I am interested and reasons which are justifying, or at least justificatory. To explain why it matters, we need to reflect further on the relation between normative reasons and motivation.
II

Having a motivating reason for
. If I mistakenly think that I have made an arrangement to meet you at the library, and so head for the library, there is a reason by which I am motivated, but I need not have a normative reason to be at the library, though I may think I have. Still, it is plausible that the two types of reason are connected. When actions are done for (motivating) reasons they admit of a rationalizing explanation-an explanation which shows that there was, at least from the agent's point of view, something to be said for the action.8 This suggests what I shall call the Motivation Principle: a consideration which constitutes an agent's motivating reason for φing must be one which the agent rightly or wrongly treats as being a (normative) reason for the action-a consideration in the light of which there is something in favour of (to be said for) the action. That an agent treats a consideration in this way could be manifested in all sorts of ways, for example, by presenting the consideration as being a reason for the action if challenged to justify or show the point of it. The Motivation Principle strikes me as being correct.9 I shall not attempt a defence of it here since I am more concerned with how the principle relates to other aspects of the theory of reasons for action. More specifically,

8 Davidson comes close to such a view when he writes that 'there is a certain irreducible-though somewhat anaemic sense-in which every rationalization justifies; from the agent's point of view there was, when he acted, something to be said for the action' (Davidson 1963; 1980, 9). Compare McDowell's remark that '[e]xplanations of actions in terms of reasons work by revealing the favourable light in which the agent saw what he did (or at least what he attempted)' (McDowell 1982, 301). 9 Doubts about whether it is correct may stem from the fact that it requires agents to have a point of view on their own reasons. I explore related matters in Millar (2001).
a problem arises if the Anscombian Principle and the Motivation Principle are conjoined with the view that all normative reasons are justificatory in the sense I outlined above. The problem is posed by the following assumptions.

A. An agent
Cf. Stocker (1979) and Velleman (1992).
are, then it is possible that agents should act intentionally on such desires. So there can be cases in which an agent acts intentionally but does not think that there is a justificatory reason, far less a justifying reason, for what she does. Yet A-D commit us to supposing there are no such cases. In the face of closely related considerations, Smith offers some reflections which, on plausible assumptions, commit him to rejecting B, the Motivation Principle. With reference to one of Watson's examples he writes:

If the woman ... drowns her bawling baby in the bathwater ... then, in one sense, we can say that she both has a reason for acting and acts for that reason. For she has a motivating reason and that reason figures in a teleological explanation of what she does ... an explanation that allows us to see the woman as in pursuit of a goal that she has. But in another sense, of course, the woman has no reason for acting and what she does is therefore not explicable, even by her own lights, as having been done for a reason. For she acknowledges that there is no rational justification for what she does; acknowledges that the goal she pursues is itself unjustified and unreasonable. (Smith 1994, 140)
Smith represents the unfortunate woman as having a motivating reason for drowning the baby yet not thinking that she has a justification for what she does. That there can be such cases is exactly what I have been arguing. If that is right then, assuming the Anscombian Principle, the Motivation Principle, understood in terms of C, must be rejected. Rejecting the principle is certainly one way of resolving the tension between A, B, and C. Obviously there are other options. One might hold onto B and give up A, the Anscombian Principle. Holding onto B, understood in the light of C, commits one to denying that the woman acts for a reason. But since she acts intentionally this denial would commit us to rejecting A. My own view is that the kind of examples of perverse action presently under consideration do not provide good reasons for rejecting A. (Further putative counterexamples are considered in the next section.) I argue below that we should reject C. It should be emphasized that in what follows I take all normative reasons to be constituted by true considerations. Motivating reasons are commonly said to be constituted by mental states, in particular, some belief-desire pair.11 The belief does not have to be true; a false belief that you can φ by ψing could on this view be an element of a belief-desire pair constituting your motivating reason for ψing. Motivating reasons, in my view, are also best conceived as being constituted by considerations,12 though a consideration can constitute a motivating reason for an agent's ψing only if the agent accepts that consideration and is, as we say, moved by it.

11 The locus classicus for this view is Davidson (1963). It is endorsed in Smith (1994). 12 On this, see Darwall (1983, ch. 2). Cf. Audi (1993).
III

The aim in this section is to motivate the rejection of C and to argue that a normative reason for an agent to do something may simply confer a point on the action without being a justificatory reason for the agent to perform it. There is an important ambiguity, however, in the notion of an action's having a point. Intending to talk to Samantha urgently, and believing that he could do so by catching her just before she leaves her apartment on a trip, Richard heads for Samantha's place. His motivating reason is that if he heads for Samantha's he can talk to her before she leaves. In some sense, his action has a point in that it is directed at carrying out an intention-to talk to Samantha before she leaves. But suppose that Richard is wrong in thinking he can catch her in time, because she has already left. Someone who knew this could quite naturally and truthfully say, 'There's no point in going. She has already left.' In one sense of 'having a point', an action has a point if it is directed at carrying out an intention of the agent. In another sense, an action directed at carrying out an intention would not have a point unless it would in fact contribute to carrying out that intention. Richard's action has a point in the first sense even if he is wrong in thinking he can reach Samantha in time. It has a point in the second sense only if he is right in so thinking. There are further complications. It can be that there would be a point to something which you do not presently intend to do. For example, there might be a point to your reading some book which would help you with a current project. Whether that is so does not depend upon your having an intention to read the book. I shall throughout take it that an action would have a point if either it would contribute to some goal of the agent or it is worth doing.
A (normative) reason for an agent to do something must at least confer a point on the action-it must be such that there would be a point to the agent's performing that action. The crucial issue is whether such a reason must be justificatory. Suppose that you are enjoying a day off work. There is no reason for you not to have taken the day off, and you are under no obligation to spend your time in any particular way. You may do as you please and you intend to do just that. As it turns out you have an inclination to go for a walk. You go for a walk, as we say, just because you feel like it. But you are not simply borne along by your inclination. You have decided, and so formed the intention, to satisfy your inclination. Do you go for a walk for a reason? I think you do. Your reason is that you intend to satisfy your inclination to do so, and going for a walk would do just that. True, we might say of the action in question that it is done for no reason, but that might mean simply that there is no reason beyond the consideration that the action would carry out your intention to satisfy the inclination.13 Not only did you act for a reason, there is a (normative) reason for you to go for a walk. For it is true that you intend to satisfy your inclination and true that by going for a walk you can do so. This consideration therefore confers a point on your going, and so provides you with a reason to go, but the reason is not justificatory. A justificatory reason for an agent to φ is a true consideration which, in the absence of countervailing considerations, would justify the agent's φing, in a sense which implies that the agent ought to φ, all things considered. Suppose then that the consideration that you intend to satisfy your inclination to go for a walk, and that going for a walk would do just that, were a justificatory reason for you to go for a walk. In that case, if there were no countervailing considerations, and yet you did not go, you would have failed to do something that all things considered you ought to do. The present proposal does not have such a consequence and that is a large part of its attraction in relation to the example under consideration. For it seems ludicrously overblown to suppose that if you did not go for a walk you would have failed to do something you ought to have done. Of course, a question might arise as to whether you ought to have gone, but a plausible defence would focus not on your reason for going but on your being at liberty to do as you liked. It would involve a claim to the effect that you had a day off and were under no obligation to do anything which going for a walk prevented you from doing. You would have justified yourself by showing that you were at liberty to do as you did, but that is not the same as providing a justificatory reason for your going for a walk. Someone might concede that there was no justificatory reason for you to go for a walk and claim that there was, therefore, no normative reason for you to go.
But if by 'a normative reason' is meant a reason which in some way favours or recommends an action, then there is room for a distinction among such reasons between those which are justificatory in the present sense and those which merely confer a point on the action. To this in turn it might be suggested that the current sense of 'justificatory' is too strong; a justificatory reason is just a consideration which favours or recommends an action. We should resist any such watering down of the notion, because it has the consequence that a consideration can constitute a justificatory reason for an action yet not be so much as relevant to justifying the agent in the face of a challenge as to whether she ought to have done what she did. To hold this to be objectionable is not to beg the question at this point. It does not presuppose that justificatory reasons are to be explained in terms of all-things-considered oughts. It relies rather on an independent adequacy condition for an account of justificatory reasons: justificatory reasons should be positively relevant to justifying the agent who performs those actions in the face of challenges as to whether she ought to have done what she did. (So far as that goes justificatory reasons might show an action to be permitted.) Normative reasons which confer a point upon an action need not meet the proposed adequacy condition. If we assume that some normative reasons are not justificatory, then the Motivation Principle supplies no reason to reject the Anscombian Principle. But is the Anscombian Principle plausible? Anscombe herself cited doodling as an example of an intentional action done for no reason (Anscombe 1963, § 17). Let us concede that doodling, done as in a daydream, might be done for no reason. In such cases it might be doubted that the action is intentional. You might start to doodle in this way but perhaps doodling only becomes intentional when it has a point-for instance, satisfying an inclination to develop a pleasing pattern. Rosalind Hursthouse rejects the Anscombian Principle, drawing upon examples of what she calls arational action (Hursthouse 1991). Paradigm cases are kicking a car when it will not start, or scratching a photograph of a person who has upset one. Again, let us concede that such actions might sometimes be done for no reason. This might be so in cases in which the agent is completely out of control. But again, to the extent that such actions are done for no reason it is doubtful that they are intentional. When they are intentional, reasons are not hard to find. In the car-kicking case, you feel an urge to lash out, as if the car deserved to bear the brunt of your anger, and you give way to the urge. In the photograph-scratching case, you have an urge to damage the photograph as if you were thereby damaging the person photographed, and you give way to it.

13 Cf. Davidson (1963; 1980, 6). Davidson's point is importantly different. He thinks the reason is provided by the inclination. I think it is provided by an intention to satisfy the inclination.
IV
Opposition to the view I have been defending is likely to focus on whether it can adequately accommodate instrumental, that is, means-end, reasoning. It might seem obvious that such reasoning relies on a principle to the effect that if you intend to φ and ψing is a way of, or means to, your φing, then there is a justificatory reason for you to ψ. Because I shall be objecting to this principle I shall call it the Dubious Means-End Principle. Let us return to the case of Richard. Suppose Richard is infatuated by Samantha and that, despite having been rebuffed by her in terms which leave no room for reasonable doubt, he now thinks, against all the evidence, that there are things he can say to her which will lead her to view him in a better light. Despite all the manifest signs, he is blind to the fact that he would almost certainly make matters worse if he called on her just before she departs on her trip. For these reasons, and perhaps others, let us
suppose that Richard's intention is irrational. Why should we think that his irrational intention gives rise to a justificatory reason for him to do something which would serve to carry it out? Granted that justificatory reasons can be overridden, and so need not actually justify the action for which they are reasons, still, the question is why the irrational intention generates so much as a justificatory reason to head for Samantha's. The query is to the point even if the only way for Richard to carry out his intention is to reach Samantha before she leaves, and the only way to do that is to head for her place. If his intention is irrational, it is no less puzzling that it should give rise to a justificatory reason. Obviously, the fact that one has an intention can feed into one's thinking, along with assumptions as to what is necessary if it is to be carried out, and lead one, through correct reasoning, to a conclusion that one will have to perform a certain action. But that would not show that there is a justificatory reason to perform the action. It would show only that, granted the assumptions about means, the intention commits one to performing the action. That no more shows that there is a justificatory reason to perform the action than the fact that your belief that p commits you to believing that q shows that there is a justificatory reason to believe that q.14 There is a further, and related, consideration which ought to make us suspicious of the idea that intentions generate justificatory reasons to do what is required to carry them out. It is a truism that to carry out an intention to φ it is necessary that one φ. If intentions always give rise to justificatory reasons to do what is necessary to carry them out, then, given the truism, it follows that the intention to φ gives rise to a justificatory reason to φ.
This looks no more plausible than the analogue for belief, that, merely in virtue of believing that p, there is a justificatory reason to believe that p. Clearly, something is wrong with having an intention and yet not doing what is necessary to carry it out. I argue now that we can make sense of the commitments incurred by intentions without assuming the Dubious Means-End Principle.15 Throughout, it is important to keep in mind that my principal aim is to highlight, and account for, the difference between those reasons to ψ which are, and those which are not, justificatory reasons to ψ, in the sense specified. I do not deny that there is a sense in which intentions provide point-conferring reasons for action. I shall say more about such reasons later.

14 What I call commitment is close to what John Broome calls normative requirement in 'Practical Reasoning' (Broome 2002, this volume, ch. 4). For related considerations about belief in the context of a discussion of normative reasons and desires, see Broome (1997, esp. 134-5). 15 Much of the stimulus for the discussion in the remainder of this section derives from Broome's contribution in this volume, ch. 4. Note, however, that Broome's framework for discussing practical reasoning is different from that introduced here. Also, Broome is not committed to the major claim of the present paper about the character of normative reasons. Indeed, I take him to deny that what confers an instrumental point on an action can supply a truly normative reason for the agent to perform the action.
Let us focus on the notion of incurring a commitment which figures in the following plausible principle for intention.

INTCOM If you intend to φ, then you incur a commitment to doing whatever is necessary if you are to φ.
If you intend to φ and ψing is necessary if you are to φ, then your intention to φ commits you to ψing. The commitment here is conditional upon a contingent fact, in particular, the fact that ψing is necessary if you are to φ. Some commitments incurred by an intention to φ are unconditional, for example, the commitment to φing, and the commitment to doing whatever is necessary if you are to φ. It might easily be thought that INTCOM is tantamount to the Dubious Means-End Principle. If the antecedent of the latter principle were satisfied, the justificatory reason for you to do whatever is necessary if you are to φ would be that you intend to φ. Given the principle, and that ψing is necessary if you are to φ, it would follow that there is a justificatory reason for you to ψ. The reason, in that case, would be that you intend to φ and that ψing is necessary if you are to φ. It is this reading of INTCOM which leads from the plausible view that intentions incur commitments to the implausible view that intentions provide a justificatory reason to do whatever is necessary to carry them out. Fortunately, there is a plausible account of the commitments incurred by intentions which does not assume, or commit us to, the implausible view. The alternative account reads INTCOM in terms of the following Means-End Principle:

There is a justifying reason for you to (do whatever is necessary for you to φ, if you intend to φ).

This way of reading INTCOM presupposes an understanding of commitment on which, to say that intending to φ incurs a commitment to doing whatever is necessary if you are to φ is to say that there is a justifying reason for you to (do whatever is necessary for you to φ, if you intend to φ). Note that the commitment in this case is expressible in terms of a justifying, rather than a merely justificatory, reason. It could be called a strong commitment.
But the really important point is that it is the bracketed conditional as a whole, and not just its consequent, which captures what it is that there is a justifying reason for you to do. Given the principle, and that you intend to φ, it does not follow that there is a justifying reason for you to φ. The principle is tantamount to the claim that there is a justifying reason for you to avoid its being the case that you intend to φ, yet never do whatever is necessary if you are to φ. There are two ways in which you can, as I shall say, discharge a commitment incurred by an intention, understood in terms of the Means-End Principle: you can do what the intention commits you to doing or you can abandon the intention.
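The difference between the two principles is a matter of the scope of the reason-operator. In notation that is mine rather than Millar's, write J(A) for 'there is a justificatory reason for you to A', R(A) for 'there is a justifying reason for you to A', Intend(φ) for 'you intend to φ', and Nec(φ) for 'do whatever is necessary if you are to φ'. The contrast can then be sketched as:

```latex
% Narrow scope: the Dubious Means-End Principle. Given Intend(phi),
% modus ponens detaches a justificatory reason to take the necessary means.
\mathrm{Intend}(\varphi) \;\rightarrow\; J\bigl(\mathrm{Nec}(\varphi)\bigr)

% Wide scope: the Means-End Principle. The reason governs the whole
% conditional, so nothing is detachable from Intend(phi) alone.
R\bigl(\mathrm{Intend}(\varphi) \rightarrow \mathrm{Nec}(\varphi)\bigr)
```

On the wide-scope reading the reason is equivalent to R(not-Intend(φ) or Nec(φ)), which is why the commitment can be discharged in either of the two ways just described: by taking the necessary means or by abandoning the intention.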
The understanding of commitment presupposed here has application beyond the sphere of commitments incurred by intentions. Deans of Faculties, football coaches, and army officers all have duties which define their offices. Since the duties define the offices, it might be thought that occupying such an office, in and of itself, generates at least a justificatory reason for carrying out the associated duties. But that is not the only way to conceive of the matter. We may think of occupying the office as committing one to carrying out the duties of the office, the idea being that you have at least a justificatory reason to avoid its being the case that you occupy the office, yet do not perform its duties. (A commitment which is underpinned by a justificatory, as opposed to a justifying, reason could be called a weak commitment.) There are two ways to discharge such a commitment. One can remain in the office and carry out the duties, or one can resign from the office. So it is certainly not the case that the only way to explain the connection between occupying an office defined by duties, and performing the duties, is to assume that occupying the office in itself generates a justificatory reason to perform the duties. There is reason to prefer the alternative which invokes the idea of commitment. What there is justificatory reason for an agent to do in virtue of intending something surely depends, not just on the fact of having the intention, but on the status of the intention. Similarly, what there is justificatory reason to do in virtue of occupying an office defined by duties depends, not just on the fact of one's occupying the office, but on the status of the duties which define the office. Nothing I have said commits me to denying that there are true instances of the schema

if you intend to φ, and ψing is necessary if you are to φ, then there is a justifying reason for you to ψ,

read so that the consequent is detachable when the antecedent is satisfied.
Instances will be true where the situation is one in which there are true considerations which, together with the proposition that the agent intends to φ, and the proposition that ψing is necessary if the agent is to φ, entail that there is a justificatory reason for the agent to ψ. We do not need the Dubious Means-End Principle to make this work out. The Means-End Principle will do the trick since it entails that if ψing is necessary if you are to φ, and there is a justifying reason for you to intend to φ, then there is a justifying reason for you to ψ. It might be argued that I have not given due weight to the idea that the reasons generated by intentions may only be pro tanto (merely justificatory as opposed to justifying) reasons. On the contrary, it is clear that, aside from the previous considerations, we cannot make sense of means-end reasoning in these terms.16 Suppose that Richard has a justifying reason not to head

16 The argument which follows was suggested to me by a very similar argument advanced by Broome in personal correspondence. Broome is not responsible for any defects my version may have.
Alan Millar
126
for Samantha's (arising, say, from the fact that doing so would involve his young children being left alone in his house). Suppose, further, that he does not head for Samantha's, and yet does not abandon his intention to catch her, despite the fact that heading for Samantha's is necessary if he is to carry out this intention. What is wrong with Richard's situation? Well, in line with the idea just mooted, suppose that his intention to catch Samantha gives him a justificatory, though not a justifying, reason to head for Samantha's. We know that such a reason is outweighed by his justifying reason not to head for Samantha's. The trouble is that Richard pays his dues, as it were, to this latter reason so long as he does not head for Samantha's. That reason does not explain what is wrong with his retaining his intention to catch Samantha while not heading for her place. Clearly, Richard ought to abandon his intention; there is a justifying reason for his doing so. But given only the conditional

If Richard intends to catch Samantha, there is a justificatory reason for him to head for Samantha's

and the assumption

There is a justifying reason for Richard not to head for Samantha's

there is no valid route to the conclusion that there is a justifying reason for Richard to abandon his intention to catch Samantha. The Means-End Principle, by contrast, can account for how this conclusion can be validly reached. By the principle there is a justifying reason for Richard to ensure that either he does not intend to catch Samantha or he heads for her place. Given this, and that there is a justifying reason for Richard not to head for Samantha's, it follows that there is a justifying reason for him not to intend to catch Samantha.17 We have then an explanation of what is wrong with having an intention yet not doing what is necessary to carry it out. What is wrong in such a situation is that a commitment incurred by the intention is not discharged.
The explanation, involving the Means-End Principle, does not require us to assume the problematic view that intentions, simply in virtue of being intentions, give rise to justificatory reasons to do what is necessary to carry them out. The Dubious Means-End Principle has unacceptable consequences, and cannot, in any case, account for means-end reasoning.
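The contrast the preceding argument trades on can be set out schematically (a sketch in notation of my own devising, reading 'O(p)' as 'there is a justifying reason to see to it that p', as in footnote 17, and writing I for Richard's intending to catch Samantha and H for his heading for Samantha's):

```latex
\[
\begin{aligned}
&\text{Dubious Means-End Principle (narrow scope):} && I \rightarrow O(H)\\[2pt]
&\text{Means-End Principle (wide scope):} && O(\neg I \lor H)\\[2pt]
&\text{Wide scope, given } O(\neg H)\text{:} && O(\neg I \lor H),\; O(\neg H) \;\vdash\; O(\neg I)
\end{aligned}
\]
```

The narrow-scope reading yields no such conclusion: from I together with O(¬H), nothing follows about a justifying reason to abandon the intention.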
V

In this section I respond to an objection to the view I have advanced. It is that unless intentions give rise to at least justificatory reasons we cannot

17 The reasoning mirrors the reasoning familiar in deontic logic from 'O(¬P ∨ Q)' and 'O¬Q' to 'O¬P'.
adequately explain the role which intentions have in our lives. I shall consider how this objection could be worked out drawing upon some key ideas from Michael Bratman's influential work on intention (Bratman 1987). This will also give me the opportunity to compare how Bratman thinks of the commitments involved in having intentions with the views about commitment which I advanced in the previous section. Bratman discerns two dimensions to the commitment involved in having an intention. There is a volitional dimension which 'derives from the fact that intentions are conduct controllers' (Bratman 1987, 16). An agent who intends to ψ has a volitional commitment to ψ in the sense that, so long as the intention survives until the time of action, nothing interferes, and the agent sees that the time has arrived, he will ψ. Then there is a reasoning-centred dimension. This involves

a disposition to retain [the] intention without reconsideration, and a disposition to reason from this retained intention to yet further intentions, and to constrain other intentions in the light of this intention. (Bratman 1987, 17)
Going by these explanations, commitment in Bratman's sense is a psychological notion. A volitional commitment is a resolve to do the thing in question. It is this notion we have in mind when we speak of people as being committed to their work. Reasoning-centred commitment is also psychological and naturally follows on from volitional commitment. Because of the volitional commitment involved in my intending to finish this paper soon, my intention will feed into my reasoning about how to organize my time in the near future. I am entirely in accord with the view that intentions involve commitments in the psychological sense. Notice, however, that this notion of commitment is very different from that which I invoked in the previous section. There I was concerned with the idea that intending to φ incurs a commitment to doing whatever is necessary to carry out the intention to φ. The latter notion is plainly a normative one. It is about what there is justifying reason for agents to do. The psychological notion is concerned with what agents are disposed to do.18 There is no obvious reason to suppose that the two notions are incompatible. That said, it might seem that there is a certain tension between the way I explain my normative notion of commitment and the functional role of intention as explained by Bratman. On my view having an intention commits you to doing what is necessary to carry out the intention in the sense that there is a justifying reason for you either to do

18 Bratman does speak of the commitment involved in intention as having both descriptive and normative aspects. The normative aspect, he says, 'consists in the norms and standards of rationality' (Bratman 1987, 109) associated with the dispositions and roles characteristic of intentions.
This does not, I think, alter the fact that the commitment in play in his earlier statements on volitional and reasoning-centred commitments is purely descriptive, concerning as it does the agent's actual motivational set rather than any commitments incurred.
what is necessary to carry it out or abandon it. All other considerations aside, having an intention gives you no more of a reason to carry it out than to abandon it. This might seem to be at odds with Bratman's view for the following reason. It might seem that if intention involves a reasoning-centred commitment in Bratman's sense, then an agent who intends to do something will not think of abandoning the intention as being on a par with carrying it out. On the contrary, the argument would go, an agent for whom having an intention provides an input to practical reasoning must take that intention to provide at least a justificatory reason for the intended action. Suppose you have it in mind that p and that if p then q. You might exploit this consideration by inferring that q. In treating the consideration as an input to theoretical reasoning you thereby treat it as a justificatory reason to believe that q. Similarly, it might be said, if your intention to φ, and the consideration that it is necessary for you to ψ in order to φ, feed into practical reasoning leading you to form an intention to ψ, then, in effect, you treat your intention and the related consideration as providing you with a justificatory reason to ψ. This might seem to show that in our ordinary thinking we treat our intentions as providing some kind of justification for taking the means necessary to carry them out.19 The line of thought just sketched does not tell against the position I am defending. It is true that if you accept a conclusion on the basis of certain assumptions you thereby treat those assumptions as jointly constituting a justificatory reason to believe the conclusion drawn from them. It is also true that you could be right to do so. But it would beg the question against the view I am defending to presuppose that parallel claims hold for action. There is an important disanalogy between practical and theoretical reasoning.
By 'theoretical reasoning' I mean inferring that a certain conclusion is true from prior assumptions. The point of theoretical reasoning in this sense is to start with truths and end up with truths. That is why we need the assumptions we start with to constitute nothing less than justificatory reasons to believe the conclusion. In fact, we need more than that; we need the assumptions to constitute justifying reasons to believe the conclusion, because we need the conclusion to be at least worthy of being believed. It is different with practical reasoning. Practical reasoning may, but need not, be concerned with whether an action ought to be done. While we certainly need the considerations we draw upon in practical reasoning to be true and we certainly need them to confer a point on an action, we do not necessarily need them to be such that in the light of them the action ought to be done. That is why we do not always need the considerations which provide normative reasons for some action to constitute even a justificatory reason for the action. Successful theoretical reasoning takes us to a belief which is
19 I do not suggest that Bratman would himself draw such a conclusion.
worthy of belief in virtue of the reasons in its support. Successful practical reasoning may take us only to an action for which there is an instrumental point. I do not for one moment deny that there is a clear sense in which for an agent who has an intention, carrying out the intention, and so doing what is required for that end, is not on a par with abandoning the intention. But that is a conceptual point about the psychology of intention, not its normative commitments. It is in the nature of intending to do something that the agent is to some degree committed in Bratman's sense to doing the thing intended and will abandon the intention only if she thinks there is reason to do so or changes her mind. None of this implies that an agent who intends to φ has, or must think she has, a better reason to φ and to do what is necessary to that end than to abandon the intention. All that the agent need think is that there is an instrumental point to her φing.
VI
In Section III I challenged the view that the reasons for an agent to do something have to be justificatory, drawing upon examples, and using a distinction between an action's having a point, and there being a justificatory reason for it. In Section IV I presented a plausible view of instrumental reason, involving the Means-End Principle, which does not imply that intentions, merely in virtue of being intentions, give rise to justificatory reasons. It is an implication of my view that considerations to the effect that an agent has an intention to φ and that ψing is necessary if the agent is to φ, may constitute genuine normative reasons for the agent to ψ.20 Indeed, the consideration that an agent has an intention to φ, and that by ψing the agent could φ (though there may be other means to the same end), is in some sense a reason for the agent to ψ. If you say to me 'Is there any reason for me to go into town today?', I could intelligibly and correctly reply by reminding you that you intended around this time to see if there is anything worth buying in the January sales. Now if reasons of this sort are indeed normative reasons it ought to be possible to explain why it is appropriate to regard them as such. To do that we need to connect such reasons to appropriate normative principles. It is this matter I briefly address in this concluding section. By the Means-End Principle, in a situation in which going into town today is necessary if you are to check out the sales, there is a justifying

20 Darwall (1983) rightly questions whether intentions provide reasons, on the assumption that reasons are justificatory reasons (see pp. 44-8). I differ from Darwall in thinking that we need to accommodate the idea that intentions provide reasons.
reason to avoid its being the case that you intend to check out the sales yet do not go into town today. We cannot validly infer with the help of the principle either that if you intend to check out the sales there is a justifying, or even a justificatory, reason for you to go into town today, or that if you do not go into town today there is a justifying, or even a justificatory, reason for you not to intend to check out the sales. What is true, however, is that a way of discharging the commitment incurred by your intention is to go into town today. Doing so is, therefore, a way of avoiding an incoherence among your attitudes and actions-the kind of incoherence which remains when a strong commitment is not discharged. There are other ways in which the incoherence could be avoided, but going into town today has at least this in its favour: it is a way. That explains why the consideration that you intend to check out the sales and that going into town today is necessary if you are to do so, constitutes a reason for you to go into town. The reason in question is not a reason to go into town rather than to give up your intention. But it is still a reason to go into town. Consider now cases in which the means are not specified as being necessary to carry out the intention. Take the case in which the relevant consideration is that you intend to check out the sales, and that going into town would enable you to do so (though perhaps you could as well go another day). Suppose this consideration is true. In that case you have incurred a commitment to doing whatever is necessary to carry out the intention and going into town today would be a way of discharging the commitment. So in this case too, going into town today is a way of avoiding an incoherence among your attitudes and actions. That explains why the consideration constitutes a reason for you to go into town today. 
It is not a reason to go into town today rather than abandon your intention or rather than go some other time. So it will not help if you want to work out what is optimal among the options open to you. But it is still a reason for you to go into town. Action need not be aimed at what is optimal, and so it is no surprise that normative reasons for action need not pick out an optimal option. It helps, once more, to compare reasons for action with reasons for belief. Suppose you believe that the Earth is a flat finite disc. Then you are committed to believing that if you travel in any direction on the surface of the Earth in a straight line you would eventually arrive at an edge beyond which you cannot travel on ground. So if you do hold the latter belief, you would discharge the commitment. It does not follow that there is a justificatory reason for you to believe that if you travel in any direction on the surface of the Earth in a straight line you would eventually arrive at an edge beyond which you cannot travel on ground. This granted, there might seem to be an objection to my view that non-justificatory reasons for action can be normative reasons for action. The objection proceeds by analogy with the belief case. Surely there is no sense in which your believing that the
Earth is flat gives you a normative reason to believe that if you travel in any direction on the surface of the Earth in a straight line you would eventually arrive at an edge beyond which you cannot travel on ground. So, the argument would go, there is no sense in which having an intention gives you any kind of normative reason to do something which would be a way of carrying it out. The argument fails because of the asymmetry between believing and intending. Believing aspires to truth and thus to beliefs which are worthy of being held. I mean this in a weak sense, and it is plausible only if taken in a weak sense. Believing's aspiration to truth may be nothing more than a matter of the believer's not being indifferent to considerations which seem to show the belief to be false. Faced with considerations which appear to show that something you believe is false you might give up the belief or you might convince yourself, possibly quite irrationally, that the considerations do not really show that the belief is false. But you might just avoid dwelling upon the considerations, thereby ceasing to feel the tension between them and the belief. Even this last reaction would be consistent with the claim that believing aspires to truth. It is one way in which the believer's lack of indifference to the considerations might be manifested. Complete indifference to the considerations would undermine the claim that you really have the belief in question. It is because believing aspires to truth that if you believe for a reason you need your reason to provide a justifying reason for belief. If there were an analogue for intending, it would be that intending aspires to action which ought to be done. But it is not true that intending, as such, has such an aspiration.
Insensitivity to considerations which suggest that you ought not to φ need not undermine the claim that you intend to φ. The line I take, then, is that normative reasons for action need not be so much as justificatory. The significance of the view is at least twofold: (i) it forces us to think of asymmetries between belief and action and (ii) it removes an argument against holding onto both the Anscombian Principle and the Motivation Principle.

REFERENCES

Anscombe, G. E. M. (1963), Intention (Oxford: Blackwell).
Audi, Robert (1993), 'Mental Causation: Sustaining and Dynamic', in Heil and Mele (eds.) 1993, 53-74.
Baier, Kurt (1958), The Moral Point of View (Ithaca: Cornell University Press).
Bond, E. J. (1983), Reason and Value (Cambridge: Cambridge University Press).
Bratman, Michael (1987), Intentions, Plans, and Practical Reason (Cambridge, Mass.: Harvard University Press).
Broome, John (1997), 'Reasons and Motivation', The Aristotelian Society, suppl. vol. 71, 131-46.
Broome, John (2002), 'Practical Reasoning', in this volume.
Cullity, Garrett, and Gaut, Berys (eds.) (1997), Ethics and Practical Reason (Oxford: Clarendon Press).
Darwall, Stephen L. (1983), Impartial Reason (Ithaca: Cornell University Press).
--(1997), 'Reasons, Motives and the Demands of Morality: An Introduction', in Darwall, Gibbard and Railton (eds.) (1997).
--(1998), Philosophical Ethics (Boulder, Colo.: Westview Press).
--Gibbard, Alan, and Railton, Peter (eds.) (1997), Moral Discourse and Practice: Some Philosophical Approaches (New York: Oxford University Press).
Davidson, Donald (1963), 'Actions, Reasons and Causes', Journal of Philosophy 60, 685-700. Reprinted in Davidson (1980).
--(1980), Essays on Actions and Events (Oxford: Clarendon Press).
Heil, John, and Mele, Alfred (eds.) (1993), Mental Causation (Oxford: Clarendon Press).
Hursthouse, Rosalind (1991), 'Arational Actions', Journal of Philosophy 88, 57-68.
Kagan, Shelly (1989), The Limits of Morality (Oxford: Clarendon Press).
McDowell, John (1982), 'Reason and Action', Philosophical Investigations 5, 301-5.
Millar, Alan (2001), 'Rationality and Higher-order Intentionality', in Denis Walsh (ed.) (2001).
Nagel, Thomas (1970), The Possibility of Altruism (Oxford: Clarendon Press).
Scanlon, T. M. (1998), What We Owe to Each Other (Cambridge, Mass.: Harvard University Press).
Schueler, G. F. (1993), Desire: Its Role in Practical Reason and the Explanation of Action (Cambridge, Mass.: MIT Press).
Smith, Michael (1994), The Moral Problem (Oxford: Blackwell).
Stocker, Michael (1979), 'Desiring the Bad', Journal of Philosophy 76, 738-53.
Velleman, J. David (1992), 'The Guise of the Good', Noûs 26, 3-26.
Walsh, Denis (2001), Naturalism, Evolution and Mind (Cambridge: Cambridge University Press).
Watson, Gary (1975), 'Free Agency', Journal of Philosophy 72, 205-20.
Williams, Bernard (1973), Problems of the Self (Cambridge: Cambridge University Press).
II
PSYCHOLOGICAL REALITY AND PSYCHOLOGICAL EXPLANATION
6 The Rational Analysis of Human Cognition*
NICK CHATER AND MIKE OAKSFORD
•••
Rationality appears basic to the understanding of mind and behaviour. In practical decisions, from whether a person is morally responsible for his or her actions to whether a person can be hospitalized without consent, it seems crucial to be able to draw a boundary between sanity and madness, between rationality and irrationality. In economics, and increasingly, other areas of social science, human behaviour is explained as the outcome of 'rational choice', concerning which products to buy, whom to marry, or how many children to have (Becker 1975, 1981; Elster 1986). But rationality assumptions go deeper still-they are embodied in the folk psychological style of explanation in which we describe each other's minds and behaviour (Fodor 1987; Stich 1983). Assumptions of rationality also appear equally essential to interpret each other's utterances and to understand texts (Davidson 1984; Quine 1960). So rationality appears basic to the explanation of human behaviour, whether from the perspective of social science or of everyday life. Let us call this everyday rationality: rationality concerned with people's beliefs and actions in daily life. In this informal, everyday sense, most of us, most of the time, are remarkably rational. To be sure, we focus on occasions when reasoning or decision-making breaks down. But our failures of reasoning are only salient because they occur against the background of rational thought and behaviour which is achieved with such little apparent effort that we are inclined to take it for granted. Rather than thinking of our patterns of everyday thought and action as exhibiting rationality, we think of them as plain common sense-implicitly assuming that common sense must be a simple thing indeed. People may not think of themselves as exhibiting high levels of
* Please address correspondence concerning this paper to Nick Chater, Department of Psychology, University of Warwick, Coventry CV4 7AL, UK, or to Mike Oaksford, School of Psychology, Cardiff University, PO Box 901, Cardiff CF1 3YG, Wales, UK. We would like to thank Jose Luis Bermudez and Alan Millar for their valuable comments on an earlier version of this paper.
rationality-instead, we think of people as 'intelligent', performing 'appropriate' actions, being 'reasonable' or making 'sensible' decisions. But these labels refer to human abilities to speak, think, or act appropriately in complex, real-world situations-in short, they are labels for everyday rationality. Indeed, so much do we tend to take the rationality of common-sense thought for granted, that only recently has it been appreciated that commonsense reasoning is immensely difficult. This realization emerged from the project of attempting to formalize everyday knowledge and reasoning in artificial intelligence, where initially high hopes that common-sense knowledge could readily be formalized were replaced by increasing desperation at the impossible difficulty of the project. The nest of difficulties referred to under the 'frame problem' (see e.g. Pylyshyn 1987), and the problem that each aspect of knowledge appears inextricably entangled with the rest (e.g. Fodor 1983) so that common sense does not seem to break down into manageable 'packets' (whether schemas, scripts, or frames, Minsky 1977; Schank and Abelson 1977), and the deep problems of defeasible, or non-monotonic reasoning, brought the project of formalizing common sense to an effective standstill (e.g. McDermott 1987). Thus the cognitive processes underlying plain 'common sense' far outperform any artificial computational system we can devise. Hence, the sentiment with which we began: Most of us, most of the time, are remarkably rational. But in addition to this informal, everyday sense of rationality, concerning people's ability to think and act in the real world, the concept of rationality also has another root, linked not to human behaviour, but to mathematical theories of good reasoning. 
These theories represented one of the most important achievements of modern mathematics: logical calculi formalize aspects of deductive reasoning; axiomatic probability formalizes probabilistic reasoning; the variety of statistical principles, from sampling theory (Fisher 1922, 1925/1970), to Neyman-Pearson statistics (Neyman 1950), to Bayesian statistics (Keynes 1921; Lindley 1971), aim to formalize the process of interpreting data in terms of hypotheses; 'rational choice' theories aim to explain people's preferences and decisions, under uncertainty and in strategic interaction with other 'players' (Nash 1950; von Neumann and Morgenstern 1944). According to these calculi, rationality is defined, in the first instance, in terms of conformity with specific formal principles, rather than in terms of successful behaviour in the everyday world. How are the two sides of rationality related? How are the general principles of formal rationality related to specific examples of rational thought and action described by everyday rationality? This question, in various guises, has been widely discussed-in this article, we develop a viewpoint rooted in a style of explanation in the behavioural sciences, rational analysis (Anderson 1990). We suggest that rational analysis provides a good characterization of how the concept of rationality is used in explanations in
psychology, economics, and animal behaviour, and usefully explicates the relationship between everyday and formal rationality. The discussion falls into four main parts. First, we discuss formal and everyday rationality, and various possible relationships between them. Second, we describe the programme of rational analysis as a mode of explanation of mind and behaviour, which views everyday rationality as underpinned by formal rationality. Third, we consider a case study of rational analysis, concerning a celebrated laboratory reasoning task, Wason's (1966, 1968) selection task. Fourth, we defend the use of formal rationality in explaining mind and behaviour from some critical attacks (Evans and Over 1996a, 1997; Gigerenzer and Goldstein 1996; McDermott 1987).
RELATIONS BETWEEN FORMAL AND EVERYDAY RATIONALITY
Formal rationality concerns formal principles of good reasoning-the mathematical laws of logic, probability, decision, or game theory. These principles appear, at first sight, to be far removed from everyday rationality-from how people think and act in everyday life. Rarely in daily life do we praise or criticize each other for obeying or violating the laws of logic or probability. Moreover, when people are given reasoning problems that explicitly require use of these formal principles, their performance appears to be remarkably poor. People appear to persistently fall for logical blunders (Evans, Newstead, and Byrne 1993), probabilistic fallacies (e.g. Tversky and Kahneman 1974), and to make inconsistent decisions (Kahneman, Slovic and Tversky 1982; Tversky and Kahneman 1986). Indeed, the concepts of logic, probability, and the like do not appear to mesh naturally with our everyday reasoning strategies: these notions took centuries of intense intellectual effort to construct, and present a tough challenge for each generation of students. How can we relate the astonishing fluency and success of everyday reasoning and decision-making, exhibiting remarkable levels of everyday rationality, to our faltering and confused grasp of the principles of formal rationality? The problem is especially pressing in view of the fact that psychologists model almost all human cognition as involving inference. Thus, in deciding to cross the road, in parsing a sentence, or in catching a ball, the complex information-processing involved is standardly modelled as involving complex inferential processes concerning relevant knowledge about the movements of cars and cyclists, the lexical and grammatical structure of the language, or the trajectory of the ball and the forces generated by, and inertia tensors of, the motor system. Indeed, the view that cognition is, across the board, to be viewed as a matter of inference over representations of knowledge, is close to
a fundamental assumption of cognitive science. And more specifically, the kinds of reasoning processes that are typically invoked involve precisely the formal models of reasoning (probability, decision theory, and so on) that we have discussed. Hence, almost every impressively fluent and successful aspect of human cognition is typically viewed by psychologists as involving reasoning processes-which suggests that the cognitive system must have remarkable facility at such reasoning. But this contrasts bizarrely with results in direct experimental tests of human formal reasoning-which appear to reveal that people have only the most blundering ability in formal reasoning. So we return to the question: how can some reconciliation be found between the effectiveness of everyday reasoning exhibited across cognitive processes and the ineffectiveness of performance on experimental reasoning tasks? We sketch three important possibilities, which have been influential in the literature in philosophy, psychology, and the behavioural sciences.

Everyday Rationality is Primary
This viewpoint takes everyday rationality as fundamental, and views formal theories as flawed in so far as they fail to match up with human everyday reasoning intuitions. This standpoint appears to gain credence from historical considerations-formal rational theories such as probability and logic emerged as attempts to systematize human rational intuitions, rooted in everyday contexts. But the resulting theories appear to go beyond, and even clash with, human rational intuitions-at least if empirical data which appear to reveal apparent blunders in human reasoning are taken at face value. Where clashes occur, the advocates of the primacy of everyday rationality argue that the formal theories should be rejected as inadequate systematizations of human rational intuitions, rather than condemning the intuitions under study as incoherent. A certain measure of tension may be granted between the goal of constructing a satisfyingly concise normalization of intuitions, and the goal of capturing every last intuition successfully, just as linguists allow complex centre-embedded constructions to be grammatical (e.g. 'the fish the man the dog bit ate swam'), even though most people reject them as ill-formed gibberish. But the dissonance between formal rationality and everyday reasoning appears more profound than this. As we have argued, fluent and effective reasoning in everyday situations runs alongside halting and flawed performance on the most elementary formal reasoning problems. The primacy of everyday rationality is implicit in an important challenge to decision theory by the mathematician Allais (1953; see also Ellsberg 1961, and May 1954, for a similar challenge to decision theory). One version of
Rational Analysis of Human Cognition
the paradox is as follows. Consider the following pair of lotteries, each involving 100 tickets. Which would you prefer to play?
A. 10 tickets worth $1,000,000; 90 tickets worth $0
B. 1 ticket worth $5,000,000; 8 tickets worth $1,000,000; 91 tickets worth $0
Now consider which you would prefer to play of lotteries C and D:
C. 100 tickets worth $1,000,000
D. 1 ticket worth $5,000,000; 98 tickets worth $1,000,000; 1 ticket worth $0
Most people prefer lottery B to lottery A: the slight reduction in the probability of becoming a millionaire is offset by the possibility of the really large prize. But most people also prefer lottery C to lottery D: we don't think it is worth losing what would otherwise be a certain $1,000,000, just for the possibility of winning $5,000,000. This combination of responses, for all its intuitive appeal, is inconsistent with decision theory, which demands that people should choose whichever alternative has the maximum expected utility. Denote the utility associated with a sum of $X by U($X). Then the preference for lottery B over A means that:

10/100 · U($1,000,000) + 90/100 · U($0) < 1/100 · U($5,000,000) + 8/100 · U($1,000,000) + 91/100 · U($0)   (1)

and, subtracting 90/100 · U($0) from each side:

10/100 · U($1,000,000) < 1/100 · U($5,000,000) + 8/100 · U($1,000,000) + 1/100 · U($0)   (2)

But the preference for lottery C over D means that:

100/100 · U($1,000,000) > 1/100 · U($5,000,000) + 98/100 · U($1,000,000) + 1/100 · U($0)   (3)

and, subtracting 90/100 · U($1,000,000) from each side:

10/100 · U($1,000,000) > 1/100 · U($5,000,000) + 8/100 · U($1,000,000) + 1/100 · U($0)   (4)
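The inconsistency can be verified mechanically. In the following sketch (the sampled utility values are arbitrary illustrations, not part of the original argument), the difference in expected utility between B and A turns out to be identical to that between D and C for every assignment of utilities to the three prizes, so no expected-utility maximizer can prefer B to A while also preferring C to D:

```python
import random

def eu(lottery, u):
    """Expected utility of a 100-ticket lottery, given utilities u for each prize."""
    return sum(n / 100 * u[prize] for n, prize in lottery)

# the four lotteries, as (number of tickets, prize) pairs
A = [(10, 1_000_000), (90, 0)]
B = [(1, 5_000_000), (8, 1_000_000), (91, 0)]
C = [(100, 1_000_000)]
D = [(1, 5_000_000), (98, 1_000_000), (1, 0)]

for _ in range(1_000):
    # any increasing assignment of utilities to the three prizes
    u0, u1, u5 = sorted(random.random() for _ in range(3))
    u = {0: u0, 1_000_000: u1, 5_000_000: u5}
    # EU(B) - EU(A) always equals EU(D) - EU(C): an agent who prefers B to A
    # must, on pain of violating decision theory, also prefer D to C
    assert abs((eu(B, u) - eu(A, u)) - (eu(D, u) - eu(C, u))) < 1e-9
```

This holds because the two comparisons differ only by the common term 90/100 · U($1,000,000) − 90/100 · U($0), which cancels when the differences are taken.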
But (2) and (4) are in contradiction. Allais's paradox is very powerful: the appeal of the choices that decision theory rules out is considerable. Indeed, rather than condemning people's intuitions as incorrect, Allais argues that the paradox undermines the normative status of decision theory, which should instead be revised to fit with
Nick Chater and Mike Oaksford
our intuitions (see Chew 1983; Fishburn 1983; Kahneman and Tversky 1979; Loomes and Sugden 1982; Machina 1982). Another example arises in Cohen's (1981) discussion of the psychology of reasoning literature. Following similar arguments of Goodman (1954), Cohen argues that a normative or formal theory is 'acceptable ... only so far as it accords, at crucial points with the evidence of untutored intuition' (Cohen 1981: 317). That is, a formal theory of reasoning is acceptable only to the extent that it fits with everyday reasoning. Cohen uses the following example to demonstrate the primacy of everyday inference. According to standard propositional logic, the inference from (5) to (6) is valid:

If John's automobile is a Mini, John is poor, and if John's automobile is a Rolls, John is rich.
(5)
Either, if John's automobile is a Mini, John is rich, or if John's automobile is a Rolls, John is poor.
(6)
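The classical validity of this inference can be confirmed by brute force: enumerate all truth assignments to the four atomic propositions and check that no assignment makes (5) true and (6) false. A minimal sketch (the variable names are ours):

```python
from itertools import product

def implies(p, q):
    # the material conditional of classical propositional logic
    return (not p) or q

counterexamples = []
for mini, rolls, poor, rich in product([False, True], repeat=4):
    premise = implies(mini, poor) and implies(rolls, rich)     # (5)
    conclusion = implies(mini, rich) or implies(rolls, poor)   # (6)
    if premise and not conclusion:
        counterexamples.append((mini, rolls, poor, rich))

# no assignment satisfies (5) while falsifying (6): classically, (5) entails (6)
print(counterexamples)  # []
```

Falsifying (6) requires both 'Mini' and 'Rolls' to be true with 'rich' and 'poor' false, which immediately falsifies (5); the entailment thus turns on the material-conditional reading of 'if'.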
Clearly, however, this violates intuition. Most people would agree with (5) as at least highly plausible; but would reject (6) as implausible. A fortiori, they would not accept that (5) implies (6) (otherwise they would have to judge (6) to be at least as plausible as (5)). Consequently, Cohen argues that standard logic simply does not apply to the reasoning that is in evidence in people's intuitions about (5) and (6). Like Allais, Cohen argues that rather than condemn people's intuitions as irrational, this mismatch reveals the inadequacy of propositional logic: everyday intuitions have primacy over formal theories. But this viewpoint is not without problems. A key danger is of losing any normative force to the notion of rationality: if rationality is merely conformity to each other's predominant intuitions, then there seems no standpoint from which to assess which of our intuitions is rational. On this view, being rational is like a musician being in tune: all that matters is that we reason harmoniously with our fellows. But there is a strong intuition that rationality is not like this at all; that there is some absolute sense in which some reasoning or decision-making is good, and other reasoning and decision-making is bad. So, by rejecting a formal theory of rationality, there is the danger that the normative aspect of rationality is left unexplained. One way to reintroduce the normative element is to define a procedure that derives normative principles from human intuitions. Cohen appealed to the notion of reflective equilibrium (Goodman 1954; Rawls 1971), where inferential principles and actual inferential judgements are iteratively brought into a 'best fit' until further judgements do not lead to any further changes of principle (narrow reflective equilibrium).
Alternatively, background knowledge may also figure in the process, such that not only actual judgements but also the way they relate to other beliefs are taken into account (wide reflective equilibrium). These approaches have, however, been subject to much
criticism (e.g. Stich and Nisbett 1980; Thagard 1988). For example, there is no guarantee that an individual (or indeed a set of experts) in equilibrium will have accepted a set of rational principles, by any independent standard of rationality. Indeed, the equilibrium point could conceivably leave the individual content in the idea that logical fallacies are sound principles of reasoning. Thagard (1988) proposes that instead of reflective equilibrium, developing inferential principles involves progress towards an optimal system. This involves proposing principles based on practical judgements and background theories, and measuring these against criteria for optimality. The criteria Thagard specifies are (i) robustness: principles should be empirically adequate; (ii) accommodation: given relevant background knowledge, deviations from these principles can be explained; and (iii) efficacy: given relevant background knowledge, inferential goals are satisfied. Thagard's (1988) concerns were very general, in order to account for the development of scientific inference. From our current focus on the relationship between everyday and formal rationality, however, Thagard's proposals seem to fall down because the criteria he specifies still seem to leave open the possibility of inconsistency, i.e. it seems possible that a system could fulfil (i) to (iii) but contain mutually contradictory principles. The point of formalization is, of course, that it provides a way of ruling out this possibility, which is why a tight relationship between formality and normativity has been assumed since Aristotle. From the perspective of this paper, accounts like reflective equilibrium and Thagard's, which attempt to drive a wedge between formality and normativity, may not be required.
We argue that many of the mismatches observed between human inferential performance and formal theories are a product of using the wrong formal theory to guide expectations about how people should behave. An alternative normative grounding for rationality seems intuitively appealing: good everyday reasoning and decision-making should lead to successful action; for example, from an evolutionary perspective, we might define success as inclusive fitness (roughly, the expected number of offspring), and argue that behaviour is rational to the degree that it tends to increase inclusive fitness. But now the notion of rationality appears to collapse into a more general notion of adaptiveness. There seems to be no particular difference in status between cognitive strategies which lead to successful behaviour, and digestive processes that lead to successful metabolic activity. Both increase the inclusive fitness of an individual; but intuitively we want to say that the first is concerned with rationality, while the second is not. More generally, defining rationality in terms of outcomes runs the risk of blurring what appears to be a crucial distinction: between minds, which may be more or less rational, and stomachs, which are not in the business of rationality at all.
Formal Rationality is Primary
Arguments for the primacy of formal rationality take a different starting point. This viewpoint is standard within mathematics, statistics, operations research, and the 'decision sciences' (e.g. Kleindorfer, Kunreuther and Schoemaker 1993). The idea is that everyday reasoning is fallible, and that it must be corrected by following the dictates of formal theories of rationality. In this light, for example, the Allais paradox may be viewed as revealing a flaw in human reasoning rather than exposing a problem for decision theory. The viability of this viewpoint depends, in part, on the scope of formal theories of rationality: are they really able to handle the richness of inferences that everyday reasoning actually involves? This issue arises particularly in the context of formal logic, because the principles of logic do not give a general model of how beliefs should be revised (particularly when there is some inconsistency in the knowledge base, which is, of course, the normal situation in cognition) (e.g. Harman 1986; McDermott 1987; Oaksford and Chater 1991). But it also arises more generally; for example, although inductive inference can, in many contexts, be usefully modelled in terms of probabilistic inference, there are no clear principles concerning how to set the prior probabilities from which inference begins; and the choice of prior probabilities will be crucially important given any finite set of data (though see e.g. Jaynes 1989; Jeffreys 1939; Paris 1992; Rissanen 1987, 1989 for discussion). We shall touch on these issues below; but for now let us leave aside the concern that formal principles of rationality are simply too limited to engage with the principles that underlie the full complexity of everyday reasoning.
A second issue for advocates of the primacy of formal rationality concerns the justification of formal calculi of reasoning: why should the principles of some calculus be viewed as principles of good reasoning, so that they may potentially override our intuitions about what is rational? Such justifications typically assume some general, and apparently incontrovertible, cognitive goal, or seemingly undeniable axioms about how thought or behaviour should proceed. They then use these apparently innocuous assumptions to argue that thought or decision-making must obey specific mathematical principles. Consider, for example, the 'Dutch book' argument for the rationality of the probability calculus as a theory of uncertain reasoning (de Finetti 1937; Ramsey 1926; Skyrms 1977). Suppose that we assume that people will accept a 'fair' bet: that is, a bet where the expected financial gain is 0, according to their assessment of the probabilities of the various outcomes. Thus, for example, if a person believes that there is a probability of 1/3 that it will rain tomorrow, then they will be happy to accept a bet according to which they win two dollars if it does rain tomorrow, but they lose one dollar
if it does not. Now, it can be shown that, if a person's assignment of probabilities to different possible outcomes violates the laws of probability theory in any way whatever, then it is possible to offer them a combination of different bets, such that they will happily accept each individual bet as fair, in the above sense, but where, whatever the outcome, they are certain to lose money. Such a combination of bets, where one side is certain to lose, is known as a Dutch book; and it seems incontrovertible that accepting a bet that you are certain to lose must violate rationality. Thus, if violating the laws of probability theory leads to accepting Dutch books, which seems clearly irrational, then obeying the laws of probability theory seems to be a condition of rationality. The Dutch book theorem might appear to have a fundamental weakness: that it requires that a person willingly accepts arbitrary fair bets. But, in reality of course, this might not be so: many people will, in such circumstances, be risk-averse, and choose not to accept such bets. But the same argument applies even if the person does not bet at all. Now the inconsistency concerns a hypothetical: the person believes that if the bet were accepted, it would be fair (so that a win, as well as a loss, is possible). But in reality, the bet is guaranteed to result in a loss; the person's belief that the bet is fair is guaranteed to be wrong. Thus, even if we never actually bet, but simply aim to avoid endorsing statements that are guaranteed to be false, we should follow the laws of probability. We have considered the Dutch book justification of probability theory in some detail to make it clear that justifications of formal theories of rationality can have considerable force.¹
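A small worked example makes the argument concrete. The probabilities below are ours, chosen to violate additivity: an agent who believes P(rain) = 0.3 and P(no rain) = 0.6 regards selling a $1-payout bet on each event at those prices as fair by their own lights, yet the two sales together guarantee a loss:

```python
# The agent's incoherent degrees of belief: they sum to 0.9, not 1.
p_rain, p_no_rain = 0.30, 0.60

# By the agent's own lights, selling a bet that pays $1 if event E occurs,
# at a price of P(E) dollars, has expected gain zero, so both sales below
# count as 'fair' bets. The opponent buys both.
stake_received = p_rain + p_no_rain   # $0.90 collected up front

for it_rains in (True, False):
    payout_owed = 1.0   # exactly one of the two bets pays out, whatever the weather
    agent_net = stake_received - payout_owed
    print(f"rains={it_rains}: agent's net = {agent_net:+.2f}")
# the agent's net is -0.10 in both cases: a Dutch book, i.e. a certain loss
```

Had the agent's two probabilities instead summed to more than 1, the opponent would simply take the other side of each bet; only additive degrees of belief block a sure loss in every direction.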
Rather than attempting to satisfy simultaneously, as well as possible, a myriad of uncertain intuitions about good and bad reasoning, formal theories of reasoning can be viewed, instead, as founded on simple and intuitively clear-cut principles, such as that accepting bets that you are certain to lose is irrational. Similar justifications can be given for the rationality of the axioms of utility theory and decision theory (Cox 1961; Savage 1954; von Neumann and Morgenstern 1944). Moreover, the same general approach can be used as a justification for logic, if avoiding inconsistency is taken as axiomatic. Thus, there may be good reasons for accepting formal theories of rationality, even if, much of the time, human intuitions and behaviour strongly violate their recommendations (see Dawes 1988, for an exposition of this viewpoint from within psychology).

¹ There are also a range of other justifications of the laws of probability theory as a calculus of uncertain inference, based on preferences (Savage 1954), scoring rules (Lindley 1982), and derivation from minimal axioms (Cox 1961; Good 1950; Lucas 1970). Although each argument can be challenged individually, the fact that so many different lines of argument converge on the very same laws of probability has been taken as powerful evidence for the view that degrees of belief can be interpreted as probabilities (see e.g. Howson and Urbach 1989; and Earman 1992, for discussion).
If formal rationality is primary, what are we to make of the fact that, in explicit tests at least, people seem to be such poor probabilists and logicians? One line would be to accept that human reasoning is badly flawed. Thus, the heuristics and biases programme (e.g. Kahneman, Slovic and Tversky 1982; Kahneman and Tversky 1979; Thaler 1987), which charted systematic errors in human probabilistic reasoning and decision-making under uncertainty, can be viewed as exemplifying this position (see Gigerenzer and Goldstein 1996), as can Evans's (1982, 1989) heuristic approach to reasoning. Another line follows the spirit of Chomsky's (1965) distinction between linguistic competence and performance: the idea is that people's reasoning competence accords with formal principles, but in practice, performance limitations (e.g. limitations of time or memory) lead to persistently imperfect performance when people are given a reasoning task. Reliance on a competence-performance distinction, whether implicit or explicit, has been very influential in the psychology of reasoning: for example, mental logic (Braine 1978; Rips 1994) and mental models (Johnson-Laird 1983; Johnson-Laird and Byrne 1991) theories of human reasoning assume that classical logic provides the appropriate competence theory for deductive reasoning; and flaws in actual reasoning behaviour are explained in terms of 'performance' factors. Mental logic assumes that human reasoning algorithms correspond to proof-theoretic operations (specifically, in the framework of natural deduction, e.g. Rips 1994). This viewpoint is also embodied in the vast programme of research in artificial intelligence, especially in the 1970s and 1980s, which attempted to axiomatize aspects of human knowledge, and to view reasoning as logical inference (e.g. McCarthy 1980; McDermott 1982; McDermott and Doyle 1980; Reiter 1980, 1985).
Moreover, in the philosophy of cognitive science, it has been controversially suggested that this viewpoint is basic to the computational approach to mind: the fundamental claim of cognitive science, according to this viewpoint, is that 'cognition is proof theory' (Fodor and Pylyshyn 1988; see Chater and Oaksford 1990, for a critique). The mental models theory of reasoning concurs that logical inference provides the computational-level theory for reasoning, but instead of standard proof-theoretic rules, this view uses a 'semantic' method of proof. Such methods involve a search for models (in the logical sense): a semantic proof that A does not imply B might involve finding a model in which A holds but B does not. Mental models theory uses a similar idea, although the notion of model in play is rather different from the logical notion of a model.²

² E.g. mental models correspond to mental representations of states of affairs, rather than states of affairs themselves; and these mental representations have a specific syntax, and presumably a specific semantics. The precise semantic properties of mental models representations have not been given, and indeed, it is not clear how this could be done. Instead, the semantics of mental models is left, rather uncomfortably, up to the theorist's intuitions.
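The search-for-countermodels procedure can be sketched in a few lines. This is a toy propositional version of the 'semantic' validity check, not the mental models theory itself: an inference is declared valid exactly when the attempt to find a model making the premises true and the conclusion false fails.

```python
from itertools import product

def valid(premises, conclusion, n_atoms):
    """Declare an inference valid iff no countermodel is found:
    'negation as failure' over an exhaustive search of models."""
    for model in product([False, True], repeat=n_atoms):
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # countermodel found: premises hold, conclusion fails
    return True           # the search for a counter-example has failed

# modus ponens over atoms (p, q): from 'if p then q' and p, infer q
print(valid([lambda m: (not m[0]) or m[1], lambda m: m[0]],
            lambda m: m[1], n_atoms=2))   # True
# affirming the consequent: from 'if p then q' and q, infer p
print(valid([lambda m: (not m[0]) or m[1], lambda m: m[1]],
            lambda m: m[0], n_atoms=2))   # False
```

Because the search here is exhaustive, declaring validity on failure is sound; the interest of the mental models proposal lies in what happens when, as in a resource-limited cognitive system, the search is not exhaustive.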
How can this approach show that A does imply B? The mental models account assumes that the cognitive system attempts to construct a model in which A is true and B is false; if this attempt fails, then it is assumed that no counter-example exists, and that the inference is valid (this is similar to 'negation as failure' in logic programming (Clark 1978)). Mental logic and mental models assume that formal principles of rationality (specifically, classical logic) at least partly define the standards of good reasoning. They explain the non-logical nature of people's actual reasoning behaviour in terms of performance factors, such as memory and processing limitations. Nonetheless, despite its popularity, the view that formal rationality has priority in defining what good reasoning is, and that actual reasoning is systematically flawed with respect to this formal standard, suffers from a fundamental difficulty. If formal rationality is the key to everyday rationality, and if people are manifestly poor at following the principles of formal rationality (whatever their 'competence' with respect to these rules), even in simplified reasoning tasks, then the spectacular success of everyday reasoning in the face of an immensely complex world seems entirely baffling.

Everyday Rationality is Based on Formal Rationality: An Empirical Approach
We seem to be at an impasse. The success of everyday rationality in guiding our thoughts and actions must somehow be explained; and it seems that there are no obvious alternative explanations, aside from arguing that everyday rationality is somehow based on formal reasoning principles, for which good justifications can be given. But the experimental evidence appears to show that people do not follow the principles of formal rationality. There is, however, a way out of this impasse. Essentially, it is to reject the idea that rationality is a monolithic notion that can be defined a priori, and compared with human performance. Instead, we treat the problem of explaining everyday rationality as an empirical problem of explaining why people's cognitive processes are successful in achieving their goals, given the constraints imposed by their environment. Formal rational theories are used in the development of these empirical explanations for the success of cognitive processes; but which formal principles are appropriate, and how they should be applied, is not decided a priori, but in the light of the empirical usefulness of the explanation of the adaptive success of the cognitive process under consideration. According to this viewpoint, the apparent mismatch between normative theories and reasoning behaviour suggests that the wrong normative theories may have been chosen; or the normative theories may have been misapplied. Instead, the empirical approach to the grounding of rationality aims to 'do the best' for human everyday reasoning strategies, by searching
for a rational characterization of how people actually reason. There is an analogy here with rationality assumptions in language interpretation (Davidson 1984; Quine 1960). We aim to interpret people's language so that it makes sense; similarly, the empirical approach to rationality aims to interpret people's reasoning behaviour so that their reasoning makes sense. Crucially, then, the formal standards of rationality appropriate for explaining some particular cognitive process or aspect of behaviour are not prior to, but are rather developed as part of, the explanation of empirical data. Of course, this is not to deny that, in some sense, formal rationality may be prior to, and separate from, empirical data. The development of formal principles of logic, probability theory, decision theory, and the like may proceed independently of attempting to explain people's reasoning behaviour. But which element of this portfolio of rational principles should be used to define a normative standard for particular cognitive processes or tasks, and how the relevant principles should be applied, is constrained by the empirical human reasoning data to be explained. It might seem that this approach is flawed from the outset. Surely, any behaviour can be viewed as rational from some point of view. That is, by cooking up a suitably bizarre set of assumptions about the problem that a person thinks they are solving, surely their rationality can always be respected; and this suggests the complete vacuity of the approach. But this objection ignores the fact that the goal of empirical rational explanation is to provide an empirical account of data on human reasoning. Hence, such explanations must not be merely possible, but also simple, consistent with other knowledge, independently plausible, and so on.
In short, such explanations are to be judged in the light of the normal canons of scientific reasoning (Howson and Urbach 1989).³ Thus, rational explanations of cognition and behaviour can be treated as on a par with other scientific explanations of empirical phenomena. This empirical view of the explanation of rationality is attractive, to the extent that it builds in an explanation of the success of everyday rationality. It does this by attempting to recruit formal rational principles to explain why cognitive processes are successful. But how can this empirical approach to rational explanation be conducted in practice? And can plausible rational explanations of human behaviour be found? The next two sections of the paper answer these questions. First, we outline a methodology for the rational explanation of empirical data: rational analysis. We also illustrate a range of ways in which this approach is used, in psychology, and the social

³ Note also that for all reasonably rich scientific theories, any empirical data can be accommodated, by suitable changes in auxiliary assumptions (Quine 1953). Thus rational explanations are no different in this regard from, e.g., explanations in terms of the principles of Newtonian mechanics (Putnam 1974).
and biological sciences. We then use rational analysis to re-evaluate the psychological data which have appeared to show human reasoning performance to be hopelessly flawed, and argue that, when appropriate rational theories are applied, reasoning performance may, on the contrary, be rational.
THE PROGRAMME OF RATIONAL ANALYSIS
The project of providing a rational analysis for some aspect of thought or behaviour has been described by the cognitive psychologist John Anderson (e.g. Anderson 1990, 1991a). This methodology provides a framework for explaining the link between principles of formal rationality and the practical success of everyday rationality not just in psychology, but throughout the study of behaviour. This approach involves six steps:

1. Specify precisely the goals of the cognitive system.
2. Develop a formal model of the environment to which the system is adapted.
3. Make minimal assumptions about computational limitations.
4. Derive the optimal behaviour function given (1)-(3) above. (This requires formal analysis using rational norms, such as probability theory and decision theory.)
5. Examine the empirical evidence to see whether the predictions of the behaviour function are confirmed.
6. Repeat, iteratively refining the theory.
According to this viewpoint, formal rational principles relate to explaining everyday rationality, because they specify the optimal way in which the goals of the cognitive system can be attained in a particular environment, subject to 'minimal' computational limitations. The assumption is that the cognitive system exhibits everyday rationality, i.e. successful thought and action in the everyday world, to the extent that it approximates the optimal solution specified by rational analysis. The framework of rational analysis aptly fits the methodology in many areas of economics and animal behaviour, where the behaviour of people or animals is viewed as optimizing some goal, such as money, utility, inclusive fitness, food intake, or the like. But Anderson (1990, 1991a) was concerned to extend this approach not just to the behaviour of whole agents, but to the structure and performance of the particular cognitive processes of which agents are composed. Anderson's programme has led to a flurry of research in cognitive psychology (see Chater and Oaksford 1999a; Oaksford and Chater 1998a, for overviews of recent research), from areas as diverse as
categorization (Anderson 1991b; Anderson and Matessa 1998; Lamberts and Chong 1998), memory (Anderson and Milson 1989; Anderson and Schooler 1991; Schooler 1998), reasoning (Oaksford and Chater 1994, 1995a, 1996, 1998b), searching computer menus (Young 1998), and natural language parsing (Chater, Crocker, and Pickering 1998). This research has shown that a great many empirical generalizations about cognition can be viewed as arising from the rational adaptation of the cognitive system to the problems and constraints that it faces. We shall argue below that the cognitive processes involved in reasoning can also be explained in this way. The three inputs to the calculations using formal rational principles, namely goals, environment, and computational constraints, each raise important issues regarding the connection between formal rational principles and everyday rationality. We discuss these in turn, and in doing so illustrate rational analysis in action in psychology, animal behaviour, and economics.

The Importance of Goals
Everyday thought and action is focused on achieving goals relevant to the agent. Formal principles of rationality can help specify how these goals are achieved, but not, of course, what those goals are. The simplest cases are economic in spirit. For example, consider a consumer wondering which washing machine to buy. Goals are coded in terms of the subjective 'utilities' associated with objects or events for this particular consumer. Each washing machine is associated with some utility (high utilities for the effective, attractive, or low-energy washing machines, for example); and money is also associated with utility. Simple decision theory will specify which choice of machine maximizes subjective utility. Thus goals enter very directly; people with different goals (here, different utilities) will be assigned different 'rational' choices. Suppose instead that the consumer is wondering whether to take out a service agreement on the washing machine. Now the negative utility associated with the cost of the agreement must be balanced against the positive utility of saving possible repair costs. But what are the possible repairs; how likely, and how expensive, is each type? Decision theory again recommends a choice, given utilities associated with each outcome, and subjective probabilities concerning the likelihood of each outcome. But not all goals may have the form of subjective utilities. In evolutionary contexts, the goal of inclusive fitness might be more appropriate (Dawkins 1977); in the context of foraging behaviour in animals, amount of food intake or nutrition gained might be the right goal (Stephens and Krebs 1986). Moreover, in some cognitive contexts, the goal of thought or action may be disinterested curiosity, rather than the attempt to achieve some particular outcome. Thus, from exploratory behaviour in children and animals
to the pursuit of basic science, a vast range of human activity appears to be concerned with finding out information, rather than achieving particular goals. Of course, having this information may ultimately prove important for achieving goals; and this virtue may at some level explain the origin of the disinterested search for knowledge (just as the prospect of unexpected applications may partially explain the willingness of the state to fund fundamental research). Nonetheless, disinterested inquiry is conducted without any particular goal in mind. In such contexts, gaining, storing, or retrieving information, rather than maximizing utility, may be the appropriate specification of cognitive goals. If this is the goal, then information theory and probability theory may be the appropriate formal normative tools, rather than decision theory. Note that rational analysis is at variance with Evans and Over's distinction between two forms of rationality, mentioned above. They argue that 'people are largely rational in the sense of achieving their goals (rationality₁) but have only a limited ability to reason or act for good reasons sanctioned by a normative theory (rationality₂)' (Evans and Over 1997: 1). But the approach of rational analysis attempts to explain why people exhibit the everyday rationality involved in achieving their goals by assuming that their actions approximate what would be sanctioned by a formal normative theory. Thus, formal rationality helps explain everyday rationality, rather than being completely separate from it. To sum up, everyday rationality is concerned with goals (even if the goal is just to 'find things out'); knowing which formal theory of rationality to apply, and applying formal theories to explaining specific aspects of everyday cognition, requires an account of the nature of these goals.

The Role of the Environment
Everyday rationality is concerned with achieving particular goals, in a particular environment. Everyday rationality requires thought and action to be adapted (whether through genes or through learning) to the constraints of this environment. The success of everyday rationality is, crucially, success relative to a specific environment-to understand that success requires modelling the structure of that environment. This requires using principles of formal rationality to specify the optimal way in which the agent's goals can be achieved in that environment (Anderson's Step 4) and showing that the cognitive system approximates this optimal solution. In psychology, this strategy is familiar from perception, where a key part of understanding the computational problem solved by the visual system involves describing the structure of the visual environment (Marr 1982). Only then can optimal models for visual processing of that environment be defined. Indeed, Marr (1982) explicitly allies this level of explanation with
Nick Chater and Mike Oaksford
Gibson's (1979) 'ecological' approach to perception, where the primary focus is on environmental structure. Similarly, in zoology, environmental idealizations of resource depletion and replenishment of food stocks, patch distribution, and time of day are crucial to determining optimal foraging strategies (Gallistel 1990; McFarland and Houston 1981; Stephens and Krebs 1986). Equally, in economics, idealizations of the 'environment' are crucial to determining rational economic behaviour (McCloskey 1985). In microeconomics, modelling the environment (e.g. game-theoretically) involves capturing the relation between each actor and the environment of other actors. In macroeconomics, explanations using rational expectations theory (Muth 1961) begin from a formal model of the environment, as a set of equations governing macroeconomic variables. This aspect of rational analysis contrasts with the view that the concerns of formal rationality are inherently disconnected from environmental constraints. For example, Gigerenzer and Goldstein (1996) propose that 'the minds of living systems should be understood relative to the environment in which they evolved rather than to the tenets of classical [i.e. formal] rationality' (p. 651, emphasis added). Instead, rational analysis aims to explain why agents succeed in their environment by understanding the structure of that environment, and using formal principles of rationality to understand what thought or action will succeed in that environment.

Computational Limitations
In rational analysis, deriving the optimal behaviour function (Anderson's Step 4) is frequently very complex. Models based on optimizing, whether in psychology, animal behaviour, or economics, need not, and typically do not, assume that agents are able to find the perfectly optimal solutions to the problems that they face. Quite often, perfect optimization is impossible even in principle, because the calculations involved in finding a perfect optimum are frequently computationally intractable (Simon 1955, 1956), and, moreover, much crucial information is typically not available. Indeed, formal rational theories in which the optimization calculations are made, including probability theory, decision theory, and logic, are typically computationally intractable for complex problems (Cherniak 1986; Garey and Johnson 1979; Good 1971; Paris 1992; Reiner 1995). Intractability results imply that no computer algorithm could perform the relevant calculations given the severe time and memory limitations of a 'fast and frugal' cognitive system. The agent must still act even in the absence of the ability to derive the optimal solution (Gigerenzer and Goldstein 1996; Simon 1956). Thus it might appear that there is an immediate contradiction between the limitations of the cognitive system and the intractability of rational explanations.
Rational Analysis of Human Cognition
There is no contradiction, however, because the optimal behaviour function is an explanatory tool, not part of an agent's cognitive equipment. Using an analogy from Marr (1982), the theory of aerodynamics is a crucial component of explaining why birds can fly. But clearly birds know nothing about aerodynamics, and the computational intractability of aerodynamic calculations does not in any way prevent birds from flying. Similarly, people do not need to calculate their optimal behaviour functions in order to behave adaptively. They simply have to use successful algorithms; they do not have to be able to make the calculations that would show that these algorithms are successful. Indeed, it may be that many of the algorithms that the cognitive system uses may be very crude 'fast and frugal' heuristics (Gigerenzer and Goldstein 1996) which generally approximate the optimal solution in the environments that an agent normally encounters. In this context, the optimal solutions will provide a great deal of insight into why the agent behaves as it does. However, an account of the algorithms that the agent uses will be required to provide a full explanation of their behaviour (e.g. Anderson 1993; Oaksford and Chater 1995b). This viewpoint is standard in rational explanations across a broad range of disciplines. Economists do not assume that people make complex game-theoretic or macroeconomic calculations (Harsanyi and Selten 1988); zoologists do not assume that animals calculate how to forage optimally (e.g. McFarland and Houston 1981); and, in psychology, rational analyses of, for example, memory, do not assume that the cognitive system calculates the optimal forgetting function with respect to the costs of retrieval and storage (Anderson and Schooler 1991). Such behaviour may be built in by evolution or be acquired via a long process of learning-but it need not require on-line computation of the optimal solution.
In some contexts, however, some on-line computations may be required. Specifically, if behaviour is highly flexible with respect to environmental variation, then calculation is required to determine the correct behaviour, and this calculation may be intractable. Thus the two leading theories of perceptual organization assume that the cognitive system seeks to optimize on-line either the simplicity (e.g. Leeuwenberg and Boselie 1988) or likelihood (Helmholtz 1910/1962; see Pomerantz and Kubovy 1987) of the organization of the stimulus array. These calculations are recognized to be computationally intractable (see Chater 1996). This fact does not invalidate these theories, but it does entail that they can only be approximated in terms of cognitive algorithms. Within the literature on perceptual organization, there is considerable debate concerning the nature of such approximations, and which perceptual phenomena can be explained in terms of optimization, and which result from the particular approximations that the perceptual system adopts (Helm and Leeuwenberg 1996).
It is important to note also that, even where a general cognitive goal is intractable, a more specific cognitive goal relevant to achieving the general goal may be tractable. For example, the general goal of moving a piece in chess is to maximize the chance of winning. However, this optimization problem is known to be completely intractable because the search space is so large. But optimizing local goals, such as controlling the middle of the board, weakening the opponent's king, and so on, may be tractable. Indeed, most examples of optimality-based explanations, whether in psychology, animal behaviour, or economics, are defined over a local goal, which is assumed to be relevant to some more global aims of the agent. For example, evolutionary theory suggests that animal behaviour should be adapted so as to increase an animal's inclusive fitness, but specific explanations of animals' foraging behaviour assume more local goals. Thus, an animal may be assumed to forage so as to maximize food intake, on the assumption that this local goal is generally relevant to the global goal of maximising inclusive fitness. Similarly, the explanations concerning cognitive processes discussed in rational analysis in cognitive psychology concern local cognitive goals such as maximizing the amount of useful information remembered, maximizing predictive accuracy, or acting so as to gain as much information as possible. All of these local goals are assumed to be relevant to more general goals, such as maximizing expected utility (from an economic perspective) or maximizing inclusive fitness (from a biological perspective). At any level, it is possible that optimization is intractable; but it is also possible that by focusing on more limited goals, evolution or learning may have provided the cognitive system with mechanisms that can optimize or nearly optimize some more local, but relevant, quantity.
The observation that the local goals may be optimized as surrogates for the larger aims of the cognitive system raises another important question about providing rational models of cognition. The fact that a model involves optimizing something does not mean that the model is a rational model. Optimality is not the same as rationality. It is crucial that the local goal that is optimized must be relevant to some larger goal of the agent. Thus, it seems reasonable that animals may attempt to optimize the amount of food they obtain, or that the categories used by the cognitive system are optimized to lead to the best predictions. This is because, for example, optimizing the amount of food obtained is likely to enhance inclusive fitness, in a way that, for example, maximizing the amount of energy consumed in the search process would not. Determining whether some behaviour is rational or not therefore depends on more than just being able to provide an account in terms of optimization. Therefore rationality requires not just optimizing something but optimizing something reasonable. As a definition of rationality, this is clearly circular. But by viewing rationality in terms of optimization, general conceptions of what are reasonable cognitive goals can be turned into specific and detailed models of cognition. Thus, the
programme of rational analysis, while not answering the ultimate question of what rationality is, nonetheless provides the basis for a concrete and potentially fruitful line of empirical research. This flexibility of what may be viewed as rational, in building a rational model, may appear to raise a fundamental problem for the entire rational analysis programme. It seems that the notion of rationality may be so flexible that whatever people do, it is possible that it may seem rational under some description. So, for example, to pick up an example we have already mentioned, it may be that our stomachs are well adapted to digesting the food in our environmental niche. Indeed, they may even prove to be optimally efficient in this respect. However, we would not therefore describe the human stomach as rational, because stomachs presumably cannot usefully be viewed as information-processing devices, which approximate, to any degree, the dictates of normative theories of formal rationality. Stomachs may be well or poorly adapted to their function (digestion), but they have no beliefs, desires, or knowledge, and make no decisions or inferences. Thus, their behaviour cannot be given a rational analysis and hence they cannot be related to the optimal performance provided by theories of formal rationality. Hence the question of the stomach's rationality does not arise. In this section, we have seen that rational analysis provides a mode of explaining behaviour which clarifies the relationship between the stuff of everyday rationality-reasoning with particular goals, in a specific environment, with specific computational constraints-and apparently abstract principles of formal rationality in probability theory, decision theory, or logic. Formal rational principles spell out the optimal solution for the information-processing problem that the agent faces. The assumption is that a well-adapted agent will approximate this solution to some degree.
Having outlined the general rational analysis approach, and argued that the approach is prevalent in the social and biological sciences, we now consider how the programme of rational analysis provides a very different perspective on human reasoning than has been traditionally obtained from laboratory studies. Specifically, apparently non-deductive reasoning performance in laboratory reasoning tasks can be shown to make coherent sense if it is recognized that people may not be treating the reasoning tasks as deductive at all. A probabilistic rational analysis of these tasks provides a simple and powerful framework for explaining a wide variety of empirical data on human reasoning.
RE-EVALUATING EMPIRICAL DATA ON HUMAN REASONING
We began by discussing the controversy concerning the relationship between formal theories of rationality and the everyday notion of the rationality that underlies effective thought and action in the world. We have seen how
everyday rationality can be underpinned by principles of formal rationality in rational analysis. We now consider how rational analysis can be applied to explaining data on human reasoning gained from laboratory tasks. The rational analysis approach allows us to see laboratory performance, which has typically been viewed as systematically non-rational, as having a rational basis. This defuses a crucial tension at the heart of the psychology and philosophy of rationality-between the manifest success of cognition in dealing with the complexities of the everyday world, and the apparently stumbling and flawed performance on laboratory reasoning tasks. Everyday rationality is a matter of being adapted to the structure and goals in the real world. Thus, rational explanation, whether in animal behaviour, economics, or psychology, assumes that the agent is well adapted to its normal environment. However, almost all psychological data are gained in a very unnatural setting, where a person performs an artificial task in the laboratory. Any laboratory task will recruit some set of cognitive mechanisms that determine the participant's behaviour. But it is not obvious what problem these mechanisms are adapted to solving. This adaptive problem is not likely to be directly related to the problem given to the participant by the experimenter, precisely because adaptation is to the natural world, not to laboratory tasks. In particular, this means that participants may fail with respect to the task that the experimenter thinks they have set. But this may be because this task is unnatural with respect to the participant's normal environment. Consequently people may assimilate the task that they are given to a more natural task, recruiting adaptively appropriate mechanisms which solve this, more natural, task successfully. In the area of research known as the 'psychology of deductive reasoning' (e.g.
Evans, Newstead, and Byrne 1993; Johnson-Laird and Byrne 1991; Rips 1994), people are given problems that the experimenters conceive of as requiring logical inference. But they consistently respond in a non-logical way. Thus, human rationality appears to be called into question (Stein 1996; Stich 1985, 1990). But the perspective of rational analysis suggests an alternative view. We propose first that everyday rationality is founded on uncertain rather than certain reasoning. This suggests that probability provides a better starting point for a rational analysis of human reasoning than logic. Second, we argue that a probabilistic rational analysis of classic 'deductive' reasoning tasks provides an excellent empirical fit with observed performance. The upshot is that much of the experimental research in the 'psychology of deductive reasoning' does not engage people in deductive reasoning at all, but rather engages strategies suitable for probabilistic reasoning. Thus, the field of research appears to be crucially misnamed! But more importantly, probabilistic rational analysis helps resolve the tension between apparently poor laboratory reasoning performance, and the conspicuous success of
everyday rationality. Laboratory performance is rational after all, once the appropriate rational standard is adopted. Our discussion will focus on Wason's selection task (Wason 1966, 1968), the most intensively studied task in the psychology of reasoning, and perhaps the 'deductive' reasoning task that has raised the greatest concerns about human rationality (e.g. Cohen 1981; Stein 1996; Stich 1985, 1990; Sutherland 1992), although the approach we describe has been applied in other areas of reasoning, including other areas in the psychology of 'deductive' reasoning: reasoning with conditionals and syllogisms (e.g. Anderson 1995; Chater and Oaksford 1999c; Oaksford and Chater 1998b). In the selection task, people must assess whether some evidence is relevant to the truth or falsity of a conditional rule of the form if p then q, where by convention p stands for the antecedent clause of the conditional and q for the consequent clause. In the standard abstract version of the task, the rule concerns cards, which have a number on one side and a letter on the other. The rule is if there is a vowel on one side (p), then there is an even number on the other side (q). Four cards are placed before the subject, so that just one side is visible; the visible faces show an 'A' (p card), a 'K' (not-p card), a '2' (q card) and a '7' (not-q card). Subjects then select those cards they must turn over to determine whether the rule is true or false. Typical results were: p and q cards (46%); p card only (33%); p, q, and not-q cards (7%); p and not-q cards (4%) (Johnson-Laird and Wason 1970). The task subjects confront is analogous to a central problem of experimental science: the problem of which experiment to perform. The scientist has a hypothesis (or a set of hypotheses) which they must assess (for the subject, the hypothesis is the conditional rule); and must choose which experiment (card) will be likely to provide data (i.e.
what is on the reverse of the card) which bear on the truth of the hypothesis. In the light of the epistemological arguments we have already considered, it may seem unlikely that this kind of scientific reasoning will be deductive in character. Nonetheless, the psychology of reasoning has viewed the selection task as paradigmatically deductive (e.g. Evans 1982; Evans, Newstead, and Byrne 1993), although a number of authors have argued for a non-deductive conception of the task (Fischhoff and Beyth-Marom 1983; Kirby 1994; Klayman and Ha 1987; Rips 1990). The assumption that the selection task is deductive in character arises from the fact that psychologists of reasoning have tacitly accepted Popper's hypothetico-deductive philosophy of science. Popper (1959/1935) assumes that evidence can falsify but not confirm scientific theories. Falsification occurs when predictions that follow deductively from the theory do not accord with observation. This leads to a recommendation for the choice of experiments: only conduct experiments that have the potential to falsify the hypothesis under test.
Applying the falsificationist account to the selection task, the recommendation is that subjects should only turn cards that are potentially logically incompatible with the conditional rule. When viewed in these terms, the selection task has a deductive component, in that the subject must deduce which cards would be logically incompatible with the conditional rule. According to the rendition of the conditional as material implication (which is standard in the propositional and predicate calculi, see Haack 1978), the only observation that is incompatible with the conditional rule if p then q is a card with p on one side and not-q on the other. Hence the subject should select only cards that could potentially be such an instance. That is, they should turn the p card, since it might have a not-q on the back; and the not-q card, since it might have a p on the back. This pattern of selections is rarely observed in the experimental results outlined above. Subjects typically select cards that could confirm the rule, i.e. the p and q cards. However, according to falsification the choice of the q card is irrational, and is an example of so-called 'confirmation bias' (Evans and Lynch 1973; Wason and Johnson-Laird 1972). The rejection of confirmation as a rational strategy follows directly from the falsificationist perspective. We have argued that the usual standard of 'correctness' in the selection task follows from Popper's hypothetico-deductive view of science. Rejecting the falsificationist picture would eliminate the role of logic, and hence deduction, in the selection task. The hypothetico-deductive view faces considerable difficulties as a theory of scientific reasoning (Kuhn 1962; Lakatos 1970; Putnam 1974). This suggests that psychologists should explore alternative views of scientific inference that may provide different normative accounts of experiment choice, and hence might lead to a different 'correct' answer in the selection task.
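The falsificationist analysis of the four cards can be made mechanical. The following sketch (our own illustration; the string encoding of the card faces is an assumption for expository purposes) enumerates, for each visible face, the possible hidden faces, and asks whether any combination could yield a falsifying p-and-not-q instance:

```python
# Which cards could, on the falsificationist account, reveal a
# falsifying (p and not-q) instance of "if p then q"?
# Encoding is hypothetical: each card has one letter-side value
# ("p"/"not-p") and one number-side value ("q"/"not-q").

def could_falsify(visible):
    """Return True if some possible hidden face would make this card
    a p-and-not-q instance, i.e. a counter-example to the rule."""
    hidden_options = {"p": ["q", "not-q"], "not-p": ["q", "not-q"],
                      "q": ["p", "not-p"], "not-q": ["p", "not-p"]}
    for hidden in hidden_options[visible]:
        sides = {visible, hidden}
        if "p" in sides and "not-q" in sides:
            return True
    return False

for card in ["p", "not-p", "q", "not-q"]:
    print(card, could_falsify(card))
# Only the p and not-q cards can falsify the rule - the pattern
# falsificationism recommends, but one that subjects rarely produce.
```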
Perhaps the dictates of an alternative theory might more closely model human performance, and hence be consistent with the possibility of human rationality. Oaksford and Chater (1994) adopted this approach, adapting the Bayesian approach to philosophy of science (Earman 1992; Horwich 1982; Howson and Urbach 1989), rather than the hypothetico-deductive view, to provide a rational analysis of the selection task. They view the selection task in probabilistic terms, as a problem of Bayesian optimal data selection (Good 1966; Lindley 1956; MacKay 1992). Suppose that you are interested in the hypothesis that eating tripe makes people feel sick. Should known tripe-eaters or tripe-avoiders be asked whether they feel sick? Should people known to be, or not to be, sick be asked whether they have eaten tripe? This case is analogous to the selection task. Logically, you can write the hypothesis as a conditional sentence, if you eat tripe (p) then you feel sick (q). The groups of people that you may investigate then correspond to the various visible card options, p, not-p, q, and not-q. In practice, who is available
will influence decisions about which people you question. The selection task abstracts away from this factor by presenting one example of each potential source of data. In terms of our everyday example, it is like coming across four people, one known tripe-eater, one known not to have eaten tripe, one known to feel sick, and one known not to feel sick. The task is to decide whom to question about how they feel or what they have eaten. Oaksford and Chater (1994, 1996) suggest that hypothesis testers should choose experiments (select cards) to provide the greatest 'expected information gain' in deciding between two hypotheses: (i) that the task rule, if p then q, is true, i.e. ps are invariably associated with qs, and (ii) that the occurrence of ps and qs are independent. For each hypothesis, Oaksford and Chater (1994) define a probability model that derives from the prior probability of each hypothesis (which for most purposes they assume to be equally likely, i.e. both are 0.5), and the probabilities of p and of q in the task rule. They define information gain as the difference between the uncertainty before receiving some data and the uncertainty after receiving that data, where they measure uncertainty using Shannon-Wiener information. Thus Oaksford and Chater define the information gain of data D as:

Information before receiving D:   I(H) = -Σ_{i=1}^{n} P(H_i) log2 P(H_i)

Information after receiving D:    I(H|D) = -Σ_{i=1}^{n} P(H_i|D) log2 P(H_i|D)

Information gain:                 Ig = I(H) - I(H|D)
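As a toy numerical illustration of these definitions (the numbers are our own, chosen only for concreteness): with two hypotheses at equal priors, data that shifts belief to a 0.9/0.1 posterior yields a gain of about half a bit.

```python
import math

# Toy illustration of Shannon-Wiener information and information gain.
# The prior and posterior values are hypothetical, not from the paper.

def info(ps):
    """Shannon-Wiener information (entropy) in bits over a distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

prior = [0.5, 0.5]        # I(H)  = 1 bit: maximal uncertainty
posterior = [0.9, 0.1]    # I(H|D) ~ 0.469 bits after seeing data D
gain = info(prior) - info(posterior)
print(round(gain, 3))     # ~0.531 bits of information gained
```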
They calculate the P(H_i|D) terms using Bayes' theorem. Thus information gain is the difference between the information contained in the prior probability of a hypothesis (H_i) and the information contained in the posterior probability of that hypothesis given some data D. When choosing which experiment to conduct (that is, which card to turn), the subject does not know what that data will be (that is, what will be on the back of the card). So they cannot calculate actual information gain. However, subjects can compute expected information gain. Expected information gain is calculated with respect to all possible outcomes, e.g. for the p card, the possible outcomes with regard to what will be found on the back of the card are q and not-q; and the calculation also averages over both hypotheses (that the rule is true, or that p and q are independent). Oaksford and Chater (1994) calculated the expected information gain of each card assuming that the properties described in p and q are rare. This 'rarity assumption' is an appropriate default because in a typical everyday rule such as if it's a raven then it's black, only a small minority of things satisfy the antecedent (most things are not ravens) or the consequent (most things are not black). (Klayman and Ha (1987) make a similar assumption
in accounting for related data on Wason's (1960) 2-4-6 task.) With this 'rarity' assumption, the ordering in expected information gain is:

E(Ig(p)) > E(Ig(q)) > E(Ig(not-q)) > E(Ig(not-p)),
where E represents the expectation operator. This corresponds to the observed frequency of card selections in Wason's task: p > q > not-q > not-p, and thus explains the predominance of p and q card selections as a rational inductive strategy. This result might seem paradoxical: it might seem that the Bayesian analysis suggests that finding falsifying instances of the rule (which may occur by turning the not-q card to reveal a p) is not important. And this would seem to be bizarre, because from any reasonable point of view, falsifying instances should be especially significant (because they decisively answer the question of whether or not the rule is correct); and any method of testing a rule should put an emphasis on finding such instances if they exist. Fortunately, there is no puzzle here. The Bayesian analysis does rate falsifying instances as highly informative-indeed, as maximally informative, because uncertainty concerning whether the rule is true drops to zero as soon as a falsifier is discovered. But the expected amount of information obtained by turning the not-q card is, nonetheless, low, because, according to the rarity assumption, mentioned above, the probability of finding a falsifying instance on the back of a not-q card is low. To get an intuitive feel for how this works, consider the following scenario. Suppose that the hypothesis under test is 'if a saucepan falls from the kitchen shelf (p) it makes a clanging noise (q).' This rule, like the vast majority of everyday rules, conforms to the rarity assumption-saucepans fall quite rarely (most of the time no saucepan is falling); and clangs are heard quite rarely (most of the time no clang is audible). The four cards in the selection task can be seen as analogous to the following four scenarios. Suppose I am in the kitchen, and see the saucepan beginning to fall (p card); should I bother to take off my headphones and listen for a clang (i.e. should I turn the p card)?
Intuitively, it seems that I should, because, whether there is a clang or not, I will learn something useful concerning the rule (if there is no clang, the rule is falsified; if there is a clang, then my estimate of the probability that the rule is true increases). Suppose, on the other hand, I am next door and I hear a clang (q card); should I bother to come into the kitchen to see whether the saucepan has fallen (should I turn the q card)? Intuitively, this is also worth doing-if the saucepan has not fallen then I have learned nothing (something else must have caused the clang); but if the saucepan has fallen, then this strongly confirms the rule. This is the intuitive explanation for why the q card is worth turning, even though there is no possibility that turning this card can falsify the rule.
Now consider the analogue of the turning of the not-q card: I am next door and I hear no clang. This time should I bother to come into the kitchen to see whether the saucepan has fallen (should I turn the not-q card)? Intuitively, to bother to do so seems crazy-I'll be in and out of the kitchen all day if I adopt this strategy! And I will probably learn nothing whatever, as the saucepan will remain unmoved on the shelf. Of course, in the very unlikely event that I find that the saucepan has fallen (p), then I can falsify the rule-because if the rule were true I should have heard a clang (q) and I did not. But in everyday reasoning contexts, where the rarity assumption holds, the expected information gain for the analogue of turning the not-q card is typically very low-because the probability of obtaining falsification is so low. Crucially, intuitively (and in Oaksford and Chater's 1994 formal analysis) the expected informational value of turning the q card is greater than turning the not-q card, even though turning the q card cannot lead to falsification-I will be more inclined to bother to check whether the saucepan has fallen if I hear a clang than if I do not. To complete the example, the not-p card corresponds to the case in which I see that the saucepan is sitting safely on the shelf; should I bother to take off my headphones and listen for a clang? Clearly not, because the rule only makes a claim about what happens if the saucepan falls. Oaksford and Chater (1994) also show how their model generalizes to all the main patterns of results in the selection task (for discussions of this account see Almor and Sloman 1996; Evans and Over 1996b; Laming 1996; Klauer, in press; and for responses and developments see Oaksford and Chater 1996, 1998b, 1998c; Chater and Oaksford 1999c). Specifically, it accounts for the non-independence of card selections (Pollard 1985), the negations paradigm (e.g. Evans and Lynch 1973), the therapy experiments (e.g.
Wason 1969), the reduced array selection task (Johnson-Laird and Wason 1970), work on so-called fictional outcomes (Kirby 1994) and deontic versions of the selection task (e.g. Cheng and Holyoak 1985) including perspective and rule-type manipulations (e.g. Cosmides 1989; Gigerenzer and Hug 1992), the manipulation of probabilities and utilities in deontic tasks (Kirby 1994), and effects of relevance (Oaksford and Chater 1995a; Sperber, Cara, and Girotto 1995). We noted above that the philosophy of science that underlies the 'deductive' conception of the selection task can be questioned. The current consensus is that scientific theories do not deductively imply predictions, and hence that the general problem of choosing which experiment to perform (or analogously, which card to turn in the selection task) cannot be reconstructed deductively. Further, Oaksford and Chater's (1994) probabilistic account provides a better model of human performance on the selection task. According to this model, people do not use deduction when solving the selection task; rather, they use a probabilistic inferential strategy.
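The expected information gain calculation discussed above can be illustrated numerically. The following sketch is in the spirit of Oaksford and Chater's (1994) optimal data selection analysis, but the parameter values (a = P(p) = 0.1, b = P(q) = 0.2), the equal priors, and various simplifications are our own assumptions, not the paper's exact model. Under these rarity-respecting parameters, the sketch reproduces the ordering p > q > not-q > not-p:

```python
import math

# Illustrative sketch of expected information gain for the four cards,
# comparing a dependence hypothesis ("dep": the rule holds, P(q|p)=1)
# against an independence hypothesis ("ind"). Parameters are hypothetical.

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

a, b = 0.1, 0.2                       # P(p), P(q): both 'rare'
priors = {"dep": 0.5, "ind": 0.5}     # equal prior belief in each hypothesis

def p_hidden(visible, hidden, h):
    """P(hidden face | visible face, hypothesis h)."""
    if h == "ind":
        return {"q": b, "not-q": 1 - b, "p": a, "not-p": 1 - a}[hidden]
    # Dependence model: joint distribution with P(q|p) = 1.
    joint = {("p", "q"): a, ("p", "not-q"): 0.0,
             ("not-p", "q"): b - a, ("not-p", "not-q"): 1 - b}
    margins = {"p": a, "not-p": 1 - a, "q": b, "not-q": 1 - b}
    pair = (visible, hidden) if visible in ("p", "not-p") else (hidden, visible)
    return joint[pair] / margins[visible]

def expected_gain(visible):
    """E(Ig): average gain over possible hidden faces and both hypotheses."""
    prior_info = entropy(priors.values())
    outcomes = ["q", "not-q"] if visible in ("p", "not-p") else ["p", "not-p"]
    gain = 0.0
    for d in outcomes:
        p_d = sum(priors[h] * p_hidden(visible, d, h) for h in priors)
        if p_d == 0:
            continue
        post = [priors[h] * p_hidden(visible, d, h) / p_d for h in priors]
        gain += p_d * (prior_info - entropy(post))
    return gain

gains = {c: expected_gain(c) for c in ["p", "q", "not-q", "not-p"]}
print(gains)  # ordering under rarity: p > q > not-q > not-p
```

Note that the not-q card's low ranking arises exactly as the saucepan example suggests: a falsifier on its reverse would be maximally informative, but under rarity the chance of finding one is small, so the expectation is low.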
Having seen how rational analysis can be applied in a specific case, and how the approach may have radical implications for standard interpretations of laboratory data on human reasoning, we now defend the rational analysis approach against theorists who argue that formal rationality has no useful role in explaining everyday rationality.
COULD FORMAL AND EVERYDAY RATIONALITY BE UNRELATED?
The first part of this paper considered various possible relations between formal and everyday rationality. The second part developed a particular conception of this relationship, framed in terms of Anderson's methodology of rational analysis, and the third provided an illustration of the approach. This section considers recent viewpoints which suggest that the whole enterprise may have been misconceived from the beginning-because there is no useful relationship between formal and everyday rationality. We shall argue that formal rationality does indeed form an indispensable part of the explanation of everyday rationality, and that the nature of this explanation is best understood in terms of rational analysis. The view that formal and everyday rationality can be disconnected has been advanced by a number of theorists. In artificial intelligence, McDermott (1987) argues that the attempt to build knowledge representation systems based on logical principles persistently fails to capture human everyday reasoning, and (with some sense of despair!) recommends a 'procedural' approach-the researcher simply aims to specify algorithms that seem to work, without attempting to ground these in formal logic or probability. In robotics, there has been much interest in so-called behaviour-based robotics (Brooks 1991; McFarland and Bösser 1993), where perceptual and motor functions are linked directly together, using essentially heuristic methods, rather than attempting to use general principles of perceptual analysis and motor control (as exemplified in e.g. Marr 1982). As we noted above, in psychology, Evans and Over (1996a, 1997) distinguish between two notions of rationality: Rationality1: Thinking, speaking, reasoning, making a decision, or acting in a way that is generally reliable and efficient for achieving one's goals. Rationality2: Thinking, speaking, reasoning, making a decision, or acting when one has a reason for what one does sanctioned by a normative theory.
(Evans and Over 1997, 2)
They argue that 'people are largely rational in the sense of achieving their goals (rationality1) but have only a limited ability to reason or act for good reasons sanctioned by a normative theory (rationality2)' (Evans and Over
1997, 1). If this is right, then one's goals can be achieved without following a formal normative theory-i.e. without there being a justification for the actions, decisions, or thoughts which lead to success: rationality1 does not require rationality2. That is, Evans and Over are committed to the view that thoughts, actions, or decisions which cannot be normatively justified can, nonetheless, consistently lead to practical success.

A similar view is advocated by Gigerenzer and Goldstein (1996), who claim to provide an 'existence proof' for algorithms which work in the real world, but have no apparent justification in terms of formal theories of reasoning (such algorithms are therefore intended to be candidates for explaining part of rationality1, in Evans and Over's terms, even though they are held to be unrelated to rationality2). The domain they consider is one of cognitive estimation: deciding which is the larger of two cities, based on a list of features of each city. Their 'nonrational' algorithm, Take-the-Best, works in two steps. First, it uses a 'recognition principle': if one of the cities is not known, it is assumed to be the smaller. Second, the algorithm sequentially considers features of the cities, one by one, in decreasing order of 'diagnosticity' for size (the diagnosticity ordering is a prior calculation). So, for example, the feature 'is a national capital' may be most diagnostic of size-if one city has this property it is declared to be the larger. Hence this will be the first feature to be considered. If the cities 'tie' on this property (e.g. neither is a national capital), then another feature is examined (e.g. has the city been the site of an exposition), and so on, until the tie is broken. This algorithm is designed to be 'fast and frugal'-i.e. to consume little time or memory; but it has no obvious rational basis.
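The two steps of Take-the-Best can be sketched directly. The following is a minimal illustration only: the city data and cue names are invented for the example, not Gigerenzer and Goldstein's actual materials.

```python
def take_the_best(city_a, city_b, recognized, cues):
    """Decide which of two cities is larger, Take-the-Best style.

    `recognized` is the set of known cities; `cues` is a list of
    binary-feature functions ordered by decreasing diagnosticity.
    """
    # Step 1: recognition principle -- an unrecognized city is
    # assumed to be the smaller.
    if city_a in recognized and city_b not in recognized:
        return city_a
    if city_b in recognized and city_a not in recognized:
        return city_b
    # Step 2: walk the cues in decreasing order of diagnosticity;
    # the first cue that discriminates ('breaks the tie') decides.
    for cue in cues:
        a, b = cue(city_a), cue(city_b)
        if a and not b:
            return city_a
        if b and not a:
            return city_b
    return None  # no cue discriminates: guess


# Illustrative cue data (invented for this example).
facts = {
    'Berlin': {'capital': True, 'exposition': True},
    'Hamburg': {'capital': False, 'exposition': True},
}
cues = [
    lambda c: facts[c]['capital'],      # most diagnostic first
    lambda c: facts[c]['exposition'],
]
recognized = {'Berlin', 'Hamburg'}
print(take_the_best('Berlin', 'Hamburg', recognized, cues))  # Berlin
```

Note that the algorithm never integrates information across cues: a single discriminating cue settles the decision, which is what makes it 'frugal'.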
Nonetheless, in a competition with other algorithms, including multiple regression from statistics, Gigerenzer and Goldstein show that Take-the-Best performs as well as these apparently more rationally justified algorithms (and indeed, at levels that appear comparable with human performance). As well as arguing that Take-the-Best is an existence proof that algorithms can succeed in real environments without any basis in formal rational theories, Gigerenzer and Goldstein (1996) argue that, more generally, human reasoning works by fast and frugal algorithms which work in the real world, but have no justification in terms of probability, statistics, or other normative principles. But this viewpoint does not tackle the fundamental problem we outlined above for advocates of the primacy of everyday rationality. It does not answer the question: why do the cognitive processes underlying everyday rationality consistently work? If everyday rationality is somehow based on formal rationality, then this question can be answered, at least in general terms. The principles of formal rationality are provably principles of good inference and decision-making; and the cognitive system is rational in
everyday contexts to the degree that it approximates the dictates of these principles. But if everyday and formal rationality are assumed to be unrelated, then this explanation is not available. Unless some alternative explanation of the basis of everyday rationality can be provided, the success of the cognitive system is again left entirely unexplained.

There is, though, an interesting lesson to be learned from the success of 'fast and frugal' algorithms such as Take-the-Best, which do a good job in the real world without being directly based on formal rational principles. This is that explanation in terms of cognitive algorithms can run ahead of rational explanation-i.e. we can specify algorithms that do work, without knowing why they work. We shall see shortly that the projects of developing rational and algorithmic explanations of cognition quite frequently run at different speeds-each approach may run ahead of the other. But this does not undermine the importance of ultimately being able to provide both styles of explanation. In particular, it does not undermine the utility of accounts based on formal rationality in explaining the real-world everyday rationality of the cognitive system.

Consider, first, cases where rational explanations of behaviour have proceeded without considering how they might be approximated by cognitive algorithms. The vast bulk of 'rational choice' explanation, whether in social behaviour (Crawford, Smith, and Krebs 1987; Messick 1991), economics (e.g. Muth 1961; von Neumann and Morgenstern 1944), or animal behaviour (Maynard-Smith and Price 1973) has this character. The programme of rational analysis, outlined above, has the same character-indeed, one of Anderson's (1990) motivations for developing the rational analysis approach was precisely that it abstracts away from specifying underlying cognitive algorithms, which can often be underdetermined by empirical data (e.g. Anderson 1978; Pylyshyn 1984).
In all these explanations, formal rational principles specify what should occur, given a specific goal and environment, but the particular cognitive algorithms which underlie behaviour in these contexts may be entirely unknown. Gigerenzer and Goldstein (along with others who advocate separating formal rational explanation from the explanation of everyday, real-world thought and behaviour) focus on the opposite case, where algorithmic explanation has run ahead of rational explanation. This occurs in much of cognitive psychology, which has focused on describing cognitive algorithms and the representations over which they operate. Equally, the study of animal cognition has resulted in accounts such as the Rescorla-Wagner associative learning algorithm for classical conditioning (Rescorla and Wagner 1972). Indeed, explanation in terms of algorithms, whether specified in terms of sequential operations, 'box and arrow' diagrams, or neural networks, is arguably the dominant mode of explanation in many areas of psychology.
Similarly, in the technical study of machine learning, neural networks, and much practical (rather than theoretical) mathematical statistics, algorithms have been constructed which address complex and poorly understood real-world problems, with at least some success. But the rational theory of why these algorithms are successful lags behind these developments. To choose an example of current psychological interest, it has recently been shown that a neural network can learn to map from orthography to phonology, dealing successfully both with exception words and non-words (Bullinaria 1994; Plaut, McClelland, Seidenberg, and Patterson 1996; Seidenberg and McClelland 1989).4 But there is no known rational theory of the nature of the orthography-phonology mapping, or of how it should be learned. A different kind of example of psychological interest concerns the vast range of practical statistical tests which are widely used, although the assumptions under which they apply are not known (Gigerenzer and Murray 1987). Thus, Take-the-Best seems unnecessary as an 'existence proof' that we can design successful algorithms without knowing why they work, because there are already many examples of such algorithms in the psychological, computational, and statistical literatures.

However, even where algorithmic theories have predominated, it remains an important goal to provide rational explanations of why they succeed. For example, in psychology, the adaptiveness of the Rescorla-Wagner learning algorithm (Rescorla and Wagner 1972) has been explained by showing that it asymptotically approximates the optimal solution in a normative probabilistic account of causal reasoning (Cheng 1997; Shanks 1995a, 1995b). Rescorla-Wagner learning therefore approximates a rational standard, using limited computational resources.
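The Rescorla-Wagner update itself can be stated in a few lines. On each trial, every cue present adjusts its associative strength in proportion to the prediction error; the sketch below uses illustrative parameter values (the learning rate and asymptote are our choices, not values from the original paper).

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Rescorla-Wagner updating: on each trial, each cue present
    moves its strength toward the outcome (lam if the unconditioned
    stimulus occurs, else 0) in proportion to the prediction error,
    where the prediction is the summed strength of present cues."""
    V = {}
    for cues, outcome in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)
        error = (lam if outcome else 0.0) - prediction
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V


# A perfectly contingent cue ('light' always followed by the
# outcome) gains strength, approaching lam asymptotically.
trials = [({'light'}, True)] * 20
V = rescorla_wagner(trials)
print(round(V['light'], 3))  # -> 0.999
```

With a single cue the strength after n reinforced trials is 1 - (1 - alpha)^n, which is the asymptotic approach to the normative contingency value referred to in the text.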
Equally, classification by similarity to stored exemplars, for which there is considerable empirical evidence (Medin and Schaffer 1978), can be shown to be adaptive because it approximates Anderson's (1991b) Bayesian classification model (Nosofsky 1991). A further example is provided by McKenzie (1994), who has shown that so-called 'linear combination heuristics', which are good descriptions of human causal reasoning performance, also provide good approximations to a normative Bayesian solution (see also Anderson 1990; Cheng 1997). For many years, the fact that people appear to use such heuristics has been cited as evidence for the irrationality of human causal reasoning. Recent analyses suggest that this was premature: these heuristics provide a 'fast and frugal' approximation to rational norms. Even relatively ill-defined heuristics for probabilistic reasoning like 'availability' (Kahneman, Slovic, and Tversky 1982) may have a rational basis. The concept of availability has been developed to explain a range of systematic

4 We take no stand here on whether such models are compatible with detailed psychological and neuropsychological data. See e.g. Coltheart, Curtis, Atkins, and Haller (1993) for discussion.
biases in people's probability and frequency judgements. In a famous study, Tversky and Kahneman (1974) asked people how many seven-letter words have the form: ____n_
and how many have the form: ____ing

People typically estimate that there are more words of the second form than the first. But this cannot be correct, because all the words that are examples of the second form are necessarily examples of the first! Tversky and Kahneman's explanation is that the second form provides a better cue to memory-words ending 'ing' are more 'available'. The assumption is that people estimate frequencies and probabilities by using availability-the more available an item is, the more frequent or probable it is assumed to be. But this heuristic seems to have a sound rational basis: to the extent that memory retrieval reflects an unbiased sample of the environment, availability will conform to a rational probabilistic analysis. Biased sampling (e.g. because items are stored or retrieved differentially) may lead to errors, but generally, this heuristic will be successful. Indeed, the power of the 'cognitive illusion' in Tversky and Kahneman's study arises precisely because sampling is so biased in this case.

More generally, the programme of rational analysis has shown why a wide range of empirically derived algorithmic processes are successful, by showing that they approximate normative Bayesian standards, given certain assumptions about environmental structure. This approach to explaining why cognitive algorithms succeed has been adopted by a wide range of researchers in the cognitive sciences (Oaksford and Chater 1998b). In each case, success is explained because the algorithm approximates, however crudely, some rational norm for optimal behaviour in that environment. Moreover, in line with the mutual constraint between the levels mentioned above, rational-level explanations have been used to develop new algorithmic accounts (e.g. Anderson 1993; Chater and Oaksford 1999c). Similarly, in other domains where the algorithmic theory has run ahead, there has been enormous effort to develop complementary rational theories.
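The set-theoretic point behind Tversky and Kahneman's example above-any seven-letter word ending 'ing' must have 'n' in sixth position-can be checked mechanically. The word sample below is invented purely for illustration:

```python
import re

# A small illustrative sample of seven-letter words.
words = ['singing', 'walking', 'morning', 'evening',
         'journey', 'harmony']

pat_n = re.compile(r'^.{5}n.$')     # seven letters, 'n' in sixth place
pat_ing = re.compile(r'^.{4}ing$')  # seven letters ending in 'ing'

n_matches = {w for w in words if pat_n.match(w)}
ing_matches = {w for w in words if pat_ing.match(w)}

# Every '-ing' word necessarily matches the '____n_' form,
# so the second class is a subset of the first:
assert ing_matches <= n_matches
print(sorted(n_matches - ing_matches))  # -> ['harmony']
```

The asymmetry people show therefore cannot reflect the true frequencies; it reflects the retrieval cue, exactly as the availability account claims.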
The goal of the research programmes of computational learning theory (Valiant 1984) and statistical learning theory (Vapnik 1995) is to provide a rational foundation for practical learning algorithms. Moreover, there has been great interest in interpreting neural networks as probabilistic inference devices, to give insight into the rational basis for their success (e.g. Chater 1995; MacKay 1992).
Rational foundations have similarly been sought for practical statistical algorithms (e.g. Bernado and Smith 1995). In each case, algorithms have been assumed to approximate rational standards to some degree. Typically, algorithms will be shown to be rational, given a certain goal (e.g. minimizing prediction error), on the assumption that the environment has a certain structure (e.g. that samples are independent, that variance is constant, that different causal factors interact linearly, and so on). Moreover, as in psychology, rational theories in these areas have not merely shown why, and in what environments, existing algorithms will succeed, but have also served to develop new algorithms.

In sum, across domains where algorithmic theories have run ahead of rational accounts, there has been vigorous and important research on developing complementary rational explanations. This indicates the desirability of both levels of explanation in providing complete accounts of cognitive phenomena. Thus it seems that the real-world success of algorithms such as Take-the-Best, apparently disconnected from a formal rational theory, does not imply that formal rational explanation is unnecessary. Algorithmic and rational levels of explanation are complementary: without an algorithmic account, we do not know how cognition works; without a rational account, we do not know why cognition works.

Is There an Alternative Style of 'Why' Explanation?
A possible counter-attack by those advocating the view that formal rationality has no role in explaining everyday thought and behaviour is to argue that there is an alternative, 'ecological' or 'adaptive' explanation of why cognition works, which makes no reference to formal rational principles. This is one interpretation of Gigerenzer and Goldstein's statement that 'the minds of living systems should be understood relative to the environment in which they evolved rather than to the tenets of classical rationality' (p. 651; emphasis added). This suggests a notion of 'adaptive rationality', i.e. success in relation to an environment, as an alternative to classical rationality. But to see that this notion does not provide an alternative explanation, consider the question: why does a cognitive algorithm succeed in a particular environment? To reply that this is because it is adaptively rational is clearly circular, because for an algorithm to be adaptively rational means, by definition, that it succeeds in the environment. In contrast, the rational level explains behavioural success by showing how that behaviour approximates optimal performance, given appropriate assumptions about the agent's goals and environment.

It is, of course, conceivable that there may be some other alternative way of explaining why cognitive algorithms succeed, which Gigerenzer and Goldstein might advert to as an alternative to rational explanation. An obvious suggestion is to appeal to evolution or learning. Perhaps natural selection
has ensured that our cognitive algorithms succeed; or perhaps our learning mechanisms have simply favoured algorithms that work. But explanations in terms of evolution or learning do not explain why specific cognitive algorithms are adaptive. Instead, they explain why we possess adaptive rather than non-adaptive algorithms-essentially because adaptive algorithms, by definition, perform better in the natural environment, and processes of natural selection or learning will tend to favour algorithms which are successful. But this still leaves open the question of why some algorithms are successful in the environment whereas others are not. Answering this question requires analysing the structure of the environment and the goals of the agent, and studying how those goals can be achieved in that environment. In short, it involves rational-level explanation. To choose an example from a domain in which evolutionary explanation is widely accepted, an account of optimal foraging in behavioural ecology may explain why particular foraging strategies are successful and others are not. Zoologists assume that evolution explains why animals possess good foraging strategies, but do not take evolutionary explanation to provide an alternative to the rational-level explanation given by optimal foraging theory.
CONCLUSIONS
This paper has considered the relation between everyday and formal rationality, and has developed a particular view of the relation between the two, based on Anderson's programme of rational analysis. We have illustrated this approach with a rational analysis of performance on Wason's selection task, and defended the approach against the view that formal rational explanation is unnecessary in explaining cognition. We have argued that formal rational explanation is indispensable in explaining why human cognitive mechanisms are able to succeed in the real world-i.e. why they are able to exhibit everyday rationality.

The relation that we have identified between rational and algorithmic accounts, which is apparent in examples from rational analysis in psychology, and from work in zoology and economics, has broad application. It promises to reconcile rational and mechanistic constraints in a range of contexts where debate focuses on the different emphasis placed on these constraints. Both rational and mechanistic factors are important, because the system under study is presumed only to approximate, perhaps quite accurately or perhaps very coarsely, a rational solution. Within this framework, the debate between rationality-based and mechanistic explanation becomes a matter of emphasis and degree, rather than a fundamental divide. We suggest that in any debate of this kind, there should be a methodological imperative to explore rationality-based explanations-only by doing so can the scope
of this level of explanation be assessed; and we caution that rationality-based explanation cannot be abandoned wholesale without losing the ability to explain why the cognitive system is adaptive or successful. The tension between the limited scope of current formal theories of reasoning and the astonishing richness and flexibility of human reasoning should not, however, be underestimated. There are presently no adequate formal theories of simple default inference in everyday reasoning, let alone formal theories of induction, analogical reasoning, or reasoning by comparison with past cases-and it is not clear that formal explanation will be possible at all in all of these cases (e.g. Goodman 1954). Explaining thought and behaviour both in terms of formal rational principles and at the level of cognitive algorithms will be one of the principal intellectual challenges of the third millennium.
REFERENCES

Allais, M. (1953), 'Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine', Econometrica, 21, 503-46.
Almor, A., and Sloman, S. (1996), 'Is deontic reasoning special?', Psychological Review, 103, 374-80.
Anderson, J. R. (1978), 'Arguments concerning representations for mental imagery', Psychological Review, 85, 249-77.
--(1990), The Adaptive Character of Thought (Hillsdale, NJ: Lawrence Erlbaum Associates).
--(1991a), 'Is human cognition adaptive?', Behavioral and Brain Sciences, 14, 471-517.
--(1991b), 'The adaptive nature of human categorisation', Psychological Review, 98, 409-29.
--(1993), Rules of the Mind (Hillsdale, NJ: Lawrence Erlbaum).
--(1995), Cognitive Psychology and Its Implications, 4th edn. (San Francisco: W. H. Freeman).
--and Matessa, M. (1998), 'The rational analysis of categorisation and the ACT-R architecture', in M. Oaksford and N. Chater (eds), Rational Models of Cognition (Oxford: Oxford University Press): 197-217.
--and Milson, R. (1989), 'Human memory: An adaptive perspective', Psychological Review, 96, 703-19.
--and Schooler, L. J. (1991), 'Reflections of the environment in memory', Psychological Science, 2, 396-408.
Becker, G. (1975), Human Capital, 2nd edn. (New York: Columbia University Press).
--(1981), A Treatise on the Family (Cambridge, Mass.: Harvard University Press).
Bernado, J. M., and Smith, A. F. M. (1995), Bayesian Theory (Chichester, Sussex: Wiley).
Braine, M. D. S. (1978), 'On the relation between the natural logic of reasoning and standard logic', Psychological Review, 85, 1-21.
Brooks, R. A. (1991), 'How to build complete creatures rather than isolated cognitive simulators', in K. Van Lehn (ed.), Architectures for Intelligence (Hillsdale, NJ: Lawrence Erlbaum Associates): 225-39.
Bullinaria, J. A. (1994), 'Internal representations of a connectionist model of reading aloud', Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society (Hillsdale, NJ: Erlbaum): 84-9.
Chater, N. (1995), 'Neural networks: The new statistical models of mind', in J. P. Levy, D. Bairaktaris, J. A. Bullinaria, and P. Cairns (eds), Connectionist Models of Memory and Language (London: UCL Press): 207-28.
--(1996), 'Reconciling simplicity and likelihood principles in perceptual organization', Psychological Review, 103, 566-81.
--Crocker, M., and Pickering, M. (1998), 'The rational analysis of inquiry: The case of parsing', in M. Oaksford and N. Chater (eds), Rational Models of Cognition (Oxford: Oxford University Press): 441-68.
--and Oaksford, M. (1990), 'Autonomy, implementation and cognitive architecture: A reply to Fodor and Pylyshyn', Cognition, 34, 93-107.
----(1999a), 'Ten years of the rational analysis of cognition', Trends in Cognitive Sciences, 3, 57-65.
----(1999b), 'The probability heuristics model of syllogistic reasoning', Cognitive Psychology, 38, 191-258.
----(1999c), 'Information gain vs. decision-theoretic approaches to data selection', Psychological Review, 106, 223-7.
Cheng, P. W. (1997), 'From covariation to causation: A causal power theory', Psychological Review, 104, 367-405.
--and Holyoak, K. J. (1985), 'Pragmatic reasoning schemas', Cognitive Psychology, 17, 391-416.
Cherniak, C. (1986), Minimal Rationality (Cambridge, Mass.: MIT Press).
Chew, S. H. (1983), 'A generalization of the quasilinear mean with applications to the measurement of income inequality and decision theory resolving the Allais paradox', Econometrica, 51, 1065-92.
Chomsky, N. (1965), Aspects of the Theory of Syntax (Cambridge, Mass.: MIT Press).
Clark, K. L. (1978), 'Negation as failure', in Logic and Databases (New York: Plenum Press): 293-322.
Cohen, L. J. (1981), 'Can human irrationality be experimentally demonstrated?', Behavioral and Brain Sciences, 4, 317-70.
Coltheart, M., Curtis, B., Atkins, P., and Haller, M. (1993), 'Models of reading aloud: Dual-route and parallel-distributed-processing approaches', Psychological Review, 100, 589-608.
Cosmides, L. (1989), 'The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task', Cognition, 31, 187-276.
Cox, R. T. (1961), The Algebra of Probable Inference (Baltimore: Johns Hopkins University Press).
Crawford, C., Smith, M., and Krebs, D. (1987), Sociobiology and Psychology (Hillsdale, NJ: Erlbaum).
Davidson, D. (1984), Inquiries into Truth and Interpretation (Oxford: Clarendon Press).
Dawes, R. M. (1988), Rational Choice in an Uncertain World (San Diego, Calif.: Harcourt, Brace, Jovanovich).
Dawkins, R. (1977), The Selfish Gene (Oxford: Oxford University Press).
Earman, J. (1992), Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory (Cambridge, Mass.: MIT Press).
Ellsberg, D. (1961), 'Risk, ambiguity and the Savage axioms', Quarterly Journal of Economics, 75, 643-69.
Elster, J. (ed.) (1986), Rational Choice (Oxford: Blackwell).
Evans, J. St. B. T. (1982), The Psychology of Deductive Reasoning (London: Routledge & Kegan Paul).
--(1989), Bias in Human Reasoning: Causes and Consequences (Hillsdale, NJ: Erlbaum).
--and Lynch, J. S. (1973), 'Matching bias in the selection task', British Journal of Psychology, 64, 391-7.
--Newstead, S. E., and Byrne, R. M. J. (1993), Human Reasoning (Hillsdale, NJ: Erlbaum).
--and Over, D. E. (1996a), Rationality and Reasoning (Hove, Sussex: Psychology Press).
----(1996b), 'Rationality in the selection task: Epistemic utility vs. uncertainty reduction', Psychological Review, 103, 356-63.
----(1997), 'Rationality in reasoning: The problem of deductive competence', Cahiers de Psychologie Cognitive, 16, 1-35.
Finetti, B. de (1937), 'La Prévision: Ses lois logiques, ses sources subjectives' (Foresight: Its logical laws, its subjective sources), Annales de l'Institut Henri Poincaré, 7, 1-68; translated in H. E. Kyburg and H. E. Smokler (1964) (eds), Studies in Subjective Probability (Chichester: Wiley).
Fischhoff, B., and Beyth-Marom, R. (1983), 'Hypothesis evaluation from a Bayesian perspective', Psychological Review, 90, 239-60.
Fishburn, P. C. (1983), 'Transitive measurable utility', Journal of Economic Theory, 31, 293-317.
Fisher, R. A.
(1922), 'On the mathematical foundations of theoretical statistics', Philosophical Transactions of the Royal Society of London, Series A, 222, 309-68.
--(1925/1970), Statistical Methods for Research Workers, 14th edn. (Edinburgh: Oliver & Boyd).
Fodor, J. A. (1983), Modularity of Mind (Cambridge, Mass.: MIT Press).
--(1987), Psychosemantics (Cambridge, Mass.: MIT Press).
--and Pylyshyn, Z. W. (1988), 'Connectionism and cognitive architecture: A critical analysis', Cognition, 28, 3-71.
Gallistel, C. R. (1990), The Organization of Learning (Cambridge, Mass.: MIT Press).
Garey, M. R., and Johnson, D. S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness (San Francisco: W. H. Freeman).
Gibson, J. J. (1979), The Ecological Approach to Visual Perception (Boston: Houghton Mifflin).
Gigerenzer, G., and Goldstein, D. (1996), 'Reasoning the fast and frugal way: Models of bounded rationality', Psychological Review, 103, 650-69.
--and Hug, K. (1992), 'Domain-specific reasoning: Social contracts, cheating, and perspective change', Cognition, 43, 127-71.
--and Murray, D. J. (1987), Cognition as Intuitive Statistics (Hillsdale, NJ: Erlbaum).
Good, I. J. (1950), Probability and the Weighting of Evidence (London: Griffin).
--(1966), 'A derivation of the probabilistic explication of information', Journal of the Royal Statistical Society, Series B, 28, 578-81.
--(1971), 'Twenty seven principles of rationality', in V. P. Godambe and D. A. Sprott (eds), Foundations of Statistical Inference (Toronto: Holt, Rinehart & Winston).
Goodman, N. (1954), Fact, Fiction and Forecast (Cambridge, Mass.: Harvard University Press).
Haack, S. (1978), Philosophy of Logics (Cambridge: Cambridge University Press).
Harman, G. (1986), Change in View (Cambridge, Mass.: MIT Press).
Harsanyi, J. C., and Selten, R. (1988), A General Theory of Equilibrium Selection in Games (Cambridge, Mass.: MIT Press).
Helm, P. A. van der, and Leeuwenberg, E. L. J. (1996), 'Goodness of visual regularities: A non-transformational approach', Psychological Review, 103, 429-56.
Helmholtz, H. von (1910/1962), Treatise on Physiological Optics, iii, ed. and trans. J. P. Southall (New York: Dover).
Horwich, P. (1982), Probability and Evidence (Cambridge: Cambridge University Press).
Howson, C., and Urbach, P. (1989), Scientific Reasoning: The Bayesian Approach (La Salle: Open Court).
Jaynes, E. T. (1989), Papers on Probability, Statistics, and Statistical Physics, 2nd edn. (Amsterdam: North-Holland).
Jeffreys, H. (1939), Theory of Probability (Oxford: Oxford University Press).
Johnson-Laird, P. N. (1983), Mental Models (Cambridge: Cambridge University Press).
--and Byrne, R. M. J. (1991), Deduction (Hillsdale, NJ: Erlbaum).
--and Wason, P. C.
(1970), 'Insight into a logical relation', Quarterly Journal of Experimental Psychology, 22, 49-61.
Kahneman, D., Slovic, P., and Tversky, A. (eds) (1982), Judgment under Uncertainty: Heuristics and Biases (Cambridge: Cambridge University Press).
--and Tversky, A. (1979), 'Prospect theory: An analysis of decision under risk', Econometrica, 47, 263-91.
Keynes, J. M. (1921), A Treatise on Probability (London: Macmillan).
Kirby, K. N. (1994), 'Probabilities and utilities of fictional outcomes in Wason's four card selection task', Cognition, 51, 1-28.
Klauer, K. C. (1999), 'The normative justification for information gain in Wason's selection task', Psychological Review, 106, 215-22.
Klayman, J., and Ha, Y. (1987), 'Confirmation, disconfirmation and information in hypothesis testing', Psychological Review, 94, 211-28.
Kleindorfer, P. R., Kunreuther, H. C., and Schoemaker, P. J. H. (1993), Decision Sciences: An Integrated Perspective (Cambridge: Cambridge University Press).
Kuhn, T. (1962), The Structure of Scientific Revolutions (Chicago: University of Chicago Press).
Lakatos, I. (1970), 'Falsification and the methodology of scientific research programmes', in I. Lakatos and A. Musgrave (eds), Criticism and the Growth of Knowledge (Cambridge: Cambridge University Press): 91-196.
Lamberts, K., and Chong, S. (1998), 'Dynamics of dimension weight distribution and flexibility in categorization', in M. Oaksford and N. Chater (eds), Rational Models of Cognition (Oxford: Oxford University Press): 275-92.
Laming, D. (1996), 'On the analysis of irrational data selection: A critique of Oaksford and Chater (1994)', Psychological Review, 103, 364-73.
Leeuwenberg, E., and Boselie, F. (1988), 'Against the likelihood principle in visual form perception', Psychological Review, 95, 485-91.
Lindley, D. V. (1956), 'On a measure of the information provided by an experiment', Annals of Mathematical Statistics, 27, 986-1005.
--(1971), Bayesian Statistics: A Review (Philadelphia: Society for Industrial and Applied Mathematics).
--(1982), 'Scoring rules and the inevitability of probability', International Statistical Review, 50, 1-26.
Loomes, G., and Sugden, R. (1982), 'Regret theory: An alternative theory of rational choice under uncertainty', Economic Journal, 92, 805-24.
Lopes, L. L. (1991), 'The rhetoric of irrationality', Theory & Psychology, 1, 65-82.
--(1992), 'Three misleading assumptions in the customary rhetoric of the bias literature', Theory & Psychology, 2, 231-6.
Lucas, J. R. (1970), The Concept of Probability (Oxford: Oxford University Press).
McCarthy, J. M. (1980), 'Circumscription: A form of nonmonotonic reasoning', Artificial Intelligence, 13, 27-39.
McClelland, J. L. (1998), 'Connectionist models of Bayesian inference', in M. Oaksford and N. Chater (eds), Rational Models of Cognition (Oxford: Oxford University Press): 21-53.
McCloskey, D. N.
(1985), The Rhetoric of Economics (Madison: University of Wisconsin Press).
McDermott, D. (1982), 'Non-monotonic logic II: Nonmonotonic modal theories', Journal of the Association for Computing Machinery, 29, 33-57.
--(1987), 'A critique of pure reason', Computational Intelligence, 3, 151-60.
--and Doyle, J. (1980), 'Non-monotonic logic I', Artificial Intelligence, 13, 41-72.
McFarland, D. J., and Basser, T. (1993), Intelligent Behaviour in Animals and Robots (Complex Adaptive Systems) (Cambridge, Mass.: MIT Press).
McFarland, D., and Houston, A. (1981), Quantitative Ethology: The State Space Approach (London: Pitman).
Machina, M. J. (1982), '"Expected utility" analysis without the independence axiom', Econometrica, 50, 277-323.
MacKay, D. J. C. (1992), 'A practical Bayesian framework for backpropagation networks', Neural Computation, 4, 448-72.
McKenzie, C. R. M. (1994), 'The accuracy of intuitive judgement strategies: Covariation assessment and Bayesian inference', Cognitive Psychology, 26, 209-39.
Marr, D. (1982), Vision (San Francisco: W. H. Freeman).
May, K. O. (1954), 'Intransitivity, utility, and the aggregation of preference patterns', Econometrica, 22, 1-13.
Maynard-Smith, J., and Price, G. R. (1973), 'The logic of animal conflict', Nature, 246, 15-18.
Medin, D. L., and Schaffer, M. M. (1978), 'Context theory of classification learning', Psychological Review, 85, 201-38.
Messick, D. M. (1991), 'On the evolution of group-based altruism', in R. Selten (ed.), Game Equilibrium Models I: Evolution and Game Dynamics (Berlin: Springer-Verlag): 304-28.
Minsky, M. (1977), 'Frame system theory', in P. N. Johnson-Laird and P. C. Wason (eds), Thinking: Readings in Cognitive Science (Cambridge: Cambridge University Press): 355-76.
Muth, J. F. (1961), 'Rational expectations and the theory of price movements', Econometrica, 29, 315-35.
Nash, J. (1950), 'The bargaining problem', Econometrica, 18, 155-62.
Neal, R. (1993), 'Bayesian learning via stochastic dynamics', in S. J. Hanson, J. D. Cowan, and C. L. Giles (eds), Advances in Neural Information Processing Systems 5 (San Mateo, Calif.: Morgan Kaufman): 475-82.
Neumann, J. von, and Morgenstern, O. (1944), Theory of Games and Economic Behavior (Princeton: Princeton University Press).
Neyman, J. (1950), Probability and Statistics (New York: Holt).
Nosofsky, R. M. (1991), 'Relation between the rational model and the context model of categorization', Psychological Science, 2, 416-21.
Oaksford, M., and Chater, N. (1991), 'Against logicist cognitive science', Mind & Language, 6, 1-38.
----(1992), 'Bounded rationality in taking risks and drawing inferences', Theory & Psychology, 2, 225-30.
----(1994), 'A rational analysis of the selection task as optimal data selection', Psychological Review, 101, 608-31.
----(1995a), 'Information gain explains relevance which explains the selection task', Cognition, 57, 97-108.
----(1995b), 'Theories of reasoning and the computational explanation of everyday inference', Thinking and Reasoning, 1, 121-52.
----(1996), 'Rational explanation of the selection task', Psychological Review, 103, 381-91. ----(1998a) (eds), Rational Models of Cognition (Oxford: Oxford University Press). ----(1998b), Rationality in an Uncertain World (Hove: Psychology Press). ----(1998c), 'A revised rational analysis of the selection task: Exceptions and sequential sampling', in M. Oaksford and N. Chater (eds), Rational Models of Cognition (Oxford: Oxford University Press): 372-98. Paris, J. (1992), The Uncertain Reasoner's Companion (Cambridge: Cambridge University Press). Plaut, D. C., McClelland, J. L., Seidenberg, M. S., and Patterson, K. E. (1996), 'Understanding normal and impaired word reading: Computational principles in quasi-regular domains', Psychological Review, 103, 56-115.
Pollard, P. (1985), 'Nonindependence of selections on the Wason selection task', Bulletin of the Psychonomic Society, 23, 317-20. Pomerantz, J. R., and Kubovy, M. (1987), 'Theoretical approaches to perceptual organization', in K. R. Boff, L. Kaufman, and J. P. Thomas (eds), Handbook of Perception and Human Performance, ii: Cognitive Processes and Performance (New York: Wiley): 36.1-36.46. Popper, K. R. (1959), The Logic of Scientific Discovery (London: Hutchinson), originally published in 1935. Putnam, H. (1974), 'The "corroboration" of theories', in P. A. Schilpp (ed.), The Philosophy of Karl Popper, i (La Salle, Ill.: Open Court Publishing): 221-40. Pylyshyn, Z. W. (1984), Computation and Cognition (Cambridge, Mass.: MIT Press). --(ed.) (1987), The Robot's Dilemma: The Frame Problem in Artificial Intelligence (Norwood, NJ: Ablex). Quine, W. V. O. (1953), 'Two dogmas of empiricism', in From a Logical Point of View (Cambridge, Mass.: Harvard University Press): 20-46. --(1960), Word and Object (Cambridge, Mass.: MIT Press). Ramsey, F. P. (1926), 'Truth and Probability', in Ramsey, The Foundations of Mathematics and Other Logical Essays, ed. R. B. Braithwaite (London: Kegan Paul). Rawls, J. (1971), A Theory of Justice (Cambridge, Mass.: Harvard University Press). Reiner, R. (1995), 'Arguments against the possibility of perfect rationality', Minds and Machines, 5, 373-89. Reiter, R. (1980), 'A logic for default reasoning', Artificial Intelligence, 13, 81-132. --(1985), 'On reasoning by default', in R. Brachman and H. Levesque (eds), Readings in Knowledge Representation (Los Altos, Calif.: Morgan Kaufman): 401-10; first published in 1978. Rescorla, R. A., and Wagner, A. R. (1972), 'A Theory of Pavlovian Conditioning: Variations in the Effectiveness of Reinforcement and Nonreinforcement', in A. H. Black and W. F. Prokasy (eds), Classical Conditioning II: Current Research and Theory (New York: Appleton-Century-Crofts), 64-94. Rips, L. J.
(1990), 'Reasoning', Annual Review of Psychology, 41, 321-53. --(1994), The Psychology of Proof (Cambridge, Mass.: MIT Press). Rissanen, J. (1987), 'Stochastic complexity', Journal of the Royal Statistical Society, Series B, 49, 223-39. --(1989), Stochastic Complexity and Statistical Inquiry (Singapore: World Scientific). Savage, L. J. (1954), The Foundations of Statistics (New York: Wiley). Schank, R. C., and Abelson, R. P. (1977), Scripts, Plans, Goals, and Understanding (Hillsdale, NJ: Erlbaum). Schooler, L. J. (1998), 'Sorting out core memory processes', in M. Oaksford and N. Chater (eds), Rational Models of Cognition (Oxford: Oxford University Press): 128-55. Seidenberg, M. S., and McClelland, J. L. (1989), 'A distributed, developmental model of word recognition and naming', Psychological Review, 96, 523-68. Shanks, D. R. (1995a), 'Is Human Learning Rational?', Quarterly Journal of Experimental Psychology, 48A, 257-79. --(1995b), The Psychology of Associative Learning (Cambridge: Cambridge University Press).
Simon, H. A. (1955), 'A behavioral model of rational choice', Quarterly Journal of Economics, 69, 99-118. --(1956), 'Rational choice and the structure of the environment', Psychological Review, 63, 129-38. --(1991), 'Cognitive architectures and rational analysis: Comment', in K. van Lehn (ed.), Architectures for Intelligence (Hillsdale, NJ: Lawrence Erlbaum Associates): 25-40. Skyrms, B. (1977), Choice and Chance (Belmont: Wadsworth). Sperber, D., Cara, F., and Girotto, V. (1995), 'Relevance theory explains the selection task', Cognition, 57, 31-95. Stein, E. (1996), Without Good Reason (Oxford: Oxford University Press). Stephens, D. W., and Krebs, J. R. (1986), Foraging Theory (Princeton, NJ: Princeton University Press). Stich, S. (1983), From Folk Psychology to Cognitive Science (Cambridge, Mass.: MIT Press). --(1985), 'Could man be an irrational animal?', Synthese, 64, 115-35. --(1990), The Fragmentation of Reason (Cambridge, Mass.: MIT Press). --and Nisbett, R. (1980), 'Justification and the psychology of human reasoning', Philosophy of Science, 47, 188-202. Sutherland, S. (1992), Irrationality: The Enemy Within (London: Constable). Thagard, P. (1988), Computational Philosophy of Science (Cambridge, Mass.: MIT Press). Thaler, R. (1987), 'The psychology of choice and the assumptions of economics', in A. Roth (ed.), Laboratory Experimentation in Economics: Six Points of View (Cambridge: Cambridge University Press): 99-130. Tversky, A., and Kahneman, D. (1974), 'Judgement under uncertainty: Heuristics and biases', Science, 185, 1124-31. ----(1986), 'Rational choice and the framing of decisions', Journal of Business, 59, 251-78. Valiant, L. G. (1984), 'A theory of the learnable', Communications of the Association for Computing Machinery, 27, 1134-42. Vapnik, V. N. (1995), The Nature of Statistical Learning Theory (New York: Springer-Verlag). Wason, P. C.
(1960), 'On the failure to eliminate hypotheses in a conceptual task', Quarterly Journal of Experimental Psychology, 12, 129-40. --(1966), 'Reasoning', in B. Foss (ed.), New Horizons in Psychology (Harmondsworth, Mddx.: Penguin). --(1968), 'Reasoning about a rule', Quarterly Journal of Experimental Psychology, 20, 273-81. --(1969), 'Regression in reasoning', British Journal of Psychology, 60, 471-80. --and Johnson-Laird, P. N. (1972), The Psychology of Reasoning: Structure and Content (Cambridge, Mass.: Harvard University Press). Young, R. (1998), 'Rational analysis of exploratory choice', in M. Oaksford and N. Chater (eds), Rational Models of Cognition (Oxford: Oxford University Press): 469-500.
7
The Rational and the Real: Some Doubts about the Programme of 'Rational Analysis'*
E. J. LOWE
ABSTRACT
This paper is a critique of Nick Chater and Mike Oaksford's attempt to apply the programme of 'rational analysis' to human cognitive behaviour and, more particularly, challenges their claim that 'everyday' rationality is based on 'formal' rationality. I begin with some critical remarks about the way in which they distinguish between 'everyday' and 'formal' rationality and then explore a possible relationship between the two which they overlook but which is suggested by some remarks of John Locke's. Next I raise some doubts about the programme of 'rational analysis' itself and about the parallels which Chater and Oaksford claim to see between their explanation of everyday rationality and the use of optimality models in evolutionary biology. Finally, I question their account of subjects' performance on the Wason selection task in terms of Bayesian optimal data selection, on the grounds that there are other normative paradigms which equally vindicate subjects' selections but no principled basis in Chater and Oaksford's programme for choosing between such alternative paradigms. My conclusion is that the programme of 'rational analysis' mistakenly conflates the tasks of empirical psychology with those of philosophy and the sciences of the a priori and brings us no nearer to the goal of achieving a naturalistic explanation of human reasoning ability.
* Many thanks to Jose Luis Bermudez and Alan Millar for very helpful criticisms of an earlier draft of this paper.
INTRODUCTORY REMARKS
My concerns in this paper are mainly critical, my target being the programme of 'rational analysis' recommended by Nick Chater and Mike Oaksford (this volume, Chapter 6). But although I am sceptical about many aspects of their proposals, I should declare at the outset my admiration for the boldness and ambition of their project. I am sure that a radical overhaul of existing ideas is precisely what is needed if a remotely plausible naturalistic explanation of human rationality is to be achieved. I just doubt that Chater and Oaksford's programme is the framework that will take us nearer to that goal. In the course of criticizing their views, I shall gesture towards some possibilities that strike me as being more promising, but this is not the place for me to attempt to explore those possibilities in any great depth.

'FORMAL' VERSUS 'EVERYDAY' RATIONALITY
According to Chater and Oaksford, there is an important distinction to be drawn between 'formal rationality' and 'everyday rationality'. They leave the meaning of the latter term somewhat vague, but equate it with 'commonsense reasoning' and clearly regard it as a ubiquitous feature of human cognitive activity. Indeed, they maintain that 'In this informal, everyday sense, most of us, most of the time, are remarkably rational' (p. 135). This is despite the fact that, according to them, 'common-sense reasoning is immensely difficult' (p. 136). They regard common-sense reasoning as being 'immensely difficult', it seems, largely because it has not succumbed to the attempts of researchers in the field of artificial intelligence to 'formalize' it, by producing computer programs capable of simulating it. This is the notorious 'frame problem' of AI. But, of course, the failure of traditional AI in this regard is only disconcerting on the assumption that its approach to explaining common-sense reasoning is not wildly off-target. If a quite different approach would be appropriate-for instance, one appealing to dynamical systems theory (cf. van Gelder 1995)-then the failure of traditional program-based AI is only to be expected. As for the suggestion that the exercise of common-sense reasoning is ubiquitous in human affairs-that we are 'remarkably rational', in the everyday sense-this too might be queried. That ordinary folk manage to survive quite well in a complex environment has no obvious implications for their capacity to reason effectively. After all, many creatures seem to do quite as well as we do in this respect without, apparently, possessing any capacity for reason whatsoever-for example, ants and bees. Before we can credit our 'everyday rationality' with striking success on account of our ability to survive in a complex world, we need to be convinced that our
survival is rightly attributable to our exercise of such rationality. It will not do to make the connection between successful action and the possession of 'everyday rationality' true by definition-as I think Chater and Oaksford would agree (2002)-for it seems inappropriate to attribute 'rationality' in any serious sense to creatures unless they possess an ability to make judgements of truth and falsity and to evaluate and revise those judgements in the light of the evidence available to them. Reasoning is essentially involved in these processes and all normal human beings are clearly capable of it in some degree-though some are manifestly better at it than others, whether through innate ability or through training. But it is far from evident that reasoning in such a sense is something that we engage in constantly in our everyday transactions with our physical and social environment: we are creatures of habit at least as much as we are rational beings. The other half of Chater and Oaksford's dichotomy may also prompt some queries. This is their notion of 'formal rationality', which, they hold, is rooted in 'mathematical theories of good reasoning' (p. 136) and is exemplified by logical and probabilistic calculi, conformity with whose formal principles serves to define 'formal rationality' in Chater and Oaksford's sense (p. 137). Several distinct notions seem to be run together here-for instance, the notion of norms of good reasoning, and the notion of formal principles of reasoning. There is nothing in the notion of a norm of good reasoning which restricts its scope to formal principles, so far as I can see: indeed, many traditional fallacies, such as the fallacy of equivocation, are clearly violations of norms of good reasoning, and yet ones which cannot plausibly be represented as violations of formal principles of reasoning.1 Again, Chater and Oaksford appear to be in danger of conflating logico-mathematical theory with formal logico-mathematical systems, that is, metalogic and metamathematics with logic and mathematics. But let us leave such niceties to one side for the time being. Chater and Oaksford go on to remark that 'when people are given reasoning problems that explicitly require use of ... formal principles, their performance appears to be remarkably poor' (p. 137). But, we may ask, remarkably poor by what standard? Chater and Oaksford themselves observe that formal methods in logic and probability theory 'took centuries of intense intellectual effort to construct, and present a tough challenge for each generation of students' (p. 137). In view of that, however, what would truly be remarkable would be for ordinary folk, untrained in those formal methods, to be able to perform well on reasoning problems which explicitly require their use. After all, part of the very point of constructing such formal methods is to enable us to transcend the limitations of our native powers of reasoning. At the same time, however, it was only by exercising those native powers of reasoning that logicians and mathematicians were able to construct and evaluate the formal methods in the first place.

1 As C. L. Hamblin remarks, it would be odd to try to produce 'a formal system in which arguments involving equivocation can be represented ... since one of the usual aims of formal systems is to be unambiguous' (Hamblin 1970, 192-3). Hamblin's considered judgement (205) is that it is not possible to give a general account of all the traditional fallacies in purely formal terms.
FOUR POSSIBILITIES-AND A FIFTH
This leads us directly on to the issue which Chater and Oaksford address next: the question of how 'formal' and 'everyday' rationality are related. Chater and Oaksford consider, altogether, four possibilities: (1) the primacy of everyday rationality, (2) the primacy of formal rationality, (3) the view that everyday and formal rationality are completely separate-that there is 'no useful relationship' (p. 160) between them-and (4) the view that everyday rationality is 'based' on formal rationality. The last is their own favoured option. I have to say that none of these positions, as interpreted by Chater and Oaksford, strikes me as being at all plausible. An advocate of the primacy of everyday rationality, on their interpretation, 'views formal theories as flawed in so far as they fail to match up with human everyday reasoning intuitions' (p. 138). The reverse of this, they say, is held by advocates of the primacy of formal rationality, according to whom 'everyday reasoning is fallible and ... must be corrected by ... the dictates of formal theories of rationality' (p. 142). Both of these views are clearly too extreme, as is the view that the two kinds of rationality are quite separate-so much so that I doubt whether anyone has ever really held any of these views in their pure forms. But, equally, it seems inherently implausible to suppose that everyday rationality is, as Chater and Oaksford claim, based on formal rationality, since this seems to put the cart before the horse. Given that formal principles of logic and probability theory were only discovered by the exercise of our native powers of human reasoning, how could the latter be 'based' upon the former? I shall return to this question when I come to consider Chater and Oaksford's programme of 'rational analysis', but first I want to sketch a fifth possibility not considered by them.
The fifth possibility is that 'everyday rationality', as Chater and Oaksford call it, does not exploit or depend upon formal principles at all but is limited in its scope, in that it can only handle relatively simple chains of argument involving small numbers of easily comprehensible premisses (cf. Lowe 1993). It was, I presume, by exercising this kind of rationality that logicians and mathematicians gradually developed, over many centuries, formal principles of logic and probability theory which enable us to extend the range of our reasoning beyond the limitations to which our native powers are subject. The normative force of each kind of rationality is strongest in its own proper domain. Thus, everyday rationality can be corrected by appeal to formal principles when it attempts to engage in long and complex chains of reasoning, while formal principles are suspect if they conflict with the dictates of everyday rationality in respect of simple inferences. When I say that, according to this view, everyday rationality does not exploit or depend upon formal principles, what I mean to suggest is this. Our ability to draw simple inferences is part and parcel of our ability to understand the propositions involved in those inferences, and more particularly the logical connectives and operators used in the construction of those propositions (cf. Hacking 1979; Cohen 1986, 151 ff). Thus, anyone who genuinely resisted drawing the conclusion 'The apple is red' from the premisses 'Either the apple is red or the apple is green' and 'The apple is not green' would rightly be suspected of failing to grasp the meanings of the particles 'or' and 'not'. This isn't to say, however, that someone who does draw that conclusion must do so by applying the formal rule of disjunctive syllogism, the principle that from premisses of the form 'p or q' and 'Not q' one should infer 'p'. On the contrary, the formal rule was no doubt discovered by reflection on the fact that many particular inferences which we intuitively deem to be valid exhibit this pattern. The value of having the formal rule is that it provides us with a guide in complex cases, in which our rational intuitions deliver no immediate verdict-for example, in cases in which the premisses are highly complex propositions, or in which long chains of reasoning have to be executed.
Something like this view of the relationship between formal logic and rational intuition seems to have been favoured by Locke, who asserts, in a memorably sardonic passage of the Essay Concerning Human Understanding (IV, XVII, 4), that God has not been so sparing to Men to make them barely two-legged Creatures, and left it to Aristotle to make them Rational ... He has given them a Mind that can reason without being instructed in the Methods of Syllogizing: The Understanding is not taught to reason by these Rules; it has a native Faculty to perceive the Coherence, or Incoherence of its Ideas, and can range them right, without any such perplexing Repetitions. (Locke 1975, 671)
The thought, then, is that a primitive grasp of at least some logical relations between propositions is a prerequisite of the very ability to construct and deploy formal logical methods of reasoning, so that, as I remarked a moment ago, it would be putting the cart before the horse to appeal to formal logical principles to provide any kind of explanation of our everyday, untutored reasoning powers (cf. Lowe 1995a, 182-6). This isn't to say, of course, that an explanation of these powers-and a naturalistic one at that-is simply not to be had. But I don't profess to be able, yet at least, to provide one myself-and I don't want to minimize the difficulty of achieving this goal, which is, I suspect, still a very long way off.
THE PROGRAMME OF 'RATIONAL ANALYSIS'
What, then, can we say of Chater and Oaksford's own preferred answer to the question which they raise concerning the relation between formal and everyday rationality-the view that the latter is 'based' on the former? The key idea here is the programme of 'rational analysis', as they call it, in deference to John Anderson. Rational analysis, they say, aims to 'explain why people exhibit the everyday rationality involved in achieving their goals by assuming that their actions approximate what would be sanctioned by a formal normative theory' (p. 149). I have to say, though, that this sounds more like a recipe for constructing a rationalization of everyday behaviour than it sounds like a recipe for an explanation of such behaviour. Chater and Oaksford themselves acknowledge that it would be unrealistic to suppose that ordinary folk actually use the formal principles of logic and probability theory to guide their actions, not least because in many cases the calculations involved would be computationally intractable. But they liken this to the fact that 'the theory of aerodynamics is a crucial component of explaining why birds can fly' (p. 151), for 'birds know nothing about aerodynamics, and the computational intractability of aerodynamic calculations does not in any way prevent birds from flying' (p. 151). However, this analogy strikes me as being thoroughly misconceived. The fact that birds know nothing about aerodynamics is beside the point, given that flying, unlike reasoning, is not a cognitive activity. The theory of aerodynamics provides a genuine causal explanation of how birds fly, in terms of laws governing the properties of surfaces moving through a gaseous medium.
By the same token, however, the laws of formal logic and probability theory would only provide a genuine causal explanation of how people reason if they could be construed as laws governing transitions between people's mental states-and yet not even Chater and Oaksford believe that this is so, since they talk instead about the cognitive system using 'algorithms' or 'heuristics' which only approximate the hypothetical solutions dictated by the formal principles of normative theories. Given that, on their view, human beings do not, in their everyday reasonings, actually apply anything like the formal principles in question, it is unclear to me in what sense one can explain those everyday reasonings by saying that they mimic, approximately, the solutions that would be arrived at by hypothetical systems applying those formal principles under idealized conditions. This is not a pattern of explanation that I can easily recognize as occurring in any other empirical science.
My verdict here may perhaps be deemed unduly harsh, in view of Chater and Oaksford's own attempts to draw parallels between their approach and that of other scientists, such as evolutionary biologists who appeal to optimal foraging theory in order to construct a predictive model of how an animal would behave optimally in a given situation, subject to certain constraints. Optimality in such a case may be specified in terms, say, of maximizing net energy obtained in a certain time. And here the proposal is not that the animals concerned are explicitly trying to maximize anything-'zoologists do not assume that animals calculate how to forage optimally' (p. 151)-but rather that they have evolved to operate according to simple rules of behaviour which collectively converge upon the optimal solution. In reply to this line of defence, I would make the following points. First, it is doubtful whether evolutionary considerations can really justify the foregoing sort of approach in any case, since species can survive and prosper even though they exhibit markedly suboptimal behavioural patterns, provided that there are no rival species in the vicinity that can outdo them in competition.2 One only has to think of what happened to many species of marsupials in Australia when European mammals such as rats and rabbits were introduced: before that, presumably, the marsupials were functioning suboptimally in their environmental niches-but they were still thriving, simply because they had no competitors that were functioning any more efficiently. Chater and Oaksford seem to be uncritically adopting the 'Panglossian paradigm'.3 Secondly, what Chater and Oaksford claim to provide is not merely a predictive device but an explanation of behaviour, and yet their kind of explanation cannot qualify even indirectly as being causal, as far as I can see, which is why I think it is questionably scientific. I shall return to this point in a moment.
Thirdly, I have considerable doubts about how one would determine an 'optimal' solution to many problems, since normative notions like this seem to be essentially contestable ones: logicians and probability theorists themselves disagree fundamentally over too many of the key issues (cf. Lowe 1997). Chater and Oaksford can, if they like, help themselves to some formal theory or other and determine what, according to that theory, would constitute an 'optimal' solution to some problem. But what determines their choice of theory, when so many conflicting ones are on offer? If it should turn out that predictions based on a certain optimization model are borne out empirically, would that constitute empirical confirmation of the corresponding formal theory? Hardly, since formal theories are not answerable to empirical evidence. So what would be confirmed? As far as I can see, nothing more than the predictive utility of the model. Again my conclusion is that nothing in the way of genuine scientific explanation is made available by this sort of approach.

2 For further objections to optimality assumptions in evolutionary accounts of human rationality, see Stich (1990, 63 ff). 3 See Gould and Lewontin (1979). Even Daniel Dennett (1983), who mounts a qualified defence of the Panglossian paradigm, quotes with approval the remark of the adaptationists G. F. Oster and E. O. Wilson that 'The prudent course is to regard optimality models as provisional guides to future empirical research and not as the key to deeper laws of nature' (Dennett 1987, 265). This modest role for optimality models seems incompatible with the explanatory power that Chater and Oaksford apparently want to invest in them. For the source of the quotation, see Oster and Wilson (1978, 312).
LESSONS FROM THE WASON SELECTION TASK
Let me now turn, finally, to Chater and Oaksford's claim that the programme of rational analysis enables us to re-evaluate the laboratory data on human reasoning in a way which 'allows us to see laboratory performance, which has typically been viewed as systematically non-rational, as having a rational basis' (p. 154). Here I focus on their treatment of Wason's selection task. Chater and Oaksford's proposal is that it is best to 'view the selection task in probabilistic terms, as a problem of Bayesian optimal data selection' (p. 156). By the normative standards of Bayesian hypothesis testing, they argue, subjects make precisely the right selections in choosing the p and q cards in preference to the not-q and not-p cards. Thus, they are engaging in a 'rational inductive strategy' (p. 158). But there are some problems with this suggestion, even setting aside my earlier doubts about the whole programme of 'rational analysis'. First of all, if laboratory subjects treat the selection task as a problem of inductive hypothesis testing, then they are still making a serious cognitive mistake, since they are quite explicitly told that the conditional 'rule' whose truth or falsity is at issue concerns only the four cards in front of them, not some wider population of which these cards are putative exemplars. Secondly, it is well known that subjects who fail to select the not-q card very often subsequently agree with the researchers that they should have selected it, when it is pointed out to them that it could falsify the rule: so, it seems, not even the subjects themselves would be inclined to agree, on reflection, that their selections were 'rational'. But it borders on the paradoxical to ascribe to people's actions a 'rationality' which they themselves do not acknowledge, or even repudiate, on reflection.
Thirdly, however, there is, in any case, at least one alternative normative paradigm which not only delivers the verdict that the subjects' intuitive selections are correct but which also forms the basis of a plausible account of reasoning with conditionals quite generally.4 This is the view that an indicative conditional, 'If p, then q', is assertable if and only if the conditional subjective probability of q given p is high. Since this conditional subjective probability, P(q|p), is defined by means of the ratio P(p & q)/P(p), and since the latter either equals zero or else is undefined when P(q) is zero, that is, when P(not-q) equals one, a rational subject who is certain that one of the cards confronting him is a not-q card ought by this account not to turn it over: for he knows already that the conditional 'If p, then q' is not assertable in respect of that card, irrespective of what it might have on its concealed face. Now, in view of the fact that more than one normative paradigm delivers the verdict that typical subjects make the 'correct' choices in the selection task, what is to be said in favour of Chater and Oaksford's 'explanation' as against any other? Since the programme of 'rational analysis' does not demand that the formal normative principles which it invokes have any psychological reality in the minds of subjects, it is not clear to me what their answer to this sort of question could be. Of course, it may be objected against me here that subjects in the selection task are asked a question concerning the truth of a conditional, not one concerning its assertability: and on some theories of the (indicative) conditional, its truth-conditions and its assertability-conditions are markedly different.5 On such theories, then, subjects may still be making a mistake in failing to select the not-q card. However, in the first place, there are other theories of indicative conditionals which assign them only assertability-conditions, not truth-conditions (see Adams 1975; Edgington 1995), so that according to these theories the only acceptable way to construe the question posed in the selection task is as one which really concerns assertability rather than truth.

4 See further Lowe (1997). For this approach to conditional reasoning quite generally, see Adams (1975); Edgington (1995). I don't want to suggest that I myself wholly endorse this approach, however: see Lowe (1996 and 1995b).
Secondly, even if it should turn out that a theory which assigns both truth-conditions and assertability-conditions to conditionals is superior-and this is, at present, a highly contested matter-it would still not be reasonable to expect ordinary subjects, unversed in the arcane disputes of logical theorists, to be reflectively aware of this distinction, and hence it would be unfair to charge them with committing an error of reasoning in conflating a question of truth with one of assertability. The fact remains that, so long as we do not have consensus amongst logical theorists concerning the semantics of conditionals, we simply don't have an uncontestable answer to the question of what the ideally rational subject should do in the selection task. And this much, too, is surely clear: that if a rational subject could only arrive at a 'correct' choice after much reflection on the logic of conditionals, then the selection task is not one which can profitably be presented to untutored speakers-it would be like setting a problem in advanced number theory to a class of pre-school infants, in the expectation that their answers would tell us something about their arithmetical reasoning capacities. Ironically enough, the apparent assumption of many psychologists that the problem posed in the selection task is a relatively simple one itself turns out to be a cognitive illusion of considerable magnitude.

5 See, esp., Jackson (1979). According to Jackson, an indicative conditional of the form 'If p, then q' is true just in case either p is false or q is true-that is, it has the truth-conditions of the so-called material conditional-but it is assertable just to the extent that the conditional subjective probability of q given p is high. On this view, even if the conditional 'If p, then q' is known not to be assertable in respect of the not-q card, irrespective of what it might have on its concealed face, it remains an open question whether or not that conditional is true.
CONCLUDING REMARKS
My fundamental concern in this paper has been that the programme of 'rational analysis' fails to carve out a legitimate domain of empirical research into human cognitive behaviour. To the extent that it appeals to normative paradigms, it trespasses upon the proper territory of logicians, mathematicians, decision theorists, and philosophers. To the extent that it attempts to explain psychologically real phenomena, its preparedness to abstract away from the details of concrete causal processes renders it dubiously empirical and questionably explanatory. In sum, by appealing to the rational to explain the real, it arguably conflates the tasks of empirical psychology with those of philosophy and the sciences of the a priori.

REFERENCES

Adams, E. W. (1975), The Logic of Conditionals (Dordrecht: Reidel).
Chater, N., and Oaksford, M. (2002), 'The rational analysis of human cognition', this volume.
Cohen, L. J. (1986), The Dialogue of Reason: An Analysis of Analytic Philosophy (Oxford: Clarendon Press).
Dennett, D. C. (1983), 'Intentional Systems in Cognitive Ethology: The "Panglossian Paradigm" Defended', Behavioral and Brain Sciences 6: 349-90; reprinted in Dennett (1987).
--(1987), The Intentional Stance (Cambridge, Mass.: MIT Press).
Edgington, D. (1995), 'On conditionals', Mind 104: 235-329.
Gould, S. J., and Lewontin, R. C. (1979), 'The spandrels of San Marco and the Panglossian Paradigm: a critique of the adaptationist programme', Proceedings of the Royal Society B205: 581-98.
Hacking, I. (1979), 'What is Logic?', Journal of Philosophy 76: 285-319.
Hamblin, C. L. (1970), Fallacies (London: Methuen).
Jackson, F. (1979), 'On assertion and indicative conditionals', Philosophical Review 88: 565-89.
Locke, J. (1975), An Essay Concerning Human Understanding, ed. P. H. Nidditch (Oxford: Clarendon Press).
The Rational and the Real

Lowe, E. J. (1993), 'Rationality, deduction and mental models', in Rationality: Psychological and Philosophical Perspectives, ed. K. I. Manktelow and D. E. Over (London: Routledge): 211-30.
--(1995a), Locke on Human Understanding (London: Routledge).
--(1995b), 'The truth about counterfactuals', Philosophical Quarterly 45: 41-59.
--(1996), 'Conditional probability and conditional beliefs', Mind 105: 603-15.
--(1997), 'Whose rationality? Logical theory and the problem of deductive competence', Cahiers de Psychologie Cognitive/Current Psychology of Cognition 16: 140-6.
Oster, G. F., and Wilson, E. O. (1978), Caste and Ecology in Social Insects (Princeton, NJ: Princeton University Press).
Stich, S. (1990), The Fragmentation of Reason: Preface to a Pragmatic Theory of Cognitive Evaluation (Cambridge, Mass.: MIT Press).
Van Gelder, T. (1995), 'What might cognition be, if not computation?', Journal of Philosophy 92: 345-81.
8

The Rationality of Evolutionary Psychology

DAVID E. OVER
Recent work in cognitive psychology on the adaptive nature of reasoning is partly a reaction to earlier experimental results that were taken, by some philosophers and psychologists, to imply that human beings are extremely irrational. The reaction to this conclusion has been to ask how we could achieve such apparent success in the world if we are so irrational, and to predict that the standard of human reasoning and decision-making will be found to be much higher when people are tested on more realistic problems, for which there should often be some adaptive mental processes. This reaction is sometimes itself too extreme, leading to complacency about human rationality. Human genes may have been quite successful so far, but many millions of people have died because of irrationality. This is rather worrying for those of us who care as much for ourselves as for our genes, and the debate about experimental results and rationality will no doubt continue to be vigorous (as in Stanovich and West (2000) and commentaries). Evolutionary psychology is an increasingly important programme in cognitive psychology for studying adaptation and human thought (Barkow, Cosmides, and Tooby 1992). Real enthusiasts for this approach cannot believe that human thought is the result of some domain-general learning and reasoning ability that has evolved by natural selection. If that were true, then evolutionary theorizing in psychology could only tell us why this completely general ability was adaptive-it could do little to help us understand particular psychological states and processes. On the other hand, evolutionary psychology has a great contribution to make if many particular psychological states and processes are biological adaptations.
Thus many leading evolutionary psychologists have endorsed what has been called the massive modularity hypothesis (Samuels 1998), which makes evolutionary psychology as important as possible, by taking the strongest position: that the mind only consists of domain-specific mental modules. These dedicated modules are held to be adaptations and to give human beings a kind of
domain-specific rationality, without the need for any domain-general learning or reasoning processes, even formal or logical ones. Fodor (1983) was influential in arguing that there are mental modules dedicated to peripheral and low-level input and output computations, e.g. those required by vision and the execution of action. But in his account, the input modules pass information, bottom-up, to a central processor for high-level activities, such as logical or content-independent inference, which can then in turn sometimes have an effect on output, top-down, by coming to a conclusion about the best action to take in some context. For Fodor there are both dedicated modules for solving peripheral problems in specific domains and non-modular central processes for general reasoning. A prime property of a domain-specific module is encapsulation, which means that it only draws on information, bottom-up, from its own domain, and is unaffected by higher-level, top-down inference. Our central processor can use scientific laws and logic to infer that a fish in water is not exactly where it appears to be, but a domain-specific module ensures that we still have the illusory experience. A domain-specific module can enable us to recognize a smiling face immediately, but central processing might be necessary for a domain-general inference about whether the smiling person is really friendly or not. Characteristic of domain-general reasoning is that any belief we have, from any domain, can be relevant to it and used in it. In contrast, the massive modularity hypothesis implies that there is no central processor. Some evolutionary psychologists have supported this hypothesis with challenging theoretical arguments, and have produced empirical evidence for it in a stimulating series of experiments. Using theoretical arguments, they would contend that domain-general thought is less adaptive than a domain-specific heuristic for deciding whether to trust a smiling person.
Domain-general reflections would supposedly take too long to answer this question, in part because the list of potentially relevant premisses is unlimited, and much greater efficiency could be gained, with little or even no loss in longer-term accuracy, by a domain-specific heuristic. They might suggest that an efficient heuristic chosen by natural selection would be to have some initial trust in smiling persons, but to identify quickly any cases in which they cheat us and to distrust them after that. In their experiments, evolutionary psychologists try to demonstrate that people do not naturally have any content-independent procedures for formal inferences, i.e. a mental logic, whether implemented with mental natural deduction rules (Rips 1994) or with mental models (Johnson-Laird and Byrne 1991). The experiments are alleged to confirm that people do not solve what are supposed to be formally equivalent problems in the same way, as they would do if they used logic on these problems and not domain-specific heuristics. The massive modularity hypothesis has an attractive metaphor. According to it, the whole mind is like a Swiss army knife, with many dedicated blades
for solving adaptive problems, and no general purpose tool at its centre (Cosmides and Tooby 1994). There are obvious problems with this knife metaphor: for one thing, there usually is a general-purpose blade in a Swiss army knife. Much more seriously, the metaphor can be misused by presupposing, in effect, that some content-general processes help to determine which dedicated blade of the knife is to be opened, or even that some homunculus does this. But these are not careful uses of the metaphor, and contradict its basic point, which is that there are no content-independent processes helping the blades. As I hope to show, there is no sound experimental support for the massive modularity hypothesis, but the central theoretical argument for it cannot be fully answered without a better understanding than we now have of human rationality and its evolution by natural selection. Serious consideration of the massive modularity hypothesis has the benefit of raising deep questions about rationality.
THE KANTIAN ARGUMENT
The most significant argument for the massive modularity hypothesis was first stated by Kant in a forceful way, though evolutionary psychologists seem unaware of the fact. Kant endorsed the principle that there will be 'no instrument' for any purpose in an 'organized being' which is not 'best adapted to it'. He then argued that the purpose of reason in any creature cannot be the instrumental one of its 'preservation', 'welfare', or 'happiness' because in that case 'nature would have hit upon a very bad arrangement in selecting the reason of the creature to carry out this purpose. For all the actions of the creature for this purpose ... would be marked out for it far more accurately by instinct, and that end would have thereby been attained much more surely than it ever can be by reason' (Kant 1997/1785, 395). Kant would agree with supporters of massive modularity that formal reason cannot effectively help us to decide whether we will be made more happy than unhappy in our lives by trusting smiles, and that nature can best pick out trustworthy smiles with some instinctive heuristic. Some contemporary Kantian scholars appear to be embarrassed by the above passage and deny that Kant's main argument depends on it (Korsgaard 1997, p. xii). However that may be, it could almost be written by an evolutionary psychologist, with one crucial difference at the last step. Kant concluded from his version of the argument that reason cannot have an instrumental function, but rather exists to be used a priori to infer normative laws for all finite rational beings. Kant (1981/1799) held, for example, that there is a normative law of reason requiring us to be honest. He thought that this law follows in a priori reasoning, and so it can admit of no exceptions, even if our goal in being dishonest on some occasions is to preserve other
people's happiness as well as our own by misleading a villain who wants to cheat us. Evolutionary psychologists would dispute Kant's conclusion and think of reasoning and rationality as instrumental, as serving individual human goals and interests. But some leading evolutionary psychologists have their own version of Kant's argument, only unlike him they infer from it, at the last step, that content-independent reasoning does not exist at all, and that reasoning and rationality are distributed in domain-specific modules which even they sometimes call instincts. In other words, they infer the massive modularity hypothesis from their version of Kant's argument. A better instrumentalist response to Kant, in my view, is to hold that formal or content-independent reasoning does have an instrumental function: that, with the right premisses, it can be of help in achieving practical goals, by supplementing, or even by compensating for the deficiencies of, instincts or domain-specific modules. However, supporters of massive modularity can overcome limitations in Kant's pre-Darwinian teleological biology to make his argument stronger for their conclusion. It is harder to criticize the hypothesis that reproductive success is better served by massive modularity than by domain-general reasoning. Many evolutionary psychologists would be less bold than Kant in stating an up-to-date version of his argument, and others would qualify to some extent the conclusion that formal or contentless reasoning does not exist as a natural mental activity. Tooby and Cosmides (1992) have been by far the boldest in stating a thoroughgoing version of the Kantian argument (though without reference to Kant), and they and their collaborators the most stimulating in their experiments on reasoning.
They argue for massive modularity by claiming that content-specific mechanisms 'will be far more efficient than general-purpose mechanisms', and that content-independent systems 'could not evolve, could not manage their own reproduction, and would be grossly inefficient and easily outcompeted if they did' (111-12). The extent to which many decisions affect relative fitness and reproductive success depends on relative frequencies, and many of these cannot be effectively inferred in a lifetime of observations by an individual under primitive conditions. The beneficial effects of incest avoidance are an example Tooby and Cosmides give. It may well be better, for both reproductive success and most people's individual happiness, to have a modular tendency to reject incest than to try it out to see whether the effects are positive or negative. Most people want to have healthy children, and for that goal, an adaptive module for incest avoidance is a good way to achieve instrumental rationality. Individuals could not be safely left to infer in the course of their lives, by some kind of general but natural statistical research, that incest has some tendency to produce unhealthy children. Adaptive modules do not have to be innate in the narrow sense of being fully formed or fixed at birth; there is usually only a biological preparedness for their development (Cummins and
Cummins 1999). Even instinctive behaviour in animals, like flight in most birds, has to be practised, and the development of adaptive preferences may be greatly affected by environmental or social conditions, such as whether brothers and sisters are raised closely together from an early age. The existence of an adaptive module is revealed in the relative ease with which all human beings, or almost all of them, learn to display it given the right conditions. There is firm evidence that an adaptive module for recognizing face-like appearances exists because everyone, or almost everyone, learns to do this quickly and accurately under a wide variety of conditions. A face-recognition ability was adaptive and good for our genes, but it can also be part of our instrumental rationality, since it is so often useful for many ordinary goals to be able to identify faces quickly and reliably. However, that does not show that a Fodorian system, with peripheral modules supplemented by domain-general reasoning, would not be even better than massive modularity, both for wider reproductive success under primitive conditions, and for individual happiness at any time. Other dual process theories of human thinking and reasoning, of the same general type as Fodor's, have recently been argued for (Evans and Over 1996; Sloman 1996; Stanovich 1999). These theories aim to account for people's basic deductive competence and capacity for content-independent reasoning, but recognize that this reasoning can only be effective in the real world when it is supplied by implicit processes with relevant and reliable premisses, some of which must come from input modules like those for vision. Explicit deduction and implicit modular processes can work together to contribute to instrumental rationality.
On the other hand, as I have already said, there is asserted to be empirical evidence for the uncompromising hypothesis of massive modularity, and I shall examine this critically before returning to theoretical arguments.
CHEATER DETECTION
Cosmides (1989) tried to get evidence for the existence of a domain-specific module in experiments on what had been considered a high-level logical problem, the selection task. Wason (1966) had introduced this task by presenting participants in an experiment with a conditional of the form, 'if p then q', and four cards, each known to have a p or not-p value on one side and a q or not-q value on the other side. For instance, the conditional might be:

(1) If a card has a vowel on one side, then it has an even number on the other side.
Then the four cards would have a letter on one side and a number on the other. Imagine that these cards are lying on a table and showing only the values A, T, 4, and 7. The object was to select just those cards that might
reveal whether the conditional was true or false. The correct choice was the p card and the not-q card, or in our example, the A card and the 7 card, as only these could reveal a falsifying case of p, or A, on one side and not-q, or 7, on the other. But most participants make the mistake of ignoring the not-q card, choosing either the p card alone or with the q, or 4, card. Participants sometimes do much better at the task when it is given more realistic content. Consider:

(2) If a person is drinking beer then that person must be over 19 years old.

In this case, the object of the task is to turn over just the cards which might indicate whether someone has violated the rule (2). When this task is given to people where there is a similar drinking age law, they correctly choose just the p card, indicating that someone is drinking beer, and the not-q card, indicating that someone is under 19 years old. Cosmides argued that people could not have a mental logic for solving an adaptive problem like this, for otherwise they would apply it to conditionals of the same logical form to get the same answer in each case, but since they do not respond in the same way to (1) and (2), they cannot have a mental logic. What they are supposed to have is a domain-specific module which can identify possible cases of cheating. She defined cheating as taking a benefit without paying an associated cost in a social contract or agreement, and related her social contract account of the selection task to biological studies of reciprocal altruism (Axelrod 1984; Trivers 1971). In her account, the domain-specific module can be applied to the task for (2) but not to that for (1), as she held that the drinking-age law is a kind of social contract and that violating it is cheating. She proposed her module as an adaptation enabling human beings to engage in reciprocal altruism and live in social groups, and claimed that it works by means of a Darwinian algorithm, a heuristic which may implicitly conform to certain rules we could write down, but which applies only to the domain of social contracts. Cosmides was by no means alone in presupposing that (1) and (2) are of the same logical form, but this is a mistake that leaves her with no good reason for rejecting the existence of a mental logic. At the highest level of analysis, all selection tasks can be seen as having the form of a decision-theoretic problem (Evans and Over 1996), but conditionals (1) and (2) differ logically: (1) is indicative and (2) is deontic.
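The normative analysis of the abstract task for (1) can be sketched in a few lines of code (a hypothetical illustration; the helper names and the set of possible hidden faces are my assumptions):

```python
# Sketch of the abstract selection task for rule (1): 'If a card has a vowel
# on one side, then it has an even number on the other.' A visible face is
# worth turning only if some possible hidden face could yield a falsifying
# case: vowel (p) paired with odd number (not-q).

VOWELS = set('AEIOU')

def is_p(face):       # a vowel?
    return face in VOWELS

def is_not_q(face):   # an odd number?
    return face.isdigit() and int(face) % 2 == 1

def worth_turning(visible, possible_hidden):
    # a falsifying case needs p on one side and not-q on the other
    if is_p(visible):
        return any(is_not_q(h) for h in possible_hidden)
    if is_not_q(visible):
        return any(is_p(h) for h in possible_hidden)
    return False

letters = list('ABCDEFGHT')                  # assumed pool of hidden letters
numbers = [str(n) for n in range(10)]        # assumed pool of hidden numbers
cards = {'A': numbers, 'T': numbers, '4': letters, '7': letters}
print([c for c, hidden in cards.items() if worth_turning(c, hidden)])
# -> ['A', '7'], i.e. the p card and the not-q card
```

The sketch only captures the falsification logic; as the surrounding discussion argues, it is exactly this logic that does not transfer unchanged to the deontic task for (2).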
It would actually be evidence against the existence of logical ability in people if they did not respond differently to indicative and deontic or other modal logical forms. For example, they would be badly confused about logical forms if they did not distinguish a proposition with a deontic 'must' or 'may' in it, as logical operators for obligation and permission, from a proposition without these operators, or from a proposition with a causal 'must' or 'may' in it, as logical operators
for causal necessity and possibility. These modal logical constants affect the logical form of conditionals and other propositions and must be distinguished from each other in different content-independent rules. For example, if some students are cheating, then it logically follows that it is causally possible for them to cheat, but not that it is permissible for them to cheat. The logical goal of an indicative task based on (1) is not even the same as that of the deontic task based on (2). Participants are told that the object of the former is to find out whether (1) is true or false, but in the latter, they are told that the point is to discover whether someone has violated (2), which is assumed to be a correct rule, or one truly in force, for guiding behaviour. The conditional like (1) in the original abstract selection tasks was artificially restricted to a statement about only the four cards on display. At that level of abstraction, and with this restriction, one needs little more than pure logic to infer that the p and not-q cards are the correct choices. However, in any more realistic selection task, particular judgements of probability and utility are required. This can be seen most easily by using the classic example from the philosophy of science as an unrestricted conditional:

(3) If it is a raven, then it is black.
Another form of (3) is used to state the ravens paradox in the philosophy of science (Howson and Urbach 1993), but the only practical way to investigate (3) is by examining ravens alone, in effect selecting the p card, and ignoring non-black things, in effect the not-q card. The set of non-black things is so large and heterogeneous that searching it is a most inefficient and improbable way of finding counter-examples to (3) even if they exist. Moreover, finding a raven that is black does a great deal to confirm (3), but finding a non-black thing that is a non-raven does almost nothing. People may ignore the not-q card in an abstract task because the equivalent choice in practical affairs is often a bad one. To solve any realistic problem like the selection task one does need much more than logical rules, whether extensional, deontic, or whatever. For example, judgements about set size and probability may be necessary, as when one realizes that one is unlikely to find a counter-example to (3) by examining a member of the set of non-black things, because this set size is so much larger than the set of ravens (Kirby 1994). In this kind of indicative task, judgements of information gain (Oaksford and Chater 1994) or of epistemic utility (Evans and Over 1996) are necessary as well. Probability and utility judgements of different types may often be grounded in dedicated modules, but that does not mean that higher-level logical rules are of no help here. From (3) and the fact that some bird is a raven, we may infer by modus ponens that this bird must be black, i.e. the probability that this bird is black is 1 given the premisses for modus ponens. It then immediately follows that finding the raven is black is significant for the confirmation of (3).
Cosmides (1989) extensively reviewed experiments on selection tasks and presented some of her own, and her conclusion was that p and not-q cards are correctly chosen only when these are the ones which might reveal cases of cheating in social contracts (262). But consider: (4) If you clear up spilt blood, then you must wear rubber gloves.
In a selection task based on (4), the p card, indicating that someone has cleared up spilt blood, and the not-q card, indicating that someone has not worn gloves, are the only ones which might reveal that (4) has been violated, and participants do tend to select these cards (Manktelow and Over 1990). Yet clearly people who clear up spilt blood without wearing rubber gloves are not cheating; what they are doing is endangering their lives. In a general decision-theoretic analysis, being cheated and putting one's life or health at risk are two kinds of cost that people should try to find out about, in the hope of avoiding or compensating for them in some way (Manktelow and Over 1995). Cosmides and Tooby (1992) have responded to this example by proposing a second domain-specific module for understanding what they call precautions for dealing with hazards. There is obviously an adaptive tendency in us to be cautious about what we believe to be life-threatening dangers, but Cosmides and Tooby are being far more contentious in claiming that this is only the result of an encapsulated module. It is still more contentious to hold that there is another such dedicated module to do with cheating. Controlled experiments would presumably confirm the sad experience of parents that it is easier for children to acquire an objection to being cheated than to being given more than they deserve. Even so, participants in a selection task will readily identify possible cases in which customers are given more than they are entitled to by a store in a special offer (Manktelow and Over 1991, Experiment 3). Cosmides (1989) said in a footnote (196) that the Darwinian algorithm in her module for social contracts has to be what she called item-independent rather than item-specific. 
Vampire bats are the usual example of animals that practise reciprocal altruism in the way that they share blood with each other (Wilkinson 1990), but being given blood is the only benefit that their item-specific algorithm has to recognize implicitly. Of course, Cosmides had to acknowledge that a benefit for human beings cannot be defined as a single item like food, but she did not take the next step of explaining how this point is compatible with massive modularity and encapsulation. Human beings can use almost any belief they have to infer that something is a benefit in some context, even if it is of no intrinsic value, like many forms of money, or totally abstract, like logical inference itself. The same point can be made about costs. Almost any belief we have can be relevant or useful as a premiss for inferring that people are cheaters because they have taken one of these benefits without paying the corresponding cost. An encapsulated cheater-detection module would have severe limitations.
Dedicated modules may help us to learn quickly how to avoid serious costs of many different types, and such bottom-up systems may always be our primary way of avoiding some damaging costs in day-to-day experience. But logical inference, however trivial, can also be required, as when we infer from the obligation to perform the actions referred to in the consequent of (3) and (4) that we have permission to perform them. That obligation logically implies permission is a logical relation for any deontic rules, whether these have a particular content about cheating, precautions, or any other possibility. Manktelow and Over (1995) try to give a psychologically plausible semantics for deontic statements which satisfies the logical relations for the deontic logical constants, but these relations are contentless by their very definition, and so there is much space for evolutionary psychology to supply detailed content for separating social, prudential, and other obligations and permissions from each other. (See Cummins (1998) and Fiddick, Cosmides, and Tooby (2000) for more of this detailed work, and Almor and Sloman (2000) and Holyoak and Cheng (1995) for different analyses of deontic reasoning.) Whatever we learn about specific content from modules, there are other high-level processes in operation. It is not enough for any module simply to record the violation of a deontic rule. There is a subtle human ability to judge fine degrees of aggravating and mitigating circumstances (Manktelow, Fairley, Kilpatrick, and Over 1999). It would not be rational to condemn people as cheaters who give us a good excuse for taking a benefit without paying the cost. Perhaps their inability to reciprocate in an exchange with us has been caused by a debilitating illness. However many encapsulated modules were assumed to exist, for understanding causation and human responsibility, these would somehow have to be made to work together properly.
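The contentless relation appealed to here, that obligation implies permission whatever the domain, can be sketched schematically (a toy model; the enum and rule names are mine, not from Manktelow and Over's semantics):

```python
# A toy model of a content-free deontic relation: whatever the subject matter
# of a rule, an obligatory act is thereby permitted, and a forbidden one is not.
from enum import Enum

class Deontic(Enum):
    OBLIGATORY = 'obligatory'
    PERMITTED = 'permitted'
    FORBIDDEN = 'forbidden'

def permitted(status):
    # obligation logically implies permission; prohibition excludes it
    return status in (Deontic.OBLIGATORY, Deontic.PERMITTED)

# illustrative rules spanning a prudential and a social-contract domain
rules = {'wear rubber gloves': Deontic.OBLIGATORY,
         'drink beer under 19': Deontic.FORBIDDEN}
print({act: permitted(status) for act, status in rules.items()})
# -> {'wear rubber gloves': True, 'drink beer under 19': False}
```

The point of the sketch is only that `permitted` mentions no domain at all: the same inference applies to precaution rules and social contracts alike, which is what a purely modular account has to explain away.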
It is unclear how people could ensure this without at least grasping the logical distinctions between the causal and deontic modalities. Of course, giving and accepting excuses must have helped us to work well together in small social groups in the primitive human environment of evolutionary adaptedness (EEA). But excuse giving cannot be fully explained by an encapsulated module, since again no limit can be placed on the premisses which can be relevant to an excuse. The product of an encapsulated module is automatic and unalterable by high-level reasoning, but such reasoning can save us from a false belief when a module is inaccurate. We can infer at a high level that an apparent face is an optical illusion, even as we continue to experience it. Actually, the supposed cheater detection module does not always appear to work in any sense when we are convinced at a high level that we are not being cheated, but if it does, we can infer on logical grounds that it too is inaccurate. We can infer that a feeling that someone has cheated us must be wrong because premisses we definitely accept logically imply that we must have had our
money back, whatever we imagine we do or do not recall. The highest level of general reasoning is non-constructive, and this can be helpful when a dedicated module is not working at all. General, top-down processes may enable us to infer that someone or other is cheating us even though we do not, at first anyway, know who: perhaps the books do not balance and more than one person has access to these. This inference is non-constructive simply because it does not enable us to identify the cheater, but it could be useful by telling us to start looking for one. Non-constructive reasoning can compensate for inaccurate modules or, to some extent, act in place of inoperative ones, and it is the best possible example of an unencapsulated mental process.
FREQUENCIES
Cosmides (1989) tried to give a modular account of responses in what had been held to be a logical problem, the selection task. Turning to inductive inference, Cosmides and Tooby (1996) want to uncover the existence of another dedicated module, one for understanding frequencies. The central part of their strategy is like that of Cosmides (1989): they contend that people are not equally good at solving formally equivalent problems. They argue that it is adaptive to be able to notice and recall frequency information, but claim that single-case probabilities are useless for adaptive judgement because these probabilities are unobservable. Cosmides and Tooby infer that, due to the existence of an adaptive module responding to frequency information, probability problems will be easier to solve when expressed in terms of frequencies rather than single-case probabilities. For example, participants in an experiment are told that 1 out of every 1,000 Americans has some disease, and that there is a test for detecting it. If the test is given to someone with the disease, the result is positive. However, the result is positive as well for 50 out of every 1,000 people who are healthy. The participants are instructed to imagine that a sample of 1,000 Americans is randomly selected by lottery, and then they are asked how many of these Americans who test positive for the disease actually have it. The correct answer is that approximately 1 out of 50, or better about 1 out of 51, have the disease, or about 2 per cent, and this is the answer most participants give. However, most participants give an incorrect answer when the probabilities are not clearly expressed as frequencies and the question is about a single-case probability.
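Both framings of the disease problem come out of the same arithmetic, which can be checked directly (a sketch using the figures reported in the text; the variable names and the Bayesian layout are mine):

```python
# The disease problem: base rate 1 in 1,000; the test is always positive for
# the diseased; 50 in 1,000 (5 per cent) of the healthy also test positive.
base_rate = 1 / 1000
sensitivity = 1.0
false_positive_rate = 50 / 1000

# Frequency framing: in 1,000 people, about 1 true positive and roughly 50
# false positives, so about 1 in 51 of the positives really has the disease.
# Exact Bayes: P(disease | positive) = P(pos | disease) P(disease) / P(pos).
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive_rate
p_disease_given_positive = base_rate * sensitivity / p_positive

print(round(1 / p_disease_given_positive, 2))   # 50.95 -- i.e. 1 out of 50.95
print(round(p_disease_given_positive, 4))       # 0.0196 -- roughly 2 per cent
```

The computation shows why 1 in 51 is the good frequency approximation and 1 out of 50.95 the strictly correct answer: the 999 healthy people, not 1,000, supply the false positives.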
In one condition of the latter type, the participants are merely told that the disease has a prevalence of 1/1,000 and the test a false positive rate of 5 per cent, and then they are asked for the probability that a specific person with a positive test has the disease. Some doubts can be raised about the details of Cosmides and Tooby's experiments and their interpretation of their data. But the most relevant
Rationality of Evolutionary Psychology
197
point here is the general one that these experiments are, if anything, evidence that content-independent abilities do exist. In the experiment just described, the participants are all but told outright that 1 out of the 1,000 sample Americans tested will really have the disease, but that about 50 of these Americans who are healthy will also have a positive test. From this information, the participants can easily get the approximately correct answer of about 1 in 50, or about 1 in 51, by whole number arithmetic (Howson and Urbach 1993). Of course, no primitive human beings could have done this arithmetic using our sophisticated position value notation, in which it is easy to work with numerals like 50 and 1,000. And our ability to use this notation to solve Cosmides and Tooby's word problem hardly shows that we tend to make rational probability judgements in the real world. Cosmides and Tooby correctly argue that people do not explicitly follow Bayes's theorem to solve problems like the above. If they did, they would have no trouble with the different ways of expressing the same problem. They would also conform to the theorem more closely if they more often gave the strictly correct answer, which in the above is 1 out of 50.95. But it is one thing to say that people do not explicitly follow, or even conform precisely to, a content-independent form of Bayes's theorem, and another to hold that they follow no content-independent rules when solving these problems, and even that following such rules is never of any help in inductive reasoning. We can reinforce this point by considering what Gigerenzer (1998) claims about natural sampling. He holds that this is a way of acquiring frequency information that was adaptive for other species as well as ourselves in the EEA. In his example, a physician in an illiterate society displays her 'ecological intelligence' in the following way.
She has 'discovered a symptom' of a disease by remembering that she has seen 1,000 people in her life and that 10 of these have had the disease. She recalls that 8 out of those 10 had the symptom and that 95 out of the 990 without the disease also had the symptom. Using these memories, she judges that the frequency with which someone with the symptom has the disease is 8 out of 103 (8 plus 95). She does not have to use Bayes's theorem explicitly to get this result. This story has many of the characteristics of the examples that are supposed to show that a dedicated module for recording frequencies was adaptive and enables us to make rational probability judgements. Evolutionary psychologists are keen to stress that people cannot fully follow or conform to logic and probability theory, for these are unbounded theories and even apparently simple inferences within them can rapidly become of great complexity. So much is common currency among cognitive psychologists, and they could have learned about defining rationality for finite beings from Kant. But as the story illustrates, Gigerenzer's view implies that people have really massive memories, and apparently many of these are to be explicit memories, like the old physician's knowledge that she has met 1,000 people.
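The physician's natural sampling and an explicit application of Bayes' theorem can be compared in a short sketch (the counts are those of Gigerenzer's story; the variable names are mine). Both routes give the same answer from the same remembered data:

```python
from fractions import Fraction

# The physician's remembered counts (from Gigerenzer's story).
seen = 1000
with_disease = 10
symptom_and_disease = 8    # of the 10 with the disease
symptom_no_disease = 95    # of the 990 without it

# Natural sampling: divide the relevant remembered counts.
by_counts = Fraction(symptom_and_disease,
                     symptom_and_disease + symptom_no_disease)

# Explicit Bayes' theorem, from the same memorized data.
p_disease = Fraction(with_disease, seen)
p_symptom_given_disease = Fraction(symptom_and_disease, with_disease)
p_symptom_given_healthy = Fraction(symptom_no_disease, seen - with_disease)
by_bayes = (p_disease * p_symptom_given_disease) / (
    p_disease * p_symptom_given_disease
    + (1 - p_disease) * p_symptom_given_healthy)

print(by_counts, by_bayes)   # 8/103 8/103
```

The shortcut works, but only because the physician is assumed to have all four counts accurately in memory.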
There is no evidence that human memories, working alone without any content-independent rules, could cope with all the natural sampling necessary to have reproductive success under primitive conditions or in the EEA, let alone to have rationality in the contemporary world. Evolutionary psychologists often point out that other species can track some sample frequencies-their favourite example is that of bumblebees which can do this, to some extent, in their foraging behaviour among flowers (Real 1991). Bumblebees must make do with severely limited memories relative to ourselves, but have the advantage that the flowers they sample tend to be much more homogeneous in the relevant properties than many of the samples we must take. Our own memory limitations are surely serious for creatures who usually have to take an interest in much more than flowers. Suppose we are told how many essays we have marked, how many of these got C or less, how many of those had a spelling mistake on the first page, how many essays got a higher mark than C, and how many of those had a spelling mistake on the first page. Given this information, it would be trivial for us to use content-independent rules, about the partitions and unions of sets and simple arithmetic, to infer the frequency with which essays with a spelling mistake on the first page got a C or less. However, we cannot recall this information for ourselves, and if we actually tried to make a judgement about this frequency we would get a biased result. For one thing, we would be prone to the biases of the availability heuristic (Tversky and Kahneman 1973), as we would tend to recall some of the essays much better than others, e.g. the best ones or worst ones. In general, using natural sampling, in the way recommended by Gigerenzer, would expose us to many biases. Notice that Gigerenzer makes it far too easy for his illiterate physician to infer that she has really found a symptom of the disease.
In this and other examples used by evolutionary psychologists, the base rate is implicitly low-only 1 in 100 people has the disease in the story-and one difficulty here can be illustrated by assuming this is not so (Over and Green 2001). Suppose the physician again recalls seeing 1,000 people in her life but that 800 of these have had the disease-after all not all diseases are uncommon under primitive conditions. She remembers that 640 of these 800 had the 'symptom' and that 160 out of the 200 without the disease also had the 'symptom'. Now with all this memorized, she can say that the probability that someone with the 'symptom' has the disease is 640 out of 800 (640 plus 160). But that is the same as 800 out of 1,000, the base rate of the disease, and clearly she has wasted her mental resources. Naive natural sampling will keep her unaware of the potentially damaging fact that the 'symptom' is actually independent of the disease and cannot be used to diagnose it. However well endowed with memory she may be, she needs some effective way of discovering whether a possible symptom is causally
related to some disease, and what this relation is, and simple natural sampling on its own will not always give it to her. Gigerenzer claims two advantages for natural sampling. In using it, one does not have to attend to the base rate, and one does not have to normalize the frequencies: to have any idea, in our example, that 640 out of 800, or 160 out of 200, is the same as 800 out of 1,000. But the example just given demonstrates that we must at least take some implicit account of the base rate, or note how often the disease occurs when the 'symptom' does not, and in effect normalize to infer that the 'symptom' is independent of the disease. Much greater efficiency, without massive memory resources, can be gained by the proper use of some information on base rates, or more generally on the set sizes of supposed causes and effects, to help confirm or disconfirm causal claims. Over and Jessop (1998) do a Bayesian analysis to reveal how people can use knowledge of set sizes for effective inference about causal claims. Where the set sizes of the possible cause and effect are known to be large, for example, it is better to observe cases in which the supposed cause does not occur, than cases in which the supposed cause does occur, to find whether or not the effect follows (Over and Green 2001). There is evidence that people do more or less conform to this analysis (Green and Over 2000). Let us imagine that a large majority of the people in a big village, except ourselves, have eaten the red plant and a large majority are ill. We could proceed, in natural sampling, to memorize exactly how many both ate the red plant and are ill and how many ate the red plant. From this, a simple module could tell us that the probability that someone is ill given that they ate the red plant is high, but after all we knew that already if we saw lots of people eating the red plant and lots of people getting ill.
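The point about base rates can be made concrete. In the sketch below (my illustration of the comparison the text argues is needed, not a procedure proposed by Gigerenzer), a 'symptom' counts as diagnostic only if the conditional probability differs from the base rate:

```python
from fractions import Fraction

def is_diagnostic(symptom_and_disease, disease_total,
                  symptom_and_healthy, healthy_total):
    """A 'symptom' is only diagnostic if P(disease | symptom)
    differs from the base rate P(disease)."""
    base_rate = Fraction(disease_total, disease_total + healthy_total)
    p_disease_given_symptom = Fraction(
        symptom_and_disease, symptom_and_disease + symptom_and_healthy)
    return p_disease_given_symptom != base_rate

# Over and Green's variant: 800 of 1,000 have the disease; 640 of
# those 800 show the 'symptom', as do 160 of the 200 healthy people.
print(is_diagnostic(640, 800, 160, 200))   # False: 640/800 = 800/1,000
# The original story: 8 of 10 diseased, 95 of 990 healthy.
print(is_diagnostic(8, 10, 95, 990))       # True: 8/103 is well above 1/100
```

Making this comparison at all requires implicitly normalizing against the base rate, which is precisely what naive natural sampling omits.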
High conditional probability is not the same as causation, and the critical question is whether eating the red plant will cause us to become ill. For that it is much better to find some of the small number of villagers who did not eat the red plant and observe whether they are ill. If only a very small number of these are found and none are ill, then the hypothesis that the red plant causes the illness is strongly confirmed. This method is intuitively right and complies with a Bayesian analysis, but it is much more than natural sampling. It presupposes a different notion of probability than frequency: that of confirmation. To confirm hypotheses effectively, one needs some logical understanding, which one must also have to infer useful conclusions from well-confirmed causal or other hypotheses. A little bit of content-independent logic, here and in general, makes a massive memory for natural sampling unnecessary. Tversky and Kahneman (1983) were the first to discover that giving information in frequencies, or really in finite sets, sometimes helps participants to solve certain probability problems correctly. The beneficial effect is the result of making a logical set inclusion relation transparent, as Tversky
200
David E. Over
and Kahneman put it (see also Johnson-Laird et al. 1999). Cosmides and Tooby (1996) extended the results, but though they only used transparent problems, they claimed that participants solve these because they have an adaptive module for understanding frequencies. Nevertheless, if an experiment does not use sets which are nested in a logically transparent way, nor numbers which are especially easy for whole number arithmetic, then the problems can become hard for participants. Consider again the easy experimental task specifying that 1 out of 1,000 people has a disease, and a test which always gives a positive result for a person with the disease, but which is also positive for 50 out of 1,000 healthy people. This task can become hard for participants when the example is the same except that the test is said to be positive for 1 out of 20 healthy people (Evans et al. 2000; Girotto and Gonzalez 2001). Presumably the task would be still harder if the test were said to be positive for 37 out of 740 healthy people, and yet 1 in 20, 37 in 740, and 50 in 1,000 are all the same objective frequency. Natural sampling is of limited use for more than one reason. It is in fact wrong to claim that objective frequencies are more observable than single-case probabilities. Firstly, single-case probability judgements can be useful just because of our evolutionary history. The first time we see a snake we may make the single-case judgement that this creature is probably dangerous, and this judgement may be adaptive under primitive conditions, thanks to the sampling human genes have in effect done, giving us some innate tendency to fear snakes. Secondly, we must not confuse sample frequencies with objective frequencies. People can only observe and record sample frequencies, and from these they must infer, at least implicitly, what the objective frequencies are.
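That 1 in 20, 37 in 740, and 50 in 1,000 name one and the same objective frequency is a matter of trivial, content-independent normalization, as a one-line check shows:

```python
from fractions import Fraction

# The three 'false positive' formats from the experiments, normalized.
formats = [Fraction(1, 20), Fraction(37, 740), Fraction(50, 1000)]
print(set(formats))   # {Fraction(1, 20)}: all three reduce to 1/20
```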
Evolutionary considerations themselves, about camouflage, disguise, and deception in nature, imply that human beings will acquire many biased sample frequencies. To presuppose that people grasp the existence and possible sources of bias in sample frequencies, under primitive conditions or in the contemporary world, is to beg the question about human rationality. So far work on natural sampling has not even addressed the question of whether people make rational judgements about biases in sample frequencies. It is fine to note, as far as it goes, that natural sampling records the sample size. For example, some people may recall that they have caught a fish 19 out of 20 times they have been to a lake. Evolutionary psychologists suggest that this sample makes people more confident that they will be successful on their next fishing trip than when they had caught a fish on 3 out of 4 trips to the lake. But first this presupposes that people can make good single-case probability judgements on the basis of sample size. And second the implication is that people have some grasp of the law of large numbers. But this law is a content-independent mathematical theorem, and its proper application makes an assumption about the objective probability. An encapsulated module may effectively record possibly biased sample frequencies, but then
more higher-level central processing, following the law of large numbers or other content-independent rules, may be necessary to confirm or disconfirm hypotheses about what the objective frequency really is (Over 2000a, b). Evolutionary psychologists do not say how a decision is made after some natural sampling. Suppose we do the sampling in the red plant example above, finding it highly probable that we will get ill given that we eat the red plant. Do we then come to the deontic conclusion that we should not eat the red plant? Eating the red plant may not be the cause of the illness. If that is so and we do not eat it, then we will go hungry without decreasing the chance that we will get ill-that does not sound adaptive. One can hardly imagine that massive natural sampling and memory would identify the real cause in time for any decision to be made. Thanks to the massive modularity hypothesis, evolutionary psychologists lack an adequate account of both deontic and causal reasoning and their relation to each other in decision-making. Even more generally, they should have some theory of the epistemic utility of reasoning and how this relates to decision-making and its benefits and costs, for the genes or the individual. No doubt evolutionary psychology can discover much of value about people's frequency and probability judgements. For example, Brase, Cosmides, and Tooby (1998) have results suggesting that these judgements may be better about 'natural' whole objects than their arbitrary parts. But supporters of the massive modularity hypothesis try to explain too much with natural sampling, and so commit themselves to a massive memory hypothesis as well. However, without a massive memory, an unlimited amount of information can be implicitly contained in well-confirmed general propositions and then made explicit through logical deduction.
True, restrictions have to be placed on any mental models or rules if finite human beings are to use them for inference, but that has been done in the main psychological theories of deduction (Johnson-Laird and Byrne 1991; Rips 1994). The efficiency of deduction in saving on memory is probably part of the answer to the argument that it is not as adaptive as massive modularity. However, the best system of all is most plausibly a dual one, with both modularity and some modest deductive competence.
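The earlier point about sample size, that 19 catches in 20 trips warrant more confidence than 3 in 4, rests on the law of large numbers. One content-independent way to make it explicit (the standard error of a proportion is my choice of illustration, not the author's) is:

```python
import math

def proportion_standard_error(successes, trials):
    """Content-independent measure of sampling uncertainty:
    the standard error of an observed proportion."""
    p = successes / trials
    return math.sqrt(p * (1 - p) / trials)

# 19 catches in 20 trips vs. 3 catches in 4 trips: the larger
# sample pins down the underlying frequency far more tightly.
print(round(proportion_standard_error(19, 20), 3))   # 0.049
print(round(proportion_standard_error(3, 4), 3))     # 0.217
```

The calculation is pure arithmetic over set sizes; nothing in it is specific to fishing, which is just the point about content independence.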
DUAL PROCESSES
Stanovich (1999) has helpfully surveyed dual process theories and made some important points about them. His own approach makes use of the dual process theories of Evans and Over (1996) and Sloman (1996). In these, there are two types of mental processes in what Stanovich calls System 1 and System 2. Processes in System 1 are tacit or implicit, relatively fast, associative or connectionist in nature, and automatic and largely
unconscious in operation. System 2 processes are explicit, relatively slow, rule-following, and more controlled and conscious in operation, but can override System 1 processes when these are inaccurate. It may be that a System 1 module gives us the visual illusion of a face in some bushes, but we do not have a false belief if we use System 2 to infer, from our general beliefs, that there can be no face there. Another point, which Stanovich does not bring out, is that System 2 can make the sort of non-constructive inferences discussed above in relation to cheating. We might be able to infer in System 2, from general propositions we have confidence in, that someone is hiding somewhere or other in the bushes and looking at us, though System 1 fails to indicate a face at any particular place. In such non-constructive reasoning, we have the best example of System 2 in operation. We direct our attention to some beliefs and infer from these the relevant conclusion, and we could give some report of this activity in reasons for the conclusion. We have less control over or knowledge of the workings of System 1, and can only hope that searching the bushes with our eyes will help a module in System 1 to present us with a face there. Stanovich also makes a distinction between normative rationality and evolutionary rationality. He defines normative rationality as the achievement of individual utility, as measured by the individual's desires and goals, and evolutionary rationality as the achievement through adaptations of utility for genes, as measured by their goal of reproductive success. Where this distinction is ignored, terms like 'ecological intelligence' and 'ecological rationality' can be very slippery in meaning and lead to much confusion (Stanovich and West 2002).
It is well known that, in Stanovich's terms, normative rationality and evolutionary rationality can be far apart in certain circumstances (Skyrms 1996), though this fact is ignored by most supporters of massive modularity. Stanovich himself holds that System 1 is primarily responsible for evolutionary rationality and System 2 for normative rationality. Any processes in System 1, for identifying faces or cheaters, or keeping track of natural sampling, have the function of facilitating reproductive success, but they help the whole person as well when they make a contribution, as they often do, to the attainment of individual goals. On the other hand, the primary role of System 2, according to Stanovich, is to override System 1 on those occasions when it does not serve the goals of the individual. Stanovich does think that System 2 is also the result of evolution by natural selection (see also Over and Evans 2000), but its processes, as the more controlled and conscious ones of an individual, aim at the individual's goals whether or not these are consistent with those of the genes. Stanovich presents much evidence that the people who do well, by the standards of logic and probability theory, in many experiments on abstract problems are of higher cognitive ability than those who do poorly in these experiments. Cognitive ability is operationalized here by means of standard tests, particularly the Scholastic Aptitude Test (SAT). For instance,
participants who correctly solve the abstract selection task, such as that using conditional (1) above, have significantly higher SAT scores than those who are incorrect. Those with the high SAT scores tend to choose the p and not-q cards, while those with the low SAT scores tend to choose p and q cards or other combinations. But some of those with lower cognitive ability do solve the abstract task correctly, and thus it appears that most people have some capacity to use System 2 rule following to override System 1 processes when this is advantageous to them as individuals. In abstract experiments, an individual must override System 1 to get the correct answer, and sometimes even in the real world this will be necessary to get benefits or avoid costs for the individual. This impressive work actually makes it more difficult to answer a contemporary version of the Kantian argument. Evolutionary psychologists could reply that these results are evidence that System 2 does not exist as an adaptation, nor even as a direct side effect of an adaptation. They could argue that a true cognitive adaptation tends to be possessed by all normal human beings, as a face-recognition ability is. Many evolutionary psychologists could be reinforced in their belief that the natural human mind is massively modular, and that the ability to think in abstract terms is some kind of indirect side effect of that modularity found only in some people under exceptional cultural influences, like attendance at superior schools. Evans and Over (1996) distinguish in our dual process theory between two kinds of instrumental rationality. Rationality 1 is the wider concept. People are rational 1 when they have a reliable means for achieving a goal, even if this means is purely implicit in System 1. But sometimes people have rationality 2 as well.
Rationality 2 could be called rule rationality-people have this when they explicitly follow, and not merely implicitly comply with, a formal rule which is part of a reliable process for reaching one of their goals. They have a good reason for what they do if following the rule is part of such a reliable process. Rationality 2 is the result of System 2 and can be of help to individuals and their genes, but neither System 2 nor rationality 2 is ever found on its own-they are parasitic, to use a philosopher's term, on System 1. Consider logical rule-following as the best example of System 2 in operation. This is not always a satisfactory way of trying to get what one wants, but when it helps, it needs reliable and relevant premisses, and these depend on System 1, on an effective memory, and ultimately on accurate observations. And after System 2 performs any helpful explicit inference from System 1 input processes, System 1 output processes must set appropriate action in motion. Our argument is that there is a modest capacity for rationality 2 in all human beings. This basic level of formal ability in System 2 cannot easily be dismissed as an adaptation, and we have discussed above some of the ways in which it can be helpful, whether to individuals or their genes. It is so trivial at times that its essential role can be overlooked, in the very meaning of
logical constants, like those for obligation and permission, in grasping set membership and set inclusion in probability judgements, and in making explicit information that is implicitly contained in general propositions and rules. Human beings sometimes face novel problems for which they are badly prepared by any domain-specific modules and by past training or conditioning, and for these problems also content-independent inference can help. However, illustrating how beneficial this reasoning can be is not to give a theory of why its development under primitive conditions, on top of some modularity, was more adaptive than increasing massive modularity, or of how it arose as a side effect of what was adaptive. Moreover, this theory is needed to gain deeper understanding of content-independent thought, and would be relevant to philosophical debates about it. The issue of how its purest form, that of non-constructive reasoning, could have evolved is the phylogenetic counterpart of the ontogenetic question in the theory of meaning (raised by Dummett 1978) of how this reasoning could be learned or justified.

CONCLUSION
The empirical case for the massive modularity hypothesis, and the type of domain-specific rationality which comes with it, is unconvincing. The hypothesis in its current form does not have an adequate account of the deontic and causal modalities, nor of rational probability judgement and decision-making, and it implies the existence of an incredible massive memory. A massively modular mind would suffer from many severe biases and yet not have the chance to override these with high cognitive processes that follow content-independent rules. Its metaphorical image should not be a Swiss army knife, which has a general purpose blade in it and is better suited to representing a dual process theory, but a huge record book of natural sampling too heavy to lug across the EEA. However, the contemporary version of the Kantian theoretical argument in evolutionary psychology for massive modularity is still a potential threat, and could be more so with a weaker notion of encapsulation. Dual process theories of the mind are well placed to make use of any domain-specific modules discovered by evolutionary psychologists, but will be at risk of collapsing into some new and weaker form of the massive modularity hypothesis until much more is understood about the evolution of content-independent reasoning.
REFERENCES

Almor, A., and Sloman, S. (2000), 'Reasoning versus text processing in the Wason selection task-A non-deontic perspective on perspective effects', Memory and Cognition, 28(6), 1060-70.
Axelrod, R. (1984), The Evolution of Cooperation (New York: Basic Books).
Barkow, J. H., Cosmides, L., and Tooby, J. (1992), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (New York: Oxford University Press).
Brase, G. L., Cosmides, L., and Tooby, J. (1998), 'Individuation, counting, and statistical inference: The role of frequency and whole-object representations in judgment under uncertainty', Journal of Experimental Psychology, 127, 3-21.
Cosmides, L. (1989), 'The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task', Cognition, 31, 187-276.
--and Tooby, J. (1992), 'Cognitive Adaptations for Social Exchange', in Barkow, Cosmides, and Tooby (1992) above.
----(1994), 'Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science', Cognition, 50, 41-77.
----(1996), 'Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty', Cognition, 58, 1-73.
Cummins, D. D. (1998), 'Social roles and other minds: The evolutionary roots of higher cognition', in D. D. Cummins and C. Allen (eds.), The Evolution of Mind (New York: Oxford University Press).
--and Cummins, R. (1999), 'Biological preparedness and evolutionary explanation', Cognition, 73, B37-B53.
Dummett, M. (1978), Truth and Other Enigmas (London: Duckworth).
Evans, J. St. B. T., Handley, S. J., Perham, N., Over, D. E., and Thompson, V. A. (2000), 'Frequency versus probability formats in statistical word problems', Cognition, 77, 197-213.
--and Over, D. E. (1996), Rationality and Reasoning (Hove, UK: Psychology Press).
Fiddick, L., Cosmides, L., and Tooby, J. (2000), 'No interpretation without representation: The role of domain-specific representations and inferences in the Wason selection task', Cognition, 77, 1-79.
Fodor, J. (1983), Modularity of Mind (Cambridge, Mass.: MIT Press).
Gigerenzer, G.
(1998), 'Ecological intelligence', in D. D. Cummins and C. Allen (eds.), The Evolution of Mind (New York: Oxford University Press).
Girotto, V., and Gonzalez, M. (2001), 'Solving probabilistic and statistical problems: A matter of information structure and question form', Cognition, 78, 247-76.
Green, D. W., and Over, D. E. (2000), 'Decision theoretic effects in testing a causal conditional', Current Psychology of Cognition, 19, 51-68.
Holyoak, K. J., and Cheng, P. W. (1995), 'Pragmatic factors in deontic reasoning', Thinking and Reasoning, 1, 289-313.
Howson, C., and Urbach, P. (1993), Scientific Reasoning: The Bayesian Approach, 2nd edn. (La Salle, Ill.: Open Court).
Johnson-Laird, P. N., and Byrne, R. M. J. (1991), Deduction (Mahwah, NJ: Lawrence Erlbaum Associates).
--Legrenzi, P., Girotto, V., Legrenzi, M., and Caverni, J-P. (1999), 'Naive probability: a mental model theory of extensional reasoning', Psychological Review, 106, 62-88.
Kant, I. (1997/1785), Groundwork of the Metaphysics of Morals, trans. by M. Gregor (Cambridge: Cambridge University Press).
--(1981/1799), 'On a supposed right to lie because of philanthropic concerns', in Grounding for the Metaphysics of Morals, trans. by J. W. Ellington (Indianapolis: Hackett Publishing Co.).
Kirby, K. N. (1994), 'Probabilities and utilities of fictional outcomes in Wason's four-card selection task', Cognition, 51, 1-28.
Korsgaard, C. M. (1997), Introduction to Immanuel Kant, Groundwork of the Metaphysics of Morals, trans. by M. Gregor (Cambridge: Cambridge University Press).
Manktelow, K. I., Fairley, N., Kilpatrick, S. G., and Over, D. E. (1999), 'Pragmatics and strategies for practical reasoning', in G. De Vooght, G. D'Ydewalle, W. Schaeken, and A. Vandierendonck (eds.), Deductive Reasoning and Strategies (Mahwah, NJ: Erlbaum).
--and Over, D. E. (1990), 'Deontic thought and the selection task', in K. J. Gilhooly, M. Keane, R. H. Logie, and G. Erdos (eds.), Lines of Thinking, i (Chichester: Wiley).
----(1991), 'Social roles and utilities in reasoning with deontic conditionals', Cognition, 39, 85-105.
----(1995), 'Deontic reasoning', in S. E. Newstead and J. St. B. T. Evans (eds.), Perspectives on Thinking and Reasoning (Hove, UK: Erlbaum).
Oaksford, M., and Chater, N. (1994), 'A rational analysis of the selection task as optimal data selection', Psychological Review, 101, 608-31.
Over, D. E. (2000a), 'Ecological rationality and its heuristics', Thinking and Reasoning, 6, 182-92.
--(2000b), 'Ecological Issues: A Reply to Todd, Fiddick, and Krause', Thinking and Reasoning, 6, 385-8.
--and Evans, J. St. B. T. (2000), 'Rational distinctions and adaptations', Behavioral and Brain Sciences, 23, 693-4.
--and Green, D. W. (2001), 'Contingency, causation, and adaptive inference', Psychological Review, 108, 682-4.
--and Jessop, A. (1998), 'Rational analysis of causal conditionals and the selection task', in M. Oaksford and N. Chater (eds.), Rational Models of Cognition (Oxford: Oxford University Press).
Real, L. A.
(1991), 'Animal choice behaviour and the evolution of cognitive architecture', Science, 253, 980-6.
Rips, L. J. (1994), The Psychology of Proof (Cambridge, Mass.: MIT Press).
Samuels, R. (1998), 'Evolutionary psychology and the massive modularity hypothesis', British Journal for the Philosophy of Science, 49, 575-602.
Skyrms, B. (1996), The Evolution of the Social Contract (Cambridge: Cambridge University Press).
Sloman, S. (1996), 'The empirical case for two systems of reasoning', Psychological Bulletin, 119, 3-22.
Stanovich, K. E. (1999), Who is Rational? Studies in Individual Differences in Reasoning (Mahwah, NJ: Lawrence Erlbaum Associates).
--and West, R. F. (2000), 'Individual differences in reasoning: Implications for the rationality debate?', Behavioral and Brain Sciences, 23, 645-726.
----(2002), 'Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality', in D. E. Over (ed.), Evolution and the Psychology of Thinking: The Debate (Hove: Psychology Press).
Tooby, J., and Cosmides, L. (1992), 'The psychological foundations of culture', in J. H. Barkow, L. Cosmides, and J. Tooby (eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (New York: Oxford University Press).
Trivers, R. L. (1971), 'The evolution of reciprocal altruism', Quarterly Review of Biology, 46, 35-57.
Tversky, A., and Kahneman, D. (1973), 'Availability: A heuristic for judging frequency and probability', Cognitive Psychology, 5(2), 207-32.
----(1983), 'Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment', Psychological Review, 90, 293-315.
Wason, P. C. (1966), 'Reasoning', in B. M. Foss (ed.), New Horizons in Psychology I (Harmondsworth: Penguin).
Wilkinson, G. S. (1990), 'Food sharing in vampire bats', Scientific American, 262(2), 64-70.
9

Commitment and Change of View

ISAAC LEVI
CHANGING ONE'S MIND
Do intelligent agents change their minds and hearts? Or are their minds and hearts changed for them? I am thinking here of changes of mind and heart as changes in beliefs and values, goals and hopes. And the question I am asking is whether intelligent agents exercise any control over changes in their attitudes. I take for granted that deliberating agents have as much control over what they say as what they do. But exercising control over one's utterances and inscriptions is not exercising control over changes in one's beliefs and values. To the extent that agents have control over changes in their beliefs and values we may coherently hold them accountable for the changes they make when they do. We may ask them for reasons that justify their coming or ceasing to believe that the Gulf Stream will continue to follow its usual course, acquiring a taste for the paintings of Francis Bacon, and coming to oppose the cause of the Bosnian Serbs. The received view seems to be that attitudinal states are multitrack dispositions to linguistic and bodily behaviour and perhaps to feelings of various kinds. According to another line of thought, beliefs, etc. are the linguistic and bodily expressions as well as the phenomenological happenings that are the manifestations of such dispositions. According to both conceptions of beliefs, desires, et al., it is difficult to hold subjects directly accountable for changes in their attitudes. They may be responsible for situating themselves in the way of stimuli that prompt manifestations of attitudinal dispositions and may sometimes be able to acquire or shed such dispositions through some sort of exercise. The control over attitudes thereby conceded is not sufficiently extensive to cover the range of cases over which some philosophers expect agents to be responsible for their thinking.
After taking note of Dr Johnson's legendary confrontation with the stone, Quine writes:

Calling a stone a stone at close quarters is an extreme case. Evidence is deliberately marshaled only when there is more nearly an equilibrium between the sensory conditioning of an affirmative response and the contrary conditioning, mediated by the interanimation of sentences. Thus the question under deliberation may be whether something glimpsed from a moving car was a stone. That it was a stone, and that it was a crumpled paper, are two ready responses; and the tendency to the former is inhibited by the tendency to the latter, via sentential connections at the level of common-sense physical theory. Then one 'checks' or seeks overwhelming evidence, by returning to the spot to the best of his judgement and so putting himself in the way of stimulations more firmly and directly associated with the attribution of stonehood or paperhood. (Quine 1960, 17-18)
To hear Quine tell it, the traveller's doubt as to whether he or she encountered a rock or a piece of paper is the vector sum of a tendency to judge that a rock was passed and a tendency to judge that a crumpled piece of paper was passed. Neither the posture of suspense nor coming down on one side or the other is subject to the agent's control. From Quine's perspective, the question of justifying changes of view cannot then arise except, perhaps, as a form of consolation. We may be comforted by the thought that the beliefs we have acquired in response to external stimulation by therapy or indoctrination are rationalizable. We may be discomfited when they are not. But the weighing of reasons cannot control decisions as to what to believe or value except in so far as we have control over the circumstances under which we obtain signals from the environment. For Pragmatists (Quine is supposed to be one), this is an unfortunate situation. One of the core features of the Classical Pragmatism of Peirce, James, and Dewey is that we should turn our backs on the Cartesian demand for justifications of current beliefs. We should be concerned instead with how we change beliefs and, more particularly, with how we justify such changes in properly conducted inquiries. In the case of Dewey, this vision was extended to cover not merely judgements of truth (full beliefs) but judgements of value as well. Furthermore, justification of changes of view was supposed to have the structure of a practical argument, whether such an argument is a practical syllogism of the Aristotelian variety or a more sophisticated decision-theoretic argument. In fixing belief, one might seek exclusively to relieve doubt (that is to say, acquire new information). Or one might seek new error-free information. Or one might promote the acquisition by the community of inquirers at the End of Days of the True Complete Story of the World.
The classical pragmatists differed over these matters; but all understood justification of change of view as argument showing that one change was better than alternatives for the purposes of promoting the goals of the given inquiry.
The pragmatists' project cannot hope to succeed if the propositional attitudes are always acquired or shed by conditioning or other forms of manipulation. The obstacle is not predicated on a metaphysical anxiety over the compatibility of free will with determinism. I take for granted that we do have control over many matters. But we lack control over others. Doxastic and affective dispositions are sometimes changeable by training, therapy, and with the aid of new technologies (that enlarge the memories and computational capacities of agents). We cannot, however, alter doxastic and affective dispositions on demand. If beliefs are dispositions to assent and to engage in other forms of behaviour, we cannot change our beliefs by choice. If desires and values are also dispositions, we cannot change our values by choice. If beliefs and desires are not dispositions but manifestations of dispositions, there is no control even in the sense in which we sometimes have control over our habits and dispositions. We lack control over our fits of doxastic conviction, our attractions, and aversions, and the like. Dissents and assents may sometimes be deliberate; but when they are responses to stimuli in accordance with the input-output table associated with some disposition, they would not customarily be considered subject to the organism's control. Thinking of beliefs as doxastic dispositions or as manifestations of such dispositions is not entirely mistaken. Nor is a parallel view concerning desires and values entirely off-base. But changes in disposition or manifestation can only be the product of therapy, training, engineering, or some other mode of manipulation. Whether and how to enable those who need to have their doxastic or affective dispositions modified is a legitimate concern of clinicians and technologists who provide us with prosthetic devices such as computers. 
But attitudinal changes that result from such therapy are not of the kind that Dewey thought might be and sometimes are achieved through deliberate inquiry. In inquiry, we seek to justify changes in the doxastic and affective dispositions we should have. I call changes of these kinds changes in doxastic, affective, and evaluative commitments. Changes in belief or desire that are understood as changes in disposition or manifestation thereof are called changes in doxastic, affective, or evaluative performance that may or may not succeed in fulfilling the agent's commitments. Changes in commitment can plausibly be subject to the agent's direct control. Changes in commitment (that involve reneging on one commitment and adopting another) cry out for justification, at least in important cases. But changing a doxastic or affective commitment is one thing. Implementing such commitments is another. Thus, inquirer X may be committed to being certain that h is true and so to being committed to assenting to h, to h or h', and to X's believing that h when responding sincerely to questions concerning these matters. Yet, X may fail to fulfil these commitments. X's failure may be attributed to his lack of calculating capacity. Or perhaps he is subject to an emotional storm or distraction from self-critical reflection. Such
failures are failures of performance and call for changes that improve performance rather than change in commitment. In sum, I propose to escape from the perplexities that surround the question of the extent to which agents control their doxastic, affective, and evaluative attitudes by distinguishing between two kinds of change in attitude:

1. Changes in attitudes as doxastic, affective, or evaluative commitments that are subject to control in inquiry.
2. Changes in doxastic, affective, and evaluative performances that succeed or fail in efforts to fulfil such commitments.
Because changes of type 1 are subject to control by the inquiring agent, the question of justifying such changes may be coherently broached by that agent. Changes of type 2 call for therapy, training, or the use of prosthetic devices such as computers, the printing press, or paper and pencil. Such changes are justified to the extent that doxastic, affective, and evaluative performance fulfils doxastic, affective, and evaluative commitment. Such justification calls for an understanding of the prescriptive standards characterizing the diverse commitments, psychological and sociological studies of conditions that enable inquirers to engage in intelligent problem-solving, and technologies that extend the capacities of inquirers to store information and use it effectively in deliberation. But what is justified is a regimen of treatment (whether by the agent or by others) that enhances the agent's capacity to fulfil the agent's commitments. To avoid misunderstanding of this proposal, a comprehensive account of justification of change of view should take both kinds of change and how they are related to one another into account. I do not mean to replace a one-sided dispositionalist or functionalist account of the attitudes with a one-sided commitment account of the attitudes. Dispositionalist and functionalist accounts of the attitudes ignore the distinction between change of view through inquiry and through therapy and the corresponding distinction between changes in commitment and changes in performance. All changes in attitude are changes in dispositions and their manifestations. Such accounts cannot coherently consider the question of justifying changes in attitude through inquiry so central to the preoccupation of the classical pragmatists. To challenge such naturalisms, I insist that changes in attitude that are changes in commitment ought to be recognized along with changes in attitude that are changes in performance.
The remainder of this essay is given over to an elaboration of this idea.
POTENTIAL STATES OF FULL BELIEF
To develop either a descriptive or a prescriptive account of change in state of full belief, we should begin by identifying conditions that potential states
of full belief satisfy or should satisfy. Assuming that inquirer X is in a given state of full belief, we should identify the set of states X is conceptually capable of moving to. This task corresponds to identifying the space of potential mechanical states of a gas at a given energy level. In classical mechanics, the state of a given system of particles is given by specifying the positions and momenta in three dimensions of the particles at some given time. Given that information, all the relevant behaviour of the particles can be derived at subsequent times according to the laws of classical mechanics. The potential mechanical states of the system of particles consist of all possible specifications of positions and momenta compatible with the constraints specified for the system (such as the total energy). At each time, the system is supposed to be in one of the given states. Changes in mechanical state are then represented by 'trajectories' through this 'phase space' of potential states over time. The so-called 'phase space' has the structure of a high-dimensional Euclidean space. We may ask what kind of structure the 'space' of potential states of full belief exhibits. The question is not so remote from the concerns of inquiry as it may appear to be at first blush. If Peirce is correct that in inquiry we seek to relieve doubt, we take for granted that some changes in state of full belief are changes that relieve doubt. I shall say that the shift from K to K' removes doubt (should remove doubt) if and only if K' is stronger than K or K is a consequence of K' or K' is an expansion of K. The binary relation just introduced as a primitive relation between potential states of full belief is also related to another goal of inquiry. In inquiry where justifying changes in state of full belief is an issue, it is arguable that the inquirer ought to seek to avoid shifting to false or erroneous states of full belief.
I assume that the following constraints ought to be satisfied when an inquirer is seeking to avoid error:

I. K is judged error-free by X if and only if K is a consequence of the inquirer's current state of full belief.
II. K is judged erroneous if and only if all states having both the current state and K as consequences are judged erroneous.
III. At least one potential state is judged error-free and at least one potential state is judged erroneous.

In virtue of I, if K is a consequence of X's current state of full belief, X should regard a shift from his current state to K as one that incurs no risk of error. On the other hand, if the shift is neither a degenerate shift from the current state to the current state nor to a consequence of the current state, some information will be lost and, in that sense, doubt will be increased. In virtue of II, a shift from the current state to K* having both the current one and K as consequences is to be avoided because it imports error for sure if and only if shifting from the current state to K imports error for sure.
Thus, the primitive consequence relation between states of full belief is to be understood and motivated by the idea that efforts to change states of full belief seek acquisition of new information that is error-free. This consequence relation induces a partial ordering over the potential states of full belief. I have argued elsewhere (Levi 1991, s. 2.2) that this partial ordering ought to have the structure of a Boolean algebra. Peirce's injunction against placing roadblocks in the path of inquiry helps underwrite this argument. For example, given any pair K1 and K2 of potential states of full belief, there ought to be a potential state K1 ∨ K2 that is the strongest common consequence of just these two states. This join of K1 and K2 is the state of suspense that is the common ground to which X in state K1 and Y in state K2 could move if they were concerned to engage in a joint inquiry that begged no questions against the other's point of view. To deny the availability of such a potential state of full belief (as authors writing in the tradition of Feyerabend and Kuhn often do) is to place roadblocks in the path of inquiry. Pragmatists will condone this practice only in the face of an impossibility theorem. Analogous arguments support the existence of a meet K1 ∧ K2, a weakest potential state, a strongest potential state, and, for every K, a potential state Kᶜ that is its complement. Intuitionists might protest that only a pseudo-complement should be posited. But as I have argued in Levi (1991, 16), there would be a potential state of full belief that is stronger than the weakest potential state (the maximal state of ignorance). X is conceptually capable of judging that state true or failing to do so. Yet, according to the intuitionist view, X cannot judge that state to be false except from the inconsistent potential belief state. This, once more, is an objectionable roadblock in the path of inquiry.
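Levi's potential states of full belief have no internal structure, but the Boolean requirements just described (a consequence ordering, joins, meets, complements) can be illustrated in a deliberately crude finite model. In the sketch below, a purely illustrative assumption and not Levi's own formalism, a state is represented by the set of cases it leaves open, so that consequence, join, meet, and complement come out as familiar set operations; the four-element universe and the names are invented.

```python
# Crude finite model of the Boolean algebra of potential states of full
# belief. Assumption for illustration only: Levi denies that belief states
# are sets of possibilities; the point here is just the algebraic structure.
UNIVERSE = frozenset({"w1", "w2", "w3", "w4"})

def is_consequence(k, k_prime):
    """K is a consequence of K' iff K' is at least as strong as K,
    i.e. K' leaves open no case that K rules out."""
    return k_prime <= k

def join(k1, k2):
    """K1 v K2: the strongest common consequence of K1 and K2 --
    the state of suspense between them."""
    return k1 | k2

def meet(k1, k2):
    """K1 ^ K2: the weakest state having both K1 and K2 as consequences."""
    return k1 & k2

def complement(k):
    """Kc: the complement required by the Boolean structure."""
    return UNIVERSE - k

k1 = frozenset({"w1", "w2"})
k2 = frozenset({"w2", "w3"})

print(is_consequence(join(k1, k2), k1))  # True: the join is a consequence of K1
print(is_consequence(join(k1, k2), k2))  # True: ... and of K2
print(sorted(meet(k1, k2)))              # ['w2']
print(sorted(complement(k1)))            # ['w3', 'w4']
```

Note that the join of two states in this model is the common ground of suspense between them, exactly the question-begging-free starting point for joint inquiry described above.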
For better or worse, I contend that the structure of the space of potential states of full belief ought to satisfy the requirements of a Boolean algebra. If there are infinitely many elements of the algebra, it should be closed under meets and joins of sets of elements of arbitrary cardinality. To the extent that the idea of a single space of potential states of full belief available to all inquirers is defensible, it should be atomless. There can be no maximally consistent belief states. That is because there is no upper bound on how finely discriminations can be made. In this sense, the notion of a possible world is, indeed, metaphysical moonshine. Potential states of full belief so conceived are not linguistic entities with internal syntactic structure. As I have characterized them, they lack any internal structure. But just as points in phase space can represent mechanical states, a sentence or set of sentences in a suitably regimented language may represent a potential state of full belief at least partially. X is alleged to believe (fully) that h. On the view I am sketching, X is claimed to be in some potential state of full belief that has as a consequence
another potential state of full belief-to wit, the state of full belief that h. We should not think of X's state of full belief as a system of X's beliefs at a given time. The allegedly individual beliefs are specifications of potential states of full belief that are consequences of X's state of full belief according to the Boolean algebra of potential states of full belief. Thus, we need not think of X as bearing relations of full belief to propositions or sets of possible worlds. No doubt we can concoct a Boolean algebra of propositions or sets of possible worlds. But the exercise is one rather like Whitehead's attempt to separate the geometrical from the gravitational field in General Relativity. Unless the separation leads to a genuinely new theory, the Boolean algebra of propositions is a piece of verdoppelte Metaphysik or gratuitous metaphysics that we do well to ignore. We use language both to express aspects of our attitudinal states including states of full belief and to report information about such states. When X utters 'h', X may express (i.e. manifest) X's full belief that h while simultaneously reporting that h. If X reports that X believes that h by uttering 'X believes that h', X expresses X's full belief that X fully believes that h. The occurrence of 'h' in this utterance prefixed by 'that' represents a potential state of full belief-to wit a consequence of X's current state of full belief. That potential state might also be represented in language L by the set of logical consequences in L of h. X's current state might be represented in L by the set of logical consequences in L of h. I also grant that we can, if we like, speak of potential states of full belief as propositions-to wit, doxastic propositions. We should be tolerant of ways of speaking. I am insisting, however, that it is high time that we abandon the view that beliefs are individuated by contents. 
States of full belief are individuated by their positions in a Boolean algebra of potential states. Dispensing with propositional content in this fashion may seem to render it problematic as to how to approach propositional attitudes other than full belief such as judging that h is possible, judging that h is probable (to some degree or other), or judging that h is valuable and the like. I see no problem here. The Boolean algebra of potential states of full belief is one way of structuring the set of potential states of full belief. There are other ways of structuring potential states of full belief that are more or less dependent on this one. These alternative structures correspond to other attitudinal states. For example, X judges it impossible that h if and only if X judges it false that h. That is to say, it is impossible that h according to X at t if and only if the deductive consequences of -h represent a potential state of full belief that is a consequence of X's state of full belief at t. Otherwise X judges it seriously possible that h. So X's standard for serious possibility is uniquely determined by X's state of full belief (and conversely).
Given X's standard for serious possibility, X's credal state may be understood as introducing fine-grained distinctions between potential states of full belief that are judged possibly true (possibly without error). The probability assessments evaluate potential states of full belief and not propositions or sets of worlds. A similar approach may be adopted with respect to desire and value judgements. According to the model hastily sketched above, the justification of changes in full belief has been equated with the justification of changes in states of full belief. This change cannot by itself obviate the question: Is such justification pointless because inquirers lack control over changes in their states of full belief? We now return to this issue. If states of full belief are dispositional states (or biological states of some kind), it should be clear that an inquirer's capacity to control how such states change is very limited. The Peircean preoccupation with prescribing how inquiry should be conducted to promote its goals becomes pointless. Another difficulty exacerbates the first one. The argument I offered for the Boolean structure of the space of potential states of full belief was predicated on the idea that the so-called consequence relation that partially orders these states does so with respect to the strength of the potential states, their capacity to relieve doubt. In addition, the same relation constrains the way truth is to be judged by an inquirer. This approach is motivated by a vision of the predicament of an inquirer concerned with obtaining new error-free information. The new difficulty is this: If potential states of full belief are dispositional in a sense straightforwardly naturalizable, the potential states may or may not be partially orderable as a Boolean algebra.
Moreover, if they are so orderable, it remains an open question whether agents in given states judge these states and their consequences error-free and judge potential states that imply these states to be more informative.
COMMITMENT AND PERFORMANCE
In classical thermodynamics and neoclassical economics, states of equilibrium and changes from equilibrium state to equilibrium state are the object of study. The details of the path followed from one equilibrium state to another are ignored. In thermodynamics, it was initially assumed that as a matter of empirical fact certain kinds of changes are changes from equilibrium state to equilibrium state. Matters were never as clear as this in economics and distinctions between long, intermediate, and short-run equilibria were often introduced. In any case, it was not the province of a theory of comparative statics studying shifts from one equilibrium state to
another to study the processes involved in the interstitial paths between equilibria.¹ A healthy state is an equilibrium of sorts. But the equilibrium is, at least in part, singled out by value considerations. There may be many types of healthy state. If X is unhealthy, there is at least one healthy state better than X's state of disease. When going to a physician or a therapist, we may be seeking to move ourselves from our current unhealthy condition to a healthy equilibrium. Alternatively, we may wish to exchange one healthy state for another, as athletes often do when they train for some competition. I suggest that states of rational health are states of normative equilibrium. Such states are rarely if ever attainable by flesh and blood. We do not even come close to attaining them. In order to achieve a state of rational equilibrium, X would need unbounded memory capacity and computational resources as well as abilities for self-knowledge that few come close to possessing. In addition, X's abilities to calculate and introspect would have to be unimpeded by the emotional storms and passions that often undermine our already limited capacities for clear and careful deliberation. The fact that we do not even come close to satisfying requirements of rationality precludes the serious usefulness of principles of rationality as explanatory, predictive, and descriptive principles. They are not even useful idealizations of human behaviour for explanatory and predictive purposes. States of rational equilibrium are, however, very important normatively. Donald Davidson famously embraced the idea that principles of rational coherence are normative ideals. To act, reason, believe, or desire irrationally is to depart from a standard or norm. But what kind of norm is a norm of rationality? Davidson approaches this question by pondering another.
He wonders how we should respond to the challenge of someone who resists our standards of rationality, someone who, perhaps, embraces intransitive preferences as rationally coherent:

I am strongly inclined to think my mistake in this imagined exchange came right at the start; I should never have tried to pin you down to an admission that you ought to subscribe to the principles of decision theory. For I think everyone does subscribe to those principles whether he knows it or not. This does not imply, of course, that no one ever reasons, believes, chooses or acts contrary to those principles, but only that if someone does go against those principles, he goes against his own principles. (Davidson 1985, 345-54)

¹ For a discussion of the conception of equilibrium in economics and of comparative statics see Samuelson (1947, ch. 2). I first suggested construing principles of rationality as characterizing a kind of normative equilibrium in Levi (1970, 136-8). I suggested that a theory of rationality revision is a normative analogue of a comparative statical theory. Brian Ellis (1979, 4-5) introduces the notion of a rational equilibrium that he wishes to serve both as a physical and as a regulative ideal. I do not think that principles of rationality do well in explanation.
Here Davidson argues, in effect, that to demand a justification for being rationally coherent is to raise what Peirce would have called a 'paper doubt'. There is no need to justify being rationally coherent because no one doubts that one should (even though some may say that they do have doubts). But even if it is true that the injunction to be rationally coherent in one's deliberation is a non-controversial prescription, it is, nonetheless, a prescription. Davidson goes further than claiming that subscription to principles of rationality is non-controversial. He writes, 'It is a condition of having thoughts, judgements and intentions that the basic standards of rationality have application' (1985, 351). He is prepared to insist that every bearer of propositional attitudes should conform to the principles of rationality in order to fulfil their own commitments. If they did not subscribe to the principles of rational belief, desire, and decision, they would not be agents bearing propositional attitudes. Principles of rationality are thus prescriptive. Every rational agent undertakes a commitment to be coherent. If they are not so committed, they lack agency. This does not mean that agents comply with standards of rationality in their behaviour or even in their dispositions to behaviour. According to Davidson, satisfying conditions of rationality is not a necessary condition for having propositional attitudes. Clearly agents do not always believe, choose, or act rationally. But when an agent fails to meet the demands of rational consistency or coherence, Davidson insists that the agent 'goes against' his own principles because 'subscribing' to these principles (in contrast to conforming to them) is a necessary condition for being a bearer of propositional attitudes. I surmise that subscribing or being committed to satisfying conditions of rationality is a necessary condition for having propositional attitudes. 
An agent X undertakes and, hence, is committed to conforming to requirements for rational coherence. Davidson does not appear to think, however, that when X has a propositional attitude, X has any further commitments over and above the commitment to rational coherence. On this point, I mean to differ. In agreement with Ramsey, Davidson understands propositional attitudes to be dispositional properties of agents useful in explaining and predicting their conduct. In my judgement, this view of the attitudes is quite untenable. The generalization 'Any rational agent who faces a choice between options in a set of alternatives maximizes expected utility among these options' is not a testable lawlike claim eligible for consideration as a covering law in reason explanations. Davidson has testified to his frustration in attempting to think of such generalizations as empirically testable. In commenting on the experimental work he undertook in collaboration with Suppes and others in the 1950s and early 1960s, Davidson writes that such experiments 'can be taken, if we
want, as testing whether decision theory is true. But it is at least as plausible to take them as testing how good one or another criterion of preference is, on the assumption that decision theory is true' (Davidson 1980, 272). On the second construal, the principles of decision theory are schemata for a family of covering laws. By plugging in various hypotheses concerning the utility, probability, and expected utility judgements of an agent, one can predict or explain the choices of an agent on the assumption that the agent is rational. In effect, the generalization specified above becomes a characterization of a family of Carnapian reduction sentences for various specifications of belief and desire dispositions of the agent. But notice that if the prediction fails, the generalization need not be falsified. We may question the rationality of the agent at that moment. Davidson (1980, ch. 14, 274) confirms this representation of his conception of principles of rationality by acknowledging that 'reason explanations' resemble explanation by disposition. Why did this cube of sugar dissolve? It was soluble. Soluble things dissolve in coffee. This cube was in coffee. This explanation tells us nothing about when, in general, cubes are soluble. It concerns this cube. Davidson insists, however, that the argument does do substantial explanatory work even though the attribution of the disposition to the particular cube is not made against a background of assumptions integrating solubility with a more comprehensive theory. He compares this kind of explanation with citing X's beliefs and desires when rationalizing agent X's behaviour even though no explanatory laws link such beliefs and desires with the behaviour to be explained except, of course, the reduction sentences derived from the principles of decision theory. Consider, however, the old complaint against citing opium's dormitive virtue as an explanation of Y's going to sleep after taking opium. 
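Read as a schema rather than a law, the decision-theoretic generalization works as the text describes: plug hypothesized probabilities and utilities into the expected-utility rule and derive the predicted choice 'on the assumption that the agent is rational'. A minimal sketch, with all option names, outcomes, and numbers invented for illustration:

```python
# Sketch of the schema reading of 'rational agents maximize expected
# utility'. The options, outcomes, probabilities, and utilities below are
# hypothetical inputs -- the kind one 'plugs in' to obtain a prediction.
def expected_utility(option, probabilities, utilities):
    """Probability-weighted sum of the utilities of an option's outcomes."""
    return sum(p * utilities[(option, outcome)]
               for outcome, p in probabilities.items())

def predicted_choice(options, probabilities, utilities):
    """The choice predicted on the assumption that the agent is rational."""
    return max(options, key=lambda o: expected_utility(o, probabilities, utilities))

probabilities = {"rain": 0.3, "shine": 0.7}
utilities = {
    ("umbrella", "rain"): 1.0, ("umbrella", "shine"): 0.4,
    ("no umbrella", "rain"): 0.0, ("no umbrella", "shine"): 1.0,
}

# EU(umbrella) = 0.3*1.0 + 0.7*0.4 = 0.58; EU(no umbrella) = 0.7
print(predicted_choice(["umbrella", "no umbrella"], probabilities, utilities))  # no umbrella
```

If the agent then takes the umbrella anyway, the schema is not thereby falsified; one may instead question the rationality of the agent at that moment, which is exactly why such generalizations resist empirical test.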
We can make this look like covering law explanation by declaring that anyone ingesting a soporific goes to sleep. This law specifies a necessary condition for dormitivity and does so axiomatically just as axioms of a decision theory implicitly characterize the primitive notions of preference among options and the notions of probability and utility defined in terms of preference among options. Objects that fail to induce sleep lack dormitive virtue and, hence, are not counter-instances to the reduction sentence. To the extent that use of such principles as covering laws is unsatisfactory for explanatory purposes both in the case of dormitive virtue and in the case of rational decision-making, so is the appeal to the dispositions they characterize. In the absence of more substantial covering laws for the purpose of explanation, we should not consider appeal to dormitive virtue in explanation here as anything other than the parody of scholastic science it was supposed to be. To be sure, we might be prepared to regard the citation of the reduction sentence for dormitive virtue as a stopgap covering law pending future
Isaac Levi
research that will either replace the disposition term with something more satisfactory or manage to integrate the term into an explanatorily adequate theory (Levi and Morgenbesser 1964). Stopgap explanations may serve a useful purpose provided their status as stopgaps is well understood. Thus, the particularity of the predication of dormitive virtue to this particular dose of opium is not itself an obstacle to stopgap explanation, provided we can expect through further inquiry to integrate the various instances of dormitive virtue into a more comprehensive theory. It may be argued that we can do the same with 'is a rational agent' or the specific belief and desire dispositions. Davidson, however, by implication denies that explanations using specific belief and desire dispositions or the disposition predicate 'is a rational agent' are stopgap explanations in which the disposition predicates are placeholders for conceptions that will be better integrated into some framework scientific theory. Such denial is entailed by Davidson's commitment to anomalous monism. Anomalous monism precludes the integration into a broader framework. Explanations invoking belief and desire dispositions may look like stopgap explanations. But, on Davidson's view, they are not. I agree with Davidson's suspicion of attempts to integrate psychology into biological and physical theory. But once one abandons attempts at integration and admits that the principles of rationality are often violated in practice, I fail to see a great advantage in insisting on the explanatory or predictive virtues of principles of rational belief, desire, and decision-making. Explanations appealing to opium's dormitive virtue may not be worth much; but they can with some charity be construed as stopgap explanations pending a deeper understanding of opium's chemical constitution and its effects on the human body.
According to Davidson's anomalous monism, to look on belief and desire attributions as placeholders in generalizations used in stopgap covering law explanations in this sense is hopeless. This hopelessness is predicated on the idea that the belief dispositions, desire dispositions, etc. exhibit intentionality. If they do not, Davidson's anomalous monism is no obstacle to treating such dispositions in the same way as 'is magnetic' or 'is soluble' is treated. But once the intentionality is sucked out of such dispositions, they no longer can be construed as propositional attitudes. How can we have both the propositional attitudes and the naturalizable dispositions whose use escapes the charges against dormitive virtue? We may think of Davidson's view as insisting that we are committed as rational agents to being in some state of rational equilibrium or rational health or other. But according to Davidson, rational agents are not committed to any specific state satisfying the requirements of rational equilibrium. I propose that we think of potential states of full belief, probability judgement, value judgement, etc. as potential states of rational equilibrium or health. No flesh and blood agent is in such a state. But rational agents are
Commitment and Change of View
committed not only to being rationally healthy, as Davidson requires, but to being in some state that instantiates rational equilibrium or health. If agent X is in a given state of full belief, X is committed to making changes in X's current condition so that X will exhibit the behavioural and linguistic dispositions that X would have if X were in the appropriate state of doxastic equilibrium to which X is committed. X is not only obliged to meet the requirements of rational doxastic coherence as Davidson requires. X is obliged to meet them in a certain way expressing X's current state of full belief. Suppose X believes that h. On the view I am advocating, X is in a state of doxastic commitment (i.e. a potential state of full belief) that has as a consequence a potential state of doxastic commitment or full belief that h. X is thereby committed to judging the potential state that h to be a consequence of X's current state of full belief (so that X fully believes that X fully believes that h). X is also committed to judging that the potential state that h is free of error and is no stronger than X's current state. X is committed to exhibiting the dispositions to speak and act of someone who rules out the possibility that h is false. On the other hand, if X is in suspense with respect to h, neither the potential state that h nor the potential state that -h is a consequence of X's state of full belief. X is committed to judging both h and -h to be serious possibilities and to full belief that the potential state that h is not a consequence of X's current state of full belief (so that X is committed to fully believing that X does not fully believe that h). X is committed to manifesting this in his speech and behaviour. The dispositions X is committed to having are 'naturalizable' as long as they lack content just like water-solubility.
When the having or manifesting of such dispositions is interpreted as bearing content, the having of the dispositions is being understood as partial fulfilment of X's doxastic commitments. Belief that h as a doxastic commitment is no more naturalizable than any other undertaking of an obligation. The naturalistic fallacy is after all a fallacy. But in so far as the dispositions that fulfil a doxastic commitment are considered without regard to their status as fulfilments, their intentionality has been removed and there is no a priori obstacle to naturalization (Levi 1991, ch. 2).2
2 L. J. Cohen (1992) discusses a distinction between belief and acceptance that nearly but not quite corresponds to the distinction I am making between doxastic dispositions and manifestations, on the one hand, and doxastic commitments on the other. Doxastic dispositions, on the view I am defending, carry intentionality only in so far as they may be understood as fulfilling or failing to fulfil doxastic commitments. In so far as they are dispositions that can be somehow integrated into adequate scientific theories, I doubt very much that they can be held to have contents-that is to say, can be said to be propositional attitudes. Cohen thinks otherwise (pp. 23-7). Although accepting h seems like an undertaking or commitment, Cohen's conception of the commitment is quite different from mine (see Cohen 1992, 27-39).
Thus, changing one's state of full belief is undertaking a commitment and reneging on some previous commitment. If we undertake commitments at all, it is something we do by choice. We are in control. Such undertaking incurs some obligations while reneging on others. To have initially been in doubt as to the truth of h is to be committed to judging both h and -h to be possible. If one comes to full belief that h, one reneges on the prior commitment to judge -h to be possible and embraces a new commitment to rule -h out as a serious possibility. Once one has made a promise, one should not break it without a good reason. In a similar vein, shedding one set of doxastic responsibilities for another also calls for justification just as the belief-doubt model favoured by Peirce and Dewey requires. On the other hand, justification is not required for fulfilling the promises one has made. The obligation has already been incurred. The task is to fulfil it. Similarly, just as Peirce insisted, there is no need to justify remaining in the current state of doxastic commitment. The current state of doxastic commitment incurs obligations to recognize as error-free the potential states of full belief that are consequences of one's current state, to recognize as erroneous those potential states incompatible with the current state, and to recognize as both possibly true and possibly false those potential states compatible with the current state but not consequences of it. Thinking of changing beliefs, probability judgements, and values as the undertaking of and reneging on commitments ensures the coherence of the claim that agents can deliberately choose between alternative ways to change their minds as a means to promote their goals. If changing propositional attitudes is understood as changing commitments rather than changing dispositions and behaviour that fulfil such commitments, attributing propositional attitudes to an agent is alleging that the agent has incurred obligations.
Spelling out these obligations by specifying what is required to fulfil them is, to my way of thinking, an appropriate substitute for indicating what the 'contents' of the attitudes are. To be sure, it would be foolhardy to insist that the conditions for completely fulfilling the commitments in thought and deed undertaken by coming to believe that h can be completely specified. We cannot do that for an ordinary contract to lay sewers. Often enough the conditions for fulfilling the contract need to be negotiated well after the contract has been signed. Indeed the negotiation may become litigation. To be sure, only rarely, if ever, are disputes regarding the extent to which X's behaviour measures up to X's convictions settled in court. I contend, however, that such questions are (a) often unsettled and even unanticipated prior to undertaking the commitment (Levi 1991, 31-2) and (b) inevitably value laden (Levi 1991, 7 and 32).
Commitment and Change of View
Several advantages emerge from understanding attitudinal states as states of normative equilibrium to which agents are committed: (1) Changes in full belief can be understood either as reneging on old
commitments and undertaking new ones, or as attempts to improve performance in fulfilling the current commitments. The former kind of change (change in commitment) can be an object of decision and as such calls for the kind of justification that is required when inquiry ceases and a solution is found. Change in performance calls for efforts at training, education and therapy and, perhaps, for the use of prosthetic devices like computers, slide rules, and the like. (2) Two mysteries for naturalism are reduced to one (Levi 1991, ch. 2). The two mysteries are the obstacles to naturalism presented by the naturalistic fallacy and the gap between nature and meaning. By taking attitudes to be commitments, there is hope that the question of meaning can be understood as a question about values. Perhaps, we can live with the fact-value dichotomy. (3) We give up the pretence that principles of rationality are primarily used for the purpose of explanation and prediction. We thereby avoid introducing mystery-making dispositions in the sense of Levi and Morgenbesser (1964).
AMPLIATIVE AND EXPLICATIVE
The first point may be illustrated by the case of slow-witted Sandy. Sandy may become convinced on Monday that Stirling is located in Scotland and add this conviction to his prior full belief that any location in Scotland is a location in the United Kingdom. When asked on Monday whether Stirling is located in the United Kingdom, however, Sandy fails to declare that it is. Perhaps, he is offered a 50-50 bet on Stirling being located in the United Kingdom and refuses. Before Monday, Sandy was ignorant about the location of Stirling. But there was no incoherence in his beliefs-that is to say, in his efforts to fulfil his doxastic commitments. But for some reason, Sandy became curious about the whereabouts of Stirling and with a modicum of consultation came to the conclusion that Stirling is in Scotland. Sandy renounced his commitment to agnosticism on this score and undertook to fully believe. Doing so may legitimately call for some justification. Sandy may have changed the commitment with good reason and yet failed to fulfil it in some way like failing to assent to the location of Stirling in the United Kingdom. Sandy needs therapy or help of some kind to enable
him to fulfil his commitments. His problems are of a different kind from those he faced when he first wished to find out where Stirling was. Sandy gets help or helps himself. By Tuesday, Sandy assents to the claim that Stirling is located in the United Kingdom. There are two doxastic changes recorded here. Slow-witted Sandy came to fully believe that Stirling is in Scotland on Monday. On Tuesday, Sandy came to fully believe that Stirling is in the United Kingdom. According to my proposal, on Monday Sandy changed his doxastic commitments. He undertook to fully believe that Stirling is in Scotland. Not only was Sandy committed to that. He was also committed to fully believing that Stirling is in the United Kingdom. This is because he was committed to fully believing all the logical consequences of the conviction he had previously had that all locations in Scotland are locations in the United Kingdom and the conviction that Stirling is in Scotland. On Tuesday, another change in belief took place; but no change in doxastic commitment occurred. Sandy finally put two and two together and began to fulfil the commitment to fully believe that Stirling is located in the United Kingdom. That there is widespread presystematic sentiment in favour of acknowledging that the change on Tuesday is different from the change on Monday seems clear. Before the Quinean onslaught on the analytic-synthetic distinction, a contrast was drawn between making an ampliative change that involves acquisition of new information and changes that merely explicate the contents of the beliefs one already has. Explicating the contents of beliefs one already has is reminiscent of Plato's doctrine of recollection where we already know geometrical theorems but need to be reminded of them. The failure to believe is a failure in the agent that calls for a kind of therapy. Plato thought the stimulation of memory was in order.
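The two changes can be separated by setting out schematically the inference that Sandy's Monday commitment already covers. This is a minimal reconstruction of my own; the predicate symbols are introduced purely for illustration and do not appear in the text.

```latex
% A schematic reconstruction of Sandy's Monday commitment
\begin{align*}
\text{(i)}\;&\; \forall x\,\bigl(\mathit{InScotland}(x) \rightarrow \mathit{InUK}(x)\bigr)
  &&\text{prior full belief}\\
\text{(ii)}\;&\; \mathit{InScotland}(\mathit{Stirling})
  &&\text{full belief adopted on Monday}\\
\text{(iii)}\;&\; \mathit{InUK}(\mathit{Stirling})
  &&\text{from (i) and (ii) by instantiation and modus ponens}
\end{align*}
```

Since (iii) is a logical consequence of (i) and (ii), closure makes it part of Sandy's doxastic commitment from Monday onward; what happens on Tuesday is a change in performance, not in commitment.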
But the study of mathematics, logic, and semantic techniques may be thought to be clarificatory. So might the use of various aids to computation. Even psychotherapy might help. Quine famously cast doubt on the analytic-synthetic distinction, and scepticism about the viability of this contrast raised questions about the ampliative-explicative distinction. But if the ampliative-explicative contrast is understood as a special case of a more general contrast between changes in commitment and changes not in commitment but in performance, perhaps we need not worry about defending other distinctions included under the umbrella of the analytic-synthetic distinction. Davidson did not recognize the kind of contrast I am suggesting. He seems to have denied that there should ever be changes in commitment. Rational agents qua rational agents are committed or subscribe to conformity to the principles of rationality. They should be coherent. Changes in attitude, however, are always changes in disposition and, hence, changes in performance.
There are no changes in doxastic commitments-that is to say, in a state of full belief understood as commitment to judging free of falsehood all potential states of full belief that are consequences of that state. Both the changes on Monday and on Tuesday are understood as changes in doxastic disposition. On Monday, X failed to live up to the requirements of rational full belief. On Tuesday, he did better. As far as Davidson is concerned, if Sandy had given up his conviction that Stirling is in Scotland on Tuesday, the interests of closure and consistency would have been just as well served. We should not understand Sandy as having abandoned the commitment to fully believe that Stirling is in Scotland that he had embraced on Monday by doing so. The change is merely another change in performance. We as charitable interpreters of Sandy's behaviour might have preferred the scenario where Sandy remains convinced that Stirling is in Scotland and Sandy comes to realize that it is also in the United Kingdom. We as interpreters have an interest in being charitable. According to Davidson, it appears we are committed to being charitable. But our commitments do not cash out into a question of Sandy's commitments. Sandy's doxastic commitments to closure and consistency do not change. And Sandy has no other doxastic commitments. Is Sandy accountable for his attitudes in general or his full beliefs in particular? Peirce's view, a view I share, denies that Sandy needs to justify his current doxastic commitments. The outcome of successful inquiry is a warrant for changing beliefs. Davidson seems to echo Peirce's attitude regarding current beliefs. Sandy is not accountable for them as long as they are coherent. But Davidson also seems to think that Sandy is not accountable for changes in them either as long as the changes are from one coherent set to another. 
Sandy is accountable for lapses from coherence and lapses from coherence alone.3 Sandy is under no obligation to repair the incoherence in one direction rather than another. I do not understand this to be Peirce's view or, for that matter, Dewey's. In any case, I mean to resist it. Introducing the distinction between changes in view that are changes in commitment and changes in view that are changes in performance, we can hold inquiring agents responsible for justifying changes in commitment. We may also demand, as Davidson seems to do, that they undertake repairs when they fail to fulfil their commitments. Once we take this stance, we can
3 Davidson famously imposes an obligation on interpreters to construe the behaviour of their subjects charitably so that their beliefs come out not only coherent but by and large true. In so far as such an obligation extends to the subject as an interpreter of his or her own behaviour, it reduces to the commitment to assign the truth-value true to many of his or her own beliefs. I would regard it as a condition of rational coherence that an agent should assign 'true' to all of his or her full beliefs. But Davidson seems to think otherwise. In any case, the principle of charity self-applied is a constraint on rational coherence.
give significance to the distinction between ampliative and explicative inference in a way that Davidson lacks the resources to do.
COMMITMENT AND BOUNDED RATIONALITY
We urge agents to be coherent or consistent in their full beliefs, their probability judgements, their goals, values, desires, hopes, and other attitudes. Not only should full beliefs be consistent among themselves from the point of view of a given agent X at a given time t, but X's probability judgements should be internally coherent and the system of full beliefs and probability judgements should cohere with each other. And this overall doxastic scheme should cohere with X's goals, values, and desires at time t and with the deliberate choices X judges at t that he should make. These demands for coherence or consistency among the attitudes ought to be very weak if they are to be broadly applicable to rational agents. By 'weak' I mean relatively non-controversial. For example, we urge rational agents to embrace full beliefs that are closed under classical logical consequence and logically consistent. Requiring that judgements of full belief or, equivalently, judgements of truth be consistent and closed does not itself take a stand on any controversial issue. Principles of rational full belief neither mandate nor prohibit full belief in the commitment sense that the Goddess Hathor exists or that the universe has been expanding ever since the Big Bang. Similarly, principles of rational judgements of probability recognize as permissible only those probability assessments that satisfy the calculus of probabilities. But they do not stipulate which subset of the non-countable infinity of probability functions meeting this condition contains all the permissible ones. Again in a similar spirit, to be rationally coherent, preferences should be transitive even though Hume seemed to think that the passions are not subject to rational constraint at all. My contention that these principles are non-controversial may seem extremely controversial.
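The weak requirements just enumerated can be stated compactly. The notation is mine rather than the text's: K is the set of sentences the agent is committed to fully believing, Cn is classical logical consequence, P a credal probability function, and the relation written with the curly ordering symbol is weak preference.

```latex
% Weak coherence requirements on an agent's attitudes at a time
\begin{align*}
&K = \mathrm{Cn}(K) \quad\text{and}\quad K \not\vdash \bot
  &&\text{full belief: closure and consistency}\\
&P(h) \ge 0, \qquad P(h) = 1 \ \text{whenever } h \in K,\\
&P(h \vee g) = P(h) + P(g) \ \text{when } h \text{ and } g \text{ are incompatible}
  &&\text{the calculus of probabilities}\\
&a \succeq b \ \text{and} \ b \succeq c \ \Rightarrow\ a \succeq c
  &&\text{transitivity of preference}
\end{align*}
```

These conditions constrain the form of the attitudes without mandating any particular full belief, probability function, or preference ranking, which is the sense in which they are weak.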
As prescriptions regulating the coherence of the attitudes (such as full belief, probability judgement, and preference), they are very demanding. No one can come close to satisfying them. No one has full beliefs closed under logical consequence, consistent full beliefs, degrees of belief obeying the calculus of probability, and the like. Pace Davidson, we do not even remotely approximate the weak demands of rational coherence. Not only do we know that predictions and explanations predicated on the claim that agents fully believe the logical consequences of what they believe are false and, indeed, are poor approximations, we know that we lack the ability to come close to satisfying the requirements specified. We have severely limited memories and computational capacities and have no hope of transcending our bounded rationality.
Considerations such as these have persuaded some that we should abandon requirements such as deductive closure and consistency on full belief and cognate constraints on the other attitudes even as prescriptive principles of rationality (see Hacking (1967) for a subtle version of this view). But trimming our sails will not do. Even if we make the standards of rationality less demanding on our capacities, circumstances will arise where our computational capacities will be stretched. In any case, failure because of drunkenness, emotional storms, self-deception, or weak will can still pose obstacles to our abilities to meet the demands of rationality. There is a yet more serious objection to sail trimming here. If we are exempted from the obligation to fully believe the logical consequences of our full beliefs because we cannot do so, then we have no obligation to improve our capacity to recognize the implications of our full beliefs. What then is the motivation for improving skills of computation and deduction? Why go into psychotherapy, go on the wagon, and the like? At least one reason is to improve capacity to make the calculations that rationality requires. But if it is perfectly rational not to believe the logical consequences of what one believes or to obey the calculus of probability in probability judgement, there can be no such reason. When propositional attitudes are commitments, we can admit that we fail to fulfil our commitments and, hence, fall short of being rationally coherent. Because of our commitments, we recognize that we ought to undertake steps to improve our performance by improving skills of computation and deduction and, if necessary, going into therapy or relying on prosthetic devices like computers, slide rules, and abacuses as well as tapes, diskettes, books, and notes. In this setting, a certain difficulty arises.
Failures to satisfy logical omniscience requirements arise when we think of full beliefs, judgements of probability, and judgements of value as dispositions or manifestations of dispositions. And the failures are severe. It is simply not true that we come close to satisfying such standards or satisfy them 'by and large'. Moreover, we are simply not capable of overcoming the deficit. The difficulty is serious. I previously compared commitment to promising. In promising, we do something like taking an oath that expresses our undertaking to fulfil the promise. The acts performed generate obligations to fulfil the promise. Observe, however, that an agent who undertakes to fulfil a promise knowing full well that he cannot do so has promised fraudulently and may be criticized for the fraud. Commitments to conform to the requirements of rational coherence must be dramatically fraudulent. Comparing committing to promising is not equating the two. When acquiring a propositional attitude is understood to be undertaking a commitment, the commitment is more like a religious undertaking. To vow to be righteous is foolish and fraudulent if the vow is taken to be a promise.
No such promise can be completely fulfilled. But such vows are often counted as sincere in spite of this. According to this view, our obligations do, indeed, outstrip our abilities. To soften the harshness of this view of responsibility, appeal sometimes is made to God's grace to fill the infinite gap between our finite abilities and achievement and the requirements of full compliance. But to understand religious vows in this way is to mock them. If we are forgiven for failing to fulfil our obligations, these obligations are empty, as are the vows that generate the obligations. We do not have to appeal to God's grace or endorse any fragment of theological doctrine in order to recognize some value in the concept of a religious vow or undertaking. We need not regard those who fail to fulfil religious commitments as full of sin if they cannot do so. The point of the analogy to religious vows runs as follows. 'Ought' does imply 'can'. This does not, however, mean that we must trim our sails either in the case of religious vows or the commitments dictated by principles of rationality and the undertakings that take place when we change our views. We may hold those who fail to fulfil their commitments accountable for failing to attempt to extend their capacities to fulfil such commitments. Those who see no hope of extending their capacities without the help of God may pray for his Divine Guidance. The more sceptical and secular among us will seek more mundane ways to extend our capabilities. Vows of chastity and righteousness may require those that make such vows to go through periods of training. Something very much like this is appropriate in the case of those committed to believing fully the logical consequences of what they fully believe, conforming in their probability judgements to the requirements of the calculus of probability, having transitive preferences and obeying the independence postulate.
Failures to fulfil such commitments call for better training in techniques of reasoning, some familiarity with deductive logic and the calculus of probabilities. Devices like the abacus, the slide rule, handbooks, paper and pencil, and computers are critical to the enhancement of our capacity to perform the tasks we set out to do. And various forms of psychotherapy (sometimes utilizing the wisdom of psychopharmacology) are also important. So a commitment is not a promise. We are not obliged to fulfil our commitments when the occasion arises and we cannot do so. We are obliged, however, to take steps to minimize the prospects of our not being able to do so as much as we can.
ASSESSING THE STANDARDS OF RATIONALITY
Even so, the principles of rational coherence are 'constitutive' of the attitudes. Regardless of differences in point of view, the attitudinal commitments of
agents are expected to satisfy these requirements. Deductive closure is a constraint on doxastic commitment regardless of one's point of view. The same is true of principles of rationally coherent probability judgement, value judgement, and choice. In order to be part of the common commitment, principles of rational coherence should be very weak. They should constitute the shared background of anyone that inquires, deliberates, or debates a point with someone else. They cannot beg questions in any inquiry. On this assumption, there is no need to ask for an analysis of the claims of those who dispute the rational coherence of Sandy's views or those of anyone else. According to the account being considered, disagreements can arise about the coherence of Sandy's views because of disagreements about what Sandy's views are. But given that the understanding of Sandy's commitments in the way of full belief, probability judgement, and value judgement is settled, the question of coherence boils down to a question as to whether Sandy has or has not fulfilled those commitments and in what respects. Advocates of the injunction to maximize expected utility have often proposed axiom systems designed for deriving credal probabilities and utilities from information about revealed preference-that is to say, choices among the options available to decision-makers in various hypothetical situations. To elicit probabilities and utilities from revealed preference, the decision-maker's options are often represented formally as functions from hypotheses (states of nature) to outcomes that, as far as the agent knows even after he chooses an option, might be true. It is also assumed that the states are probabilistically independent of the acts and that the values or utilities of the consequences are independent of the state. But these structural assumptions are not conditions of rationality.
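The representation just described can be sketched in the familiar Savage-style form. The notation is mine, and this is a sketch of the standard framework rather than of any particular axiom system.

```latex
% Options as functions from states of nature S to outcomes O
\[ f : S \rightarrow O \]
% Given the structural assumptions (states probabilistically independent
% of the acts, utilities of outcomes independent of the states), the
% expected utility of an option f is
\[ \mathrm{EU}(f) = \sum_{s \in S} P(s)\, u\bigl(f(s)\bigr) \]
```

Elicitation then reads P and u off revealed preference: a rational agent fulfilling his or her commitments is taken to choose f over g only if EU(f) is at least EU(g), provided the structural assumptions hold.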
What is claimed is that if such assumptions are satisfied, the probabilities and utilities of rational agents who are fulfilling their commitments would be revealed by their choices in appropriately specified decision problems. The techniques of elicitation will fail when the structural assumptions are not satisfied even though the commitments are being fulfilled. But what are the principles that characterize these commitments? I have suggested that they should be sufficiently weak that all agents subscribe to them. Does every agent undertake to fully believe the logical consequences of what he or she fully believes? It is easy to find philosophers, psychologists, and social scientists that dissent. If these philosophers are agents, as I suppose for the sake of the argument that they are, they are failing to live up to their commitments. Failure to conform to the requirements of logical closure is compatible with commitment to such conformity. But is every rational agent committed to making probability judgements in a manner representable by numerically determinate probability judgements, as F. P. Ramsey seems to have thought? I do not think so. Credal probability
may be indeterminate reflecting a kind of ignorance from which rational inquirers may suffer. The same is true of utility judgement. Rational agents may choose without being committed to judge the option chosen to be best among the available options. To require commitment to numerically determinate probability and utility judgement is to mandate opinionation in the name of reason. This is no less objectionable than requiring that rational agents be certain as to which of a family of maximally consistent alternatives is true no matter how refined the range of alternatives might be. Allowing for indeterminacy is allowing rational agents to acknowledge ignorance. Clearly every one of the claims I have just made is contentious. So how can it be claimed that the principles of rationality that define commitments are so weak that they are non-controversial? I do not know how to settle disputes about the principles of rationality in a manner that avoids begging contentious questions. For my part, all I can say is that I try to keep the principles I deploy to characterize commitments as trivial as I can. But some disputes are unavoidable. Thus, many authors have made a meal out of what seems to be strong evidence that ordinary folk violate and, indeed, resolutely violate the so-called independence postulate. I have argued (Levi 1986, 1997) that the allegedly deviant behaviours exhibited in response to the paradoxes of Allais and of Ellsberg are not deviant at all. The appearance of violation may be interpreted as failure on the part of agents to have determinate utility of money functions (in the case of Allais paradoxes) or to have determinate probabilities (in the case of Ellsberg paradoxes). As a consequence, agents find the options presented to them to be non-comparable with respect to expected utility. In such cases, decision-makers invoke other criteria they find salient in the context.
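For readers who want the structure of the Allais case in front of them, here is the standard configuration. The payoffs are the usual textbook figures, not numbers drawn from the discussion above.

```latex
% The two Allais choice problems (standard illustrative payoffs)
\[
\begin{array}{lll}
\text{Problem 1:} & A_1:\ \$1\mathrm{M}\ \text{for certain}
                  & B_1:\ \$5\mathrm{M}\ (0.10),\ \$1\mathrm{M}\ (0.89),\ \$0\ (0.01)\\
\text{Problem 2:} & A_2:\ \$1\mathrm{M}\ (0.11),\ \$0\ (0.89)
                  & B_2:\ \$5\mathrm{M}\ (0.10),\ \$0\ (0.90)
\end{array}
\]
% For any determinate utility-of-money function u, cancelling the common
% terms on each side gives the same inequality in both problems:
\[
A_1 \succ B_1 \iff 0.11\,u(\$1\mathrm{M}) > 0.10\,u(\$5\mathrm{M}) + 0.01\,u(\$0)
\]
\[
A_2 \succ B_2 \iff 0.11\,u(\$1\mathrm{M}) > 0.10\,u(\$5\mathrm{M}) + 0.01\,u(\$0)
\]
```

Since expected utility delivers the same inequality in both problems, the commonly observed pattern of choosing A1 over B1 together with B2 over A2 cannot be rationalized by any determinate utility-of-money function; the diagnosis just sketched treats this as indeterminacy in u rather than as a violation of independence.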
In my judgement such decision-makers may be doing well at fulfilling their commitments. They do not violate the independence postulate. They behave as they should behave when they recognize their doubts with respect to value judgements or probability judgements as the case may be. To my way of thinking, this way of seeing the behaviour of agents faced with Allais or Ellsberg problems is preferable to strict Bayesian orthodoxy that sees such agents as deviants from rationality in need of therapy as Dawes (1988, 3.3 and 159), for example, seems to do. It is also better than giving up the otherwise compelling independence postulate as Machina (1982a,b) and McClennen (1990) advocate. Both points of view seem to insist that when rational agents must choose between a pair of options, they are committed to choose for the best all things considered. So the choice made 'reveals' a weak preference for the option chosen. My contention is that rational agents are entitled to conclude deliberation with the judgement that none of the options is best
all things considered. Decisions may have to be taken without sufficient warrant for favouring one way of ranking all of the options over another. Such a view of rational choice does not prohibit agents from having evaluations that weakly order all the available options with respect to expected utility. And when the available options are weakly ordered by expected utility, the injunction to maximize expected utility is the fundamental principle of rational choice. But the view I favour does not, in general, mandate such strict Bayesianism. Maximizing expected utility applies as a special limiting case of a standard of rationality that allows for doubt. In the limiting case, agents are opinionated in their probability judgements even when they are not certain of extralogical propositions. When in doubt, agents have more indeterminate probability and utility judgements. But what should one say to someone who argues that a weak theory of rationality should allow as coherent failures of independence along the lines suggested by Machina? In part, I appeal to Seidenfeld's (1988) convincing examples showing that those who violate independence are committed in some scenarios to exhibit preference reversals that Machina himself would admit are incoherent (Levi 1996). In part, I wonder why someone should use Machina's approach to rationalizing alleged failures of independence when there are rationales available that allow for doubt and failure of weak ordering. If the arguments to which I have just gestured are sound, as I believe they are, models of rational choice that allow for violations of independence while insisting on weak ordering are both unmotivated and insensitive to the coherence of doubting values and probabilities. Those who strive to be more tolerant than I am by allowing for violations of independence and for violations of the weak ordering requirements still face the difficulties depicted by Seidenfeld.
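For readers who want the disputed postulate before them, one standard formulation (the von Neumann-Morgenstern version, in notation that is mine rather than Levi's) runs: for any lotteries $L_1$, $L_2$, $L_3$ and any mixing weight $\alpha \in (0,1]$,

```latex
% Independence: mixing with a common third lottery preserves preference.
L_1 \succeq L_2
\quad\Longleftrightarrow\quad
\alpha L_1 + (1-\alpha)\,L_3 \;\succeq\; \alpha L_2 + (1-\alpha)\,L_3 .
```

The Allais and Ellsberg choice patterns are standardly read as violating this axiom; Levi's point is that they can instead be read as choices made under indeterminate utilities or probabilities.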
Violating independence under the conditions where it is intended to apply is simply a recipe for trouble. The status of the independence postulate is not, of course, the only contentious feature of the account of rational coherence I favour. And I have not covered the questions about its status in anything like the detail these issues deserve. My purpose here was to show how I would argue on behalf of my view of rational coherence. My two main lines of defence would be to show that the view of rational coherence I favour is extremely weak. Critics who claim that it is too strong would be rebutted by arguing that questioning the principles of rationality I retain leads to difficulties the critics themselves should find hard to swallow. There will no doubt be some who will swallow anyhow. I have nothing to say in response to such resolute sceptics. I have no serious doubt concerning the merits of the principles of rationality I endorse. And the resolute sceptics offer no good reason for me to come to doubt them. The doubts they generate are paper doubts.
REFERENCES

Cohen, L. J. (1992), An Essay on Belief and Acceptance (Oxford: Oxford University Press).
Davidson, D. (1980), Essays on Actions and Events (Oxford: Clarendon Press).
--(1985), 'Incoherence and Irrationality', Dialectica 39, 345-54.
Dawes, R. (1988), Rational Choice in an Uncertain World (New York: Harcourt Brace).
Ellis, B. (1979), Rational Belief Systems (Oxford: Blackwell).
Gibbard, A. (1990), Wise Choices, Apt Feelings (Cambridge, Mass.: Harvard University Press).
Hacking, I. (1967), 'A Slightly More Realistic Personalist Probability', Philosophy of Science 34, 311-25.
Levi, I. (1970), 'Probability and Evidence', in M. Swain (ed.), Induction, Acceptance and Rational Belief (Dordrecht: Reidel): 134-56.
--(1986), 'The Paradoxes of Allais and Ellsberg', Economics and Philosophy 2, 23-53. Reprinted in Levi (1997).
--(1991), The Fixation of Belief and Its Undoing (Cambridge: Cambridge University Press).
--(1996), 'Choice Nodes as Loci of Control', in S. Lindstrom, R. Sliwinski, and J. Osterberg (eds.), Odds and Ends, Uppsala Philosophical Studies 45, 158-70.
--(1997), The Covenant of Reason (Cambridge: Cambridge University Press).
--and Morgenbesser, S. (1964), 'Belief and Disposition', American Philosophical Quarterly 1, 221-32.
Machina, M. (1982a), 'Expected Utility Analysis without the Independence Axiom', Econometrica 50, 277-323.
--(1982b), 'Generalized Expected Utility Analysis and the Nature of the Observed Violations of the Independence Axiom', in B. P. Stigum and F. Wenstøp (eds.), Foundations of Utility and Risk Theory with Applications (Dordrecht: Reidel): 263-93.
McClennen, E. F. (1990), Rationality and Dynamic Choice (Cambridge: Cambridge University Press).
Quine, W. V. (1960), Word and Object (New York: Wiley and Technology Press of MIT).
Samuelson, P. (1947), Foundations of Economic Analysis (Cambridge, Mass.: Harvard University Press).
Seidenfeld, T. (1988), 'Decision Theory without "Independence" or without "Ordering". What is the Difference?', Economics and Philosophy 4, 267-90.
10
Rationality and Psychological Explanation without Language
JOSE LUIS BERMUDEZ
One important ramification of the 'cognitive turn' in the behavioural and cognitive sciences is that high-level cognitive abilities are being identified and studied in an ever-increasing number of species and at ever-earlier stages of human development. Contemporary behavioural sciences have more or less abandoned what was for many years an unquestioned tenet in the study of cognition, namely, that language and thought went hand in hand, and hence that the study of thought could only proceed via the study of language. Our understanding of the early stages of human development has undergone a sea change. Many developmental psychologists have come to speak of prelinguistic infants as little scientists, possessing, testing, and refining theories about the nature of the physical world (Gopnik and Meltzoff 1997). Complex experiments are regularly set up to identify the predictions that infants as young as 3 months make about the structure of physical objects and their dynamic and kinematic properties; about the trajectories that objects take through space-time; and about what will happen when objects interact (Spelke 1990). For example, when they are 3 months old infants are sensitive to the solidity of objects. They show surprise when one object appears in a place that it could only reach by passing through another object. It is tempting to conclude, and many developmental psychologists have concluded, that these infants have classified something as an object and then made inferences about how it will behave on the basis of that classification. The study of animal behaviour has been no less drastically transformed (Allen and Bekoff 1997). The new discipline of cognitive ethology is essentially the study of the mental states of animals and how those mental states manifest themselves in behaviour.
Unlike traditional approaches that have remained in the laboratory and attempted to account for animal performance on complicated but artificial tasks in terms of various forms of associationist
learning, cognitive ethologists are prepared to study animals in the wild as they deal with the practical problems that arise in foraging, finding mates, constructing shelters, and raising their young. Cognitive ethologists, unlike the older generation of comparative psychologists, have little time for the project of trying to explain how an animal behaves in terms of nonrepresentational stimulus-response mechanisms or the fixed behaviour patterns known as innate releasing mechanisms. They start from the assumption that animals have certain desires and certain beliefs about how the world is organized and act on the basis of those beliefs to try to ensure the satisfaction of their desires. Then they look at a species' natural behaviours, interpreting them as sophisticated strategies for pursuing the desires the members of that species seem to have. A good illustration of the degree of cognitive sophistication now found in animals is the various behaviours that have been analysed as conscious attempts to manipulate conspecifics and members of other species. Deception behaviours have been identified at all levels of the phylogenetic ladder, from the much studied examples in higher primates such as chimpanzees (Byrne 1995) to the broken wing display of the plover (Ristau 1991) and the false alarm calls of the great tit (Møller 1988). The study of human prehistory has taken on a new face (Mellars and Gibson 1996). Cognitive archaeologists are finding evidence of thinking behaviours long before even the earliest plausible dates for the emergence of language. Influential current accounts of the mind of prehistoric man identify evolutionary stages of high-level but manifestly non-linguistic cognition. Stephen Mithen has argued, for example, that the early prehistory of the human mind was characterized by highly specialized cognitive modules, bodies of knowledge dedicated to specific aspects of the natural and social world (Mithen 1996).
These 'multiple intelligences' permitted complex intellectual skills and inferences. The social intelligence of the early human mind initially emerged from the constraints of social living, but then rapidly in its turn expanded the possibilities and parameters of communal existence. The natural history intelligence was tied up with what appear to have been the complex hunting and foraging strategies of the omnivorous early hominids. On Mithen's view the emergence of language proper did not make cognition possible. What it did was allow the integration of the previously separated domain-specific modules. Merlin Donald has offered a rather different, but no less cognitivist, conception of the life of the prelinguistic hominids (Donald 1991). Prelinguistic hominids were capable of representing the world intentionally, learning complex motor skills by imitation, and constructing novel motor routines from a recursively structured motor vocabulary of basic movements. The integration of these individual skills into a social environment, with group mimetic acts, social coordination, and simple forms of teaching facilitated the emergence of complex tool-making,
patterns of hunting that varied according to the season, primitive rituals, and a highly ramified social structure. For Donald, as indeed for Mithen and many other students of human prehistory, these sophisticated forms of instrumental and social cognition are a precondition for the emergence of language, not a consequence of that emergence. The philosophical questions raised by these practices of psychological explanation fall into four broad groups. The first group of questions are broadly metaphysical. They are all questions about the possibility and nature of non-linguistic thought. Questions about the vehicle of nonlinguistic thought fall under this heading, as do the various arguments that have been put forward (primarily by philosophers) to try to establish that it is conceptually impossible that non-linguistic creatures could be thinkers. A second group of questions concerns the semantics of non-linguistic thought. These are questions about how we should understand the content of non-linguistic thought and about the different types of thinking available to language-less creatures. The third group of questions are largely epistemological. Even if all the metaphysical questions are answered satisfactorily, we will still need some account of how we can come to attribute thoughts to non-linguistic creatures. The fourth and last group of questions are to do with the practice of explanation within which attributions of non-linguistic thought are embedded. In the forms of psychological explanation with which we are most familiar (the standard, belief-desire explanations of the behaviour of language-using, concept-possessing humans) we assume that psychological explanation is an idealized reconstruction of practical decision-making. 
That is, in a psychological explanation we cite beliefs and desires such that the agent whose behaviour is being explained could have (and perhaps even did) reason from those beliefs and desires to the intention to act in the way that he actually did act. So, a proper understanding of the practice of giving psychological explanations of the behaviour of non-linguistic creatures must bring with it a plausible account of how non-linguistic creatures fix on a particular course of action. It is the last of these four groups of questions that I will focus on in this paper. My concern will be with how to develop notions of rationality and reasoning that are both applicable to non-language-using creatures and sufficiently robust to underwrite the practice of giving psychological explanations of the behaviour of non-linguistic creatures. In the first section I explain the interdependence of the notion of psychological explanation and the notion of rationality, and show that this interdependence must be understood differently at the linguistic and the non-linguistic levels. In sections 2 to 4 I outline three different levels of non-linguistic rationality. Section 5 explains the connections between this typology and the project of giving psychological explanations of the behaviour of non-linguistic creatures.
THEORETICAL AND PRACTICAL RATIONALITY
Psychological explanations operate by attributing propositional attitudes (typically a combination of beliefs and desires) that rationalize the behaviour being explained (Davidson 1963). The governing principle of the explanation is that it would be rational for a creature with that combination of beliefs and desires (and no significantly countervailing beliefs and desires) to act in the way that it did in fact act. The requirement here is not, of course, that the relevant action be completely rational in a full or everyday sense of the term. What matters is that it be rational from the point of view of the agent. That is to say, the performance of the action should make sense in the light of the agent's beliefs and desires. But it may be that those beliefs, or some subset of them, are irrational in a way that makes the action itself irrational. It may be helpful to distinguish explicitly between internal and external rationality. Assessments of internal rationality are relative to an agent's doxastic and motivational states, taking those states as given, while assessments of external rationality include assessments of the doxastic states underlying the action. To say that an action is externally rational is to say that it is in some sense appropriate to the circumstances in which it is performed, where those circumstances include the agent's motivational states-with different theories of external rationality understanding the type of appropriateness involved here in different ways. It is because the appropriateness of an action in a given set of circumstances is partly a function of how the agent interprets those circumstances that assessments of external rationality extend to the agent's belief-set.1 We will return to this distinction in Non-Linguistic Rationality and Inference below, as it can be employed to mark a significant difference between levels and types of rationality.
This internal rationalizing connection allows the attribution of the thoughts and desires to be genuinely explanatory. Thoughts and desires cause behaviour qua thoughts and desires (that is to say, in virtue of their content) because their contents rationally dictate a single course of action or a limited number of possible courses of action. In the absence of such a rationalizing connection there would be no reason why a belief-desire pair with those particular contents should cause that particular action.2 Clearly,

1 Some theories of external rationality will extend still further to include assessments of the agent's motivational states, but the question of whether desires can be assessed for rationality is tangential to the issues to be discussed in this paper. It seems plausible that assessments of the rationality of desires are appropriate only for creatures that are capable of reflecting on their desires and there are reasons for thinking that such desires are unavailable at the non-linguistic level (Bermudez, forthcoming chs. 8 and 9).
2 This conception of psychological explanation is held widely but not unanimously. According to the theory of intentional icons developed by Ruth Millikan, intentional states such as beliefs and desires should be understood in functional terms-in terms primarily of
therefore, any application of psychological explanations to non-linguistic creatures must rest upon the appropriateness of applying criteria of rationality to non-linguistic creatures. But how are we to extend the notion of rationality to non-linguistic creatures? The basic problem is that the models of rationality we possess are not easily generalized to non-linguistic creatures. Consider, for example, the influential account of practical rationality offered by Donald Davidson-what we might term the inference-based conception of practical rationality: If someone acts with an intention then he must have attitudes and beliefs from which, had he been aware of them and had he the time, he could have reasoned that his act was desirable ... If we can characterize the reasoning that would serve we will, in effect, have described the logical relations between descriptions of beliefs and desires and the description of the action, when the former gives the reasons with which the latter was performed. We are to imagine, then, that the agent's beliefs and desires provide him with the premises of an argument. (Davidson 1978/1980, 85-6)
Let us suppose that practical reasoning is argument-like in the way that Davidson and many others have suggested. Then it straightforwardly follows that we can only explain and predict the behaviour of other creatures to the extent that we can understand the inferential relations between descriptions of their beliefs, desires, and so forth, on the one hand, and their actions on the other. And it seems natural to think that our understanding of the inferential relations between propositional attitudes and actions depends upon their conforming to the dictates of what might be termed procedural rationality-that is to say, sensitivity to certain basic principles of deductive and inductive inference (although such conformity is a necessary rather than a sufficient condition). Obvious examples are the familiar deductive principles of modus ponens, modus tollens, contraposition, and so forth, together with such basic principles of probability theory as that the probability of a conjunction can never be greater than the probability of its conjuncts; that the probability of a hypothesis and the probability of its negation should add up to 1, etc. There are two principal obstacles to extending this inference-based conception of rationality to non-linguistic creatures. The first obstacle concerns

why they have come about and what jobs they are designed to do. A corollary of her approach to mental states is to downplay the rational connections holding between beliefs, desires, and other propositional attitudes. Psychological explanation, as Millikan construes it (in, for example, Millikan 1984 and 1986) is a form of functional explanation, much closer to explanation in biology than to psychological explanation as traditionally conceived. Millikan's general approach has been sympathetically applied to the domain of cognitive ethology in ch. 6 of Allen and Bekoff.
There is much to be said for the view that psychology is a branch of biology, and it is also true that much explanation within psychology and ethology is functional rather than causal/predictive, but most of the behaviours that are candidates for psychological explanation do not fit Millikan's framework, which works best for simple, subpersonal mechanisms.
the structure of the vehicles of non-linguistic thought. We understand inference in formal terms-in terms of rules that operate upon representations in virtue of their structure. But we have no theory at all of formal inferential transitions between thoughts that are not linguistically vehicled. Our models of formal inference are based squarely on transitions between natural language sentences (as codified in a suitable formal language). To be clear on the problem we need to remember that there is an important distinction between two different ways in which thoughts can be structured. They can be structured at the level of their vehicles or they can be structured at the level of their contents. Clearly, it is a necessary condition upon there being formal inferential transitions between contentful thoughts that those thoughts should have structured contents. Nonetheless, it is not a sufficient condition. Formal rules of inference do not operate on thought-contents, but rather on the vehicles of those contents. That is what makes them formal. They are syntactic rather than semantic. It is easy to lose sight of this when dealing with inferential transitions between sentences in formal languages, because there is a structural isomorphism between the logical structure of the sentence that expresses a given thought and the structure of the thought-content that it expresses. In the case of formal language sentences it doesn't really matter whether one considers the structure of the vehicle or the structure of the content, since there is really only a single structure. But in the case of non-linguistic thought (as indeed in the case of sentences in natural languages) the distinction becomes important. The contents of non-linguistic thought are indeed linguistically expressible and have a commensurate degree of structure. That is what makes them instances of thinking-that, rather than thinking-how. 
But it is far from clear that the vehicles of non-linguistic thought are linguistically structured in a way that would make it possible to apply formal rules of inference to them. Two objections are likely to arise at this point. The first concerns the way in which I am understanding inference in purely formal terms. Some philosophers have suggested that 'everyday rationality' of the type psychological explanations aim to track is not in fact formal. That is to say, everyday rationality proceeds by seeing immediate connections between ideas, rather than by mechanically applying formal rules of inference. This broadly speaking intuitionist line of thought goes back to Locke and has been promoted more recently by Jonathan Lowe (1993). The development of formal codifications of reasoning, according to Lowe, involves maintaining reflective equilibrium between the deliverances of everyday rationality and the deductive consequences of particular formal systems. Someone attracted to this intuitionist conception of everyday rationality would be quite justified in mistrusting an argument from a syntactic understanding of inference to the need for sentential vehicles for the thoughts between which inferences take place. But it is not clear that such a theorist can avoid the need for
sentential vehicles altogether. What exactly are the two things between which we intuit connections, if not sentences? It is hard for us to make much sense of what Locke and his contemporaries found so easy to take for granted, namely, that we can directly perceive connections between thoughts. The most natural way to gloss the idea of directly perceived connections between thoughts is in terms of directly perceived connections between the truth of one sentence and the truth of another.3 And with this we are of course back with linguistic vehicles. The second likely objection is that the language of thought hypothesis gives us a perfectly straightforward way of understanding how non-linguistic thoughts can have linguistically structured vehicles-and nothing I have said up to now gives us any reason for thinking that the language of thought hypothesis is not true. There are reasons, though, for thinking that the language of thought hypothesis will not be of use to us here. The principal one derives from one of the key features of the inference-based conception of rationality. As Davidson himself makes very clear, the explanatory power of the rationality connection in ordinary folk psychological explanations depends upon the fact that the person whose behaviour is being explained is at least in principle capable of carrying out the relevant reasoning-as he puts it, 'had he been aware of them and had he the time, he could have reasoned that his act was desirable'. Without this we would have nothing like a genuine explanation. But to be capable of reasoning requires mastery of the relevant inferential principles-that is to say, mastery of the canons of what I have termed procedural rationality. And it is here that the inference-based conception breaks down. Non-linguistic creatures are not reasoners in anything like the sense required for them to be rational on the inference-based conception of rationality.
Various students of animal behaviour have suggested that non-linguistic creatures are in fact capable of mastering certain basic formal principles of inference. These claims are plausible only when the notion of mastering a formal principle of inference is taken in such an etiolated sense that it no longer really counts as inference at all. Some of the claims made about Ronald Schusterman's work with sea lions will illustrate this. It is frequently claimed that Schusterman has trained his sea lions to understand and apply the logical principle of the transitivity of identity. Here is a representative report: Researchers have so far discovered that the essentials of Aristotelian logic are accessible to at least one other species: the sea lion. The first task presented to the seals was to learn that two icons are equivalent: X = Y. Next they were taught that Y and
3 This would effectively amount to a conception of everyday reasoning taking semantic, rather than syntactic, validity as fundamental.
Z were equivalent: Y = Z. Then they were asked if X and Z were equivalent: does X = Z? Sea lions readily mastered this logical train. (Gould and Gould 1994, 176)
When we look a bit more closely at the experimental paradigm, however, it becomes clear that there is a certain creative licence in this description. The initial training, which Gould and Gould describe in terms of teaching the animal that two icons are equivalent, really amounted only to rewarding the sea lion when it chose the second icon (Y) shortly after being presented with the first icon (X). There seems no reason why this form of learning should be described in terms of a logical concept such as the concept of identity. Not only does the learnt behaviour seem to be a purely conditioned response, but it is hard to know how even to interpret the suggestion that two icons might be identical, given that the icons used in the experiment had different illustrations on them. It is true that there is something significant going on here. The sea lions learn to associate X and Z even though they have never been exposed to the conjunction of the two icons. This is interesting because it seems clearly to contradict the basic claim of classical conditioning theory, which is that learned associations must rest upon reinforcement. Classical conditioning theory clearly predicts that the sea lions would fail the test, on the grounds that they had not been exposed to any reinforcement of the association between the first and third icons. But, understood in these terms, the sea lion performance seems much closer to the well-documented phenomenon of sensory preconditioning in rats (Rizley and Rescorla 1972). In sensory preconditioning rats are exposed to two pairings. The first pairing is of a light and a tone, while the second is of the same light and an electric shock. If, once the preconditioning has been completed, the rats are exposed to the tone on its own they will manifest the same aversive behaviour that they were conditioned to show to the light (in virtue of its association with the electric shock).
Just as the sea lions had never been exposed to the conjunction of icon X and icon Z, so too had the rats never been exposed to the conjunction of tone and electric shock. Yet in both cases the response was generalized without further training. It might be appropriate to describe this phenomenon using some phrase such as the 'transitivity of association', but there seems to be no sense in which implicit grasp of any logical principle is involved. It is striking that Schusterman and his co-workers do not make any explicit claims about the concept of identity. In, for example, Kastak, Schusterman, and Kastak (2001) the experiments are described as showing that sea lions are capable of equivalence classification, where this is understood as the capacity to classify groups of physically dissimilar stimuli which are related by a relation R possessing the formal properties of reflexivity (every member of the group bears that relation to itself), symmetry
(if a bears relation R to b then b will bear relation R to a), and transitivity (if a bears relation R to b and b bears relation R to c then a will bear relation R to c). Any relation satisfying these formal properties is an equivalence relation. It is true, of course, that the logical concept of identity is an equivalence relation-but it hardly follows that sea lions which have shown themselves capable of equivalence classification will ipso facto have mastered the concept of identity. There are various equivalence relations in play in the different experimental paradigms which have been taken to illustrate equivalence classification, but none of them has anything to do with the concept of identity. The simplest equivalence relation is the relation of having-the-same-reinforcement-history-as (i.e. all stimuli which have been positively reinforced will be classified together, as will all stimuli that have been negatively reinforced). Classification according to this very straightforward equivalence relation is not, in fact, unique to sea lions. It has been demonstrated in pigeons (Vaughan 1988). An example of a more complex equivalence relation would be the relation of having-the-same-function-as. It turns out that sea lions (although as yet no other species) are capable of classifying stimuli ordered by this equivalence relation. A recent survey article (Schusterman, Kastak, and Kastak 2002) suggests that sea lions are able to classify according to equivalence relations of being a conspecific or being related. None of these equivalence relations, however, has anything to do with the equivalence relation of identity, as a relation which holds between everything and itself. This is no place for a survey of claims that have been made for logical competence in non-linguistic creatures, but it seems to me that all such claims can be fitted into one of three categories.
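The gap between equivalence classification and grasp of identity can be made concrete in a few lines. The following sketch (the stimuli and reinforcement histories are invented for illustration) checks that having-the-same-reinforcement-history-as possesses the three formal properties of an equivalence relation, while grouping physically distinct stimuli together, so it is plainly not classification by identity:

```python
# Sketch: equivalence classification without the concept of identity.
# Stimuli and reinforcement histories are invented for illustration.

stimuli = {
    "circle": "rewarded",
    "square": "rewarded",
    "star":   "unrewarded",
    "cross":  "unrewarded",
}

def same_history(a, b):
    """The relation having-the-same-reinforcement-history-as."""
    return stimuli[a] == stimuli[b]

names = list(stimuli)

# The three formal properties of an equivalence relation all hold.
reflexive = all(same_history(a, a) for a in names)
symmetric = all(same_history(a, b) == same_history(b, a)
                for a in names for b in names)
transitive = all(same_history(a, c)
                 for a in names for b in names for c in names
                 if same_history(a, b) and same_history(b, c))

# Yet physically distinct stimuli fall into the same class, so the
# relation is plainly not identity.
classes = {}
for stimulus, history in stimuli.items():
    classes.setdefault(history, set()).add(stimulus)

print(reflexive, symmetric, transitive)  # True True True
print(classes)
```

Nothing in the classification requires that any stimulus be identical to any other; the partition is induced by a relation far weaker than identity, which is the point at issue in the text.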
In the first category fall claims such as those made by the Goulds about Schusterman's sea lion experiments. The key characteristic here is that mastery of logical principles is appealed to in order to explain forms of behaviour that can easily be explained more parsimoniously. There is obviously little aid here for an extension of the inference-based conception to non-linguistic creatures. In the second category certain forms of cognitive ability are wrongly described as logical. As an example we can take David Premack's well-known experiments using a same-different paradigm on chimpanzees (Premack and Premack 1983). Premack trained his chimpanzees by presenting them with an initial symbol followed by a pair containing that symbol and a second, different symbol. They were positively rewarded for selecting the second symbol. Although at first reinforcement was required for each new pairing the chimpanzees gradually began to respond correctly without any reinforcement. It may well be right to interpret this behaviour in terms of mastery of the concepts same and different, but there is surely no logical principle involved here. The experiment says nothing about the capacity of chimpanzees to master formal principles of reasoning. Again, this will be of little use to proponents of an inference-based conception of practical reasoning.
242
Jose Luis Bermudez
In the third category of claims about the logical competence of non-linguistic creatures we can place what might be termed 'as-if' attributions of certain patterns of reasoning. As has become well known from studies of animal foraging behaviour, it is possible to model certain aspects of animal behaviour by making the heuristic assumption that the animal is performing complex cost-benefit calculations. Here is how Krebs and Kacelnik describe the bare bones of the framework they propose for studying patterns of animal behaviour, such as those displayed by a foraging robin:

We shall use the metaphor of the animal as a 'decision-maker'. Without implying any conscious choice, the robin can be thought of as 'deciding' whether to sing or to feed, whether to feed on worms or on insects, whether to search for food on the grass or on the flower bed. We shall see how these decisions can be analysed in terms of the costs and benefits of alternative courses of action. Costs and benefits are ultimately measured in terms of Darwinian fitness (survival and reproduction), and may, in many instances, be measured in terms of some more immediate metric such as energy expenditure, food intake or amount of body reserves. Analysing decisions in terms of their costs and benefits cannot be done without also taking into consideration physiological and psychological features that might act as constraints on an animal's performance. The fitness consequences of decisions, and the various constraints that limit an animal's options, can be brought together in a single framework using optimality modelling. (Krebs and Kacelnik 1991)
The guiding assumption of optimal foraging theory is that animals should optimize the net amount of energy obtained in a given period of time. So, acquired energy is the benefit in the cost-benefit analysis. In the case of a foraging bird, for example, faced with the 'decision' whether to keep on foraging in the location it is in or to move to another location, the costs are the depletions of energy incurred through flight from one location to another and during foraging activity in a particular location. The cost-benefit analysis can be carried out once certain basic variables are known, such as the rate of gaining energy in one location, the energy cost of flying from one location to another, and the expected energy gain in the new location. It turns out that optimality modelling makes robust predictions of foraging behaviour in birds such as starlings (Sturnus vulgaris) and great tits (Parus major). Cowie's study of great tits foraging in an experimental environment containing sawdust-filled cups with mealworms hidden inside showed that the amount of time a given bird spent at a given cup could be accurately predicted as a function of the travel time between patches and the quantity of mealworms in the cup (Cowie 1977). Similarly, Kacelnik has shown that adult starlings foraging for their young behave in ways predicted by the marginal value theorem of optimal foraging theory, on the assumption that the relevant currency is net energy gain to the family (Kacelnik 1984). Foraging starlings collect a beakload of food from a foraging site before returning to their nests. Obviously, diminishing returns will set in during the foraging excursion, as
the bird will be less efficient at gathering new food the fuller its beak is with the food it has already gathered. The marginal value theorem is a quantitative way of predicting the adjustments that a creature will make in its foraging behaviour to compensate for these diminishing returns. Of course, as Krebs and Kacelnik make plain in the quoted passage, there is no suggestion that the great tits or starlings really are carrying out complex calculations about how net energy gain can be maximized within a particular set of parameters and background constraints. It is a crucial tenet of optimal foraging theory that the optimizing behaviour is achieved by the animal following a set of relatively simple rules of thumb or heuristics, which are most probably innate rather than learned. So, for example, a great tit might be hard-wired to move on to the next tree after a certain number of seconds of unsuccessful foraging in one tree. And this is why such behaviour is best described as 'as-if' reasoning. Evolution has worked in such a way (at least according to the proponents of optimal foraging theory) that foraging species have evolved sets of heuristic strategies that result in optimal adaptation to their ecological niches. This optimal adaptation can be mathematically modelled in terms of what is ultimately a sophisticated version of expected utility theory, but the behaviours in which it manifests itself do not result from the application of such a theory-any more than a bird's capacity to fly reflects any mastery on its part of the basic principles of aerodynamics. Once again, there is no scope here for understanding how the inference-based conception of practical rationality can be extended to non-linguistic creatures. It would seem, therefore, that Davidson's strategy of treating practical rationality as a form of theoretical rationality is doomed to failure.
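As a rough illustration of how the marginal value theorem mentioned above generates predictions, the following numeric sketch assumes a diminishing-returns gain function g(t) = A(1 - exp(-rt)) with invented parameters; it is not a model fitted to Cowie's or Kacelnik's data.

```python
# A minimal numeric sketch of the marginal value theorem. Cumulative
# gain in a patch is g(t) = A * (1 - exp(-r * t)), which flattens as
# the patch (or the bird's beak) fills up: diminishing returns. All
# parameter values are illustrative assumptions.
import math

A, r = 10.0, 0.5   # asymptotic gain and depletion rate (assumed)

def gain(t):
    return A * (1.0 - math.exp(-r * t))

def optimal_residence(travel_time, dt=0.001, t_max=60.0):
    """Scan for the residence time t* that maximizes the overall
    rate of energy gain, gain(t) / (travel_time + t)."""
    best_t, best_rate = dt, 0.0
    t = dt
    while t < t_max:
        rate = gain(t) / (travel_time + t)
        if rate > best_rate:
            best_t, best_rate = t, rate
        t += dt
    return best_t

# The theorem predicts longer patch residence when patches are
# farther apart (travel is more expensive):
assert optimal_residence(travel_time=8.0) > optimal_residence(travel_time=2.0)
```

The point of the sketch is only that such adjustments fall out of the optimality calculation; on the view described above, the bird itself achieves the same result by following a simple hard-wired heuristic.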
It is true that I have so far argued explicitly only for the thesis that we cannot extend a workable notion of procedural rationality to non-linguistic creatures. Despite the views of some philosophers (Stein 1996) and many psychologists of reasoning (particularly those studying the Wason selection task and other experimental paradigms testing mastery of the rules that govern conditional reasoning) mastery of the basic principles of inference needs to be clearly distinguished from what might be termed norms of good reasoning. These are principles that govern the processes of thinking: weighing up the evidence for and against a particular proposition; judging the likelihood of a particular event; changing one's beliefs and probability assignments in response to changes in the available evidence, and so forth. The norms of reasoning go to make up what I have elsewhere termed the domain of epistemic rationality (Bermudez 2001). It is hard to see how the epistemic dimension of theoretical rationality can be of any use in making sense of practical rationality. Clearly, therefore, if we are to identify a sense of non-linguistic rationality in the practical sphere it is no use approaching the issue via procedural rationality in the way that Davidson does. We need a specific account of how a non-linguistic creature might be practically rational. In line with the
general methodology I have adopted elsewhere (Bermudez 1998), I will broach the issue from an epistemological and operational dimension, starting with the following questions. In what circumstances would it be appropriate to describe the behaviour of a non-linguistic creature as rational? What operational criteria might there be for non-linguistic rationality? In the next three sections I will identify three different types of behaviour, each of which can be described as rational in a different sense. For reasons that will emerge below, only the second and third of these can count as rational in the right sense to ground the practice of psychological explanation. On the basis of this classification of types of behaviour and operational criteria I will proceed in Non-Linguistic Rationality and Inference to offer an account of non-linguistic rationality and non-linguistic reasoning.
LEVEL-0 RATIONALITY
Let me start with a basic datum. There is no need for psychological explanations of animal behaviour when we are dealing with tropistic behaviours such as those produced by reflexes, innate releasing mechanisms (such as imprinting mechanisms), or classical conditioning. In such situations we can explain the behaviour in terms of a law-like connection between stimulus and response. Whenever the relevant stimulus is encountered the same response will emerge. We do not need to postulate intermediary representational states between sensory transducers and behavioural output. Nonetheless, there is a sense in which tropistic behaviour of this form can be described as rational. This is the sense of the word 'rational' on which 'rational' means something along the lines of 'adaptive' or 'conducive to survival' (Dawkins 1986). There are criteria by which it is appropriate to judge the rationality of even the simplest tropistic forms of behaviour. Take a behaviour that has evolved as an adaptive response to potentially harmful events on the body surface, such as the eye-blink response to puffs of air. This is clearly adaptive, and the adaptiveness carries over even to behaviours in which the eye-blink response is itself part of a larger conditioned process. If there is reliable advance warning of the arrival of the unconditioned stimulus (e.g. the sound of a tone preceding the puff of air), then there are adaptive advantages in activating the response before the unconditioned stimulus actually appears. There is nothing wrong with describing such a behaviour as rational. I will call this level-0 rationality. It has two characteristic features:
(i) It is not grounded in any process of decision-making.
(ii) It is not applicable to particular behaviours, but to the presence (either in the organism or in the species) of a particular tendency or disposition.
The first feature should be self-explanatory, but some comments are required on the second feature. We can only apply the notion of rationality when there is a space of alternatives. A rational behaviour has to be one that is performed rather than some other behaviour that could have been performed. But of course there is no such space of alternatives at the level of individual tropistic behaviours. The space of alternatives exists only at the level of the genetically determined disposition to behave in a certain way. Putting the point in more familiar philosophical terms, level-0 rationality applies only to behaviour-types and not to behaviour-tokens. Foraging behaviour is an excellent example of behaviour that can be level-0 rational in precisely this sense. As we saw in the previous section, many species have evolved to follow simple behavioural rules when foraging for food. According to optimal foraging theory, in many cases these rules are such that individuals of the relevant species are maximally adapted to their environment. So, how might one apply the notion of rationality to the feeding patterns of a redshank (Tringa totanus)? Redshanks are shorebirds that dig for worms in estuaries at low tide. It has been noticed that they sometimes feed exclusively on large worms and at other times they feed on both large and small worms. The hypothesis put forward by Stephens and Krebs to explain this behaviour involved what they called the principle of lost opportunity (Stephens and Krebs 1986). In essence, although a large worm is worth more to the redshank in terms of quantity of energy gained per unit of foraging time than a small worm, the costs of searching exclusively for large worms can have deleterious consequences except when the large worms are relatively plentiful.
If the large worms are rare, then it will obviously take much longer to find one-time during which the redshank is not only expending valuable energy but also losing opportunities to gain energy from smaller worms. So, redshanks only ever forage exclusively for large worms when there are plenty of large worms around. It seems perfectly reasonable to say that this is rational behaviour on the part of the redshank. But of course this is not to say that whenever the redshank switches from a restricted search strategy (only large worms) to an unrestricted search strategy (both large and small worms) it is behaving rationally. That would not be right, because at the level of the individual foraging behaviour there is no space of alternatives. The redshank is 'following' a relatively simple algorithm and there seems no sense in which it could fail to follow it-unless, of course, the foraging algorithm was trumped by another algorithm and the bird ceased foraging, as it might do if a predator was detected. The space of alternatives exists at the level of the hard-wired algorithm. What is rational is not the redshank's behaviour on a particular occasion, but rather the fact that it has evolved in such a way as to follow an algorithm that allows it to switch from a restricted search strategy to an unrestricted search strategy, as opposed, for example, to always following the same strategy.
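The lost-opportunity logic can be sketched with the standard prey-choice calculation found in Stephens and Krebs; the inequality below is that textbook model, and all energy values, handling times, and encounter rates are invented for illustration rather than figures from the redshank study.

```python
# A numeric sketch of the 'lost opportunity' principle in the standard
# prey-choice model. A forager should ignore small worms only when the
# rate it gets from large worms alone, lam_L * e_L / (1 + lam_L * h_L),
# exceeds the profitability e_S / h_S of a small worm: time spent
# handling a small worm is then a lost opportunity to find a large one.
# All values below are illustrative assumptions.

e_L, h_L = 10.0, 2.0   # energy and handling time per large worm (assumed)
e_S, h_S = 2.0, 1.0    # energy and handling time per small worm (assumed)

def specialize_on_large(lam_L):
    """lam_L is the encounter rate: large worms found per unit search time."""
    rate_large_only = lam_L * e_L / (1.0 + lam_L * h_L)
    return rate_large_only > e_S / h_S

print(specialize_on_large(lam_L=2.0))   # plentiful large worms -> True
print(specialize_on_large(lam_L=0.1))   # scarce large worms -> False
```

Note that the switch depends only on the encounter rate with large worms, which matches the observation that redshanks restrict their diet exactly when large worms are plentiful.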
As the example of optimal foraging theory makes clear, for a behaviour pattern to count as rational in the level-0 sense there must be not only a contrast space of alternative behaviour patterns, but also a normative standard against which those behaviour patterns can be assessed. It is by no means always easy to determine what this normative standard should be. As we have seen, the normative 'currency' by which to judge foraging behaviour is some form of maximization of energy gain. But there are questions to be raised about how widely the energy calculation should extend. In certain cases (such as the redshank example we have just considered) it makes sense to restrict the calculation to the foraging bird. In other cases, such as the starlings foraging for their young, the most accurate predictions come when one works on the basis of the rate of energy procurement for the family as a whole. Nor, of course, is maximization of energy gain the only available currency. Another obvious normative standard for evaluating particular patterns is that they should facilitate predator avoidance-or that they should not hinder the animal from finding a mate. A distinction needs to be made between short-term criteria of rationality of the type we have just been considering and long-term criteria (see Dawkins (1986) for a similar distinction between short-term and long-term rationality). Rate of energy procurement is a short-term criterion of rationality. The accepted currency for long-term calculations is fitness. There are different ways of calculating fitness. Strictly speaking, fitness is a matter of relative quantities of specific genes in the gene pool over time, but in practical terms this can often be measured in terms of the reproductive output of individual animals. This is individual lifetime fitness, calculated as the product of the length of time the animal survives and the average number of offspring it produces during each year of its life.
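Individual lifetime fitness, as just defined, is the product of survival time and average yearly reproductive output; the inclusive-fitness variant discussed below discounts the fitness of the animals a donor helps by their relatedness to it. A schematic sketch of both calculations, with invented numbers:

```python
# A sketch of the two fitness calculations. All numbers are invented
# for illustration; the formulas simply transcribe the definitions
# given in the text.

def lifetime_fitness(years_survived, offspring_per_year):
    """Individual lifetime fitness: survival time multiplied by the
    average number of offspring produced per year."""
    return years_survived * offspring_per_year

def inclusive_fitness(donor, helped):
    """Donor's own lifetime fitness plus each helped animal's lifetime
    fitness discounted by its relatedness to the donor (the probability
    of sharing genes by descent)."""
    return donor + sum(fitness * relatedness for fitness, relatedness in helped)

donor = lifetime_fitness(6, 2.0)                 # 12.0
helped = [(lifetime_fitness(5, 2.0), 0.5),       # full sibling, r = 0.5
          (lifetime_fitness(4, 2.5), 0.25)]      # half sibling, r = 0.25
print(inclusive_fitness(donor, helped))          # 12.0 + 5.0 + 2.5 = 19.5
```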
In the not uncommon situation in which an animal sacrifices its own individual lifetime fitness in a way that increases the lifetime fitness of other animals that share many of the same genes by descent, the appropriate currency is inclusive fitness. Inclusive fitness is arrived at by adding the individual lifetime fitness of the donor animal to the sum of the individual lifetime fitness of the animals helped by the donor, each discounted by the probability of their sharing genes with the donor animal. The point to extract from all this is that there are many different criteria according to which level-0 rationality can be assessed. The basic distinction is between short-term and long-term criteria, but within each grouping there are further distinctions. This is important for the following reason. One objection that is likely to be raised against the very idea of level-0 rationality is that it is completely Panglossian. Every form of behaviour that is best analysed at this level will come out as rational, simply as a function of the way in which the notion is defined. Any suitable behaviour pattern that exists does so because it has proved more adaptive than the other
potential behaviour patterns that were available to natural selection at a particular moment in evolutionary time. But then, it looks very much as if any extant behaviour pattern will prove to be rational in the level-0 sense simply in virtue of having been selected. This would clearly deprive the notion of level-0 rationality of sense and point. It would make it impossible for there to be such a thing as level-0 irrationality and it hardly seems appropriate to talk about rationality in circumstances in which the notion of potential irrationality is unable to get a grip. What we have just seen, however, is that evolutionary fitness is not the only criterion by which the level-0 rationality of a particular behaviour pattern can be assessed. Indeed, in many cases it seems that fitness (whether individual or group) is not really an appropriate currency. Of course it is true that the overarching currency of natural selection is fitness-but for that reason fitness is too coarse-grained a tool for thinking about the ways in which particular behaviour patterns help a creature fit into its ecological niche. Short-term criteria are far more helpful than long-term criteria in that respect, precisely because they will not produce the empty conclusion that everything comes out as level-0 rational. It may well be the case that a particular behaviour pattern is rational in the long-term sense of having been the most fitness-promoting of the alternatives available for natural selection to choose between without being rational in the short-term sense. That is to say, it may seem a suboptimal strategy according to all the plausible short-term criteria. There is nothing surprising about this. Let us suppose that there is an optimality threshold for each of the different aspects of an animal's existence-foraging, mate selection, nest-building, rearing of young, food avoidance, and so forth.
The fitness of animals of a particular species may well depend so crucially upon striking the right balance between the various different activities in which they engage that performance above the optimality threshold on one dimension could not be achieved without compromising the balance. So, for example, performance above the optimality threshold on foraging might only be possible at the price of performance on predator avoidance too far below the threshold to be sustainable. This might well be a case in which, assuming that we were interested solely in short-term criteria of level-0 rationality, none of a creature's specific behaviour patterns would qualify as level-0 rational. It is more likely, of course, that long-term fitness would be served by a combination of behaviour patterns some of which fell below the optimality threshold while others were safely above it. The discussion so far has concentrated on the idea that level-0 rationality is to be calculated in terms of the maximization of some designated currency, such as energy procurement rate or predator avoidance. But this is by no means the only way of understanding level-0 rationality. Level-0 rationality is not always a matter of optimization or maximization. There are good
examples of level-0 rationality in the signalling strategies that are widespread in the animal kingdom, and in particular in what might be termed two-way information transfers. These are informational transactions in which information is transferred in two directions (as opposed to one-way information transfers such as mating displays). We find such transactions most typically in situations where animals are in conflict over food, prospective mates, or breeding sites. Rather than resort to the direct use of force to resolve the conflict, many animals employ threat-display signals to come to a non-violent consensus as to which is the stronger. Consider, for example, the roaring contests of red deer stags (Clutton-Brock and Albon 1979). During the rutting season red deer stags compete with each other for the control of groups of females. Actually fighting each other is risky and exhausting for both winner and loser. So fighting contests are frequently replaced by roaring contests, where the winner is the loudest and most sustained roarer and the loser backs down. A male's capacity for roaring is a good index of his strength and consequently of how he would have fought had he been called upon to do so (partly because roaring uses the thoracic muscles also used in fighting). Information is transferred in a probabilistic sense. The male who roars loudest will have a higher probability both of being able to defend the group of females and of defeating his opponent in an actual physical contest than the male who loses the roaring contest. The fact that the loudest roarer stands his ground while the other backs down indicates that they each act upon the information transmitted. One feature of two-way information transfers is that each sender can modify its signal in the light of the signal it receives from the other participant in the exchange.
In red deer roaring contests, for example, if the roaring rates are more or less similar and fail to identify a 'winner', it is usual for the contest to shift to a second stage in which the deer perform a parallel walk that seems to provide information about body height and antler size (Clutton-Brock and Albon 1979). If that also turns out equal, then the deer lock antlers and proceed to a pushing contest. Conflict resolution is not, of course, the only sphere in which we find two-way information transfers. Mating displays and territorial signalling also provide plenty of examples-examples in which, unlike many instances of conflict resolution, the signalling behaviours of the two participants are not symmetrical. One feature which all such information exchanges have in common is that, unlike one-way information transfers, they lend themselves to being modelled game-theoretically (Lewis 1969; Skyrms 1996). Evolutionary game theory can help make sense of why particular signalling exchanges between sender and recipient should have become stabilized. We can see both participants' signalling behaviour as strategies adopted by particular roles within a population (where a role might be male v. female, young males v. older males, and so forth) which have become stabilized because each is the optimal
response to the other within the range of alternatives made available by a particular evolutionary context. They are both evolutionarily stable strategies (Maynard Smith 1982). To summarize. Level-0 rationality is a type of rationality appropriate to behaviour-types rather than behaviour-tokens. It does not involve any genuine decision-making. The behaviour-types that qualify for level-0 rationality will most often be hard-wired, ranging from simple reflexes such as the eye-blink response to innate releasing mechanisms. They are properly described as rational because there is a normative theory, such as for example some version or other of optimal foraging theory, under which they come out as the best response among the range of alternatives available at that time. As we have seen there are, broadly speaking, two different ways of determining whether or not a particular strategy is a best response. It might qualify in virtue of maximizing some particular currency. Or it might qualify as being part of what game theorists call an equilibrium strategy-that is to say, a strategy such that neither party can benefit by deviating from it. In either case there remains the question of what the currency is in terms of which gains and benefits are being calculated, and as we have seen there are a range of potential currencies, short-term and long-term.
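The equilibrium idea can be made concrete with the textbook Hawk-Dove game; the payoff numbers below are illustrative, not drawn from the red deer data. A strategy S is evolutionarily stable against a mutant T if E(S, S) > E(T, S), or E(S, S) = E(T, S) and E(S, T) > E(T, T).

```python
# A sketch of Maynard Smith's ESS condition using the standard
# Hawk-Dove payoffs: V is the value of the contested resource, C the
# cost of injury. The particular numbers are illustrative assumptions.

V, C = 4.0, 2.0   # resource worth more than the cost of fighting

# payoff[player's strategy][opponent's strategy]
payoff = {
    'hawk': {'hawk': (V - C) / 2, 'dove': V},
    'dove': {'hawk': 0.0,         'dove': V / 2},
}

def E(s, t):
    """Expected payoff to strategy s against strategy t."""
    return payoff[s][t]

def is_ess(s, strategies):
    """Maynard Smith's condition: s resists invasion by every mutant t."""
    return all(
        E(s, s) > E(t, s) or (E(s, s) == E(t, s) and E(s, t) > E(t, t))
        for t in strategies if t != s
    )

# With V > C, always escalating ('hawk') is the unique ESS here:
assert is_ess('hawk', ['hawk', 'dove'])
assert not is_ess('dove', ['hawk', 'dove'])
```

With V < C the pure strategies cease to be stable and only a mixed strategy is an ESS, which is one way of seeing why conventionalized contests such as roaring can stabilize in place of outright fighting.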
LEVEL-1 RATIONALITY
Discussions of non-linguistic rationality, particularly on the part of philosophers, frequently assume that there is no middle ground between what I have termed level-0 rationality and the sophisticated types of rationality that we find in language-users and that are modelled by formal logic, rational choice theory, game theory and so forth. This does not seem right. Level-0 rationality is limited in two ways. First, it does not involve a recognizable process of decision-making. Second, it is not applicable to behaviour-tokens, but only to behaviour-types. We can easily see that there is room in logical space for at least one intermediate conception of rationality. Such an intermediate form of rationality would be one that was subject to one of these limitations but not the other. The abstract possibility is illustrated as follows:

            Applicable to behaviour-tokens    Decision-making
Level-0                   No                        No
Level-1                   Yes                       No
Is this possibility realized? It seems to me that it is. A given tropistic behaviour token cannot be rational because there is no sense in which it is selected from a range of alternatives. Consequently it cannot involve a process of decision-making. But it does not follow from this that any behaviour that is properly
described as having been selected from a range of alternatives must involve a process of decision-making. Let me take a simplified example that I imagine is fairly common in the animal kingdom. Imagine an animal confronted with another potentially threatening animal. The animal has two possible courses of action-Fight or Flee. There is a clear sense in which one of the two courses of action could be more rational than the other. Roughly speaking, it will be in the animal's best interests either to Fight or to Flee. And it seems that in such a situation there need be no process of decision-making. The animal might just 'see' that Fight is the appropriate response. Or it might just 'see' that Flee is appropriate. The theory of affordances developed by J. J. Gibson gives us a way of making sense of this as a form of direct perception (Gibson 1979). Gibson's theory is that perception is not neutral. It is not just a matter of seeing various objects that stand in spatial relations to each other. It involves seeing our own possibilities for action-seeing the possibilities that are 'afforded' by the environment. If this is right, then we can see how a given behaviour might be selected from a range of alternatives in a way that does not involve a process of decision-making and yet that seems to be assessable according to criteria of rationality. Let us call this the sphere of level-1 rationality. The principal difference between level-0 rationality and level-1 rationality is in the location of the contrast space of alternative possible courses of action. As we have remarked, assessments of rationality are only applicable when the relevant course of action can properly be described as having been selected from a range of alternatives.
In level-0 rationality the appropriate contrast space is between different behaviour patterns or tropistic mechanisms that might have been selected by evolution, whereas in level-1 rationality the contrast space is a range of different possible courses of action available to the organism at the relevant time. One would expect there to be a range of possible currencies in terms of which a particular behaviour token might be assessed for level-1 rationality, much like the range of currencies operative for level-0 rationality. Level-1 rationality is subject to the same distinction between short-term and long-term criteria, and within each group there will be different criteria pushing in different directions. One feature of level-1 rationality that marks it out, however, from level-0 rationality is that there seems to be much more scope for long-term criteria of rationality that are non-fitness-based. In the case of level-0 rationality we are considering patterns of behaviour that are most probably instinctive, but the sources of behaviours assessable for level-1 rationality are far more fluid. We can imagine an animal being motivated to act in a way that would reduce its individual lifetime fitness (calculated as the product of survival and fecundity) and yet wishing to characterize that behaviour as level-1 rational. The most obvious case would be one where an animal (a member of a species such as the vervet monkey that has a highly developed system for
alerting conspecifics to the presence of different types of predator) fails to signal the approach of a predator and instead flees the scene. It might well be that fleeing as opposed to warning decreases both individual lifetime fitness (by making other members of the group less likely to cooperate in future) and inclusive fitness (by decreasing that individual's chances of finding a mate), while yet counting as level-1 rational according to other criteria, such as for example the criterion of the individual's long-term survival.4 As with level-0 rationality, there will be two different ways in which the notion of level-1 rationality might apply. A particular behaviour-token might qualify as level-1 rational in virtue of maximizing units of some relevant currency. Or, alternatively, it might qualify in virtue of being an equilibrium strategy in game-theoretical terms-although, unlike the level-0 case, the relevant strategy will be behaving in that particular way in that particular context, rather than a generalized behaviour pattern. Although in practice it will not always be clear whether a given behaviour is to be judged in terms of level-0 or level-1 rationality, one might expect the sort of situations in which issues of level-1 rationality arise to be correlated with greater amenability to game-theoretical analysis, for the following reason. The type of behaviours that it is appropriate to assess for level-0 rationality are highly invariant behaviours, instances of rigid behaviour patterns that one might expect to be repeated whenever the environment is suitably configured. It is when this invariance breaks down that we move to assessing the rationality of behaviour-tokens rather than behaviour-types.
But one would expect the invariance to break down far more frequently in cases involving inter-animal interaction (such as conflict resolution) than in those where the interaction is simply between animal and environment, and it is the former that are susceptible to game-theoretic analyses.

LEVEL-2 RATIONALITY
What is the next level of rationality? Clearly it will be one in which both of our criteria are satisfied. Thus:

            Applicable to behaviour-tokens    Decision-making
Level-0                   No                        No
Level-1                   Yes                       No
Level-2                   Yes                       Yes
But it is not immediately obvious how such a notion of rationality can be applied at the non-linguistic level. As we saw in the context of optimal foraging theory in Theoretical and Practical Rationality it is far from

4 It is known that vervet monkeys are sensitive to whether other individuals in the group are reliable sources of information about predators (Cheney and Seyfarth 1990).
Jose Luis Bermudez
straightforward to see how non-linguistic creatures can properly be described as decision-makers. As we saw earlier, there is no room for an inference-based conception of decision-making, whether inference is taken strictly as a matter of formally characterizable operations defined over syntactic vehicles, or more broadly as a matter of the immediate perception of entailment relations between thoughts. But then how else could decision-making be understood? Unless we can develop an alternative conception of decision-making it looks very much as if we will have to restrict the notion of non-linguistic rationality to level-0 and level-1 rationality. Let us look once again at the requirements. We have already seen how in behaviours assessable in terms of level-1 rationality it is appropriate to describe an animal as behaving in a particular way against a contrast space of alternative possible courses of action. It was natural and convenient to formulate this in terms of Gibson's theory of affordances. So, one might say that an animal perceives a range of different courses of action afforded by the environment and acts on one of these perceived affordances. Why is this not properly described as decision-making? To appreciate why not it is important to make a distinction between two different ways in which different courses of action can be compared. On the one hand, they can be compared simply qua courses of action. That is to say, an animal might compare the action of fighting with the action of fleeing. It is this sort of comparison that is at play in the forms of behaviour assessable for level-1 rationality. The possibility of such comparison requires simply representations of actions. These representations are not very complex. They might (if some form of Gibson's theory is correct) be understood at a purely perceptual level.
It is perfectly possible, and indeed highly likely, that the choice between such action-representations can be made on relatively simple and more-or-less non-cognitive grounds. One way of interpreting the whole basis of instrumental conditioning is as a process of attaching a certain positive valence (through reinforcement) to the representation of a particular action, so that the animal being conditioned is motivated to act upon that action-representation rather than one of the others afforded by the immediate environment. It is for this reason, I think, that the notion of decision-making is best not applied to cases in which courses of action are compared qua courses of action. Such cases do indeed involve a form of choice, or perhaps 'selection' would be a better word. But the choice or selection is not made on the right sort of grounds to qualify as decision-making.5

5. Of course, this is to a certain extent a stipulation as to how words are to be used, but there is some justification for it in our ordinary ways of speaking. If I act completely randomly to resolve a deadlock in a situation where I really cannot see any advantage in acting one way rather than another, it would seem natural to describe this as my selecting a course of action without actually deciding upon it.
Rationality without Language
What then is involved in genuine decision-making? The minimal requirement is that the selection of a particular course of action from the contrast space of alternative possible courses of action should be made on what might be termed consequence-sensitive grounds.6 That is to say, genuine decision-making involves a selection between different possible courses of action that is grounded on an assessment of the likely consequences that those different possible courses of action will have. Deciding is not simply selecting. It is selecting for a reason. There is an important distinction here. In many cases of instrumental conditioning (but not all, as we shall shortly see) it is correct to say that the relevant action is performed because of its consequences. So, for example, what explains the pigeon's pressing the lever is that it will result in delivery of food from the cartridge. It is the association of that behaviour with those consequences that has created the positive valence that leads the pigeon to carry out the action of lever-pressing. But to say that an action is being performed on consequence-sensitive grounds implies far more than its simply being performed because of its consequences. It implies that the agent has made an assessment of those consequences, on the basis of a belief about the outcome that that action is likely to have (and, most likely, a comparison of that outcome with the likely outcomes of other possible courses of action). Decision-making only takes place, in other words, at a level where instrumental beliefs are available. The distinction between simple selection (as in level-1 rationality) and genuine decision-making of the sort that only becomes available with level-2 rationality can be put in terms of the representation of contingencies, the relevant contingency being of course that between an action and an anticipated outcome-situation.
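The distinction can be made vivid with a toy model. The sketch below is purely illustrative and is not anything proposed in the text: all the action names, valences, outcome labels, and scores are invented, and the two functions are loosely modelled on the contrast sometimes drawn between 'model-free' and 'model-based' choice in reinforcement learning. Level-1 selection simply acts on conditioned valences; level-2 decision-making acts on represented action-outcome contingencies assessed against a desire.

```python
# Illustrative sketch only: names and numbers are all hypothetical.

def select(valences):
    """Level-1 selection: act on the highest conditioned valence.

    No outcome is represented anywhere; the valences are simply the
    residue of past reinforcement.
    """
    return max(valences, key=valences.get)

def decide(contingencies, desirability):
    """Level-2 decision-making: act on anticipated outcomes.

    The animal represents an action-outcome contingency for each option
    and chooses the action whose anticipated outcome it most desires.
    """
    return max(contingencies,
               key=lambda action: desirability[contingencies[action]])

# Suppose freezing was heavily reinforced in the past...
valences = {"fight": 0.2, "flee": 0.3, "freeze": 0.7}
# ...but the represented contingencies now make fleeing the best bet.
contingencies = {"fight": "injury", "flee": "safety", "freeze": "detection"}
desirability = {"injury": -1.0, "safety": 1.0, "detection": -0.5}

print(select(valences))                     # "freeze": valence alone
print(decide(contingencies, desirability))  # "flee": consequence-sensitive
```

The point of the sketch is that the same repertoire of actions can yield different choices depending on which process is at work: valence-driven selection, like the pigeon's lever-pressing, is caused by past consequences without representing any future ones.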
It is natural to think that what would motivate an animal to act upon a particular represented contingency is that the outcome-situation would satisfy one of its desires. So, in level-2 rationality we encounter once again a familiar pattern of psychological explanation, that is to say, psychological explanation in terms of a belief-desire pair linked by an instrumental belief about how a desire might be satisfied in a particular context. A further consequence of this is that level-2 rationality can be assessed in a fundamentally different way from the other two levels of rationality that we have considered. Whereas both level-0 and level-1 rationality can only be understood in terms either of maximization of a given currency (such as rate of energy procurement, individual lifetime fitness, and so forth) or of

6. There should be no implication that, for example, ethical deontologists are not advocating a method of decision-making. My comments should be understood as limited to 'decision-making' as it might plausibly be identified in a non-linguistic context. It seems clear, however, that making decisions on deontological grounds is not possible for creatures who are not capable of making decisions on consequence-sensitive grounds.
strategies that are in game-theoretic equilibrium, level-2 rationality admits a fundamentally different type of assessment. Since the crucial element in level-2 rationality is the way in which action is grounded in instrumental beliefs about the outcomes of those actions, it is clear that in an important respect the level-2 rationality of an action will depend upon the 'match' between action and background beliefs. In this sense the level-2 rationality of a particular action will be a function of: (i) the accuracy of the instrumental belief, (ii) the extent to which the action in question is a suitable implementation of the instrumental belief. There is fortunately no need at present to explain how criteria (i) and (ii) can be developed into a full-blown theory of level-2 rationality.7 There is enough on the table already for the central problem to be clear. The principal question is how we should understand instrumental beliefs and the representation of contingencies at the non-linguistic level. It is open to a sceptic to accept the distinction between level-1 and level-2 rationality, but to deny that behaviour amenable to assessment in terms of level-2 rationality is available at the non-linguistic level. How, it might be asked, can contingencies be represented and compared except through inferential processes of the sort that we have seen not to be available to non-linguistic creatures? How can we make any sense of a creature acting on the basis of a comparison of contingencies unless we take it to be choosing the course of action with the highest likely benefits when each course of action's benefits are discounted by their probability? And surely that requires it to be representing the different potential outcomes, the utility it attaches to each of them, and the likelihood with which it estimates that each outcome will occur, as well, of course, as performing the necessary calculations?
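The sceptic's demand, spelled out, is expected-utility maximization: represent each potential outcome, the utility attached to it, and its estimated likelihood, then discount each benefit by its probability and choose the action with the highest total. As a minimal sketch of that calculation (the foraging actions and all the numbers below are invented purely for illustration):

```python
# Hedged sketch of the calculation the sceptic takes to be required.
# Every action name and number here is hypothetical.

def expected_utility(prospects):
    """Sum each outcome's utility discounted by its probability."""
    return sum(p * u for p, u in prospects)

# Hypothetical contingencies for a forager: lists of (probability, utility).
actions = {
    "dip_for_ants":      [(0.8, 5.0), (0.2, 0.0)],    # safe, modest payoff
    "raid_termite_nest": [(0.3, 20.0), (0.7, -2.0)],  # risky, large payoff
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
# dip_for_ants: 0.8*5.0 = 4.0; raid_termite_nest: 0.3*20.0 + 0.7*(-2.0) = 4.6
print(best)   # "raid_termite_nest"
```

The sceptic's point is precisely that it is hard to see how a non-linguistic creature could carry out anything like this explicit bookkeeping of outcomes, utilities, and probabilities.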
I propose to make a start on this question by asking how we might understand the distinction between level-1 rationality and level-2 rationality in operational terms. What evidence might there be that a creature is representing the consequences of two or more different courses of action, rather than the actions themselves? Once we have a clear set of operational criteria in

7. It is worth pointing out, though, that the normative criteria of level-2 rationality incorporate elements of both internal and external rationality, in the sense introduced earlier in 'Theoretical and Practical Rationality'. The requirement that the relevant instrumental belief be accurate is clearly an external requirement. It is quite possible that an action will fail to qualify as rational in this external sense, even though it clearly satisfies the second, internal criterion. Arguably this would be enough for it to qualify as rational in our everyday sense, and, as we have already seen, internal rationality is what is important for psychological explanation. In any case it is also possible to apply the normative criteria appropriate for level-0 and level-1 rationality to level-2 behaviours. We can assess the types of instrumental reasoning involved in, say, tool construction from the viewpoint of expected utility theory. This will be discussed further below.
view it will be easier (in the next section) to offer a constitutive account of the forms of reasoning implicated in level-2 rationality. It is easiest to see how we might detect when a creature is not representing the contingency between action and consequence, that is to say, how we might detect when criteria of level-1 rationality should be applied rather than criteria of level-2 rationality. For example, if behaviour is driven by the representation of an instrumental contingency, then one would expect the behaviour to cease in the face of repeated evidence that the contingency no longer holds. If the animal persists in the response, then we have prima-facie evidence that the contingency is not being represented. A classic experiment by Hershberger (1986) set up a graphic illustration of such a reversed contingency for chicks. In the experimental set-up their food source retreated from them at twice the rate they walked towards it, but advanced towards them at twice the rate they walked away from it. Even after 100 trials the chicks only succeeded in obtaining the food 30 per cent of the time, clearly indicating that they were failing to represent the two relevant contingencies (that walking backwards causes the food to advance and walking forwards causes the food to retreat). It is a minimal operational criterion, therefore, for actions being grounded on representations of contingencies that the action should not be persisted in once the animal has been confronted with evidence that the contingency ceases to hold. It is relatively easy to see what would count as evidence that this operational criterion is not being met. But what evidence might there be for thinking that it is being met? Some suggestive discoveries have been made by Rescorla and Skucy.
They found that rats that have been trained to press a lever for food will cease to press the lever when the schedule is changed so that the food is delivered whether they press the lever or not (Rescorla and Skucy 1969). This seems to involve recognition that the contingency between lever-pressing and food delivery no longer holds. The rats had initially been pressing the lever in virtue of an instrumental belief that lever-pressing would result in the appearance of food. When the correlation tracked by the instrumental belief ceased to hold the associated behaviour also ceased. Perhaps the most obvious source of relatively clear-cut evidence of level-2 rationality in non-linguistic creatures is tool manufacture and tool-using behaviour. The means-end domain of tool construction and use is deeply tied up with the representation of contingencies. Of course, not all such behaviour is evidence for level-2 rationality. Many types of tool construction are relatively hard-wired and, although capable of refinement and improvement through learning, seem best evaluated according to the criteria associated with level-0 or level-1 rationality. The construction of dams by beavers is a case in point, much closer to nest-building than to deliberate tool-based manipulation of the environment. A good example, though, of what seems to be the genuine representation of contingencies comes with
the way in which chimpanzees in the wild manufacture tools for particular purposes (Byrne 1995, 96-7). Wild chimpanzees make wands for dipping into ant swarms by stripping the side leaves and leafy stem from a stick several feet long. The wands constructed for dipping into termite nests, on the other hand, are made from vines or more flexible twigs and are considerably shorter. They also have a bitten end, unlike the ant wands. It is sometimes remarked that such tool construction is purely innate. Gould and Gould (1998, 55) suggest, for example, that there is no genuine thought involved in what they call termite-fishing because even chimpanzees born in captivity are obsessed by putting long thin things into holes. This neglects, however, the specialized nature of the different tools constructed. It is not just a matter of dipping a long thin stick into a narrow hole: the long thin stick needs to be constructed differently depending on what sort of hole it is going into. Nor, moreover, does the wand construction seem to be a form of trial-and-error learning. Byrne notes (1995, 97) that the wands are often constructed some time in advance and a considerable distance away from the place where they are going to be used. Perhaps our best source of information for tool-manufacture comes from archaeological studies of the tools constructed by pre-linguistic hominids. In human evolution the construction of complex tools long predated the emergence of language (Gibson and Ingold 1993). The fossil record suggests that handaxes, the characteristic tool of early Homo habilis, first appeared about 1.4 million years ago, long before the evolution of language which (even on the most optimistic scenario) could not have occurred before the speciation of archaic Homo sapiens about one million years later. Considerable technical skill is required to make a handaxe. Since the handaxe is symmetrical the flakes need to be removed from alternate sides.
Each nodule is different, with different stresses and fracture lines, so the toolmaker needs to keep in mind a specific goal and adjust his blows accordingly. The force of the blows needs to be precisely calculated. The entire process is highly complicated and dependent upon constant feedback and revision. A highly developed form of instrumental rationality is at work here, feeding into action. The next major event in the early evolution of archaic Homo sapiens was the emergence of the Levallois flake, a characteristic tool that only emerged in the Middle Paleolithic period (Mithen 1996). Figure 10.1 indicates the process of making a Levallois flake. These Levallois flakes were then incorporated into more complex tools. Spears were made, for example, by hafting flakes onto wooden shafts, a process that involves the extraction of resin, the production of lashing materials, and so forth. It is hard to see how such complex forms of tool construction, including the combination of tools to make further tools, could be possible without explicit representation of contingencies. Here, it seems, we are well within the realm of level-2 rationality.
Figure 10.1. The multiple intelligences of the early human mind. To make a Levallois point one must remove flakes from the surface of a core to leave a series of ridges on a domed surface (1-3) which will then guide the removal of the final pointed flake. A striking platform is prepared perpendicular to the domed surface of the core (4) and the Levallois point removed by a single blow (5).
NON-LINGUISTIC RATIONALITY AND INFERENCE
At the beginning of this paper I argued that it would be futile to try to extend to non-linguistic creatures what I termed the inference-based conception of practical reasoning. According to Davidson and many others, the rationalizing connection between the beliefs and desires cited in a psychological explanation and the action they explain is derived from the possibility of constructing an argument with those beliefs and desires as premisses and a description of the action as conclusion. Of course, it is not just the abstract possibility that such an argument could be constructed that is important. It is essential that the agent should himself be capable of constructing the argument, although he need not actually have done so on the occasion in question. But, as I argued, we have no understanding of how non-linguistic creatures can construct arguments, or assess validity in any other way. So the inference-based conception of practical reasoning has to be abandoned. What we have seen in the last few sections is that this does not mean that the notion of rationality cannot be applied at the non-linguistic level. Quite the contrary. There are three distinct and useful senses in which the behaviour of non-linguistic creatures can be properly assessed for rationality. The behaviour of non-linguistic creatures can be assessed according to three different sets of norms: those of level-0 rationality, level-1 rationality, or level-2 rationality. Each set of norms is appropriate for different types of behaviour.
Behaviours that are instances of innate releasing mechanisms and other fixed and invariant behaviour patterns are best assessed according to the criteria of level-0 rationality. The assessment of rationality attaches in this case to the type under which the behaviour falls, rather than to the token behaviour instantiated on a particular occasion. When it makes sense to suppose that the action performed was one of a range of possible courses of action open to the creature in question it becomes assessable according to the norms of level-1 rationality. When the criteria for genuine decision-making are met then the norms of level-2 rationality come into play. Two questions arise at this point. First, one might ask which of the three levels of rationality I have identified is appropriate for the project of providing psychological explanations of the behaviour of non-linguistic creatures. Are psychological explanations available all the way down the ladder of rationality, or is there a privileged level (or levels) of rationality below which psychological explanation is not possible? Second, one might ask what the relation is between the way in which rationality is assessed at level-2 and the way in which it is assessed according to the normative theories of rationality that might plausibly be taken as normatively binding upon language-using agents. How, if at all, do the normative criteria of rationality that we apply in our everyday psychological explanations and predictions differ from those that we might apply to non-linguistic creatures? I will take these questions in order. It is obvious, more or less as a matter of definition, that there is no scope for psychological explanation of behaviours for which considerations of level-0 rationality are appropriate. It is equally obvious that psychological explanations can be appropriate when we are dealing with behaviours assessable for level-2 rationality.
The principal question is whether we can have psychological explanations in situations where the appropriate criteria of rationality are those of level-1 rationality. There is a straightforward line of argument setting out to establish that level-2 rationality is required for psychological explanation. The argument runs as follows. Giving a psychological explanation is saying that an animal has acted in a certain way because of its beliefs and its desires. More precisely, it is to say that the combination of its beliefs and desires explains its actions. An animal has certain beliefs about its environment and also certain desires. But how do these come together to bring about action? Only through the representation of contingencies between actions and their outcomes. Only when an animal forms the belief that a certain course of action will lead to the satisfaction of a desire. But this is an instrumental belief about the consequences that an action is likely to have, and hence the behaviour to which it gives rise falls squarely within the domain of level-2 rationality. The problem with this line of argument is that the fact that a certain course of action will bring about the satisfaction of a desire may be immediately
perceptually manifest, as when, for example, a food reward is in plain view. There is no need always to formulate an instrumental belief. Any psychological explanation will always have an instrumental component, but that component need not take the form of an instrumental belief. In fact, instrumental beliefs really only enter the picture when two conditions are met. The first is that the goal of the action should not be immediately perceptible and the second is that there should be no immediately perceptible instrumental properties (that is to say, the creature should not be capable of seeing that a certain course of action will lead to a desired result). The fact, however, that one or both of these conditions is not met does not entail that we are dealing with an action that is explicable in non-psychological terms. The basic requirement for psychological explanation is negative. An action requires psychological explanation just if its occurrence could not have been predicted solely from knowledge of the environmental parameters and sensory input. That is to say, the need for psychological explanation arises only in situations where the connections between sensory input and behavioural output cannot be plotted in a law-like manner. Clearly, however, it is perfectly possible for a situation to qualify even if the goal of the action is immediately perceptible, or, for that matter, if the distal environment contains immediately perceptible instrumental properties. In such situations the instrumental component of the psychological explanation will most likely be part of the content of perception. It should not be thought, however, that instrumental beliefs (and with them level-2 rationality) come into play only when there is no immediately perceptible goal. It is possible for a goal to be directly in view and yet for it to be far from apparent how it is to be gained.
In such a situation one would expect the instrumental component to take the form of an instrumental belief. Some of the classic examples of instrumental reasoning in animals fall into this category. Kohler's chimpanzees could clearly see the bunch of bananas that was hung out of their reach (Kohler 1925). They just could not immediately see how to reach the bunch, until they formed an appropriate instrumental belief (which, depending on the chimpanzee, was that stacking boxes one on top of the other would bring the bananas within reach, or that two sticks could be joined together to knock the bananas down, or that standing on a box would bring the bananas within reach of the stick). Another, less anecdotal example comes from Bernd Heinrich's experiments with hand-reared ravens (Heinrich 2000). Pieces of meat were hung by string from their perches, too far for them to reach from the perch and too securely tied to be accessible in flight. Four out of the five ravens eventually worked out different ways of pulling up the string and obtaining the meat. As with the chimpanzee example, the goal was clearly in view. It is tempting to think that the difference was made by an instrumental belief about how that goal might be obtained.
What is important, therefore, for the applicability of psychological explanation is that there be an instrumental component in the psychological states that give rise to the particular action. This instrumental component can be part of the content of perception (in what I have termed level-1 rationality) or it can take the form of a separate instrumental belief (in level-2 rationality). In order to appreciate the importance of this instrumental component it is helpful to return to the distinction between internal and external rationality briefly introduced earlier (see 'Theoretical and Practical Rationality'). An action is internally rational when it makes sense relative to an agent's beliefs and desires, while an action is externally rational when it makes sense relative to a given set of environmental parameters that include the agent's desires but not his beliefs. From the viewpoint of psychological explanation it is of course the first of these that is paramount. Agents often act on the basis of poorly supported inductive generalizations and inaccurate assessments of the situation. What is important in explaining their behaviour is how those generalizations and assessments get translated into action. The question of how they might have acted optimally in that situation is not relevant. But assessments of internal rationality only make sense when there is an instrumental component: only with the instrumental component in play can one properly evaluate the appropriateness of an action relative to the agent's beliefs. Psychological explanation, therefore, is only applicable when we are dealing with behaviours reflecting either level-2 rationality or an instance of level-1 rationality incorporating an instrumental component. Moving now to the second question, how are we to understand the relation between level-1 and level-2 rationality, on the one hand, and the type of rationality that we think of as governing everyday folk psychological explanation, on the other?
There are two sub-questions to be separated out here. The first sub-question concerns the norms that govern ascriptions of rationality at the different levels. The second sub-question concerns the reasoning that generates the relevant behaviours. How should we compare the reasoning implicated in behaviours assessable for level-1, level-2, and folk psychological rationality? The answer to the first question is relatively straightforward. There is little difference in the norms that govern and guide ascriptions of rationality at all three levels. As we saw, level-1 rationality is governed by norms of maximizing expected amounts of a particular currency, as well as by the norm of maintaining inter-animal strategies that are in game-theoretic equilibrium. There is nothing here alien to the norms of folk psychological rationality, although of course one would expect the range of available currencies to be much greater in folk psychological rationality, as well as scope for issues such as incommensurability that it is hard to imagine arising at the non-linguistic level. The difference is one of degree rather than kind. Something similar holds for the norms governing level-2
rationality. Assessments of level-2 rationality, as we observed in the previous section, are determined by two factors, the first being the accuracy of the appropriate instrumental belief and the second being the appropriateness of the action relative to that instrumental belief. It is hard to see how the norms governing folk psychological rationality could be any different from these. The crucial differences come, not at the level of the norms governing the ascriptions of rationality, but rather at the level of the reasoning that leads up to the action. We can illustrate this by comparing level-2 rationality with the inference-based conception of practical rationality introduced through the passage from Davidson quoted at the beginning of the paper. As just observed, the overarching norms of rationality governing ascriptions of rationality are the same at the two levels, namely, that the instrumental belief should be accurate and that the action should be appropriate relative to that instrumental belief. The difference comes in the way in which 'appropriateness' is calculated in the two cases. In essence, the inference-based conception of practical rationality understands the appropriateness of an action relative to an instrumental belief in terms of the constructibility of a valid argument from the instrumental belief (and associated beliefs and desires) to a description of the action. As we saw in the first section of the paper, no such understanding of appropriateness is available at the non-linguistic level, because the notion of formal inference is not applicable at the non-linguistic level. So how should appropriateness be understood at the non-linguistic level?
It is tempting to put the point in terms of consistency, so that an appropriate action would be one that was consistent with the instrumental belief, and understanding an action as appropriate would be a matter of understanding it as consistent with the relevant beliefs, perceptions, and background desires. But this would be unsatisfactory for two reasons. The first is that there are all sorts of ways of acting consistent with an instrumental belief that do not involve acting upon it. Not all of these would properly be described as appropriate. Secondly, the notion of consistency can be viewed in two ways, neither of which is applicable at the non-linguistic level. Consistency might be viewed as primarily an inferential matter, so that a set of beliefs, desires, and intentions is consistent just if its members do not jointly entail a contradiction. But this sort of consistency-based understanding of the appropriateness of a course of action to a set of beliefs and desires would be little improvement over an inference-based conception, which we have already seen to be inapplicable at the non-linguistic level. Things are no better if consistency is viewed in semantic terms, so that a set of beliefs, desires, and intentions is consistent just if they can be true/satisfied together. Understanding consistency is a matter of understanding the consistency of a set of sentences, and hence requires linguistic vehicles.
At the non-linguistic level the appropriateness of a course of action relative to a set of beliefs and desires, including an instrumental belief, can really only be understood as a matter of a creature straightforwardly acting upon a given instrumental belief in the light of its other beliefs and desires. It is to be assumed, in other words, that an instrumental belief will be immediately acted upon, unless there are significant countervailing considerations, perhaps a potential threat from a predator, or the overwhelming energy cost of carrying out the relevant action. At the non-linguistic level, issues of rationality get a grip, therefore, in two ways. The first way is through the accuracy of the instrumental belief: the rationality of the contemplated course of action as a means of obtaining the required goal. The second way is through the translation of the instrumental belief into action, relative to the creature's other beliefs, desires, and needs. As far as reasoning is concerned, level-2 rationality depends upon what one might term a straightforward practical syllogism, that is to say, an immediate translation of an instrumental belief into action. The answer to the second question, therefore, is that there are significant differences between level-2 rationality as applied to non-linguistic creatures and the forms of rationality that we deploy in explaining and predicting the behaviour of other language-using creatures. These are differences, however, in the types of reasoning available at the different levels, rather than in the overarching norms that govern ascriptions of rationality. The overarching norms remain constant. What varies is the complexity of the decision-making processes that lead up to the action whose degree of rationality is being assessed. The problem with which we began was the following.
Recent research in the disciplines of cognitive ethology, developmental psychology, and cognitive archaeology involves treating certain non-linguistic creatures as thinking creatures, where this involves treating their behaviour as susceptible to (and only susceptible to) explanations of a psychological type. Since psychological explanations are, in essence, rationalizing explanations, it will only be possible to extend psychological explanations to non-linguistic creatures if we have a model of rationality and rational decision-making that can be applied to non-linguistic creatures. In this paper I have explored three different ways in which the notion of rationality can be applied to non-linguistic creatures. The behaviour of non-linguistic creatures can be assessed according to three different sets of norms: those of level-0 rationality, level-1 rationality, or level-2 rationality. Each set of norms is appropriate for different types of behaviour. Of these three sets of norms only the second and third can serve the purposes of psychological explanation, but they do so in a way that shows how, at the non-linguistic level, there exist analogues for the forms of norm-governed reasoning that are inextricably linked with psychological explanation at the linguistic level.
Rationality without Language

REFERENCES

Allen, C., and Bekoff, M. (1997), Species of Mind (Cambridge, Mass.: MIT Press).
Bekoff, M., Allen, C., and Burghardt, G. M. (2002), The Cognitive Animal (Cambridge, Mass.: MIT Press).
Bermudez, J. L. (1998), The Paradox of Self-Consciousness (Cambridge, Mass.: MIT Press).
-- (2001), 'Normativity and rationality in delusional psychiatric disorders', Mind and Language 16, 457-93.
-- (forthcoming), Thinking Without Words.
Byrne, R. (1995), The Thinking Ape (Oxford: Oxford University Press).
Cheney, D. L., and Seyfarth, R. M. (1990), How Monkeys See the World (Chicago: University of Chicago Press).
Clutton-Brock, T. H., and Albon, S. D. (1979), 'The roaring of red deer and the evolution of honest advertisement', Behaviour 69, 145-70.
Cowie, R. (1977), 'Optimal foraging in great tits, Parus major', Nature 268, 137-9.
Davidson, D. (1963), 'Actions, reasons and causes', Journal of Philosophy 60, 685-700.
-- (1978), 'Intending', in Y. Yovel (ed.), Philosophy of History and Action (Dordrecht: Reidel); reprinted in Davidson (1980).
-- (1980), Essays on Actions and Events (Oxford: Oxford University Press).
Dawkins, M. S. (1986), Unravelling Animal Behaviour (Harlow: Longman).
Donald, M. (1991), Origins of the Modern Mind (Cambridge, Mass.: Harvard University Press).
Gibson, J. J. (1979), The Ecological Approach to Visual Perception (Boston: Houghton Mifflin).
Gibson, K. R., and Ingold, T. (1993), Tools, Language and Cognition in Human Evolution (Cambridge: Cambridge University Press).
Gopnik, A., and Meltzoff, A. (1997), Thoughts, Theories and Things (Cambridge, Mass.: MIT Press).
Gould, J. L., and Gould, C. J. (1994), The Animal Mind (New York: Scientific American Library).
-- (1998), 'Reasoning in animals', Scientific American (special issue on 'Exploring Intelligence') 9, 52-9.
Gould, S. J., and Lewontin, R.
(1979), 'The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme', Proceedings of the Royal Society of London B 205, 581-98.
Heinrich, B. (2000), 'Testing insight in ravens', in C. Heyes and L. Huber (eds.), The Evolution of Cognition (Cambridge, Mass.: MIT Press).
Hershberger, W. A. (1986), 'An approach through the looking-glass', Animal Learning and Behavior 14, 443-51.
Hirschfeld, L. A., and Gelman, S. A. (1994), Mapping the Mind: Domain-Specificity in Cognition and Culture (Cambridge: Cambridge University Press).
Kacelnik, A. (1984), 'Central place foraging in starlings (Sturnus vulgaris)', Journal of Animal Ecology 53, 283-99.
Kastak, C. R., Schusterman, R. J., and Kastak, D. (2001), 'Equivalence classification by California sea lions using class-specific reinforcers', Journal of the Experimental Analysis of Behaviour 76, 131-58.
Kohler, W. (1925), The Mentality of Apes (New York: Harcourt Brace).
Krebs, J. R., and Davies, N. B. (1991), Behavioural Ecology: An Evolutionary Approach (Oxford: Blackwell).
-- and Kacelnik, A. (1991), 'Decision-making', in Krebs and Davies (1991).
Lewis, D. (1969), Convention (Cambridge, Mass.: Harvard University Press).
Lowe, E. J. (1993), 'Rationality, deduction and mental models', in Manktelow and Over (1993).
Manktelow, K. I., and Over, D. E. (eds.) (1993), Rationality: Psychological and Philosophical Perspectives (London: Routledge).
Maynard Smith, J. (1982), Evolution and the Theory of Games (Cambridge: Cambridge University Press).
Mellars, P., and Gibson, K. (1996), Modelling Early Human Minds (Cambridge: McDonald Institute Monographs).
Millikan, R. (1984), Language, Thought and Other Biological Categories (Cambridge, Mass.: MIT Press).
-- (1986), 'Thought without laws', in White Queen Psychology and Other Essays (Cambridge, Mass.: MIT Press).
Mithen, S. (1996), The Prehistory of the Mind (London: Thames & Hudson).
Møller, A. P. (1988), 'False alarm calls as a means of resource usurpation in the great tit Parus major', Ethology 79, 25-30.
Premack, D., and Premack, A. J. (1983), The Mind of an Ape (Hillsdale, NJ: Erlbaum).
Rescorla, R. A., and Skucy, J. C. (1969), 'Effect of response-independent reinforcers during extinction', Journal of Comparative and Physiological Psychology 67, 381-9.
Ristau, C. A. (1991), 'Aspects of the cognitive ethology of an injury-feigning bird, the piping plover', in C. A. Ristau (ed.), Cognitive Ethology: The Minds of Other Animals (Hillsdale, NJ: Erlbaum).
Rizley, R. C., and Rescorla, R. A.
(1972), 'Associations in second-order conditioning and sensory preconditioning', Journal of Comparative and Physiological Psychology 81, 1-11.
Schusterman, R. J., Kastak, C. R., and Kastak, D. (2002), 'The cognitive sea lion: meaning and memory in the lab and in nature', in Bekoff, Allen, and Burghardt (2002).
Skyrms, B. (1996), Evolution of the Social Contract (Cambridge: Cambridge University Press).
Spelke, E. S. (1990), 'Principles of object perception', Cognitive Science 14, 29-56.
Stein, E. (1996), Without Good Reason: The Rationality Debate in Cognitive Science (Oxford: Oxford University Press).
Stephens, D. W., and Krebs, J. R. (1986), Foraging Theory (Princeton, NJ: Princeton University Press).
Vaughan, W., Jr. (1988), 'Formation of equivalence sets in pigeons', Journal of Experimental Psychology: Animal Behaviour Processes 14, 36-42.
II
Normative Explanations: Invoking Rationality to Explain Happenings

ALLAN GIBBARD
The natural world is the only world. Reason, then, if any such thing exists, must be part of the natural world. To be rational, we might add, is to accord with reason, and so rationality, if any such thing exists, must be a natural property. To call an act or a way of thinking rational is to describe it, somehow, in naturalistic terms, in terms that can fit into empirical science. Or so we might think, but how this could be proves elusive. Attempts to formulate naturalistically what 'rational' means appear to fail. Does a world with reason in it, then, contain more than naturalistic philosophy can admit? Naturalism leads in itself, this paper argues, to non-naturalistic concepts, while I still insist stoutly that we are natural beings, all of whose properties are entirely natural. My topics, then, will be concepts and properties: what the concept of rationality is, and how it picks out a natural property. I will not, for the most part, be making claims about what rationality consists in, or detailed claims about the place of rationality in nature. Instead, I'll look to a concept, the concept of being rational. I'll ask what's at issue when people disagree about what's rational, when they disagree about what property constitutes being rational. The claims of this paper, cryptically put, will be these: Some concepts of rationality, concepts we very much need, are not naturalistic; they are not causal-explanatory concepts that strictly fit into science. The prime use of these concepts is not in explaining things that happen, but in thinking how to think and how to act. Nevertheless, such a concept of rationality will act, in many ways, remarkably like a naturalistic concept. In particular, a claim of constitution obtains: there is a broadly natural property that constitutes being rational. So much for the cryptic abstract.
The position I sketch appropriates many of the resources of a classic anti-descriptivist strain in ethical theory, but it is going to end up sounding, in many ways, like the non-naturalism of G. E. Moore.
I'll begin with hopes for naturalistic studies of humanity; my qualms about rationality as a topic for scientific study don't stem from any view that science, if it succeeds, must picture humanity as bizarre and unfamiliar. After that, I'll sketch a position on what it means to call something rational. The sketch isn't meant as a full presentation, and I'll rehearse arguments for this position and against alternatives only cursorily. My aim is to set the stage for a question. Both in everyday life and in the social sciences, we explain human events and their consequences in terms of rationality and irrationality. Irrational exuberance, perhaps, causes market bubbles and, later, crashes. Now one might think that the kind of 'irrealism' I sketch commits me to laughing off such explanations as defective in their very nature. My aim is to explain why, on the kind of position I sketch, such explanations make sense, whether or not they are correct or plausible.
HUMANITY IN NATURE
Some truisms, I hope: We humans, we are learning to think, are a part of the natural universe. Quite a special part, though: as infants we start our lives as the upshot of thousands of millions of years of natural selection. The same of course goes for a wolf cub or a robin egg, but with humans the social interplay of phenotypes in childhood and later is especially intricate. We, like our ancestors, grow up surrounded by other people, in a complex web of human relations. Our brains are adapted to such a human web, though not, in many respects, to the kinds of societies we now live in. Incipient research programmes aim to make tractable some of the rough patterns of human existence. Thomas Schelling speaks of an 'ecology of micromotives';1 Dan Sperber, of an 'epidemiology of representations'.2 Robert Boyd and Peter Richerson have modelled 'runaway' cultural processes that can result from the interplay of well-designed reproducers.3 I'll mention chiefly, though, the programme of 'evolutionary psychology' propounded by such workers as Donald Symons, Leda Cosmides, and John Tooby.4 Think of natural selection, they propose, as shaping various 'mental modules'. From the gene's figurative point of view, each module is designed to cope with some specific, recurrent problem that our ancestors' genes faced in getting themselves reproduced. The modules often won't produce rigid responses; in general, they realize contingency plans, plans for responding to a history of cues in the environment. This makes for much of the glorious flexibility with which we humans respond to our environments: different environmental cues, the genes have 'found', indicate that different
1 Schelling (1978).
2 Sperber (1996).
3 Boyd and Richerson (1985).
4 Symons (1987); Tooby and Cosmides (1992).
ways of acting will promote the reproduction of one's genes. From the genes' perspective, so to speak, much of our flexible response to the world realizes their plans, more or less, whereas the rest of it is fortuitous: a matter of design limitations and imperfect tinkering to make old plans serve new purposes, or of previously adequate modules encountering evolutionarily novel situations, situations for which they weren't designed.5 A picture like this suggests both why we humans would look to some degree as classical decision theorists depict us, with desires and beliefs, preferences and gradations of certainty and doubt, and why decision theory might get us severely wrong. Decision theory and game theory apply pretty closely to the as-if 'purposes' that natural selection had in shaping us, with an individual's genetic reproduction as the as-if goal of the design. As for human purposes in any literal sense, however, they are a matter of the workings of cobbled-together modules. The modules will to some degree realize decision-theoretic patterns, but only roughly, with systematic departures.
NATURALISTIC CONCEPTS OF BEING RATIONAL
Rationality, it now seems easy to say, belongs in this naturalistic framework. An adequate, evolutionarily informed story of us humans must surely include a story of human rationality. Already we know much about human rationality and what the selection pressures towards rationality must have been, or we can guess well. We know something of how genetic plans for producing more rational phenotypes must have promoted reproduction, why in ancestral populations the more rational would out-reproduce the less rational. The more rational killed more game; they found plants with a better balance of nutrients and toxins; they won more battles; they formed more rewarding alliances and socially outsmarted their less rational rivals. These lines of thought about rationality and human genetic evolution include great strands of truth, but they should also prompt misgivings. The rational inherited the earth, perhaps, but this makes it sound as if rationality were some uniform kind of stuff, which our forebears had more of or less of. Sad and wonderful tales of victims of brain damage should disabuse us of this; I draw my examples from Antonio Damasio.6 William Douglas, Justice of the United States Supreme Court, suffered a stroke which left half his body paralysed, but insisted that there was nothing wrong. For brief periods, he could be brought to admit that he couldn't stand unaided, but the persuasion never lasted long. Now clearly, I agree, he was being irrational. In some ways, though, his thinking was normal; it was terrible when
5 See Dawkins (1982, 30-54) on 'constraints on perfection'.
6 Damasio (1994, 65) on Douglas, and 34-51 and elsewhere on prefrontal lobe damage.
it came to his own health. A specific brain module had been damaged or killed, so that he lacked all sense of malaise, and the consequent illusion of health overwhelmed the powers of reasoning of the large part of his brain that was intact. A second example is a tumour victim who had lost much of his prefrontal brain lobe. He scored well on IQ tests, but couldn't hold a job, keep a marriage, or decide on a restaurant. Clearly he had lost important aspects of his rationality, but putting it that way wouldn't tell us much. What this man had lost specifically, it seems, was his capacity to be viscerally disturbed by disturbing thoughts. Still, if rationality isn't some uniform stuff, if it consists in a cluster of abilities, we can expect that this cluster has something in common. What do the lacks I have been recounting share that justifies our calling them defects in rationality? Both stories I have reported are stories of brain injury, of ways in which genetic plans for the working of a human brain were stymied. This isn't, though, what makes them deficits in rationality as such. Indeed rationality, I'll be claiming, is not a concept driven by its role in causal explanations. In this, it is unlike concepts, say, of selection pressures, mental modules, incest avoidance, or senescence. Of course, I cannot go through all naturalistic candidates for what the concept of being rational might be, but I'll quickly dismiss a couple. What marks some mental propensities as aspects of rationality, and others as forms of irrationality? Accuracy, we might say: Justice Douglas was irrational, after his stroke, in that he thought he was in good health, whereas he wasn't. Not all false belief, though, stems from irrationality; a cancer victim, before the first symptoms appear, might quite rationally be as confident as anyone else that there is nothing wrong. Douglas, the problem is, was impervious to the most blatant of evidence.
A paranoid too is irrational, but highly sensitive to some kinds of evidence, to evidence for plots and conspiracies; and for evidence that points the other way, he may have an elaborate story of why it is misleading. What constitutes, then, a rational sensitivity to evidence, and what an irrational hypersensitivity to some kinds of evidence? The smug and cocky victim of prefrontal lobe damage isn't rational, we must say, but false belief is not his problem. He just lacks gut feelings for peril; he can discuss the risks as well as anyone else, but not feel them in a way that prompts a steady course of action. The upshot is that he acts rashly at times, whereas at other times he dithers over small matters, perhaps because he can't be annoyed with his own dithering. A second kind of try would be this: All these defective tempers of mind, we could note, would be bad for one's reproductive prospects. Perhaps this is what makes them all count as kinds of irrationality. Of course not everything that's bad for one's reproductive prospects could count as lack of rationality: a deformed face or a limp might hinder reproduction, but isn't, directly at least, a matter of rationality. Irrationality, we can say, is a defect in reasoning, in reasoning how things are or reasoning what to do. Still, what
Normative Explanations makes reasoning defective, we could try saying, is that it's the sort of reasoning that detracts from one's reproductive prospects. Saying this would of course require us to identify reasoning, and no neat philosophical account of reasoning is in the offing. Suppose, though, for purposes of inquiry, that reasoning, coming to conclusions, is a syndrome that a good, naturalistic account of humanity would have to recognizethat science could distinguish reasoning to a conclusion from what happened with Saul on the road to Damascus. The paranoid reasons but he reasons badly, giving too much credence to threats and conspiracies and too little to good faith and open avowals. He thus misses out on opportunities. for cooperation; he trades off opportunity against their dangers, overspending on protection. All this, in typical situations, detracts from his reproductive prospects. Reproductive success, though, cannot be a proper test of what is rational and what irrational. Is Jill who works on relief of famine irrational if she plans effectively to feed hungry people but this doesn't promote her own genetic reproduction? Or, to vary another stock example, suppose Jack becomes obsessed with his genetic reproduction, wrangles a position in a sperm bank, arranges for his sperm to in1pregnate hundreds of women, and when predictably caught, predictably spends a long time in jail. That is reproduction on the scale of a Sultan of old, but rationality it is not. A third kind of answer is offered by many economists and philosophers: that rationality is a matter of advancing one's preferences whatever they are. If that is what the term 'rational' means, though, it would make nonsense of my claim that Jack the sperm bank obsessive is irrational; he fulfils his preferences spectacularly. Still, he will strike many as irrational. We need an account of the concept of rationality, if we can find it, that allows this judgement at least as conceptually coherent.? 
These quick remarks do not, of course, exhaust all possible ways that the term 'rational' might be thought to have a straight descriptive meaning, a meaning that tells us what's at issue when people dispute the nature of rationality. Instead of pursuing further possibilities of this kind, however, I now want to turn to an alternative kind of account of what 'rational' means-an account that would explain, among other things, why the search for a straight descriptive rendition of the meaning of the term proves so elusive.
THINKING HOW TO REASON
In questions of rationality, I'll now hypothesize, what is at stake is not causal explanations of goings-on, or not exclusively that. Disputes about
7 As I argue in (1990, 12-18), even what constitutes instrumental rationality is contentious. No definition settles the matter.
what's rational may be rooted instead in disagreements about how to reason. When I say that the sperm bank obsessive is irrational, I am objecting to his reasoning. Of course there are many ways one can object to someone's reasoning; a person's reasoning may be tawdry, malicious, self-serving, or inglorious. What am I imputing when I condemn a piece of reasoning specifically as irrational? Here is a proposal: I am making a hypothetical determination of how to reason in his situation. I am saying, in effect, 'Let me not reason that way if in his shoes.' Consider Newcomb's fanciful dilemma.8 A thousand euros lie before you in a transparent box, and a second box is opaque; you can take both boxes or just the opaque one. A psychologist of amazing accuracy, you know, has predicted what you will do, and put a million euros in the opaque box just in case she predicts that you'll take just that one box. Suppose that having money is all it's rational for you to aim for. Some authors claim, then, that it's rational, in this situation, for you to take both boxes, and irrational to take just one. What does this claim amount to? To accept it, I propose, consists in making a kind of contingency plan, a plan for a hypothetical situation: the situation of being exactly like you and faced with the Newcomb dilemma. I think it rational for you to take both boxes, and my thinking this consists in my rejecting, in my contingency planning, the option of taking just one box if in that situation. That is my proposal. Far more would need to be said to clarify the proposal and argue for it. So far, it covers only judgements of ideal rationality, and so it needs refinement to accommodate bounded rationality, coping with decisions in the face of one's human limitations. Perfect rationality abstracts away from limitations of attention, memory, calculation costs, and the like.
The perfectly rational chess player never loses as white, just as no competent player of noughts and crosses ever loses; but not even a grand master qualifies as ideally rational in his play. Questions of ideal rationality are questions of how to think, in a situation, all limitations in one's powers of reasoning aside. Questions of bounded rationality are likewise hypothetical questions of how to think and what to do, I would want to propose, but questions of a more complex form.9 The proposal is limited too, so far, in that it covers rational acts and thinking, not rational people; it covers what it is rational to do and to think. Judgements of who is rational and who is not, I would say, are likewise not purely naturalistic, but laden with contingency planning, again in complex ways. A fuller version of my proposal would need to specify all this.
8 Nozick (1969). Gibbard and Harper (1978) proselytize a proposal of Robert Stalnaker for analysing Newcomb's problem.
9 See Gibbard (1997).
EXPLANATIONS, CONCEPTS, AND PROPERTIES
Disputes about rationality are at root, I proposed, disputes about how to reason. Talk about rationality, if this is right, is not purely naturalistic; it is not purely causal-explanatory. Still, in psychology and the social sciences, rationality and irrationality seem to explain a great deal. The heuristics and biases literature identifies systematic forms of irrationality that figure in some of our judgements.10 Economists have long inveighed against sunk cost fallacies and money illusions, and in the psychology of economics of the past couple of decades we find claims that other kinds of irrationality have economic consequences, for example an asymmetry between differences framed as losses and differences framed as lack of gains.11 How might such explanations work? The literature on moral concepts and properties includes extensive debate on 'moral explanations', on whether moral states of affairs can explain non-moral happenings.12 Parallel considerations apply, I'll be claiming, to explanations in terms of rationality. Take first the question of ethical explanation. When, in the nineteenth century, a party headed across the mountains to California was trapped in the Donner Pass, many died who could have been saved. On one assessment, this was because the man in charge of the rescue was 'no damn good'. His moral defects, it seems, can explain deaths.13 Theories of moral concepts and properties, my own included, must explain, or explain away, how this might be so. Now on my view, concepts of rationality have the same kind of status as moral concepts. We can ask about 'rationality explanations', in the sense of explanations of natural events in terms of rationality and its lack. My claims will be these: (i) Nothing about the concept of rationality in itself precludes such explanations.
(ii) Such explanations aren't purely naturalistic in their content, and where such an explanation is correct, there will be a naturalistic explanation that is equally satisfactory, whether we know what it is or not. (iii) Except when they rest on claims about rationality that are uncontroversial, rationality explanations aren't likely to be very useful in advancing our causal understanding, except as first approximations. The views of the concept of rationality that lead to these claims stem from a view parallel to those of Ayer and Hare on ethical concepts, views that I call expressivist.14 Ayer was reacting to ethical non-naturalism: he rejected talk of non-natural properties, but he was impressed by arguments that Moore had given against analytical ethical naturalism. He proposed, as
10 Nisbett and Ross (1980).
11 Kahneman and Tversky (1979).
12 See, among other articles, Sturgeon (1985b), Blackburn (1993, 198-209), and Sturgeon (1991).
13 Sturgeon (1985b, 63-5).
14 Ayer (1936), ch. 6; Hare (1981). See my (1988) on Hare as an expressivist.
Allan Gibbard a way to reconcile his views, that moral statements express emotions, or attitudes that are dispositions to emotions. This theory, he thought, could account for the phenomena that Moore relied on in his arguments, without forcing us to believe in occult properties in the world. 1 cite this history in order to give it a couple of twists, which can then apply to rationality as well. Moore's arguments have been controversial. 15 For some readers, however, Moore's arguments retain an intuitive appeal, an appeal that is not exhausted by the obviously inadequate open question test. Moore's arguments appeal to our sense of when two people are genuinely disagreeing. Two people can agree that a state of affairs is pleasant but disagree about whether it is good; from this Moore concludes that 'good' can't mean pleasant. I6 Even if hedonists are right, and to be good is to be pleasant, still the concept of being good isn't the concept of being pleasant; the terms 'good' and 'pleasant' don't mean the same. This ties Moore to Charles Stevenson, who isn't quite an expressivist but comes close. Stevenson famously began his treatment with agreement and disagreement: there can be disagreement not only in belief, he argued, but in attitude. I7 1 follow both Moore and Stevenson in this regard: disagreement is the key. Specifically, we can agree or disagree on how to reason, and in this, I'll be claiming, lies the key to the logic of rationality. Moore can be twisted into being more of a naturalist than he realized. He spoke of good as a non-natural 'object', 'notion', or 'quality', but his arguments support only an insight about the concept, that the concept of good is special. A distinction between concepts and properties is familiar in many current philosophical writings; a pre-scientific concept of being water isn't the scientific concept of being H 2 0, but the property of being water just is the property of being H 2 0. 
Now Moore spoke of what he called the good, which consists of all and only those things that are good. The good, he said, is a natural object. For all his strictly metaethical arguments showed, Moore explained, the good might be the pleasant. (I use the term 'good' in this discussion to mean good intrinsically.) And although for other reasons he rejected ethical hedonism, he had no doubt that there is some natural property that all and only good things share.18 Imagine, then, a hedonist who accepts the metaethical part of Moore's views. She thinks not just that as things stand, all and only good things are pleasant; she thinks this would be true in any possible situation. She thinks, then, that 'good' and 'pleasant' are not just coextensional; they are so necessarily. Now on some treatments of properties, necessarily coextensive properties are one and the same. This raises the possibility that goodness and pleasantness, on this Moorean hedonist's views, are the same property, but
15 See e.g. Sturgeon (1985a, 25-6).
16 Moore (1903, 11-12).
17 Stevenson (1944, 2-8).
18 Moore (1903, 7-9).
picked out by two different concepts. Assume, for the moment, that ethical hedonism is right. Still, Moore showed, claiming a thing to be good is something quite different from claiming it to be pleasant. If one agrees that something is pleasant but not that it is good, one shows some degree of ethical incompetence, we're supposing, but not a purely conceptual defect. That shows that the two concepts are distinct. Why, then, think there is more than one property in play at all? Distinct concepts account for all the Moorean phenomena of agreement and disagreement; why, if we are ethical hedonists, think we are dealing with distinct properties too? I read Moore as prone to think, if the question had arisen, that there is some natural property, not pleasantness, but something far more complex, such that all and only good things would share that property in any possible situation. With concept and property clearly distinguished, then, he could have said this: goodness is a natural property, involving pleasure, truth, beauty, organic wholes, and the like. The concept of being good, though, is a simple concept that is non-naturalistic, that doesn't figure, strictly, in the content of natural science, psychology, and sociology. (And once Moore is emended in this way, many current naturalists won't disagree. Many of them insist that they are talking about properties, not analytic equivalence or conceptual identity.)19 I now move to following Ayer's expressivism, again with twists. One twist is to consider the rational, not the good. Another is to advance to quasi-realism in Simon Blackburn's sense: I end up with a position that very much mimics my emended Moore.20 We might also call my position quasi-naturalistic. In one sense, it is naturalistic solidly: I start out assuming nothing but natural phenomena. I don't, though, end up claiming that all our concepts cash out as naturalistic concepts; in particular, I won't claim that the concept of being rational is naturalistic.
I will, however, claim that there is a property of being rational, and that, in a broad sense, it is a natural property.
THE PRINCIPLE OF CONSTITUTION
When we judge how it is rational for someone to think or to act, we conduct thought experiments. Was it rational for Galileo to recant? To judge this, I put myself in his shoes; I ask myself what to do if just like him and in exactly his plight. Was he rational to think the earth moved? To judge this, I ask myself what to think if in his situation, with only the evidence that Galileo held.
19 For instance, Sturgeon (1985a, 26).
20 Blackburn (1993, 15).
274
Allan Gibbard
In the deliverances of these thought experiments we can agree or disagree. We can disagree, say, on what to do or how to think if in Galileo's shoes. Why should such a thing count, though, as real disagreement? I reject a course of action that you adopt; we have different plans for the contingency of being just like Galileo and faced with his exact plight. Is this genuine disagreement-or is it just a difference between us, a personal difference as in height or weight? We differ in our plans for being in exactly the same contingency, but why should this difference over what to do count as my denying something you accept? This, I think, is a profound question, a question at the root of why normative concepts exist in our thinking. Start first with solitary planning; for this we can ask a parallel question. One moment I plan to swim late this afternoon; the next to stay dry and work into the evening. One moment I plan to stand firm if in Galileo's shoes; the next, to recant. Have I changed my mind about anything, coming to disagree with my self of the previous moment, rejecting something I had hitherto concluded? Or is all this like sitting one moment and, lunch finished, standing the next-a change but no conversion, no rejection of my previous state as suiting a previous moment? I couldn't possibly take this view of my states of mind and plan what to do. A plan, after all, emerges over time; I settle on whether to swim, and then on when to start for the lake. What I think earlier is subject to change, but change counts as revision; I reject my previous state of mind-whereas to stand at the end of a meal is not to reject having sat. In other words, I treat whether to swim today as a single issue I can address at different times. I treat fragments of plans as conclusions I can later accept or reject, as having content that I can contemplate at different times. Standing or sitting are not like this; they are not opposing stances towards a single conclusion.21
I thus refine my plans for situations I expect to face, and stick to a view or change my mind about what to do. The same with contingencies, even when they are wildly hypothetical: later I can agree or disagree with what I'm now thinking. Is each person even so an island to himself, forming plans that last over time, but neither agreeing nor disagreeing with the plans of others-any more than when I'm shorter than you, I thereby disagree with your height? That would belie the ways we depend on each other in our thinking. We put our heads together in conversation, continuing or revising each other's lines of thought. You and I can think together what to do if in my shoes; we do this if I come to you for advice. We can think together what to do if in Galileo's shoes; doing this can help us when we go on to think together how to live our own lives. Now to think together, we must be able to agree or disagree, not only each with himself but with each other.

21 Brandom (1994, 453) discusses making the same claim from different standpoints.
Normative Explanations
275
Agreement and disagreement on what to do or how to think, engaging each other over a plan for living across time and between persons, these are the seeds of plan-laden content. With them we have everything needed to germinate concepts that behave as my twisted Moore would expect. To see how this happens, start with a goddess, a thinker-planner who is hyperdecided: she has a view on every matter of fact, and a plan for every conceivable hypothetical situation. This fancy carries the key to the meaning and logic of plan-laden concepts-as follows: A challenge to expressivistic accounts like mine stems from Geach and Searle; it is the noted 'Frege-Geach' problem. How shall we account for the meanings of disjunctions and the like, and their role in reasoning?22 Emulating possible-world semantics with a twist, try identifying the meaning of a disjunction, say, with a set of possible hyperdecided thinker-planners: the set of such possible beings who would be in agreement with it. A valid argument is one such that any hyperdecided thinker who accepts the premisses accepts the conclusion.23 It will follow that in a broad sense of the term 'natural', there is a natural property that constitutes being rational. This property constitutes being rational in much the sense that the property of being H2O constitutes being water: that in any possible situation, all and only thoughts and acts with this property are rational. More precisely, my contention is that any thinker-planner is committed to a principle of constitution: that there is a broadly naturalistic property that constitutes being rational. The argument for this contention goes in two steps. First, any hyperdecided thinker accepts this claim. Second, for thinkers like you and me who are not hyperdecided: Suppose there is a claim I would accept in any hyperdecided state I could reach without changing my mind about anything.
Then I am already committed to this claim in my thinking-for then there is no way I allow that things might be, for all I know, that doesn't include this claim. To fill out these steps only slightly: First, a hyperdecided thinker has a plan for each conceivable situation that anyone might be in, a plan for what to think and what to do. We could express this plan as a plan to think and do all and only those things that share a certain property. This property must be natural in the broadest of senses: it must be constructible, at least infinitarily, out of naturalistically conceived features of the situation and the alternatives available.24 Otherwise the plan would be no plan at all for what to do in each situation, for a person who follows a plan must be able to recognize, in naturalistic terms, the contingencies to which the plan applies and the acts it directs. The property, then, qualifies as a natural property that the hyperdecided thinker regards as constituting being rational to think or to do. Every hyperdecided thinker-planner, it follows, accepts a principle of constitution: that there is a broadly natural property that constitutes being rational to do. Step two is to note that if every hyperdecided thinker accepts something, then it satisfies the condition for our already being committed to it. You or I, who are not hyperdecided, would accept the principle of constitution in any hyperdecided state we could reach without changing our minds. Therefore, we are already committed to this principle of constitution.

22 Searle (1962); Geach (1965).
23 For treatments of the problem that I think are equivalent to this, see Gibbard (1990, 94-102); Blackburn (1993, 192-3).
24 In a more careful treatment, this claim would need refining. The concept of rationality might allow for supernatural, metaphysical, and spooky properties. Even if there are no ghosts, for instance, we can perhaps ask what it's rational to do if you encounter one. (I thank John Hawthorne for raising this point.) Moore insisted that his claims about 'natural' properties
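The definition of validity above invites a toy formal model. The sketch below is my own illustration, not Gibbard's: it assumes a hyperdecided thinker-planner can be represented as the set of claims (factual beliefs and plan fragments alike) that it accepts, so that the content of a claim is the set of hyperdecided states agreeing with it, and an argument is valid just in case every hyperdecided state accepting all the premisses accepts the conclusion.

```python
# Toy model: a hyperdecided thinker-planner is a frozenset of the claims
# (factual beliefs and plan fragments alike) that it accepts.

def content(claim, states):
    """The 'meaning' of a claim: the set of hyperdecided states agreeing with it."""
    return {s for s in states if claim in s}

def valid(premisses, conclusion, states):
    """An argument is valid iff every hyperdecided state that accepts
    all the premisses also accepts the conclusion."""
    return all(conclusion in s
               for s in states
               if all(p in s for p in premisses))

# Each frozenset is one hyperdecided state: a complete view plus a complete
# plan, here drastically truncated to three toy claims.
states = [
    frozenset({"earth moves", "recant", "evidence strong"}),
    frozenset({"earth moves", "stand firm", "evidence strong"}),
    frozenset({"evidence strong"}),   # accepts neither plan fragment
]

# "evidence strong" does not settle the plan: one state plans to recant,
# another to stand firm, so the argument is invalid.
print(valid(["evidence strong"], "recant", states))       # False
# Every state accepting "earth moves" also accepts "evidence strong".
print(valid(["earth moves"], "evidence strong", states))  # True
print(len(content("earth moves", states)))                # 2
```

The point of the mimicry is visible even in this truncated model: disagreement in plan (recant v. stand firm) behaves logically just like disagreement in belief, since both are differences over which hyperdecided states are live.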
THE STATUS OF RATIONALITY
Is the concept of rationality I have depicted too broad? It might be thought that the term 'rational' has a fairly definite descriptive meaning, which the analysis I propose ignores. Now one feature I have indeed specified as part of the meaning: rationality concerns reasoning. (This must include reasoning on what to do and what to believe, in that actions and beliefs can be criticized as irrational.) Don't various other truisms about rationality, though, go towards forming the meaning of the term-truisms that go beyond just saying that thinking rationally is the way to think? If a person sets out to eat his cake and have it too, isn't he irrational by the very meaning of the term? And what of conditions that decision theorists set on rationality, such as the transitivity of preference? These features and others like them I haven't included in my account of what it means to call a way of thinking rational. That rationality has any one of these features, on the account I have given, counts as a substantive claim, not as something built into the meaning of the term 'rational'.25 Note first that even if I'm wrong on this score, and features like these are built into what 'rational' means, it is quite doubtful that conditions like these will settle all questions of what's rational and what isn't-even when all

24 (cont.) applied also to supernatural and metaphysical properties. 'Non-natural' properties, as he spoke of them, were not meant to be spooky properties like ghostliness. The term 'descriptive' is sometimes used for the broader class of properties Moore had in mind as what good could not be. So couched, my claim would be that descriptive properties are the only properties there are-though there are non-descriptive concepts, including normative concepts. There is a property that constitutes being the rational thing to do, to think, or to feel, I am claiming, but it is not a peculiarly normative property; there is no such thing. There are only properties, and every property is descriptive in the sense that there can, in principle, be a descriptive concept of it.
25 I treat a similar set of issues for moral concepts in Gibbard (1992b).
naturalistically specified facts are agreed. Is it rational to sacrifice one's own happiness for that of someone else? Some think yes and some no, and it is doubtful whether the issue can be settled by logical deductions from purely conceptual premisses, from premisses that it would show conceptual incompetence to reject. If the meaning of the term indeed is constrained by truisms like these, then to be sure, my account of the term will need elaborating to incorporate such constraints.26 There will be cases, though, where an issue about what's rational still comes down to a disagreement in plan. The crux will still be that we disagree on what to do in certain circumstances. Are truisms concerning rationality, though, really incorporated in the very meaning of the term? Students of decision theory disagree on such matters as transitivity, the 'sure thing' principle, and prisoner's dilemmas with twins. Truisms shared by all conceptually competent users don't seem to settle these controversies. Moreover, suppose all competent users do agree on a matter. We all agree, presumably, that clasping and unclasping one's hands for two hours a day, for its own sake and for no further reason, is irrational. That doesn't show that this finding is built into the very meaning of the term. What, after all, if someone did put great stock in spending two hours a day clasping and unclasping his hands, on no further ground? Would that person think the activity irrational? If the irrationality of what he's doing is built into the very concept, then he must think it irrational on pain of failing to grasp the concept. But if he didn't call what he was doing 'irrational', would he be misexpressing himself? Would his failure lie in a bad grasp of what the term means? Wouldn't he be showing by his actions that his mistake concerned not how to express his views of what to do and why, but those views themselves?
He would be irrational, of course, but he'd be irrational in his plans for living, not in how he expressed his irrational views on how to live. Of course he might say that what he was doing was irrational. What, though, is the difference between someone who engages in extended, multiple hand-clasping for its own sake thinking it irrational, and someone who does so thinking it rational? Thinking it irrational would seem to indicate some unease with the activity, some failure to embrace with his whole being the plan to do what he is doing. Or perhaps he proceeds with confidence, but picks up the term from people around him who find it truistic that what he's doing is irrational, and he treats their bases for this assessment as part of what the term means. But if that's the case, does he really mean by 'irrational' what the rest of us do? On questions like whether the 'sure thing' principle is a requirement of rationality, he can't now take sides. What, after all, would be at issue for him in the matter if he'll feel no unease with violating the principle whichever answer he accepts? How does he agree with one side of the dispute and disagree with the other, except on a question of how to apply a word? For him the question is at most one of linguistic sociology, whereas for us it is a question of whether to constrain ourselves by the 'sure thing' principle. One further misgiving a reader might have: Norms of rationality we take to be authoritative for everyone. Does the account I have given succeed in accounting for this feature? It does so trivially: according to the account, by the very meaning of the term, I don't count a precept as a norm of rationality unless I regard it as authoritative. Is transitivity of one's preferences a requirement of rationality? The answer is controversial. But if I think it is, then, according to the account I have given, I plan always, in anyone's shoes, to satisfy the principle-even in the shoes of someone who rejects it. In this sense, I treat the principle as authoritative.27

26 I consider concepts that intertwine descriptive and evaluative considerations in Gibbard (1992a).
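Transitivity, the decision-theoretic condition in question here, is at least mechanically checkable even if its normative status is controversial. As a side illustration of my own (not from the text), a sketch that tests a finite preference relation for transitivity:

```python
from itertools import product

def is_transitive(prefers, items):
    """True iff the relation has no a > b and b > c without a > c."""
    return all(not (prefers(a, b) and prefers(b, c)) or prefers(a, c)
               for a, b, c in product(items, repeat=3))

# A cyclic preference: a over b, b over c, c over a.
cycle = {("a", "b"), ("b", "c"), ("c", "a")}
print(is_transitive(lambda x, y: (x, y) in cycle, "abc"))  # False

# A linear ordering a > b > c is transitive.
order = {("a", "b"), ("b", "c"), ("a", "c")}
print(is_transitive(lambda x, y: (x, y) in order, "abc"))  # True
```

On Gibbard's account, whether failing such a check convicts an agent of irrationality is a substantive question of what to plan, not one settled by the check itself.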
RATIONALITY IN NATURE
If the programme I have been sketching succeeds, what is the place of rationality in nature? In the first place, I have been saying, rationality is a broadly natural property: there is a natural property of something's being rational to think or to do. For acts, this property might, for all I have said, be something as straightforward as maximizing one's hedonic prospects, or it might instead be something quite complex. In the second place, the concept of rationality is not itself naturalistic. You and I might disagree about what rationality is, naturalistically characterized. We'll do so if we disagree on how to think or what to do in a situation, when we agree in all our naturalistic descriptions of the situation. What, then, of explanations in terms of rationality? I tell you, suppose, 'The attack failed because the commander had blundered.' This says, in effect, that the attack failed because the orders the commander issued weren't rational. This might be a good explanation-or in any case, it is meaningful as an explanation. We say what its meaning is by saying what it is for a hyperdecided thinker-planner to accept it or reject it. For anyone hyperdecided, accepting this explanation amounts to two things: first, having a universal contingency plan to issue orders that meet certain conditions, and second, thinking that the commander's not issuing such orders explains the failure of the attack.28

27 My (1990) has a long discussion of normative authority, which takes up many issues that I don't treat here. See esp. ch. 9.
28 This is rough, of course; the analysis is more of a mistake than a blunder. A blunder is a mistake that is egregious.
Normative Explanations
279
The explanation that someone had blundered, on this account, isn't purely causal/naturalistic. The key lies in the kinds of agreement and disagreement that are possible among the hyperdecided. You and I, suppose, each turn hyperdecided. We might then agree in the entire causal story we tell of the universe, and yet still disagree on whether rationality had anything to do with it-anything to do with the attack's failing. The root of our disagreement, if this is so, isn't disagreement on natural fact and causal patterns; we agree on all that. We disagree on how to think and how to act. We disagree on how to think militarily, and on what to order in the commander's shoes. Or consider another example: I'm faced with a Newcomb problem. The psychologist tests me and finds I'm a two-boxer, and sure enough, I take two boxes, just as she predicted. I don't end up rich, and we want to explain why. Anatol, a one-boxer, says I lack riches because I'm an irrational type, because I wasn't disposed to choose the rational thing in the Newcomb case. I say I lack riches because, alas! I was rational, and the situation I was in was specifically designed to reward a certain kind of irrationality. Those disposed to choose rationally, say I, didn't have a chance at the million euros; they weren't in the box. Anatol and I agree on the causal pattern: we both think that it was my prior disposition to choose two boxes that, indirectly, kept a million euros from being placed within my grasp. We disagree about whether this cause constituted my being irrational-in that we disagree what to do if faced with a Newcomb problem. Given that we can explain happenings in terms of rationality or irrationality, should we do so? Special properties matter for human beings: long histories of natural selection give rise to properties that physics as such doesn't study, properties that are crucial to explaining natural patterns in the universe. 
Human ecology, as we might call it, is the most extreme case of a subject that explains in terms of such properties. For scientific purposes, though, we should beware of such impurely causal explanations, of citing rationality or its lack as a causal factor. Such explanations conflate possible sources of disagreement, and moreover, they suggest rationality as a mysterious kind of stuff that permeates us, when what we need to examine is an array of specific mental abilities. I am a Bayesian, imagine, in questions of rationality in belief, and on rationality in action I am a hedonistic egoist. I can offer explanations of events, then, in terms of people's rationality and departures from ideal rationality. The opposing commander satisfied precepts of Bayesian hedonistic egoism, say I, and that explains why he retreated. So his rationality explains his retreating. I have explained what happened, then, in terms of rationality. Is this a valuable mode of explanation? It combines two theses of mine that it would be better to distinguish: a claim about how the commander thought, and a claim about how to think in his shoes. If you disagree with me, it will help if I distinguish these two contentions: the
naturalistic story I'm telling and the planning gloss I'm giving it. You may object to my naturalistic story, or you may accept it, and object to my account of what constitutes being rational. Different discussions should ensue in the two cases. Might I reasonably accept an explanation of an event couched in terms of rationality, but have no purely naturalistic account to give of the event? Might I think that the charge failed because someone had blundered, but be quite unsure what constitutes blundering?29 Such a state of mind is intelligible enough, I'll agree. For we can say which more opinionated states of mind constitute agreeing and which disagreeing. I don't, though, advise enthusiasm for such explanations. They cater, for one thing, to the pernicious tendency to think that some uniform parameter constitutes rationality, that one can have more or less of it, and that the degree to which one is filled with rationality is deeply explanatory. Such a picture must be profoundly misleading. Important lines of research in social psychology show that performance in one situation often fails to predict performance in another.30 Developmental psychologists chart the emergence of distinct cognitive skills in young children.31 Suppose, though, contrary to my expectation, the property that constitutes being rational did turn out to be important in causal explanations. It would still be important to distinguish two questions: what this property explains, and whether it indeed constitutes being rational. This second is a question of how to think and how to live. Naturalism concerning the rational is in one sense correct: some natural property constitutes being rational. Reason, then, has its place in nature, and for all a theory of concepts can tell us, the property might explain much of human affairs. We shouldn't, though, have great expectations that rationality as such actually does do much to explain human thoughts and actions.
Aspects of rationality do-but not rationality as some uniform quality. Assessments of rationality primarily serve another purpose: settling on ways of thinking and acting.
29 Blackburn (1993, 206-8), suggests a 'more speculative strategy' than he thinks we really need; this strategy allows that there could exist a moral feature that is causally relevant. Sturgeon (1991, 30-1) thinks that Blackburn does indeed need to make such a treatment work, and doubts that such a position is available. I think of my treatment of normative explanations in this paper as filling out, in more detail, the kind of strategy Blackburn suggests, and so vindicating it.
30 Ross and Nisbett (1991).
31 Hirschfeld and Gelman (1994).

REFERENCES

Ayer, A. J. (1936), Language, Truth and Logic (London: Victor Gollancz).
Blackburn, Simon (1993), Essays in Quasi-Realism (New York: Oxford University Press).
Boyd, Robert, and Richerson, Peter J. (1985), Culture and the Evolutionary Process (Chicago: University of Chicago Press).
Brandom, Robert (1994), Making It Explicit (Cambridge, Mass.: Harvard University Press).
Damasio, Antonio (1994), Descartes' Error (New York: Grosset/Putnam).
Dawkins, Richard (1982), The Extended Phenotype (San Francisco: W. H. Freeman).
Geach, Peter (1965), 'Assertion', Philosophical Review 74, 449-65.
Gibbard, Allan (1988), 'Hare's Analysis of "Ought" and its Implications', in D. Seanor and N. Fotion (eds.), Hare and Critics (Oxford: Oxford University Press).
--(1990), Wise Choices, Apt Feelings: A Theory of Normative Judgment (Oxford: Oxford University Press).
--(1992a), 'Thick Concepts and Warrant for Feelings', Proceedings of the Aristotelian Society suppl. vol. 66, 267-83.
--(1992b), 'Moral Concepts: Substance and Sentiment', Philosophical Perspectives 6, 199-221.
--(1997), 'Engagement limite et rationalite limitee', trans. by Pierre Livet, in Jean-Pierre Dupuy and Pierre Livet (eds.), Les Limites de la rationalite, i: Rationalite, ethique et cognition (Paris: Editions la Decouverte): 397-411.
--and Harper, William L. (1978), 'Counterfactuals and Two Kinds of Expected Utility', in C. A. Hooker, J. J. Leach, and E. F. McClennen (eds.), Foundations and Applications of Decision Theory, i. 125-62.
Hare, R. M. (1981), Moral Thinking: Its Levels, Method, and Point (Oxford: Clarendon Press).
Hirschfeld, L. A., and S. A. Gelman (1994), Mapping the Mind: Domain Specificity in Cognition and Culture (New York: Cambridge University Press).
Kahneman, Daniel, and Tversky, Amos (1979), 'Prospect Theory: An Analysis of Decision under Risk', Econometrica 47, 263-91.
Moore, G. E. (1903), Principia Ethica (Cambridge: Cambridge University Press).
Nisbett, Richard, and Ross, Lee (1980), Human Inference: Strategies and Shortcomings of Social Judgment (Englewood Cliffs, NJ: Prentice Hall).
Nozick, Robert (1969), 'Newcomb's Problem and Two Principles of Choice', in Nicholas Rescher (ed.), Essays in Honor of Carl G. Hempel (Dordrecht: Reidel).
Ross, Lee, and Nisbett, Richard (1991), The Person and the Situation: Perspectives of Social Psychology (Philadelphia: Temple University Press).
Schelling, Thomas (1978), Micromotives and Macrobehavior (New York: Norton).
Searle, John (1962), 'Meaning and Speech Acts', Philosophical Review 71, 423-32.
Sperber, Dan (1996), Explaining Culture: A Naturalistic Approach (Oxford: Blackwell).
Stevenson, Charles L. (1944), Ethics and Language (New Haven: Yale University Press).
Sturgeon, Nicholas L. (1985a), 'Gibbard on Moral Judgment and Norms', Ethics 96, 22-33.
--(1985b), 'Moral Explanations', in David Copp and David Zimmerman (eds.), Morality, Reason and Truth (Totowa, NJ: Rowman & Allanheld): 49-78.
--(1991), 'Contents and Causes: A Reply to Blackburn', Philosophical Studies 61, 19-37.
Symons, Donald (1987), 'If We're All Darwinians, What's the Fuss About', in C. B. Crawford, M. F. Smith, and D. L. Krebs (eds.), Sociobiology and Psychology: Ideas, Issues, and Applications (Hillsdale, NJ: Erlbaum).
Tooby, John, and Cosmides, Leda (1992), 'The Psychological Foundations of Culture', in J. H. Barkow, L. Cosmides, and J. Tooby (eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (New York: Oxford University Press): 19-136.
INDEX

Alston, W. 35, 38, 46
Allais's paradox 138-40, 142, 230
Allen, C. 237
Anderson, J. 147, 149-53, 160, 162, 166, 180
Anscombe, G. E. M. 113, 122
Anscombian Principle 113-14, 118-19, 122, 131
Aristotle 85, 141
Aune, B. 89
Ayer, A. J. 271
Bayesian theory 136, 156-9, 163, 164, 175, 182, 197, 199, 230-1, 279
Bekoff, M. 237
Bermudez, J. L. 1, 8-9
Blackburn, S. 271, 280
Boghossian, P. 1-4, 15, 16, 18, 20, 28, 43, 45, Ch. 3 passim
Bonjour, L. 20, 40, 41
bootstrapping 96
Boyd, R. 266
Brase, G. L. 201
Brandom, R. 274
Bratman, M. 96, 105-6, 127-9
Broome, J. 1, 5-6, 96, 100, 102, 115, 123, 125
Burge, T. 22
Byrne, R. 256
Carroll, Lewis 36, 37, 41, 57, 74, 76-8, 80, 82
Chater, N. 1, 7, 8, 156-9, Ch. 7 passim
Chisholm, R. M. 93
Chomsky, N. 144
Chrysippus' dog 72
cognitive ethology 9, 233-5, 262
Cohen, L. J. 140, 221
commitments:
  changes of Ch. 9 passim
  contrasted with performances 211-12, 216-25
  doxastic/affective/evaluative 8, Ch. 9 passim
  incurred by beliefs/intentions 4-6, 124-9
  psychological v. normative 127-9
computational theories of the mind 144, 149
conceptual role semantics 42-7
conditioning theory 162, 240, 252
content independent abilities/procedures/rules Ch. 8 passim
Cosmides, L. 190-2, 194, 196, 197, 200, 201, 266
Cowie, R. 242
Cullity, Garrett, and Gaut, Berys 114, 115
Damasio, A. 267
Darwall, S. 114, 129
Davidson, D. 117, 119, 121, 217-21, 224-6, 237, 239, 257, 260
Dawes, R. 230
Dawkins, R. 246, 267
decision theory 5, 6, 136, 142, 148-50, 153, 192, 219, 267, 276-7
  challenges to 138-40, 230-1
  and instrumental reasoning 102-7, 110
Dennett, D. 181
denying the antecedent 56-9, 70
deontic logic 93-5, 126, 192-5
  see also instrumental reasons/reasoning/rationality: as intention reasoning
Dewey, J. 210-11, 222, 225
dispositionalist theories of propositional attitudes 209-12, 218-23
Donald, M. 234-5
Dummett, M. 44-5, 204
Dutch book argument 142-3
Ellis, B. 217
Ellsberg's paradox 230
epistemic rules/principles 3-4, Chs. 2 and 3 passim
Evans, J. St. B. T. 144, 149, 160-1, 201, 203
evolutionary psychology 7-8, 141, 148, 152, 165-6, 181, Ch. 8 passim, 266-7
  and cheater detection 191-6
  and dual process theory 191, 201-4
  evolutionary stable strategies 249
  and game theory 248, 251, 254, 267
  see also modularity; rationality
expressivism (non-factualism) about justification/rationality 2-3, 9, 28, 31-4, Ch. 11 passim
  in ethics 271-3, 275
Fermat's last theorem 38, 57, 60
Fodor, J. A. 188, 191
frame problem 136, 176
Frege, G. 49, 63-4, 275
full belief, potential states of 212-16
Galileo 273-4
Geach, P. T. 90, 275
Giaquinto, M. 26
Gibbard, A. 1, 31-4, 270, 276, 277, 278
Gibson, J. J. 150, 250, 252
Gigerenzer, G. 150, 161, 162, 165, 197, 198, 199
Goldstein, D. 150, 161, 162, 165
Gould, C. L., and Gould, S. J. 240-1, 256
Hale, R. 54, 64
Hamblin, C. L. 177
Hare, R. M. 90, 271
Harper, W. L. 270
Heinrich, B. 259
Hershberger, W. A. 255
Hume, David 89
Hume's Principle 64, 66
Hursthouse, R. 122
hyper-decided thinkers 9-10, 275-6
imperatives, logic of 90
information (Shannon-Wiener) 157-9
instrumental reasons/reasoning/rationality 5-10, Chs. 4 and 5 passim
  and adaptive modules 190 ff.
  comparison with belief reasoning 88-90
  and creatures without language Ch. 10 passim
  and dual process theories 203
  as intention reasoning 5, Ch. 4 passim
  and meta-reasoning 99-102
  and normative ascent 97-9
  reasoning and reason-giving 92-7
  see also decision theory; Means-End Principles; normative requirements; rationality
Jackson, F. 183
James, W. 210
Jeffrey, R. 102
Jessop, A. 199
justification:
  default/default reasonable beliefs 21-3, 27, 41, 52-5
  and entitlement 3, 4, 38
  and epistemic (ir)responsibility 40-1, 61-73
  inferential (of rules of inference) 3-4, 23-7, 52 ff., 54, 70, 78
  and internalism/externalism 37-41, 57-66, 72-83
  non-inferential (of rules of inference) 3-4, 20 ff., 52 ff., 70, 76, 81-3
  and perception/observation 20, 52-3
  and relativism 3, 15, 28-31, 44, 51
  see also epistemic rules/principles; expressivism (non-factualism) about justification/rationality; rules of inference: transmission of warrant/justification by
Kacelnik, A. 242
Kamm, F. 91-2
Kant, I. 83, 189-91, 197, 203, 204
Kastak, C. R., and Kastak, D. 240
Kohler, W. 259
Korsgaard, C. 96-7, 189
Krebs, J. R. 242, 245
Kyburg, H. 20
language of thought hypothesis 239
Lavallois flakes 256 ff.
Lehrer, K. 40
Levi, I. 1, 8-9, 217, 223
Locke, J. 175, 179, 238-9
Lowe, E. J. 1, 7, 182, 238
McClennan, E. F. 230
McDermott, D. 60
McDowell, John 117
Machina, M. 230-1
Mackie, J. L. 2, 4
Manktelow, K. L. 195
Marr, D. 149-51, 160
Means-End Principles 122-6, 129-31
mental logic/rules 144-5, 188, 192, 201
mental models 144-5, 188, 201
Millar, A. 1, 5-6, 117
Millikan, R. 236 ff.
Mithen, S. 234-5
modularity Ch. 8 passim, 266-7, 268
  massive modularity hypothesis 187, 188, 189, 190, 191, 201-4
Moore, G. E. 265, 271-3, 275-6
Morgenbesser, S. 223
Motivation Principle 117-19, 122, 131
Nagel, Thomas 29, 45, 114
natural sampling 197-201
naturalistic fallacy 221, 223
  see also Moore, G. E.
necessity, knowledge of 70 ff.
Newcomb's problem 270, 279
Newtonian mechanics 146
normative requirements 93-110
norms/principles of rationality: Ch. 1 passim
  as authoritative for everyone 278
  as constitutive of propositional attitudes 228-31
  and moral norms 2
  objectivity of 2-4; see also epistemic rules/principles; instrumental reasons/reasoning/rationality; rational analysis
  shape of 4-6
  and psychological explanation/prediction 5-11, Chs. 6-11 passim
Nozick, Robert 67
Oaksford, M. 1, 7, 8, 156-9, Ch. 7 passim
optimal foraging theory 150, 151, 166, 181, 242-3, 245-7
Oster, G. F., and Wilson, E. O. 181
Over, D. E. 1, 7 ff., 149, 160-1, 195, 199, 201, 203
Panglossian paradigm 181, 246
Peacocke, C. 23, 36, 43
Peirce, C. S. 210, 213, 214, 216, 218, 222, 225
Plato 29, 224
Popper, K. 155-6
pragmatism/pragmatists 210 ff.
Premack, D. 241
principle of constitution 273-6
principle of lost opportunity 245
Prior, A. N. 26, 42, 56
Quine, W. V. 8, 28, 45, 210, 224
Ramsey, F. P. 218, 229
rational analysis 7-8, Chs. 6 and 7 passim
  and computational limitations 147 ff., 150-3, 163
  and optimal behaviour function 147-53, 175, 181
  and the psychology of reasoning 154-60
rationality:
  and adaptedness 9, 141, 149 ff., 152, 154, 244-9; see also evolutionary psychology; modularity
  and algorithms capturing actual modes of reasoning 160-6, 192
  and competence/performance distinction 7, 144
  concepts of-as naturalistic 267-9
  concepts of-as non-naturalistic 265
  concepts v. properties of 9, 271-80
  and control over attitudes 209-12
  and creatures without language 9, Ch. 10 passim
  everyday v. formal 7, 135-50, 153-5, 160-6, 175-80, 238
  and heuristics/biases 144, 163, 198, 271
  ideal v. bounded 270
  inference-based conception of 237-43, 252, 257, 261
  internal and external 236, 260
  levels of 9, 244-62
  and naturalism Ch. 11 passim
  and normative equilibrium 217
  normative v. evolutionary 202
  procedural 237
  rationality1 v. rationality2 149, 160-1
  see also hyper-decided thinkers; instrumental reasons/reasoning/rationality; norms/principles of rationality; rational analysis
reasons for action:
  and arational action 122
  justifying/justificatory Ch. 5 passim
  motivating/explanatory 113, 117-19
  normative Chs. 4 and 5 passim
  pro tanto 92, 94, 115
  see also instrumental reasons/reasoning/rationality
reasons for belief:
  and justification 6, Chs. 2 and 3 passim
  and objectivity Chs. 2 and 3 passim
reflective equilibrium 140-1
Rescorla, R. A. 162, 163, 255
Richerson, P. 266
rule-circularity 3, Chs. 2 and 3 passim
rules of inference Chs. 2 and 3 passim
  and the acquisition condition 56, 59-69, 72, 74
  and meaning/concept-constitution 3-4, 23, 39-47, 57-67
  and possession of propositional attitudes 237
  scepticism about 51, 54
  transmission of warrant/justification by 18-19, 25, 27, 35-42, 47, 56-83
  see also justification; mental logic/rules; mental models; rational analysis; Wason selection test
Russell, Bertrand 28, 63
Scanlon, T. M. 114
Schelling, T. 266
Schueler, G. F. 114
Schusterman, R. J. 239-41
Seidenfeld, T. 231
signalling strategies 248-9
Skucy, J. C. 255
Sloman, S. 201
Smith, M. 114, 115, 119
Sperber, D. 266
Stalnaker, R. 270
Stanovich, K. E. 201, 202
Stephens, D. W. 245
Stevenson, C. L. 272
Sturgeon, N. 280
Suppes, F. 218
Symons, D. 266
System 1 and System 2 see Stanovich, K. E.
Take-the-Best algorithm 161-3, 165
Thagard, P. 141
tonk 26, 42-3, 56-8, 62 ff., 69 ff.
Tooby, J. 190, 194, 196, 197, 200, 201, 266
triple effect 91
Tversky, A., and Kahneman, D. 164, 199-200
Uniacke, Suzanne 116
Van Cleve, James 16, 36, 46
Velleman, J. D. 33
verificationism 17
von Wright, G. H. 88
Wagner, A. R. 162, 163
Wason selection test 7, 137, 155-9, 166, 175, 182-4, 191-6, 243
Watson, G. 118-19
Williams, B. A. O. 116
Wittgenstein, L. 28, 44, 51
Wright, Crispin 1-4, 24, 26, 35, 40-1, 64