John Dewey Professor
I do not think I was ever attracted to “formal methods in philosophy” nor do I think I am drawn to them now. For a long time I have been interested in the topic of justifying changes in points of view. I use the term “point of view” to encompass an agent’s convictions, judgments of uncertainty, and values. I claim that such justification is itself “practical” in the sense that it requires adopting the best means available to the agent relevant to realizing the agent’s current goals.
All of the notions I have just mentioned are, as they stand, highly obscure. It has seemed to me that structures studied by logicians, statisticians, economists and decision theorists could be helpful aids to alleviating this obscurity. I have felt compelled to learn something about the technical ideas in these disciplines in order to improve my understanding of some of the concepts with which I must deal. And these technical ideas and structures have, of course, an irremediably formal character. To the extent that I use formal methods in philosophy, I do so because I have to.
Early in my career I was concerned with the extent to which ampliative scientific inference (statistical inference, inductive inference and theory choice) could be rationalized within the framework of a general approach to rational decision making. I always thought that the expected utility principle was an excellent starting point for an account of ampliative scientific inference, except that the principle required the specification of numerically determinate probabilities and utilities for its application. In the 1960s, I bracketed my concern and built a model of “cognitive decision-making” founded on the injunction to maximize expected utility.
This model was published in the book Gambling with Truth (1967) and modified shortly afterward in “Information and Inference” (1967).
The classical approach to such hypothetical elicitation did not seem to me to answer the question in which I was interested. That approach presupposed that the ideally rational agent had numerically determinate probabilities, judgments of utility and of expected utility and that such an agent maximized expected utility. By “numerically determinate”, I mean three things: the agent’s state of belief, his credal or subjective probability, is representable by a unique probability function over the relevant algebra of states; the agent’s evaluation of consequences (his extended value structure, as I came to call it) is representable by a utility function unique up to a positive affine transformation; and the agent’s expected utility function (the agent’s “value structure”) is representable by the set of expected utility functions obtained by taking any utility function from the given set together with the unique credal probability and determining the expected utility. (This too is unique up to a positive affine transformation.)
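The determinacy conditions just described can be stated compactly. The notation below is mine, not the author’s: p is the unique credal probability over states s, u a utility function over option–state pairs, and EU the induced expected utility.

```latex
% Numerically determinate point of view (notation mine):
% a unique credal probability p over states s, and a utility u
% fixed only up to positive affine transformation u' = a u + b, a > 0.
EU(o) \;=\; \sum_{s} p(s)\, u(o, s)
% Replacing u by u' = a u + b gives
EU'(o) \;=\; \sum_{s} p(s)\,\bigl(a\,u(o, s) + b\bigr)
        \;=\; a\, EU(o) + b ,
% so the expected utility function is itself determinate only up to
% a positive affine transformation, as the text notes.
```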
It seemed to me extremely doubtful that a rational agent should always be committed to such numerically determinate judgments. My worry was not that in practice it would not be feasible to measure or identify the agent’s probabilities, utilities or expected utilities precisely. The classical Bayesian tradition already conceded that, as in physics, so too in psychometrics, perfect precision in measurement is not to be achieved with or without the Heisenberg principle. The classical Bayesian tradition could concede that no one can meet the demands of rationality (including numerical precision in probability and utility judgment) in practice while insisting that ideally rational agents were, nonetheless, committed to numerically determinate probabilities, utilities and expected utilities and that researchers studying such agents could treat such commitments as theoretical magnitudes.
My dissent from the classical Bayesian tradition related to the standards of ideal rationality that classical Bayesians endorsed. I insist that ideally rational agents ought not to be committed to numerical determinacy in probability, utility and expected utility. Rational agents ought in some contexts to be prepared to remain in suspense or doubt concerning probability, utility and expected utility just as they ought to be prepared to remain in doubt or suspense in the sense of withholding full belief that proposition h is true or, equivalently, refusing to rule out as a serious impossibility that h is false and that ~h is false. Allowing ideally rational agents not to be opinionated concerning truth-value bearing propositions is recognized as part of an adequate account of rationality. Indeed, in many contexts, ideal rationality requires refusal to be opinionated.
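The contrast with numerical determinacy can be put schematically. In the sketch below (notation mine, not the author’s), one standard way to model suspense about probability is to replace the single credal probability function with a set P of probability functions, so that the probability of a hypothesis h is bounded rather than pinned to a point.

```latex
% Indeterminate credal state (notation mine): a set P of
% probability functions instead of a unique p.
\underline{P}(h) \;=\; \inf_{p \in P} p(h),
\qquad
\overline{P}(h) \;=\; \sup_{p \in P} p(h)
% Suspense about the probability of h:
% \underline{P}(h) < \overline{P}(h),
% by analogy with withholding full belief in both h and ~h
% rather than being opinionated either way.
```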