Probabilism is a subjective theory of probability. On this interpretation, probabilities are not a measure of something ‘in the world,’ but rather a measure of the confidence, degree of belief, or uncertainty – however you want to slice it – of the agent at a given time. This confidence is measured on a scale from 0 to 1, from impossibility to certainty. For any given event, the agent assigns it a probability between these numbers based on how likely they think it will occur. This includes paradigmatic cases of situations in which we think we understand the objective ‘in-the-world’ probability of something. For example, if I flip a coin, it does not have a 0.5 probability of landing on heads (like we would typically think); it is just that my confidence that it will land on heads is 0.5. This causes some problems for the account that I will go over later, but for now, it is just important to note how radically internal this account is to the agent. The benefit of the account is that it avoids all the pitfalls of having to formulate an objective theory of probability, which is highly metaphysically elusive.
How likely we think something is to occur, traditional probabilists often suppose, can be measured roughly by our willingness to act on some belief, given a (probabilistically consistent) appraisal of the expected utility of the situation.[1] For example, I would happily bet $1 on a coin landing heads if the bet paid out $10, because there is a good chance it would benefit me much more than I stand to lose.[2] Conversely, someone who erroneously thought it impossible that coins could land on heads would assign that outcome a probability of 0 and would thus act on their belief by never taking such a bet. The strength of a belief is measured by the strength of one’s disposition to act on it in any given situation. I have no room to defend this claim here, but it is a relatively intuitive notion, especially in the case of expected utility (broadly construed, not just in terms of bets, because money’s value scales non-linearly for most people).
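The decision rule at work here can be made explicit with a small calculation. A minimal sketch in Python, assuming (as footnote [2] does) that utility is simply linear in money, which the essay itself flags as a simplification:

```python
def expected_utility(payout, stake, p_win):
    """Expected utility of a simple bet: win `payout` with credence p_win,
    lose `stake` otherwise. Utility is assumed linear in money."""
    return payout * p_win - stake * (1 - p_win)

# The coin bet from the text: stake $1 to win $10, credence 0.5 in heads.
print(expected_utility(payout=10, stake=1, p_win=0.5))   # 4.5

# The agent who (erroneously) thinks heads impossible sets p_win = 0
# and, acting on that belief, declines the bet.
print(expected_utility(payout=10, stake=1, p_win=0.0))   # -1.0
```

A positive expected utility rationalises taking the bet; a negative one rationalises declining it, which is just the disposition-to-act picture of partial belief in miniature.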
The probabilistic theory can be extended to be a complete epistemological theory. A theory of probabilistic epistemology says basically three things:
(1) All beliefs are partial or uncertain beliefs, where the extent to which we think each belief is true is measured by the relative confidence we assign to it.
(2) All beliefs ought to be consistent with the laws of probability as a coherence constraint on belief formation.
(3) All beliefs, when faced with new evidence, ought to be updated according to conditionalisation, a rule of probabilistic inference.
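Postulate (3) in its textbook form is Bayes’ theorem applied to belief revision. A minimal sketch in Python, with illustrative numbers that are not from the essay:

```python
def conditionalise(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: the posterior P(H|E) an agent should adopt on
    learning E with certainty.
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Hypothetical numbers: a hypothesis held at credence 0.3, where the
# evidence is four times likelier if the hypothesis is true.
posterior = conditionalise(p_h=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(posterior, 2))  # 0.63 -- the evidence raises the credence
```

The agent’s new unconditional credence in H is simply their old conditional credence in H given E, once E is taken as certain.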
Conditionalisation, in the simplest case, says that upon learning some evidence with certainty, one’s new credence in any hypothesis should be one’s old conditional credence in that hypothesis given the evidence. Of course, this is a highly artificial picture, and it is not at all clear that we would ever be in a position of certainty regarding new evidence – that is, to assign it a probability of 1. It is more likely that we would be at least partially sceptical of such evidence. For example, news outlets are often wrong or biased, and we ascribe a correspondingly attenuated level of confidence to their proclamations. Similarly, in scientific discovery, theories once thought certain are often overturned. This has led some to factor uncertainty about the evidence into the conditionalisation process itself, by assigning confidence values to the evidence received and calculating the conditional probability of any statement so as to include the possibility of the relevant evidence turning out to be false. Taking this route means that nothing can be logically certain, and thus mirrors the debate between those who want to keep fully-fledged beliefs and those who want to dispense with them. These differences do not matter for the content of this essay, especially here in the expository part. It need only be noted that all probabilists think that conditionalisation, in most contexts, is the process that ought to govern the way we revise our beliefs if we are to be rational. The specific constraints one puts on this process will be important in determining which problems each probabilist theory faces, but only contingently so for the problem I will be concerned with.
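This uncertain-evidence variant is standardly known as Jeffrey conditionalisation (after the Richard Jeffrey cited in footnote [1]). A minimal sketch in Python, with invented numbers:

```python
def jeffrey_update(p_h_given_e, p_h_given_not_e, new_p_e):
    """Jeffrey conditionalisation: revise credence in H when the evidence E
    is itself uncertain, so that E's credence moves to new_p_e rather than 1.
    P_new(H) = P(H|E) * P_new(E) + P(H|~E) * (1 - P_new(E))"""
    return p_h_given_e * new_p_e + p_h_given_not_e * (1 - new_p_e)

# Hypothetical numbers: a news report that, fully trusted, would push H
# to 0.9; discounted entirely, H would sit at 0.2. Trusting the outlet
# to degree 0.7 yields an intermediate credence.
print(round(jeffrey_update(0.9, 0.2, 0.7), 2))  # 0.69

# Full trust (new_p_e = 1) recovers ordinary conditionalisation.
print(jeffrey_update(0.9, 0.2, 1.0))  # 0.9
```

The ordinary rule falls out as the limiting case in which the evidence is received with certainty.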
To recap, a probabilist epistemology says three things. Firstly, at least some (or all) of our beliefs are partial beliefs and are thus probabilistic. Secondly, those partial beliefs ought to conform to the laws of probability. Finally, rational agents ought to update their beliefs by conditionalising their current beliefs on new evidence.
III. There are many potential problems with a probabilistic epistemology. For example, postulate (3) entails that a completely rational agent would have to conditionalise every single belief in light of any arbitrary new evidence, which could be as banal as seeing something. Since probabilists also think that if we are to be rational, we must follow (3), it seems to follow that we ought to conditionalise every single belief in light of every bit of evidence. We do not need to test anything to know that this is not something any human could possibly undertake, and if ought implies can, then (3) must be false. Prima facie, this is a pretty strong argument with plausible premises. I think this is the biggest problem with the theory (and not one I can dispute here), or, more generally, that the theory seems to entail an inhuman task in actually following these rules. Perhaps one could plausibly weaken the norm so that rational agents need only revise those beliefs it is possible or relevant to revise in their situation, the stronger norm remaining an ideal.
Another problem lies in justifying the norms (2) and (3). This is often done by appealing to ‘Dutch Book Arguments’, which show that we have good prudential or pragmatic justification for conforming our beliefs to the probability calculus (because otherwise we could always be ripped off). They purport to show that if we fail to do this, we will make decisions that seem obviously irrational and lead us to ruin. As far as norms go, these seem as rationally acceptable as any standard norms or rules of inference we find in classical logic prescribing consistency and good inference. Thus, I am happy to accept them prima facie. One might worry that the prudential justification is not enough, but I do not, at least as far as developing a deductive logic for partial beliefs goes.
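The Dutch Book idea can be illustrated with a toy calculation, assuming the standard setup in which an agent treats their credence p in an event as a fair price for a ticket paying $1 if that event obtains:

```python
def sure_loss(credences):
    """Toy Dutch Book check: the agent buys, for each event in an
    exhaustive and exclusive partition, a $1 ticket at their own 'fair'
    price -- their credence in that event. Exactly one ticket pays out,
    so the guaranteed net loss is (sum of credences) - 1."""
    return sum(credences.values()) - 1.0

# An incoherent agent: credences in 'heads' and 'tails' sum to 1.2.
print(round(sure_loss({"heads": 0.6, "tails": 0.6}), 2))  # 0.2 lost, come what may

# A coherent agent cannot be booked this way.
print(round(sure_loss({"heads": 0.5, "tails": 0.5}), 2))  # 0.0
```

Whichever way the coin lands, the incoherent agent pays $1.20 for tickets returning exactly $1, which is the ruinous irrationality the argument trades on.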
Finally, a third problem is that accepting a probabilistic epistemology, and the consequent possibility that nothing is certain, jeopardises the status of full belief in anything other than the norms supposed to govern probabilism. This would mean giving up full beliefs entirely in favour of partial beliefs, which is quite the price to pay. This, too, is a bullet I could bite if I were forced into the position and no hybrid formulation worked.
What I am more concerned with here is whether, if you accept probabilism, it can by its own lights do the work we need a theory of knowledge to do when it comes to knowing things about the world. This problem will take up the rest of the essay.
IV. There is a slew of problems with the probabilist theory of knowledge that I am worried about, which I would call problems of subjectivity. They include the problem of priors and the problem of inductive content. This section goes over these in turn, followed by a brief discussion of convergence.
The problem of priors is that, on a subjective interpretation, nothing beyond coherence constrains an agent’s initial probability assignments: the world does not plug the values in directly, so the priors one starts from may be arbitrary. The problem of inductive content follows from this. Probabilistic epistemology purports to be a formalisation of inductive logic: it purports to show how theories are confirmed and how to make true (or at least accurate) predictions about the world. However, since the values assigned to any given belief cannot be plugged into the agent directly by the world, the agent has to come up with them – and they could be arbitrary. We have a plausible deductive formalism for organising and updating our partial beliefs in conditionalisation, but that does not logically guarantee the inductive efficacy of the content arrived at by following it. We could be perfectly rational according to probabilism, and yet our belief scheme would be, in McDowell’s memorable phrase, “a frictionless spinning in the void.”[4] Thus the theory, in order to be successful, needs to secure some friction on the world.
Both of these worries feed a further concern: that rational Bayesian agents would fail to converge, both with other people and on truths about the world. Initial probabilities may differ in ways that no course of conditionalisation is ever likely to bring posterior beliefs into the kind of overlap rationality would seem to require. We seem to need such convergence to communicate, co-operate, and ultimately justify our beliefs, since convergence, at least in some cases and over the long term, is one of our best indications of truth.
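What convergence of posteriors amounts to can at least be made concrete. A toy simulation in Python (all priors, hypotheses, and data invented for illustration): two agents with sharply opposed priors over a coin’s bias conditionalise on the same flips, and their posteriors come to overlap. Whether real-world priors and evidence are ever this well-behaved is, of course, precisely what is at issue.

```python
def update(prior, flips, biases):
    """Conditionalise a credence distribution over candidate coin biases
    on a sequence of flips (1 = heads), renormalising after each flip."""
    post = list(prior)
    for f in flips:
        post = [p * (b if f else 1 - b) for p, b in zip(post, biases)]
        total = sum(post)
        post = [p / total for p in post]
    return post

biases = [0.2, 0.5, 0.8]          # candidate hypotheses about the coin
agent_a = [0.1, 0.1, 0.8]         # two agents with sharply different priors
agent_b = [0.8, 0.1, 0.1]
flips = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0] * 5   # 50 shared flips, half heads

post_a = update(agent_a, flips, biases)
post_b = update(agent_b, flips, biases)
# Both posteriors pile up on the 0.5 hypothesis, whatever the start.
print([round(x, 3) for x in post_a])  # [0.0, 1.0, 0.0]
print([round(x, 3) for x in post_b])  # [0.0, 1.0, 0.0]
```

Here shared evidence ‘washes out’ the differing priors; the sceptical worry is that real disagreements need not be structured so conveniently.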
There are broadly two approaches to these problems. The first can be called ‘subjective probabilism’ while the second can be called ‘objective probabilism.’ For my thesis, the specifics of the individual theories developed by philosophers along the continuum these poles constitute will not matter. I will just outline the limit cases of each to give one an idea of the conceptual space.
The maximal subjectivist would think there are no rational constraints on prior probabilities beyond the probability calculus itself, and that prior probability assignments are entirely due to ‘irrational’ accidents such as culture, genes, or luck. Thus, the subjectivist bites the bullet on the possibility of no convergence between agents, the truth, and the world. The maximal objectivist, on the other hand, would think a uniquely rational and objective prior probability is determined in every case by some a priori principle. This solves the problem of convergence and the other problems of subjectivity by showing that insofar as the agent strays from the objectively correct probability assignments, they are irrational for doing so. Thus, on this picture, convergence is, by definition, rational. Rational divergence between agents is simply not possible – truth and reality are baked in. The former theory is easy to accept and specifies a minimal set of norms to follow and defend, but sacrifices possible friction; the latter specifies an impossibly complex and unreachable set of norms to defend, but has perfect friction.
Without getting into the weeds about what kind of rational constraints we can impose on adopting initial beliefs, my contribution is to argue that we have good reason to think that even a maximally subjective interpretation of a probabilistic epistemology would not be saddled with an objectionable level of subjectivity or arbitrary belief. I argue that the most pernicious possible problems of subjectivity simply cannot occur in normal humans, and that most people’s beliefs will not only converge but also be true. Securing this requires only a few minimal assumptions. Thus, I attempt to vindicate the most vulgar form of subjective probabilism by explaining away the problems, rather than simply biting the bullet (as our theoretical maximal subjectivist would).
V. In this section, I argue that even on a maximally subjective interpretation of probabilistic epistemology, mere coherence being satisfied is enough to guarantee both friction and convergence.[5] It is my theory of friction. I give three arguments for this claim. The first argument attempts to ameliorate the problem of priors and convergence, while the latter two attempt to ameliorate the problem of inductive content or friction.
Firstly, I think the sceptical denial that our shared basic belief structure is true cannot be rationally accepted. I can obviously offer no decisive refutation of global scepticism here, nor can anyone else. I can only point out the irrational conclusions this objection entails. It entails that all our basic beliefs are false and that at no stage during inquiry, even as a community, do we ever course-correct in the right direction. One must ultimately think not only that those individual beliefs are false, but that the most basic propositions of human knowledge that we all seem to share are also false.
All I can offer by way of argument is an appeal to one’s priors regarding scepticism. What seems more likely to be true: that all our beliefs are systematically and universally false, or that some basic beliefs regarding our own experience, existence, and presence in the world are true? Asked honestly, very few people truly have more confidence in the former than in the latter. Thus, the sceptical premise should simply be abandoned in favour of a presumption of truth regarding our basic beliefs. If this answer is unsatisfactory because one’s condition for having any purchase on the world whatsoever is that a belief could not conceivably turn out to be false, it will be equally unsatisfactory for every other theory of knowledge, not just probabilism. Thus, qua epistemological theory, this is not a special concern for probabilism, lest we give up knowledge entirely. We have good reason to think, therefore, that our basic beliefs do have some inductive content.
Secondly, I think our distinctive form of collective sociality as humans indicates not only that our basic beliefs have some inductive content, but some of our best theories, rationally arrived at, do too. While our human constitution as individuals offers a restricted platter of initial probability assignments, our collective position as social creatures restricts the direction of conditionalisation by imposing communal standards on what is to be counted as worthy of belief. As humans, we care about not only what is true individually, but what others think is true. As Peirce writes, in seeing that “men think differently from him…it will be apt to occur to him, in some saner moment, that their opinions are quite as good as his own, and this will shake his confidence in his belief.”[7]
We are constantly conditionalising our beliefs socially: firstly on the fact that others believe different things to us and seem to do so rationally, and secondly on the alternative beliefs themselves seeming worthy of consideration. For example, take someone with a strongly held political belief but very little exposure to alternative points of view, or to what is possible in public policy and in politics in general. They will start out relatively dogmatic on the issue. However, have one of their friends whom they trust (that is, to whose evidence they generally assign a high probability) earnestly share an alternative and discuss the workings of government with them, and they will be confronted with the arbitrariness of their own views. Consequently, they will conditionalise their beliefs substantially. Even if this even-handed correction does not take place, one who cares enough might instead attempt to prove their own beliefs with more certainty in response to opposition and, in doing so, bring more true information, and thus worldly friction, to all belief sets. In fulfilling our impulse to know and to be correct about these things, we are cumulatively performing error correction on a given body of knowledge within a given social system. This arguably describes the process of an adequately systematised (certainly idealised, but not alien) science. Ultimately, this process lifts our explanatory theories from the blind contingency of subjective choice into the realm of socially tempered objective explanations of reality. It does this because, ultimately, we care, as humans, not merely about what seems true to us in the moment but about what others think, and especially about what people ought to think, given the evidence (and we are happy to tell them so).
Part of this process involves enforcing the kind of consistency demanded by probabilism. We find it exceptionally psychologically dissatisfying to entertain contradictions within theories. But, more importantly, we find the same dissatisfaction with opposing candidate theories of particular phenomena. Thus, we try to stamp those contradictions out by seeking out confirmatory evidence, and thus the required friction. In sum, the conditions of sociality make us care about consistency, completion, and the reality of our knowledge about the world, and “unless we make ourselves hermits, we shall necessarily influence each other’s opinions” in this way. My contention here is that the mere facts of sociality, with basic beliefs as initial conditions, even absent some substantial objective norm of rationality, are enough to guarantee that rational agents following the formal tenets of the most austere subjective probabilism will run up against the world.
Thus, to sum up: with the minimal assumptions that global scepticism is not true, that we share many basic beliefs, and that these beliefs have some purchase on the world, any rational conditionalisation of one’s beliefs will always be heavily constrained, so as to guarantee convergence. Moreover, these basic beliefs, which we cannot help but have, are not merely subjective but about the world, and largely true. Furthermore, standard conditions of social interaction give us reason to think that even our more tenuous and theoretical partial beliefs will have some predictive efficacy. Therefore, against the problems of subjectivity outlined above, we have good reason to think that any person with a (mostly) coherent set of beliefs (that is, one adhering to the laws of probability) will have a set of beliefs that is also true to the facts (and not merely convergent with others’), regardless of the subjective starting point. Probabilism, constrained by human nature, will always have inductive content.
VI. To end, one might still be worried about those seemingly insoluble difficulties that arise from disagreements about politics, public policy, or morality. These things do not constitute part of our basic beliefs and also do not seem to be captured entirely by communal belief-forming practices. They are the primary source of frustration when it comes to appraising the rationality of others. When people fundamentally disagree with us politically (we are confident their beliefs are false), probabilism gives us no grounds for repudiating such beliefs (because they are still deemed rational). I do not think this is a problem, and argue that the kind of 'internalism' about rationality entailed by this account is good, actually.
The fact that there is not perfect convergence of rationality to reality is not remotely a point against probabilism as a theory. As I have indicated, this does not result in a frictionless spinning in the void, or epistemological anarchy, because it is simply not possible that it could. Following the coherence constraints imposed on us by probabilism by itself gives us good reason to think, at least as a community in standard cases, that we are getting some true purchase on the world, that following its rules points us in the right direction, and finally that it leaves open a reasonable level of rational disagreement. Probabilism is not only strong as an epistemological theory; I think we have reasonable grounds to think that one of its biggest obstacles to success, the associated problems of subjectivity, is surmountable, even on the most austere reading of its norms.
This does not put aside the problem that probabilism must purport to govern and ultimately describe the actual reasoning processes rational agents must follow, when it is a highly idealised model that we could hardly even begin to follow formally. But it is a good start and gives a compelling (and I think convincing) model of how we should think.
Footnotes
[1] Jeffrey, Richard. Probability and the Art of Judgment. Cambridge: Cambridge University Press, 1992. 30-31.
[2] The expected utility of this bet would be as follows: (10 x 0.5) + (-1 x 0.5) = 4.5
[4] McDowell, John. Mind and World. London: Harvard University Press, 1996. 11.
[5] This strategy is somewhat similar to (and inspired by) Davidson’s in “A Coherence Theory of Truth and Knowledge” (2001), although he could be saying anything. It is closer in my mind to being inspired by Peirce’s “The Fixation of Belief” (1993).
[6] "Typical" can be read pretty widely and I still think this works.
[7] Peirce, Charles Sanders. “The Fixation of Belief.” In The Essential Peirce, Volume 1: Selected Philosophical Writings (1867-1893), 109-123. Bloomington: Indiana University Press, 1993. 116.