
March 26, 2007

Do probability judgments presuppose matters of fact?

In order to support the notion that prior probabilities are grasped a priori, Patrick Maher (2006) gives the example of a coin that, one has been told, has either heads on both sides or tails on both sides. What is the probability that it will land heads? Understood in terms of physical probability, the answer is supposed to be “either 0 or 1 but I do not know which.” Understood in terms of “inductive” probability, the answer is supposed to be 1/2. Clearly there is some distinction between these answers.
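The two readings can be made concrete with a small sketch (my own illustration, not Maher’s): the physical probability is fixed by the coin’s actual nature, while the “inductive” answer of 1/2 falls out of averaging the physical probabilities over our uncertainty about which coin it is.

```python
# Sketch (my illustration, not Maher's): the two readings of
# "probability of heads" for a coin known to be either double-headed
# or double-tailed.

# Physical probability of heads, conditional on the coin's actual nature:
physical = {"double-headed": 1.0, "double-tailed": 0.0}

# The "inductive" answer: average the physical probabilities over a
# 50/50 prior on which coin it is.
prior = {"double-headed": 0.5, "double-tailed": 0.5}
inductive = sum(prior[c] * physical[c] for c in prior)

print(inductive)  # 0.5, though the physical probability is 0 or 1
```

On this way of setting it up, the 1/2 is just an expectation over ignorance of the coin’s nature, which anticipates the point argued below.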

The question is whether separating “inductive” probability from this kind of “physical probability” really implies that inductive probability “does not depend on the nature of the coin.” Presumably the motivation for saying that the physical probability is either 0 or 1 is a way of saying that if we knew the full nature of the coin, we would know with certainty whether it would land heads or not. If we knew it was heads on both sides, then we would know for sure that it would land heads up. Surely there is a difference between probabilities we assign in the absence of knowledge of particular facts and those we would assign in their presence: this is one of the most important insights of Bayesianism.

But this makes us think that the difference between the two probabilities here is a difference between two instances of the same kind of probability, not two kinds of probability. After all, even our judgment in the absence of the specific nature of the coin (probability = 1/2) still requires some general knowledge about the coin, and about other facts of the matter. We need to know, for example, that coins are such as to fall on one side rather than another the vast majority of the time (rather than on their edges), such that we can discount this third possibility and say that the chance of heads is 1/2, rather than 7/16 or some fraction incrementally smaller than 1/2.
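The arithmetic behind those fractions can be spelled out in a short sketch (my own illustration): if edge landings had probability eps and the remaining probability split evenly, the chance of heads would be (1 − eps)/2, and 7/16 corresponds to an edge probability of 1/8.

```python
# Sketch (my own numbers): how the 1/2 judgment depends on knowing
# that edge landings are negligible. If the coin lands on its edge
# with probability eps, and heads and tails split the remainder
# evenly, then P(heads) = (1 - eps) / 2.

from fractions import Fraction

def p_heads(p_edge):
    """Probability of heads when edge landings have probability p_edge."""
    return (1 - p_edge) / 2

print(p_heads(Fraction(0)))     # 1/2: edge landings discounted entirely
print(p_heads(Fraction(1, 8)))  # 7/16: the fraction mentioned above
```

So even the familiar 1/2 quietly encodes a factual judgment that eps is effectively zero.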

Now it might be objected that this ignores the supposition of the example, that we “have been told that a coin either has heads on both sides or else has tails on both sides and it is about to be tossed.” The idea would be that we should take it for granted that it will fall on one side or the other, and judge the probability with that assumption. But it is curious that this simple example of an inductive probability is as complex as it is. An even simpler example would be to ask our probability judgment about what we know to be an ordinary coin that is about to be tossed. In this example, it would be very clear that our judgment of a 1/2 probability of landing heads would be based on our knowledge of the nature of the coin: not only on our knowledge that it is extremely unlikely to land on its edge, but also on the fact that it has two sides, each different from the other. The artificial example of being told that the coin is one of two possible double-sided coins obscures this.

In fact, I think that even the artificial example could be understood in terms of knowledge about certain facts, rather than anything a priori. The only realistic case in which we could form a judgment of 1/2 after having been told that the coin is either one double-sided coin or another is the case in which we must rely on someone’s testimony to this effect (if we were merely to consider the possibility for ourselves, to select these two possibilities rather than others would in turn presuppose special knowledge of the coin). In that case, we are depending on our knowledge of facts about the reliability of the person’s testimony, and of facts about the general possibility of the construction of such coins—still including the fact that they are likely never to land on their edge.

The artificiality of this example even leads us to wonder whether any probability can be assigned in this case at all. In order to judge that there is a 1/2 probability given that we have been told there are two possibilities, we would also have to presuppose that these two possibilities are equally likely. We might naturally do this in the absence of any special reason to think that one is more likely than the other, but only if we know something about how coins are selected. We say 1/2 about a normal coin because we know about its physical construction. But we do not judge that there is always a 1/2 probability of rain, even though we know there are only two options (rain or non-rain), because we know the mechanism that produces rain and know that it requires a special combination of events. In the absence of any knowledge of the mechanism whereby we are to be given one of these two double-sided coins, it is hard to see why we should assign a probability of 1/2, or any probability at all.
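The dependence on the selection mechanism can be made explicit in a last sketch (my own illustration): if the mechanism hands us the double-headed coin with some probability q, then the probability of heads just is q, and the answer 1/2 is warranted only by the further factual assumption that q = 1/2.

```python
# Sketch (my own illustration): the 1/2 answer presupposes that the
# two double-sided coins are equally likely to have been selected.
# If the selection mechanism yields the double-headed coin with
# probability q, then P(heads) = q * 1 + (1 - q) * 0 = q.

def p_heads_given_selection(q):
    """Probability of heads when the double-headed coin is selected
    with probability q."""
    return q * 1.0 + (1 - q) * 0.0

for q in (0.5, 0.9, 0.1):
    print(q, "->", p_heads_given_selection(q))
```

Without some knowledge constraining q, nothing singles out 1/2 over any other value.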

In short, I see no reason to draw a line between “inductive” probabilities and physical probabilities on the basis of this kind of example. The example does point to a real difference in probability assignments, but it can be accounted for by the normal Bayesian method of understanding probabilities as conditioned on the available evidence. I therefore see every reason to agree here with Norton that even probabilistic induction depends on knowledge of local facts of the matter.

It is important that John Norton (2003) treats priors as in need of justification of their own, because on any other view it is not entirely clear what relevance Bayesianism would have for solving the classical problem of induction, the problem we have now returned to examine in order to find the justification for our scientific knowledge. Of course it is always open to Bayesians to suggest that the “justification” of prior probabilities is assigned on an a priori basis, but then the force of one of Norton’s points is not to be ignored: that it is curious that such an a priori prior, “formed in advance of the incorporation of any evidence, decides how any evidence we may subsequently encounter will alter our beliefs.” If we are concerned with examining probability in order to deal with the classical problem of induction, the question of what justifies our beliefs epistemically, what makes them likely to be true, then unless we have a special theory about how a priori priors relate to the a posteriori claims whose probable truth is in question, it is unclear how we can get by without finding some source of justification for our priors. This, along with other considerations, is what leads Lange (2004) to say that a solution to the problem of induction requires a solution to the problem of the priors (pp. 199-203).

MAHER, P. (2006), “Confirmation Theory”, in Donald M. Borchert (ed.), The Encyclopedia of Philosophy, 2nd ed., New York: Macmillan.

NORTON, J. (2003), “A Material Theory of Induction”, Philosophy of Science, 70: 647-670.

LANGE, M. (2004), “Would ‘Direct Realism’ Resolve the Classical Problem of Induction?”, Noûs, 38: 197-232.

Posted by Ben at March 26, 2007 03:23 AM
