We can now use what we have seen in Hume’s discussion to understand the basic idea of Bayesianism. In all the cases Hume discusses, he asks us to use our previous experience of people’s behavior and the workings of the world when we try to decide whether to believe a report. The Indian prince bases his decision on his own previous experience, the eight days of darkness case asks us to consider just how hard it would be to hoax the whole world, the Queen Elizabeth case asks us how hard it would be to hoax a smaller public, and the Bible case asks us how strange it would be for an ancient text to report things that did not happen. All of these cases require us first to consider what we take to be generally true from our experience and then to assess the new information in relation to that general experience. Paraphrasing Hume, we ask which would be more extraordinary, given our experience: that the new information is false, or that what it reports actually happened.
This is the core of the Bayesian method. The method is named for Thomas Bayes (1701-1761), an English mathematician, philosopher, and minister. Instead of viewing events as probable or improbable on the basis of how frequently they happen, Bayes asked how confident we should be about some new bit of information given our other beliefs, or what degree of probability we should place on the new information. It is therefore a subjective view of probability since it focuses not on the “real chance” of an event happening but on how likely we should think such an event is.
Bayes expressed his method through a precise mathematical theorem. We will not need to delve into that theorem to understand what the core idea is (in case you are interested, the theorem is given and explained briefly at the end of this chapter). But Bayes’ theorem is most often explained in terms of a situation in which someone is being tested for having a certain disease, and some math is always used in these cases. In the interest of providing a general introduction to Bayesianism, we will take the time to work through one of these typical medical cases to see how it works. Then, we will make the underlying idea more general.
1. A medical example
Suppose Patrick has been exposed to some rare but frightful disease. Only one person in a thousand has it. He has not shown any symptoms, and has no other reason to think he actually has it, but he is worried, and so he goes to a doctor to be tested. Alas, the test comes back positive. He wants to know how accurate the test is. The doctor carefully explains that the test is accurate 90 times out of 100, and only generates false positive results in 5 cases out of 100. It is the worst day of Patrick’s life, for he now believes he has a 90% chance of having this terrible disease.
But should Patrick be this concerned? This is where Bayes’ theorem does its work, and the results are surprising. Ordinarily, before any tests or anything else, how worried should Patrick be about having this disease? Only one person in a thousand has it, so Patrick should not be worried very much. But now his positive test result comes in, and the test “gets it right” 90 times out of 100. Following the core idea of Bayesianism, Patrick needs to assess the test results in the light of his previous knowledge, namely that the disease only affects one in a thousand.
So what should Patrick consider? He needs to think about how this test result changes his earlier estimate of how likely it is that he has the disease. He needs to compare how likely it is that he is in the group of people who have the disease and would test positive with how likely it is that he is in the group of people who do not have the disease but still end up testing positive. If we have a million people, there will be 900 people who have the disease and test positive (that’s 90% of the 1,000 people who will have the disease in a population of a million). On the other side, the test gives a false positive result 5% of the time, which means that out of the one million people we are considering, 49,950 people do not have the disease but would still test positive for it (that is 5% of the 999,000 people who do not have the disease). So really, Patrick should consider which is more likely: that he is among the 900 people who have the disease and test positive? Or that he is among the 49,950 people who do not have the disease but still test positive?
We are now looking at a smaller group of 50,850 people out of a million who would test positive for the disease whether they really have it or not. Patrick should consider whether he is in the diseased group of 900 out of this 50,850, or in the undiseased group of 49,950 out of 50,850. All other things being equal, the odds of being in the first group are about 1.8%, and the odds of being in the second group are about 98.2%. So, the positive test result tells him that he should update his chances of having the disease from about one in a thousand to about two in a hundred. Admittedly, this is a significant increase. Patrick should be more worried than he was before the test. But he should not be a lot more worried. He certainly should not believe there is a 90% chance that he has the disease!
Here is our reasoning, laid out in a table:
| In a population of 1,000,000 people | How many test positive? | Patrick tests positive; what are the odds he is in this group? |
|---|---|---|
| 999,000 do not have the rare disease | 49,950 (5% of 999,000) | 49,950 out of 50,850, or ~98.2% |
| 1,000 do have the rare disease | 900 (90% of 1,000) | 900 out of 50,850, or ~1.8% |
| Total testing positive | 50,850 | |
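The arithmetic behind the table can be checked in a few lines of code. This is just a sketch of the calculation above; the variable names are ours, not the chapter’s.

```python
# A sketch of the table's arithmetic (variable names are ours).
population = 1_000_000
diseased = 1_000                  # one person in a thousand has the disease
healthy = population - diseased   # 999,000 people

sensitivity = 0.90                # the test catches the disease 90 times in 100
false_positive_rate = 0.05        # and wrongly flags 5 in 100 healthy people

true_positives = diseased * sensitivity           # 900 people
false_positives = healthy * false_positive_rate   # 49,950 people
all_positives = true_positives + false_positives  # 50,850 people

chance_diseased = true_positives / all_positives
print(f"{chance_diseased:.1%}")   # prints 1.8%
```

Dividing the 900 true positives by all 50,850 positives reproduces the ~1.8% figure in the table.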
This is a surprising result (and no small relief to Patrick!). It comes about because Patrick is reminded that the disease itself is very rare, and when the rarity of the disease combines with the imperfection of the test, the numbers end up being not as threatening as one might otherwise think. When we jump from the claim that a test is accurate 90% of the time to the conclusion that Patrick, testing positive, has a 90% chance of having the disease, we are forgetting about just how rare it is for anyone to have the disease in the first place. Once we remind ourselves of that background knowledge, and we do our math, we have a much more accurate sense of how worried Patrick should be. There are over fifty times as many people who do not have the disease and test positive as there are people who do have the disease and test positive—just because the disease is so rare.
Suppose Patrick figures all this out, and he is somewhat relieved, but he still knows that he has a greater chance of having the disease than before. He would like to know with greater certainty whether he actually does have the disease. What should he do? He should be tested again. Remember, the number of people (out of a million) who had the disease and tested positive was 900, and the number who did not have the disease and tested positive was 49,950. Let’s imagine this whole group taking the test again. Again, we are assuming no one is showing any symptoms, and we have only the test results to worry about. Out of the 900 people with the disease, 810 will test positive again (for the test correctly catches the disease 90% of the time). Out of the 49,950 who tested positive without the disease, only about 2,500 will test positive again (for the test gives false positives only 5% of the time). So, now we will have a smaller group of 3,310 who have tested positive a second time. If Patrick gets a second positive result, then he is either among the diseased 810 out of 3,310, or he is among the undiseased 2,500 out of 3,310. There is a 24% chance of being in the first group (diseased), and a 76% chance of being in the second (undiseased). This is more worrisome, as he now has a 1 in 4 chance of having the disease. If he tests positive a third time—and you can do the calculation on your own!—his chance of having the disease rises to about 85%.
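The repeated-testing arithmetic can be sketched as a short loop: after each positive result, we keep only the members of each group who would test positive again. Again, the names are our own shorthand, not the chapter’s.

```python
# Repeated positive tests: shrink each group of positive-testers (a sketch).
sensitivity = 0.90
false_positive_rate = 0.05

diseased = 1_000.0    # out of a million people, as in the text
healthy = 999_000.0

for test in range(1, 4):
    diseased *= sensitivity          # with the disease, testing positive again
    healthy *= false_positive_rate   # without the disease, testing positive again
    chance = diseased / (diseased + healthy)
    print(f"After test {test}: about {chance:.0%}")
```

Each pass through the loop multiplies the diseased group by 90% and the healthy group by 5%, so each additional positive result shifts the balance sharply toward having the disease.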
By the way, if the disease were far less rare—affecting 1 in 100 people, for example—and all of our other numbers stayed the same, getting a positive result on the first test should tell Patrick that he has a 15% chance of actually having the disease; a second positive result should tell him that his chance of having the disease is about 77%.
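The same reasoning works for any base rate. Here is a small helper function of our own devising (not from the chapter) that gives the chance of having the disease after any number of positive results:

```python
# Our own generic helper: chance of disease after n positive test results.
def chance_after_positives(prevalence, n, sensitivity=0.90, false_positive_rate=0.05):
    diseased = prevalence * sensitivity ** n
    healthy = (1 - prevalence) * false_positive_rate ** n
    return diseased / (diseased + healthy)

print(f"{chance_after_positives(1/100, 1):.0%}")   # about 15%
print(f"{chance_after_positives(1/100, 2):.0%}")   # about 77%
```

Setting `prevalence` to 1/1000 reproduces the ~1.8% and ~24% figures from Patrick’s case; setting it to 1/100 shows how much a less rare disease changes the picture.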
The moral of this story is that you should always get a second opinion. It also shows that the new information we receive has to be positioned correctly relative to our broader knowledge of the world. When a new piece of information comes your way, you need to remember what your previous experience of the world is and let that previous experience guide how much importance you attach to the new piece of information.
2. Another example
Let’s turn to another sort of case to see Bayesianism in action. Suppose you read that there has been a UFO sighting. Someone driving on a deserted highway at night reports they saw a glowing disk descend from the sky, hover over the ground for a minute, and then shoot back up into the sky. What should you believe? In particular, should you believe that this sighting is compelling evidence for the claim that Earth has been visited by intelligent extraterrestrials?
We should begin by considering what our previous experience says about the likelihood of alien visitors. From what we know, the universe is a very big place, and spaceships can only travel so fast (less than the speed of light), which means that it takes a very, very, very long time to travel from planet to planet or from solar system to solar system. There probably are intelligent beings elsewhere in the universe just given how big it is, but the likelihood that they are anywhere near us is very, very low. So, our previous experience suggests that the likelihood of alien visitors is extremely low, perhaps on the order of one in a billion or one in a trillion or even less.
Now, let us consider the new information, the UFO report. We need to weigh the extremely low probability of alien visitors against the likelihood that this report is true. It is an extraordinary report; people do not often report similar experiences. What sort of probability should we attach to it? Let’s consider it. On the one hand, people do very often offer true reports of what they experience. That’s normal. But on the other hand, sometimes people lie about extraordinary experiences. Sometimes they seem to have extraordinary experiences, but the experience is due to psychological stress or mishap. Sometimes such experiences might be caused by strange weather phenomena, rare distortions of light, or anything other than alien visitors. There are many alternative explanations.
We have to combine all of these possibilities into an overall estimate of how likely we think it is that the person actually saw an alien spaceship. If we guess that all of these possible explanations for their experience—really seeing a ship, lying, psychological stress, weather—are all equally likely (which is a generous guess!), we might think the chance of the experience really being a sighting of an alien spaceship is about 1 in 4. Put another way, by our estimate, it is three times more likely that someone wrongly believes they have seen an alien spaceship than that they have truly seen one.
Now we put that 1 in 4 chance in the broader context of the overall extremely low probability of alien visitors. (Note that this case is similar to the case of Patrick and the very rare disease.) The chances that aliens have visited Earth, and that this person actually saw them, are extremely low (one in a billion, say), mainly because the likelihood of alien visitors, given what we know about the vast distances of space, is extremely low, and the likelihood that the person falsely believes themselves to have seen an alien spaceship is so much greater. So, in all, the report of a UFO should not cause you to significantly change your belief about alien visitors. It is far more likely that the report comes from some other cause.
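This weighing can be sketched numerically. The framing below—prior odds multiplied by a likelihood ratio—is our own gloss on the chapter’s reasoning, and both numbers are only the rough guesses given above (one in a billion for visitors; three wrong-belief causes for every true sighting).

```python
# A rough sketch of the weighing described above; all numbers are guesses.
prior_odds = 1e-9         # alien visitors: roughly one in a billion
likelihood_ratio = 1 / 3  # one "true sighting" cause vs. three other causes

posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)  # still vanishingly small: one report barely moves the needle
```

Because the prior is so tiny, even a report that is "only" three times more likely to be mistaken than true leaves the probability of alien visitors almost where it started.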
But what if lots of people report such experiences? This might change our belief, depending on the details. If by “lots” we mean thousands of reports coming from individuals driving at night on lonely highways, our beliefs should not change all that much. It is still far more likely that their experiences are coming from something other than actual alien visitors. But suppose by “lots” we mean millions of reports from people driving on highways and in crowded sporting events and public assemblies, plus observations of alien spaceships on radar and from orbiting satellites and from professional astronomers and even a video recording from the International Space Station of aliens cruising by and waving from their spaceship window. (It is going to take a lot if we are to overcome the initial extremely low probability of alien visitors.) If these are the reports we are considering, we are in a case like Hume’s eight days of darkness. The likelihood of so many reports coming from some cause other than alien spaceships becomes extremely low, and we should regard the possibility of alien visitors as far more likely.
Again, the general rule David Hume offered can provide a quick assessment. Which would be the greater “miracle”? Is it more of a miracle for the reports to be explained by some other cause or for the reported events to have actually happened as described?