Probability, Mammograms, and Bayes Law

The New York Times Magazine Ideas Issue is a gold mine for a blogger in operations research. Either OR principles are a key part of the idea, or OR principles show why the “idea” is not such a great idea after all.

One nice article this week is not part of the “ideas” article per se, but it illustrates one key concept that I would hope every educated person would understand about probability. The article is by John Allen Paulos and is entitled “Mammogram Math”. The article was inspired by the recent controversy over recommended breast cancer screening. Is it worth screening women for breast cancer at 40, or should it be delayed to 50 (or some other age)? It might appear that this is a financial question: is it worth spending the money at 40, or should we save money by delaying to 50? That is not the question! Even if money is taken out of the equation, it may not be a good idea to do additional testing. From the article:

Alas, it’s not easy to weigh the dangers of breast cancer against the cumulative effects of radiation from dozens of mammograms, the invasiveness of biopsies (some of them minor operations) and the aggressive and debilitating treatment of slow-growing tumors that would never prove fatal.

It would seem that, to have an intelligent discussion on this, a few key facts are critical. For instance: “Given that a 40-year-old woman has a positive reading on her mammogram, what is the probability she has treatable breast cancer?” Knowing a woman of roughly that age, I (and she) would love to know that value. But it seems impossible to get that value. Instead, what is offered are statistics on “false positives”: this test has a false positive rate of 1%. Therefore (even doctors will sometimes say), a woman with a positive reading is 99% likely to have breast cancer (leaving the treatable issue aside, though it too is important). This is absolutely wrong! The article gives a fine example (I saw calculations like this 20 years ago in Interfaces with regards to interpreting positive drug test results):

Assume there is a screening test for a certain cancer that is 95 percent accurate; that is, if someone has the cancer, the test will be positive 95 percent of the time. Let’s also assume that if someone doesn’t have the cancer, the test will be positive just 1 percent of the time. Assume further that 0.5 percent — one out of 200 people — actually have this type of cancer. Now imagine that you’ve taken the test and that your doctor somberly intones that you’ve tested positive. Does this mean you’re likely to have the cancer? Surprisingly, the answer is no.

To see why, let’s suppose 100,000 screenings for this cancer are conducted. Of these, how many are positive? On average, 500 of these 100,000 people (0.5 percent of 100,000) will have cancer, and so, since 95 percent of these 500 people will test positive, we will have, on average, 475 positive tests (.95 x 500). Of the 99,500 people without cancer, 1 percent will test positive for a total of 995 false-positive tests (.01 x 99,500 = 995). Thus of the total of 1,470 positive tests (995 + 475 = 1,470), most of them (995) will be false positives, and so the probability of having this cancer given that you tested positive for it is only 475/1,470, or about 32 percent! This is to be contrasted with the probability that you will test positive given that you have the cancer, which by assumption is 95 percent.
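The arithmetic in the quoted passage is just Bayes’ rule applied to the stated numbers (95% sensitivity, 1% false-positive rate, 0.5% prevalence). A minimal sketch, with the function name my own:

```python
def posterior_given_positive(sensitivity, false_positive_rate, prevalence):
    """P(cancer | positive test) via Bayes' rule.

    sensitivity: P(positive | cancer)
    false_positive_rate: P(positive | no cancer)
    prevalence: P(cancer) in the screened population
    """
    true_positives = sensitivity * prevalence                 # 0.95 * 0.005
    false_positives = false_positive_rate * (1 - prevalence)  # 0.01 * 0.995
    return true_positives / (true_positives + false_positives)

# The article's numbers: about 0.32, i.e. 475 / 1470
p = posterior_given_positive(0.95, 0.01, 0.005)
print(round(p, 3))  # 0.323
```

Note how strongly the answer depends on prevalence: with the same test, screening a higher-risk group (say, 5% prevalence) would push the posterior well above 80%, which is part of why recommendations differ by risk profile.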

This is incredibly important as people try to speak intelligently on issues with statistical and probability aspects. People who don’t understand this really have no business having an opinion on this issue, let alone being in a position to make medical policy decisions (hear me politicians?).

Now, I have not reviewed the panel’s calculations on mammogram testing, but I am pretty certain they understand Bayes Law. It makes sense to me that cutting down tests can make good medical sense.

9 thoughts on “Probability, Mammograms, and Bayes Law”

  1. The vote seems to be in favor of breast cancer screening, if the probability of finding cancer in those with the disease is as high as 95%. From a patient’s point of view, I think they would always rule in favor of ruling out all possibilities—and if mammography is the solution, so be it!

  2. My favorite sentence from the entire article was “For many, the only probability values they know are ‘50-50’ and ‘one in a million.’” It rings very true.

  3. The calculations that you quote are, in fact, in the book by Sheldon Ross (Intro to Probability Models, Academic Press; see page 13, Example 1.14 in Chapter 1). We cover the material in the first week of classes in probability.

  4. The manufacturer or seller of a machine should clearly state the probability that a person has cancer given the test is positive. Any other statistic or probability is sure to create confusion. Expecting patients or doctors to use Bayes theorem for a test seems unreasonable to me.

  5. The problem is that eventually insurance companies (or the universal insurance plan) will only reimburse according to these new recommendations, which may or may not consider family history and the like. Consider my sister, in her 40s. Since our mother died from breast cancer that wasn’t detected until she was 50 (because she didn’t get screened until there was already a problem), it makes sense for my sister to get annual screenings starting earlier than my mother did.

  6. This is indicative of why the entire Obama care plan is setting up a grand failure. When feelings are the basis of law we are doomed.

  7. Many of the stories in the media failed to mention that the USPSTF mammogram recommendations do take account of family history and other risk factors. The delayed start and reduced frequency recommendations are for women at average risk for breast cancer. Individual patients are encouraged to consult with their physicians to determine the right course for them. It’s important to understand that screening is not risk free; the recommendations balance the risks of screening against the risks of not screening.

    simone doesn’t make clear what “this” is, but part of the point of reform is to promote scientific comparative effectiveness research and get away from “feelings as the basis of law” (at least in relation to this issue). The final version of the Senate bill strengthens the role of the Medicare advisory panel that makes those determinations and reduces the role of politics. It’s important that knowledgeable professional scientists are the ones assessing the data and making recommendations, precisely because they understand these issues and politicians don’t understand them or subordinate them to their political agendas.
