by Sam Wainwright
There’s a fascinating blog post over at the New York Times math blog that has us in the health policy program here at New America scratching our heads, sharpening our No. 2 pencils, and dusting off our calculators.
Turns out, German researchers have been studying how and why the human brain struggles to comprehend risk. We find it difficult to translate the mathematical fact of probability into an accurate assessment of danger. This can be especially true in medicine, where emotion frequently, if not always, clouds purely rational thinking.
In one study, the German psychologist Gerd Gigerenzer and his colleagues asked doctors in Germany and the United States to estimate the probability that a woman with a positive mammogram actually has breast cancer, even though she’s in a low-risk group: 40 to 50 years old, with no symptoms or family history of breast cancer. To make the question specific, the doctors were told to assume the following statistics — couched in terms of percentages and probabilities — about the prevalence of breast cancer among women in this cohort, and about the mammogram’s sensitivity and rate of false positives:
The probability that one of these women has breast cancer is 0.8 percent. If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?
Can you solve this word problem? Get it right and there’s a good chance you’ll have outsmarted your doctor.
Find the (shocking!) answer over at the NYT, and then start to think about the challenge of incorporating understandable explanations of risk into shared medical decision making. Making sure patients are fully informed means conveying a procedure’s risks and benefits in a way they can understand, often when there is neither the time nor the presence of mind for SAT-caliber mathematical agility. The lack of accurate, evidence-based guidelines further complicates the situation: for many treatments, we know neither the true probability of success nor how to explain it clearly to a sick and worried patient.
The Affordable Care Act has begun a dialogue about how to generate this probability evidence and has put into motion a number of initiatives to help fill the present knowledge gap. The Patient-Centered Outcomes Research Institute will do vital work generating the comparative effectiveness studies that inform doctors of a treatment’s risks (so long as GOP efforts to defund the program are thwarted). At the same time, the law contains provisions to fund the development of shared decision making. Tools called Patient Decision Aids (PDAs) facilitate the conversations between physicians and patients that make risk-benefit analysis understandable, and can help all parties see that the answer to the NYT’s word problem is only 9%!
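For readers who want to check that 9% figure themselves, it falls straight out of Bayes’ rule applied to the three numbers given in the word problem. A minimal sketch in Python (the variable names are mine, not from the study):

```python
# The three numbers stated in the word problem:
p_cancer = 0.008            # prevalence: 0.8 percent of women in this cohort
p_pos_given_cancer = 0.90   # sensitivity: 90 percent test positive if cancer is present
p_pos_given_healthy = 0.07  # false-positive rate: 7 percent test positive without cancer

# Overall probability of a positive mammogram, summed over both groups
p_pos = (p_cancer * p_pos_given_cancer
         + (1 - p_cancer) * p_pos_given_healthy)

# Bayes' rule: P(cancer | positive mammogram)
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos

print(f"{100 * p_cancer_given_pos:.1f}%")  # prints 9.4%
```

The intuition behind the surprisingly low answer: because cancer is rare in this group, the 7% false-positive rate applied to the large healthy population produces far more positive mammograms than the 90% sensitivity applied to the small group who actually have cancer.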
It is not for us to say whether or not mammography screening is appropriate. That’s squarely in the realm of a private patient-doctor decision. However, patients MUST be fully informed. Only with an understanding of the risks of “false positive” results, and of the uncertainty and unnecessary care that follow such a finding, can a patient make the decision that is right for them.
Sam Wainwright is an analyst for New America’s Health Policy Program and blogs at The New Health Dialogue.