Diagnostic errors (missed, delayed, or incorrect diagnoses) are increasingly recognized as a prevalent cause of harm to patients. At the same time, physicians are under pressure to deliver high-quality, low-cost health care. How do physicians balance the competing demands of addressing underuse versus overuse, and consequently balance underdiagnosis from inadequate investigation against overdiagnosis and resource waste? Can physicians learn from hearing about their diagnostic errors and improve their diagnostic performance? Could this reduce error and improve safety?
In a recent JAMA paper, we suggest strategies to help answer some of these questions and move us forward. We suggest ways to better align physicians’ diagnostic accuracy with their confidence—that is, to better calibrate physicians in their diagnostic thinking. Why? Because low confidence may lead to overtesting, and high confidence to undertesting. We suggest that the best diagnostician may be the physician who makes the correct diagnosis using the fewest resources while maximizing the patient’s experience.
This is no small feat.
The psychology literature suggests a solution: improve physicians’ calibration by giving them feedback about their diagnoses. Why do we not do this already? The answer is complicated. While there are too many reasons to name here, one is that when a physician gives a patient a wrong diagnosis, the physician may never learn about the mistake. The patient might simply go elsewhere—seeking emergency care unbeknownst to the original physician, or seeking a second opinion from another physician.
Another, perhaps thornier reason physicians do not receive feedback about their diagnostic performance is that feedback conversations pose several challenges. For one, physicians may be uncomfortable receiving feedback about their thinking skills and competencies because they may find it threatening to their professional image. It has also been difficult to determine what type of feedback would be helpful. We think a combination of quantitative and qualitative feedback will enable physicians to recalibrate their thinking: the former provides a helpful overview of their diagnostic processes, while the latter offers an informative, in-depth analysis of the reasons behind their thinking and the context surrounding those decisions. We need not always focus on outcomes such as mistakes; examining which processes went right and which went wrong could be very useful.
How could this feedback occur? We suggest following these tips to deliver diagnostic performance feedback.
Diagnostic performance feedback should:
1. Occur within a receptive learning environment (tell physicians that the feedback is meant for learning and deliver all feedback in a positive, constructive, nonjudgmental, nonpunitive way)
2. Involve quantitative summaries of performance that are meaningful but revolve around processes the physician can improve (such as test results that were not acted on, or not communicated to patients, within a certain time period)
3. Involve in-depth, qualitative deep dives of specific cases to enable self-assessment and accountability (make sure to highlight the cognitive, systems, and patient factors to paint a rich context)
4. Involve teams (diagnosing is often a team sport, and team feedback can help with processes related to team dynamics, but also capture a larger picture of the diagnostic process)
Creating this type of feedback within a learning health care system can produce better-calibrated clinicians who prevent harm from missed diagnostic opportunities as well as from overdiagnosis, overtesting, and overtreatment.
Despite these tips, there are still many unknowns. What are the unintended consequences of such feedback (e.g., could it lead to hypervigilance)? What specific diagnostic processes and outcomes should be tracked and fed back to physicians in various specialties? How do we maintain clinician accountability? And how do we develop peer-to-peer collaborative learning networks for practicing physicians who have no real means of getting feedback from, say, a supervisor?
These unknowns, however, can be worked out and examined once feedback is given routinely. Who will take the first steps?
The findings and conclusions in this post are those of the authors and do not necessarily represent the official position of the Department of Veterans Affairs or the U.S. government.