On September 10, 1986, soon after I completed my residency in internal medicine, I “took the Boards” – the certifying examination administered by the American Board of Internal Medicine (ABIM). A few months later, I learned that I passed the exam, and that success, combined with an attestation by my residency program director, rendered me “board certified.” I was granted lifetime certification – my framed certificate implied that I was not only a competent internist at that time, but that I could be counted on to remain one (without any further assessment) until the day I retired. I was all of 28 years old.
As the proud owner of ABIM’s lifetime seal of approval, I assumed that my thick envelope was the last contact I would ever have with the Board. I was wrong.
Last month, I became chair of the ABIM. The organization has always been well respected in the medical community, but suffered from a reputation for being, shall we say, not particularly nimble. (The former chair of my own department, Holly Smith, once quipped that the Board “chews more than it bites off.”)
But that was your father’s ABIM. We are now paddling in a very fast current, and the actions that ABIM (and the other certifying boards, such as surgery and pediatrics) takes in the next few years will have a profound influence on how physicians are judged by the public and other key stakeholders. If you believe in professional self-regulation, you should care about what the Boards are doing, and – while nobody loves an accreditor or regulator – you should be rooting for them (er, us) to succeed.
Why, in 1986, did the Board offer me lifetime certification, when patients would undoubtedly value evidence that their doctor is keeping up in his or her field? For the same reason the Joint Commission preannounced its hospital surveys two years in advance, residency programs allowed interns to work 110 hours a week, and hospitals and doctors were paid the same whether their care was stellar or terrible: we simply were not very accountable to the public.
That was then. For the past 15 years, American healthcare has been placed under a microscope. While there are islands of striking success, even miracles, the overall picture is not pretty: there are too many mistakes, quality is often shoddy, variations are the norm, access is spotty, seamless coordination is rare, patient-centeredness is unusual, and costs are unsustainable. Against that backdrop, every regulator, accreditor, payer, and legislator is feeling pressure to do his or her part to make the system better. These pressures have fueled myriad initiatives – transparency, pay for performance, no pay for errors, more robust accreditation standards, readmission penalties, meaningful use payments – to promote value.
While the early action centered on hospitals, it’s now turning to doctors. After all, since doctors’ decisions determine most of what is done for patients, viewing quality, safety, and efficiency through a physician lens seems appropriate. Moreover, most healthcare is delivered outside hospitals.
But while measuring the quality of hospital care is hard, measuring individual physicians’ quality of care is that much harder. On top of the usual problems of case-mix adjustment (if it’s not done – or not done well – it’s easy to unfairly ding a great doctor who attracts sick patients), there are other daunting statistical and attribution issues. For example, while it’s statistically feasible to determine the better of two hospitals for heart failure if they’ve each cared for a few hundred patients, it’s next-to-impossible to differentiate between two doctors who each cared for 20 patients. Moreover, when a team of doctors manages a patient, which one should be credited, or blamed, for the outcomes? These are tough nuts to crack.
Perhaps an even larger issue is that all of today’s quality and safety measures assume that the physician has made the correct diagnosis and that the procedure was actually needed. A world of door-to-balloon times, hemoglobin A1c’s, and pneumovax rates inexorably undervalues diagnostic acumen and appropriate use of technology: the ability to take a good history, formulate the right differential diagnosis, order the correct tests and consultations, and interpret all of the data correctly. What is measured matters, and without measures of physicians’ knowledge, analytical skills, and judgment, patients won’t be able to assess these things when choosing a doctor, and training programs will gradually deemphasize these competencies in their curricula.
Enter the Boards. Over the past 25 years, all the boards have implemented “Maintenance of Certification” (MOC) programs. Under MOC, physicians – no longer deemed competent for life – are required to participate in a lifelong assessment and improvement program. (As often happens, physicians who were certified under the old rules – including me – were “grandfathered.” All ABIM board members are required to participate in MOC – to “eat at our own restaurant” – and I recertified three years ago.)
MOC is more than simply passing a test every 10 years. It now includes measuring one’s own practice patterns and submitting plans for improvement, reviewing patient and peer satisfaction surveys, and more. While the secure examination is likely to remain a once-a-decade affair, physicians will soon be required to demonstrate that they are measuring and improving some aspect of their practice every two years. If this seems like a lot, just think of commercial airline pilots, who face such requirements every 6-12 months. As a frequent flier, I’m glad about that, and I suspect patients would feel the same way about “continuous MOC.”
Tightening MOC requirements is unlikely to make doctors happy, but I believe it is needed to bolster the credibility of board certification, and thus of professional self-regulation. To doctors who say, “I’m working hard, please leave me alone,” I can guarantee that they won’t be left alone – by Medicare and other insurers (which need quality measures for their public reporting and pay for performance programs), by the Joint Commission (which requires hospitals to periodically assess the competency of medical staff members), and by state licensing boards, which are launching “Maintenance of Licensure” (MOL) programs. The Board’s goal is for our process to be sufficiently credible to the public and others that it “counts” for all of these programs.
The early returns are positive. Medicare, which has been challenged to find strong and feasible measures for its “Physician Compare” website and its P4P programs, seems attracted to the possibility of using ongoing participation in robust MOC as a quality measure. The Joint Commission is considering a similar idea. And the Federation of State Medical Boards has signaled its intent to accept MOC as meeting requirements for MOL.
We see this as a case of “if we build it (well), they will come.” If we don’t, each organization can be counted on to do its own thing, and the resulting measures are unlikely to be as robust or as relevant to physicians. On top of that, the lack of harmonization (collecting five different versions of quality reports for five different organizations) is likely to be crazy-making for doctors.
While our assessment tools must be rigorous enough to be credible, we’re highly sensitive to their impact on busy practicing physicians. For this reason, the Boards give doctors credit for participating in many quality assessment programs run by hospitals, medical societies, and health systems. We’re also striving to make our tools and website more user-friendly and to modernize our “secure exam.” For the latter, we aim to write questions that measure what physicians really have to know in practice, to choose high-quality audiovisual resources (ECGs, x-rays), and – where appropriate – to allow access to aids such as on-line calculators.
But these are incremental improvements. With medicine changing so rapidly, I suspect we may need to be more ambitious, even audacious. I have charged a new committee, “Assessment 2020” (chaired by Yale’s Harlan Krumholz), with taking the long view. What should physician assessment look like in five to seven years? Is there a role for simulation? Should we assess the ability to do a physical examination or interview a patient? Can on-line searches be allowed during the exam without doing violence to the validity of the results? The latter question is particularly important, both because this is how physicians seek information today and because searching the literature is now a core competency. All of these are hard questions, but we are committed to tackling them thoughtfully and with scientific rigor.
Transparency is also on my radar. Today, the Boards deliver a dichotomous verdict: a physician is either certified or not certified. But we know more than that about our diplomates: everything from their test scores to how they performed on practice improvement modules. Just as the Boards would be irrelevant to today’s quality dialogue if we hadn’t embraced MOC, we may be equally irrelevant in the future if we maintain our traditional “Certified Y/N” stance. When people seek out a good doctor, knowing whether the doctor is board certified is just the beginning. As the popularity of sites like HealthGrades and Angie’s List illustrates, patients want far more information. I believe that if the Boards don’t provide it, others will.
(To demonstrate this, at a 2010 Board meeting I divided our members into two groups and gave each the task of quickly finding a great cardiologist for Aunt Minnie in Denver. Each had a computer with web access. The groups quickly realized that board certification was only the starting point for their search, but they had to sift through mountains of data, much of it garbage, to try to give poor Minnie a rational – and evidence-based – referral. My favorite moment came when one Board member, a department chair, was gushing over a cardiologist’s on-line CV and publication list. Another member of the group stopped him short. “We’re looking for a doctor, not applying for an NIH grant!” he said.)
The other big issue we’re facing is the cost of care. While the Boards have historically shied away from assessing efficiency, in today’s world we must add appropriateness and resource stewardship to our assessment tools. The ABIM Foundation’s highly successful “Choosing Wisely” campaign – in which nearly every specialty society has committed to avoiding five costly, low-value practices – is a tangible manifestation of our growing commitment to this area.
The public grants to professions the privilege of self-regulation. For physicians, our ability to retain that privilege will be determined by the public’s trust that we can deliver on it. If we lose this trust, the Boards will quickly become irrelevant, and physician standards will be set by others: Congress, insurers, dot coms, malpractice attorneys, state licensing boards. I think this would be a major loss – for patients and for doctors.
There has never been a more interesting time to be at the center of efforts to measure and improve the quality of care, and thus to be leading the ABIM. Working with our superb Board and staff, I will do what I can to ensure that our work remains true to the ideals of professional self-regulation – and that board certification becomes ever more meaningful to the physicians we represent and the patients we serve.
Bob Wachter is chair, American Board of Internal Medicine and professor of medicine, University of California, San Francisco. He coined the term “hospitalist” and is one of the nation’s leading experts in health care quality and patient safety. He is author of Understanding Patient Safety, Second Edition, and blogs at Wachter’s World, where this post originally appeared.