If you listen to the pundits, the future of medicine is big: big medicine, big data. And indeed, the healthcare policy of our nation is couched in the promise of what is to come. Many dictates of the Accountable Care Act focus on the ability to aggregate and consume a variety of inputs. ICD-10, EMRs, and meaningful use all tie nicely into a beautiful computational orgy.
Big data, however, has its drawbacks. One wonders if, in usual fashion, politicians and pundits will do more harm than good.
Correlation and causation
There is a hierarchy in medical data. Every clinician knows that prospective, randomized, double-blind studies are the gold standard. The reason is that lesser models (retrospective studies and case studies) are often only able to show correlation. Time and time again, we find that clinical decisions based on correlation are faulty. High homocysteine levels are associated with coronary artery disease, but bringing them down with folic acid can be harmful. Poor dental health may be related to cardiac disease, but good hygiene has little effect on the risk of heart attack. In a world where the LDL and HDL hypotheses are quickly being disproven, one loses a taste for relying on such logic.
Yet big data is clearly a correlational model. It can only be compared to the weakest forms of evidence (case control, open label). There is no way to use it in a prospective, randomized manner.
Poor studies lead to poor medicine.
Garbage in, garbage out
I am not a big fan of meta-analysis. The reason is that the bias of the investigator often clouds the results. If you want certain answers, you ask certain questions. Inclusion criteria can be tricky and bend to the will of those crunching the numbers.
Big data suffers from the same fundamental issues. Who knows what political pressures will be placed on scientists? If you don't get the answer you want, maybe you have to ask the question differently, query the database more delicately.
Anyone can produce results, but will they be meaningful?
For years scientists have relied on death certificates to understand causes of death in America. But as almost any signer of such documents knows, they are often completed in a hurried, haphazard way. As a physician, I have no reason to care if the cause of death is correct. Often, in fact, I don't even know the answer. It's just another paper to fill out: cardiovascular collapse (whatever that means). The vast majority of the time, when I review these documents as a medical expert, the cause of death on the certificate is inaccurate.
Big data relies heavily on ICD-9 and CPT codes. Providers often manipulate these codes, however, for a variety of reasons. Want the venous Doppler to be covered? Say the patient has a DVT (of course you don't know yet, because you haven't done the test). Want the blood tests to be paid for by insurance? Say the patient has fatigue. The EMR doesn't have a code that suitably fits the situation? Just use another; who cares if it's not accurate?
Most of the time, these data inputs have no real meaning to the clinician and thus receive only a passing thought. They are another hurdle to providing care, to be dispensed with as quickly as possible.
Keeping our eye on the ball
The great task of big data is falling squarely on the shoulders of overburdened clinicians.
ICD-10, CPT, EMR, Meaningful Use, PQRI
Inputting all this data takes huge amounts of time, time that is being taken away from patient care. Years of practice and training have formed clinicians who strive toward perfection. These distractions destroy our attempts at mastery.
No one would think of asking the conductor of a symphony to also collect tickets at the front door in the middle of a performance.
What is gained in knowledge with big data is lost many times over in faulty, distracted, and poor face-to-face care.
We are left with one basic question.
Do we want big medicine or good medicine?
I’m not sure we can have both.
Jordan Grumet is an internal medicine physician and founder, CrisisMD. He blogs at In My Humble Opinion.