The health care AI space is frothy. Billions in venture capital are flowing, nearly every writer on the health care beat has at least an article or two on the topic, and there isn't a medical conference without at least a panel, if not a dedicated day, devoted to the subject. The promise and potential are very real.
And yet, we seem to be blowing it.
The latest example is an investigation in STAT News pointing out the stumbles of IBM Watson, followed inevitably by the "Is AI ready for prime time?" debate. Of course, IBM isn't the only one making things hard for itself. Its marketing budget and approach simply make it a convenient target. Many of us, from vendors to journalists to consumers, are unintentionally adding degrees to an already uphill climb.
If our mistakes led only to financial loss, that would be no big deal. But the stakes are higher. Medical error is blamed for killing between 210,000 and 400,000 people annually. These technologies are critical because they help us learn from our data, something health care is notoriously bad at. Using our data to improve is a matter of life and death.
In that spirit, here's a short but relevant list of mistakes we'd all benefit from avoiding. It's curated from a much longer list of sometimes costly, usually embarrassing mistakes I've made during my dozen years of trying to make these technologies work for health care.
1. Inconsistent references to … whatever we're calling it. I had a hard time settling on the title of this piece. I had plenty of choices to describe the topic of interest, including machine learning, big data, data mining, data science and cognitive computing, to name a few. Within certain circles, there are meaningful distinctions among these terms. But for the vast majority of the people we hope to help, using ten ways to describe the same thing is confusing at best and misleading at worst.
I’d prefer the term “machine learning,” since that’s usually what we’re talking about, but I’ll trade my vote for consensus on any name — except “artificial intelligence.” The math involved is neither artificial nor intelligent. Which brings us to mistake two.
2. Machine learning is a tool, not a sentient being. It’s a really powerful tool that can help with detection of disease, early prediction of progression and pairing individuals to interventions. The tool metaphor has real repercussions — not just for cooling off the “AI as doctor” hype but for how we put it to use.
For example, the hammer is a great tool if you know how to use it, have a plan to create some value with it, are working with wood, and if the job, ultimately, is to bang nails. If not, it's useless. The second we claim otherwise, we're setting ourselves up for disappointment.
3. Ridiculously unhelpful graphics. On a related note, the images accompanying articles on the topic aren't helping matters. I sympathize with the challenge of visually representing a somewhat intangible approach. However, robotic Terminator arms presenting magical pills (or brains) are not helpful. They're hilarious, but not helpful.
4. People don’t get excited about being replaced. Yet our references to artificial intelligence, our graphics, and our headlines keep steering their audience back toward this one inevitable conclusion. I get it. Scare sells. But it doesn’t get us to better care faster.
5. Outrageous promises (and belief) of what these tools can do. It’s not helpful to frame these approaches as magic oracles that will cure cancer by replacing human doctors. The reality is far more boring than that (at least for the foreseeable future).
Machine learning approaches offer a more efficient way to find patterns. These patterns can be used to figure out what we're doing, whether it's working, and what we should be doing: pretty important stuff. However, tools can only help us when they're applied to appropriate tasks, using the right raw materials, and as part of a well-thought-out plan. We see this in the limited examples of machine learning solving real problems in health care: identifying care gaps sooner, catching missed codes in documentation, and flagging specific risks earlier, such as spotting the people with serious mental illness most likely to have an inpatient psychiatric admission. Those are useful tools to add to our clinical and operational toolkits.
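To make the "tool" framing concrete, here is a minimal sketch of the kind of pattern-finding a risk model like the psychiatric-admission example involves. Everything in it is hypothetical: the feature names (prior_admissions, er_visits_last_year, missed_appointments) are invented, the data are synthetic, and a real project would start with careful cohort definitions, clinically validated outcome labels, and the workflow questions discussed below.

```python
# Minimal sketch of a risk model on synthetic data; feature names are
# hypothetical and the data are random, so nothing here is clinically real.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
X = pd.DataFrame({
    "prior_admissions": rng.poisson(1.0, n),
    "er_visits_last_year": rng.poisson(2.0, n),
    "missed_appointments": rng.poisson(0.5, n),
})
# Synthetic outcome loosely tied to prior utilization, for illustration only.
logit = -3 + 0.8 * X["prior_admissions"] + 0.3 * X["er_visits_last_year"]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Rank the held-out population by predicted risk. The output is a work
# list for humans to act on, not a diagnosis.
risk = model.predict_proba(X_test)[:, 1]
top = np.argsort(risk)[::-1][:10]
print(X_test.iloc[top].assign(predicted_risk=risk[top]))
```

The model's only job here is to surface patterns and rank people for follow-up; everything that actually improves care happens after the print statement.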
The problem with “AI as Savior” headlines is that they make it difficult to have an educated conversation about specific opportunities to use these tools to derive value.
The STAT News article implied disappointment that IBM Watson hasn't yet revolutionized cancer care. If I sold you a hammer based on the promise that it could build a house on its own, would you be disappointed when it didn't?
For that matter, who deserves the blame? Me for selling you the hammer, or you for believing the pitch? No one in their right mind would blame the hammer. And yet, inappropriate use, over-promising, and poor project management are causing many to question whether machine learning is ready for prime time.
Why is it so easy to blame the tool? See above.
6. Measure (and talk about) what matters. Hint: it's not the predictive performance of an algorithm, the terabytes of data amassed, or grandiose introductions of your data scientists' degrees. It's dollars saved or earned, lives improved, time saved, and so on.
If you must describe value in terms of accuracy/statistical performance, it’s best to do so responsibly. Claiming “90 percent accurate!” doesn’t mean anything without additional context. Accurate at what? Measured how? With what data? Details matter in health care.
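To see why that context matters, here is a minimal sketch using synthetic data and hypothetical numbers. It shows that when an outcome occurs in only 10 percent of cases, a model that never flags anyone can still claim "90 percent accurate" while catching no one at all:

```python
# A minimal sketch (synthetic data, hypothetical numbers) showing why a
# headline accuracy figure says little on its own: with a rare outcome,
# a model that never flags anyone can still match the headline number.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
y = (rng.random(10_000) < 0.10).astype(int)  # outcome present in ~10% of cases
X = rng.random((10_000, 5))                  # features are pure noise here

# A "model" that always predicts "no outcome," i.e., never flags anyone.
model = DummyClassifier(strategy="constant", constant=0).fit(X, y)
pred = model.predict(X)

print(f"accuracy: {accuracy_score(y, pred):.0%}")  # ~90%, sounds impressive
print(f"recall:   {recall_score(y, pred):.0%}")    # 0%, catches no one
```

The same "90 percent accurate" claim can describe a genuinely useful model or one that does nothing, which is why the questions above (accurate at what, measured how, on what data) are the ones worth asking.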
7. Technology is great. But people and process improve care. The best predictions are merely suggestions until they're put into action. In health care, that's the hard part. Success requires talking to people and spending time learning context and workflows, no matter how badly vendors or investors would like to believe otherwise. It would be fantastic if health care could be transformed simply by installing software built on assumptions about your workflows and priorities. Just ask those dealing with the aftermath of electronic medical record implementations (i.e., most practicing clinicians). Until certain fundamental realities change, invest in understanding, process, and workflow.
I share this partial list of lessons learned not out of frustration but with incredible enthusiasm for what’s to come. These technologies will become an integral part of how we identify patients in need of attention, reduce wasteful administrative overhead and recommend more appropriate pathways of care. I see it happening in small steps in real health care organizations every day. The sooner we reframe the way we speak about and apply these tools, the sooner we can begin using our data to get better.
Leonard D’Avolio is co-founder and CEO, Cyft.