A guest column by the American College of Physicians, exclusive to KevinMD.com.
Last month, I shared my take on the mess that we find ourselves in with our electronic health records (EHRs) and referred to the recently published ACP position paper on clinical documentation. The paper includes a list of recommendations for EHR system design. These are high-level recommendations to which I would like to add a few from the perspective of an end user.
Every day, I wish my EHR could do something that seems doable, given what I see in non-medical computer applications. Here are a few items on my “wish list”:
The ability to take a dictated note and place all the necessary items in structured data fields and automatically click the required boxes. Position 4 in the ACP paper states, “Wherever possible, EHR systems should not require users to check a box or otherwise indicate that an observation has been made or an action has been taken if the data documented in the patient record already substantiate the action(s).”
Much of the software that we use daily utilizes natural language processing. A couple of months ago, I commented on a JAMA article on scribes on my Facebook page, and for weeks afterward I had ads for voice recognition software and EHRs on my web pages. If you browse for an item on Amazon, the next time you return, the page will display items related to the ones you previously viewed.
So why can’t the EHR take a paragraph in which I write or say, “The patient is smoking one pack per day and was counseled to quit” and convert that into structured data that will satisfy reporting requirements? Instead, I (or my medical assistant) have to go to the correct section of the EHR and click at least three items so that I get credit for what I did. If I dictate that I told my patient that he should lose weight, why can’t the EHR recognize and count it without my having to click a box that says I did it?
While we’re at it, how about having the EHR give me an ICD-10 code based on what I dictate?
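To make this concrete, here is a minimal sketch of the kind of rule-based extraction being asked for. Everything here is hypothetical (the function name, the field names, the phrasing patterns); a production EHR would need far more robust language processing, but even simple pattern matching can turn a dictated sentence into structured fields:

```python
import re

def extract_smoking_data(note_text):
    """Hypothetical extractor: maps a dictated sentence to the kind of
    structured fields a quality-reporting measure might require."""
    fields = {}
    text = note_text.lower()
    # Detect pack-per-day smoking status from the dictated narrative.
    m = re.search(r"smok\w*\s+(one|two|[\d.]+)\s+packs?\s+per\s+day", text)
    if m:
        fields["tobacco_use"] = "current smoker"
        fields["packs_per_day"] = {"one": 1, "two": 2}.get(m.group(1)) or float(m.group(1))
    # Detect that cessation counseling was documented.
    if re.search(r"counsel\w*\s+to\s+quit", text):
        fields["cessation_counseling_done"] = True
    return fields

print(extract_smoking_data(
    "The patient is smoking one pack per day and was counseled to quit."))
```

Run on the example sentence from the paragraph above, this returns the smoking status, pack count, and counseling flag as discrete data, with no boxes clicked.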
Built-in calculators that automatically calculate risk scores using information in the EHR. It would be nice if my EHR had the ACC/AHA cardiac risk calculator or the FRAX score built in. It would be great if, when my patient had a lipid profile or a bone density test, the EHR provided the appropriate risk scores alongside the results without my having to enter the numbers manually. As “easy” as it seems to enter the numbers, it’s one more step on top of many other steps, and, as a result, it probably isn’t done as often as it should be.
If a patient has a diagnosis of atrial fibrillation and the EHR contains date of birth, gender, and diagnostic information on comorbidities, why do I still have to calculate the CHA2DS2-VASc score?
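The CHA2DS2-VASc score is a good illustration of how little computation is actually being asked for: it is a simple sum over items the chart already contains. A minimal sketch (the function signature and parameter names are my own invention):

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """Sum the standard CHA2DS2-VASc point values from data
    already present in the chart."""
    score = 0
    score += 1 if chf else 0               # Congestive heart failure
    score += 1 if hypertension else 0      # Hypertension
    if age >= 75:                          # Age >= 75 scores 2 points,
        score += 2
    elif age >= 65:                        # age 65-74 scores 1
        score += 1
    score += 1 if diabetes else 0          # Diabetes mellitus
    score += 2 if stroke_or_tia else 0     # Prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0  # Vascular disease
    score += 1 if female else 0            # Sex category (female)
    return score

# A 76-year-old woman with hypertension: 2 (age) + 1 (sex) + 1 (HTN) = 4
print(cha2ds2_vasc(age=76, female=True, chf=False, hypertension=True,
                   diabetes=False, stroke_or_tia=False,
                   vascular_disease=False))  # → 4
```

Every input to that function already exists as structured data in the record; the missing piece is only the wiring, not the math.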
Personalized screening recommendations using data in the EHR. Reminders that patients 50 and older should have a colonoscopy every ten years are standard features of EHRs (I hope). However, in order to change that to a shorter interval for a patient at increased risk, the user must override the default frequency (for each patient). Wouldn’t it be nice if the EHR took a family history of colon cancer or a personal history of adenoma and automatically changed the follow-up frequency? The same goes for bone densities and lipid screening.
A Quicken-like interface for outside labs. I love Quicken. Somehow, it is able to utilize information from a variety of financial institutions and perform bidirectional exchanges of data without my having to hire a programmer and spend thousands of dollars on an interface for each new account. The banks, credit card companies, and brokerages use standards that make this possible. Whatever happened to the standards adopted over the years by EHR vendors, and why can’t something similar be done with labs? I realize that lab data is more complex than credit card charges, but still …
“Intelligent” interaction alerts. One of the values of EHRs is the ability to identify drug-drug and drug-disease interactions. Unfortunately, that usually means incessant pop-up windows for the most minor of potential interactions. There are filters that you can set (for example, just show “major” interactions), but these leave something to be desired since what I consider to be major, moderate, or minor risk may not match how the EHR classifies them. Why can’t the EHR ask the user if a pop-up alert was useful and learn from the responses? The model that comes to mind is the way that voice recognition software learns based on corrections made by the user.
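A feedback loop like the one described above does not require anything exotic. Here is a minimal sketch, entirely hypothetical in its class and method names, of an alert filter that tallies “not useful” responses and suppresses a non-major alert once enough clinicians have dismissed it:

```python
from collections import defaultdict

class LearningAlertFilter:
    """Hypothetical sketch: suppress a non-major alert type after
    repeated 'not useful' feedback. A real system would need per-user
    models, audit trails, and safety floors."""

    def __init__(self, suppress_after=3):
        self.not_useful_votes = defaultdict(int)
        self.suppress_after = suppress_after

    def record_feedback(self, alert_type, useful):
        # Each dismissal of the pop-up counts as a vote against it.
        if not useful:
            self.not_useful_votes[alert_type] += 1

    def should_show(self, alert_type, severity):
        # Major interactions are never suppressed, no matter the votes.
        if severity == "major":
            return True
        return self.not_useful_votes[alert_type] < self.suppress_after

alerts = LearningAlertFilter()
for _ in range(3):
    alerts.record_feedback("warfarin-acetaminophen", useful=False)
print(alerts.should_show("warfarin-acetaminophen", "minor"))  # → False
print(alerts.should_show("warfarin-amiodarone", "major"))     # → True
```

The point is not this particular rule but the principle: the system asks whether the alert helped and adjusts, the same way dictation software learns from corrections.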
I’m sure someone will accuse me of not understanding how EHRs are programmed or of asking for features that are impossible to implement with current technology. If so, I invite those who know better to explain it to the rest of us. Call me naïve, but if my phone can remind me that I need to leave in five minutes to get to a meeting on time, based on my calendar, the meeting location, and my GPS, I can’t help but wonder why our EHRs aren’t as smart and user-friendly.
What items are on your EHR wish list?
Yul Ejnes is an internal medicine physician and a past chair, board of regents, American College of Physicians. His statements do not necessarily reflect official policies of ACP.
Image credit: Shutterstock.com