Government involvement in usability has been the talk recently. In case you're not aware, EMR usability is such a problem that the government is exploring ways to get involved. As you might imagine, the topic is a sticky subject, especially among EMR vendors, as it conjures images of committee-centered design and EMRs becoming even worse than they already are.
However, thinking about it objectively, I believe the government can be part of the solution if it is done correctly. I want to lay out my guidelines for government involvement in Health IT usability.
Brief introduction to usability testing
Before getting into my recommendations, a brief background in usability testing is required. Usability testing can be roughly broken into three types: (1) heuristic evaluation, (2) expert user testing, and (3) novice user testing. Although not all of these are explicitly “usability” testing, for the sake of simplicity I’m going to lump them together.
Goal: General software usability
Heuristic evaluation, or discount usability testing, is a process developed by Jakob Nielsen to rapidly and cost-effectively test the usability of a user interface. Nielsen defines ten “rules of thumb” that apply to all interfaces, whether a game on an iPad or a state-of-the-art enterprise EMR system. Remarkably, these suggestions have held up through years of scrutiny. In a heuristic evaluation, a usability expert performs a detailed review of the user interface to find heuristic violations. Next, the violations are given severity ratings to prioritize the fixes for the development team. Some of the most commonly violated heuristics in HIT include:
- Recognition rather than recall. Minimize the cognitive load on the clinician by storing more information in the system, rather than in the head of the user.
- Aesthetic and minimalist design. If it is not important, don’t put it on the screen! I can’t tell you how many times I’ve seen lab results, problem lists, or medication interactions displayed in a table with poor information display. Rather than taking up screen space with this low information density data, it’s better to use techniques such as Edward Tufte’s sparklines.
- Error prevention. User interfaces should prevent errors from occurring in the first place. What happens when this heuristic is not followed? Alert fatigue.
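To make the evaluation process concrete, here is a minimal sketch in Python of how a heuristic evaluation's findings might be recorded and prioritized. It uses Nielsen's standard 0–4 severity scale; the specific violations and locations are hypothetical examples, not real audit data.

```python
from dataclasses import dataclass

# Nielsen's 0-4 severity scale for usability problems.
SEVERITY_LABELS = {
    0: "not a problem",
    1: "cosmetic",
    2: "minor",
    3: "major",
    4: "catastrophe",
}

@dataclass
class Violation:
    heuristic: str   # which rule of thumb was violated
    location: str    # screen or workflow where it was observed
    severity: int    # 0-4 on Nielsen's scale

def prioritize(violations):
    """Sort violations so the development team fixes the worst first."""
    return sorted(violations, key=lambda v: v.severity, reverse=True)

# Hypothetical findings from a walkthrough of an EMR's ordering screens.
findings = [
    Violation("Aesthetic and minimalist design", "lab results table", 2),
    Violation("Error prevention", "medication ordering alerts", 4),
    Violation("Recognition rather than recall", "order entry codes", 3),
]

for v in prioritize(findings):
    print(f"[{SEVERITY_LABELS[v.severity]}] {v.heuristic} @ {v.location}")
```

The output is simply a worst-first punch list, which is exactly what a development team needs from a heuristic evaluation.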
Goal: Quantitative prediction of task completion time
Using a technique called user performance modeling, you can predict the time it will take an expert to complete a task on a particular user interface. These models predict completion time from parameters such as button size and spacing, number of clicks, and load times. The best tool I’ve used for this modeling is CogTool from Carnegie Mellon.
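The classic version of this idea is the Keystroke-Level Model (KLM) from Card, Moran, and Newell, which is also part of the lineage behind tools like CogTool. As a rough sketch, here is a KLM-style estimate in Python using the commonly cited operator times; the task breakdown at the bottom is a hypothetical example, not a measured EMR workflow.

```python
# Commonly cited Keystroke-Level Model operator times, in seconds.
OPERATORS = {
    "K": 0.2,    # keystroke (average skilled typist)
    "P": 1.1,    # point the mouse at a target
    "B": 0.1,    # press or release a mouse button
    "H": 0.4,    # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation before an action
}

def predict_seconds(sequence):
    """Predicted expert completion time for a string of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical task: think, point at a search box, click it,
# move hands to the keyboard, and type a 7-letter drug name.
task = "MPBBH" + "K" * 7
print(f"Predicted completion time: {predict_seconds(task):.2f} s")
```

Lengthening the drug name or adding an extra click changes the estimate immediately, which is why these models are useful for comparing candidate interface designs before anything is built.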
Goal: Learnability, Find major design flaws
Novice testing is the process of asking a new user to complete a given task. It generally goes like this:
- Find a representative task for your system (e.g., fill a new order for Lipitor, refill the prescription, change the prescription)
- Ask a novice user to perform the task
- Tell the user to “think aloud” (i.e., verbalize their thought process)
- Record major breakdowns in the process (e.g., can’t find a button, unfamiliar wording, etc.)
Don’t let the relative simplicity of this process fool you; it can be very powerful for finding design flaws. In designing our newest application to automate medication refills, we went through a major redesign after each round of user testing.
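The steps above need almost no tooling, but even a simple session log helps when comparing rounds of testing. Here is a minimal sketch in Python of a think-aloud session recorder; the task and breakdowns shown are hypothetical examples.

```python
import time

class ThinkAloudSession:
    """Minimal log of breakdowns observed during a novice usability test."""

    def __init__(self, task):
        self.task = task
        self.start = time.monotonic()
        self.breakdowns = []

    def note(self, description):
        # Timestamp each breakdown relative to the start of the task.
        elapsed = time.monotonic() - self.start
        self.breakdowns.append((round(elapsed, 1), description))

    def summary(self):
        lines = [f"Task: {self.task}"]
        for elapsed, description in self.breakdowns:
            lines.append(f"  {elapsed:>6.1f}s  {description}")
        lines.append(f"Total breakdowns: {len(self.breakdowns)}")
        return "\n".join(lines)

# Hypothetical session with one novice user.
session = ThinkAloudSession("Fill a new order for Lipitor")
session.note("Could not find the 'new order' button")
session.note("Unfamiliar wording: 'formulary override'")
print(session.summary())
```

A falling breakdown count across redesign rounds is a crude but honest signal that the interface is actually improving.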
Usability heuristics for government intervention
Do not design EMR software
Let’s take that off the table immediately. Although platform oversight of software design can work (see Apple’s Human Interface Guidelines), design is not a core competency of the government. Putting government regulation on the design process would inhibit rather than promote usability. As such, any government involvement in usability cannot present an “ideal EMR” and require evaluation against it.
Evaluate objectively, not subjectively
This goes hand-in-hand with the previous recommendation: if the government is going to evaluate usability (à la KLAS), then all metrics need to be objective rather than subjective. A common misconception is that usability is subjective. In fact, it’s not; usability is a fairly exact science. User experience, which is often confused with usability, deals with the more subjective measurements, such as “look and feel,” rather than objective ones. Good evaluation criteria would be metrics such as heuristic violations or task completion time.
Don’t allow for over-optimization
Give an engineer a metric to optimize, and they will optimize it, potentially to the detriment of the overall system. For usability testing, this means you cannot test on a single metric such as “time to complete a new order.” If you’re going to test on completion time, vary the tasks to make sure over-optimization does not occur.
Open the eyes of consumers
The most important outcome of a government intervention in usability is to promote consumer-level awareness. If clinicians are aware of usability problems, vendors are going to compete on usability. It might be wise to piggyback on existing consumer satisfaction surveys, such as the reports from KLAS, to increase consumer awareness of usability.
Let the market flourish
Let’s face it: in a properly functioning market, customers pick the best products, and the winners rise to the top. Do you ever hear Steve Jobs complaining about the usability of iOS apps? Given the lack of usable HIT products, you can assume potential innovators are being left on the outside looking in. If you want usability (and innovation) to flourish, open healthcare environments such as SMART Platforms need to be continually pushed.
Focus on the “big five tasks”
Clinicians spend the majority of their day completing five tasks: orders, results, messages, prescribing, and notes. If you make these processes more usable, clinicians will be happier and more productive. Whatever intervention is selected, it would be wise to focus on these areas of need.
The EHR Usability Report Card
I’ve taken the liberty of compiling all of these suggestions into a potential usability report card for EMRs. The report card highlights the heuristic violations of three fictitious vendors in the “big five tasks.” The more violations, the poorer the overall usability of the system. While this report is not perfect, I believe it exemplifies the approach the government needs to take in its “War on EMR Usability.”
Jonathan Baran is co-founder of HealthFinch.
Submit a guest post and be heard.