Yesterday, I got just about the strangest version of “MD aware” I have ever had.
We all know what this is: the person relaying some critical clinical information (positive blood cultures, Mr. Smith in Room 402 is having chest pain, etc.) asks for your name so they can write it down somewhere in the chart.
In this case, I was at a meeting in a conference room across the street from my office, after a long morning session seeing patients, and we were discussing administrative issues unrelated to patient care.
Suddenly, my Apple watch made a bleeping and buzzing sound I’d never heard before.
We’re all used to our phones erupting and ringing and going off at inopportune times (such as when the incredibly inappropriate ringtone my daughter has set for herself goes off in the middle of a meeting with my Chairman of Medicine or the Dean of our medical center). Once, in the middle of examining a patient, my phone began chirping with a sound it had never made before – alerting me to a video that my niece had sent me of her goat that had just won the blue ribbon at her 4-H club fair in Missoula, Montana.
A new kind of signal
This new signal on my watch was different. It was, in fact, an alert from my electronic health record (EHR) that had somehow, unbeknownst to me, pulled information from a patient's chart, tracked down my phone, and beamed to my watch a notification of the critical result on a lab test I'd sent off just a few hours earlier.
“Wow,” I thought, “this is very cool — kind of ‘Star Wars,’ sort of ‘Star Trek,’ a little ‘Dick Tracy’” — but it also freaked me out a little. It made me realize how reachable we all are, and how inescapable these alerts have become, especially when something is deemed urgent.
While this lab value was indeed critical and clinically important, it was not unexpected, and a plan had already been put in place to deal with it. But the computer didn’t know that; it just had to tell somebody. “MD aware.”
So how do we — and the systems trying to help us take care of patients — decide what’s urgent, and is there a way for the burgeoning field of artificial intelligence to help us do this better? Can we focus our attention more accurately, to prevent the alarm fatigue and burnout that come from constantly being told to come quick, stop what you’re doing, drop everything, pay attention to me?
The other day, I was going through paperwork to sign off on the endless stack of home health care forms and durable medical equipment requests, when I got to a fax from a home care agency labeled — surprise — “URGENT,” in all caps, with several exclamation points, underlined repeatedly by the hand of whoever sent the fax.
When I flipped the page to find out what this particular critical matter was (and after I was able to decipher what their peculiar abbreviations meant), it turned out that the home care agency had sent a nurse to the house for a routine visit, and they wanted to inform me that the patient had been away on vacation, and therefore they could not make the visit that day.
This is urgent? I know it probably seems urgent to somebody. Somebody probably got yelled at by a supervisor because that form needed to be filled out, signed, and back in the office before an upcoming audit, as proof that they were not fraudulently claiming visits that never happened. But if we make everything urgent, then the word itself loses its impact.
A different meaning of “911”
I remember a few years ago, while seeing patients, my pager went off. Glancing down, I saw the number of our front desk, followed by 911.
I immediately excused myself from the exam room and hustled up the hallway to see what was going on, expecting some sort of commotion, a patient in extremis, a tense police standoff, or maybe that surprise party they’d been planning for my birthday.
When I got there, I asked the person at the front desk what was going on: “What is it? Is everything okay?” It was nothing important. She said that when she texts her friends, they always add 911 when they want someone to respond; otherwise, they expect to be ignored.
Clinicians live in this world all the time: a constant state of alert, waiting for the next emergency to happen, expecting something terrible to be revealed in the history, the physical exam, or the results of a lab test. You never know what’s going to be behind Door Number One.
But the varied systems we live in have not evolved in sync with the needs of those who must address all of these “emergencies.”
When almost every laboratory result is labeled with a red up-arrow warning us that something’s wrong, terribly wrong, it gets pretty hard to see the tree for the forest that’s been set on fire.
True, the lab system continues to add more and more up-arrows to a result as things get more and more “critical,” but is it asking too much for the system to become smart enough to know that the moment something is a tiny fraction above the upper limit of normal, it’s not a medical emergency?
Avoiding alarm fatigue
Many years ago, in the years right after 9/11, we tried an experiment sending urgent messages from our electronic health record to our pagers (this was before cellphones were ubiquitous). But so many people were marking messages as urgent (set of keys found in the bathroom, donuts in the large conference room, can anyone cover my Tuesday night call?) that the surge in pager traffic set off a warning with Homeland Security, and they shut us down.
Walking through the halls of the hospital today, I was reminded again about the endless barrage of alarms going off incessantly. From every room emanated the beeps of occluded IVs, heart rates out of range, patient call-bells being pressed and pressed.
Overhead was a continuous cacophony of intercom and hospital-wide announcements, and the resulting auditory onslaught bordered on the painful. I can’t blame the night nurses for “alarm fatigue,” for ignoring the beeping, the buzzing, the alarming, when they’re going off all the time.
Hopefully, somewhere out there in the world of Silicon Valley and tech startups, or in the halls of academia at MIT or Stanford, people are working on ways to refine this critically important part of taking care of people when they are critically ill.
And even in the ambulatory world, when patients are not critically ill, things that need to be addressed often do become critical. Perhaps we need systems that are smarter, that learn to recognize what’s important and what may not be: what can wait till tomorrow, what needs someone’s attention right now, and when we really do need to send a small jolt of electricity zapping into Dr. Pelzman’s watch to make him wake up and pay attention to that urgent alarm flashing in front of him over and over.
Image credit: Shutterstock.com