Every day, as we care for our patients, we are placed in a unique position. We are armed with a world of literature, randomized controlled trials, society recommendations, and national screening and practice guidelines, all prompting us to do what's best for each patient in a vast array of clinical situations.
And patients come to us armed with a lifetime of experience, their own perceptions about their health and what they'd like to do about it, sometimes misperceptions and faulty data gleaned from the internet or from friends, and sometimes simply stubborn opinions about what they will and will not do.
Quite often, our patients know best. Sometimes, it’s us. And sometimes, it’s us and them, or neither.
A hidden issue
During one recent morning practice session, a resident was presenting a patient, an elderly woman who, on review of her chart, had been struggling with uncontrolled hypertension at our practice for years. Try as we might, her numbers always looked terrible.
On this particular day, her initial blood pressure reading had been high, and when the resident got to his plan, he said that he would like to add a new medication.
I asked him why he chose this particular medicine, and he made a fairly cogent argument about the studies that showed its efficacy and safety and tolerability, and there was certainly something to be said about the benefit of protecting this patient from strokes, heart attack, renal failure, and other well-known scourges of uncontrolled hypertension.
But was this the right medicine for this particular patient, at this particular time in her life?
While medication certainly could add some benefit, we all know that medicines, like everything else we do, come with costs: some we can see, some we can discover, and some that, no matter how we try, remain hidden from us as healthcare providers.
After looking through her medication history to see what had been tried for her blood pressure in the past, we finally reopened her initial visit note in the electronic medical record and discovered a notation that this patient had come to us after stopping three blood pressure medications because they made her feel "tired and terrible," and she had stated she would never take them again.
One of them was the medicine that the well-intentioned resident had thought to start for this patient. Again.
Figuring out what’s best
These are the conundrums we are stuck with: challenging questions about what's best for our patients, often with limited data and not enough information to make the right decision.
As always, we should be practicing shared decision-making, helping the patient understand why we think this blood pressure reading is not good for her, and asking her to tell us why she doesn't like to take certain medications.
All we have are those brief snippets in the office: that one number recorded by the medical technician with a (probably inaccurate) electronic sphygmomanometer, after the patient arrived late, was rushed down the hallway, and was not allowed to rest for 5 minutes with her feet on the ground. She did not take her medicines this morning, since she does not like to take them when she has to ride the subway because they always make her need to pee. And she is really only taking them, at most, every other day, because the co-pay is so high and she has to decide between medications and food.
How can we know all this?
Perhaps, moving forward, we can enlist some of the new technologies out there to help make this type of decision easier and better for all involved parties.
I can envision a future where some really smart artificial intelligence systems will be able to quickly show us a list of all the blood pressure medicines the patient is taking and has tried in their life, along with associated reasons why it was stopped.
This one made her tired, that one led to severe hyperkalemia, this one costs too much, that one led to sexual dysfunction, and so on.
The promise of technology
Right now, when my patient clicks on a medication in the patient portal to request a refill, the system is smart enough to check whether they've had appropriate lab monitoring, have been seen recently, and have an upcoming appointment scheduled, at which point it suggests to me that it's probably OK to go ahead and refill this particular medication.
This translates the thought process we go through when we do refills without these systems: Have they seen me recently? Were their electrolytes and liver function tests OK? Has it been less than a year since they've been here? Are they seeing me next month? This provides the foundation on which I decide to go ahead and continue the medication and safely refill it.
OK, their last visit was in 2017, but they have an annual scheduled for later this month; let’s go ahead and refill a 1-month supply to tide them over until then.
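The refill logic described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not the actual portal's implementation: the function name, fields, and thresholds (one year since the last visit, an upcoming visit within 90 days) are all assumptions chosen to mirror the reasoning in the text.

```python
from datetime import date, timedelta

def refill_decision(last_visit, next_visit, labs_ok, today):
    """Hypothetical sketch of a portal refill rule.

    last_visit: date of the most recent office visit
    next_visit: date of the next scheduled visit, or None
    labs_ok: whether recent monitoring labs were acceptable
    """
    seen_within_year = (today - last_visit) <= timedelta(days=365)
    # Assumed threshold: an appointment within the next 90 days
    # counts as "seeing me soon."
    upcoming_soon = next_visit is not None and (next_visit - today) <= timedelta(days=90)

    if labs_ok and seen_within_year:
        return "refill"
    if labs_ok and upcoming_soon:
        # e.g., a 1-month supply to tide the patient over until the visit
        return "bridge_supply"
    return "needs_review"
```

For the scenario in the text (last visit in 2017, labs acceptable, annual visit scheduled later this month), the rule would suggest a short bridging supply rather than a full refill or a flat refusal.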
Just as our electronic medical record has these clinical decision rules that align with the patient’s request for a refill, perhaps every complicated medical decision in the future will be backed up by a sophisticated system that takes into account what the patient wishes and what we think is best for them based on our current state of medical knowledge.
None of us wants to be replaced by artificial intelligence and natural language processing systems, but it would be great if we were able to harness all of the computing power at our fingertips to create a foundation on which we and our patients can make better decisions moving forward.
We should always take into account the best evidence, our own practice experience and patterns of behavior, and a hefty dose of input from patients about what they do and do not want.
This idealized construct could ultimately help us take better care of our patients, and help our patients get better care, without reinventing the wheel over and over, or making the same error again and again, even when we do so with the best of intentions.
Image credit: Shutterstock.com