Artificial intelligence (AI) promises to revolutionize the practice of medicine by expanding our knowledge and reducing our workload. However, physicians should tread carefully and take heed of the example set by other industries. AI was at the core of the negotiations in the 2023 Screen Actors Guild (SAG) and Writers Guild of America (WGA) strikes. “AI was a dealbreaker,” said Fran Drescher, president of SAG-AFTRA, following the capitulation of the movie studios. The deal included an arrangement compensating actors for any digital replicas of their work or likeness. Prohibitions were set against using AI to rewrite scripts or requiring writers to use AI. Movie studios must now disclose when any material given to a writer comes from an AI source, and writers’ material may not be used to train AI systems. The months-long strikes took a toll on both parties but resulted in concessions supporting writers and actors. If actors and writers are placing limits on AI, why shouldn’t physicians do the same?
The strikes were successful for SAG and the WGA, but as AI advances, these efforts may not be enough to protect their members’ livelihoods. AI promises to infiltrate all areas of working life and to dramatically change the workforce over the next decade. Even in the best-case scenario, AI will cause major shifts in so-called white-collar jobs that depend on creativity and language. AI will increasingly find its way into clinical practice as health care costs and medical knowledge expand. Prominent voices already call for “AI in health care to not be questioned.” They argue that medical students will have access to only about 6 percent of written medical knowledge by the time of graduation, some of it already outdated, and that AI will be necessary to fill the knowledge and workforce gap. Yet the author of “Sapiens,” Yuval Noah Harari, heralds recent developments in AI as the end of human history, in which humans will no longer control what is written, designed, and created. As such, physicians should question the rapid adoption of AI.
Training current AI requires immense datasets to set the weights of the artificial neural networks that replicate human thought. The adoption of the electronic health record (EHR) has allowed for the massive accumulation of data primed to train AI to think like doctors. The promised benefits include ease of access to key clinical information as well as the synthesis of large datasets for research. Clinical work will become less cumbersome with AI-managed charting, billing, and coding. Supportive medical positions such as medical assistants and scribes may soon be a thing of the past. Existing systems have already been trained by human clinicians on one billion minutes of medical dictation and ten million ambient encounters.
Sounds good.
We are exhausted, overworked, and burned out. We soothe ourselves with the idea that AI could never fully replace the human comfort and touch required for good bedside manner. Wake up. AI can already pass the USMLE with 90 percent accuracy. Doctors are already using ChatGPT-4 for curbside consults to ease the work of producing a differential diagnosis and planning next steps in treatment.
AI is increasingly being used in fields such as radiology to “unburden” doctors from routine tasks such as reading normal chest X-rays. Robotics is increasingly used in my field of orthopedic spine surgery to improve accuracy, taking what once required a human hand and instilling machine-like precision. Finally, AI has also grown in its capacity to comfort patients and provide emotional support that is never subject to fatigue, anxiety, or disillusionment.
There are downsides to handing over the keys to the physician’s intellect. AI may make our lives simpler in the moment, but what will the future hold as we physicians train these systems and they continue to improve? Should AI systems be able to learn from your clinical “likeness,” and those of other physicians, without remuneration?
Once these systems are trained, in place, and self-learning, they may quickly displace those of us who have spent decades training. Cui bono? A small group within a large tech industry. The physician authors of the EHR will be overlooked when powerful AI systems are trained on this wealth of information. The collective knowledge of all doctors in the EHR is primed to be transferred to AI, which is more capable of synthesizing it.
There are other, more diabolical problems with AI in medicine. In the future, it may become impossible to get insurance approval for a diagnostic test or procedure without AI analysis. This will benefit insurance companies, which will have powerful AI at their disposal to process denials without the burden of human gatekeepers. In response, health care providers and systems will have to deploy AI of their own in an insurance-claims arms race. By linking the economics of care to AI systems, decision-making will be removed from physicians entirely, completing the abdication of clinical authority by human doctors.
What can be done? Physicians should be aware of when and whether they are being used to train AI systems or improve AI products. Regulations should be enacted to limit the use of AI systems in clinical decision-making and to prevent the removal of the human wardens of payor decisions. The use of AI should never be mandated or required to provide medical treatment. Health care can and should always be between a patient and a human physician.
We stand on the precipice of losing control of not only our livelihoods but also our bond to patient care. While AI promises to improve human endeavors such as research and clinical decision-making, its human teachers will be left in the dust. Certain groups of physicians may suddenly find themselves obsolete as corporate medical institutions find a cheaper way to deliver care with the assistance of physician-trained AI. A whole generation of doctors-in-training may fail to realize their careers or to recoup the financial cost of medical training.
William B. Schwartz wrote in 1970, “Computing science will probably exert its major effects by augmenting, and in some cases, largely replacing the intellectual functions of the physician.” More than fifty years later, this prediction is coming to fruition. Doctors go into medicine to pursue a career of endless challenge and demand for creativity. Now we are entering a realm where difficult clinical decisions, patient counseling, and even procedures will be taken out of our hands. What will be left?
Actors and writers realized the peril of AI early and demanded that the technology be checked. We as physicians must ensure that AI augments our capabilities and avoid training AI free of cost only to be replaced by it. Physicians should look inward, not to the tech world, to improve our fitness to practice, our creative intellect, and the iron-clad character needed to provide empathic care. In the New England Journal of Medicine, Drazen et al. argue that AI will not displace humans but instead augment our capabilities. Yet, as a physician, I was trained to consider the worst-case scenario, not the best.
This article was written entirely by human hands and minds.
Yoshihiro Katsuura is an orthopedic surgeon and author of The Spine Encyclopedia: Everything You’ve Wanted to Know about Back and Neck Pain but Were Too Afraid to Ask. Eric Chang is an orthopedic surgery resident. Kie Shidara and James Schmidt are premedical students and research coordinators.