Artificial intelligence in medicine: not ready for prime time

July was an interesting month for artificial intelligence in medicine.

A study from MIT found that when human doctors order tests on patients, they factor in something that artificial intelligence does not currently take into account. The authors analyzed charts of about 60,000 ICU patients admitted to Beth Israel Deaconess Medical Center in Boston.

From the positive or negative sentiments expressed in physician progress notes, they derived scores that they correlated with the number of diagnostic imaging tests ordered. When other factors were controlled for, medical data alone did not drive the ordering of tests; the sentiments of doctors, however, predicted how many tests were ordered.

Pessimistic doctors ordered more testing at first, but once a patient's condition was viewed very negatively, they ordered fewer tests.

The study concluded that “gut feelings” or intuition had a strong impact on the number of tests ordered. One of the authors of the study said, “That gut feeling is probably informed by history of experience that doctors have.” He likened it to what a mother senses by looking at a child who had done something wrong.

The MIT investigators are considering ways to teach a computer to have gut feelings, but at this point, artificial intelligence has not been programmed to employ this kind of wisdom.

Artificial intelligence took another hit from the folks at STAT News, who published an in-depth report on IBM's Watson for Oncology. It seems Watson was trained on hypothetical cases by physicians and others at Memorial Sloan Kettering Cancer Center in New York, rather than on "big data" from the large number of real cancer patients at that institution.

An oncologist who formerly worked for IBM and helped run the project outlined its problems in presentations at the company. He said recommendations for treatment were based on the opinions of a few oncologists, not on guidelines or evidence. In one instance, Watson recommended treating a mock patient who had lung cancer and bleeding with a drug that carries a black box warning stating its use can cause massive hemorrhage. No real patients were treated with the drug.

IBM has not openly disclosed any problems with Watson for Oncology and claims it is "going fabulously." An IBM Watson Health executive said doctors liked the program. However, the article quoted a physician user from Jupiter Hospital in Florida as saying, "This product is a piece of s**t," adding that it was unusable for most cases.

My gut feeling is that we physicians should be able to hang on to our jobs for a few more years.

“Skeptical Scalpel” is a surgeon who blogs at his self-titled site, Skeptical Scalpel. This article originally appeared in Physician’s Weekly.

Image credit: Shutterstock.com