“Machine learning is only as good as the information provided to train the machine. Models trained on partial datasets can skew toward the demographics that appeared most often in the data—for example, Caucasians or men over 60. There is concern that ‘analyses based on faulty or biased algorithms could exacerbate existing racial gaps and other disparities in health care.’ Already during the pandemic’s first waves, multiple AI systems used to classify X-rays were found to show racial, gender, and socioeconomic biases.
Such bias creates a high potential for poor recommendations, including false positives and false negatives. It is critical that system builders be able to explain and qualify their training data, and that those who best understand AI-related system risks are the ones who influence health care systems or alter applications to mitigate AI-related harms.”
He shares his story and discusses his KevinMD article, “Artificial intelligence, COVID-19, and the future of pandemics.”
Did you enjoy today’s episode?
Please click here to leave a review for The Podcast by KevinMD. Subscribe on your favorite podcast app to get notified when a new episode comes out!
Do you know someone who might enjoy this episode? Share this episode with anyone who wants to hear health care stories filled with information, insight, and inspiration.
Hosted by Kevin Pho, MD, The Podcast by KevinMD shares the stories of the many who intersect with our health care system but are rarely heard from.