The tale is told of a vast realm of 1.4 billion people, the most populous nation in the world. Its culture is old, rich and grounded. A citizen’s worth can be understood, determined, exalted or decried. In modern times this country has a visionary leader for life. His foresight includes the automation of culture through big datasets, mass surveillance, facial recognition, and artificial intelligence. A citizen’s value can be determined by their actions. Did they jaywalk? We’ll know. Machine learning starts in kindergarten. There is now a system that reports on many facets of everyday life to determine the model citizen.
This is not a futuristic episode of Black Mirror. This is 2019. This is China’s social credit score: a standardized, integrated electronic national reputation system with real implications, rewards, and punishments. It is an example of the use of AI to automate existing cultural norms.
Let us now make an analogy to health care in the U.S.
The United States health care system is among the most advanced in the world. Fetuses can have life-saving operations in utero, organ transplants are routine, and medical advances continue to bring new promise to patients with previously incurable or untreatable conditions. But it is also a system fraught with inequity and inaccessibility. Examples of disparity abound: Black children have a 500% higher death rate from asthma than their white counterparts. Hispanics are 50% more likely to die from diabetes than non-Hispanic whites. Women are 50% more likely to receive a missed diagnosis when they are having a heart attack.
These health disparities are, of course, multifactorial. Lack of timely access to the right care, environmental factors and exposures, systemic socio-economic barriers, bias, under-representation of women and minorities in the research and clinical trials that guide diagnostic practices, and several other factors all contribute. To be clear, then, health care inequity in the U.S. is a reflection of a larger society in which social and economic biases are entrenched: a mirror of this country’s societal norms, rather than an indicator of an inherent perversion solely centered in medicine.
That being said, given that such disparities do exist, will AI be made in this flawed image? Who will teach the machines? What will they teach them?
AI and machine learning use large amounts of data to recognize patterns that may otherwise go unnoticed, and, as such, they have the potential to revolutionize the way we diagnose, treat and prevent diseases. As mentioned earlier, it is well understood that women and minorities are often underrepresented in the research studies that could serve as a source for such databases. In this manner, AI has the potential to codify already existing biases and limitations in our diagnostic practices. The problem is that AI would do so in a manner that is automated, integrated and, if used incorrectly, potentially perceived as above reproach.
AI holds immense promise. In the developing world, it has the potential to allow countries to leapfrog gaps in medical infrastructure and manpower. In public health, it can predict, model and slow the spread of epidemics. And, in the U.S., it has the potential to correct for our disparities rather than bake or code them in — but only if we teach it to.
Iyesatta Massaquoi Emeli is an emergency medicine physician.