Ethical Considerations of Healthcare’s ‘Alphabet Soup’: AI, ML and NLP
By Damian David - Senior Director of Sales & Business Development
Apr 18, 2022
Recently, researchers unveiled a technique for using artificial intelligence (AI) to diagnose COVID-19 from X-rays in mere minutes, compared with waiting hours for the results of a PCR test. It’s yet another example of the inroads AI and its subfields, machine learning (ML) and natural language processing (NLP), are making in medicine.
Today, AI can distinguish benign moles from cancerous ones, analyze chest films for lung cancer, emphysema and other diseases, and even predict which patients are less likely to follow doctors’ orders.
Yet, exciting as its potential to revolutionize healthcare may be, AI comes with a variety of ethical issues. Like any technological leap forward, AI has the potential to do great things in medicine. But like any other advancement, in the wrong or inexperienced hands it can be used inappropriately.
Consider the following examples:
- Early in the Apple credit card’s rollout, the AI-based algorithm used to assess applicant creditworthiness and set credit limits consistently gave female applicants lower credit limits than male applicants.
- A study of an AI-powered system used by the state of Florida to predict recidivism among jail inmates found that it inaccurately flagged people of color as having a higher likelihood of recidivism than white defendants.
- IBM Watson, the computer most famous for beating Ken Jennings on Jeopardy!, was being trained to identify the subjects in pictures using massive data sets of images annotated with keywords. During final testing before it was released for widespread use, the computer was fed a picture of a person in a wheelchair. Watson returned the word, “Loser.”
Obviously, no one believes the biases were intentionally trained into the AI algorithms; in all cases, the errors that led to these results were eventually uncovered and fixed. But they illustrate the critical importance of understanding, managing, testing and validating training data for AI, regardless of how or where it is applied.
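To make that concrete, here is a minimal, purely illustrative sketch in Python (the data, column names and the false_positive_rate helper are all hypothetical, not drawn from any of the systems above) of one simple check a team might run before deployment: comparing a model’s false-positive rate across demographic groups, the kind of disparity at the heart of the recidivism example.

```python
# Illustrative sketch with hypothetical data: compare a model's false-positive
# rate across demographic groups on a held-out test set before deployment.
import pandas as pd

# Hypothetical held-out results:
# "group"     -- demographic group of each record
# "actual"    -- 1 if the outcome actually occurred, 0 otherwise
# "predicted" -- 1 if the model predicted the outcome, 0 otherwise
results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    [0,    1,   0,   0,   1,   0],
    "predicted": [0,    1,   0,   1,   1,   1],
})

def false_positive_rate(df):
    """Share of true negatives that the model incorrectly flags as positive."""
    negatives = df[df["actual"] == 0]
    return (negatives["predicted"] == 1).mean()

# A large gap between groups is a signal to revisit the training data
# before the model is released for widespread use.
print(results.groupby("group").apply(false_positive_rate))
```

A check like this does not prove a model is fair, but a large, unexplained gap between groups is exactly the kind of warning sign that should send a team back to its training data.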
The ramifications for healthcare are significant. As a comprehensive, 165-page report issued by the World Health Organization in 2021 lays out, “Use of limited, low-quality, non-representative data in AI could perpetuate and deepen prejudices and disparities in health care. Biased inferences, misleading data analyses and poorly designed health applications and tools could be harmful. Predictive algorithms based on inadequate or inappropriate data can result in significant racial or ethnic bias.”
It is entirely possible that AI will someday become smart enough, for example, to diagnose a tumor from a chest film as accurately as a radiologist. But how will this capability change the radiologist’s job or that of other specialists? Will AI permit providers to spend more time with patients, or will it make care “less humane,” as the WHO report puts it? Legally speaking, who should be held responsible for mistakes in care decisions based on faulty AI findings?
At a broader level, how do we ensure that the “healthcare wealth” generated by AI technology is distributed appropriately and equitably? What safeguards should be put in place so that AI is not accessible only to the nations, institutions or people with the resources to afford it?
Guiding Principles
As AI spreads throughout healthcare, we should take care to keep asking, and answering, three ethical questions:
- Is AI being used appropriately?
- Is it being used equitably?
- Does it meet our standards for what’s right?
AI should never be used solely to improve the bottom line of its makers or even just the ability of providers to diagnose and treat disease. It should also be used to preserve the principles of privacy, fairness, inclusion, and accountability.
Anything less will make us all losers.
For more information on Healthcare’s ‘Alphabet Soup’: AI, ML and NLP, watch our webinar.