A new research paper from Yale School of Medicine takes a closer look at how biased artificial intelligence can affect clinical outcomes. The study focuses specifically on the different stages of AI model development and shows how data integrity issues can undermine health equity and quality of care.
Why it matters
The study, published earlier this month in PLOS Digital Health, describes both real-world and hypothetical ways in which AI bias could negatively impact healthcare delivery, not only at the point of care but at every stage of medical AI development: training data, model development, publication, implementation and beyond.
“Bias in, bias out,” said study senior author John Onofrey, assistant professor of radiology and biomedical imaging and of urology at Yale School of Medicine, in a press statement.
“Having worked in the machine learning/AI field for many years, the idea that bias exists in algorithms is not surprising to me,” he said. “But enumerating all the potential ways in which bias can creep into the AI learning process is incredible. That can make mitigating it seem like a daunting task.”
As the research points out, bias can occur almost anywhere in the algorithm development pipeline.
According to the researchers, bias can arise in “data features and labels, model development and evaluation, deployment, and publication.” Insufficient sample size for a given patient group can result in suboptimal performance, algorithm underestimation and clinically meaningless predictions, they write. Missing patient findings can also produce biased model behavior, including data that is capturable but missing non-randomly, such as diagnosis codes, and data that is not typically available or not easily captured, such as social determinants of health.
Meanwhile, “professionally annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard treatment practices.” During model development, overreliance on performance metrics can obscure bias and reduce a model’s clinical utility, and when a model is applied to data outside its training cohort, performance may be worse than in prior validation, with the degradation differing across subgroups.
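That last point, performance that degrades unevenly across patient subgroups, is straightforward to check for. The sketch below is a generic illustration rather than anything from the study: it assumes a fitted classifier with a predict_proba method and a held-out test set containing a demographic column (all names are placeholders), and it reports a discrimination metric per subgroup instead of a single aggregate number.

```python
# Illustrative sketch (not from the study): surface subgroup performance gaps
# that a single aggregate metric would hide. `model`, `test_df` and all column
# names are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_subgroup(model, test_df, feature_cols, outcome_col, group_col):
    """Report AUROC and sample size separately for each subgroup in a test set."""
    rows = []
    for group, subset in test_df.groupby(group_col):
        if subset[outcome_col].nunique() < 2:
            continue  # AUROC is undefined when a subgroup has only one outcome class
        scores = model.predict_proba(subset[feature_cols])[:, 1]
        rows.append({"subgroup": group,
                     "n": len(subset),
                     "auroc": roc_auc_score(subset[outcome_col], scores)})
    return pd.DataFrame(rows).sort_values("auroc")

# Example usage with placeholder names:
# print(auc_by_subgroup(model, test_df, ["age", "creatinine"], "outcome", "race_ethnicity"))
```

An overall score can look acceptable while the smallest subgroup does markedly worse, which is exactly the pattern the researchers warn about.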
And of course, the way clinical end users interact with AI models can introduce biases of its own.
Ultimately, how AI models are developed and published, and by whom, will shape “the trajectory and priorities of future medical AI development,” the Yale researchers said.
And they note that efforts to mitigate that bias, including “collection of large and diverse datasets, techniques to reduce statistical bias, thorough model evaluation, emphasis on model interpretability, and standardized bias reporting and transparency requirements,” need to be implemented carefully, with attention to how these guardrails actually work to prevent negative impacts on patient care.
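One simple, partial example of the “techniques to reduce statistical bias” the paper lists is reweighting training examples so that underrepresented patient groups are not effectively drowned out during fitting. The sketch below is a generic illustration under assumed column names, not the study’s method, and reweighting alone does not address labeling or deployment bias.

```python
# Illustrative sketch (not from the study): give each training row a weight
# inversely proportional to its subgroup's share of the data, so small groups
# carry comparable total weight during model fitting. Names are hypothetical.
from sklearn.linear_model import LogisticRegression

def inverse_frequency_weights(train_df, group_col):
    """Per-row weight equal to 1 / (fraction of rows in that row's subgroup)."""
    shares = train_df[group_col].value_counts(normalize=True)
    return train_df[group_col].map(lambda g: 1.0 / shares[g])

# weights = inverse_frequency_weights(train_df, "race_ethnicity")
# model = LogisticRegression(max_iter=1000)
# model.fit(train_df[feature_cols], train_df["outcome"], sample_weight=weights)
```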
“Rigorous validation through clinical trials is important to demonstrate unbiased application before actual implementation in clinical practice,” they said. “To ensure that all patients can equitably benefit from future medical AI, it is important to address bias throughout the model development stages.”
The report, “Bias in Medical AI: Implications for Clinical Decision-Making,” offers several suggestions for mitigating that bias, with the goal of improving health equity.
For example, previous research has found that using race as a factor in estimating kidney function can lengthen the time Black patients must wait to be placed on a transplant list. The Yale researchers offer several recommendations to help future AI algorithms use more accurate measures, such as zip code and other socioeconomic factors.
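To see how a race coefficient plays out numerically, here is a sketch of the 2009 CKD-EPI creatinine equation, the kidney-function estimate that multiplied the result by roughly 1.16 for patients recorded as Black (a term the 2021 revision removed). The constants below are the commonly cited published values and are included for illustration only; they, and the approximate 20 mL/min/1.73 m² transplant-listing threshold, should be verified against the primary literature rather than taken as authoritative.

```python
# Illustrative sketch of the 2009 CKD-EPI creatinine equation. Constants are the
# commonly cited published values and should be verified before any real use.
def egfr_ckd_epi_2009(scr_mg_dl, age_years, female, black):
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine, age, sex and recorded race."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient removed in the 2021 race-free revision
    return egfr

# Identical labs, different recorded race: the ~16% bump from the coefficient can
# hold the estimate above a ~20 mL/min listing threshold it would otherwise fall below.
print(round(egfr_ckd_epi_2009(3.4, 55, female=False, black=False), 1))  # below 20
print(round(egfr_ckd_epi_2009(3.4, 55, female=False, black=True), 1))   # above 20
```

The point is not the exact numbers but that a hard-coded demographic multiplier, once tied to a clinical threshold, translates directly into differential access to care.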
On the record
“Further capture and utilization of social determinants of health in medical AI models for clinical risk prediction will be of paramount importance,” said James L. Cross, a first-year medical student at Yale School of Medicine and lead author of the study, in a statement.
“Bias is a human problem,” added study co-author Dr. Michael Choma, adjunct associate professor of radiology and biomedical imaging. “When we talk about ‘bias in AI,’ we must remember that computers learn from us.”
Mike Miliard is the Editor-in-Chief of Healthcare IT News
Email the author: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.