One might argue that one of the primary duties of a physician is to constantly assess and reassess the odds of a medical procedure's success. Is the patient at risk of developing severe symptoms? When should the patient return for further testing? Amid these critical deliberations, the growing use of artificial intelligence aims to reduce risk in clinical settings, with the hope that physicians will be able to prioritize care for high-risk patients.
Despite that potential, researchers from the Massachusetts Institute of Technology's Department of Electrical Engineering and Computer Science (EECS), Equality AI, and Boston University are calling for greater oversight of AI by regulatory bodies in a new commentary published in the New England Journal of Medicine AI's (NEJM AI) October issue, after the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) issued a new rule under the Affordable Care Act (ACA).
In May, OCR published a final rule under the ACA prohibiting discrimination on the basis of race, color, national origin, age, disability, or sex in "patient care decision support tools," a newly established term that encompasses both AI and non-automated tools used in medicine.
Developed in response to President Joe Biden's 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the final rule builds on the Biden-Harris administration's efforts to advance health equity with a focus on preventing discrimination.
"This regulation is an important step forward," says senior author Marzyeh Ghassemi, an associate professor in EECS. Ghassemi, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Institute for Medical Engineering and Science (IMES), adds that the rule should direct equity-focused improvements to the non-AI algorithms and clinical decision support tools already in use across clinical subspecialties.
The number of AI-enabled devices approved by the U.S. Food and Drug Administration (FDA) has risen dramatically over the past decade, since the first AI-enabled device, the PAPNET Testing System, a tool for cervical screening, was approved in 1995. As of October, the FDA had approved approximately 1,000 AI-enabled devices, many of which are designed to support clinical decision-making.
However, the researchers point out that no regulatory body oversees the clinical risk scores these tools produce, even though the majority (65 percent) of U.S. physicians use the tools on a monthly basis to determine next steps for patient care.
To address this shortcoming, the Jameel Clinic will convene another regulatory conference in March 2025. Last year's conference ignited a series of discussions and debates among faculty, regulators from around the world, and industry experts focused on the regulation of AI in health.
"Clinical risk scores are less opaque than 'AI' algorithms in that they typically involve only a handful of variables linked in a simple model," comments Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School and editor-in-chief of NEJM AI. "Nevertheless, even these scores are only as good as the datasets used to 'train' them and the variables that experts choose to select or study in a given cohort. If they are to influence clinical decision-making, they should be held to the same standards as their more recent, more complex AI cousins."
Moreover, while many decision support tools do not use AI, the researchers note that these tools are just as culpable in perpetuating bias in health care, and likewise require oversight.
"Regulating clinical risk scores poses significant challenges, given the proliferation of clinical decision support tools embedded in electronic health records and their widespread use in clinical practice," says co-author Maia Hightower, CEO of Equality AI. "Such regulation remains necessary to ensure transparency and nondiscrimination."
However, Hightower adds that under the incoming administration, the regulation of clinical risk scores may prove to be particularly challenging, given its "focus on deregulation and opposition to the Affordable Care Act and certain nondiscrimination policies."