Healthcare organizations may be slow to adopt new artificial intelligence tools and other cutting-edge innovations due to legitimate concerns about safety and transparency. But healthcare needs these innovations to improve quality of care and patient outcomes.
However, it is essential that these tools are applied correctly and ethically. Just because a generative AI application passes a medical school exam doesn’t mean it can practice as a physician. Healthcare organizations need to leverage the latest advances in AI and large language models in a way that puts the power of these technologies in the hands of healthcare professionals to deliver better, more accurate and safer care.
Dr. Tim O’Connell is a practicing radiologist and CEO and co-founder of emtelligent, a developer of AI-powered technology that transforms unstructured data.
We spoke with him about the importance of guardrails for AI in healthcare as AI helps modernize the industry, how algorithmic discrimination can perpetuate health inequities, legislative efforts to establish safety standards for AI, and why humans in the loop are essential.
Q. As AI technology helps modernize healthcare, what is the importance of AI guardrails in healthcare?
A. AI technologies offer exciting possibilities for healthcare providers, payers, researchers and patients, with the potential for better outcomes and lower healthcare costs. However, to realize the full potential of AI, especially medical AI, healthcare professionals must understand both the capabilities and the limitations of these technologies.
This includes recognizing risks such as non-determinism, hallucinations, and problems in reliably referencing source data. Healthcare professionals need knowledge about the benefits of AI, as well as a critical understanding of its potential pitfalls, to be able to use these tools safely and effectively in a variety of clinical settings.
To use AI safely and ethically, it is important to develop and adhere to a thoughtful set of principles. These principles should include addressing concerns about privacy, security, and bias, and should be rooted in transparency, accountability, and fairness.
To reduce bias, AI systems need to be trained on more diverse datasets that account for historical differences in diagnosis and health outcomes, and training needs to be reprioritized to ensure AI systems match real-world medical needs.
A focus on diversity, transparency, and robust oversight, including the development of guardrails, will ensure that AI remains resilient to error and becomes a highly effective tool to drive meaningful improvements in healthcare outcomes.
Guardrails in the form of well-designed regulations, ethical guidelines, and operational safeguards are important here. These protections help ensure that AI tools are used responsibly and effectively and address concerns about patient safety, data privacy, and algorithmic bias.
They also provide accountability mechanisms, allowing errors and unintended consequences from AI systems to be traced to specific decision points and corrected. In this context, guardrails act as both a safeguard and an enabler, allowing medical professionals to trust AI systems while protecting against potential risks.
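To make that traceability concrete, here is a minimal Python sketch of one such operational safeguard: every model decision is written to an append-only audit log so an error can later be traced back to the exact input, model version and timestamp that produced it. The classifier, model version string and log format here are all illustrative assumptions, not a description of any particular vendor’s system.

```python
# Minimal sketch of an accountability guardrail. All names here
# (audit_decision, MODEL_VERSION, classify_severity) are hypothetical.
import functools
import hashlib
import json
import time

MODEL_VERSION = "extractor-v1.2"  # hypothetical model identifier
AUDIT_LOG = "ai_decisions.jsonl"  # append-only JSON-lines audit trail

def audit_decision(func):
    """Record each AI decision point so it can be reviewed and corrected."""
    @functools.wraps(func)
    def wrapper(note_text: str):
        output = func(note_text)
        record = {
            "timestamp": time.time(),
            "model_version": MODEL_VERSION,
            # Hash the note rather than storing protected health
            # information in the audit log.
            "input_sha256": hashlib.sha256(note_text.encode()).hexdigest(),
            "output": output,
        }
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output
    return wrapper

@audit_decision
def classify_severity(note_text: str) -> str:
    # Stand-in for a real model call.
    return "urgent" if "chest pain" in note_text.lower() else "routine"
```

The point of the sketch is the decision-point record, not the toy classifier: given a bad output, the log identifies which model version saw which input, which is the precondition for the kind of correction described above.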
Q. How does algorithmic discrimination perpetuate health inequalities? What can be done to fix this problem?
A. If the AI systems we rely on in healthcare settings are not properly developed and trained, there is a very real risk of algorithmic discrimination. AI models trained on datasets that are not large or diverse enough to represent the full range of patient populations and clinical characteristics can and do produce biased results.
This means that for underserved populations, such as racial or ethnic minorities, women, people from lower socioeconomic backgrounds, and people with very rare or uncommon conditions, AI may provide less accurate or less effective care recommendations.
For example, if a medical language model is trained primarily on data from a specific demographic, it may struggle to accurately extract relevant information from clinical notes that reflect different medical conditions and cultural backgrounds. This can lead to missed diagnoses, misinterpretation of patient symptoms, or ineffective treatment recommendations for populations the model is not properly trained to recognize.
In fact, AI systems may perpetuate the very inequalities they are meant to alleviate, especially for patients already underserved by traditional healthcare systems, including racial minorities, women, and people from lower socioeconomic backgrounds.
To address this issue, it is important to ensure that AI systems are built on large, highly diverse datasets that capture a wide range of patient demographics, clinical symptoms, and health outcomes. The data used to train these models must be representative of a variety of races, ethnicities, genders, ages, and socioeconomic statuses so the system’s output is not biased toward a narrow view of medicine.
This diversity enables models to perform accurately across diverse populations and clinical scenarios, minimizing the risk of perpetuating bias and ensuring that AI is safe and effective for everyone.
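As a rough illustration of how a team might check for that representativeness, here is a minimal Python sketch that compares each group’s share of a training set against a reference population share. The column names, reference figures and 80% threshold are hypothetical choices for the example, not a standard.

```python
# A minimal training-data representation audit, assuming a pandas
# DataFrame of training examples with a demographic column and a
# reference table of expected population shares.
import pandas as pd

def representation_gap(train_df: pd.DataFrame,
                       column: str,
                       reference_shares: dict) -> pd.DataFrame:
    """Flag groups whose share of the training data falls well below
    their share of the reference population."""
    observed = train_df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "train_share": round(share, 3),
            "reference_share": expected,
            # Illustrative threshold: under-represented if the training
            # share is below 80% of the reference share.
            "under_represented": share < 0.8 * expected,
        })
    return pd.DataFrame(rows)

# Example usage with made-up numbers:
train_df = pd.DataFrame({"sex": ["F", "M", "M", "M", "M"]})
print(representation_gap(train_df, "sex", {"F": 0.51, "M": 0.49}))
```

An audit like this is only a first screen; it reveals gaps in who is represented, not whether the model performs equally well for each group, which still requires stratified evaluation.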
Q. Why is human involvement essential for AI in healthcare?
A. Although AI can process vast amounts of data and generate insights at speeds far exceeding human capabilities, it lacks the nuanced understanding of complex medical concepts that is essential to providing quality care. Humans in the loop are essential to AI in healthcare: they provide the clinical expertise, oversight, and context needed to ensure that algorithms perform accurately, safely, and ethically.
Consider one use case: the extraction of structured data from clinical records, laboratory reports, and other medical documents. Without human clinicians to guide their development, training, and ongoing validation, AI models risk missing important information or misunderstanding the context-specific nuances of medical terminology and abbreviations.
For example, the system might incorrectly flag a symptom as serious or miss important information embedded in a doctor’s note. Human experts can fine-tune these models to accurately capture and interpret complex medical terminology.
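One common way to structure that human oversight is a confidence-based review queue: extractions the model is sure of flow through, while ambiguous ones are routed to a clinician. The following minimal Python sketch assumes a hypothetical extract_entities model call; the threshold and the returned entities are illustrative only.

```python
# Sketch of human-in-the-loop triage for clinical data extraction,
# assuming a model that returns (entity, confidence) pairs per note.
from typing import List, Tuple

REVIEW_THRESHOLD = 0.90  # illustrative cutoff, tuned per deployment

def extract_entities(note: str) -> List[Tuple[str, float]]:
    # Placeholder for a real clinical NLP model call.
    return [("hypertension", 0.97), ("possible TIA", 0.62)]

def triage_extractions(note: str):
    accepted, needs_review = [], []
    for entity, confidence in extract_entities(note):
        if confidence >= REVIEW_THRESHOLD:
            accepted.append(entity)
        else:
            # A clinician confirms or corrects ambiguous extractions;
            # the corrections can feed back into model retraining.
            needs_review.append((entity, confidence))
    return accepted, needs_review

accepted, queue = triage_extractions("Pt with HTN; episode of transient weakness.")
print("auto-accepted:", accepted)
print("clinician review queue:", queue)
```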
From a workflow perspective, humans in the loop can help interpret and act on AI-driven insights. Even when AI systems produce accurate predictions, medical decision-making often requires a level of personalization that only clinicians can provide.
Human experts can combine AI output with clinical experience, knowledge of a patient’s unique situation, and understanding of broader healthcare trends to make informed and compassionate decisions.
Q. What is the status of legislative efforts to establish safety standards for AI in healthcare, and what should legislators do?
A. Legislation to establish safety standards for AI in healthcare is still in its infancy, but awareness is growing of the need for comprehensive guidelines and regulations to ensure the safe and ethical use of AI technologies in clinical settings.
Several countries have begun to introduce AI regulatory frameworks, many of them grounded in foundational trustworthy AI principles that emphasize safety, fairness, transparency, and accountability, and these frameworks are beginning to shape the debate.
In the United States, the Food and Drug Administration has introduced a regulatory framework for AI-based medical devices, specifically software as a medical device (SaMD). The FDA’s proposed framework follows a “total product lifecycle” approach that aligns with trustworthy AI principles by emphasizing continuous monitoring, updating, and real-time evaluation of AI performance.
However, while this framework works for AI-driven devices, it does not yet fully account for the challenges posed by non-device AI applications dealing with complex clinical data.
Last November, the American Medical Association published draft guidelines for using AI in an ethical, fair, responsible and transparent manner.
In its “Principles for the Development, Deployment, and Use of Augmented Intelligence,” the AMA reinforces its position that AI should augment, rather than replace, human intelligence, and that it is important for the physician community to help guide AI development in the most optimal way, understanding the needs of both physicians and patients and helping define their organizations’ risk tolerance, especially when AI impacts direct patient care.
Fostering this collaboration between policy makers, medical professionals, AI developers, and ethicists can help develop regulations that promote both patient safety and technological progress. Lawmakers must strike a balance to create an environment where AI innovation can thrive while ensuring these technologies meet the highest safety and ethical standards.
This includes developing regulations that enable agile adaptation to new AI advances, ensuring that AI systems remain flexible, transparent, and responsive to the evolving needs of healthcare.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media