Written by Emily Vespa
Co-published with Charlotte Ledger
As the use of artificial intelligence expands in North Carolina’s health care system, state leaders say they want to pioneer policies to regulate the rapidly evolving technology.
This effort could be the first step toward putting guardrails around AI in health care at the state level.
AI, which involves software that uses large amounts of information to perform tasks previously thought to require human intelligence, is rapidly gaining popularity in the medical field. The advent of “generative” AI in recent years means machines can analyze large amounts of data to make predictions and recommendations, as well as output new and original content.
North Carolina health systems are among the first in the nation to develop and deploy generative AI technology, including ambient documentation tools that record patient visits and generate clinical notes.
But these technologies come with many challenges, said Christina Silcox, digital health research director at the Duke-Margolis Institute for Health Policy. Studies have shown that AI algorithms can be trained on data with subtle biases, resulting in poor performance for patients of color. Using vast amounts of patient data can raise privacy concerns. Another danger is that health care providers may become overly reliant on AI tools and set aside years of accumulated clinical knowledge.
Sen. Jim Burgin (R-Angier) said he plans to introduce legislation this year that would address some of the concerns about AI’s potential pitfalls. Some North Carolina health leaders want to develop AI policies that could serve as a model for other states.
Creating policy in a complex and ever-changing AI landscape is difficult, but it could have significant benefits for the future of the health care industry, Silcox said.
“Not only do I truly believe that AI can really improve medicine and health, I also believe that we need AI to improve medicine and health,” Silcox said. “We are seeing an aging population and an overburdened workforce, and it is only going to get worse.”
Federal AI regulation is sparse
Federal agencies have issued regulations regarding AI in health care, but Congress has been slow to enact legislation, meaning federal oversight is limited. Some states are moving to fill the gap.
In Utah, for example, lawmakers in April passed a law requiring state-licensed professionals, including most health care workers, to notify consumers when they are interacting with generative AI. Other states are considering legislation that would ban the use of discriminatory AI algorithms or allow patients to opt out of AI use altogether, according to the National Conference of State Legislatures.
NCSL’s AI bill database shows that North Carolina lawmakers have made no similar moves to regulate AI in health care over the past two years.
Meanwhile, North Carolina’s health care industry is ramping up its use of AI. Across the state, providers have deployed numerous tools to help predict health events, analyze scans, handle administrative tasks, communicate with patients and more.
The North Carolina Medical Board said in a position statement that physicians are responsible for the decisions they make based on the recommendations of AI algorithms. The statement also said that when physicians use AI tools to transcribe clinical notes, they are responsible for reviewing the notes for accuracy.
The state medical board has not yet taken any disciplinary action related to the use of AI tools and has no immediate plans to develop guidance documents or policies on AI use, according to Jean Fisher Brinkley, communications director for the board.
She said in an email that the board stays up to date on developments in AI but wants to avoid abrupt action that could dampen innovation in a technology that holds a lot of promise for health care.
Some North Carolina health care providers are building their own governance systems to monitor AI. At Duke Health, which along with UNC Health was an early adopter of such a system, AI clinical tools must go through a vetting process, Silcox said. A committee of experts from the health system and the university oversees the tools and their outcomes, said Eric Poon, Duke Health’s chief medical information officer.
“We are asking project teams to test the technology, deploy it with appropriate guardrails, and see whether it works before disseminating it more broadly across clinical populations,” Poon said. The committee also scrutinizes the data that AI tools are built on for potential bias, he said.
In 2022, Duke clinicians and researchers had to scrutinize an algorithm they were training to detect sepsis, a systemic infection, in children, as reported on the Tradeoffs podcast. Researchers noticed that for Hispanic children who were later diagnosed with sepsis, doctors took longer to order blood tests than they did for white children, a delay that could have taught the algorithm that Hispanic children take longer to develop sepsis.
According to Tradeoffs, the Duke researchers investigated and found that the algorithm detected sepsis in Hispanic children as quickly as in other children, because there was not enough data to skew it. Mark Sendak, a researcher there, said on the podcast that the episode solidified in his mind how easily bias can creep into AI.
Still, state officials worry that leaving oversight to individual health systems alone is not enough.
“The question I keep asking is, ‘AI is making all these decisions for us, but where is the responsibility if it makes the wrong decision?’” Burgin said. “Who is in charge?”
Leaders focus on ‘gold standard’ state law
Burgin said he is working on legislation that would address accountability when AI is used in clinical decision-making. A UNC Health executive recently suggested that the health care industry is also interested in stronger AI regulation.
“In an ideal world, there would be preemptive federal legislation on the responsible use of AI that applied to all states,” said David McSwain, chief medical information officer for UNC Health, speaking at a UNC panel on AI in November.
That doesn’t seem likely to happen anytime soon, he added, which means health systems will have to navigate a patchwork of state laws. That patchwork could complicate matters, he said, forcing companies to design AI platforms differently for each state.
McSwain said North Carolina health leaders want to bring together a range of stakeholders, including health systems and community members, with the goal of enacting a “gold standard state law” that other states can emulate.
A key aspect of any future legislation will be defining terms like “AI,” which can be difficult because the term covers a wide range of technologies used in different ways, he said.
“I think this is a bad idea,” McSwain said of state-level AI laws. “But the reality is, that’s what’s going to happen. What we want to do is enact state-level laws that minimize the burden on the health care system, minimize the burden on health care workers, and minimize the potential negative impacts on health equity.”
NC Health News/Charlotte Ledger reporter Michelle Crouch contributed to this report.
This article is part of a partnership between The Charlotte Ledger and North Carolina Health News to produce original health care reporting focused on the Charlotte region.