California is taking aim at the algorithms used by insurance companies to make pre-approval and other coverage decisions with a new law that limits how artificial intelligence (AI)-generated formulas can be used.
The state will also begin requiring health care providers to notify consumers when a patient communication is generated by AI.
The legislation reflects a growing trend among state lawmakers to more tightly regulate the use of AI in medicine and other fields in the absence of federal action.
The Physicians Make Decisions Act (SB 1120) goes into effect on January 1. The legislation was supported by dozens of physician associations and medical groups, the California Hospital Association, and several patient advocacy groups. Insurance industry groups opposed the bill.
"As physicians, we recognize that AI can be an important tool to improve health care, but we know that it should not replace physician decision-making."
The bill’s sponsor, State Sen. Josh Becker (D-Menlo Park), said the new law ensures that the human element always determines quality care for patients.
“Algorithms do not fully capture and understand a patient’s medical history and needs, which can lead to incorrect or biased decisions regarding treatment,” he said.
Law imposes guardrails
The new law requires that any AI or algorithm used in coverage determinations base its decisions on the patient's medical history and individual clinical circumstances. Decisions cannot rest solely on group datasets, the tools cannot replace clinicians' decision-making, and decisions require approval by a human physician.
The law requires that algorithms be “applied fairly and equitably.”
Algorithms can be biased, Dr. Sarah Murray, vice president and chief medical AI officer at UCSF Health, told Medscape Medical News. She cited a paper published in Science that found that decisions based on an algorithm widely used by health systems (not insurance companies) meant that sicker Black patients received less care than White patients.
The law also seeks to address the data used to train insurers' algorithms. "The accuracy of an AI tool is determined by the data and algorithms fed into it," Carmel Shachar, JD, MPH, Amy Killelea, and Sarah Gerke wrote in Health Affairs.
"It is very important to be transparent about what data is being used as a training set and to ensure that it matches the population in which the algorithm is actually used," Shachar, a clinical assistant professor at Harvard Law School in Cambridge, Massachusetts, told Medscape Medical News.
While human approval of AI-generated decisions is important, "it also comes with risks," Murray said. "We can become overly reliant on these tools, and we can be biased ourselves and not notice the bias, or be unaware of the bias, if the algorithm is providing a biased output."
A 2023 ProPublica investigation alleged that Cigna’s algorithms allowed doctors to quickly deny claims on medical grounds without ever reviewing a patient’s file. The publication reported that physicians employed by Cigna rejected more than 300,000 claims over a two-month period, spending an average of 1.2 seconds on each claim.
California is “reacting to real fear,” she said.
Lack of federal oversight
Although AI used to detect disease and improve diagnosis and treatment is regulated by the U.S. Food and Drug Administration, the AI tools targeted by lawmakers in SB 1120 "have not received the same scrutiny, and there is little independent oversight," said Anna Yap, MD, an emergency medicine physician in Sacramento, who testified in support of SB 1120 on behalf of the California Medical Association (CMA) in early 2024.
California's law "is a good first step," Shachar said. Algorithms "were kind of a blind spot in our regulatory system," and the new law "gives state regulators the power to act and establishes some accountability and requirements for how insurers implement AI," she said.
Shachar et al. noted that AI has the potential to streamline and speed up prior authorization decision-making.
Dr. Neil Busis, a neurologist at New York University's Grossman School of Medicine in New York City, agreed in a paper published in JAMA Neurology. "If trained with the right data, AI has the potential to improve prior authorization by reducing administrative burden, increasing efficiency, and improving the overall experience for patients, clinicians, and payers," he wrote.
In a 2022 report, McKinsey & Company touted the potential of AI to make prior authorization more efficient. However, the authors noted that AI must be monitored to ensure it does not learn from biased datasets that "may result in unintended or inappropriate decisions," especially for patients of lower socioeconomic status. The report concluded that "experienced clinicians will continue to be the ultimate decision-makers in PA [prior authorization]."
The American Medical Association (AMA) did not take a position on SB 1120, but in 2023 it adopted a similar policy calling for AI-based algorithms to rely on clinical criteria and to include review by physicians and other health professionals who have expertise in the service under consideration and no incentive to deny care.
AMA board member Dr. Marilyn Heine said at the time that even if AI streamlines the prior authorization process, the volume of requests continues to grow. "The bottom line hasn't changed: We have to reduce the number of things that are subject to prior approval," she said.
Shachar and her coauthors also cautioned that AI could encourage insurers to conduct even more reviews, writing that a "review spike" may occur.
Lawsuits erupt against insurance companies over the use of AI
In the absence of regulation, several lawsuits have been filed against insurance companies over their use of AI-based algorithms.
In 2023, the families of two deceased Medicare Advantage beneficiaries who lived in Minnesota sued UnitedHealth, alleging that the company illegally used an algorithm with a 90% error rate to deny care, according to a CBS News report.
In October, the U.S. Senate Permanent Subcommittee on Investigations reported that insurers were using automated pre-approval algorithms to systematically deny post-acute care services to Medicare Advantage enrollees at far higher rates than other types of care.
In March, an individual filed a class action lawsuit against Cigna, alleging, based on ProPublica's reporting, that the company used its algorithms to deny claims.
Shachar said litigation is not a satisfactory way to oversee algorithms, in part because "you have to wait for the damage to be done." She added that it remains unsettled how different aspects of tort law will apply to AI used by insurance companies.
Shachar said more states are likely to follow in California’s footsteps.
An AMA spokesperson agreed. "The AMA anticipates future legislative action beginning in 2025 as reports of health plans using AI to systematically deny claims are increasing," the AMA's RJ Mills told Medscape Medical News.
New rules for AI-generated provider communications
The governor of California also signed AB 3030, which requires healthcare providers to disclose when a patient communication is generated by AI unless the communication is first read and reviewed by a human licensed or certified healthcare provider.
Murray said UCSF Health is already doing that.
The health system is testing the use of AI to help doctors draft responses to patient messages, with the aim of helping them respond more quickly. Each message includes text informing the patient that AI was used to assist the doctor, and Murray said physicians still review all communications.
“We just wanted to be transparent with our patients,” Murray said.
AI "is going to be very good for healthcare," she said, but California's new law was necessary to provide "guardrails."
Shachar and Murray reported no relevant financial relationships.
Alicia Ault is a freelance journalist based in St. Petersburg, Florida, whose work has appeared in publications such as JAMA and Smithsonian.com. You can find her on X @aliciault.