With more than seven million NHS waiting patients in England and around 100,000 staff vacancies, artificial intelligence has the potential to revolutionise the health service by improving patient care and freeing up staff time.
Its uses are wide-ranging, from identifying risk factors to help prevent serious conditions such as heart attacks, strokes and diabetes, to assisting clinicians by analysing scans and X-rays to speed up diagnosis.
The technology can also boost productivity by taking on routine administrative tasks, such as scheduling appointments through automated voice assistants and recording notes during doctor visits.
“Transformative”
Generative AI – a form of artificial intelligence that can create content such as text and images – could transform patient outcomes, according to Sir John Bell, a senior government adviser on life sciences.
Sir John is Chancellor of the Ellison Institute of Technology in Oxford, a large new research and development facility investigating global issues such as the use of AI in healthcare.
He says generative AI can improve the accuracy of diagnostic scans and predict patient outcomes under different medical interventions, leading to more informed and personalised treatment decisions.
But he cautions that researchers should not work in isolation and that innovations should be shared fairly across the country to ensure that some communities are not left behind.
“To realise these benefits, the NHS must unlock the enormous value currently locked in data silos to do good while preventing harm,” Sir John said.
“Giving AI access to all data, within a safe and secure research environment, will make AI tools more representative, accurate and equal, benefiting all segments of society, reducing the financial and economic burden of running a world-leading National Health Service and resulting in a healthier nation.”
“Mitigating risk”
AI opens up a world of possibilities, but it also brings risks and challenges, such as ensuring accuracy: outputs still need to be verified by trained staff.
The government is currently evaluating generative AI for use in the NHS, but one problem is that the AI can sometimes “hallucinate” and generate content that cannot be substantiated.
Dr Caroline Green, of the Institute for Ethics in AI at the University of Oxford, recognises that some health and care workers are already using models such as ChatGPT to seek advice.
“It is important that those using these tools are properly trained and understand and know how to mitigate the risks that arise from the technical limitations, such as the potential for incorrect information to be provided,” she said.
She feels it is important to involve health and social care workers, patients and other organisations in the early stages of generative AI’s development, and then build trust by continuing to evaluate the impact with them.
Dr Green says some patients have chosen to deregister from their GP because they fear how AI will be used in their care and how their personal information will be shared.
“Of course, this means that these people could be left behind and not get the medical care they may need in the future,” she says.
Additionally, there is a risk of bias: AI models may be trained on datasets that do not reflect the population to which they are applied, potentially exacerbating health disparities based on, for example, gender or ethnicity.
Regulation is therefore key: it must ensure patient safety and protect personal data, while remaining agile enough to keep pace with a technology that can evolve and learn on the fly.
AI-powered medical devices are heavily regulated by the Medicines and Healthcare products Regulatory Agency (MHRA).
The Health Foundation think tank recently published a six-point national strategy to ensure AI tools are deployed fairly and regulations are updated.
“There are so many of these models in the system that it’s hard to evaluate them fast enough,” said Nell Thornton, an improvement fellow at the Health Foundation.
“So we need support around the system’s capacity to regulate these tools, and also clarity around some of the challenges that arise from the novel characteristics of generative AI systems and what additional regulation they might require.”
Dr Paul Campbell, head of software and AI at the MHRA, said: “As the regulator, we need to balance the right oversight to protect patient safety with the agility needed to meet the particular challenges these products pose, and remain a driver of innovation.”
The Department of Health and Social Care says the new Labour government will “harness the power of AI” by purchasing new AI-enabled scanners to diagnose patients earlier and treat them more quickly.
Few would deny the transformative effect that AI could have on healthcare, but there are challenges to overcome – not least that NHS staff need to be confident in using AI, and patients need to be able to trust it.