AI can be applied optimally in medical settings when it stays within the bounds of human oversight.
“When it comes to AI, I still apply, but repurpose, the principles of the Hippocratic Oath for physicians: AI shall do no harm, and we will maintain impartiality and confidentiality. And above all, humans must stay informed,” said Dr. Shankar Sridharan, Chief Clinical Information Officer at Great Ormond Street Hospital in the UK, speaking at the HIMSS24 APAC plenary session “Introducing Generative AI to Clinical Environments.”
“Perfection [in AI] is not necessarily better; it leads the user to abdicate responsibility. AI is already fast, but it has the potential to be kinder and safer. It is kinder when it frees you to pay attention to the patient in front of you, and it is safer when humans are constantly keeping watch to validate the AI,” he added.
Dr. Shankar gave a live demonstration of automated clinical note generation using ambient AI, holding conversations with simulated patients about their symptoms and medical history.
“The transcript comes through quickly,” he said, displaying the output of the AI solution. “The platform automatically structured it into clinical notes covering the patient profile, allergies, a summary of symptoms, findings, and a follow-up plan with the doctor.”
Despite the benefits, Dr. Shankar emphasized that hospitals need to focus more on governance and digital infrastructure for optimal AI implementation.
“AI protects against cognitive overload, but it needs to be done [within a governance framework]. We need to look at rates of misspoken words and hallucination rates. The onus is on hospitals to have systems in place, and this could include things such as AI contracts. Such systems should apply to all AI technologies, not just ambient AI,” he said.
Dr. Shankar also expressed interest in applying AI to documentation not only by clinicians but also by nurses and other medical staff.