AI Expert Shares Insights with Quality Leaders Forum

Artificial intelligence, or AI, is not a sentient or magical force. It is, rather, “the simulation of human intelligence by machine.” And in health care, it can provide benefits for patients and clinicians—if used in the right way with the right parameters.

This observation was shared by John Halamka, MD, MS, who spoke at UHF’s Quality Leaders Forum in June. Dr. Halamka is the president of the Mayo Clinic Platform and oversees the health system’s adoption of artificial intelligence. 

He engaged health care quality leaders in a timely conversation about the rapidly expanding use of machine learning and AI technology in health care. He stressed that quality improvement leaders, and anyone else considering leveraging these algorithms, should ask questions about the data used to develop the AI tools, the potential for bias in their predictions, and how accurately the algorithms perform. He also spoke about his own innovative work on AI and quality measurement.

Dr. Halamka discussed the evolution of machine learning as a form of AI and explained the difference between predictive and generative AI. Predictive AI uses a variety of multimodal data to create predictions that can be tested “against ground truth.” Generative AI, such as ChatGPT, is different: it generates words or images to create human-like interaction by “predicting the next word in a sentence.” In health care, the value of predictive AI is established and well understood, whereas the role of generative AI is still emerging and not always clear or measurable.

Dr. Halamka explained how the Mayo Clinic used a de-identified data set of its 10 million patients to create, evaluate, and then implement 160 predictive AI models in cardiology, neurology, radiology, radiation oncology, and oncology. Fourteen of these cardiology algorithms were deployed for all Mayo patients. In a controlled trial, one group of primary care physicians provided standard care without the AI cardiology algorithms, and another group used the 14 algorithms. The AI-assisted group of physicians, Dr. Halamka said, was able to diagnose disease 20 percent earlier. “The notion of earlier detection of disease means less cost and less morbidity and mortality,” he said.


He described similar results from AI algorithms in the fields of neurology, radiation oncology, and endoscopy. Noting a “15 percent miss rate” for finding small polyps in endoscopies, Mayo developed an AI model to review all endoscopy images. The AI algorithm’s miss rate for finding polyps was about three percent. “So, in effect, a human alone is five times worse than the AI model that is augmenting the human during an endoscopy,” said Dr. Halamka.

The process for predictive AI involves developing a model, validating it, conducting clinical trials, seeking approval from the federal Food and Drug Administration (FDA), and then scaling its use. Dr. Halamka acknowledged concerns about AI in health care, including a credibility problem. He predicted that the FDA and other federal agencies will, in the near future, issue a set of “guidelines and guardrails” mandating transparency for predictive AI models (that is, what data was used to create them) that will help health care providers decide whether to use them on a given patient.

Regulation of generative AI is a “different issue,” he said, stressing that it is much too early in its development. He advised caution on its use until methods are refined and accuracy and validity are established.

“Would I use it now for diagnosis? Absolutely not. Because, at the moment, generative AI fabricates and hallucinates because it’s predicting words in a sentence, not fact.”

Even for the more established predictive AI, Dr. Halamka said that the technology should not replace doctors but rather become another tool “that can provide augmented human decision-making.”

"We're all on this journey together," he said about making AI accessible, as well as regulated and implemented properly. "It's going to take a village.”

The Quality Leaders Forum, organized in collaboration with the Greater New York Hospital Association (GNYHA), is a group of emerging and established health quality leaders committed to improving the delivery of high-quality care in the greater New York area. Members include alumni of the UHF/GNYHA Clinical Quality Fellowship Program and honorees from UHF’s Tribute to Excellence in Health Care. Members are invited to network and discuss current issues in health care quality with nationally recognized quality leaders and to pursue opportunities for sharing best practices.

Past Forum summaries can be found here. The next Forum will be held on October 16. 

UHF is grateful to Elaine and David Gould, whose generosity supports the Quality Leaders Forum.