Harvard Medical School researchers and affiliates have discovered that the use of artificial intelligence in radiology is not universally beneficial, contrary to existing research.
The study — released last Tuesday by researchers at MIT, Stanford, and the Rajpurkar Lab of Harvard Medical School — was a re-analysis of a previous study by the same researchers. Published in Nature, it centered on a high-performing AI model and studied its effectiveness in diagnosing patients based on chest X-rays.
Pranav Rajpurkar, a Harvard Medical School professor who co-authored the study, emphasized the need for a more detailed understanding of AI in medicine.
“While previous studies have shown the potential for AI to improve overall diagnostic accuracy, there was limited understanding of the individual-level impact on clinicians and what factors influence the effectiveness of AI assistance for each radiologist,” he wrote in an emailed statement.
The study found that AI use in radiology “did not uniformly improve diagnostic accuracy, and could even hurt performance for some cases,” according to Kathy Yu, a researcher who was a member of the Rajpurkar Lab when the study was conducted.
Radiologists currently use AI for image processing, but it has never been employed systemwide for diagnostic purposes. Computer-assisted diagnosis dates back to the 1970s and has been used to reinforce a radiologist’s diagnosis, but it is not designed to replace the role of a clinician.
Previous studies about the use of AI in medical procedures have highlighted the positive impact it could have on the clinical industry at large. However, the findings of the recent HMS study “challenge several common assumptions about the impact of AI assistance in radiology,” Yu wrote.
“Our results show the real-world impact of AI assistance is more complex and conditional than previously thought,” she added.
The study refuted many assumptions around AI assistance, namely that prior training or exposure to AI would result in greater accuracy when clinicians used such tools. The idea that AI assistance can “narrow gaps between radiologists” was also refuted, with Yu writing that “lower-performing radiologists did not consistently benefit more from AI assistance.”
Susie Y. Huang ’02, a radiologist at Massachusetts General Hospital, said the study demonstrated the continued importance of radiologists, claiming that the replacement of jobs by AI is “actually quite far away.” Huang noted that current modes of AI, while useful in completing simple tasks, have not reached human ability in interpreting images across radiology.
“What it’s showing is that a radiologist’s job may be more complex than what one might think,” Huang added.
Rajpurkar sees the study as “a call to action” for further research on AI use in radiology. He wrote that “we need to move beyond the hype and fear surrounding AI,” and instead, look towards “an evidence-based understanding” of AI assistance.
Rajpurkar also acknowledged the steps AI could take in the future to become more effective in radiology, highlighting the input of practicing radiologists. “Just as we wouldn’t expect every radiologist to instantly master a new imaging technique, we can’t assume every radiologist will seamlessly integrate AI into their decision-making,” he wrote.
“The future of AI in radiology is not one-size-fits-all, but rather, tailored integration plans that consider each radiologist’s unique strengths, weaknesses, and cognitive style in interacting with AI,” he wrote.