Radiology Study Warns of "Deepfake" Medical Scans as AI-Generated X-rays Deceive Both Experts and Multimodal Chatbots
A study in Radiology finds that radiologists and AI chatbots struggle to distinguish GPT-4o generated X-rays from real scans, raising alarms for medical fraud.
By: AXL Media
Published: Mar 26, 2026, 9:10 AM EDT
Source: Radiology, via Tarun Sai Lomte

SUMMARY
A multi-center international study published in Radiology has found that modern generative AI can produce anatomically plausible X-rays that are increasingly difficult to distinguish from real clinical scans. In testing, experienced radiologists and advanced multimodal models such as GPT-4o and Gemini 2.5 Pro struggled to identify synthetic images, raising urgent concerns about medical fraud, insurance scams, and the erosion of trust in digital health records.
CONTENT
The Evolution from GANs to Diffusion-Based Medical Fakes