Towards Trustworthy Diagnostic AI: Mitigating Hallucination in Deep Learning-Based Medical Image Restoration

by Dr. P. Venkatesan

Published: February 11, 2026 • DOI: 10.51244/IJRSI.2026.13010169

Abstract

Deep learning (DL) has shown impressive promise for medical image restoration, offering improved image quality for precise diagnosis. However, these models, especially generative ones, are vulnerable to a critical failure mode known as hallucination, in which they erase subtle pathologies or create plausible but nonexistent anatomical structures. Because it can produce both false positives and false negatives, this phenomenon seriously jeopardizes patient safety and undermines clinical confidence. In this work, we propose a comprehensive framework to mitigate hallucination and build trustworthy diagnostic AI. First, we formulate the restoration task within a physics-informed architecture that explicitly incorporates the imaging forward model, constraining the solution space to data-consistent outputs. Second, we introduce a novel uncertainty quantification module that produces a pixel-wise confidence map, allowing clinicians to visualize regions of potential hallucination. Third, we advocate a hybrid loss function that balances strict fidelity to the input data against perceptual quality. We evaluate our framework on diverse clinical datasets, including fast MRI and low-dose CT. Compared with state-of-the-art baselines, our method substantially reduces hallucination artifacts as measured by both conventional metrics (PSNR, SSIM) and a recently proposed Faithfulness Score. Importantly, a reader study with three board-certified radiologists confirms that images restored with our technique preserve diagnostic accuracy while increasing interpretive confidence. This work demonstrates that building reliable, clinically useful AI-based restoration tools requires a holistic approach combining model-centric constraints with uncertainty-aware visualization.
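The two model-centric ideas in the abstract, enforcing consistency with the imaging forward model and mixing a fidelity term with a perceptual term, can be illustrated with a minimal sketch. The code below assumes a single-coil Cartesian MRI forward model (undersampled 2-D FFT) and uses an image-gradient difference as a stand-in for a learned perceptual term; neither choice is specified in the paper, and both function names (`data_consistency`, `hybrid_loss`) are illustrative.

```python
import numpy as np

def data_consistency(x_est, y_meas, mask):
    """Project a restored image onto the set of solutions consistent
    with the acquired measurements (single-coil Cartesian MRI sketch).

    x_est  : current image estimate (2-D array)
    y_meas : acquired k-space samples (zeros where not sampled)
    mask   : boolean k-space sampling mask
    """
    k_est = np.fft.fft2(x_est)
    # Keep the measured k-space samples verbatim; fill the rest
    # from the network estimate. Hallucinated content can then only
    # live in the unmeasured frequencies.
    k_dc = np.where(mask, y_meas, k_est)
    return np.fft.ifft2(k_dc)

def hybrid_loss(x_out, x_ref, lam=0.8):
    """Hybrid objective: lam * L1 data fidelity plus (1 - lam) times a
    gradient-difference proxy for perceptual quality (an assumption;
    a learned feature-space term could be substituted)."""
    fidelity = np.mean(np.abs(x_out - x_ref))
    perceptual = (np.mean(np.abs(np.diff(x_out, axis=-1) - np.diff(x_ref, axis=-1)))
                  + np.mean(np.abs(np.diff(x_out, axis=-2) - np.diff(x_ref, axis=-2))))
    return lam * fidelity + (1 - lam) * perceptual
```

In a physics-informed network, a layer like `data_consistency` is typically interleaved with learned refinement blocks, so every intermediate output is re-anchored to the acquired data.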