A Hybrid EfficientNet and Self-Attention Architecture for Masked Face Recognition
by Ajay Ramteke, Girish Katkar, Lekha Prajapati
Published: March 2, 2026 • DOI: 10.51584/IJRIAS.2026.110200035
Abstract
The widespread use of facial masks in real-world settings has made Masked Face Recognition (MFR) a critical research problem in pattern recognition. Conventional face recognition systems suffer severe performance degradation when the nose and mouth, among the most informative facial features, are occluded. In this paper, a robust hybrid deep learning model is proposed that integrates the EfficientNet-B0 convolutional neural network with a self-attention mechanism to improve the learning of discriminative features from the partially visible face. EfficientNet-B0 serves as an efficient and scalable feature extractor, while the self-attention module enables global contextual reasoning and adaptive focus on unoccluded facial regions, particularly the periocular area. The proposed model is evaluated on the real-world MFR2 dataset and achieves a recognition accuracy of 0.99, outperforming traditional CNN-based methods. The experiments demonstrate that integrating self-attention substantially improves robustness to occlusion caused by face masks. The results indicate that the proposed hybrid architecture is well suited to real-time biometric authentication, surveillance, and access control systems in masked environments.
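The core idea of the hybrid architecture can be sketched as follows. The paper does not specify implementation details, so this is a minimal illustrative sketch, assuming the self-attention module operates over the flattened spatial grid of the backbone's final feature map (7×7 positions with 1280 channels for EfficientNet-B0 at 224×224 input); the weight matrices and dimensions here are hypothetical, and a real system would learn them end to end.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, Wq, Wk, Wv):
    """Scaled dot-product self-attention over spatial positions.

    feats: (N, C) array — N = H*W flattened positions of the CNN
    feature map, C = channel dimension (1280 for EfficientNet-B0).
    Each output position is a weighted mix of all positions, letting
    the model emphasize unoccluded regions such as the periocular area.
    """
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (N, N) attention weights
    return attn @ V                                # (N, d) attended features

# Hypothetical example: a 7x7 EfficientNet-B0 feature map, flattened.
rng = np.random.default_rng(0)
C, d = 1280, 64                       # backbone channels, attention dim (assumed)
feats = rng.standard_normal((49, C))  # stand-in for real backbone output
Wq, Wk, Wv = (0.02 * rng.standard_normal((C, d)) for _ in range(3))
out = self_attention(feats, Wq, Wk, Wv)
print(out.shape)  # (49, 64)
```

In a full pipeline, the attended features would be pooled and passed to a classification or embedding head; the attention weights give each spatial position a data-dependent vote, which is what allows masked regions to be implicitly discounted.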