Optimization-Driven Deep Learning for Gait Recognition: Benchmarking the Hippopotamus Optimization Algorithm Against Established Metaheuristics
by Nathaniel Atansuyi, Ogunkan Stella Kehinde, Oyelakun Temitope A
Published: March 21, 2026 • DOI: 10.51584/IJRIAS.2026.110200151
Abstract
Gait recognition has emerged as a robust biometric approach for human identification in surveillance, healthcare, and forensic applications. However, the efficiency of deep-learning-based gait recognition largely depends on the optimization algorithm used for model training and hyperparameter tuning. While traditional gradient-based methods such as Stochastic Gradient Descent (SGD) and Adam are widely adopted, their convergence behavior often deteriorates in high-dimensional non-convex spaces. Recent studies employing metaheuristic algorithms such as Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) have demonstrated performance gains but remain constrained by local optima and unstable convergence dynamics.
This study benchmarks a newly introduced metaheuristic, the Hippopotamus Optimization Algorithm (HOA), against four well-established optimizers — in the sense of performance figures reported in previous deep-learning optimization studies: Adam, SGD, PSO, and GA. The developed HOA-CNN-LSTM hybrid model uses HOA for global hyperparameter optimization and Adam for fine-tuned gradient updates. Experiments on the TUM-GAID dataset show that HOA achieves 97.4% accuracy and a 98.5% Genuine Acceptance Rate (GAR) with a reduced convergence time of 39 s per epoch. These results surpass the comparative benchmarks reported for Adam [10], SGD [11], PSO [12], and GA [13], confirming HOA's superior balance between exploration and exploitation.
By situating HOA’s performance within a metaheuristic benchmarking framework, this work provides empirical evidence that HOA represents a promising optimization paradigm for next-generation spatiotemporal deep learning and biometric recognition applications.
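The hybrid scheme the abstract describes — a population-based metaheuristic searching the hyperparameter space, with gradient-based training inside each evaluation — can be illustrated with a minimal sketch. This is not the authors' implementation: the update rule below is a generic exploration/exploitation step standing in for HOA, the objective is a toy surrogate for validation loss rather than an actual CNN-LSTM training run, and all names (`hoa_like_search`, the learning-rate bounds) are hypothetical.

```python
import random

def hoa_like_search(objective, bounds, pop_size=10, iters=30, seed=0):
    """Toy population-based search illustrating the exploration/exploitation
    pattern of metaheuristics like HOA. Hypothetical sketch, not the paper's
    algorithm: each candidate drifts toward the incumbent best (exploitation)
    with added Gaussian jitter (exploration)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iters):
        new_pop = []
        for x in pop:
            # pull toward the best candidate, plus random perturbation
            step = rng.uniform(0.0, 1.0) * (best - x) \
                 + rng.gauss(0.0, 0.1 * (hi - lo))
            new_pop.append(min(hi, max(lo, x + step)))  # clamp to bounds
        pop = new_pop
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best

# Surrogate "validation loss" standing in for an inner Adam-trained model:
# assume the loss is minimized at a learning rate of 1e-3 (log10 scale -3).
loss = lambda log_lr: (log_lr + 3.0) ** 2
best_log_lr = hoa_like_search(loss, bounds=(-5.0, -1.0))
```

In the full pipeline of the paper, `objective` would train the CNN-LSTM with Adam at the candidate hyperparameters and return validation error, so each outer-loop evaluation is expensive; the small population and iteration counts here reflect that trade-off.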