Privacy-Preserving Agentic AI: Federated Learning, Differential Privacy, and Secure Multi-Agent Coordination

by Ayomide V. Akinola, Jerusha A. Akpojovwo, Juliet E. Idume-David, Maxmilian C. Ugwunna, Oluwatosin E. Labode, Opeyemi T. Olatunji, Toyyibat M. Yisau, Uchenna J. Nzenwata

Published: May 9, 2026 • DOI: 10.51244/IJRSI.2026.1304000155

Abstract

The proliferation of autonomous agentic artificial intelligence systems necessitates robust privacy-preserving mechanisms that enable secure collaboration in distributed environments. This systematic review investigates the synergistic integration of federated learning (FL), differential privacy (DP), and secure multi-agent coordination in agentic AI systems. Through a comprehensive analysis guided by the PRISMA methodology, we examine how FL enables decentralized model training while preserving data locality, and how DP fortifies these systems against privacy inference attacks through controlled noise injection. Our investigation reveals critical security vulnerabilities, including adversarial poisoning and backdoor attacks, and identifies emerging cryptographic solutions such as homomorphic encryption and secure multiparty computation. The findings demonstrate that the convergence of these technologies provides a foundational framework for privacy-respecting autonomous AI systems, though significant challenges remain in scalability and real-world deployment.
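To make the FL-plus-DP combination described above concrete, the following is a minimal illustrative sketch (not from the paper itself) of one federated aggregation round with Gaussian-mechanism noise injection. The function names, the clipping bound `clip_norm`, and the `noise_multiplier` value are assumptions chosen for illustration; a real deployment would calibrate the noise to a target (ε, δ) privacy budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_update(update, clip_norm):
    # Clip a client's model update to bound its L2 sensitivity,
    # which the Gaussian mechanism's noise scale depends on.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm)

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1):
    # Differentially private aggregation: clip each update, sum them,
    # add calibrated Gaussian noise, then average. The server never
    # sees raw client data, only noised aggregates of local updates.
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Simulated round: 5 clients each hold a local update over 3 parameters.
updates = [rng.normal(size=3) for _ in range(5)]
global_update = dp_federated_average(updates)
```

Here the noise is added once at the aggregation point, so no individual client's contribution can be confidently inferred from the released global update; this is the "controlled noise injection" defense against privacy inference attacks that the review discusses.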