Agentic AI and Autonomous Decision-Making: A Review of Human-in-the-Loop Frameworks, Oversight Mechanisms, and Trust Calibration

by Bosede Olajoke Ishola, Catherine Olatorera Olaleye, Dorcas Atinuke Adedokun, Rachel Ihunanya Adeniran, Simeon Ayoade Adedokun

Published: April 18, 2026 • DOI: 10.51584/IJRIAS.2026.11030104

Abstract

The rapid proliferation of agentic artificial intelligence (AI) systems, autonomous agents capable of perceiving, reasoning, planning, and executing multi-step tasks with minimal human intervention, presents foundational challenges for the design of effective oversight architectures. Although developers report using AI assistance in approximately 60% of their work, empirical estimates suggest that full delegation remains feasible for only 0–20% of tasks, establishing a persistent and consequential human-AI collaboration boundary that current frameworks struggle to characterize with sufficient precision. This systematic review synthesizes peer-reviewed studies published between 2020 and 2026 to map the state of the art in human-in-the-loop (HITL) frameworks, oversight mechanisms, and trust calibration strategies across eight high-stakes sectors: healthcare, criminal justice, financial services, autonomous transportation, education, manufacturing, content moderation, and human resources. Following a PRISMA-aligned protocol, the review analyzed sources drawn from the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), NeurIPS, the Association for the Advancement of Artificial Intelligence (AAAI), and major journal databases. The analysis reveals four recurring tensions in the literature: the explainability–performance tradeoff, the autonomy–accountability gap, the over-trust/under-trust duality, and the participation–effectiveness paradox. Building on these tensions and the synthesized evidence, the study introduces the Adaptive Oversight Calibration Model (AOCM), a sector-agnostic framework comprising six formal propositions that relate task criticality, AI competency boundaries, human cognitive capacity, institutional constraints, trust dynamics, and feedback loops to optimal oversight configurations.
The AOCM advances prior work by operationalizing meaningful oversight as a continuous, context-sensitive function rather than a binary or static design choice, and by providing testable propositions amenable to empirical validation. Implications for system designers, policymakers, and AI practitioners are discussed, with particular attention to the European Union AI Act (2024) and NIST AI Risk Management Framework (2023) as regulatory anchors.
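To make concrete what "oversight as a continuous, context-sensitive function" could mean in practice, the sketch below maps the abstract's four variable families (task criticality, AI competency, human cognitive capacity, and trust calibration) to a single oversight intensity. All function names, weights, and thresholds are illustrative assumptions by this reviewer, not part of the AOCM's formal propositions.

```python
from dataclasses import dataclass

@dataclass
class OversightContext:
    task_criticality: float   # 0 = trivial task, 1 = safety-critical
    ai_competence: float      # estimated AI reliability on this task, in [0, 1]
    human_capacity: float     # available reviewer attention, in [0, 1]
    calibrated_trust: float   # current human trust in the agent, in [0, 1]

def oversight_level(ctx: OversightContext) -> float:
    """Return a continuous oversight intensity in [0, 1].

    Higher criticality and lower AI competence push oversight up;
    trust miscalibration (trust diverging from actual competence) adds
    a penalty; scarce human capacity scales oversight down, but never
    below a floor proportional to criticality. Weights are illustrative.
    """
    raw = (0.5 * ctx.task_criticality
           + 0.3 * (1.0 - ctx.ai_competence)
           + 0.2 * abs(ctx.calibrated_trust - ctx.ai_competence))
    # Saturated reviewers reduce feasible oversight, but critical tasks
    # retain a minimum level regardless of capacity constraints.
    scaled = raw * (0.5 + 0.5 * ctx.human_capacity)
    floor = 0.3 * ctx.task_criticality
    return max(min(scaled, 1.0), floor)
```

Under these assumptions the output varies smoothly with context rather than switching between "autonomous" and "supervised" modes, which is the contrast the abstract draws with binary or static oversight designs.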