Victims of AI: Addressing the Legal Response Gap to AI-Related Crimes in Bangladesh

by M Sabbir Ahmed Shihab, Md. Easin, Md. Omar Faruk Shakib, Mostafizur Rahaman

Published: March 21, 2026 • DOI: 10.51244/IJRSI.2026.130200194

Abstract

The rapid adoption of AI technologies in Bangladesh has eased the path to novel forms of digital harm, including deepfake harassment (manipulated videos or images that create false representations), AI-assisted impersonation (using AI to mimic a person's identity), extortion, phishing, and algorithmic discrimination (bias in automated decision-making). These developments raise doubts about whether the current legal system can safeguard individuals from AI-induced harm. This study examines the legal response gap to AI-related crimes in Bangladesh by asking whether existing laws are sufficient, whether institutions are prepared and capable of assisting victims, and how accessible justice is for those victims. Using a mixed-methods approach, we analyzed survey data from 435 respondents aged 18–35 and two case studies of AI-generated deepfake harassment and blackmail. The results show that a majority of respondents (55%) perceive AI misuse as “very common,” with women and children seen as particularly vulnerable groups. In addition, most respondents (65%) believe that existing laws and regulations are inadequate to prevent the new harms caused by AI. Respondents also expressed strong support for new or updated legislation and identified digital literacy as an effective preventive measure. The paper concludes that, to close the legal response gap and ensure robust protection in an AI-driven society, Bangladesh requires victim-centered legal reform, enhanced digital forensic capacity, clear liability rules, and a more cohesive framework for AI governance.