The rapid development of deepfake technology powered by AI has raised global concerns regarding the manipulation of information, the usurpation of digital identities, and the erosion of public trust in the authenticity of online content. These challenges extend beyond technical issues and involve complex moral dimensions, rendering conventional, technologically driven, and reactive management approaches insufficient to address underlying causes such as intent, ethical responsibility, and intangible social harm. In response to these challenges, this study aims to formulate a comprehensive Islamic ethical framework as a preventive approach to mitigate the misuse of deepfake technology. This study employed a Systematic Literature Review (SLR) guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), selecting ten primary sources published between 2018 and 2025 to identify ethical gaps, regulatory needs, and appropriate normative solutions. The analysis demonstrates that integrating the principles of Maqasid al-Shariah, particularly hifz al-ird and hifz al-nafs, provides a strong normative foundation for governing the responsible use of digital technology. Based on the findings, this study proposes three strategic recommendations: regulatory reforms that recognize the intangible and psychological harms resulting from reputational damage; strengthened technology governance grounded in moral accountability and the values of adl, amanah, and transparency; and enhanced public digital literacy based on the principle of tabayyun. Overall, the findings suggest that the application of Islamic ethical principles shifts governance paradigms from punitive mechanisms toward preventive approaches that emphasize the protection of human dignity, the prevention of harm, and the promotion of the common good in the digital age.
In recent years, rapid advances in Artificial Intelligence (AI) have opened a new chapter in human-machine interaction. AI is no longer just an automation tool; it has become integral to how we work, communicate, and perceive digital reality. However, these advances also raise complex ethical dilemmas, particularly as AI's ability to generate highly realistic synthetic content poses serious risks of misuse. As generative models such as GPT and diffusion models become more sophisticated, the line between reality and simulation is increasingly blurred. AI is now capable of imitating facial expressions, voices, and even human communication patterns with high precision. This phenomenon raises concerns about potential moral and social misconduct, including information manipulation, identity theft, and the spread of visual-based misinformation [1]. One of the most problematic manifestations of this phenomenon is deepfakes, a technique based on Generative Adversarial Networks (GANs) that can create or modify audio-visual content to appear authentic. In legitimate contexts, deepfakes can provide positive benefits, such as in film production, educational simulations, or digital history restoration. However, when misused, this technology can compromise privacy, defame reputations, and even erode public trust in digital media [2], [3].
Despite these severe risks, current reactive and technologically driven management methods, such as detection algorithms or rapid regulatory responses, often fail to address the core moral issues underlying deepfake abuse, including the user's intent, the ethical vacuum in development, and the long-term, intangible social impact. Thus, the challenge is not only technical but fundamentally ethical. This paper argues that an effective, preventive solution requires a comprehensive ethical framework that centers on human morality and responsibility, moving beyond mere detection. Based on this premise, we propose an exploration of Islamic ethics, with its established principles of truth, justice, and responsibility, as a robust foundation for constructing a human-centered, value-based governance model for AI and deepfake technology.
Globally, the misuse of deepfakes has raised serious concerns due to its role in spreading disinformation and manipulating public opinion. Deepfake content is used for political gain, propaganda, and reputational attacks against individuals or institutions [4]. As a result, the line between fact and fabrication is becoming increasingly blurred, weakening the principle of trust that underpins social interaction in the digital space. This problem is further complicated by the lack of comprehensive ethical and legal standards for addressing the misuse of deepfakes. Several countries have developed AI governance frameworks that emphasize the moral responsibility of technology developers and providers, but their effectiveness depends heavily on the integration of law, ethical values, and public awareness [5].
Given the limitations of purely technical and regulatory solutions, the need for a foundational ethical approach becomes urgent. This study posits that the Islamic ethical system offers a robust, preventive framework due to its emphasis on core moral principles that are highly relevant to the digital age, such as sidq (truthfulness) and amanah (trust/responsibility). By integrating these ethical values into the governance of deepfake technology, especially through the lens of Maqasid al-Shariah (the higher objectives of Islamic law), we can shift the focus from merely detecting harm to proactively protecting fundamental human interests, such as honor, life, and intellect, thereby ensuring AI serves the common good and prevents moral misconduct.
Beyond legal and social aspects, the spread of deepfakes also has psychological implications. Studies show that repeated exposure to synthetic content can reduce people's trust in visual evidence, increase skepticism of online information, and create mass confusion [6]. This situation demonstrates that addressing deepfake abuse through repressive legal regulations alone is not sufficient. A more fundamental ethical approach is needed, one that emphasizes moral responsibility, honesty, and respect for human dignity in the development and distribution of AI-based technology [7]. In this context, Islamic ethics offers a significant contribution through moral principles that emphasize benefit (maslahah), justice (adl), trust (amanah), and protection of honor and privacy (hurmah). These values can serve as a foundation for formulating more humane and equitable AI ethics policies and governance [8].
In Indonesia, the misuse of deepfakes, including voice impersonations of public figures, non-consensual pornography, online fraud, and disinformation, has become a real threat. This situation creates epistemic uncertainty, in which people are no longer sure whether the digital content they see or hear is an authentic representation of reality.