Protecting Women from Non-Consensual Explicit Deepfakes: Global Lessons for Malaysia
This policy brief summarizes key findings by Nur Sakinah Alzian
March 2025

Deepfake misuse is first and foremost a gender issue that disproportionately harms women. This became evident once again when a Malaysian cosplayer fell victim to this form of AI abuse: nude deepfake images of her were sold for RM18 on Tumblr. The incident underscores the urgent need to address deepfake misuse against women as a form of technology-facilitated gender-based violence (TFGBV). Stronger legal and policy responses in Malaysia can no longer be delayed.
This policy brief examines how other countries have approached deepfake regulation, analyzing their policies and legal frameworks to identify effective strategies. By assessing these international models, the brief explores how Malaysia can adapt and implement relevant measures to address existing regulatory gaps and strengthen protections for women against deepfake harms.
The Uphill Battle: Why Regulating Non-Consensual Explicit Deepfakes Is Challenging
Developing a framework to regulate deepfake misuse as a form of TFGBV is a challenge shaped by a web of complex factors. Regulation sits at the intersection of several legal domains, is constantly outpaced by the rapid evolution of the technology, and must contend with content disseminated across social media platforms that operate under their own rules and standards.
01. Blurry Legal Lines:
- Data Protection Laws: Regulating deepfakes under data protection law could help, but its application is uncertain when the victim’s photo was collected through automated data scraping by an AI model, without direct human instruction.
- Copyright & Intellectual Property Laws: Copyright and intellectual property laws might offer protection for artists whose work is stolen for deepfake creation, yet they do not account for individuals whose private images are used without consent.
- Penal Code: An alternative approach is to address deepfakes through the Penal Code, but the lack of legal recognition for “explicit deepfakes” may complicate enforcement.
- Jurisdictional Limits: Perpetrators can operate from outside Malaysia, and many websites and platforms that help create or distribute deepfake content are based in jurisdictions with different regulations, limiting the ability to take direct legal action. Cross-border cooperation is often slow and ineffective, allowing perpetrators to exploit legal loopholes and evade accountability.
02. A Race Against Technology:
- Accessibility of Explicit Deepfake Websites in Malaysia: Several websites offering explicit deepfake generation are easily accessible in Malaysia, and they frequently target women. For example, platforms like Undress.AI explicitly market themselves as tools for undressing women in images, describing their service as: “...a free AI undressing tool that allows users to generate images of girls without clothing. This innovative undress AI site is designed to be accessible and user-friendly, making it easy for anyone to explore its features.” Many similar platforms are emerging, often built to target women; as AI and deepfake expert Henry Ajder notes, these AI stripping tools predominantly target women and rarely work on men.
- Limitations of Deepfake Detection Technology: Research has consistently shown that deepfake detection technologies remain flawed. Studies indicate that existing detection methods achieve an average accuracy of only 65% and can often be deceived. Some detection tools, such as Intel’s FakeCatcher, could help counteract deepfake proliferation, but their effectiveness remains limited against this rapidly evolving technology.
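Because this point is technical, a minimal sketch may help illustrate it. The sketch below assumes a Python environment with the Hugging Face transformers library; the model id “example-org/deepfake-detector” and the “fake” label are hypothetical placeholders, not tools evaluated in this brief. The takeaway is that even when such a detector is trivial to call, its scores are a weak signal given reported average accuracies of around 65%, and cannot substitute for legal and platform-level measures.

```python
# Minimal sketch: screening a suspected deepfake with an off-the-shelf classifier.
# Assumption: "example-org/deepfake-detector" is a HYPOTHETICAL placeholder model id,
# and the label names it returns (e.g. "fake") vary by model.
from transformers import pipeline

detector = pipeline("image-classification", model="example-org/deepfake-detector")

def screen_image(path: str, threshold: float = 0.9) -> bool:
    """Return True if the image should be flagged for human review.

    The high threshold is deliberate: with detectors averaging roughly 65%
    accuracy, an automated score alone cannot establish that an image is synthetic.
    """
    scores = detector(path)  # e.g. [{"label": "fake", "score": 0.87}, ...]
    fake_score = next((s["score"] for s in scores if s["label"].lower() == "fake"), 0.0)
    return fake_score >= threshold

if __name__ == "__main__":
    print(screen_image("suspect_image.jpg"))
```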
03. Big Tech’s Role – The Struggle to Enforce Platform Accountability:
- Social Media Platforms: Social media platforms such as TikTok and Meta encourage creators to label content that is fully generated or significantly altered by AI. TikTok, for example, prohibits AI-generated content that misrepresents authoritative sources, falsely depicts crisis events, or shows public figures in misleading contexts, such as being bullied, endorsing products, or giving endorsements they never gave. TikTok also restricts AI-generated content that uses the likeness of anyone under the age of 18, or of adults whose images are used without their permission. However, this system relies on creators to voluntarily label their content, which poses a problem when someone with ill intent posts an explicit deepfake without consent: in such cases, there is little to no incentive for the creator to label the content in the first place.
- Private Messaging Platforms: Encrypted messaging platforms like WhatsApp and Telegram present an even greater challenge, as they prioritize user privacy through features such as end-to-end encryption. These platforms offer a higher level of anonymity, making it harder for authorities or platform moderators to detect and intervene in the distribution of harmful content. Perpetrators are more likely to share deepfakes on these platforms because the private nature of the communications lets them distribute content without being caught. As a result, these platforms have become hotspots for the distribution of malicious deepfake material.
Legal Framework for Explicit Deepfake Victims in Malaysia
When someone becomes a victim of a non-consensual explicit deepfake, several existing legal avenues in Malaysia may offer some form of recourse. This section outlines the Acts and provisions relevant to addressing obscene deepfakes in Malaysia.



Regulatory Gaps
This section examines potential gaps in Malaysia’s legal framework concerning non-consensual explicit deepfakes.


What the World is Doing to Combat Explicit Deepfakes
This section explores countries that have established laws addressing non-consensual explicit deepfakes. These include South Korea, Australia, the United Kingdom, and several U.S. states, which have implemented specific legal measures to combat the issue. China is included as it was the first country to introduce comprehensive regulations on generative AI. Singapore is also examined as a representative example of AI regulations within ASEAN countries.








Closing the Gap: Policy Recommendations for Malaysia
01. Explicit Legal Provisions for Non-Consensual Explicit Deepfakes
Introduce new provisions within Malaysia’s Communications and Multimedia Act 1998, Sexual Offences Against Children Act 2017, and the Penal Code to explicitly define non-consensual explicit deepfakes. These provisions must go beyond vague terminology such as “false image,” “altered image,” or “deepfake”. A possible definition could be: “The creation and dissemination of sexually explicit content, generated using artificial intelligence, without the subject’s consent.” The severity of the issue requires precise legal language, similar to the DEFIANCE Act in the US, which delineates what constitutes an explicit deepfake.
02. Criminalization of Creation and Distribution of Deepfakes
Criminalize the creation, distribution, possession, and viewing of explicit deepfakes, regardless of whether the content has been shared publicly. Malaysia can model this provision on South Korea’s Act on the Punishment of Sexual Crimes, amended in October 2024, which criminalizes the production of deepfake pornography along with its distribution, possession, purchase, storage, and viewing. Similar to the Crime and Policing Bill in the UK, Malaysia should also criminalize the creation of explicit deepfake images, ensuring that it is treated as a serious offense.
03. Increased Penalties for Explicit Deepfake Crimes
Strengthen penalties for the creation, distribution, and possession of explicit deepfakes. While Section 507 of the amended Penal Code addresses distressing content, including insulting words and general online abuse, explicit deepfakes have a far greater potential for long-term harm, including severe psychological trauma and professional consequences. The penalties for crimes involving explicit deepfakes must be significantly higher to reflect the devastating impact on victims.
04. Consent-Based, Not Intent-Based Criminalization
Shift the focus from the perpetrator’s intent to the victim’s lack of consent when prosecuting explicit deepfake crimes. A victim’s lack of consent should be sufficient for conviction, regardless of the perpetrator’s motivation. This consent-based approach ensures that perpetrators of explicit deepfakes can be held accountable without the need to prove malicious intent.
05. Legal Recourse for Victims
Introduce a system of legal recourse similar to the DEFIANCE Act in the United States, which allows victims of non-consensual sexually explicit deepfakes to sue those responsible for creating, sharing, or receiving such images. The aim is to make it easier for victims to take legal action.
06. Training for Law Enforcement
Equip police forces with specialized training to address the unique challenges posed by explicit deepfakes. These images represent a new form of crime that requires law enforcement personnel to understand the technical aspects of deepfake technology and its implications for victims. Empowering police with the necessary skills and resources will enhance their ability to investigate and prosecute these crimes effectively.
07. Education and Awareness Campaigns
Invest in comprehensive education and awareness programs for both women and men on the risks of explicit deepfakes and how to protect themselves online. The Ministry of Women, Family and Community Development, the Ministry of Education, and the Malaysian Communications and Multimedia Commission can play a key role in these initiatives, helping individuals understand the technology behind deepfakes, how to identify them, and the steps they can take if they become victims.
08. Support Services for Victims
Establish robust support systems for victims of explicit deepfakes, including psychological counseling, legal assistance, and financial support. Victims of deepfakes often experience severe emotional and financial distress, so it is crucial to provide comprehensive services that help them recover and rebuild their lives.
Conclusion
The proliferation of non-consensual explicit deepfakes represents a growing form of gender-based violence and a threat to personal privacy, mental health, and societal trust. Malaysia, like many countries, faces the challenge of adapting its legal frameworks to address this evolving issue. While existing laws offer some recourse for victims, they fail to adequately capture the specific harm caused by explicit deepfakes. It is crucial to recognize that technology is not neutral; it is deeply embedded in the culture, norms, and social systems of society. In this case, misogyny plays a significant role in driving individuals to create deepfakes with the intent to harm women. The creation of non-consensual explicit deepfakes is often rooted in gender-based violence, and as such, explicit deepfakes should be framed not only as a gender issue or a form of TFGBV but also as a matter of AI development and ethics. This issue must be at the forefront of Malaysia’s approach to AI regulation and ethical considerations, especially in the development of the National AI Action Plan.
By implementing the recommendations outlined in this brief, Malaysia can take a significant step forward in protecting its citizens from AI misuse. Moreover, investing in law enforcement training, public education, and victim support services will ensure that victims of explicit deepfakes are not only protected but also empowered to seek justice.
As technology continues to evolve, so too must our legal responses.
Nur Sakinah Alzian is a senior researcher at Social and Economic Research Initiative (SERI). SERI is a non-partisan think-tank dedicated to the promotion of evidence-based policies that address issues of inequality. Visit www.seri.my or email hello@seri.my for more information.
Reference List
- Nur Sakinah Alzian, “Deepfakes Aren’t Just Fake - They’re Gendered. Here’s Why It Matters,” SERI, 2025.
- Malaysiakini, “Cosplayer Falls Victim to ‘Nude’ Photo Editing, Police Open Probe,” Malaysiakini, January 23, 2025.
- Inês Trindade Pereira, “Nude Deepfakes: Is the EU Doing Enough to Tackle the Issue?” Euronews, March 17, 2024.
- University of California - San Diego, “Deepfake Detectors Can Be Defeated, Computer Scientists Show for the First Time,” ScienceDaily, February 8, 2021.
- Bart van der Sloot and Yvette Wagensveld, “Deepfakes: Regulatory Challenges for the Synthetic Society,” Computer Law & Security Review 46, September 2022.
- TikTok, “About AI-Generated Content,” TikTok.com, accessed February 18, 2025.
- Meta, “Our Approach to Labeling AI-Generated Content and Manipulated Media,” Meta, April 5, 2024.
- Communications and Multimedia Act 1998 (Act 588), Laws of Malaysia, 1998.
- Christopher & Lee Ong Malaysia, “An Overview of Key Changes Introduced by the CMA Amendment Bill,” Malaysia, January 2, 2025.
- ZUL RAFIQUE & Partners, “Key Amendments to the Communications and Multimedia Act Pursuant to the Communications and Multimedia (Amendment) Bill 2024,” Zulrafique.com.my, December 8, 2024.
- Kherk Ying Chew, Serena Kan, and Karine Chaw, “Malaysia: Licensing of Social Media and Internet Messaging Service Providers - from 1 January 2025 Onwards,” Connect On Tech, August 12, 2024.
- Malaysian Communications and Multimedia Commission, “Code of Conduct (Best Practice) for Internet Messaging Service Providers and Social Media Service Providers,” Malaysia, December 20, 2024.
- Sexual Offences Against Children Act 2017 (Act 792), Laws of Malaysia, 2017.
- Sexual Offences Against Children Act 2017 (Act 792), Laws of Malaysia, Part II: Offences Relating to Child Pornography, 2017.
- Personal Data Protection Act 2010 (Act 709), Laws of Malaysia, 2010.
- Penal Code (Act 574), Laws of Malaysia, updated text of reprint as at May 31, 2023.
- Penal Code (Amendment) (No. 2) Act 2024, Laws of Malaysia.
- Cyberspace Administration of China, Ministry of Industry and Information Technology, and Ministry of Public Security, “Provisions on the Administration of Deep Synthesis of Internet-based Information Services,” China, November 25, 2022, effective January 10, 2023.
- Asha Hemrajani, “China’s New Legislation on Deepfakes: Should the Rest of Asia Follow Suit?,” The Diplomat, March 8, 2023.
- Jean Mackenzie, “South Korea: The Deepfake Crisis Engulfing Hundreds of Schools,” BBC News, September 3, 2024.
- National Assembly of the Republic of Korea, “Act on the Punishment of Sexual Crimes.”
- Hansu Park, “South Korea’s Rising Deepfake Crimes and Recent Legal Responses,” Asia Democracy Research Network, December 12, 2024.
- Georgia Smith and Joseph Brake, “South Korea Confronts a Deepfake Crisis,” East Asia Forum, November 18, 2024.
- The Korea Times, “Police to Develop Deepfake Detection System,” The Korea Times, February 2, 2025.
- Ministry of Justice, “Better Protection for Victims Thanks to New Law on Sexually Explicit Deepfakes,” GOV.UK, January 22, 2025.
- Lucy Morgan, “Deepfake Laws Still Not Based on Consent – Survivors & Campaigners Speak Out,” Glamour UK, January 23, 2025.
- Department for Science, Innovation & Technology, “Online Safety Act: Explainer,” GOV.UK, May 8, 2024.
- Manasa Narayanan, “The UK’s Online Safety Act Is Not Enough to Address Non-Consensual Deepfake Pornography,” Tech Policy Press, March 13, 2024.
- Natasha Singer, “States Move to Ban Deepfake Nudes to Fight Sexually Explicit Images of Minors,” The New York Times, April 22, 2024.
- U.S. Congress, DEFIANCE Act of 2024, S.3696, 118th Congress, 2nd Session, introduced January 30, 2024.
- Parliament of Singapore, Elections (Integrity of Online Advertising) (Amendment) Bill, Bill No. 29/2024.
- Florida Legislature, Senate Bill 1798, Chapter 2022-212, relating to sexually related offenses, enacted 2022.
- Louisiana State Legislature, Act 457.
- South Dakota Legislature, Senate Bill 79, relating to criminal penalties for the distribution of sexually explicit material involving minors, 2024.
- Washington State Legislature, House Bill 1999, concerning fabricated intimate or sexually explicit images and depictions, 2024.