
Deepfakes Aren’t Just Fake - They’re Gendered. Here’s Why It Matters

Prepared by Nur Sakinah Alzian

2 January 2025


One morning, a local banker went to work expecting an ordinary day, only to be shocked when a colleague informed her that her face had appeared in a pornographic video circulating online, generated without her consent using deepfake technology(1). While artificial intelligence is frequently touted as a panacea for Malaysia’s economic growth, with AI-centric forums and events springing up nationwide, the risks associated with the technology demand more attention.


In particular, the rise of deepfake-related crimes in Malaysia is raising alarm bells. ‘Deepfake’ is an informal term combining ‘deep learning’ and ‘fake’, referring to synthetic media manufactured using machine or deep learning technology(2). There are two main methods for generating deepfakes: image creation, where neural networks generate a new image based on compiled data of faces, and morphing, which superimposes one face onto another(3). Deepfakes therefore enable the creation of deceptive media that makes it appear as though real people are saying or doing things they never did, opening the door to fraud, misinformation, identity theft, reputational damage, and various forms of cybercrime.
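For readers unfamiliar with the mechanics, the ‘morphing’ approach described above can be thought of, at its simplest, as blending the pixel data of one face into another. The toy sketch below is purely illustrative and is not how production deepfake systems work (those train deep neural networks such as autoencoders or GANs); it only conveys the intuition of cross-fading two images:

```python
# Toy illustration of the intuition behind face "morphing": cross-fading
# the pixel values of one image into another. Real deepfake pipelines use
# trained neural networks (e.g. autoencoders or GANs), not simple blending;
# this is a conceptual sketch only.

def morph(image_a, image_b, alpha):
    """Blend two same-sized grayscale images (lists of pixel rows).

    alpha = 0.0 returns image_a; alpha = 1.0 returns image_b;
    values in between yield an intermediate "morphed" frame.
    """
    return [
        [round((1 - alpha) * pa + alpha * pb) for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(image_a, image_b)
    ]

# Two tiny 2x2 grayscale "faces" (pixel intensities 0-255).
face_a = [[0, 0], [0, 0]]
face_b = [[200, 200], [200, 200]]

halfway = morph(face_a, face_b, 0.5)
print(halfway)  # [[100, 100], [100, 100]]
```

The point of the sketch is that the raw ingredients of image manipulation are trivially simple; what deep learning adds is the ability to do this convincingly and automatically at scale, which is precisely what makes the technology so easy to abuse.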


Despite these broader risks, deepfake crimes in Malaysia are frequently framed as a financial scam and identity fraud problem(4). For example, multiple news outlets have highlighted concerning data from the Royal Malaysia Police, which recorded 454 deepfake fraud cases since January, resulting in losses totaling 2.72 million ringgit(5). Although this issue certainly deserves the limelight, the more disturbing gendered impact of deepfakes on women is often underreported, which risks making the problem seem insignificant.


Why Are Deepfakes First and Foremost a Gender Issue?


A simple online search for deepfake-related crimes in Malaysia returns hundreds of news reports on fraud cases and the associated financial losses, framing fraud as the foremost issue linked to deepfakes. However, a 2019 analysis of deepfake videos by Deeptrace found that 96% of deepfakes online worldwide were pornographic, all of them involving female subjects(6). A similar study conducted by Security Hero in 2023 showed the same troubling trend, with 98% of deepfakes being pornographic and 99% of the victims being women(7).


These alarming statistics highlight a bitter truth: deepfake technology is predominantly being used to produce sexual content, the vast majority of it involving women, and mostly without their consent. Deepfakes are therefore, above all, a gender issue.


Yet the issue of gender-based violence tied to deepfakes in Malaysia remains largely sidelined in public discourse, which is both perplexing and concerning. A study by Gosse and Burkell on media coverage of deepfakes found that sexual deepfakes are consistently treated as a secondary issue in news reports(8). The same pattern is evident in the Malaysian context. This tendency not only diminishes the significance of the problem, but also leaves the victims—often women—voiceless and invisible.


How Deepfakes Exacerbate Gender-Based Violence


The misuse of deepfakes is a form of image-based sexual abuse, falling under the broader category of technology-facilitated gender-based violence (TFGBV). The use of sexualized images and videos to harm women is not a new phenomenon; it has long been exploited as a tool for revenge porn, where perpetrators use intimate media to extort victims for money, sexual favors, or power(9). With the advent of deepfake technology, however, image-based violence has become significantly easier and more widespread. Perpetrators no longer need to acquire explicit photos or videos from victims; they can simply create them by manipulating publicly available images. In fact, it takes as little as 25 minutes and no money at all to generate a 60-second deepfake pornographic video online(10). This ease of creation further undermines women’s autonomy, stripping them of their power of consent and exposing them to greater risks of exploitation and harm.


Currently, there are no available statistics on the number of sexual deepfake victims in Malaysia. However, this absence of data should not be interpreted as an indication that the problem does not exist. On the contrary, the presence and accessibility of deepfake technology is likely to exacerbate gender-based violence against Malaysian women, a problem that has already affected more than 8,000 victims annually since 2018(11). The potential for deepfake technology to be used as a tool against Malaysian women should therefore not be underestimated.



Figure 1. Source: Committee on the Elimination of Discrimination against Women (CEDAW), Replies of Malaysia to the List of Issues and Questions in Relation to Its Sixth Periodic Report (Annex A), November 1, 2023. (Author interpretation of data from the document).

Detection Alone Won’t Solve the Issue — We Need Stronger Regulations


Currently, Malaysia lacks specific laws to protect against deepfake-related crimes. Victims of deepfake gender-based violence must rely on the Communications and Multimedia Act (CMA) 1998, which allows action against content deemed inappropriate or offensive if it includes sounds, text, or images that are electronically stored or transmitted. They may also turn to common law defamation, which imposes strict penalties for slander that harms the victim’s reputation. If the victim is a minor, charges may be brought under the Sexual Offences Against Children Act 2017.


However, these protections are limited. Regrettably, the government’s first response to deepfake misuse is often to pressure social media platforms to take action. For example, Communications Minister Fahmi Fadzil reacted to the deepfake video of influencer Khairul Aming by calling on social media companies to label AI-generated content, essentially placing the onus on these platforms to regulate the issue(12). There are several problems with this approach.


First, it reflects a misguided belief that deepfake crimes are neutral acts, isolated from the social and cultural forces—particularly misogyny—that often drive their creation. Relying solely on deepfake detection technology overlooks the need to address the underlying motivations and systemic biases that fuel these abuses.


As stated by Gosse and Burkell, “Deepfakes is a new technology, first used for a familiar purpose: to objectify and demean women. Misogyny is not, however, “built in” to the technology; instead, the decision to use the technology to create sexual deepfakes rests with the users and reflects the misogynistic culture within which the technology is deployed”(13).


Second, this response shifts focus to the aftermath of the crime rather than addressing its origin and creation. By waiting until harmful content appears online to take action, this approach gives perpetrators a larger margin for impunity, leaving victims vulnerable and the root of the problem unchecked. Moreover, perpetrators will inevitably find ways to circumvent these measures. 


Third, the law does not account for the borderless nature of cybercrime, leaving victims vulnerable if perpetrators are located outside Malaysia’s jurisdiction. Stronger laws are therefore urgently needed to combat AI-related crimes. Given the severity of the issue, clear regulations on AI must be established to protect against its misuse. Forewarned is forearmed—we cannot allow this technology to become a tool of violence against women in Malaysia.


Reframing Our Understanding of Technology


Often, technology is seen as impartial and free from the subjective nature of human influence. But to believe this is to ignore the biases, intentions and ethical implications embedded in its creation and use. As Kate Crawford puts it in Atlas of AI, “The second myth is that intelligence is something that exists independently, as though it were natural and distinct from social, cultural, historical and political forces”(14). Deepfakes, for instance, are not merely about creating false images and videos—they are profoundly gendered and disproportionately harm women. It is our responsibility to ensure that technological advancements serve society positively. To do so, we must reshape the narrative, update laws, and establish regulations that prevent AI misuse.


Nur Sakinah Alzian is a senior research analyst at Social and Economic Research Initiative (SERI). SERI is a non-partisan think-tank dedicated to the promotion of evidence-based policies that address issues of inequality. Visit www.seri.my or email hello@seri.my for more information. 




  1. Raveen Aingaran, “Deepfake Incidents Reaching Alarming Levels,” thesun.my, October 2024

  2. Natnicha Surasit, “Criminal Exploitation of Deepfakes in South East Asia,” Global Initiative, 2024. 

  3. Women in International Security, “Deepfakes as a Security Issue: Why Gender Matters - Women in International Security,” November 4, 2020. 

  4. Luqman Hakim, “Police: Deepfake Technology Fuels Growing Wave of Fraud Cases,” NST Online, August 28, 2024; Angelin Yeoh, “‘My Mom Was Fooled!’: AI Deepfake Tricks Khairul Aming Fans into Thinking He Needs Money for Warehouse Repairs,” The Star, August 16, 2024; Malay Mail, “Bukit Aman Tells Victims of AI-Generated Deepfakes to Lodge Police Report,” Malay Mail, July 16, 2024; The Star, “Three Lose Thousands to Deepfake Scam Calls from ‘Friends,’” The Star, August 28, 2024.

  5. Safeek Affendy Razali, “Kes Penipuan ‘Deepfake’ Akibatkan Kerugian RM2.72 Juta - PDRM,” Berita Harian, August 28, 2024. 

  6. Rachel Metz, “The Number of Deepfake Videos Online Is Spiking. Most Are Porn,” CNN, October 7, 2019.

  7. Security Hero, “2023 State of Deepfakes: Realities, Threats, and Impact,” Securityhero.io, 2023.  

  8. Chandell Gosse and Jacquelyn Burkell, “Politics and Porn: How News Media Characterizes Problems Presented by Deepfakes,” Critical Studies in Media Communication 37, no. 5, 2020.

  9. UNFPA Technical Division, “Making All Spaces Safe: Technology-Facilitated Gender-Based Violence,” 2021.

  10. Security Hero, “2023 State of Deepfakes: Realities, Threats, and Impact.”

  11. See Figure 1. 

  12. Astro Awani, “Social Media Platforms Must Put AI Label on Every AI-Generated Content - Fahmi,” 2024.

  13. Gosse and Burkell, “Politics and Porn.”

  14. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.


