
Unwillingly naked: How deepfake pornography intensifies sexualised violence against women
A seemingly innocent holiday photo becomes the template for a highly realistic nude image, generated by artificial intelligence (AI) and circulated online without the subject’s knowledge or consent. What sounds like science fiction is already a disturbing reality: tens of thousands of so-called deepfake pornographic images, computer-generated images or videos that simulate nudity or sexual acts, are created daily using freely accessible AI tools. Women are disproportionately affected. This is no coincidence: sexualised violence is deeply rooted in society, and digital technologies are significantly amplifying it. This article explores how deepfake pornography digitally perpetuates existing structures of violence, and what must be done to better protect those affected.
Deepfake pornography is not an isolated phenomenon. It is a particularly insidious form of image-based sexual violence: digital assaults that use visual material to humiliate individuals or violate their sexual autonomy. Examples include upskirting (secretly photographing under skirts), the non-consensual distribution of intimate images, often euphemistically referred to as “revenge porn”, and, as in this case, the dissemination of fake, AI-generated depictions of nudity or intimacy. These images are frequently distributed anonymously via messaging services, imageboards, or pornographic platforms. Anyone can fall victim to deepfake pornography.
The psychological toll these attacks take becomes evident in the voices of survivors. Danielle Citron, legal scholar and professor at the University of Maryland, describes deepfakes as an “invasion of sexual privacy”. In an interview with Vice magazine, she quotes a survivor saying: “Yes, it isn’t your actual vagina, but […] others think that they are seeing you naked.” Citron continues: “As a deepfake victim said to me—it felt like thousands saw her naked. She felt her body wasn’t her own anymore.”
A click away from harm: The ease of creating deepfakes
But how are such deepfakes created—and who is behind them? What once required technical expertise, time, and powerful computers is now within easy reach. Deepfake images and videos can be generated using a smartphone and a single social media photo. So-called nudifier apps and browser-based services openly offer their tools online: users upload any image and, within seconds, the person pictured appears undressed. The first results are often free—followed by paid subscription models.
These are far from isolated cases. An investigation by netzpolitik.org revealed a wide range of providers generating thousands of such images daily; the business is booming. Reporting by 404 Media further illustrates the scale: numerous AI-powered video generators, particularly from Chinese companies, offer minimal safeguards against the production of non-consensual pornographic content. These tools are already being used en masse to create disturbingly realistic sexualised deepfake videos from nothing more than a portrait photo. Once uploaded, such videos circulate in dedicated online communities and are nearly impossible to remove.
From Taylor Swift to schoolgirls: A shifting target group
What makes the issue especially concerning is that most deepfakes feature bodies read as female. One reason lies in the training data behind the systems: many AI models were trained on millions of images of naked women. The result is a structurally biased technology that doesn’t merely replicate gender-based violence—it amplifies it. What emerges is a deeply gendered, automated form of digital violence—primarily targeting women.
Initially, it was public figures who were targeted: actresses, influencers, female politicians. But as the tools became more accessible, the target group shifted. Today, it is often girls and women from users’ immediate social environments who are affected: classmates, colleagues, neighbours. In Spain, for example, AI-generated nude images of schoolgirls were circulated in messaging groups in 2023. In Pennsylvania, a teenager was arrested for creating deepfake nudes of his female classmates.
The full extent of the harm remains largely hidden. Reliable data are scarce. Many victims are unaware that manipulated images of them are being circulated online.
A systemic form of intersectional violence
This particular form of digital abuse is systemic. As legal scholar Danielle Citron accurately observes:
“Deepfake technology is being weaponised against women by inserting their faces into porn. It is terrifying, embarrassing, demeaning, and silencing. Deepfake sex videos tell individuals their bodies are not their own—and can make it difficult to stay online, get or keep a job, and feel safe.”
The targeted sexualised depiction of women’s—and increasingly queer—bodies is not a technical malfunction; it is an expression of patriarchal structures and is being deliberately used. This often occurs in the context of anti-feminist campaigns and incel movements, which aim to intimidate and exclude certain groups of people.
Studies show that over 95% of all deepfakes are sexual in nature. Almost 100% of these depict women. Marginalised groups are particularly affected: queer individuals, Black women, and trans women. This deliberate form of digital violence creates what is known as a silencing effect: it distorts digital visibility and restricts democratic participation.
Digital violence as a business model
What many still view as a niche phenomenon has become part of a lucrative market. Platforms offering deepfake services typically operate anonymously or from abroad. Access is easy: an email address suffices, payment is made by credit card, Google Pay, or cryptocurrency. A simple disclaimer (“no editing without consent”) is often the only nod to legality. Responsibility is shifted to users, while providers distance themselves from accountability.
However, this very business model could offer a point of leverage. The case of Pornhub demonstrates what economic pressure can achieve: in 2020, Visa and Mastercard cut ties with the platform over non-consensual content, prompting significant changes to its upload policies and age verification processes. A similar mechanism could be applied to deepfake platforms, for example by requiring the payment providers, hosting services, and search engines that supply their digital infrastructure to withdraw it.
Legal gaps and political momentum
But economic pressure alone is not enough. Criminal law has so far struggled to keep pace with deepfake-related offences. Although German law includes provisions such as § 201a StGB (violation of the most personal sphere of life through image recordings) and the right to one’s own image under the KUG, many deepfake pornography cases fall through the cracks. Legal developments lag behind technological ones, perpetrators remain anonymous, and platforms operate outside EU jurisdiction. The German Women Lawyers’ Association (djb) has criticised these gaps and called for a dedicated, discrimination-sensitive criminal offence covering the unauthorised creation and dissemination of sexualised deepfakes, independent of traditional pornography legislation. There is also an urgent need for further training for police and the judiciary, as well as a network of specialised prosecutors, to raise awareness and develop effective responses.
EU regulation: A first step, but not a cure-all
While national legislation lags behind, developments at the EU level are gaining momentum. The Digital Services Act (DSA) requires platforms to swiftly remove reported illegal content—including deepfakes, where clearly unlawful. The European AI Act introduces transparency obligations: synthetically generated content must be labelled as such. And with the new EU directive to combat violence against women, the non-consensual dissemination of sexualised deepfakes will, for the first time, be criminalised across Europe. Member States—including Germany—have until 2027 to incorporate these rules into national law. In parallel, Germany’s proposed Violence Support Act aims to improve access to legal advice and assistance for victims.
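The AI Act’s labelling duty is technology-neutral: it requires that synthetic content be marked in a machine-readable way, but prescribes no specific format. As a minimal sketch of the idea, assuming Python with the Pillow library and hypothetical file and key names (no regulation mandates this exact mechanism), a generator could embed a disclosure flag in an image’s metadata:

```python
# Illustrative sketch only: embed a machine-readable "synthetic content"
# label in a PNG's metadata. The key names are hypothetical; the AI Act
# does not mandate this particular mechanism.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated.png")

meta = PngInfo()
meta.add_text("ai_generated", "true")           # disclosure flag
meta.add_text("generator", "example-model-v1")  # hypothetical provenance info

img.save("generated_labelled.png", pnginfo=meta)

# A downstream platform could read the flag back like this:
print(Image.open("generated_labelled.png").text.get("ai_generated"))
```

In practice, plain metadata is trivially stripped when images are re-encoded or screenshotted, which is why cryptographically signed provenance standards such as C2PA are gaining traction; the sketch only shows the basic shape of the obligation.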
Digital self-defence and social responsibility
In addition to legal regulation, technical and societal prevention is essential. Tools like Glaze and Nightshade can, for example, alter images in such a way that they become unusable for AI systems—preventing original photos from being repurposed for training datasets or the generation of realistic deepfakes. Think of them as a digital cloak of invisibility against deepfake abuse.
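How can a few altered pixels make a photo unusable for AI? Glaze and Nightshade rely on carefully optimised, largely imperceptible perturbations. Their actual algorithms are more sophisticated, but the underlying principle can be illustrated with a minimal adversarial-perturbation sketch in Python. Everything concrete here is an assumption for illustration: the file names, ResNet-18 as a stand-in feature extractor, and the perturbation budget; this is not the method either tool actually uses.

```python
# Conceptual sketch of adversarial image "cloaking" (FGSM-style).
# NOT the Glaze/Nightshade algorithm; model and parameters are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
img = preprocess(Image.open("holiday_photo.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Maximise the model's loss with respect to its own prediction, then take
# one small signed-gradient step: the change is invisible to humans but
# shifts what machine-learning feature extractors "see" in the image.
logits = model(img)
target = logits.argmax(dim=1)
loss = torch.nn.functional.cross_entropy(logits, target)
loss.backward()

epsilon = 2 / 255  # perturbation budget, kept below human perception
cloaked = (img + epsilon * img.grad.sign()).clamp(0, 1).detach()

T.ToPILImage()(cloaked.squeeze(0)).save("holiday_photo_cloaked.png")
```

The saved image looks unchanged to a human viewer, yet the added noise is crafted to push models away from their original reading of the picture; real cloaking tools optimise far stronger and more transferable perturbations against the feature spaces that generative models rely on.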
At the same time, public awareness must shift. Image-based sexual violence is still trivialised, and victims face victim-blaming rather than support. Yet this is not just about individual cases; it is about structural inequalities that are reproduced and exacerbated in the digital realm.
A complex problem demands multifaceted solutions
Sexualised deepfakes are more than technical manipulation. They reflect a shift in digital power dynamics—where existing inequalities are not only reproduced but intensified. The deliberate violation of intimacy and control disproportionately affects those who are already structurally disadvantaged. Deepfakes affect us all—but not equally. That’s why we need collective responses that are not merely technical, but feminist, human rights-based, and rooted in solidarity. Digital violence is not a fringe issue of internet culture. It is its litmus test.
Organisations like HateAid, the bff – Frauen gegen Gewalt e.V., and anna nackt are already taking action against non-consensual sexualised deepfakes. They support victims, offer contact points, and in 2023 jointly submitted a petition to German Digital Minister Volker Wissing, calling for stronger protections and clearer legal frameworks.
References
- Reuther, Juliane (2021): Digital Rape: Women Are Most Likely to Fall Victim to Deepfakes. https://www.thedeepfake.report/en/09-digital-rape-en [23.05.2025]
- Ajder, Henry; Patrini, Giorgio; Cavalli, Francesco; Cullen, Laurence (2019): The State of Deepfakes: Landscape, Threats, and Impact. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf [23.05.2025]
- Security Hero (2023): 2023 State of Deepfakes: Realities, Threats, and Impact. https://www.securityhero.io/state-of-deepfakes/#targeted-individuals [23.05.2025]
- Meineck, Sebastian (2024): Wie Online-Shops mit sexualisierten Deepfakes abkassieren [How online shops cash in on sexualised deepfakes]. https://netzpolitik.org/2024/ki-nacktbilder-wie-online-shops-mit-sexualisierten-deepfakes-abkassieren/ [23.05.2025]
- Sittig, Jacqueline (2024): Strafrecht und Regulierung von Deepfake-Pornografie [Criminal law and the regulation of deepfake pornography]. https://www.bpb.de/lernen/bewegtbild-und-politische-bildung/556843/strafrecht-und-regulierung-von-deepfake-pornografie/#footnote-target-25 [23.05.2025]
- Cole, Samantha (2019): This Horrifying App Undresses a Photo of Any Woman With a Single Click. https://www.vice.com/en/article/deepnude-app-creates-fake-nudes-of-any-woman/
- Deutscher Juristinnenbund (2023): Policy Paper 23-17: Bekämpfung bildbasierter sexualisierter Gewalt [Combating image-based sexualised violence]. https://www.djb.de/presse/stellungnahmen/detail/st23-17
- Kira, Beatriz (2024): Deepfakes, the Weaponisation of AI Against Women and Possible Solutions. https://verfassungsblog.de/deepfakes-ncid-ai-regulation/
- Legal texts: AI Act, DSA, DSGVO, KUG, StGB
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
