
Unwillingly naked: How deepfake pornography intensifies sexualised violence against women
A seemingly innocent holiday photo becomes the template for a highly realistic nude image – generated by artificial intelligence (AI) and circulated online without the subject’s knowledge or consent. What sounds like science fiction is already a disturbing reality: tens of thousands of so-called deepfake pornographic images are created daily using freely accessible AI tools. These are computer-generated images or videos that simulate nudity or sexual acts. Women are disproportionately affected. This is no coincidence: sexualised violence is deeply rooted in society, and digital technologies are significantly amplifying it. This article explores how deepfake pornography digitally perpetuates existing structures of violence – and what must be done to better protect those affected.
Deepfake pornography is not an isolated phenomenon. It is a particularly insidious form of image-based sexual violence (bff, 2024). This refers to digital assaults that use visual material to humiliate individuals or violate their sexual autonomy. Examples include upskirting (Polizei Nordrhein-Westfalen, 2021) — secretly photographing under skirts — the non-consensual distribution of intimate images, often euphemistically referred to as “revenge porn” (Frauen gegen Gewalt e.V., 2025), and, as in the case of deepfake pornography, the dissemination of fake, AI-generated depictions of nudity or intimacy. These images are frequently distributed anonymously via messaging services, imageboards (Wikipedia, 2025) or pornographic platforms. Anyone can fall victim to deepfake pornography.
The psychological toll these attacks take becomes evident in the voices of survivors. Danielle Citron, legal scholar and professor at the University of Maryland, describes deepfakes as an “invasion of sexual privacy”. In an interview with Vice magazine, she quotes a survivor saying: “Yes, it isn’t your actual vagina, but […] others think that they are seeing you naked.” Citron continues: “As a deepfake victim said to me — it felt like thousands saw her naked. She felt her body wasn’t her own anymore” (Cole, S., 2019).
A click away from harm: The ease of creating deepfakes
But how are such deepfakes created—and who is behind them? What once required technical expertise, time and powerful computers is now (perhaps too) easily within reach. Deepfake images and videos can be generated using a smartphone and a single social media photo. So-called nudifier apps and browser-based services openly offer their tools online: users upload any image and, within seconds, the person pictured appears undressed (Cole, S., 2019). The first results are often free; a paid subscription is offered afterwards.
These are far from isolated cases. An investigation by netzpolitik.org revealed a wide range of providers generating thousands of such images daily. The business is booming. A report by 404 Media further illustrates the scale: numerous AI-powered video generators — particularly from Chinese companies — offer minimal safeguards against the production of non-consensual pornographic content (Maiberg, E., 2025). These tools are already being used en masse to create disturbingly realistic sexualised deepfake videos, using nothing more than a portrait photo. Once uploaded, these videos circulate in dedicated online communities and are nearly impossible to remove.
From Taylor Swift to schoolgirls: A shifting target group
What makes the issue especially concerning is that most deepfakes feature bodies read as female. One reason lies in the training data behind the systems: many AI models were trained on millions of images of naked women. The result is a structurally biased technology that doesn’t merely replicate gender-based violence — it amplifies it. What emerges is a deeply gendered, automated form of digital violence — primarily targeting women (SecurityHero, 2023).
Initially, it was public figures who were targeted: actresses, influencers, female politicians (Ajder, H. et al., 2019). But as the tools became more accessible, the target group shifted. Today, it is often girls and women from users’ immediate social environments who are affected: classmates, colleagues, neighbours. In Spain, for example, AI-generated nude images of schoolgirls circulating in messaging groups caused a scandal as early as 2023 (Köver, C., 2023). In Pennsylvania, a teenager was arrested the following year for creating deepfake nudes of his female classmates (Der Standard, 2024).
The full extent of the harm remains largely hidden. Reliable data are scarce. Many victims are not even aware that manipulated images of them are being circulated online.
A systemic form of intersectional violence
This particular form of digital abuse is systemic. As Danielle Citron aptly observes:
“Deepfake technology is being weaponised against women by inserting their faces into porn. It is terrifying, embarrassing, demeaning, and silencing. Deepfake sex videos tell individuals their bodies are not their own — and can make it difficult to stay online, get or keep a job, and feel safe” (Ajder, H. et al., 2019).
The targeted sexualised depiction of women’s — and increasingly queer — bodies is not a technical malfunction; it is an expression of patriarchal structures and is being deliberately used. This often occurs in the context of anti-feminist campaigns and incel movements, which aim to intimidate and exclude certain groups of people (Sittig, J., 2024).
Studies show that over 95% of all deepfakes are sexual in nature. Almost 100% of these depict women. Marginalised groups are particularly affected: queer individuals, Black women and trans women (Paris, B. & Donovan, J., 2019). This deliberate form of digital violence creates what is known as a silencing effect (NdM-Glossar, 2025): it distorts digital visibility and restricts democratic participation.
Digital violence as a business model
What many still view as a niche phenomenon has become part of a lucrative market. Platforms offering deepfake services typically operate anonymously or from abroad. Access is easy: an email address suffices, payment is made by credit card, Google Pay or cryptocurrency (Meineck, S., 2024). A simple disclaimer (“no editing without consent”) is often the only nod to legality. Responsibility is shifted to users, while providers distance themselves from accountability.
However, this very business model could offer a leverage point: the case of Pornhub demonstrates what economic pressure can achieve. In 2020, Visa and Mastercard cut ties with the platform over non-consensual content, prompting significant changes in its upload policies and age verification processes (Der Standard, 2022). A similar mechanism could be applied to deepfake platforms — for instance, requiring payment providers, hosting services and search engines to withdraw the digital infrastructure these sites depend on (Kira, B., 2024).
Legal gaps and political momentum
But economic pressure alone is not enough. Criminal law has so far struggled to keep pace with deepfake-related offences. Although German law includes provisions such as Section 201a of the Criminal Code on the violation of intimate privacy through image recordings (StGB, 1871) and the right to one’s own image (Bittner, C., 2021), many deepfake pornography cases fall through the cracks. Legal developments lag behind technological ones, perpetrators remain anonymous, and platforms operate outside EU jurisdiction. The German Women Lawyers’ Association (djb) has criticised these legal gaps and called for a dedicated, discrimination-sensitive criminal offence for the unauthorised creation and dissemination of sexualised deepfakes — independent of traditional pornography legislation (Deutscher Juristinnenbund, 2023). There is also an urgent need for targeted training for police and the judiciary, as well as a network of specialised prosecutors, to raise awareness and develop effective responses.
EU regulation: A first step, but not a cure-all
While national legislation lags behind, developments at the EU level are gaining momentum. The Digital Services Act (DSA) requires platforms to swiftly remove reported illegal content (DSA, Art. 16, 2022) — including deepfakes — where clearly unlawful. The European AI Act introduces transparency obligations (AI Act, Art. 50, 2024), requiring that synthetically generated content be labelled as such. And with the new EU directive to combat violence against women (European Parliament and Council, 2024), the non-consensual dissemination of sexualised deepfakes will, for the first time, be criminalised across Europe. Member States — including Germany — have until 2027 to incorporate these rules into national law. In parallel, Germany’s proposed Violence Support Act aims to improve access to legal advice and assistance for victims (Die Bundesregierung, 2025).
Digital self-defence and social responsibility
In addition to legal regulation, technical and societal prevention is essential. Tools like Glaze and Nightshade can, for example, alter images in such a way that they become unusable for AI systems — preventing original photos from being repurposed for training datasets or the generation of realistic deepfakes. Think of them as a digital cloak of invisibility against deepfake abuse.
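To make the underlying idea more concrete, the sketch below shows the simplest form of such an adversarial perturbation: a single gradient step (the fast gradient sign method, FGSM) that shifts every pixel by a visually negligible amount while degrading what a neural network reads from the image. This is a minimal toy illustration in Python, assuming PyTorch and torchvision are installed and using an off-the-shelf ResNet-18 as a stand-in model; the actual Glaze and Nightshade tools rely on considerably more sophisticated, feature-space optimisations.

```python
# Toy sketch of adversarial image "cloaking" (illustrative only; NOT the
# actual Glaze or Nightshade algorithms). A one-step FGSM perturbation
# nudges each pixel slightly so that a neural network's reading of the
# image degrades while it still looks unchanged to the human eye.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any differentiable vision model works as a stand-in here:
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # gradients are needed w.r.t. the input, not the weights

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def cloak(image: Image.Image, epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of `image`, each pixel moved by at most
    `epsilon`, stepping in the direction that most confuses the model."""
    x = preprocess(image).unsqueeze(0)
    x.requires_grad_(True)
    logits = model(x)
    # Maximise the loss on the model's own top prediction:
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    # Step *up* the loss gradient; the clamp keeps pixel values valid.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach().squeeze(0)

# Hypothetical usage:
# cloaked = cloak(Image.open("holiday_photo.jpg").convert("RGB"))
# T.ToPILImage()(cloaked).save("holiday_photo_cloaked.png")
```

Real cloaking tools optimise such perturbations over many steps and target the feature representations that generative models rely on, but the trade-off is the same: a change to the file that stays below the threshold of human perception, and a substantial change to what the model “sees”.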
At the same time, public awareness must shift. Image-based sexual violence is still trivialised. Victims are met with victim blaming (Friedrich Ebert Stiftung, 2025) rather than support. Yet this is not just about individual fates — it is about structural inequalities that are reproduced and exacerbated in the digital realm.
A complex problem demands multifaceted solutions
Sexualised deepfakes are more than technical manipulation. They reflect a shift in digital power dynamics — where existing inequalities are not only reproduced but intensified. The deliberate violation of intimacy and control disproportionately affects those who are already structurally disadvantaged. Deepfakes affect us all — but not equally. That’s why we need collective responses that are not merely technical, but feminist, human rights-based and rooted in solidarity. Digital violence is not a fringe issue of internet culture. It is its litmus test.
Organisations like HateAid, the bff – Frauen gegen Gewalt e.V. and anna nackt are already taking action against non-consensual sexualised deepfakes. They support victims, offer contact points, and in 2023 jointly submitted a petition to German Digital Minister Volker Wissing, calling for stronger protections and clearer legal frameworks (HateAid, 2023).
References
Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019): The State of Deepfakes: Landscape, Threats, and Impact. Deeptrace. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
Bittner, C. (2021): (Fast) keine Fotos ohne Datenschutz. Stiftung Datenschutz. https://stiftungdatenschutz.org/ehrenamt/praxisratgeber/praxisratgeber-detailseite/fotos-und-datenschutz-275
Bundesministerium der Justiz (Hrsg.) (1871). Strafgesetzbuch (StGB). https://www.gesetze-im-internet.de/stgb/
Cole, S. (2019): This Horrifying App Undresses a Photo of Any Woman With a Single Click. Vice. https://www.vice.com/en/article/deepnude-app-creates-fake-nudes-of-any-woman/
Deutscher Juristinnenbund (2023): Bekämpfung bildbasierter sexualisierter Gewalt. Policy Paper, 17. https://www.djb.de/presse/stellungnahmen/detail/st23-17
Der Standard (2024). Skandal um KI-Nacktbilder legt US-Schule lahm. Der Standard. https://www.derstandard.de/story/3000000245632/skandal-um-ki-nacktbilder-legt-us-schule-lahm
Der Standard (2022). Nach Vorwürfen: Visa und Mastercard sperren Zahlung von Pornhub-Werbung. Der Standard. https://www.derstandard.de/story/2000138125747/nach-vorwuerfen-visa-und-mastercard-sperren-zahlung-von-pornhub-werbung
Die Bundesregierung (2025). Bessere Unterstützung für Gewaltopfer. Die Bundesregierung. https://www.bundesregierung.de/breg-de/service/archiv-bundesregierung/gewalthilfegesetz-2321756
European Parliament and Council (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence and amending certain Union legislative acts (AI Act), Art. 50. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
European Parliament and Council (2022). Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), Art. 16. https://eur-lex.europa.eu/eli/reg/2022/2065/oj
European Parliament and Council (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). https://eur-lex.europa.eu/eli/reg/2016/679/oj
European Parliament and Council (2024). Directive (EU) 2024/1385 of 14 May 2024 on combating violence against women and domestic violence. Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/DE/TXT/PDF/?uri=OJ:L_202401385
Frauen gegen Gewalt e.V. (2025). Hinweise für die Berichterstattung über Gewalt gegen Frauen und Kinder. Bundesverband Frauenberatungsstellen und Frauennotrufe – Frauen gegen Gewalt (bff). https://www.frauen-gegen-gewalt.de/de/ueber-uns/presse/informationen-fuer-die-presse/hinweise-fuer-die-berichterstattung-ueber-gewalt-gegen-frauen-und-kinder.html
Frauen gegen Gewalt e.V. (2024). bff fordert: Bildbasierte Gewalt umfassend bekämpfen. Bundesverband Frauenberatungsstellen und Frauennotrufe – Frauen gegen Gewalt (bff). https://www.frauen-gegen-gewalt.de/de/aktuelles/nachrichten/nachricht/pressemitteilung-bff-fordert-bildbasierte-gewalt-umfassend-bekaempfen-2.html
Friedrich Ebert Stiftung (2025). Victim Blaming. Friedrich Ebert Stiftung. https://www.fes.de/wissen/gender-glossar/victim-blaming
HateAid (2023): Deepfake-Pornos: Betroffene konfrontieren Wissing. HateAid gGmbH. https://hateaid.org/wp-content/uploads/2023/10/Pressemitteilung-Schutz-vor-Deepfake-Pornos.pdf
Kira, B. (2024): Deepfakes, the Weaponisation of AI Against Women and Possible Solutions. Verfassungsblog. https://doi.org/10.59704/9987d92e2c183c7f
Köver, C. (2023): Gefälschte Nacktbilder von Mädchen sorgen für Aufschrei. netzpolitik.org. https://netzpolitik.org/2023/deepfakes-in-spanien-gefaelschte-nacktbilder-von-maedchen-sorgen-fuer-aufschrei/
Maiberg, E. (2025): Chinese AI Video Generators Unleash a Flood of New Nonconsensual Porn. 404 Media. https://www.404media.co/chinese-ai-video-generators-unleash-a-flood-of-new-nonconsensual-porn-3/?ref=daily-stories-newsletter
Meineck, S. (2024): Wie Online-Shops mit sexualisierten Deepfakes abkassieren. netzpolitik.org. https://netzpolitik.org/2024/ki-nacktbilder-wie-online-shops-mit-sexualisierten-deepfakes-abkassieren/
NdM-Glossar (2025): Silencing(-Effekt). Wörterverzeichnis der Neuen deutschen Medienmacher*innen (NdM). https://glossar.neuemedienmacher.de/glossar/silencing-effekt/
Paris, B., Donovan, J. (2019): Deepfakes and Cheap Fakes. The Manipulation of Audio and Visual Evidence. Data & Society. https://datasociety.net/wp-content/uploads/2019/09/DS_Deepfakes_Cheap_FakesFinal-1.pdf
Polizei Nordrhein-Westfalen (2021). „Upskirting“ und „Downblousing“ ist strafbar. Polizei Nordrhein-Westfalen. https://polizei.nrw/artikel/upskirting-und-downblousing-ist-strafbar
Reuther, J. (2021): Digital Rape: Women Are Most Likely to Fall Victim to Deepfakes. The Deepfake Report. https://www.thedeepfake.report/en/09-digital-rape-en
Sittig, J. (2024): Strafrecht und Regulierung von Deepfake-Pornografie. Bundeszentrale für politische Bildung. https://www.bpb.de/lernen/bewegtbild-und-politische-bildung/556843/strafrecht-und-regulierung-von-deepfake-pornografie/#footnote-target-25
SecurityHero (2023). State of Deepfakes – Realities, Threats, and Impact. SecurityHero. https://www.securityhero.io/state-of-deepfakes/#targeted-individuals
Wikipedia (2025). Imageboard. Wikipedia. https://en.wikipedia.org/wiki/Imageboard