27 August 2025| doi: 10.5281/zenodo.16963376

Defending Europe’s disinformation researchers

Disinformation researchers across Europe are being sued, harassed and publicly smeared simply for doing their job. These efforts aim to discredit and silence oversight of tech platforms and malign political actors. Recent examples range from a defamation case in France, to smear campaigns in Poland, to government pressure on research institutions in Latvia and Hungary. These attacks are not isolated incidents. They follow a coordinated strategy originating in fringe and extremist circles in the US that is now spreading across borders and threatening democratic values in the EU. This article critically analyses the attack frameworks used against disinformation researchers, explaining how these tactics work, where they come from, and how the EU can respond. It is time to defend the defenders who work to protect the public against propaganda, lies, and malign foreign influence.

Across Europe, researchers and civil society organisations have become the targets of increasingly aggressive and coordinated attacks. Attacks against climate scientists have been well documented, as have attacks against journalists, and scholars studying topics like gender and identity. Less discussed are attacks against disinformation researchers—those on the frontlines seeking to uncover and analyse deceptive information practices and safeguard European democratic institutions. 

While ostensibly local, many of these tactics mirror strategies developed and deployed in the United States by actors seeking to undermine public trust in institutions that monitor and expose harmful content online. The transatlantic diffusion of these methods has shifted the Overton Window in Europe, challenging the legitimacy of core democratic functions like transparency, accountability, and truth-seeking, while promoting political fringe actors, Russian propaganda, and anti-democratic forces—amongst other malign phenomena.

Why disinformation research matters

The work of experts being attacked is vital. It entails analysing social media content, tracing coordinated harassment campaigns, investigating bot activity, mapping influence networks, and monitoring evolving narratives around elections, conflicts, public health, climate change, and beyond. Myriad academic institutions, fact-checking NGOs, independent investigative outlets, and multi-stakeholder coalitions like the European Digital Media Observatory (EDMO) or the European Fact-Checking Standards Network (EFCSN) work together to uncover deceptive information practices from bad actors, while aiming to equip the public with the skills needed to navigate today’s chaotic media ecosystem. 

Disinformation research is especially critical today, as democracies face a convergence of threats—from foreign influence operations to domestic extremism—online. Crucially, what takes place online often has real-world consequences offline. Understanding and exposing these patterns is essential for protecting public trust, democratic processes, and social cohesion in democratic societies.

A growing pattern of attack

In the United States, a conspiracy-laden campaign emerged following the 2020 presidential election that has targeted disinformation researchers and institutions. This conspiracy began with claims that “Big Tech” rigged the election against President Trump and silenced conservative voices (claims which carry no empirical weight) across social media. The actors promoting these claims found a receptive audience, shifting their focus to numerous election integrity projects and academics who had served as trusted flaggers, alerting social media platforms to illegal content and content which materially violated platforms’ terms of service. 

This campaign sought to discredit these independent researchers and institutions by falsely portraying them as government censors. This effort intensified through congressional hearings, media amplification (notably the “Twitter Files”), lawsuits, and funding from anonymous, politically motivated donors. The campaign has culminated in a hostile legal and political environment for researchers, amplified by the House Weaponization Subcommittee and lawsuits like Missouri v. Biden (ultimately dismissed by the Supreme Court, as Murthy v. Missouri, for lack of standing). Today, however, attacks against disinformation researchers, universities, private companies and any entity that publicly fights disinformation are mainstream, originating in the White House itself, and indicative of a larger pattern of democratic backsliding.

How disinformation researchers are being targeted across Europe

The past two years have witnessed a surge in attacks on disinformation researchers across Europe. These include several illustrative examples:

  • In France, a defamation lawsuit was allowed to proceed against a researcher who made factual, public claims about foreign influence networks operating in the country. The researcher ultimately prevailed in court.
  • In Poland, individual researchers at the institute NASK faced coordinated attacks accusing them and the institute of ideological bias and foreign allegiance over disinformation analysis conducted during the 2025 election.
  • In Latvia, the NGO Re:Baltica received an official government inquiry after publishing investigations into election interference and political finance, questioning its funding and its methodology for selecting topics to analyse.
  • In Hungary, a bill was introduced that classifies fact-checkers, disinformation-fighting organisations, and their funders as “foreign agents” based on the content they produce, adding them to watch lists and revoking their government funding.

Transatlantic playbook

While each European case outlined above occurs within a specific national context, the structure of these attacks reflects a broader transatlantic pattern. In the United States, disinformation researchers have faced campaigns of legal intimidation, reputational defamation, and accusations of censorship, particularly from far-right influencers and political figures.

The so-called “censorship-industrial complex” narrative—popularised by American commentators and influencers—casts researchers, journalists, and academics as part of an elite conspiracy to suppress conservative or minority political viewpoints. This narrative has now migrated to Europe, where it is being weaponised to attack EU-funded fact-checking coalitions and civil society watchdogs, under the guise of stopping a supposed shadowy cabal of censors.

A typology of attacks across geographies

To respond effectively, it is essential to understand the tactical landscape. Recent incidents in both the United States and Europe fall into several interconnected categories:

  1. Strategic litigation and legal pressure: The researcher in France was targeted with a SLAPP-style defamation lawsuit (strategic lawsuit against public participation), costing them unquantifiable time, money and effort. In the United States, researchers at Stanford and the University of Washington have been served with subpoenas, compelled to testify before Congress in politically motivated investigations, and sued multiple times. Ultimately, Stanford was forced to shut down its Internet Observatory.
  2. Politicised smear campaigns: Re:Baltica faced an official government inquiry following reporting on foreign influence and political finance in Latvia. American researchers have been doxxed and had their funding cut, for projects merely mentioning disinformation, and the Trump administration has threatened to investigate the private funding of its perceived opponents, including research institutions, in an openly partisan, politicised manner.
  3. Free speech inversion: Hungarian NGOs have been accused of undermining national sovereignty by challenging falsehoods and corruption. In the United States, institutions like the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and Graphika are framed by the Trump administration as suppressors of free expression, even though they merely document foreign influence networks. In one particularly well-known example, bad actors intentionally and falsely claimed these institutions “censored” 22 million tweets in 2020, when in reality they had merely examined those tweets.
  4. Coordinated online harassment: Researchers at NASK were identified by face and name and had personal information posted online due to their work, leading to unprecedented harassment following the 2025 Polish election. In the United States, Nina Jankowicz faced widespread online harassment and personal threats following her appointment to the Department of Homeland Security’s Disinformation Governance Board, ultimately leading to the board’s premature dissolution.

These tactics form a recognisable playbook that travels easily between political environments. Their combined effect is to render disinformation research both professionally hazardous and politically fraught. Moreover, in the EU, these recent incidents are not isolated. They represent a strategic shift in how malign actors—both state and non-state, as well as social media platforms themselves—respond to scrutiny. Increasingly, the objective is not to rebut disinformation researchers’ findings, but to delegitimise the institutions and individuals producing these findings, and obscure the very real—and in many cases lucrative—online harms being perpetrated by bad actors. Even if these bad actors lose the lawsuits or eventually abandon the legal inquiries they bring against researchers, they cost researchers time and money, halt their work, and force them to appear in court rather than continue exposing online harms.

How European institutions should respond

To address these challenges, European institutions, national governments, and civil society funders must adopt a coordinated defence strategy. Several tangible, achievable recommendations are included below; with small adjustments to existing legal and institutional entities at the EU level, disinformation researchers would be better protected, and attacks against them could be thwarted before they become mainstream.

  1. Anti-SLAPP legislation: The EU’s anti-SLAPP directive, proposed by the European Commission in April 2022 and adopted in 2024, targets abusive litigation in cross-border cases. Member States should accelerate its transposition into national law and expand its protections to include disinformation researchers and NGOs working in the public interest.
  2. Emergency legal support mechanisms: The European Commission, in partnership with the European Fact-Checking Standards Network (EFCSN), should establish a legal assistance mechanism akin to the European Centre for Press and Media Freedom (ECPMF)’s Legal Support Program. This would offer rapid counsel and cost coverage for researchers facing defamation SLAPP-style litigation. Efforts are currently underway in the United States from organisations including mine, The American Sunlight Project, to protect researchers in this way.
  3. Protective grant clauses: Horizon Europe and CERV (Citizens, Equality, Rights and Values) programme grants should include explicit language guaranteeing institutional support and public defence in cases of reputational or legal attack on grantees working on disinformation. For example, organisations like the Knight Foundation provide this kind of support to their grantees in the United States.

Defending the defenders

Disinformation researchers are essential democratic actors whose work upholds electoral integrity, public health, and civil discourse. The attacks they face are not a natural consequence of political disagreement—they are the result of deliberate strategies aimed at undermining truth and democracy itself. On both sides of the Atlantic, researchers are being pulled into culture wars they did not create, targeted not for their politics but for their empiricism. The convergence of tactics used to silence them signals a deeper global threat to institutional accountability. The EU cannot afford to lose its information watchdogs; a failure to protect them will not only embolden their attackers but degrade the very infrastructure of democratic accountability. It is time to defend the defenders, with law, policy, and public resolve.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
