20 August 2025| doi: 10.5281/zenodo.16911431

Debunking assumptions about disinformation: Rethinking what we think we know

Disinformation has become one of the buzzwords of our time. It appears in news headlines, political speeches, and social media debates almost daily. Countless researchers study its conceptual and empirical aspects, fact-checking has become part of routine media coverage, and politicians constantly warn about its dangers to democracy. With so much attention, it is easy to assume that we know everything there is to know about the phenomenon of disinformation. But do we? In a recent study commissioned by the Bundesnetzagentur, a research team led by Ann-Kathrin Watolla, Patrick Zerrer, Jan Rau, Lisa Merten, Matthias C. Kettemann, and Cornelius Puschmann conducted a scoping review on how disinformation affects electoral processes. This article unpacks three common assumptions about disinformation and takes a closer look at what we actually know and don’t know.

Assumption 1: We all have the same understanding of ‘disinformation’.

As the term ‘disinformation’ has increasingly become part of public discourse, most people have some understanding of what it means. However, even though disinformation as a phenomenon is commonly addressed by researchers, politicians, media producers and distributors alike, there is no shared interpretation of the term (Bleyer-Simon & Reviglio, 2024; Dreyer et al., 2021). Once we take into account adjacent terms such as misinformation, fake news, or conspiracy theories, or look at the different types of false or misleading information, reaching a shared understanding becomes even more difficult.

What do we actually know about ‘disinformation’?

The one thing most experts and laypeople can agree on is that disinformation refers to content containing false or misleading information (Kessler, 2023). Especially when it reaches a large number of people, such content has the potential to cause societal harm.

Think, for example, of ‘pizzagate’, the disinformation campaign during the 2016 US presidential election, which claimed that Democratic Party leaders, including Hillary Clinton, were linked to a global pedophilia ring operating out of establishments such as a pizzeria in Washington, D.C. The spread of this disinformation led an armed man to storm the pizzeria in search of hidden children, which shows what serious consequences disinformation can have.

However, there are several issues with this broad definition of disinformation. Firstly, the malicious intent behind spreading false or misleading information is often included in the definition. But is it always possible to establish malicious intent? If we look at the ‘Doppelgänger campaign’, an online information operation originating from Russia that frequently uses fake clones of legitimate websites, the malicious intent is rather obvious. But what about individuals spreading false information? Do we always know the intention behind the content they share?

Secondly, when we look at the substance of disinformation, the false or misleading information can take a variety of forms (Kapantai et al., 2021). Clickbait, for example, is credible content that uses exaggerated or misleading headlines to lure users – you may have come across articles whose content does not match the headline. Fabricated content, commonly referred to as “fake news”, has no factual basis and is intended to deceive and cause harm. Imposter content, in turn, imitates credible sources by using their logos and branding in order to mislead users. This is the case in the previously mentioned Doppelgänger campaign, where the online presences of media organisations and public institutions were cloned and filled with disinformation.

So, there is a broad variety of disinformation types, which makes a shared understanding of the term harder to reach. Additionally, we need to look at the other terms often used alongside disinformation. While some research differentiates between disinformation, misinformation, and false information (Jack, 2017; Wardle & Derakhshan, 2017), the distinction is not clear-cut in practice, and we still don’t know where disinformation begins and where it ends.

The fact that neither recent European Union legislation nor large online platforms share a common understanding of these terms (Bleyer-Simon & Reviglio, 2024) does not help either. Even more importantly, we need foundational research that treats disinformation as entire misleading stories or narratives rather than as individual false claims. After all, disinformation is often not disseminated as isolated pieces of content, but as carefully constructed storylines that build entire narratives on false or misleading information.

Assumption 2: Echo chambers and filter bubbles amplify disinformation.

The ideas of ‘echo chambers’ and ‘filter bubbles’ describe situations where algorithms repeatedly serve us content that reinforces our existing views, creating sub-spaces dominated by certain perspectives. While their existence is strongly debated among scholars, we know that recommender systems – meaning the algorithms behind what appears in your feed – prioritise polarising or viral content, regardless of whether it’s accurate (Van Raemdonck & Meyer, 2024). These algorithms sort the massive amount of content that is available and present it to us in a way that is manageable and appealing, using collected user data on our previous activity and connections to other users as their basis (Sun, 2023).
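To make this mechanism more tangible, here is a minimal, purely hypothetical sketch of engagement-based ranking in Python. The post attributes, weights, and function names are illustrative assumptions rather than the actual logic of any platform; the point is simply that factual accuracy never enters the scoring.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float        # model's estimate of how many likes the post will attract
    predicted_shares: float       # estimate of how often it will be reshared
    similarity_to_history: float  # 0..1 overlap with the user's past activity

def engagement_score(post: Post) -> float:
    """Illustrative scoring rule: engagement and personal relevance are rewarded;
    factual accuracy is not part of the objective at all."""
    return (
        1.0 * post.predicted_likes
        + 2.0 * post.predicted_shares         # reshares spread content further, so weighted higher
        + 5.0 * post.similarity_to_history    # reinforces what the user already engages with
    )

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement – accuracy never enters the ranking.
    return sorted(candidates, key=engagement_score, reverse=True)
```

In real systems these estimates come from large machine-learning models rather than fixed weights, but the objective they optimise – predicted engagement – is similar in spirit.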

What role do algorithms play in spreading disinformation?

The prevalent discussion about disinformation being favoured by algorithms and thereby spread more widely is closely linked to the concern of societal fragmentation. The worry goes like this: if the algorithm keeps showing people content that confirms their worldview, those people end up in their own echo chambers. While this is intuitively plausible, empirical research has only partly supported it (Hartmann et al., 2024). It is true that politically more radical users are often more engaged in online spaces and tend to share content that aligns with their views (Aruguete et al., 2023). Still, it is important to remember that users meet their information needs through a variety of sources.

Think of a user whose feed is full of politically charged memes, articles, and videos – content they eagerly like, comment on, and repost. But that same person might also get news from TV, conversations with friends, or even the local newspaper. So, even if algorithms amplify the spreading of disinformation in online spaces, these are rarely someone’s only source of information.

So, while recommendation algorithms that prioritise engagement reinforce user preferences (Figà Talamanca & Arfini, 2022) and may foster politically one-sided communities, these platforms constitute only one part of users’ information repertoires. The challenge is that we don’t really know how these different sources are combined in people’s everyday lives. And because researchers still lack meaningful access to platform data, we also can’t measure precisely how strong these recommendation effects are. With the upcoming Delegated Act on research data access under the Digital Services Act, we will hopefully be able to better understand the amplification effects of algorithms, while also taking into account the diversity of platforms and algorithms and how quickly they change over time.

Assumption 3: We can train people to detect AI-generated disinformation.

Measures to counteract disinformation can be categorised into three types of interventions (Kessler, 2023): prebunking, which provides users with informative material about disinformation before they encounter it; nudging, which displays warning labels and similar prompts to alert users to specific content while they scroll through their social media feeds; and debunking, which relies on retroactive fact-checking.

While debunking is often used by news media – as can be regularly seen following interviews with Donald Trump or representatives of the German right-wing party Alternative für Deutschland (AfD) – many initiatives have emerged in recent years aiming to enable users to identify disinformation content through skills acquisition.

Is training people to detect disinformation the way forward?

Imagine scrolling through your social media feed and seeing a convincing video of a well-known politician announcing a controversial new policy. The voice sounds authentic, the facial expressions seem natural, and the background looks exactly like a press conference you’ve seen before. Would you be able to tell if it was fake?

Recent studies suggest many of us wouldn’t. Around 40% of participants fail to recognise deepfakes – realistic-looking audiovisual disinformation generated with AI – as manipulated material, yet many overestimate their ability to identify them correctly (Birrer & Just, 2024). Especially with AI-generated disinformation, which is often audiovisual, we also need to take into account people’s fundamental trust in this kind of content (Hameleers & Marquart, 2023): we tend to place more trust in what we see and hear for ourselves.

Additionally, the steadily improving quality of deepfakes poses a significant problem for human detection (Patel et al., 2023). While a few years ago AI-generated content was relatively easy to recognise due to inauthentic facial expressions, unnatural speech rhythms and low image quality, technical advances are increasingly eliminating these telltale signs.

Since we can’t predict how technology will evolve or how convincing AI-generated content will become, relying on human detection of AI-generated disinformation seems naive. With recent advances in AI regulation, like the EU AI Act, which already includes labelling requirements, one way forward is to strengthen platforms’ responsibility to manage AI-generated content distributed on them.

At the same time, AI is also being developed to fight fire with fire: alongside advances in AI-generated content come advances in AI systems that automatically recognise disinformation. We already see AI models analysing texts for linguistic patterns, contradictory statements, and missing references (Tajrian et al., 2023), as well as AI-supported tools that analyse fine details in audiovisual content, such as eye movements and lip synchronisation, to detect manipulated material (Ghai et al., 2024; Patel et al., 2023).
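To give a rough sense of how the text-analysis side works in principle, below is a minimal, hypothetical sketch using standard machine-learning tooling (scikit-learn is assumed to be available). The example posts, labels, and model choice are illustrative assumptions and are not taken from the cited studies, which rely on far richer linguistic and contextual signals.

```python
# A minimal sketch of text-based disinformation detection, assuming scikit-learn is installed.
# The tiny labelled dataset and the simple model are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = likely disinformation, 0 = credible reporting).
texts = [
    "BREAKING: secret lab admits vaccines contain microchips, share before it gets deleted!",
    "Officials HIDE the truth about the election while the mainstream media stays silent!!!",
    "The city council approved the new public transport budget on Tuesday.",
    "Researchers published a peer-reviewed study on air quality in Berlin.",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each text into word- and phrase-frequency features; logistic regression
# then learns which of those patterns correlate with the disinformation label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_post = "SHOCKING: leaked documents PROVE the vote was rigged, spread the word!"
print(model.predict_proba([new_post])[0][1])  # estimated probability that the post is disinformation
```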

Where do we go from here? 

Taking a deeper look at common assumptions about disinformation reveals how much we still don’t know, especially about the basics. How many users are actually exposed to disinformation? And when they are, how does that affect their behaviour, such as their voting decisions or their views on democracy? At the moment, these remain open questions.

Hopefully, the upcoming access to platform data under the Digital Services Act will shed some light on this black box, giving researchers the evidence they need to move beyond speculation. Because even though researchers, politicians, and media outlets talk about disinformation constantly, we still lack the foundational research needed to understand the phenomenon and to build effective regulation and resilience. To tackle disinformation effectively, we need empirically grounded answers – not assumptions.

This article is based on the study ‚Gesellschaftliche Auswirkungen systemischer Risiken. Demokratische Prozesse im Kontext von Desinformationen‘ (2025) by Ann-Kathrin Watolla, Patrick Zerrer, Jan Rau, Lisa Merten, Matthias C. Kettemann, and Cornelius Puschmann. It was commissioned by the Bundesnetzagentur.

References

Aruguete, N., Calvo, E., & Ventura, T. (2023). News by Popular Demand: Ideological Congruence, Issue Salience, and Media Reputation in News Sharing. The International Journal of Press/Politics, 28(3), 558–579. https://doi.org/10.1177/19401612211057068

Birrer, A., & Just, N. (2024). What we know and don’t know about deepfakes: An investigation into the state of the research and regulatory landscape. New Media & Society. https://doi.org/10.1177/14614448241253138 

Bleyer-Simon, K., & Reviglio, U. (2024). Defining Disinformation across EU and VLOP Policies. European Digital Media Observatory. https://edmo.eu/wp-content/uploads/2024/10/EDMO-Report-%E2%80%93-Defining-Disinformation-across-EU-and-VLOP-Policies.pdf

Dreyer, S., Stanciu, E., Potthast, K. C., & Schulz, W. (2021). Desinformation. Risiken, Regulierungslücken und adäquate Gegenmaßnahmen. Landesanstalt für Medien NRW.

Figà Talamanca, G., & Arfini, S. (2022). Through the Newsfeed Glass: Rethinking Filter Bubbles and Echo Chambers. Philosophy & Technology, 35(1), 20. https://doi.org/10.1007/s13347-021-00494-z 

Ghai, A., Kumar, P., & Gupta, S. (2024). A deep-learning-based image forgery detection framework for controlling the spread of misinformation. Information Technology & People, 37(2), 966–997. https://doi.org/10.1108/ITP-10-2020-0699 

Hameleers, M., & Marquart, F. (2023). It’s Nothing but a Deepfake! The Effects of Misinformation and Deepfake Labels Delegitimizing an Authentic Political Speech. International Journal of Communication, 17, 6291–6311.

Hartmann, D., Pohlmann, L., Wang, S. M., & Berendt, B. (2024). A Systematic Review of Echo Chamber Research: Comparative Analysis of Conceptualizations, Operationalizations, and Varying Outcomes. arXiv preprint arXiv:2407.06631

Jack, C. (2017). Lexicon of Lies: Terms for Problematic Information (pp. 1–20). Data & Society.

Kapantai, E., Christopoulou, A., Berberidis, C., & Peristeras, V. (2021). A systematic literature review on disinformation: Toward a unified taxonomical framework. New Media & Society, 23(5), 1301–1326. https://doi.org/10.1177/1461444820959296

Kessler, S. H. (2023). Vorsicht #Desinformation: Die Wirkung von desinformierenden Social Media-Posts auf die Meinungsbildung und Interventionen. https://www.medienanstalt-nrw.de/fileadmin/user_upload/Bericht__Studie_Vorsicht_Desinformation 

Patel, Y., Tanwar, S., Gupta, R., Bhattacharya, P., Davidson, I. E., Nyameko, R., Aluvala, S., & Vimal, V. (2023). Deepfake Generation and Detection: Case Study and Challenges. IEEE Access, 11, 143296–143323. https://doi.org/10.1109/ACCESS.2023.3342107 

Sun, H. (2023). Regulating Algorithmic Disinformation.

Tajrian, M., Rahman, A., Kabir, M. A., & Islam, Md. R. (2023). A Review of Methodologies for Fake News Analysis. IEEE Access, 11, 73879–73893. https://doi.org/10.1109/ACCESS.2023.3294989 

Van Raemdonck, N., & Meyer, T. (2024). Why disinformation is here to stay. A socio-technical analysis of disinformation as a hybrid threat. In L. Lonardo (Ed.), Addressing Hybrid Threats (pp. 57–83). Edward Elgar Publishing. https://doi.org/10.4337/9781802207408.00009 

Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making (No. 27; pp. 1–107). Council of Europe.

Watolla, A., Zerrer, P., Rau, J., Merten, L., Kettemann, M.C., & Puschmann, C. (2025). Gesellschaftliche Auswirkungen systemischer Risiken. Demokratische Prozesse im Kontext von Desinformationen. Bundesnetzagentur. https://www.dsc.bund.de/DSC/DE/Aktuelles/studien/Auswirkungen%20Systemischer%20Risiken.pdf?__blob=publicationFile&v=3 

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Ann-Kathrin Watolla, Dr.

Senior Researcher & Project Lead
