{"id":109596,"date":"2025-08-20T14:52:39","date_gmt":"2025-08-20T12:52:39","guid":{"rendered":"https:\/\/www.hiig.de\/?p=109596"},"modified":"2025-08-21T09:33:11","modified_gmt":"2025-08-21T07:33:11","slug":"debunking-assumptions-about-disinformation","status":"publish","type":"post","link":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/","title":{"rendered":"Debunking assumptions about disinformation: Rethinking what we think we know"},"content":{"rendered":"\n<p><strong>Disinformation has become one of the buzzwords of our time. It appears in news headlines, political speeches, and social media debates almost daily. Countless researchers study its conceptual and empirical aspects, fact-checking has become part of media coverage and politicians constantly warn about its dangers to democracy. With so much attention, it is easy to assume that we know everything there is to know about the phenomenon of disinformation. But do we? In a <\/strong><a href=\"https:\/\/www.hiig.de\/en\/publication\/gesellschaftliche-auswirkungen-systemischer-risiken-demokratische-prozesse-im-kontext-von-desinformationen\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>recent study<\/strong><\/a><strong> commissioned by the Bundesnetzagentur, a research team led by Ann-Kathrin Watolla, Patrick Zerrer, Jan Rau, Lisa Merten, Matthias C. Kettemann, and Cornelius Puschmann conducted a scoping review on how disinformation affects electoral processes. This article unpacks three common assumptions about disinformation and takes a closer look at what we actually know and don\u2019t know.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Assumption 1: We all have the same understanding of \u2018disinformation\u2019.<\/strong><\/h2>\n\n\n\n<p>As the term \u2018disinformation\u2019 has increasingly become part of public discourse, most people have some understanding of what it means. 
However, even though disinformation as a phenomenon has been commonly addressed by researchers, politicians, media producers and distributors alike, there is no shared interpretation of the term (Bleyer-Simon &amp; Reviglio, 2024; Dreyer et al., 2021). Reaching a shared understanding becomes even more difficult once we take into account adjacent terms used in this context, like misinformation, fake news, or conspiracy theories, or look at the different types of false or misleading information.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What do we actually know about \u2018disinformation\u2019?<\/strong><\/h3>\n\n\n\n<p>The one thing most experts can agree on is that disinformation refers to content that contains false or misleading information (Kessler, 2023). Especially when it reaches a large number of people, disinformation has the potential to cause societal harm.<\/p>\n\n\n\n<p>Think, for example, of <a href=\"https:\/\/www.bbc.com\/news\/blogs-trending-38156985\" target=\"_blank\" rel=\"noreferrer noopener\">\u2018pizzagate\u2019<\/a>, the disinformation campaign during the 2016 US presidential election claiming that Democratic Party leaders, including Hillary Clinton, were linked to a global pedophilia ring operating out of establishments such as a pizzeria in Washington, D.C. The spread of this disinformation led an armed man to storm the pizzeria in search of hidden children, showing how serious the consequences of disinformation can be.<\/p>\n\n\n\n<p>However, there are several issues with this broad definition of disinformation. Firstly, definitions often include the malicious intent behind spreading false or misleading information. But is it always possible to establish malicious intent? 
If we look at the \u2018<a href=\"https:\/\/www.auswaertiges-amt.de\/resource\/blob\/2682484\/2da31936d1cbeb9faec49df74d8bbe2e\/technischer-bericht-desinformationskampagne-doppelgaenger-1--data.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Doppelg\u00e4nger campaign<\/a>\u2019, an online information operation originating from Russia which frequently uses fake clones of legitimate websites, the malicious intent is rather obvious. But what about individuals spreading false information? Do we always know the intention behind the spreading of content?<\/p>\n\n\n\n<p>Secondly, when we look at the substance of disinformation content, the false or misleading information can take a variety of forms (Kapantai et al., 2021). For example, <em>clickbait<\/em> pairs otherwise credible content with exaggerated or misleading headlines to lure users; you might have seen this when an article\u2019s content does not match its headline. Meanwhile, <em>fabricated content<\/em> has no factual basis and is intended to deceive and cause harm, which is commonly referred to as \u201cfake news\u201d, while <em>imposter content<\/em> imitates credible sources by using their logos and branding in order to mislead users. This was the case in the previously mentioned Doppelg\u00e4nger campaign, where the online presences of media organisations and public institutions were cloned and filled with disinformation content.<\/p>\n\n\n\n<p>So, there is a broad variety of disinformation types, which complicates a shared understanding of the term. Additionally, we need to look at the other terms often related to disinformation. 
While there is some research differentiating between disinformation, misinformation, and false information (Jack, 2017; Wardle &amp; Derakhshan, 2017), distinguishing them is not clear-cut, and we still don\u2019t know where disinformation begins and where it ends.<\/p>\n\n\n\n<p>The fact that neither recent European Union legislation nor large online platforms share an understanding of these terms (Bleyer-Simon &amp; Reviglio, 2024) does not help either. Even more importantly, we need foundational research that considers disinformation as entire misleading stories or narratives, rather than individual false claims. This is because disinformation is often not disseminated as individual pieces of content, but as carefully constructed storylines that aim to create entire narratives based on false or misleading information.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Assumption 2: Echo chambers and filter bubbles amplify disinformation.<\/strong><\/h2>\n\n\n\n<p>The ideas of \u2018echo chambers\u2019 and \u2018filter bubbles\u2019 describe situations where algorithms repeatedly serve us content that reinforces our existing views, creating sub-spaces dominated by certain perspectives. While their existence is strongly debated among scholars, we know that recommender systems \u2013 meaning the algorithms behind what appears in your feed \u2013 prioritise polarising or viral content, regardless of whether it\u2019s accurate (Van Raemdonck &amp; Meyer, 2024). 
These algorithms sort the massive amount of available content and present it to us in a way that is manageable and appealing, using collected user data on our previous activity and connections to other users as their basis (Sun, 2023).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What role do algorithms play in spreading disinformation?<\/strong><\/h3>\n\n\n\n<p>The prevalent discussion on disinformation content being favoured by algorithms and thereby spread more widely is closely linked to the concern of societal fragmentation. The worry goes like this: if the algorithm keeps showing people content that confirms their worldview, those people end up in their own echo chambers. While this intuitively makes sense, empirical research has only partly confirmed it (Hartmann et al., 2024). While it is true that politically more radical users are often more engaged in online spaces and tend to share content that aligns with their views (Aruguete et al., 2023), it is important to remember that users meet their information needs through a variety of information sources.<\/p>\n\n\n\n<p>Think of a user whose feed is full of politically charged memes, articles, and videos \u2013 content they eagerly like, comment on, and repost. But that same person might also get news from TV, conversations with friends, or even the local newspaper. So, even if algorithms amplify the spreading of disinformation in online spaces, these are rarely someone\u2019s <em>only<\/em> source of information.<\/p>\n\n\n\n<p>So, while user preferences are reinforced by recommendation algorithms that prioritise engagement (Fig\u00e0 Talamanca &amp; Arfini, 2022), which may lead to politically one-sided communities, these constitute only one part of the users\u2019 information sources. The challenge is that we don\u2019t really know how these different information sources are combined in people\u2019s everyday lives. 
And because researchers still lack meaningful access to platform data, we also can\u2019t measure precisely how strong these recommendation effects are. With the upcoming <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/news\/commission-adopts-delegated-act-data-access-under-digital-services-act\" target=\"_blank\" rel=\"noreferrer noopener\">Delegated Act of the Digital Services Act on research data access<\/a>, we will hopefully be able to better understand the amplification effects of algorithms, while also taking into account the diversity of platforms and algorithms and the high rate of change over time.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Assumption 3: We can train people to detect AI-generated disinformation.<\/strong><\/h2>\n\n\n\n<p>Measures to counteract disinformation can be categorised into three types of interventions: <em>prebunking<\/em>, which provides users with informative texts about disinformation, <em>nudging<\/em>, which displays warning labels and similar prompts to alert users to specific content while they are scrolling through their social media feeds, and <em>debunking<\/em> through retroactive fact-checking (Kessler, 2023).<\/p>\n\n\n\n<p>While debunking is often used by news media \u2013 as can be regularly seen following interviews with Donald Trump or representatives of the German right-wing party Alternative f\u00fcr Deutschland (AfD) \u2013 many initiatives have emerged in recent years aiming to enable users to identify disinformation content through skills acquisition.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Is training people to detect disinformation the way forward?<\/strong><\/h3>\n\n\n\n<p>Imagine scrolling through your social media feed and seeing a convincing video of a well-known politician announcing a controversial new policy. The voice sounds authentic, the facial expressions seem natural, and the background looks exactly like a press conference you\u2019ve seen before. 
Would you be able to tell if it was fake?<\/p>\n\n\n\n<p>Recent studies suggest many of us wouldn\u2019t. Around 40% of participants are unable to recognise deepfakes, that is, realistic-looking audiovisual disinformation content generated with AI, as manipulated material, yet many overestimate their ability to identify them (Birrer &amp; Just, 2024). Especially with AI-generated disinformation, which is often audiovisual, we need to take into account people\u2019s fundamental trust in that kind of content (Hameleers &amp; Marquart, 2023): we tend to place more trust in what we see and hear for ourselves.<\/p>\n\n\n\n<p>Additionally, the progressive improvement in the quality of deepfakes poses a significant problem for human detection (Patel et al., 2023). While a few years ago AI-generated content was relatively easy to recognise due to inauthentic facial expressions, unnatural speech rhythms and low image quality, technical advances are increasingly eliminating those identifying factors.<\/p>\n\n\n\n<p>Since we can\u2019t predict how technology will evolve or how convincing AI-generated content will become, relying on human detection of AI-generated disinformation seems naive. With recent advances in AI regulation, like the <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/regulatory-framework-ai\" target=\"_blank\" rel=\"noreferrer noopener\">EU AI Act<\/a>, which already includes labelling requirements, one way forward is to strengthen platforms\u2019 responsibility to manage AI-generated content distributed on them.<\/p>\n\n\n\n<p>At the same time, AI is also being developed to fight fire with fire: alongside the advancement in AI-generated content comes the advancement of AI systems to automatically recognise disinformation. 
We already see AI models analysing texts for linguistic patterns, contradictory statements, and missing references (Tajrian et al., 2023), and AI-supported tools analysing fine details in audiovisual content, such as eye movements and lip synchronisation, to detect manipulated material (Ghai et al., 2024; Patel et al., 2023).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Where do we go from here?<\/strong><\/h2>\n\n\n\n<p>Taking a deeper look at common assumptions about disinformation reveals how much we don\u2019t know about it, especially when it comes to the basics. How many users are actually exposed to disinformation? And when they are, how does that affect their behaviour, such as their voting decisions or their views on democracy? At the moment, these remain open questions.<\/p>\n\n\n\n<p>Hopefully, the <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/news\/commission-adopts-delegated-act-data-access-under-digital-services-act\" target=\"_blank\" rel=\"noreferrer noopener\">upcoming access to platform data under the Digital Services Act<\/a> will shed some light on this black box, giving researchers the evidence they need to move beyond speculation. Even though researchers, politicians, and media outlets talk about disinformation constantly, we still lack the foundational research needed to better understand the phenomenon and to create effective regulation and resilience. To tackle disinformation effectively, we need empirically grounded answers \u2013 not assumptions.<\/p>\n\n\n\n<p><em>This article is based on the study <a href=\"https:\/\/www.hiig.de\/en\/publication\/gesellschaftliche-auswirkungen-systemischer-risiken-demokratische-prozesse-im-kontext-von-desinformationen\/\">\u201aGesellschaftliche Auswirkungen systemischer Risiken. 
Demokratische Prozesse im Kontext von Desinformationen\u2018<\/a> (2025) by Ann-Kathrin Watolla, Patrick Zerrer, Jan Rau, Lisa Merten, Matthias C. Kettemann, and Cornelius Puschmann. It was commissioned by the Bundesnetzagentur.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>References<\/strong><\/h2>\n\n\n\n<p>Aruguete, N., Calvo, E., &amp; Ventura, T. (2023). News by Popular Demand: Ideological Congruence, Issue Salience, and Media Reputation in News Sharing. The International Journal of Press\/Politics, 28(3), 558\u2013579. <a href=\"https:\/\/doi.org\/10.1177\/19401612211057068\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1177\/19401612211057068<\/a>&nbsp;<\/p>\n\n\n\n<p>Birrer, A., &amp; Just, N. (2024). What we know and don\u2019t know about deepfakes: An investigation into the state of the research and regulatory landscape. New Media &amp; Society. <a href=\"https:\/\/doi.org\/10.1177\/14614448241253138\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1177\/14614448241253138<\/a>&nbsp;<\/p>\n\n\n\n<p>Bleyer-Simon, K., &amp; Reviglio, U. (2024). Defining Disinformation across EU and VLOP Policies. European Digital Media Observatory. <a href=\"https:\/\/edmo.eu\/wp-content\/uploads\/2024\/10\/EDMO-Report-%E2%80%93-Defining-Disinformation-across-EU-and-VLOP-Policies.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/edmo.eu\/wp-content\/uploads\/2024\/10\/EDMO-Report-%E2%80%93-Defining-Disinformation-across-EU-and-VLOP-Policies.pdf<\/a>&nbsp;<\/p>\n\n\n\n<p>Dreyer, S., Stanciu, E., Potthast, K. C., &amp; Schulz, W. (2021). Desinformation. Risiken, Regulierungsl\u00fccken und ad\u00e4quate Gegenma\u00dfnahmen. Landesanstalt f\u00fcr Medien NRW.<\/p>\n\n\n\n<p>Fig\u00e0 Talamanca, G., &amp; Arfini, S. (2022). Through the Newsfeed Glass: Rethinking Filter Bubbles and Echo Chambers. Philosophy &amp; Technology, 35(1), 20. 
<a href=\"https:\/\/doi.org\/10.1007\/s13347-021-00494-z\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1007\/s13347-021-00494-z<\/a>&nbsp;<\/p>\n\n\n\n<p>Ghai, A., Kumar, P., &amp; Gupta, S. (2024). A deep-learning-based image forgery detection framework for controlling the spread of misinformation. Information Technology &amp; People, 37(2), 966\u2013997. <a href=\"https:\/\/doi.org\/10.1108\/ITP-10-2020-0699\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1108\/ITP-10-2020-0699<\/a>&nbsp;<\/p>\n\n\n\n<p>Hameleers, M., &amp; Marquart, F. (2023). It\u2019s Nothing but a Deepfake! The Effects of Misinformation and Deepfake&nbsp; Labels Delegitimizing an Authentic Political Speech. International Journal of Communication, 17, 6291\u20136311.<\/p>\n\n\n\n<p>Hartmann, D., Pohlmann, L., Wang, S. M., &amp; Berendt, B. (2024). A Systematic Review of Echo Chamber Research: Comparative Analysis of Conceptualizations, Operationalizations, and Varying Outcomes. arXiv preprint arXiv:2407.06631<\/p>\n\n\n\n<p>Jack, C. (2017). Lexicon of Lies: Terms for Problematic Information (pp. 1\u201320). Data &amp; Society.<\/p>\n\n\n\n<p>Kapantai, E., Christopoulou, A., Berberidis, C., &amp; Peristeras, V. (2021). A systematic literature review on disinformation: Toward a unified taxonomical framework. New Media &amp; Society, 23(5), 1301\u20131326. <a href=\"https:\/\/doi.org\/10.1177\/1461444820959296\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1177\/1461444820959296<\/a><\/p>\n\n\n\n<p>Kessler, S. H. (2023). Vorsicht #Desinformation: Die Wirkung von desinformierenden Social Media-Posts auf die Meinungsbildung und Interventionen. 
<a href=\"https:\/\/www.medienanstalt-nrw.de\/fileadmin\/user_upload\/Bericht__Studie_Vorsicht_Desinformation\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.medienanstalt-nrw.de\/fileadmin\/user_upload\/Bericht__Studie_Vorsicht_Desinformation<\/a>&nbsp;<\/p>\n\n\n\n<p>Patel, Y., Tanwar, S., Gupta, R., Bhattacharya, P., Davidson, I. E., Nyameko, R., Aluvala, S., &amp; Vimal, V. (2023). Deepfake Generation and Detection: Case Study and Challenges. IEEE Access, 11, 143296\u2013143323. <a href=\"https:\/\/doi.org\/10.1109\/ACCESS.2023.3342107\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1109\/ACCESS.2023.3342107<\/a>&nbsp;<\/p>\n\n\n\n<p>Sun, H. (2023). Regulating Algorithmic Disinformation.<\/p>\n\n\n\n<p>Tajrian, M., Rahman, A., Kabir, M. A., &amp; Islam, Md. R. (2023). A Review of Methodologies for Fake News Analysis. IEEE Access, 11, 73879\u201373893. <a href=\"https:\/\/doi.org\/10.1109\/ACCESS.2023.3294989\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1109\/ACCESS.2023.3294989<\/a>&nbsp;<\/p>\n\n\n\n<p>Van Raemdonck, N., &amp; Meyer, T. (2024). Why disinformation is here to stay. A socio-technical analysis of disinformation as a hybrid threat. In L. Lonardo (Ed.), Addressing Hybrid Threats (pp. 57\u201383). Edward Elgar Publishing. <a href=\"https:\/\/doi.org\/10.4337\/9781802207408.00009\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.4337\/9781802207408.00009<\/a>&nbsp;<\/p>\n\n\n\n<p>Wardle, C., &amp; Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making (No. 27; pp. 1\u2013107). Council of Europe.<\/p>\n\n\n\n<p>Watolla, A., Zerrer, P., Rau, J., Merten, L., Kettemann, M.C., &amp; Puschmann, C. (2025). Gesellschaftliche Auswirkungen systemischer Risiken. Demokratische Prozesse im Kontext von Desinformationen. Bundesnetzagentur. 
<a href=\"https:\/\/www.dsc.bund.de\/DSC\/DE\/Aktuelles\/studien\/Auswirkungen%20Systemischer%20Risiken.pdf?__blob=publicationFile&amp;v=3\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.dsc.bund.de\/DSC\/DE\/Aktuelles\/studien\/Auswirkungen%20Systemischer%20Risiken.pdf?__blob=publicationFile&amp;v=3<\/a>&nbsp;<\/p>\n<div class=\"shariff shariff-align-flex-start shariff-widget-align-flex-start\"><ul class=\"shariff-buttons theme-round orientation-horizontal buttonsize-medium\"><li class=\"shariff-button linkedin shariff-nocustomcolor\" style=\"background-color:#1488bf\"><a href=\"https:\/\/www.linkedin.com\/sharing\/share-offsite\/?url=https%3A%2F%2Fwww.hiig.de%2Fen%2Fdebunking-assumptions-about-disinformation%2F\" title=\"Share on LinkedIn\" aria-label=\"Share on LinkedIn\" role=\"button\" rel=\"noopener nofollow\" class=\"shariff-link\" style=\"; background-color:#0077b5; color:#fff\" target=\"_blank\"><span class=\"shariff-icon\" style=\"\"><svg width=\"32px\" height=\"20px\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 27 32\"><path fill=\"#0077b5\" d=\"M6.2 11.2v17.7h-5.9v-17.7h5.9zM6.6 5.7q0 1.3-0.9 2.2t-2.4 0.9h0q-1.5 0-2.4-0.9t-0.9-2.2 0.9-2.2 2.4-0.9 2.4 0.9 0.9 2.2zM27.4 18.7v10.1h-5.9v-9.5q0-1.9-0.7-2.9t-2.3-1.1q-1.1 0-1.9 0.6t-1.2 1.5q-0.2 0.5-0.2 1.4v9.9h-5.9q0-7.1 0-11.6t0-5.3l0-0.9h5.9v2.6h0q0.4-0.6 0.7-1t1-0.9 1.6-0.8 2-0.3q3 0 4.9 2t1.9 6z\"\/><\/svg><\/span><\/a><\/li><li class=\"shariff-button bluesky shariff-nocustomcolor\" style=\"background-color:#84c4ff\"><a href=\"https:\/\/bsky.app\/intent\/compose?text=Debunking%20assumptions%20about%20disinformation%3A%20Rethinking%20what%20we%20think%20we%20know https%3A%2F%2Fwww.hiig.de%2Fen%2Fdebunking-assumptions-about-disinformation%2F  via @hiigberlin.bsky.social\" title=\"Share on Bluesky\" aria-label=\"Share on Bluesky\" role=\"button\" rel=\"noopener nofollow\" class=\"shariff-link\" style=\"; background-color:#0085ff; color:#fff\" target=\"_blank\"><span 
class=\"shariff-icon\" style=\"\"><svg width=\"20\" height=\"20\" version=\"1.1\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 20 20\"><path class=\"st0\" d=\"M4.89,3.12c2.07,1.55,4.3,4.71,5.11,6.4.82-1.69,3.04-4.84,5.11-6.4,1.49-1.12,3.91-1.99,3.91.77,0,.55-.32,4.63-.5,5.3-.64,2.3-2.99,2.89-5.08,2.54,3.65.62,4.58,2.68,2.57,4.74-3.81,3.91-5.48-.98-5.9-2.23-.08-.23-.11-.34-.12-.25,0-.09-.04.02-.12.25-.43,1.25-2.09,6.14-5.9,2.23-2.01-2.06-1.08-4.12,2.57-4.74-2.09.36-4.44-.23-5.08-2.54-.19-.66-.5-4.74-.5-5.3,0-2.76,2.42-1.89,3.91-.77h0Z\"\/><\/svg><\/span><\/a><\/li><li class=\"shariff-button mailto shariff-nocustomcolor\" style=\"background-color:#a8a8a8\"><a href=\"mailto:?body=https%3A%2F%2Fwww.hiig.de%2Fen%2Fdebunking-assumptions-about-disinformation%2F&subject=Debunking%20assumptions%20about%20disinformation%3A%20Rethinking%20what%20we%20think%20we%20know\" title=\"Send by email\" aria-label=\"Send by email\" role=\"button\" rel=\"noopener nofollow\" class=\"shariff-link\" style=\"; background-color:#999; color:#fff\"><span class=\"shariff-icon\" style=\"\"><svg width=\"32px\" height=\"20px\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 32 32\"><path fill=\"#999\" d=\"M32 12.7v14.2q0 1.2-0.8 2t-2 0.9h-26.3q-1.2 0-2-0.9t-0.8-2v-14.2q0.8 0.9 1.8 1.6 6.5 4.4 8.9 6.1 1 0.8 1.6 1.2t1.7 0.9 2 0.4h0.1q0.9 0 2-0.4t1.7-0.9 1.6-1.2q3-2.2 8.9-6.1 1-0.7 1.8-1.6zM32 7.4q0 1.4-0.9 2.7t-2.2 2.2q-6.7 4.7-8.4 5.8-0.2 0.1-0.7 0.5t-1 0.7-0.9 0.6-1.1 0.5-0.9 0.2h-0.1q-0.4 0-0.9-0.2t-1.1-0.5-0.9-0.6-1-0.7-0.7-0.5q-1.6-1.1-4.7-3.2t-3.6-2.6q-1.1-0.7-2.1-2t-1-2.5q0-1.4 0.7-2.3t2.1-0.9h26.3q1.2 0 2 0.8t0.9 2z\"\/><\/svg><\/span><\/a><\/li><\/ul><\/div>","protected":false},"excerpt":{"rendered":"<p>Exploring definitions, algorithmic amplification, and detection, this article challenges assumptions about disinformation and calls for stronger research 
evidence.<\/p>\n","protected":false},"author":313,"featured_media":109601,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1577,227,1579,224],"tags":[],"class_list":["post-109596","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-digital-so","category-everyday-life","category-ftif-plattformen-governance","category-policy-and-law"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Debunking assumptions about disinformation &#8211; Digital Society Blog<\/title>\n<meta name=\"description\" content=\"Exploring definitions, algorithmic amplification, and detection, this article challenges assumptions about disinformation and calls for research evidence.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Debunking assumptions about disinformation &#8211; Digital Society Blog\" \/>\n<meta property=\"og:description\" content=\"Exploring definitions, algorithmic amplification, and detection, this article challenges assumptions about disinformation and calls for research evidence.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/\" \/>\n<meta property=\"og:site_name\" content=\"HIIG\" \/>\n<meta property=\"article:published_time\" content=\"2025-08-20T12:52:39+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-08-21T07:33:11+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2025\/08\/Titelbild_DebunkingDesinformation-\u2013-1-1.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1144\" \/>\n\t<meta property=\"og:image:height\" content=\"643\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Digital Society Blog\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Digital Society Blog\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Debunking assumptions about disinformation &#8211; Digital Society Blog","description":"Exploring definitions, algorithmic amplification, and detection, this article challenges assumptions about disinformation and calls for research evidence.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/","og_locale":"en_US","og_type":"article","og_title":"Debunking assumptions about disinformation &#8211; Digital Society Blog","og_description":"Exploring definitions, algorithmic amplification, and detection, this article challenges assumptions about disinformation and calls for research evidence.","og_url":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/","og_site_name":"HIIG","article_published_time":"2025-08-20T12:52:39+00:00","article_modified_time":"2025-08-21T07:33:11+00:00","og_image":[{"width":1144,"height":643,"url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2025\/08\/Titelbild_DebunkingDesinformation-\u2013-1-1.png","type":"image\/png"}],"author":"Digital Society 
Blog","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Digital Society Blog","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/#article","isPartOf":{"@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/"},"author":{"name":"Digital Society Blog","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/a921ecfdfcb94cb9c718b90c3a5dedbd"},"headline":"Debunking assumptions about disinformation: Rethinking what we think we know","datePublished":"2025-08-20T12:52:39+00:00","dateModified":"2025-08-21T07:33:11+00:00","mainEntityOfPage":{"@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/"},"wordCount":2214,"publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"image":{"@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2025\/08\/Titelbild_DebunkingDesinformation-\u2013-1-1.png","articleSection":["Digital Society Blog","Everyday Life","Ftif Platform governance","Policy and Law"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/","url":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/","name":"Debunking assumptions about disinformation &#8211; Digital Society Blog","isPartOf":{"@id":"https:\/\/www.hiig.de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/#primaryimage"},"image":{"@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2025\/08\/Titelbild_DebunkingDesinformation-\u2013-1-1.png","datePublished":"2025-08-20T12:52:39+00:00","dateModified":"2025-08-21T07:33:11+00:00","description":"Exploring definitions, algorithmic 
amplification, and detection, this article challenges assumptions about disinformation and calls for research evidence.","breadcrumb":{"@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/#primaryimage","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2025\/08\/Titelbild_DebunkingDesinformation-\u2013-1-1.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2025\/08\/Titelbild_DebunkingDesinformation-\u2013-1-1.png","width":1144,"height":643,"caption":"Exploring definitions, algorithmic amplification, and detection, this article challenges assumptions about disinformation and calls for research evidence."},{"@type":"BreadcrumbList","@id":"https:\/\/www.hiig.de\/en\/debunking-assumptions-about-disinformation\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.hiig.de\/en\/"},{"@type":"ListItem","position":2,"name":"Debunking assumptions about disinformation: Rethinking what we think we know"}]},{"@type":"WebSite","@id":"https:\/\/www.hiig.de\/#website","url":"https:\/\/www.hiig.de\/","name":"HIIG","description":"Alexander von Humboldt Institute for Internet and 
Society","publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.hiig.de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.hiig.de\/#organization","name":"HIIG","url":"https:\/\/www.hiig.de\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","width":320,"height":80,"caption":"HIIG"},"image":{"@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/a921ecfdfcb94cb9c718b90c3a5dedbd","name":"Digital Society Blog"}]}},"_links":{"self":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/109596","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/users\/313"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/comments?post=109596"}],"version-history":[{"count":5,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/109596\/revisions"}],"predecessor-version":[{"id":109614,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/109596\/revisions\/109614"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media\/109601"}],"wp:attachment":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media?parent=109596"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/categories?post=109596"},{"taxonomy
":"post_tag","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/tags?post=109596"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}