Disinformation: Are we really overestimating ourselves?
Public reporting frequently and urgently warns against disinformation. In addition, several surveys indicate a high level of concern among the population. On the other hand, numerous scientific studies suggest that the reach of disinformation online is very limited and affects only a very small part of the population. So does the public discourse not provide a balanced picture of disinformation? This blog article discusses the consequences and undesirable effects this can have.
Disinformation as a risk and cause for uncertainty
The World Economic Forum (WEF) has declared disinformation the greatest short-term risk worldwide. By the WEF's definition, this means that disinformation could affect a significant share of global gross domestic product (GDP), of the population, or of natural resources within the next two years.
This declaration is in line with the study “Unsettled Public”, which was recently published by the Bertelsmann Stiftung, according to which 70% of respondents are concerned that others are being deceived by disinformation. The result is not surprising and is in line with surveys conducted before Germany’s last national election (91%), after the last national election (92%) and last year (94%).
Do we overestimate our own abilities?
Another result that recurs across all these studies is that far fewer people see disinformation as a personal risk. Only 38% to 44% of respondents said they were worried about being deceived by disinformation themselves; in the Bertelsmann study, the figure was as low as 16%. In other words, many people fear that others will be deceived, but far fewer worry about being deceived themselves.
One obvious conclusion that was drawn in some places is that the majority of respondents lack reflection and overestimate themselves. However, this is only one possible interpretation and current empirical studies on the scope of disinformation suggest alternatives.
Significance and limitations of current studies
One caveat first: empirical studies have to make numerous assumptions in order to measure the reach of disinformation. For starters, they focus primarily on alternative media and social media. We do not yet know enough about the extent to which online disinformation finds its way into established media (Tsfati et al. 2020), and the role of individual politicians must also be scrutinised. Furthermore, these studies focus on websites that repeatedly spread false information. Yet even these websites do not exclusively disseminate false information, which raises the question of whether the results should be read as precise down to the last decimal place. Nevertheless, numerous studies point in the same direction: the reach of disinformation online is in the single-digit percentage range.
Disinformation online only has a very limited reach
A look at visits to German news websites from 2017 to 2021 is revealing. Websites that repeatedly spread false information account for only 0.96% of these visits (Altay et al. 2022). The figures are similarly low for the UK and the USA, and even France, at 3.3%, remains low. Interaction on Facebook in Germany follows the same pattern: of all likes, emojis, shares and comments on the Facebook accounts of German news pages, those that repeatedly spread false information received only 8.52%.
Other studies come to similar conclusions. Fletcher et al. 2018 show that untrustworthy websites in Italy and France were visited far less often in 2017 than the websites of established media. Grinberg et al. 2019 document that during the 2016 US election campaign, 80% of the fake news sources circulating on Twitter were consumed by just 1% of users. A small number of users consumed extreme amounts, while for the vast majority, fake news sources made up only a small share (1.18% on average) of the political content in their Twitter feed. Guess et al. 2019 reach a similar conclusion for sharing on Facebook: more than 90% of users did not share any posts from pages that repeatedly spread false information during the election campaign.
And while the Bertelsmann study states that 35% of respondents encountered disinformation online very or fairly frequently, this conversely means that 65% did so (very) rarely or never. This is consistent with the other surveys, in which 76% to 81% of respondents reported only occasional or no contact with disinformation on the internet.
Realistic instead of unreflected?
Time for a little thought experiment: How worried are you about being deceived by disinformation?
There is a good chance that you have never, or only rarely, come into contact with disinformation. So you answer that you are not worried about yourself. But you keep hearing that disinformation is a big problem, that it jeopardises the next elections, or that it has been declared the greatest short-term risk worldwide. So you are worried that others are being deceived by it. After all, this fatalism about disinformation has to come from somewhere.
If you consider the limited reach of disinformation, the surveys mentioned above can also be interpreted differently. Perhaps citizens are neither overconfident nor unreflective about disinformation; perhaps they simply have a realistic sense of the content they encounter online. In that case, we should ask not so much whether we are warning enough about disinformation, but whether public discourse is painting a balanced picture of it. Of course, the risks of disinformation should receive media attention. But what are the consequences if this happens in an undifferentiated way, without addressing the limited reach and effect of disinformation?
Worrying about disinformation? Yes, but please in a balanced and not indiscriminate manner!
Various studies suggest that indiscriminate warnings about digital disinformation can weaken trust in the media (Van Duyn and Collier 2019) and fuel doubts about the credibility of accurate information (Hameleers 2023). They can also increase support for more restrictive regulation of expression in the digital space (Jungherr and Rauchfleisch 2024) and reduce satisfaction with the current functioning of democracy (Nisbet et al. 2021). In short, untargeted warnings about the threat of disinformation can have exactly the opposite effect and lead to uncertainty and mistrust.
To be clear: disinformation is a real problem. Especially in neck-and-neck races, such as presumably in the USA this year, even disinformation with a small reach could have a major impact on the election result. Also, the limited reach of disinformation online is not a free pass for social media to reduce their efforts on content moderation. Furthermore, we should take a more holistic view of the problem: Disinformation, like hate speech, is a means of shifting public discourse and lending legitimacy to anti-democratic statements.
Nevertheless, disinformation must be kept in proportion and must not be hyped. Such hype only ends up undermining trust in the media and in reliable information online. What is needed is balanced reporting on disinformation that not only emphasises the problem but also strengthens trust in reliable information.
References
Altay, S., Nielsen, R. K., & Fletcher, R. (2022). Quantifying the “infodemic”: People turned to trustworthy news outlets during the 2020 coronavirus pandemic. Journal of Quantitative Description: Digital Media, 2. https://doi.org/10.51685/jqd.2022.020
Fletcher, R., Cornia, A., Graves, L., & Nielsen, R. K. (2018). Measuring the reach of “fake news” and online disinformation in Europe. Reuters Institute Factsheets.
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378. https://doi.org/10.1126/science.aau2706
Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1). https://doi.org/10.1126/sciadv.aau4586
Hameleers, M. (2023). The (Un)Intended Consequences of Emphasizing the Threats of Mis- and Disinformation. Media and Communication, 11(2), 5–14. https://doi.org/10.17645/mac.v11i2.6301
Jungherr, A., & Rauchfleisch, A. (2024). Negative Downstream Effects of Alarmist Disinformation Discourse: Evidence from the United States. Political Behavior. https://doi.org/10.1007/s11109-024-09911-3
Nisbet, E. C., Mortenson, C., & Li, Q. (2021). The presumed influence of election misinformation on others reduces our own satisfaction with democracy. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-59
Tsfati, Y., Boomgaarden, H. G., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E. (2020). Causes and consequences of mainstream media dissemination of fake news: Literature review and synthesis. Annals of the International Communication Association, 44(2), 157–173. https://doi.org/10.1080/23808985.2020.1759443
Van Duyn, E., & Collier, J. (2019). Priming and Fake News: The Effects of Elite Discourse on Evaluations of News Media. Mass Communication and Society, 22(1), 29–48. https://doi.org/10.1080/15205436.2018.1511807