Who spreads disinformation, where, for what purpose, and to what extent?
Disinformation poses a threat to democracy — or so the widely shared narrative goes. But how much false information do politicians actually spread? On which platforms, and to what ends? Two new studies provide systematic answers: they show that disinformation is not a marginal phenomenon confined to social media, but also appears in talk shows on public television. The AfD, the CDU/CSU, and the BSW are particularly active, albeit with different objectives. While the AfD primarily targets institutions, the CDU/CSU use disinformation mainly to attack political opponents. This blog post summarises three key findings.
Attacks on the foundations of democracy, such as free elections or the independence of courts, are widely discussed. Disinformation is often regarded as one form of these assaults. Social media, in particular, is suspected of facilitating the spread of falsehoods. But is this justified? Together with colleagues, I have conducted two studies on the dissemination of disinformation by German politicians. One examined disinformation on social media (Nenno, Puschmann, Fuławka, and Lorenz-Spreen 2025), and the other investigated the phenomenon on public television (Nenno 2025). In this blog post, I would like to share three key findings that shed light on who spreads disinformation, where, for what purpose, and to what extent.
Before turning to the findings, a brief conceptual clarification may help to better contextualise the results. Both studies examine the dissemination of disinformation — which is not the same as exposure or effect. We do not all consume the same information; therefore, greater dissemination does not necessarily mean that everyone is more strongly or equally exposed to false information. And even those who are exposed to disinformation do not necessarily change their behaviour as a result. What interpretations, then, do our findings allow regarding the spread of disinformation?
Disinformation – widespread or false alarm?
In recent years, a growing number of studies have sought to measure the spread of online disinformation (Acerbi et al. 2022). Most of these studies conclude that disinformation is only minimally disseminated. Some researchers therefore argue that public concern about disinformation is exaggerated (Budak et al. 2024; Nenno 2024). However, this is not the only possible interpretation. If online disinformation is indeed limited in scope, yet widely known, this may be because its reach is amplified through coverage by established media outlets (Tsfati et al. 2020). In addition, a methodological concern has been raised: strictly speaking, these studies examine only a fraction of the types of disinformation. They identify links to news websites that frequently spread false information (Ecker et al. 2025). However, if the false information appears, for instance, within the body text of a social media post, it remains undetected.
The two studies were designed to address both of these concerns. To do so, we employed a novel measurement approach: using large language models, we analysed texts to detect contentious or verifiable claims. The claims identified in this way were then automatically compared with a database of fact-checks to flag instances of disinformation. This may sound like a technical detail, yet the method offers two key advantages: it allows us to examine the body text of posts and to measure the dissemination of disinformation in transcripts of TikTok videos or television programmes. In other words, compared with previous studies, our analysis could be extended to new media formats and types of content.
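The matching step of such a pipeline can be sketched roughly as follows. This is a minimal illustration only, not the studies' actual implementation: the studies use a large language model to extract check-worthy claims and a semantic model for the comparison, whereas here a simple token-overlap (Jaccard) score, an arbitrary threshold, and invented example data stand in for those components.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two texts (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def match_claims(post_claims, fact_checks, threshold=0.5):
    """Pair each extracted claim with fact-check entries scoring above the threshold."""
    matches = []
    for claim in post_claims:
        for check in fact_checks:
            score = jaccard(claim, check["claim"])
            if score >= threshold:
                matches.append((claim, check, score))
    return matches

# Invented example data for illustration, not real database entries.
fact_checks = [
    {"claim": "the payment card for asylum seekers led to departures",
     "verdict": "false"},
]
posts = ["the payment card for asylum seekers led to departures last year"]

for claim, check, score in match_claims(posts, fact_checks):
    print(f"matched (score {score:.2f}): verdict = {check['verdict']}")
```

In a real pipeline, the Jaccard function would be replaced by embedding-based semantic similarity, since false claims are rarely repeated word for word.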
The studies focused on German politicians and their party accounts — not on disinformation spread by civil society actors or foreign campaigns. Politicians and parties are central actors in shaping public opinion, and the consequences can be serious when they disseminate disinformation.
1) Disinformation is not evenly distributed
Overall, 1.14% of the social media posts in our sample contained disinformation. The rate was highest on Facebook (1.52%), followed by TikTok (1.43%), Instagram (0.97%), and X/Twitter (0.73%). At first glance, this finding is not surprising. Social media platforms host an enormous variety of content — from cat videos and Easter greetings to political propaganda. We should therefore expect that any specific type of content, in this case disinformation, represents only a small fraction of all posts. More important than the overall figure, however, is the question of where disinformation is concentrated and whether these segments can be identified. Are there topics or actors particularly prone to spreading disinformation?
Among party accounts, the AfD shared the most disinformation. It was followed by the CDU, CSU, and BSW, while the SPD, Greens, Left Party, and FDP spread very little. In terms of subject matter, disinformation was most frequent in posts about economic issues (for example, value-added tax in the hospitality sector), followed by law and crime, and agriculture (for example, subsidies in the agricultural sector).
When actors and topics are considered together, the concentration becomes even clearer: 6% of all posts about agriculture contained disinformation, but among CDU posts on this topic the figure rose to 12%. For CSU posts about health (for example, the decriminalisation of cannabis), the share was nearly 9%, and for AfD posts on energy supply (for example, energy-efficient renovation), it was 8%. Thus, the overall low rate of disinformation conceals the high concentration within certain segments.
2) Disinformation is not an exclusively social media problem
So far, the discussion has focused on the spread of disinformation on platforms such as Facebook. Yet the question remains: is disinformation exclusively a social media phenomenon? My hypothesis was that, in formats such as talk shows, it is often not possible to correct false claims made by guests quickly enough, and that disinformation therefore also appears there.
Indeed, on average, almost 12% of episodes from programmes such as Markus Lanz, Hart aber fair, or Maischberger contained at least one false statement. This finding is not entirely surprising, as some of these shows publish their own fact-checks reviewing claims made in previous episodes. They already assume, in other words, that their guests are not always accurate. However, the rate was even higher than these internal fact-checks had suggested.
As on social media, most of the disinformation originated from AfD politicians, followed by members of the CDU/CSU and the BSW. While disinformation is often associated with conspiracy theories or similar narratives, the talk shows primarily featured other forms of false claims. These concerned topics that were at the centre of public debate at the time — for example, the payment card for asylum seekers and the unfounded claim that it had already led to people leaving the country. This aligns with the findings from social media: disinformation shared by politicians often revolved around issues that were the subject of ongoing public controversy.
3) The type of disinformation distinguishes the AfD from other parties
The two studies we conducted show that while the AfD spread the largest share of disinformation, the Union parties also contributed significantly. It was particularly striking that this could not be attributed to a few individual CDU/CSU politicians, but extended across large parts of the parties and even their official accounts. In other words, when considering only the volume of disinformation, the difference between the AfD and the Union was smaller than one might expect.
However, the issue is not solely the amount of disinformation. The social media study revealed that the AfD was more isolated in terms of content than the Union. False claims shared by the AfD were less frequently picked up by other parties than those originating from the Union. In the study on public broadcasting, the content could be analysed in greater detail. It became clear that the AfD often used disinformation to attack democratic institutions, whereas the Union used it primarily to target political opponents. For instance, the AfD spread false information intended to cast doubt on the integrity of the Federal Office for the Protection of the Constitution, while the Union made unfounded claims about migration to criticise the asylum policy of the previous “traffic light” government.
Who spreads disinformation, where, for what purpose and to what extent?
In Germany, disinformation by politicians is spread primarily by the AfD, but also by the BSW and the Union parties. This occurs both online and in talk shows on public television. For most policy areas, the rate of disinformation is very low — and this also applies to posts or appearances by politicians from several parties. However, when it comes to highly contested topics and contributions from the AfD or the Union, the likelihood of false information is relatively high. While the Union mainly uses disinformation to attack political opponents, the AfD often targets democratic institutions with false claims.
As explained at the outset, the dissemination of disinformation does not by itself translate into impact. An increase in disinformation during election periods does not automatically mean that voters will change their decisions at the ballot box. Nevertheless, as its dissemination grows, so too does the risk of such effects. Our results show that the probability of encountering disinformation is far from negligible for certain actors and topics.
A second aspect should also not be overlooked. While it is right to be concerned about the negative effects of consuming disinformation, our studies also shed light on its production — that is, on the politicians who spread it. This is not a matter of a few individuals from a single party, but of substantial parts of several parties. Even though the effects remain uncertain, our findings indicate that disinformation is firmly embedded in the communication strategies of some political actors and parties.
References
Acerbi, A., Altay, S., & Mercier, H. (2022). Research note: Fighting misinformation or fighting for information? Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-87
Budak, C., Nyhan, B., Rothschild, D. M., Thorson, E., & Watts, D. J. (2024). Misunderstanding the harms of online misinformation. Nature, 630(8015), 45–53. https://doi.org/10.1038/s41586-024-07417-w
Ecker, U. K. H., Tay, L. Q., Roozenbeek, J., van der Linden, S., Cook, J., Oreskes, N., & Lewandowsky, S. (2025). Why misinformation must not be ignored. American Psychologist, 80(6), 867–878. https://doi.org/10.1037/amp0001448
Nenno, S. (2024). Desinformation: Überschätzen wir uns? [Disinformation: Are we overestimating ourselves?]. Digital Society Blog. https://www.hiig.de/desinformation-ueberschaetzen-wir-uns/
Nenno, S. (2025). Separate worlds of misinformation. An explorative study of checked claims in German public broadcast news and talk shows. Information, Communication & Society, 0(0), 1–17. https://doi.org/10.1080/1369118X.2025.2561030
Nenno, S., Puschmann, C., Fuławka, K., & Lorenz-Spreen, P. (2025). Content-based detection of misinformation expands its scope across politicians and platforms (No. p6yh9_v1). SocArXiv. https://doi.org/10.31235/osf.io/p6yh9_v1
Tsfati, Y., Boomgaarden, H. G., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E. (2020). Causes and consequences of mainstream media dissemination of fake news: Literature review and synthesis. Annals of the International Communication Association, 44(2), 157–173. https://doi.org/10.1080/23808985.2020.1759443
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
