More important than ever! Social Platform Governance during and after the 2020 US Presidential Election
The 2020 US Presidential Election has shown that practices of social platform governance and content moderation decisions are more crucial than ever. While the transparency of these algorithmic systems remains low, they may crucially affect the public opinion formation of the electorate and therefore need to be more accessible to and better understood by researchers and policymakers.
Social Platform Governance and the 2020 US Presidential Election
Four years ago, many people around the world woke up feeling surprised or shocked. While on election night it seemed as though Hillary Clinton would win the 58th US presidential election, an early morning look at the cellphone confronted sleepy eyes with a new reality. Donald J. Trump had won the electoral college by seizing several crucial swing states. Only months later, amid the revelations and allegations related to the Cambridge Analytica scandal, the role of targeted political advertising and other forms of social media campaigning in persuading undecided voters was debated much more widely in public. These voters eventually tilted the result in favour of Donald Trump.
Much has happened over the past four years. To stay with the US campaign a moment longer, Brad Parscale, a former freelance online marketing specialist for Trump’s companies, who led the 2016 Trump campaign’s digital strategy, was promoted to campaign manager of the 2020 campaign. He was, however, demoted in July 2020 after a failed rally. While Brad Parscale’s rise underlines the importance of digital political campaigning, his fall reflects how difficult it can be to transform online attention into large-scale offline mobilisation. Mr. Parscale effectively lost his job to this conundrum: only a fraction of registered guests for the campaign event in Tulsa showed up, not least because TikTok users launched a successful campaign to sabotage the event by registering multiple times.
To organise such events, the 2020 Trump campaign relied on an improved campaign app that itself acts as a platform, linking supporters directly to the President while generating politically and financially valuable data for Trump’s campaign and his company, which owns the application. Nevertheless, Americans spend a lot of time on Facebook, and 48% use it to consume political news according to the Reuters Institute Digital News Report 2020. This makes Facebook the central online hub for public opinion formation and the most important platform for political advertising and digital campaigning.
A shift in discourse and increased awareness
The past years have increased public awareness of the challenges that digitalisation, datafication and algorithmic decision-making pose for society and democracy, but considering trends of algorithmic governance in the public and private sector, further work is still needed. The Cambridge Analytica scandal and the growing awareness of social platforms’ roles in society represent a public relations problem for platform companies, which have signalled understanding and a willingness to comply with regulation. In fact, they have often expressed the need to be regulated more closely or, failing that, to take more responsibility themselves. Facebook, for example, recently announced its decision to delete messages referring to QAnon as well as antisemitic content relating to Holocaust denial. The latter would not be unlawful in the United States, yet is illegal in many European countries.
While these decisions may seem understandable, especially when considering the issue from a European perspective, they conflict with the US constitutional understanding of freedom of speech. In addition to such specific political decisions taken by high-level management in social platform companies, such as the moderation of the New York Post’s Biden story, millions of messages and accounts are deleted algorithmically every day in order to protect users from online harms. However, these safeguards can simultaneously be seen as conflicting with democratic rights and as skewing the formation of public opinion by over-blocking legitimate content.
What is algorithmic moderation and why is it important for elections?
In a recent research sprint on AI and content moderation organised by the Humboldt Institute for Internet and Society and the Network of Centers, we focused on current developments in platform governance and observed a general increase in the use of algorithmic content moderation among many social media platforms throughout the coronavirus pandemic.
Building on foundational work by Grimmelmann (2015), who defined moderation as “the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse” (p. 47), I would underline that the distinction between algorithmic systems for content recommendation and systems for the detection of illegal or harmful content is necessary to develop a broader public understanding of the problem. Recommendation systems decide which content social platform users get to see and also shape online communities by recommending accounts to follow or groups to join; they have been criticised for leading users down rabbit holes or into groups that act as echo chambers and may intensify political polarisation or even radicalisation and extremism.
In the research sprint we focused on the deletion of illegal and potentially harmful content by algorithmic content moderation systems that “often remain opaque, unaccountable and often poorly understood” (Gorwa, Binns and Katzenbach, 2020: 2). This is problematic per se, since the reasons why content was removed are not transparent; moreover, it makes an empirical investigation of how and to what extent the use of algorithmic content moderation affects public opinion formation in political campaigns extremely difficult. Thus, the opaqueness of algorithmic content moderation systems also hampers scientific policy advice and decision-making.
While platforms have started reporting on their ‘community guideline enforcement’, i.e. human and algorithmic detection and deletion of content, in so-called transparency reports, the data included in these reports is fragmented and not available in machine-readable formats. If the aim is to genuinely improve the quality and inclusivity of online public discourse, civil society actors such as NGOs and research institutions must be granted increased access to platform data. This would allow an independent and effective assessment of the impact that algorithmic moderation decisions have on opinion formation during democratic election campaigns such as the 2020 US presidential election. In the coming weeks, the fellows of the HIIG research sprint will present three policy reports outlining recommendations on how to enhance transparency in algorithmic content moderation and better inform policymaking on the governance of social media platforms.
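To illustrate what machine-readable transparency data could enable, the snippet below is a minimal sketch of a structured moderation-action record and a simple aggregation a researcher might run over it. All field names, values and figures are hypothetical illustrations, not drawn from any actual platform report.

```python
import json
from collections import Counter

# Hypothetical, simplified records of individual moderation actions.
# Field names and values are illustrative only, not taken from any real report.
sample_records = [
    {"platform": "ExampleNet", "date": "2020-10-01", "policy": "hate_speech",
     "detection": "automated", "action": "removal", "appealed": False},
    {"platform": "ExampleNet", "date": "2020-10-01", "policy": "misinformation",
     "detection": "human", "action": "label", "appealed": True},
    {"platform": "ExampleNet", "date": "2020-10-02", "policy": "hate_speech",
     "detection": "automated", "action": "removal", "appealed": True},
]

def share_automated(records):
    """Return the share of moderation actions attributed to automated detection."""
    detections = Counter(r["detection"] for r in records)
    total = sum(detections.values())
    return detections["automated"] / total if total else 0.0

# Print one example record and the aggregate share of automated decisions.
print(json.dumps(sample_records[0], indent=2))
print(f"Automated share: {share_automated(sample_records):.0%}")
```

If records of this kind were published at scale and in a consistent schema, independent researchers could compare removal rates across policies, detection methods and election periods, rather than relying on aggregated summaries in PDF transparency reports.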
Philipp Darius is a PhD candidate at the Hertie School’s Centre for Digital Governance and a political consultant. In his dissertation he applies methods from computational social science and political data science to investigate the intersection of politics, technology and democratic governance.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact email@example.com.