03 December 2020

When scholars sprint, bad algorithms are on the run

The first research sprint of the project “The Ethics of Digitalisation”, funded by Stiftung Mercator, has crossed the finish line. Thirteen international fellows tackled the challenges that come with the use of AI in the moderation of online content. After ten intensive weeks of interdisciplinary research, we give an overview of the key results.


In response to increasing public pressure to tackle hate speech and other challenging content, platform companies have turned to algorithmic content moderation systems. These automated tools promise to be more effective and efficient at identifying potentially illegal or unwanted material. But algorithmic content moderation also raises many questions, none of which allow for simple answers. Where is the line between hate speech and freedom of expression, and how can it be drawn automatically on a global scale? Should platforms reserve AI tools for illegal online speech, such as the promotion of terrorism, or deploy them for everyday content governance as well? Are platforms’ algorithms over-enforcing against legitimate speech, or are they rather failing to limit hateful content on their sites? And how can policymakers ensure an adequate level of transparency and accountability in platforms’ algorithmic content moderation processes?
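
The over- versus under-enforcement question is, at its core, a question about classifier thresholds. The following minimal Python sketch, in which every name, score, and threshold is our own invented assumption rather than any platform’s actual system, shows how a single parameter embodies that trade-off:

```python
# Purely illustrative sketch of a toy moderation pipeline. The classifier,
# scores, and thresholds are invented and describe no real platform.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    toxicity_score: float  # assumed output of an upstream classifier, in [0.0, 1.0]


REMOVE_THRESHOLD = 0.9  # higher: fewer wrongful removals, but more hateful content kept
REVIEW_THRESHOLD = 0.6  # borderline cases are routed to human moderators


def moderate(post: Post) -> str:
    """Decide what happens to a post based on its classifier score."""
    if post.toxicity_score >= REMOVE_THRESHOLD:
        return "remove"
    if post.toxicity_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "keep"


print(moderate(Post(text="example post", toxicity_score=0.72)))  # -> human_review
```

Lowering REMOVE_THRESHOLD catches more hateful content but also removes more legitimate speech; raising it does the reverse. Real systems are vastly more complex, but the underlying trade-off persists.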

Research sprint within the framework of “The Ethics of Digitalisation”

These were just some of the issues that drove the research sprint on AI and content moderation hosted by the Alexander von Humboldt Institute for Internet and Society (HIIG). The sprint, which took place virtually over the course of ten weeks from August until October 2020, was the first research format of the project “The Ethics of Digitalisation – from Principles to Practices” under the patronage of the German Federal President Frank-Walter Steinmeier. The project, which will run until July 2022, aims to foster a global dialogue on the ethics of digitalisation by involving stakeholders from academia, civil society, policy, and industry. It comprises research sprints and smaller clinic formats hosted by several research institutes of the Global Network of Centers. Its main partners are Stiftung Mercator, HIIG, the Berkman Klein Center at Harvard University, and the Digital Asia Hub.

Thirteen fellows, nine countries, seven time zones

In line with the project’s interdisciplinary approach, the HIIG team led by Nadine Birner, Christian Katzenbach, Matthias C. Kettemann, Alexander Pirang and Friederike Stock assembled a highly diverse group of participants for the first research sprint. They selected thirteen brilliant fellows working in nine countries across seven time zones, whose academic expertise ranged from law and public policy to data science and digital ethics.

The fellows formed working groups to address key challenges arising from the use of automation and machine learning in content moderation. They were mentored in this effort by Julia Reda (Gesellschaft für Freiheitsrechte, Berkman Klein Center), Mackenzie Nelson (AlgorithmWatch), and Juan Carlos de Martin (Politecnico di Torino, Berkman Klein Center). To engage with industry perspectives, the fellows also met with representatives from Facebook and Google. Most importantly, however, the fellows had intense discussions among themselves, which we, as scientific leads, found as captivating as they were thought-provoking.

This is a sprint, not a marathon: three policy briefs to guide the way

This journey was challenging at times. Research usually feels more like a marathon than a sprint, yet, in our case, the time pressure was high right from the start. And mind you, all this took place virtually during a pandemic.

The fellows more than met our high expectations, constantly pushing the boundaries of the research sprint’s format with their motivation and intellectual curiosity. Starting with the premise that algorithmic content moderation is here to stay, the fellows identified glaring gaps in our knowledge of how platform companies automate content moderation processes. Moreover, they recognized that highly imperfect machines pose grave risks for fundamental rights, particularly freedom of expression. Against this background, the working groups produced policy briefs that make recommendations on how to address these challenges across the following key areas.


Meaningful transparency obligations: In order to overcome the current information gap, the fellows propose wide-ranging measures to establish a multi-level transparency regime, thus facilitating evidence-based platform regulation and society-wide debate about how algorithmic content moderation systems should be designed.

Effective appeal mechanisms: Given the lack of redress against automated enforcement decisions, the fellows recommend imposing binding and enforceable obligations on platforms to provide users with effective appeal mechanisms. They also recommend establishing an independent ombudsperson with the power to supervise and evaluate platforms’ algorithmic content moderation practices.

Principle-based algorithmic auditing: Lastly, the fellows identify algorithmic audits as the most promising mechanism for monitoring the risks associated with the use of AI in content moderation. To ensure that audit mandates are carefully crafted, they recommend four guiding principles: independence, access, publicity, and resources. The sketch below illustrates what such an audit might measure in practice.
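
To make the idea of an algorithmic audit concrete, here is a minimal, purely illustrative Python sketch. It assumes an auditor has been granted access (one of the four principles) to a small labelled sample of moderation decisions; the data and labels below are invented for illustration only:

```python
# Purely illustrative sketch of what an audit might compute, assuming the
# auditor has access to a labelled sample of moderation decisions. The data
# below is invented.

decisions = [
    # (platform_decision, ground_truth_label)
    ("remove", "hate_speech"),
    ("remove", "legitimate"),   # over-enforcement: legitimate speech removed
    ("keep", "hate_speech"),    # under-enforcement: hateful content kept up
    ("keep", "legitimate"),
]

wrongly_removed = sum(1 for d, t in decisions if d == "remove" and t == "legitimate")
wrongly_kept = sum(1 for d, t in decisions if d == "keep" and t == "hate_speech")
n_legitimate = sum(1 for _, t in decisions if t == "legitimate")
n_hateful = sum(1 for _, t in decisions if t == "hate_speech")

# Share of legitimate posts wrongly removed (over-enforcement).
print(f"over-enforcement rate:  {wrongly_removed / n_legitimate:.0%}")
# Share of hateful posts wrongly kept up (under-enforcement).
print(f"under-enforcement rate: {wrongly_kept / n_hateful:.0%}")
```

Regularly publishing such error rates would be one way to put the publicity principle into practice.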

For more details, we invite you to read the policy briefs.

What’s next? Interested in the ethics of digitalisation? Take a look at our upcoming virtual clinic on “Increasing Fairness in Targeted Advertising: The Risk of Gender Stereotyping by Job Ad Algorithms”.

This post represents the views of the authors and does not necessarily represent the view of the institute itself. For more information about the topics of this article and associated research projects, please contact info@hiig.de.

Alexander Pirang

Former Associated Doctoral Researcher: AI & Society Lab

Matthias C. Kettemann, Prof. Dr. LL.M. (Harvard)

Head of Research Group and Associate Researcher: Global Constitutionalism and the Internet
