3 December 2020

When scholars sprint, bad algorithms are on the run

The first research sprint of the project “The Ethics of Digitalisation”, funded by Stiftung Mercator, has crossed the finish line. Thirteen international fellows tackled the challenges that come with the use of AI in the moderation of online content. After ten intensive weeks of interdisciplinary research, we give an overview of the key findings.


In response to increasing public pressure to tackle hate speech and other challenging content, platform companies have turned to algorithmic content moderation systems. These automated tools promise to identify potentially illegal or unwanted material more effectively and efficiently. But algorithmic content moderation also raises many questions – none of which has a simple answer. Where is the line between hate speech and freedom of expression, and how can that line be drawn automatically on a global scale? Should platforms restrict the use of AI tools to illegal online speech, such as the promotion of terrorism, or extend it to regular content governance as well? Are platforms’ algorithms over-enforcing against legitimate speech, or are they instead failing to limit hateful content on their sites? And how can policymakers ensure an adequate level of transparency and accountability in platforms’ algorithmic content moderation processes?
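The tension between over- and under-enforcement can be made concrete with a small thought experiment. The following Python sketch is purely illustrative – the posts, scores, and thresholds are invented, and no real platform system is this simple – but it shows why no single decision threshold can resolve the trade-off.

```python
# Purely illustrative sketch: an automated moderator that removes content when
# a classifier's "hate speech" score crosses a threshold. All posts, scores,
# and thresholds below are hypothetical.

def moderate(score: float, threshold: float) -> str:
    """Return the moderation decision for a model score in [0, 1]."""
    return "remove" if score >= threshold else "keep"

# Hypothetical scores a classifier might assign.
posts = {
    "explicit hateful slur": 0.97,
    "news report quoting hate speech": 0.62,  # legitimate speech, scored high
    "coded hateful dogwhistle": 0.45,         # hateful speech, scored low
    "harmless vacation photo caption": 0.03,
}

for threshold in (0.5, 0.9):
    decisions = {post: moderate(score, threshold) for post, score in posts.items()}
    print(f"threshold={threshold}: {decisions}")

# At threshold 0.5 the news report is removed (over-enforcement) while the
# dogwhistle stays up (under-enforcement); raising the threshold to 0.9 spares
# the news report but still misses the dogwhistle. The model's errors surface
# either way – exactly the trade-off the questions above describe.
```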

Research sprint within the framework of “The Ethics of Digitalisation”

These were just some of the issues that drove the research sprint on AI and content moderation hosted by the Alexander von Humboldt Institute for Internet and Society (HIIG). The sprint, which took place virtually over the course of ten weeks from August until October 2020, was the first research format of the project “The Ethics of Digitalisation – from Principles to Practices”, under the patronage of German Federal President Frank-Walter Steinmeier. The project, which will run until July 2022, aims to foster a global dialogue on the ethics of digitalisation by involving stakeholders from academia, civil society, policy, and industry. It comprises research sprints and smaller clinic formats hosted by several research institutes of the Global Network of Centers. The project’s main partners are Stiftung Mercator, HIIG, the Berkman Klein Center at Harvard University, and the Digital Asia Hub.

Thirteen fellows, nine countries, seven time zones

In line with the project’s interdisciplinary approach, the HIIG team, led by Nadine Birner, Christian Katzenbach, Matthias C. Kettemann, Alexander Pirang and Friederike Stock, assembled a highly diverse group of participants for the first research sprint. They selected thirteen brilliant fellows working in nine countries across seven time zones, whose academic expertise ranged from law and public policy to data science and digital ethics.

The fellows formed working groups to address key challenges arising from the use of automation and machine learning in content moderation. They were mentored in this effort by Julia Reda (Gesellschaft für Freiheitsrechte, Berkman Klein Center), Mackenzie Nelson (AlgorithmWatch), and Juan Carlos de Martin (Politecnico di Torino, Berkman Klein Center). To engage with industry perspectives, the fellows also met with representatives from Facebook and Google. Most importantly, however, the fellows had intense discussions among themselves, which we – as scientific leads – found as captivating as they were thought-provoking.

This is a sprint, not a marathon: three policy briefs to guide the way

This journey was challenging at times. Research usually feels more like a marathon than a sprint, yet, in our case, the time pressure was high right from the start. And mind you, all this took place virtually during a pandemic.

The fellows more than met our high expectations, constantly pushing the boundaries of the research sprint’s format with their motivation and intellectual curiosity. Starting with the premise that algorithmic content moderation is here to stay, the fellows identified glaring gaps in our knowledge of how platform companies automate content moderation processes. Moreover, they recognized that highly imperfect machines pose grave risks for fundamental rights, particularly freedom of expression. Against this background, the working groups produced policy briefs that make recommendations on how to address these challenges across the following key areas.


Meaningful transparency obligations: In order to overcome the current information gap, the fellows propose wide-ranging measures to establish a multi-level transparency regime, thus facilitating evidence-based platform regulation and society-wide debate about how algorithmic content moderation systems should be designed.

Effective appeal mechanisms: Given the lack of redress against automated enforcement decisions, the fellows recommend imposing binding and enforceable obligations on platforms to provide users with effective appeal mechanisms. They also propose establishing an independent Ombudsperson with the power to supervise and evaluate platforms’ algorithmic content moderation practices.

Principle-based algorithmic auditing: Lastly, the fellows identify algorithmic audits as the most promising mechanism for monitoring the risks associated with the use of AI in content moderation. To ensure carefully crafted legal mandates for such audits, the fellows propose four guiding principles: independence, access, publicity, and resources.
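To give a sense of what such an audit might measure in practice, here is a minimal sketch, assuming a hypothetical sample of moderation decisions paired with independent human review. The data and metric names are invented for illustration; the fellows’ briefs define the audit framework, not this code.

```python
# Hypothetical sketch of one computation an algorithmic audit might run:
# estimating over- and under-enforcement rates by comparing automated
# decisions against independent human review. The sample data is invented.

# Each record: (automated decision, human reviewer's judgement)
sample = [
    ("remove", "violates_policy"),
    ("remove", "legitimate"),       # over-enforcement: legitimate speech removed
    ("keep",   "violates_policy"),  # under-enforcement: violating content kept
    ("keep",   "legitimate"),
    ("remove", "violates_policy"),
]

removed = [r for r in sample if r[0] == "remove"]
kept = [r for r in sample if r[0] == "keep"]

# Share of removals that hit legitimate speech, and share of kept posts
# that actually violated policy.
over_enforcement = sum(r[1] == "legitimate" for r in removed) / len(removed)
under_enforcement = sum(r[1] == "violates_policy" for r in kept) / len(kept)

print(f"over-enforcement rate:  {over_enforcement:.0%}")
print(f"under-enforcement rate: {under_enforcement:.0%}")
```

A real audit would of course require the independent data access the fellows’ principles call for; the point here is only that such error rates become measurable once that access exists.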

For more details, we invite you to read the policy briefs.

What’s next? Interested in the ethics of digitalisation? Take a look at our upcoming virtual clinic on “Increasing Fairness in Targeted Advertising: The Risk of Gender Stereotyping by Job Ad Algorithms”.

This post reflects the views of the authors and neither necessarily nor exclusively the views of the institute. For more information about the content of these posts and the associated research projects, please contact info@hiig.de.

Alexander Pirang

Former Associated Doctoral Researcher: AI & Society Lab

Matthias C. Kettemann, Prof. Dr. LL.M. (Harvard)

Research Group Leader and Associated Researcher: Global Constitutionalism and the Internet
