
When scholars sprint, bad algorithms are on the run

Authors: Kettemann, M. C., & Pirang, A.
Published in: Digital society blog
Year: 2020
Type: Other publications

In response to increasing public pressure to tackle hate speech and other challenging content, platform companies have turned to algorithmic content moderation systems. These automated tools promise to identify potentially illegal or unwanted material more effectively and efficiently. But algorithmic content moderation also raises many questions, none of which has a simple answer. Where is the line between hate speech and freedom of expression, and how can that line be drawn automatically on a global scale? Should platforms scale up the use of AI tools only for illegal online speech, such as terrorism promotion, or for regular content governance as well? Are platforms’ algorithms over-enforcing against legitimate speech, or are they rather failing to limit hateful content on their sites? And how can policymakers ensure an adequate level of transparency and accountability in platforms’ algorithmic content moderation processes?


Connected HIIG researchers

Matthias C. Kettemann, Prof. Dr. LL.M. (Harvard)

Research Group Leader and Associated Researcher: Global Constitutionalism and the Internet

Alexander Pirang

Former Associated Doctoral Researcher: AI & Society Lab
