Topics in Focus
Ethics of Digitalisation
By which criteria must chatbots be programmed so that they communicate free of discrimination? What rules must apply in the development of artificial intelligence so that AI applications serve the good of all? How do we design the algorithms that shape our society? In our interdisciplinary project "Ethik der Digitalisierung" (Ethics of Digitalisation), we work with international partner institutions to develop concrete solutions that support practical applications and dialogue at the intersection of science, politics, the digital economy and civil society.
The Research Fellows of the First Sprint Introduce Themselves
I am a PhD candidate at the Faculty of Law of Maastricht University. My mission as a researcher is to promote digital civil rights. In my PhD project, I examine the dynamics of the EU intermediary liability framework and its impact on freedom of expression and information online. Having started my academic journey only recently, I appreciate every opportunity to get to know researchers with similar interests; that is why I jumped at the chance to become a fellow of the research sprint. If we can rely on flexible but commonly acknowledged ethical principles, we will be able to ensure productive collaboration between different actors in the online realm.
I am a second-year Ph.D. candidate at the Faculty of Law of the University of Luxembourg. My Ph.D. dissertation examines AI and the enforcement of norms on online platforms. As part of the legal hacker community, I am interested in discussing and proposing actionable solutions to pressing issues at the intersection of law and technology. Within the field of AI and content moderation, I am fascinated by algorithmic enforcement systems and their impact on different rights, such as freedom of expression. Currently, the use of these systems often accepts an implicit high social cost of over-enforcement. For this reason, and many others, I believe it is essential to have a discussion about the ethics of digitalization that goes beyond experts' views and engages society at large.
I work as a lead AI engineer at Koe Koe Tech. My boss recommended me for this research sprint. I find AI and content moderation important because they are shaping society and mass behaviour. As a researcher, I would like people to understand how online social interactions are being guided by algorithms and why it is important that platforms are properly regulated. We need to talk more about ethics because it is the basis on which law is made, and platforms without ethics are bound to cause problems in society.
I’m currently pursuing my PhD in Sociology at the University of Chicago after having studied computer science and worked in the ‘AI for social good’ space for a couple of years. For me, artificial intelligence is a manifestation of what has appeared as a natural trend towards an informational world, and it is through these functions of compression that we increasingly engage with our world. Therefore, as researchers, it’s important for us to critique, understand, and produce research that examines these systems of information, which will allow us to reimagine that which may appear as natural.
I am currently a visiting scholar with the Elliott School of International Affairs, George Washington University. The field of AI and content moderation is one that seems very promising for scholars, but also very important generally for the world, as so much of our communication happens online nowadays. A conversation about ethics helps tremendously in this perspective, primarily because of its nature. It allows us to understand where we and our interlocutors stand, and gives us not just a great framework, but also a translation device to understand complex actions. What we must also guard against is the watering down of the concept, the misinterpretation of it as compliance, or the fetishization of its importance.
I am an Assistant Professor in Intellectual Property Law at the Tilburg Institute for Law, Technology, and Society (TILT) of Tilburg University, The Netherlands. The topic of platform governance and online service provider liability for user-generated content forms a core component of my ongoing research. I am particularly interested in exploring how the deployment of algorithmic content moderation systems could impact creativity and the promotion of dialogic interaction within the digital environment. Furthermore, I am curious to examine whether existing legislative, regulatory and policy frameworks on algorithmic content moderation could be calibrated in a manner that enables online platforms to flourish as open public spaces for robust and ethical social discourse.
I am a PhD candidate at the University of Brasilia (Brazil), and in October I am moving to Scotland to start my PhD in Law at the University of Glasgow.
What fascinates me most about the field is its great influence on how we communicate, consume information, products and services, and relate to each other daily on the Internet. Besides that, as a copyright specialist, content moderation for the purpose of copyright enforcement has been a matter of discussion for at least the last two decades. More recently, with the development of new algorithms and AI, it has gained even more relevance for us who work in this field.
I am a PhD candidate at the University of Hong Kong and also serve as an Administrative Officer at Creative Commons Hong Kong. Trained in engineering and law, I focus my research interests on IP & IT law and innovation policy, particularly employing computational legal studies and data science. I have explored Chinese digital policies by contributing to the CyberBRICS Project, hosted by institutions across Brazil, Russia, India, China and South Africa, as well as the Global Data Justice Project funded by the European Research Council (ERC). Today's content moderation systems have shown ever more far-reaching implications for behavioural transformation, and Internet regulations across jurisdictions are in turn shaping the algorithm-based automatic mechanisms created by platforms.
I’m a DPhil (PhD) candidate at the University of Oxford’s Internet Institute, and a Research Associate at the Alan Turing Institute. For me, content moderation is the thin end of the wedge when it comes to big tech’s use of AI. I see part of my mission as demystifying and demythologising new technology like AI, the rhetoric around which can often lead us to focus on theoretical problems arising 50 years from now, when in reality we should be thinking about the next 5 years — as dangers like automated facial recognition and the algorithmically powered spread of harmful content online pose increasing risks to human rights and democratic discourse.
Hannah Bloch-Wehba
I am a law professor at Texas A&M University, where I study, teach, and write about law and technology. Currently, I'm particularly interested in how the promise of "AI" can be used to conceal platform power and obscure relationships with law enforcement. I'm taking part in the sprint because I'm excited to work with an international, interdisciplinary group of scholars neck deep in debates about digital rights and values. My goal as a researcher is to shed new light on the challenges technology poses for democratic processes, institutions, rights, and values.
I am a PhD candidate at the Hertie School, where I am affiliated with the Centre for Digital Governance. In my dissertation project, I apply methods from the interdisciplinary fields of computational social science and social data science to better understand the impact of social platforms on democracy, and in particular on political campaigning and democratic elections. The current implementation of content moderation and systems for algorithmic filtering is a pivotal puzzle piece in understanding how policy makers can effectively regulate harmful and illegal behaviour on social platforms while at the same time limiting possible negative effects on democratic values such as liberty, equality and diversity.
What Our Partner Institutions Say
Amar Ashar | Berkman Klein Center at Harvard University
Why we need a global dialogue on the ethics of digitalisation
Malavika Jayaram | Digital Asia Hub
Why we need a greater say in the design of artificial intelligence
Carlos Affonso Souza | Institute for Technology and Society of Rio de Janeiro
Why we must address digital inequality
Texts on the Topic
The fever over Big Tech's market dominance culminated a few months ago in a controversial hearing in the US Congress. While Google, Apple and Amazon may call for burdensome new regulations, Facebook fears...
Can your fridge order milk for you, but refuse you a second ice cream? Should your self-driving car crash into a tree with you rather than run over a careless road user? May self-learning…
Under the patronage of the Federal President, the HIIG launches a worldwide research project today. Berlin, 17 August 2020 – Federal President Frank-Walter Steinmeier opens the kick-off event of the two-year project "Ethik der…" today at Schloss Bellevue.
The internet believes that artificial intelligence (AI) will solve fundamental problems of our society all by itself. Christian Katzenbach took a closer look at this myth. For this year's Internet Governance Forum (IGF)…
The Research Project in the Media
Federal President Frank-Walter Steinmeier invites guests to Schloss Bellevue for the launch of the international research project "Ethik der Digitalisierung". The kick-off conference on 17 August focuses on ethical questions of digitalisation, for instance concerning the workings of artificial intelligence and algorithms, the Office of the Federal President announced.