The Ethics of Digitalisation – From Principles to Practices
According to which criteria must chatbots be programmed so that they communicate without discrimination? What rules must apply when programming AI systems so that they serve the good of all? How do we design the algorithms that shape our society?
The international research project "Ethics of Digitalisation – From Principles to Practices" aims to develop groundbreaking answers to challenges at the intersection of ethics and digitalisation. Innovative scientific formats, namely research sprints and clinics, form the core of the project; they enable interdisciplinary scientific work on application- and practice-oriented questions and produce outputs of high social relevance and impact. The project promotes active exchange at the interface of science, politics and society and thus contributes to a global dialogue on an ethics of digitalisation.
Milestones of the project
Federal President Frank-Walter Steinmeier opened the launch event of the two-year project at Bellevue Palace
17 August 2020 – Humboldt Institute for Internet and Society
The first sprint starts on the topic "AI and Content Moderation"
August until October 2020 – Humboldt Institute for Internet and Society
Presentation of the research outputs and panel discussion on "AI and Content Moderation"
11 November 2020 – Humboldt Institute for Internet and Society
Virtual Clinic on the topic of "Increasing Fairness in Targeted Advertising"
February 2021 – Humboldt Institute for Internet and Society
February until April 2021 – Digital Asia Hub
April 2021 – Berkman Klein Center for Internet & Society
April until June 2021 – Humboldt Institute for Internet and Society
June until July 2021 – Digital Asia Hub
November 2021 – Humboldt Institute for Internet and Society
March until June 2022 – Humboldt Institute for Internet and Society
Project partners include HIIG, the Leibniz-Institut für Medienforschung | Hans-Bredow-Institut, the Berkman Klein Center at Harvard University and the Digital Asia Hub.
The project is funded by Stiftung Mercator. German Federal President Frank-Walter Steinmeier is the patron of the project.
Duration: 07/2020 – 06/2022
The Fellows of the first research sprint introduce themselves
I am a PhD candidate at the Faculty of Law of Maastricht University. My mission as a researcher is to promote digital civil rights. In my PhD project, I examine the dynamics of the EU intermediary liability framework and its impact on freedom of expression and information online. Having started my academic journey only recently, I appreciate every opportunity to get to know researchers with similar interests; that is why I jumped at the chance to become a fellow of the research sprint. If we can rely on flexible but commonly acknowledged ethical principles, we will be able to ensure productive collaboration between different actors in the online realm.
I am a second-year Ph.D. candidate at the Faculty of Law of the University of Luxembourg. My Ph.D. dissertation topic is about AI and the enforcement of norms on online platforms. As part of the legal hacker community, I am interested in discussing and proposing actionable solutions to pressing issues at the intersection of law and technology. Within the field of AI and content moderation, I am fascinated by algorithmic enforcement systems and their impact on different rights, such as freedom of expression. Currently, the use of these systems often accepts an implicit high social cost of over-enforcement. For this reason, and many others, I believe it is essential to have a discussion about the ethics of digitalization that goes beyond experts' views and engages society at large.
I work as a lead AI engineer at Koe Koe Tech, and my boss recommended me for this research sprint. I find AI and content moderation important because they are shaping society and mass behaviour. As a researcher, I would like people to understand how online social interactions are being guided by algorithms and why it is important that platforms are properly regulated. We need to talk more about ethics because it is the basis on which law is made, and platforms without ethics are bound to cause problems in society.
I’m currently pursuing my PhD in Sociology at the University of Chicago after having studied computer science and worked in the ‘AI for social good’ space for a couple of years. For me, artificial intelligence is a manifestation of what has appeared as the natural trend towards an informational world, and it is through these functions of compression that we increasingly engage with our world. Therefore, as researchers, it’s important for us to critique, understand, and produce research that examines these systems of information, which will allow us to reimagine that which may appear as natural.
I am currently a visiting scholar with the Elliott School of International Affairs, George Washington University. The field of AI and content moderation is one that seems very promising for scholars, but also very important generally for the world, as so much of our communication happens online nowadays. A conversation about ethics helps tremendously in this respect, primarily because of its nature. It allows us to understand where we and our interlocutors stand, and gives us not just a great framework, but also a translation device to understand complex actions. What we must also guard against is the watering down of the concept, the misinterpretation of it as compliance, or the fetishization of its importance.
I am an Assistant Professor in Intellectual Property Law at the Tilburg Institute for Law, Technology, and Society (TILT) of Tilburg University, The Netherlands. The topic of platform governance and online service provider liability for user-generated content forms a core component of my ongoing research. I am particularly interested in exploring how the deployment of algorithmic content moderation systems could impact creativity and the promotion of dialogic interaction within the digital environment. Furthermore, I am curious to examine whether existing legislative, regulatory and policy frameworks on algorithmic content moderation could be calibrated in a manner that enables online platforms to flourish as open public spaces for robust and ethical social discourse.
I am a PhD candidate at the University of Brasilia (Brazil), and in October I am moving to Scotland to start my PhD in Law at the University of Glasgow.
What fascinates me the most about the field is its great influence on how we communicate, consume information, products and services and relate to each other daily on the Internet. Besides that, as a copyright specialist, content moderation for the purpose of copyright enforcement has been a matter of discussion at least in the last two decades. More recently, with the development of new algorithms and AI, it has gained even more relevance for us who work in this field.
I am a PhD candidate at the University of Hong Kong. I also serve as an Administrative Officer at Creative Commons Hong Kong. Trained in engineering and law, I focus my research interests on IP & IT law and innovation policy, particularly employing computational legal studies and data science. I have explored Chinese digital policies by contributing to the CyberBRICS Project, hosted by institutions across Brazil, Russia, India, China and South Africa, as well as to the Global Data Justice Project funded by the European Research Council (ERC). Current content moderation systems have shown far-reaching implications for behavioural transformation, and, observed through a bureaucratic lens, Internet regulations across jurisdictions are shaping the algorithm-based automatic mechanisms created by platforms.
I’m a DPhil (PhD) candidate at the Oxford Internet Institute, University of Oxford, and a Research Associate at the Alan Turing Institute. For me, content moderation is the thin end of the wedge when it comes to big tech’s use of AI. I see part of my mission as demystifying and demythologising new technology like AI, the rhetoric around which can often lead us to focus on theoretical problems arising 50 years from now, when in reality we should be thinking about the next 5 years, as dangers like automated facial recognition and the algorithmically powered spread of harmful content online pose increasing risks to human rights and democratic discourse.
Hannah Bloch-Wehba
I am a law professor at Texas A&M University, where I study, teach, and write about law and technology. Currently, I'm particularly interested in how the promise of "AI" can be used to conceal platform power and obscure relationships with law enforcement. I'm taking part in the sprint because I'm excited to work with an international, interdisciplinary group of scholars neck deep in debates about digital rights and values. My goal as a researcher is to shed new light on the challenges technology poses for democratic processes, institutions, rights, and values.
I am a PhD candidate at the Hertie School, where I am affiliated with the Centre for Digital Governance. In my dissertation project I apply methods from the interdisciplinary fields of computational social science and social data science to better understand the impact of social platforms on democracy, in particular on political campaigning and democratic elections. The current implementation of content moderation and algorithmic filtering systems is a pivotal puzzle piece in understanding how policy makers can effectively regulate harmful and illegal behaviour on social platforms while limiting possible negative effects on democratic values such as liberty, equality and diversity.
I am a doctoral candidate at the Center for Information and Communication Technologies & Society (ICT&S) at the Department of Communication Studies at the University of Salzburg in Austria. I’m interested in the architecture, algorithms and affordances of online platforms, particularly the social effects of recommendation systems and content moderation. The legal scholar Daphne Keller says: “no communications medium in human history has ever worked in this way”. Automation technologies and artificial intelligence are very likely to increase their influence on online life. Young people coming of age in 2020 don’t really have the option to opt out, so we should advocate for a humane approach to technological change. As researchers, our power lies in giving voice to users, be that through empirical research or policy work.
Fellows of the 1st research sprint (17 August – 25 October 2020)
PhD candidate at the University of Brasilia, Brazil (University of Glasgow, United Kingdom, from October 2020)
Wayne Wei Wang
The international and interdisciplinary research project is a joint initiative of the Global Network of Internet and Society Research Centers (NoC).
On our dossier page on the Ethics of Digitalisation you will find articles, videos and other content on the topic.
Federal President Frank-Walter Steinmeier is hosting the launch of the international research project "Ethics of Digitalisation" at Bellevue Palace. According to the Office of the Federal President, the launch conference on August 17 will focus on ethical questions of digitalisation, for example in the functioning of artificial intelligence and algorithms.
In the 3sat programme Scobel, Research Director Prof. Wolfgang Schulz speaks on the topic "Ethics for the Digital" (from minute 42:00) about research conducted in the course of the project "Ethics of Digitalisation".
Under the patronage of Federal President Frank-Walter Steinmeier, the project "Ethics of Digitalisation" has been launched. One of the research directors, Prof. Wolfgang Schulz, talks on WDR about digital communication as an opportunity and a challenge.
Alexander Pirang, Researcher: AI & Society Lab
Friederike Stock, Student Assistant: Ethics of Digitalisation | NoC
Matthias C. Kettemann, PD, Mag., Dr., LL.M., Associated Researcher: Leibniz-Institut für Medienforschung | Hans-Bredow-Institut
Nadine Birner, Coordinator: Ethics of Digitalisation | NoC
Wolfgang Schulz, Prof. Dr., Research Director