Making Technology Great Again: How to Use Ethics to Save Digitalisation
Can your refrigerator order milk for you but refuse to give you a second ice cream? Should your self-driving car steer you into a tree rather than run over a careless road user? Are self-learning systems allowed to make decisions that even their coders can no longer explain? And can the answers only be “yes” or “no”? This note, based on the author’s talk at the opening of the “The Ethics of Digitalisation” project, explains why ethical decisions are the basis for answering these questions.
Scientists have the privilege to ask questions. For example: Are self-learning systems allowed to make decisions that even their coders can no longer explain? And if so, why? Who decides these questions? Through which procedures?
Yes, no, don’t know – these simple answers do not satisfy us. We want explanations. We deserve explanations. We have a right to explanations.
This is where the project “The Ethics of Digitalisation”, which begins today with this event, comes in, and I am very pleased to be able to help shape its content.
We know: Even if not every current social development can be traced back to “digitalisation,” our society is changing with the digital transformation. However, it would be simplistic to look only at the one-sided effect of digitalisation on the legal and social order. Just as the digital does something to us, we do something to the digital. Or we should. And that is what this project is all about: the normative shaping of digitalisation by law, which must be preceded by an intensive examination of the ethics of digitalisation.
If law is coagulated politics, then ethical questions are the fire that burns under the cauldron of the political. Ethics is the meta-narrative of law, and with its help we can show where the law’s adjusting screws need to be tightened and where legal reins need to be loosened.
From a European perspective in particular, it seems crucial that decision-making power in the relationship between technology and law be understood in a more complex way, especially with regard to law’s freedom-securing effect.
The rule of law does not pause just because we are online.
When social platforms delete calls for more democracy but leave calls for violence online, when fact-checks on climate change are withdrawn for fear of censorship accusations, when search engines no longer list information – this is the exercise of power. Power that is dressed up technically and also stabilised by internal norms based on domestic law.
Of course, platforms and search engines also have rights, but the further their exercise strays from the purpose of protecting basic rights, the more likely it is to be restricted – for example with regard to the rights of users and the social role and impact of private communication actors.
Power is even more hidden from us in the pre-programmed usage characteristics of technology, the misleading designs, the “dark patterns”.
We must therefore defend freedom against new dangers: no longer primarily against the state, at least not within most democracies (though there are exceptions, even within Europe), but rather and especially against private actors who provide the means and channels of communication.
To do this, however, we need a different, new set of instruments to protect freedom. German courts have developed a state-equivalent obligation for private actors to grant basic rights when their services are essential for basic communication.
It is ironic (but nevertheless right and important) that attempts – admittedly imperfect – to push back the power of platforms and create more freedom in Germany and France are criticised as a state attack on this very freedom.
What is necessary? What can Europe do? As the right to be forgotten and, most recently, the General Data Protection Regulation have shown, Europe can position itself as a source of normative insights and regulatory exports. Germany, for example, contributed as co-champion to the reform process of the digital cooperation infrastructure, upon invitation of the UN Secretary-General, and undertook multiple multistakeholder consultations to gather the world’s opinion, including and especially that of non-state actors. Such a process, performed admirably for internet governance, is still missing in all other fields of technology governance.
That is what we want to achieve in this project: differentiated thinking about the relationship between power and law in technology – and clear answers. In the next six months, Germany not only holds the EU Council Presidency and sits on the UN Security Council and the UN Human Rights Council in Geneva, but also presents the UN Secretary-General with the above-mentioned options for reforming the cooperation architecture of the Internet.
We should make use of this happy constellation and not only focus on the normative power of the factual, or even freeze in front of it. Let us also reflect on the factual power of the normative – the effect of norms, especially when they are ethically stabilised. Then we can rightly and justifiably shape socio-technical change in a people-centred and development-oriented manner.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact email@example.com.