16 January 2018 | doi: 10.5281/zenodo.1148245

Can we trust the black box?

Do we really make our lives easier when we let algorithms take decisions for us? Big data-driven models of our society often mirror our own biases, and their decision-making processes are hard to trace. It is no coincidence that we speak of a "black box" to characterise artificial intelligence. So whom can we hold responsible when something goes wrong: human or machine? Which ethical challenges do we have to face, and how do we protect our data?

Algorithmic representation of society and decision-making

The use of big data analytics creates new, algorithm-generated representations of society, which predict future collective behaviour and are used to adopt general strategies on a large scale. These strategies are then applied to specific individuals because they belong to one or more of the groups generated by analytics.

These decision-making processes based on algorithmic representations managed by AI are characterised by complexity and obscurity, which may hide potential internal biases. Moreover, these processes are usually affected by a lack of participation. In this sense, the image of the black box is frequently associated with AI and its applications.


Finally, it is worth noting that in many cases algorithmic decision-making systems are not fully automated, but decision support systems in which the final decision is taken by a human being. This raises further concerns regarding the role of human intervention in algorithm-supported decisions. In this context, the presumed objectivity of algorithms, combined with the fact that the decision-maker is often a subordinate within a given organisation, raises critical issues with regard to the role of human decision-makers and their freedom of choice.

The supposedly reliable nature of these mathematics-based tools leads those who take decisions on the basis of algorithmic results to trust the picture of individuals and society that analytics suggest. Moreover, this attitude may be reinforced by the risk of potential sanctions for taking decisions that ignore the results provided by analytics.

In this regard, the Guidelines on the Protection of Individuals with Regard to the Processing of Personal Data in a World of Big Data recently adopted by the Council of Europe (1) state that “the use of Big Data should preserve the autonomy of human intervention in the decision-making process”. This autonomy also encompasses the freedom of decision-makers not to rely on the recommendations provided by big data applications.

Paradigm change in the risk dimension: from the individual to the collective

AI is designed to assist or (less frequently) to make decisions that affect a plurality of individuals in various fields. To this end, AI solutions use huge amounts of data. Thus, data processing no longer concerns only the individual dimension; it also affects the collective interests of the persons whose data is being collected, analysed and grouped for decision-making purposes.

Moreover, this collective dimension of data processing does not necessarily concern facts or information referring to a specific person. Nor does it concern clusters of individuals that can be considered groups in the sociological sense of the term. Society is divided into groups characterised by a variable geometry, which are shaped by algorithms.

The consequences of classifying people into these groups and of developing AI-based predictive models raise questions that go beyond the individual dimension. They mainly concern the ethical and social impacts of data use in decision-making processes.

In this sense, it is necessary to take into account the broader consequences of data use in our algorithmic society. It is also important to create awareness of the potential societal consequences of AI applications and to provide adequate remedies to prevent negative outcomes.

Values and responsible innovation: the Virt-EU approach

In order to properly address the challenges posed by the algorithmic representation and governance of our society, it is important to point out that new AI applications do not entail an opposition between human and machine decision-making. In algorithm-based decisions, no choice is made by the machine alone; it is significantly driven and shaped by the human beings behind and beside the machine (i.e. AI developers). The values and the representation of society that these persons have in mind therefore influence the development of algorithms and their applications.

In this context, ex post remedies can be adopted, such as the much-debated proposals to increase the transparency of algorithms at a technical level, to introduce auditing procedures for algorithms, or to create new rights (e.g. a right to explanation). Nevertheless, these remedies may have limited impact in driving AI development towards a more socially oriented approach. Transparency and audits may be difficult to achieve, given conflicting interests (e.g. IP protection) and the risk of merely formal audits, which are not taken into account by data subjects and do not increase users' awareness.

For these reasons, it may be useful to adopt a different approach based on a prior assessment of proposed AI-based solutions, steering them towards socially and ethically acceptable goals from the early stages of their development. In this light, it is important to create tools enabling assessment procedures that align the social and ethical values embedded in AI solutions with those of the society in which these solutions are adopted. The main challenge of this approach concerns the definition of the values that should underpin this use of data.

The PESIA model

Since it is not possible to regulate AI per se, due to the broad and varied nature of this field, we can only imagine regulations for specific sectors or given AI applications. Nonetheless, this may be quite a long process, and detailed provisions may quickly become outdated.

For these reasons, in the meantime, it would be useful to develop a general self-assessment model that provides AI developers with a more flexible tool to identify and address the potential societal challenges of AI use. Like other assessment tools developed over the years, such a tool can help make human decisions accountable with regard to the use of AI solutions.

Nevertheless, ethical and social issues are more complicated than other issues already addressed by computer scientists and legal scholars (e.g. data security), since societal values are necessarily context-based. They differ from one community to another, making it hard to pinpoint the benchmark to adopt for this kind of risk assessment.

This point is clearly addressed in the Guidelines on Big Data recently adopted by the Council of Europe, which recognise the relative nature of social and ethical values. In this sense, the Guidelines require that data usage is not in conflict with the “ethical values commonly accepted in the relevant community or communities and should not prejudice societal interests, values and norms”.

Since it is not possible to adopt a general and uniform set of values, it is necessary to develop a modular and scalable model. This approach has been pointed out in the Guidelines adopted by the Council of Europe and is now further elaborated in the H2020 Virt-EU project.

In this sense, the Virt-EU project aims to create a new procedural tool (the Privacy, Ethical and Social Impact Assessment, PESIA) to facilitate a socially oriented development of technology. The PESIA model is based on an "architecture of values" articulated on different levels. This architecture preserves a uniform baseline of common values and, at the same time, remains open to the peculiarities and demands of each community, as well as to the specific questions posed by the societal impact of each given data processing operation.

For these reasons, the PESIA model is articulated in three layers of values. The first is represented by the common ethical values recognised by international charters of human rights and fundamental freedoms. This common ground can be further defined through an analysis of the decisions concerning data processing adopted by the European courts (the European Court of Justice and the European Court of Human Rights).

The second layer takes into account the context-dependent nature of the social and ethical assessment and focuses on the values and social interests of given communities. Identifying these values may be difficult, since they are not codified in specific documents. The solution adopted in the Virt-EU project therefore consists in analysing various legal sources that can provide a picture of the values characterising the use of data in a given society.

Finally, the third layer of this architecture consists of a more specific set of values that can be provided by ad hoc committees with regard to each specific data processing application. These PESIA committees will act on the model of ethics committees, which already exist in practice and are increasingly involved in assessing the implications of data processing. They should identify the specific ethical values to be safeguarded with regard to a given use of data in the AI context, providing more detailed guidance for risk assessment.
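To make this layered architecture more tangible, the sketch below represents the three PESIA value layers as a simple data structure. It is a purely illustrative assumption in Python: the class name, fields and example values are hypothetical and are not part of the actual PESIA tool developed in the Virt-EU project.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PESIAValueLayers:
    """Hypothetical sketch of the three PESIA value layers (illustrative only)."""

    # Layer 1: common ethical values recognised by international charters of
    # human rights, refined through the case law of the European courts.
    common_values: List[str] = field(default_factory=lambda: [
        "human dignity",
        "non-discrimination",
        "protection of personal data",
    ])
    # Layer 2: context-dependent values and social interests of a given community.
    community_values: List[str] = field(default_factory=list)
    # Layer 3: application-specific values identified by an ad hoc PESIA committee.
    committee_values: List[str] = field(default_factory=list)

    def checklist(self) -> List[str]:
        """Combine all three layers into one assessment checklist."""
        return self.common_values + self.community_values + self.committee_values


# Example: tailoring the assessment for a specific AI-based data processing operation.
assessment = PESIAValueLayers(
    community_values=["solidarity in the use of public health data"],
    committee_values=["transparency of risk scores towards affected patients"],
)
for value in assessment.checklist():
    print(value)
```

A real assessment would of course rest on deliberation by the PESIA committee rather than a fixed checklist; the sketch only shows how a uniform baseline of common values can be combined with community- and application-specific ones.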


(1) The author of this article was involved in drafting the Guidelines as a consulting expert.


Alessandro Mantelero is Associate Professor of Private Law at the Polytechnic University of Turin, where he works at the Department of Management and Production Engineering – Polito Law & Technology Research Group. Until April 2017 he was Director of Privacy at the NEXA Center for Internet & Society.


This article is part of a dossier on algorithmic decisions and human rights. Would you like to publish an article as part of this series? Then send us an email with your topic proposal.

This article reflects the opinion of the authors and neither necessarily nor exclusively that of the institute. For more information about the content of these articles and the associated research projects, please contact info@hiig.de

Alessandro Mantelero
