16 January 2018 | doi: 10.5281/zenodo.1148245

Can a black box be trusted?

Does our life really improve with algorithmic decision-making? Models built on big data often reflect the very biases of our society. In some cases it can also be hard to look inside these “black boxes”. Who can be held accountable when something goes wrong: the human or the machine? What ethical challenges are we facing, and how can we protect our data?

Algorithmic representation of society and decision-making

The use of big data analytics creates new, algorithmically generated representations of society, which predict future collective behaviour and are used to adopt general strategies on a large scale. These strategies are then applied to specific individuals because they belong to one or more of the groups generated by analytics.

These decision-making processes, based on algorithmic representations managed by AI, are characterised by complexity and opacity, which may hide potential internal biases. Moreover, these processes usually suffer from a lack of participation. In this sense, the image of the black box is frequently associated with AI and its applications.


Finally, it is worth noting that in many cases algorithmic decision-making systems are not fully automated, but are decision support systems in which the final decision is taken by a human being. This raises further concerns about the role of human intervention in algorithm-supported decisions. In this context, the presumed objectivity of algorithms, combined with the fact that the decision-maker is often a subordinate within a given organisation, raises critical issues regarding the role of human decision-makers and their freedom of choice.

The supposedly reliable nature of these mathematics-based tools leads those who take decisions on the basis of algorithmic results to trust the picture of individuals and society that analytics suggest. This attitude may be reinforced by the risk of sanctions for taking decisions that ignore the results provided by analytics.

In this regard, the Guidelines on the Protection of Individuals with Regard to the Processing of Personal Data in a World of Big Data recently adopted by the Council of Europe (1) state that “the use of Big Data should preserve the autonomy of human intervention in the decision-making process”. This autonomy also encompasses the freedom of decision-makers not to rely on the recommendations provided by big data applications.

Paradigm change in the risk dimension: from the individual to the collective

AI is designed to assist in, or (less frequently) to make, decisions that affect a plurality of individuals in various fields. To this end, AI solutions use huge amounts of data. Data processing therefore no longer concerns only the individual dimension, but also the collective interests of the persons whose data are being collected, analysed and grouped for decision-making purposes.

Moreover, this collective dimension of data processing does not necessarily concern facts or information referring to a specific person, nor clusters of individuals that can be considered groups in the sociological sense of the term. Rather, society is divided into groups of variable geometry, shaped by algorithms.

Classifying people according to these groups and developing AI-based predictive models raise questions that go beyond the individual dimension. These questions mainly concern the ethical and social impacts of data use in decision-making processes.

In this sense, it is necessary to take into account the broader consequences of data use in our algorithmic society. It is also important to raise awareness of the potential societal consequences of AI applications and to provide adequate remedies to prevent negative outcomes.

Values and responsible innovation: the Virt-EU approach

In order to properly address the challenges posed by the algorithmic representation and governance of our society, it is important to point out that the new AI applications do not entail a contrast between human decision-making and machine decision-making. In algorithm-based decisions, no choice is made by the machine alone; every choice is significantly driven and shaped by the human beings behind and beside the machine (i.e. AI developers). The values and the representation of society that these persons have in mind therefore shape the development of algorithms and their applications.

In this context, ex post remedies can be adopted, such as the much-debated solutions of increasing the transparency of algorithms at a technical level, introducing auditing procedures for algorithms, or creating new rights (e.g. a right to explanation). Nevertheless, the impact of these remedies in steering AI development towards a more socially oriented approach may be limited. Transparency and audits may be difficult to achieve, given conflicting interests (e.g. IP protection) and the risk of merely formal audits, which are ignored by data subjects and do not increase users’ awareness.

For these reasons, it may be useful to adopt a different approach based on a prior assessment of proposed AI-based solutions, steering them towards socially and ethically acceptable goals from the early stages of their development. In this light, it is important to create tools that enable assessment procedures aligning the social and ethical values embedded in AI solutions with those of the society in which these solutions are adopted. From this perspective, the main challenge of this approach lies in defining the values that should underpin this use of data.

The PESIA model

Since it is not possible to regulate AI per se, due to the broad and varied nature of the field, we can only envisage regulations for specific sectors or given AI applications. Nonetheless, this may be quite a long process, and detailed provisions may quickly become outdated.

For these reasons, in the meantime, it would be useful to develop a general self-assessment model that gives AI developers a more flexible tool to identify and address the potential societal challenges of AI use. This tool, like other assessment tools developed over the years, can help make human decisions about the use of AI solutions accountable.

Nevertheless, ethical and social issues are more complicated than other issues already addressed by computer scientists and legal scholars (e.g. data security), since societal values are necessarily context-based. They differ from one community to another, making it hard to pinpoint the benchmark to adopt for this kind of risk assessment.

This point is clearly addressed in the Guidelines on Big Data recently adopted by the Council of Europe, which recognise the relative nature of social and ethical values. In this sense, the Guidelines require that data usage is not in conflict with the “ethical values commonly accepted in the relevant community or communities and should not prejudice societal interests, values and norms”.

Since it is not possible to adopt a general and uniform set of values, it is necessary to develop a modular and scalable model. This approach has been pointed out in the Guidelines adopted by the Council of Europe and is now further elaborated in the H2020 Virt-EU project.

In this sense, the Virt-EU project aims to create a new procedural tool (the Privacy, Ethical and Social Impact Assessment – PESIA) to facilitate a socially oriented development of technology. The PESIA model is based on an “architecture of values” articulated on different levels. This architecture preserves a uniform baseline of common values while remaining open to the peculiarities and demands of each community and addressing the specific questions posed by the societal impact of each given data processing operation.

For these reasons, the PESIA model is articulated in three layers of values. The first consists of the common ethical values recognised by international charters of human rights and fundamental freedoms. This common ground can be further specified on the basis of the case law on data processing of the European courts (the European Court of Justice and the European Court of Human Rights).

The second layer takes into account the context-dependent nature of the social and ethical assessment and focuses on the values and social interests of given communities. Identifying these values may be difficult, since they are not codified in specific documents. The solution adopted in the Virt-EU project is therefore to analyse several legal sources that may provide a representation of the values characterising the use of data in a given society.

Finally, the third layer of this architecture consists of a more specific set of values provided by ad hoc committees for each specific data processing application. These PESIA committees will act on the model of the ethics committees that already exist in practice and are increasingly involved in assessing the implications of data processing. They should identify the specific ethical values to be safeguarded in a given use of data in the AI context, providing more detailed guidance for the risk assessment.
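To make this layered architecture easier to picture, the sketch below models the three layers of values as a simple data structure. It is a minimal, purely illustrative example in Python: the class, the field names and the sample entries are assumptions made for this post and are not part of any official PESIA specification.

```python
from dataclasses import dataclass, field


@dataclass
class ValueLayer:
    """One layer in the PESIA 'architecture of values' (illustrative only)."""
    name: str
    source: str                      # where the values of this layer come from
    values: list[str] = field(default_factory=list)


def build_pesia_layers() -> list[ValueLayer]:
    """Return the three layers described above, from most general to most specific."""
    return [
        ValueLayer(
            name="Common ethical values",
            source="International human rights charters; ECJ and ECtHR case law on data processing",
            values=["human dignity", "non-discrimination", "data protection"],
        ),
        ValueLayer(
            name="Community values",
            source="Legal sources reflecting how a given society regards the use of data",
            values=["context-dependent social interests and norms"],
        ),
        ValueLayer(
            name="Application-specific values",
            source="Ad hoc PESIA committee for the specific data processing application",
            values=["detailed guidance tailored to the given AI application"],
        ),
    ]


if __name__ == "__main__":
    # Print the architecture from the uniform baseline down to the most specific layer.
    for layer in build_pesia_layers():
        print(f"{layer.name} (source: {layer.source})")
        for value in layer.values:
            print(f"  - {value}")
```

In an actual assessment, the entries in each layer would be filled in from the sources the article describes: court decisions for the first layer, national and community-level legal sources for the second, and the ad hoc PESIA committee for the third.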


(1) The author of this contribution had the privilege of being appointed as a consultant expert in the drafting of the Guidelines.


Alessandro Mantelero is Aggregate Professor of Private Law at Polytechnic University of Turin, Department of Management and Production Engineering – Polito Law & Technology Research Group. He was Director of Privacy at the NEXA-Center for Internet & Society until April 2017.


The article above is part of a series on algorithmic decisions and human rights. If you are interested in submitting an article yourself, send us an email with your suggestions.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

