Civil Society and AI: Striving for ethical governance
The involvement of civil society has been identified by a variety of state and non-state actors as key to ensuring ethical and equitable approaches to the governance of AI. Civil society has the potential to hold organisations and institutions accountable, to advocate for marginalised voices, to spearhead ethically sound applications of AI, and to mediate between different perspectives (Sanchez, 2021). Yet despite proclaimed ambitions and visible potential, civil society actors face great challenges in actively engaging in the governance of AI.
Involving civil society actors is fundamental to the human-centric development and deployment of Artificial Intelligence (AI), proclaims the German government in its Update to the National Artificial Intelligence Strategy (NAIS), released on December 20, 2020. This call for the involvement of civil society acknowledges its growing role in the governance of AI. Independent actors, such as the watchdog organisation AlgorithmWatch or the Gesellschaft für Informatik (German Informatics Society), address topics ranging from monitoring Instagram’s newsfeed algorithm to the AI auditing project ExamAI.
Despite proclaimed national ambitions aimed at the human-centric development of AI through the involvement of civil society, researchers at the Stiftung Neue Verantwortung (SNV) find that “European civil society organisations that study and address the social, political and ethical challenges of AI are not sufficiently consulted and struggle to have an impact on the policy debate” (Beining et al., 2020: p. 1). The HIIG Discussion Paper Towards Civil Strategization of AI in Germany explores this stark discrepancy between lofty ambitions and the reality of policy-making through the lens of the NAIS. The following paragraphs provide a look into the core themes and findings of this study on civil society and AI.
Civility in the governance of AI
The involvement of civil society in the governance of AI has been identified as crucial by a wide range of state and non-state actors. The World Economic Forum (WEF) sees the involvement of civil society actors as key to ensuring ethical and equitable approaches to AI for the benefit of the common good. As watchdogs, they hold the power to move beyond mere principles of AI ethics towards holding organisations accountable. As advocates, they enable marginalised voices and communities to participate in the governance of AI. By making use of AI technologies, they can spearhead AI applications for the common good. As intermediaries, they can mediate between diverse sets of voices and perspectives.
Civil society is in a unique position to put critical topics on the governance agenda that economic and state actors might not be aware of, especially in contexts that proclaim ethical, human-centric, or for-the-common-good approaches to AI. Why, then, is there such a great discrepancy between identified opportunities, proclaimed ambitions, and the reality of civil society participation in AI governance?
Algorithmic civil society in Germany
Many issues faced by civil society in the governance of AI are not new but rather rooted in historical, sociopolitical conceptions of the role of civil society vis-à-vis the state. In Germany, the state is envisioned as enabling the participation of a self-activating civil society (Strachwitz et al., 2020).
Unlike other countries, where AI tends to be treated as an independent subject, in Germany its regulation is largely seen as a subtopic of greater questions of digitalisation. As such, it is not only governments, corporations, and academia, but also civil society actors that address questions of AI through the broader lens of digitalisation. This is commonly referred to as the digital civil society.
The digital civil society ecosystem in Germany is strongly interwoven. Organisations such as the Gesellschaft für Informatik (German Informatics Society), the Bertelsmann Stiftung, the Stiftung Neue Verantwortung, AlgorithmWatch, and the iRights.lab frequently collaborate on a variety of projects. One example is Algo.rules, a joint project and study by the iRights.lab and the Bertelsmann Stiftung’s Ethik der Algorithmen (Ethics of Algorithms) initiative, which outlines a set of standards for the ethical design of algorithmic systems.
These actors are not only active on the national level but are also spearheading European initiatives. On November 30, 2021, AlgorithmWatch, for instance, was at the forefront of a group of 119 civil society organisations under the umbrella of the European Digital Rights (EDRi) association. This consortium released a collective statement calling upon the European Union to put fundamental rights at the forefront of the European Artificial Intelligence Act (EAIA). Despite this strong entanglement, these organisations are far from presenting a unified view; rather, they share a diverse yet common critical stance towards AI and digitalisation more broadly.
Strategizing AI: Lofty ambitions, faulty procedures, lacking expertise
On December 20, 2020, the German government released the latest Update to its National Artificial Intelligence Strategy (NAIS) in a concerted effort by the three leading ministries: the Federal Ministry of Education and Research (BMBF), the Federal Ministry for Economic Affairs and Energy (BMWi), and the Federal Ministry of Labour and Social Affairs (BMAS). The NAIS is the result of a participatory policy-making process spanning online consultations and expert hearings with representatives from government, the private sector, academia, and civil society.
This consultative process faced several challenges, which are reflected in both the resulting policy documents and the feedback of involved civil society actors. Among these challenges were:
- a lack of systematic approaches towards participatory governance processes;
- a disregard for inviting relevant civil society actors in favour of public, private, and academic actors;
- unfolding and continuing interministerial competition;
- a hardening of individual argumentative positions;
- and above all, a lack of expertise across the board.
While these issues challenged the participatory governance effort, there was an evolution throughout the process. The original NAIS, released on November 15, 2018, barely touched upon any critical topics of civil society and AI, beyond a reference to the development of AI for the common good (which is never clearly defined in any of the policy documents). In contrast, the Update to the NAIS, which involved a broader range of civil society organisations throughout the consultative process, addressed concrete topics such as human-centric AI, curbing the effects of automation on labour, and environmental protection. In addition, the policy document identifies the involvement of civil society actors as key to addressing these questions.
This certainly points towards greater involvement of civil society and its concerns throughout the unfolding policy-making process. As one involved representative pointed out, though:
“Overall the focus lies on things such as the AI competence centres, which help to bring AI applications to corporations. It is less centred on how to use potentials for the common good or how to use regulatory tools that can aid corporations in implementing ethical and societal visions in the development of AI. […] To summarise, civil society concerns are mainly found in the headlines of the AI strategy.”
By merely referencing human-centred AI in the headlines of policy documents, these documents refrain from deeper critical engagement with what this means in concrete terms. This is further illustrated by the fact that any concrete measures backed by the allocation of resources were negotiated behind the closed doors of interministerial negotiations. Despite the lofty proclamations of inclusive policy-making, this underlines the black-boxing not only of the technology but also of its governance.
Where to now?
Many of the hurdles faced by civil society in the governance of AI are not particular to this technology but rather reflect existing systemic issues in the organisation of participatory governance processes. The rapid development of AI amid larger digital transformations multiplies the negative effects of inadequate governance, a disregard for equal and equitable representation, and a lack of expertise in decision-making.
This poses fundamental questions for participatory governance processes, including: How valuable is a participatory process that serves mainly as a knowledge-making exercise but lacks any formal decision-making power? How democratic are these processes when the actual allocation of resources is decided behind closed doors? Why, given existing experience and research on participatory governance, are these processes still poorly designed? Do they support the envisioned enabling function of the state vis-à-vis civil society?
The lack of expertise among all involved actors further raises the question of what counts as expertise. Is a technical understanding of AI fundamental? An understanding of societal effects? An understanding of policy-making processes? And what about the usually unheard expertise of often already marginalised people who are most affected by the deployment of AI systems?
These and other fundamental questions related to governance processes at the interface between government, research, industry, civil society and AI are addressed by the HIIG’s AI & Society Lab.
Beining, L., Bihr, P., & Heumann, S. (2020). Towards a European AI & Society Ecosystem. Stiftung Neue Verantwortung.
Sanchez, C. (2021, July). Civil society can help ensure AI benefits us all. Here’s how. World Economic Forum. https://www.weforum.org/agenda/2021/07/civil-society-help-ai-benefits/
Strachwitz, R. G., Priller, E., & Triebe, B. (2020). Handbuch Zivilgesellschaft (Sonderausgabe für die Bundeszentrale für Politische Bildung). Bpb.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact firstname.lastname@example.org.