Normative Challenges of Risk Regulation of Artificial Intelligence

Authors: Orwat, C., Bareis, J., Folberth, A., Jahnel, J., & Wadephul, C.
Published in: Nanoethics, 18(11), 1-29
Year: 2024
Type: Academic article
DOI: 10.1007/s11569-024-00454-9

Approaches aimed at regulating artificial intelligence (AI) include a particular form of risk regulation, i.e. a risk-based approach. The most prominent example is the European Union’s Artificial Intelligence Act (AI Act). This article addresses the challenges for adequate risk regulation that arise primarily from the specific type of risks involved, i.e. risks to the protection of fundamental rights and fundamental societal values. This is mainly due to the normative ambiguity of such rights and societal values when attempts are made to select, interpret, specify or operationalise them for the purposes of risk assessments and risk mitigation. This is exemplified by (1) human dignity, (2) informational self-determination, data protection and privacy, (3) anti-discrimination, fairness and justice, and (4) the common good. Normative ambiguities require normative choices, which are assigned to different actors under the regime of the AI Act. Particularly critical normative choices include selecting normative concepts by which to operationalise and specify risks, aggregating and quantifying risks (including the use of metrics), balancing value conflicts, setting levels of acceptable risks, and standardisation. To ensure that these normative choices do not lack democratic legitimacy and to avoid legal uncertainty, further political processes and scientific debates are suggested.

Connected HIIG researchers

Jascha Bareis

Associated researcher: The evolving digital society
  • Open Access
  • Transdisciplinary
  • Peer Reviewed
