14 November 2023

EU AI Act – Who fills the act with life?

The draft of the EU AI Act is currently being negotiated between the EU Parliament, the Council and the Commission in a trilogue. As it stands, the new regulation obliges operators of high-risk AI applications, among other things, to install a risk management system covering the entire life cycle of the AI application, to fulfil quality requirements for training and test data, to ensure documentation and record-keeping, and to guarantee transparency towards users. But who actually decides how and when these requirements are met? In other words: who fills the AI Act with life?

Who sets the standards for the AI Act?

Large parts of the regulation apply only to high-risk systems, i.e. primarily systems that pose a high risk to health or fundamental rights. Among them are, for example, systems used to evaluate employees or for law enforcement purposes. Providers of high-risk systems must ensure that their systems meet all requirements of the AI Act (conformity assessment, Art. 19).

Two procedures are provided for this (Art. 43): a purely internal assessment (Annex VI) or the involvement of a conformity assessment body (Annex VII). The Commission may additionally adopt implementing acts specifying the requirements for high-risk systems set out in Chapter 2.

Technical standards

Harmonised technical standards may be used for internal conformity assessments: if a system is successfully assessed against such a standard, it is deemed compliant. These standards translate the legislation into concrete, practical steps, and the practical implementation of the EU AI Act will largely depend on them. After all, relying on existing technical standards will be the easiest route for many companies; the alternative would be to demonstrate a system’s conformity by their own technical means, which would not only be more complex and thus more expensive, but also less legally certain.

In the EU, technical standards are developed by the standardisation bodies CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization), roughly the European equivalents of DIN. CEN and CENELEC already set up a joint technical committee on AI at the beginning of 2021, in which the user perspective is also represented via ANEC.

The conformity assessment bodies

AI systems can also be assessed by so-called notified conformity assessment bodies (notified bodies for short); the procedure for this is regulated in Annex VII. On the basis of the technical documentation, the notified bodies decide whether the tested system complies with the requirements of the AI Act.

Who can assess conformity as a notified body? 

These notified bodies do not necessarily have to be public authorities. Companies can also perform this task, but only as long as they meet the requirements of Art. 33 regarding organisational structure, independence, expertise and much more. To do so, they must be inspected by a national authority and officially appointed as a notified conformity assessment body. The authority authorised to make such a notification must in turn operate independently of the notified bodies it has appointed (Art. 30 Para. 3).

What scope for action do these notifying authorities have? 

The national authority responsible for appointing the individual notified conformity assessment bodies should normally also act as the national market surveillance authority. In this role, it has far-reaching powers: it can retrieve all training and test data and, if there are reasonable grounds, also request access to the source code of an AI system. The same applies to authorities and bodies that monitor compliance with the regulation’s fundamental-rights requirements. Such a market surveillance authority can impose severe fines of up to 6% of a company’s global annual turnover for breaches of the regulation.

How does cooperation between individual conformity assessment bodies work? 

The notified conformity assessment bodies should exchange information and coordinate with each other. To this end, the EU Commission coordinates groups in which notified bodies that test similar technologies (e.g. text processing, evaluation systems or speech recognition) exchange information. In particular, negative decisions on the conformity of certain systems must be shared with all notified bodies in the EU. This is intended to contribute to the uniformity of conformity assessments within the EU.

National implementation

It is not yet possible to foresee exactly what the implementation of the EU AI Act will look like at national level. In response to a parliamentary question, the Federal Government stated on 02.09.2022 that implementation of the regulation could only take place once the final version had been announced. Elsewhere in the same answer, however, it emerges that the Federal Government is not planning any significant involvement of the Länder or municipalities. The CDU/CSU parliamentary group, for its part, seems to expect a special role for the Federal Network Agency (Bundesnetzagentur), which, as a specialist authority for digital issues, could take a leading role here.

Conclusion

The question of who sets the standards for future high-risk systems can be answered on four levels. The cornerstone is laid by the legislative bodies of the European Union, ultimately the European Parliament. They determine which systems are to be classified as high-risk systems in the first place.

On the second level, the Commission specifies the requirements for AI systems by means of implementing acts. This can sometimes considerably reduce the leeway of the deciding authorities and notified bodies.

The third level is formed by the technical standards against which the internal conformity assessments are carried out. These translations of the legal provisions into technical instructions are issued by CEN and CENELEC.

The fourth level is the interplay between the notifying authority and the notified bodies. The latter make the actual decision as to whether a system meets the requirements of the regulation. At the same time, these bodies are appointed by the notifying authority, which first checks their independence and suitability.

The monitoring and certification system provided for in the current version of the EU AI Act is reminiscent of financial auditing. Audit firms, too, are profit-oriented companies organised under private law that certify the audited company’s conformity with legal requirements. Among other things, this “private” supervision is blamed for the Wirecard scandal. To minimise the influence of audit firms’ profit interests, a separation of consulting and auditing is demanded, among other measures; in addition, companies must change their audit firm every ten years. Comparable rules are lacking in the EU AI Act. There is therefore a risk of economic dependence, or at least of decisions being influenced for economic reasons.

References

Bundestagsdrucksache 20/3284 (bundestag.de)

Bundestagsdrucksache 20/2984 (bundestag.de)

Klausa, Torben: Gutachten zu deutscher DSA-Aufsicht: Unabhängigkeit tut Not, in: Tagesspiegel Background (last access: 26.09.2023)

Storbeck, Olaf: EY and Wirecard: anatomy of a flawed audit, in: Financial Times (ft.com) (last access: 26.09.2023)

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Vincent Hofmann

Researcher: AI & Society Lab
