[Image: a transparent umbrella seen from above, symbolising the AI Transparency Cycle]
24 March 2023

The AI Transparency Cycle

Why AI transparency?

AI is omnipresent and invisible at the same time. Do you notice every time you interact with an algorithm? What data is being collected and processed while you casually scroll through social media or browse products on retail websites? Privacy statements by platform providers promise full transparency, but what does this even mean and what is the underlying goal?

The devil’s in the details

Defining transparency has never been straightforward, and defining it in the context of AI systems is no exception. Transparency, in a broad sense, is what one can perceive and comprehend, and what lets one act in light of that knowledge. Considering that big tech companies’ privacy statements often span well beyond 10,000 words while aiming to inform users about their intentions and protective rights, the effectiveness of the transparency measures in place appears questionable. Do you understand, for example, when you are interacting with an AI system, or why platforms recommend certain content to you? Even if this information is available, it might not be transparent, since availability does not always equal attainability.

Metaphors of transparency

Research on the use of transparency as a metaphor in the context of non-governmental organisations and other political stakeholders reveals that we imply different ends of information sharing when we speak of transparency. Ball (2009) identified three: accountability, openness and efficiency. Openness is probably the most intuitive goal of transparency: it enforces transparency to create trust. For instance, trust arises when viewers can see what is disclosed and understand what is deliberately protected from others, e.g. to safeguard one’s privacy. This includes not only informed decision-making, but also knowing which questions to ask in the first place. Efficiency might be less intuitive as a goal of transparency, but it is nonetheless crucial for today’s complex societies. Only by knowing and understanding complex systems can we allow them to function efficiently, since we then do not need to question their workings each time we depend on them. Transparency is therefore also important for societal progress. Last, but not least, let’s look closely at accountability.


The third goal of transparency that is often recognised is accountability. Regarding AI systems, this refers to the question of who is responsible for each step in the development and application of machine learning algorithms. Mark Bovens, who researches public accountability, defined it “as a social relationship in which an actor feels an obligation to explain and to justify his or her conduct to some significant other” (Bovens, 2005). He identifies five characteristics of public accountability, namely 1. public access to accountability, 2. proactive explanation and justification of the actions, 3. addressing a specific audience, 4. an intrinsic motivation for accountability (in contrast to acting only on demand), and 5. the possibility of debate, including potential sanctions, in contrast to unsolicited monologues. The fourth characteristic in particular presents a challenge, considering the common perception of accountability as a tool for avoiding blame and legal ramifications. For accountability to be realised, practising diligent AI transparency is crucial, so that it does not turn “into a garbage can filled with good intentions, loosely defined concepts, and vague images of good governance” (Bovens, 2005).


Transparency is a constant process – not an everlasting fact. It must be viewed in its context and from the perspective of the stakeholders affected (Lee & Boynton, 2017). A large company providing transparency regarding its software to a governmental agency cannot give the same explanation and information to a user and expect transparency to be achieved. In a way, more transparency can lead to less transparency when an overwhelming quantity of information reaches the wrong recipient. Relevant factors for tailoring AI transparency measures include the necessary degree of transparency, the political or societal function of the system, the target group(s), and the specific function of transparency. At the core lies the need for informed decision-making.

AI Transparency is a Multi-Stakeholder Effort

In practice, transparency cannot be implemented by a single actor, but has to be applied at every step of the process. A data scientist is often not aware of ethical and legal risks; a legal counsel, for example, cannot spot those by reading through code. This becomes especially apparent in the case of unintended outcomes, calling not only for prior certification, but also for periodic auditing and possibilities of intervention for stakeholders at the end of the line. A frequent hurdle for clearer transparency standards in this area arises from the conflict between the protection of business secrets and the need to access source code for auditing purposes.

The ‘AI Transparency Cycle’ (see graphic above) provides an overview of how the many dimensions of AI development and deployment, and their ever-changing nature, could be modelled, and serves as a roadmap for solving the transparency conundrum. It is important not to interpret the cycle as a chronological step-by-step manual, but rather as a continuous, self-improving feedback process in which development, validation, interventions, and education by the actors involved happen in parallel.


Ball, C. (2009). What is Transparency? Public Integrity, 11, 293–308.

Bovens, M. (2005). The Concept of Public Accountability. In Ferlie, E., Lynn Jr., L. E., & Pollitt, C. (Eds.), The Oxford Handbook of Public Management (p. 182). Oxford: Oxford University Press.

Lee, T., & Boynton, L. A. (2017). Conceptualizing transparency: Propositions for the integration of situational factors and stakeholders’ perspectives. Public Relations Inquiry, 6, 233–251.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact:

Theresa Züger, Dr.

Research Group Lead: Public Interest AI | AI & Society Lab

Daniel Pothmann

Project Assistant: Knowledge Transfer | Public Interest AI


