{"id":91998,"date":"2023-03-24T15:59:28","date_gmt":"2023-03-24T14:59:28","guid":{"rendered":"https:\/\/www.hiig.de\/?p=91998"},"modified":"2023-08-22T14:53:07","modified_gmt":"2023-08-22T12:53:07","slug":"ai-transparency","status":"publish","type":"post","link":"https:\/\/www.hiig.de\/en\/ai-transparency\/","title":{"rendered":"The AI Transparency Cycle"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\"><strong>Why AI transparency?<\/strong><\/h2>\n\n\n\n<p><strong>AI is omnipresent and invisible at the same time. Do you notice every time you interact with an algorithm? What data is being collected and processed while you casually scroll through social media or browse products on retail websites? Privacy statements by platform providers promise full transparency, but what does this even mean and what is the underlying goal?<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The devil\u2019s in the details<\/strong><\/h2>\n\n\n\n<p>Defining transparency has never been straightforward, and defining it in the context of AI systems is no exception. Transparency, in a broad sense, is what one can perceive and comprehend, and what lets one act in light of that knowledge. Considering that big tech companies\u2019 privacy statements span well beyond 10,000 words while aiming to inform users about their intentions and protective rights, the effectiveness of the transparency measures in place appears questionable. Do you understand, for example, when you interact with an AI system and why platforms recommend certain content to you? 
Even if this information is available, it might not be transparent, since availability does not always equal attainability.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Metaphors of transparency<\/strong><\/h2>\n\n\n\n<p>Research on the use of transparency as a metaphor in the context of non-governmental organisations and other political stakeholders (<a href=\"https:\/\/www.semanticscholar.org\/paper\/What-Is-Transparency-Ball\/614cc05af87fa2407373b3d33a5d6d01db1d84d7\" target=\"_blank\" rel=\"noreferrer noopener\">Ball, 2009<\/a>) reveals that by transparency we imply different ends of information sharing. Ball (2009) identified three: accountability, openness and efficiency. Openness is probably the most intuitive goal of transparency: it uses transparency to create trust, for instance by allowing viewers to see both what is shared and what is protected from others, e.g. to safeguard one\u2019s privacy. This includes not only informed decision-making, but also knowing which questions to ask in the first place. Efficiency might be less intuitive as a goal of transparency, but it is nonetheless crucial for today\u2019s complex societies. Only by knowing and understanding complex systems can we let them function efficiently, because we then do not need to question their workings each time we depend on them. Transparency is therefore also important for progress in societies. Last, but not least, let\u2019s look closely at accountability.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Accountability<\/strong><\/h2>\n\n\n\n<p>The third widely recognised goal of transparency is accountability. Regarding AI systems, this refers to the question of who is responsible for each step in the development and application of machine learning algorithms. 
Mark Bovens, who researches public accountability, defined it \u201cas a social relationship in which an actor feels an obligation to explain and to justify his or her conduct to some significant other\u201d (Bovens, 2005). He identifies five characteristics of public accountability, namely 1. public access to the account, 2. proactive explanation and justification of the actions, 3. addressing a specific audience, 4. an intrinsic motivation for accountability (in contrast to acting only on demand), and 5. the possibility of debate, including potential sanctions, in contrast to an unsolicited monologue. Characteristic four in particular presents a challenge, considering the common perception of accountability as a tool for avoiding blame and legal ramifications. For accountability to be realised, practising diligent AI transparency is crucial, so that it does not turn \u201cinto a garbage can filled with good intentions, loosely defined concepts, and vague images of good governance\u201d (<a href=\"https:\/\/www.scirp.org\/(S(i43dyn45teexjx455qlt3d2q))\/reference\/ReferencesPapers.aspx?ReferenceID=2591045\" target=\"_blank\" rel=\"noreferrer noopener\">Bovens, 2005<\/a>).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>One-size-does-not-fit-all<\/strong><\/h2>\n\n\n\n<p>Transparency is a constant process \u2013 not an everlasting fact. It must be viewed in its context and from the perspective of the stakeholders affected (<a href=\"https:\/\/www.semanticscholar.org\/paper\/Conceptualizing-transparency%3A-Propositions-for-the-Lee-Boynton\/c9b08f43622e3d11e123b77054b6ba41311d85e5\" target=\"_blank\" rel=\"noreferrer noopener\">Lee &amp; Boynton, 2017<\/a>). A large company that provides transparency about its software to a governmental agency cannot give the same explanation and information to a user and expect transparency to be achieved. In a way, more transparency can lead to less transparency when an overwhelming quantity of information is provided to the wrong recipient. 
Relevant factors for tailoring AI transparency measures include the necessary degree of transparency, the political or societal function of the system, the target group(s), and the specific function that transparency is meant to serve. At the core of it lies the need for informed decision-making.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>AI Transparency is a Multi-Stakeholder Effort<\/strong><\/h2>\n\n\n\n<p>In practice, transparency cannot be implemented by a single actor, but has to be applied at every step of the process. A data scientist is often not aware of ethical and legal risks, while a legal counsel, for example, cannot spot them by reading through code. This becomes especially apparent in the case of unintended outcomes, calling not only for prior certification, but also for periodic auditing and possibilities of intervention for stakeholders at the end of the line. A frequent hurdle for clearer transparency standards in this area arises from the conflict between the protection of business secrets and the need for access to source code for auditing purposes.<\/p>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/uB1g7E2s5bUUklJZb5iLalWENFdYdpHQrHNvJ6zMpWeulqDgEKVym-SCNcGUzceKJPrDOwf2w-qPLiO0f7Sq7Z9MS6Glt1kzA6gtTFrHK5vJl95n8F1k_lhmSgMLigE_eEdZh2w1xmxCuhv5fj4xbOk\" alt=\"AI Transparency Cycle\" width=\"840\" height=\"475\"\/><\/figure>\n\n\n\n<p>The \u2018AI Transparency Cycle\u2019 (see graphic above) provides an overview of how the many dimensions of AI development and deployment, and their ever-changing nature, could be modelled, and serves as a roadmap for solving the transparency conundrum. 
The cycle should not be read as a chronological step-by-step manual, but rather as a continuous, self-improving feedback process in which development, validation, intervention, and education by the actors involved happen in parallel.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>References<\/strong><\/h2>\n\n\n\n<p><a href=\"https:\/\/www.semanticscholar.org\/paper\/What-Is-Transparency-Ball\/614cc05af87fa2407373b3d33a5d6d01db1d84d7\" target=\"_blank\" rel=\"noreferrer noopener\">Ball, C. (2009). What is Transparency?. <em>Public Integrity, 11<\/em>, 293-308.<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/www.scirp.org\/(S(i43dyn45teexjx455qlt3d2q))\/reference\/ReferencesPapers.aspx?ReferenceID=2591045\" target=\"_blank\" rel=\"noreferrer noopener\">Bovens, M. (2005). The Concept of Public Accountability. In Ferlie, E., Lynn Jr., L. E., and Pollitt, C. (Eds.), <em>The Oxford Handbook of Public Management<\/em>, Oxford University Press, Oxford, 182.<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/www.semanticscholar.org\/paper\/Conceptualizing-transparency%3A-Propositions-for-the-Lee-Boynton\/c9b08f43622e3d11e123b77054b6ba41311d85e5\" target=\"_blank\" rel=\"noreferrer noopener\">Lee, T., and Boynton, L. A. (2017). Conceptualizing transparency: Propositions for the integration of situational factors and stakeholders\u2019 perspectives. 
<em>Public Relations Inquiry, 6<\/em>, 233-251.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A common notion of AI transparency is to either make code public or explain exactly how an algorithm makes a decision. 
Both ways sound plausible, but fail in practice.<\/p>\n","protected":false},"author":9999998,"featured_media":92474,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1289,227,1582],"tags":[1045,1049,1244],"class_list":["post-91998","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-everyday-life","category-ftif-ai-and-society","tag-datenschutz-2","tag-privatsphare","tag-why-ai-en"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The AI Transparency Cycle &#8211; Digital Society Blog<\/title>\n<meta name=\"description\" content=\"A common notion of AI transparency is to make code public or explain exactly how an algorithm makes a decision. But is that practicable?\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.hiig.de\/en\/ai-transparency\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The AI Transparency Cycle &#8211; Digital Society Blog\" \/>\n<meta property=\"og:description\" content=\"A common notion of AI transparency is to make code public or explain exactly how an algorithm makes a decision. 
But is that practicable?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.hiig.de\/en\/ai-transparency\/\" \/>\n<meta property=\"og:site_name\" content=\"HIIG\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-24T14:59:28+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-08-22T12:53:07+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2023\/03\/AI-Transparency-Circle-2.png\" \/>\n\t<meta property=\"og:image:width\" content=\"800\" \/>\n\t<meta property=\"og:image:height\" content=\"450\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Stefanie Barth\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Stefanie Barth\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"The AI Transparency Cycle &#8211; Digital Society Blog","description":"A common notion of AI transparency is to make code public or explain exactly how an algorithm makes a decision. But is that practicable?","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.hiig.de\/en\/ai-transparency\/","og_locale":"en_US","og_type":"article","og_title":"The AI Transparency Cycle &#8211; Digital Society Blog","og_description":"A common notion of AI transparency is to make code public or explain exactly how an algorithm makes a decision. 
But is that practicable?","og_url":"https:\/\/www.hiig.de\/en\/ai-transparency\/","og_site_name":"HIIG","article_published_time":"2023-03-24T14:59:28+00:00","article_modified_time":"2023-08-22T12:53:07+00:00","og_image":[{"width":800,"height":450,"url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2023\/03\/AI-Transparency-Circle-2.png","type":"image\/png"}],"author":"Stefanie Barth","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Stefanie Barth","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/#article","isPartOf":{"@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/"},"author":{"name":"Stefanie Barth","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/a07aa81c80d1dbd4ef1ab5c1cd9c10fd"},"headline":"The AI Transparency Cycle","datePublished":"2023-03-24T14:59:28+00:00","dateModified":"2023-08-22T12:53:07+00:00","mainEntityOfPage":{"@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/"},"wordCount":879,"publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"image":{"@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2023\/03\/AI-Transparency-Circle-2.png","keywords":["datenschutz","privatsph\u00e4re","Why AI?"],"articleSection":["Artificial Intelligence","Everyday Life","ftif AI and Society"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/","url":"https:\/\/www.hiig.de\/en\/ai-transparency\/","name":"The AI Transparency Cycle &#8211; Digital Society 
Blog","isPartOf":{"@id":"https:\/\/www.hiig.de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/#primaryimage"},"image":{"@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2023\/03\/AI-Transparency-Circle-2.png","datePublished":"2023-03-24T14:59:28+00:00","dateModified":"2023-08-22T12:53:07+00:00","description":"A common notion of AI transparency is to make code public or explain exactly how an algorithm makes a decision. But is that practicable?","breadcrumb":{"@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.hiig.de\/en\/ai-transparency\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/#primaryimage","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2023\/03\/AI-Transparency-Circle-2.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2023\/03\/AI-Transparency-Circle-2.png","width":800,"height":450,"caption":"AI Transparency Cycle"},{"@type":"BreadcrumbList","@id":"https:\/\/www.hiig.de\/en\/ai-transparency\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.hiig.de\/en\/"},{"@type":"ListItem","position":2,"name":"The AI Transparency Cycle"}]},{"@type":"WebSite","@id":"https:\/\/www.hiig.de\/#website","url":"https:\/\/www.hiig.de\/","name":"HIIG","description":"Alexander von Humboldt Institute for Internet and 
Society","publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.hiig.de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.hiig.de\/#organization","name":"HIIG","url":"https:\/\/www.hiig.de\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","width":320,"height":80,"caption":"HIIG"},"image":{"@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/a07aa81c80d1dbd4ef1ab5c1cd9c10fd","name":"Stefanie Barth"}]}},"_links":{"self":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/91998","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/users\/9999998"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/comments?post=91998"}],"version-history":[{"count":4,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/91998\/revisions"}],"predecessor-version":[{"id":96413,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/91998\/revisions\/96413"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media\/92474"}],"wp:attachment":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media?parent=91998"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/categories?post=91998"},{"taxonomy":"post_tag
","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/tags?post=91998"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}