{"id":83042,"date":"2022-02-17T09:00:00","date_gmt":"2022-02-17T08:00:00","guid":{"rendered":"https:\/\/www.hiig.de\/?p=83042"},"modified":"2023-03-28T14:02:45","modified_gmt":"2023-03-28T12:02:45","slug":"explainable-ai","status":"publish","type":"post","link":"https:\/\/www.hiig.de\/en\/explainable-ai\/","title":{"rendered":"Why explainable AI needs such a thing as Society"},"content":{"rendered":"\n<p><strong>Have you ever asked yourself what the basis of your search engine autocompletions is? For example, when it was suggested that you search for what it feels like to have heartburn, whilst your intended search seemed to have nothing to do with it at all. There is not yet a standard for explaining such automated decisions. Moreover, today&#8217;s explainable AI (XAI) frameworks focus strongly on individual interests, while a societal perspective falls short. This article will give an introduction to communication in XAI and introduce the figure of <em>the public advocate<\/em> as a possibility to include collective interests in XAI frameworks.&nbsp;<\/strong><\/p>\n\n\n\n<p>The article is based on the thoughts of Dr. Theresa Z\u00fcger, Dr. Hadi Asghari, Johannes Baeck and Judith Fa\u00dfbender during the <a href=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2021\/12\/XAI-Clinic-Abstract-for-Mercator-event-page.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">XAI Clinic in Autumn 2021<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"missing-or-insufficient-explainability-for-lay-people-and-society\"><strong>Missing or insufficient explainability for lay people and society<\/strong><\/h2>\n\n\n\n<p>Have you ever asked yourself what the basis of your search engine autocompletions is? 
For example, at that time when you typed \u201chow does\u201d and your search engine suggested \u201chow does&#8230;it feel to die\u201d, \u201chow does&#8230;it feel to love\u201d, \u201chow does&#8230;it feel to have heartburn\u201d, but you actually wanted to continue typing \u201chow does\u2026 a<sup>2<\/sup> relate to b<sup>2<\/sup> in Pythagoras\u2019 theorem\u201d. If explanations for automated decisions were a standard, you would have been able to get an explanation of the inner workings of that search engine fairly easily. Due to a mixture of limited technical feasibility, communicational challenges and strategic avoidance, such a standard does not yet exist. Whilst a number of major providers and deployers of AI-models have published takes on Explainable AI (XAI) \u2013 most prominently <a href=\"https:\/\/www.ibm.com\/blogs\/research\/2019\/08\/ai-explainability-360\/\" target=\"_blank\" rel=\"noreferrer noopener\">IBM<\/a>, <a href=\"https:\/\/modelcards.withgoogle.com\/about\" target=\"_blank\" rel=\"noreferrer noopener\">Google<\/a> and Facebook \u2013 none of these efforts offer effective explanations for a lay audience. In some cases, lay people are simply not the target group; in others, the explanations are insufficient. Moreover, collective interests are not sufficiently taken into account when it comes to explaining automated decisions; the focus lies predominantly on individual or private interests.<\/p>\n\n\n\n<p>This article will focus on how explanations for automated decisions need to differ with regard to the audience being addressed \u2013 in other words, on target group specific communication of automated decisions. 
In light of the neglected societal perspective, I will introduce the figure of <em>the public advocate<\/em> as a possibility to include collective interests in XAI frameworks.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"technical-elements-of-ai-systems-to-explain\"><strong>Technical elements of AI-systems to explain&nbsp;<\/strong><\/h2>\n\n\n\n<p>The technological complexity of AI-systems makes the traceability of automated decisions difficult. This is due to models with multiple layers, nonlinearities and untidy, large data sets, amongst other reasons. As a reaction to this problem, there have been increasing efforts to develop so-called white-box algorithms or to use simpler model architectures that produce traceable decisions, such as decision trees.<\/p>\n\n\n\n<p>But even if each element of an AI-system is explainable, a complete explanation for an automated decision would consist of a fairly large number of elements. To give an idea of these elements, let me share a dry yet helpful overview (based on <a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3313831.3376590?casa_token=EexLN9hnQ8QAAAAA:s1W8A8esLAapDeZy5WFzdkOvAkS8JpN7gPUdFMfazM02pyOSZgo5k22pt1bxwkU5OH-xDzB3hd5lBA\" target=\"_blank\" rel=\"noreferrer noopener\">Liao et al. (2020)<\/a>):<\/p>\n\n\n\n<p>(1.) The <em>global model<\/em>, which refers to the functionalities of the system that has been trained; this includes which training data has been used and which architecture (e.g. a convolutional neural network, linear regression, etc.). Global means that the functionality of the system is not case-specific. (2.) The <em>local<\/em> decision, which concerns a decision in a specific case. (3.) The <em>input<\/em> data, which refers to the specific data a local decision is made on. (4.) The <em>output<\/em>, which refers to the format and the utilisation of the output the system gives. (5.) 
A <em>counterfactual explanation<\/em>, which shows how different the input would have to be in order to get a different output. (6.) The <em>performance<\/em> of the system.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"the-challenge-of-target-group-specific-communication\"><strong>The challenge of target group specific communication&nbsp;<\/strong><\/h2>\n\n\n\n<p>If what you\u2019ve read up to now has either bored or overwhelmed you, it could mean either that you are not the target group for this blog post or that I have missed the sweet spot between what you, as part of my target group, knew already and what you expect from this article. Target group specific communication and hitting that sweet spot is a struggle when explaining automated decisions as well.<\/p>\n\n\n\n<p>To give you a schematic but better explanation, here are the elements listed above, applied to the search engine example from the beginning of this blog post:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>The <em>global model<\/em> in this case is the trained model which produces the autocomplete suggestions; the training data is most probably previous inputs by other users, what they were searching for and their entire search histories.&nbsp;<\/li><li>The <em>input<\/em> was what you typed in combination with your search history and other information the search engine provider has on you.&nbsp;<\/li><li>The <em>output<\/em> is the autocomplete suggestion.&nbsp;<\/li><li>The <em>local decision<\/em> is the suggestions you\u2019ve been given, based on your input.<\/li><li>A <em>counterfactual<\/em> could involve seeing what suggestions you would get when typing the exact same words, but taking parts of your search history out of the equation or changing another parameter of the input data.&nbsp;<\/li><li>The <em>performance<\/em> of the system would be based on how many people do actually want to find out how it feels to die etc., as opposed to how Pythagoras&#8217; theorem 
works.&nbsp;<\/li><\/ul>\n\n\n\n<p>The performance, for example, would probably not be interesting for the average lay person, but it would be for the developer: people in different positions have different needs, expectations and previous knowledge concerning explanations, and therefore the type of presentation needs to differ for each target group.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"who-asked\"><strong>Who asked?&nbsp;<\/strong><\/h2>\n\n\n\n<p>The standard target groups for explanations of automated decisions \u2013 which are not catered to in the same manner \u2013&nbsp;are the developer, the domain expert and the affected party.&nbsp;<\/p>\n\n\n\n<p>The <strong>developers<\/strong> either build new AI-models or further develop pre-existing AI-models. This group basically needs to understand each element of the system, with a specific focus on the workings of the global model and data representation, to be able to improve and verify the system in an accountable manner. Such explanations have to be available for developers throughout the whole process of development, deployment and maintenance of the system.&nbsp;<\/p>\n\n\n\n<p>The <strong>domain expert<\/strong> is typically an employee of an organisation which uses AI-systems. This could be a medical doctor assisted by an AI-system when making a diagnosis or a content moderator on a social media platform who checks automatically flagged content. This person is assisted in their decision-making by suggestions from an AI-system, as a so-called &#8220;human in the loop&#8221;. Domain experts need to adapt to working with the system and need to develop an awareness of the risks of misleading or false predictions, as well as of the system\u2019s limitations. Therefore, they not only need explanations of local decisions (e.g. 
why did the system flag this content as inappropriate), but, importantly, also thorough training on how the global system works (e.g. what data the system was trained on, whether the system looks for specific words or objects). Such training needs to take place in connection with the specific use context.<\/p>\n\n\n\n<p>The <strong>affected party<\/strong> is, as the name suggests, the person (or other entity) that an automated decision has an effect on. Their needs range from knowing whether an AI-system was involved in a decision, to understanding an automated decision in order to make informed choices, to practising self-advocacy and challenging specific decisions or the use of an AI-system altogether. Affected parties primarily need an explanation of the elements of the system that are connected to their case (local decision). Counterfactual explanations can also be meaningful, as they would enable affected people to see what factors would need to change (in their input data) to produce a different result (the output).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"a-4th-target-group-the-public-advocate\"><strong>A 4th target group: the public advocate<\/strong><\/h2>\n\n\n\n<p>We propose considering a fourth target group: the public advocate.<\/p>\n\n\n\n<p>The <strong>public advocate<\/strong> refers to a person or an organisation that takes care of the concerns of the general public or of a group with special interests. In our understanding of this target group, the general North Star of all public advocate activities has to be moving closer to equality. A public advocate might be an NGO\/NPO dealing with societal questions connected to the use of AI-systems generally \u2013 such as 
<a href=\"https:\/\/www.accessnow.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">Access Now<\/a>, <a href=\"https:\/\/algorithmwatch.org\/en\/\" target=\"_blank\" rel=\"noreferrer noopener\">Algorithmwatch<\/a> or <a href=\"https:\/\/tacticaltech.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">Tactical Tech<\/a> \u2013 or an NGO\/NPO with a focus on specific groups or domains, e.g. the \u00c4rztekammer (the professional representation of medical doctors in Germany) or organisations supporting people affected by discrimination.<br>On the one hand, public advocates lobby and advocate for public interests or special needs \u2013 be it in deliberative processes in the media, in court, in policy-making or in collaboration with providers of AI-systems. On the other hand, such organisations are well-qualified to educate others on AI-systems, tailored to the needs of their respective community. This might be the \u00c4rztekammer providing radiologists (domain experts) with training and background information on the possibilities, risks and limits of e.g. image recognition of lesions in the brain.<br>To facilitate such support, these groups need access to general information on the AI-system \u2013 to the global functioning of the model, input and output. Further explanations of individual cases and the impact on individuals are crucial for this group, especially when their advocacy focuses on specific societal groups or use cases.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"why-is-a-collective-perspective-in-explainable-ai-important\"><strong>Why is a collective perspective in explainable AI important?<\/strong><\/h2>\n\n\n\n<p>The field of XAI is not free of power imbalances. Interests of different actors interfere with one another. Against this backdrop, the need for a public advocate becomes clearer: none of the traditional target groups are intrinsically concerned with collective interests and consequences. 
But a collective focus is important, especially with regard to seemingly low-impact decisions, e.g. which content is suggested to you on platforms or search engines. These automated decisions may count as low-impact in isolation, but can become problematic as the number of users and\/or decisions scales \u2013 e.g. when <a href=\"https:\/\/www.wsj.com\/articles\/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499\" target=\"_blank\" rel=\"noreferrer noopener\">Facebook&#8217;s recommendation tool contributed to the growth<\/a> of extremist groups. Whilst high-impact decisions for individuals \u2013 such as the often cited loan-lending case \u2013 are highlighted in XAI frameworks, \u201clow-impact\u201d decisions remain much more in the shadows; viewing them from a societal, collective perspective sheds some light on their importance. The content that is suitable for an explanation from this perspective is different, and it can be formulated by considering the target group of the public advocate.&nbsp;<\/p>\n\n\n\n<p>Besides the representation of collective needs, public advocates can take over important tasks in the field of explainable AI. Training sessions on how specific AI-systems work should be given by an entity that does not develop or employ such systems itself and therefore does not have obvious conflicting private interests \u2013 which rules out commercial actors and governmental organisations. The public advocate can function as a consultant to development teams if it is included early enough in the development process and if there is a true interest in giving effective explanations.<\/p>\n\n\n\n<p>Last but not least, public advocates have more leverage than a singular affected person when lobbying for a collective. In comparison to the layperson, the organisations we have in mind have more technical expertise and ability to understand how the system works, which increases their bargaining power further. 
Ideally, the work of the public advocates reduces the risk of ineffective explanations that are more a legal response than actual attempts to explain \u2013 see Facebook&#8217;s take on explaining third-party advertisements.&nbsp;<\/p>\n\n\n\n<p>For all points mentioned above \u2013 automated decisions which become critical when viewed on a collective scale, the need for a publicly minded entity to educate on AI-systems and the benefits of joining forces with different affected parties \u2013 there needs to be a &#8216;public advocate&#8217; in XAI frameworks. Not only to consistently include the societal and collective dimension when offering affected users explanations, but to make collective interests visible and explicit in the development of explainable AI in the first place.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"further-reads\"><strong>Further reads<\/strong><\/h3>\n\n\n\n<p>Carvalho, D. V., Pereira, E. M., &amp; Cardoso, J. S. (2019). Machine learning interpretability: A survey on methods and metrics. <em>Electronics<\/em>, <em>8<\/em>(8), 832.<\/p>\n\n\n\n<p>Liao, Q. V., Gruen, D., &amp; Miller, S. (2020). Questioning the AI: Informing design practices for explainable AI user experiences. In <em>Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems<\/em> (pp. 1-15).<\/p>\n\n\n\n<p>Ribera, M., &amp; Lapedriza, A. (2019). Can we do better explanations? A proposal of user-centered explainable AI. In <em>IUI Workshops<\/em> (Vol. 2327, p. 38).<\/p>\n\n\n\n<p>Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier, H., &#8230; &amp; Wrede, B. (2020). Explanation as a social practice: Toward a conceptual framework for the social design of AI systems. <em>IEEE Transactions on Cognitive and Developmental Systems<\/em>, <em>13<\/em>(3), 717-728.<\/p>\n\n\n\n<p>Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. 
<em>Artificial intelligence<\/em>, <em>267<\/em>, 1-38.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Explainable AI (XAI) frameworks focus strongly on individual interests, while a societal perspective falls short. The solution? An incorporation of collective interests in target group specific communication. 
<\/p>\n","protected":false},"author":356,"featured_media":83034,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1289,1582],"tags":[1055,1449,686],"class_list":["post-83042","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-ftif-ai-and-society","tag-ethik-2","tag-explainable-ai-2","tag-ki-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Why explainable AI needs such a thing as Society &#8211; Digital Society Blog<\/title>\n<meta name=\"description\" content=\"This article outlines how explainable AI (XAI) frameworks can incorporate collective interests in target group specific communication.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.hiig.de\/en\/explainable-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why explainable AI needs such a thing as Society &#8211; Digital Society Blog\" \/>\n<meta property=\"og:description\" content=\"This article outlines how explainable AI (XAI) frameworks can incorporate collective interests in target group specific communication.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.hiig.de\/en\/explainable-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"HIIG\" \/>\n<meta property=\"article:published_time\" content=\"2022-02-17T08:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-28T12:02:45+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2022\/02\/banner_blogpost_explainable_AI.png\" \/>\n\t<meta property=\"og:image:width\" content=\"800\" 
\/>\n\t<meta property=\"og:image:height\" content=\"450\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Hauke Odendahl\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Hauke Odendahl\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Why explainable AI needs such a thing as Society &#8211; Digital Society Blog","description":"This article outlines how explainable AI (XAI) frameworks can incorporate collective interests in target group specific communication.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.hiig.de\/en\/explainable-ai\/","og_locale":"en_US","og_type":"article","og_title":"Why explainable AI needs such a thing as Society &#8211; Digital Society Blog","og_description":"This article outlines how explainable AI (XAI) frameworks can incorporate collective interests in target group specific communication.","og_url":"https:\/\/www.hiig.de\/en\/explainable-ai\/","og_site_name":"HIIG","article_published_time":"2022-02-17T08:00:00+00:00","article_modified_time":"2023-03-28T12:02:45+00:00","og_image":[{"width":800,"height":450,"url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2022\/02\/banner_blogpost_explainable_AI.png","type":"image\/png"}],"author":"Hauke Odendahl","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Hauke Odendahl","Est. 
reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/#article","isPartOf":{"@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/"},"author":{"name":"Hauke Odendahl","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/91b3ac6e9a08e6cc5c739126166f02da"},"headline":"Why explainable AI needs such a thing as Society","datePublished":"2022-02-17T08:00:00+00:00","dateModified":"2023-03-28T12:02:45+00:00","mainEntityOfPage":{"@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/"},"wordCount":2137,"publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"image":{"@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2022\/02\/banner_blogpost_explainable_AI.png","keywords":["ethik","explainable AI","KI"],"articleSection":["Artificial Intelligence","ftif AI and Society"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/","url":"https:\/\/www.hiig.de\/en\/explainable-ai\/","name":"Why explainable AI needs such a thing as Society &#8211; Digital Society Blog","isPartOf":{"@id":"https:\/\/www.hiig.de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/#primaryimage"},"image":{"@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2022\/02\/banner_blogpost_explainable_AI.png","datePublished":"2022-02-17T08:00:00+00:00","dateModified":"2023-03-28T12:02:45+00:00","description":"This article outlines how explainable AI (XAI) frameworks can incorporate collective interests in target group specific 
communication.","breadcrumb":{"@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.hiig.de\/en\/explainable-ai\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/#primaryimage","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2022\/02\/banner_blogpost_explainable_AI.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2022\/02\/banner_blogpost_explainable_AI.png","width":800,"height":450,"caption":"Banner zum Blogbeitrag: explainable AI"},{"@type":"BreadcrumbList","@id":"https:\/\/www.hiig.de\/en\/explainable-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.hiig.de\/en\/"},{"@type":"ListItem","position":2,"name":"Why explainable AI needs such a thing as Society"}]},{"@type":"WebSite","@id":"https:\/\/www.hiig.de\/#website","url":"https:\/\/www.hiig.de\/","name":"HIIG","description":"Alexander von Humboldt Institute for Internet and Society","publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.hiig.de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.hiig.de\/#organization","name":"HIIG","url":"https:\/\/www.hiig.de\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","width":320,"height":80,"caption":"HIIG"},"image":{"@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/91b3ac6e9a08e6cc5c739126166f02da","name":"Hauke 
Odendahl"}]}},"_links":{"self":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/83042","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/users\/356"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/comments?post=83042"}],"version-history":[{"count":8,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/83042\/revisions"}],"predecessor-version":[{"id":83973,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/83042\/revisions\/83973"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media\/83034"}],"wp:attachment":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media?parent=83042"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/categories?post=83042"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/tags?post=83042"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}