{"id":104673,"date":"2024-10-15T12:41:47","date_gmt":"2024-10-15T10:41:47","guid":{"rendered":"https:\/\/www.hiig.de\/?p=104673"},"modified":"2024-11-27T13:18:38","modified_gmt":"2024-11-27T12:18:38","slug":"why-ai-is-currently-mainly-predicting-the-past","status":"publish","type":"post","link":"https:\/\/www.hiig.de\/en\/why-ai-is-currently-mainly-predicting-the-past\/","title":{"rendered":"One step forward, two steps back: Why artificial intelligence is currently mainly predicting the past"},"content":{"rendered":"\n<p><strong>Artificial intelligence is often hailed as the technology of the future. Yet it primarily relies on historical data, reproducing old patterns instead of fostering progress. This blog article examines how AI systems can reinforce societal inequalities and marginalisation and explores their impact on social justice. Can we create AI that moves beyond the biases of the past?<\/strong><\/p>\n\n\n\n<p>The increasing use of AI in many areas of our daily lives has intensified the discourse surrounding the supposed omnipotence of machines.<sup>[i]<\/sup> The promises surrounding AI fluctuate between genuine excitement about problem-solving assistants and the dystopian notion that these machines could eventually replace humans. Although that scenario is still far off, both state and non-state actors place great hope in AI-driven processes, which are perceived as more efficient and optimised.<\/p>\n\n\n\n<p>Artificial Intelligence is ubiquitous in our daily lives, such as when it guides us directly to our destination or pre-sorts our emails. However, many are unaware that it is increasingly being used to make predictions about future events on various social levels. From weather forecasting to the financial markets and medical diagnoses, AI systems promise more accurate predictions and improved decision-making. These forecasts are based on historical training datasets that underpin AI technologies and on which they operate. 
More specifically, many AI systems are founded on machine learning <a href=\"https:\/\/www.bpb.de\/kurz-knapp\/lexika\/lexikon-in-einfacher-sprache\/303035\/algorithmus\/\" target=\"_blank\" rel=\"noreferrer noopener\">algorithms<\/a>. These algorithms analyse historical data to identify patterns and utilise these patterns to forecast the future for us. However, this also implies that <a href=\"https:\/\/www.internetmythen.de\/en\/index1b97.html?mythen=myth-42-algorithms-are-always-neutral\" target=\"_blank\" rel=\"noreferrer noopener\">the often-assumed neutrality or perceived objectivity of AI systems is an illusion<\/a>. In reality, current AI systems, especially due to inadequate and incomplete training data, tend to reproduce past patterns. This further entrenches existing social inequalities and power imbalances. For instance, if an <a href=\"https:\/\/www.zeit.de\/arbeit\/2018-10\/bewerbungsroboter-kuenstliche-intelligenz-amazon-frauen-diskriminierung\" target=\"_blank\" rel=\"noreferrer noopener\">AI system for job applications<\/a> is based on historical data predominantly sourced from a particular demographic (e.g., white, cisgender men), the system may preferentially favour similar candidates in the future. This amplifies existing inequalities in the job market and further disadvantages marginalised groups such as BIPoC, women, and individuals from the FLINTA\/LGBTIQA+ communities.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Power imbalances and perspectives in the AI discourse \u2013 Who imagines the future?<\/h2>\n\n\n\n<p>The discourse surrounding Artificial Intelligence is closely linked to social power structures and the question of who holds interpretative authority in this conversation. There exists a significant imbalance: certain narratives and perspectives dominate the development, implementation, and use of AI technology. This dominance profoundly impacts how AI is perceived, understood, and deployed. 
Thus, those who tell the story and control the narrative also wield the power to shape public opinion about AI. Several factors contribute to this so-called narrative power imbalance in the AI discourse.<\/p>\n\n\n\n<p><a href=\"https:\/\/policyreview.info\/articles\/news\/misguided-ai-regulation-needs-shift\/1796\" target=\"_blank\" rel=\"noreferrer noopener\">In the current technological landscape, large tech companies and actors from the Global North often dominate the AI narrative.<\/a> These entities possess the resources and power to foreground their perspectives and interests. Consequently, other voices, particularly from marginalised or minoritised groups, remain underrepresented and insufficiently heard. Additionally, <a href=\"https:\/\/www.hiig.de\/en\/dossier\/why-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">the way AI is portrayed in the media simplifies its complexities<\/a>, failing to do justice to the real impacts of AI on society.<\/p>\n\n\n\n<p>The narrative power imbalance arises because the teams developing AI are frequently not diverse. When predominantly white, cisgender male perspectives from resource-rich regions influence development, other viewpoints and needs are overlooked. For example, AI systems used to evaluate individual creditworthiness often rely on historical data that reflect populations already granted access to credit, such as white, cisgender men from economically prosperous areas. <a href=\"https:\/\/netzpolitik.org\/2021\/datenrassismus-wenn-algorithmen-den-hauskredit-verweigern\/\" target=\"_blank\" rel=\"noreferrer noopener\">This systematically disadvantages individuals from marginalised groups, including women, People of Colour, and those from low-income backgrounds.<\/a> Their applications are less frequently approved because the historical data do not represent their needs and financial realities. This exacerbates economic inequalities and reinforces power dynamics in favour of already privileged groups. 
The same applies to the regulation and governance of AI: those who have the authority to enact laws and policies determine how AI should be regulated.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The past as prophecy: Old patterns and distorted data<\/h2>\n\n\n\n<p>As previously mentioned, AI systems often rely on historical data shaped by existing power structures and social inequalities. These <a href=\"https:\/\/www.ethikrat.org\/fileadmin\/Publikationen\/Stellungnahmen\/deutsch\/stellungnahme-mensch-und-maschine.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">biases<\/a> are adopted by the systems and incorporated into their predictions. As a result, old inequalities not only persist but are also projected into the future. This creates a vicious cycle where the past dictates the future, cementing or, in the worst case, exacerbating social inequalities. The use of AI-based predictive systems leads to <a href=\"https:\/\/www.hiig.de\/en\/myth-ai-will-end-discrimination\/\" target=\"_blank\" rel=\"noreferrer noopener\">self-fulfilling prophecies, in which certain groups are favoured or disadvantaged<\/a> based on historical preferences or disadvantages.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Rules for AI: Where does the EU stand?<\/h2>\n\n\n\n<p>The discourse on the political regulation of Artificial Intelligence often lags behind the rapid development of technology. After lengthy negotiations, <a href=\"https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=OJ%3AL_202401689\" target=\"_blank\" rel=\"noreferrer noopener\">the European Union has finally initiated the AI Act<\/a>. Both <a href=\"https:\/\/www.unesco.de\/sites\/default\/files\/2022-03\/DUK_Broschuere_KI-Empfehlung_DS_web_final.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">UNESCO<\/a> and the EU have previously highlighted the risks associated with AI deployment in sensitive social areas. 
According to the European Commission, AI systems could jeopardise EU values and infringe upon fundamental rights such as freedom of expression, non-discrimination, and the protection of personal data. The Commission states:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote has-text-align-left is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"has-text-align-right\">&#8220;The use of AI can affect the values on which the EU is founded and lead to breaches of fundamental rights, including the rights to freedom of expression, freedom of assembly, human dignity, non-discrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation, as applicable in certain domains, protection of personal data and private life, or the right to an effective judicial remedy and a fair trial, as well as consumer protection.&#8221; (<a href=\"https:\/\/commission.europa.eu\/document\/download\/d2ec4039-c5be-423a-81ef-b9e44e79825b_en?filename=commission-white-paper-artificial-intelligence-feb2020_en.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">European Commission 2020:11<\/a>)<\/p>\n<\/blockquote>\n\n\n\n<p>UNESCO also emphasises that gender-specific stereotypes and discrimination in AI systems must be avoided and actively combated (<a href=\"https:\/\/www.unesco.org\/en\/articles\/recommendation-ethics-artificial-intelligence\" target=\"_blank\" rel=\"noreferrer noopener\">UNESCO 2021:32<\/a>). To this end, UNESCO launched the &#8220;Women for Ethical AI&#8221; project in 2022 to integrate gender justice into the AI agenda. The UN AI Advisory Board similarly calls for a public-interest-oriented AI and highlights gender as an essential cross-cutting issue. 
However, in Germany, concrete regulations to prevent algorithmic discrimination in this field are still lacking, necessitating <a href=\"https:\/\/www.unesco.de\/wissen\/wissenschaft\/ethik-und-philosophie\/studie-umsetzung-ki-ethik-empfehlung\" target=\"_blank\" rel=\"noreferrer noopener\">adaptations to the General Equal Treatment Act (AGG)<\/a>.<\/p>\n\n\n\n<p>With the <a href=\"https:\/\/digital-strategy.ec.europa.eu\/de\/policies\/regulatory-framework-ai%20\/%20https:\/\/artificialintelligenceact.eu\/de\/\" target=\"_blank\" rel=\"noreferrer noopener\">Artificial Intelligence Act<\/a>, the European Union has taken a significant step towards AI regulation. Nevertheless, specific provisions for gender equality have yet to be firmly established. The legislation adopts a risk-based approach to ensure that the use of AI-based systems does not adversely affect people&#8217;s safety, health, and fundamental rights. The legal obligations are contingent on the risk potential of an AI system: systems posing an unacceptable risk are prohibited, high-risk systems are subject to strict requirements, and low-risk AI systems face few or no obligations. Certain applications for so-called social scoring and predictive policing are thus set to be banned. Social scoring refers to the assessment of individuals based on personal data, such as credit behaviour, traffic violations, or social engagement, to regulate their access to certain services or privileges. Predictive policing, on the other hand, employs AI to predict potential future crimes based on data analysis and take preventive measures. Both practices face criticism for potentially leading to discrimination, privacy violations, and social control. 
However, there are exceptions in the context of &#8220;national security&#8221; (AI Act 2024, Article 2).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Gaps in the AI Act<\/h2>\n\n\n\n<p>Critical voices, such as <a href=\"https:\/\/algorithmwatch.org\/en\/\" target=\"_blank\" rel=\"noreferrer noopener\">AlgorithmWatch<\/a>, argue that while the AI Act restricts the use of facial recognition by law enforcement in public spaces, it also contains numerous loopholes (<a href=\"https:\/\/algorithmwatch.org\/de\/ki-verordnung-eu-parlament-stimmt-ab\/\" target=\"_blank\" rel=\"noreferrer noopener\">Vieth-Ditlmann\/Sombetzki 2024<\/a>). Although the risks posed by &#8220;(unfair) bias&#8221; are acknowledged in many parts of the legislation, they are inadequately addressed in concrete terms. Gender, race, and other aspects appear as categories of discrimination. However, a specific call to prevent biases in datasets and outcomes is found only in <a href=\"https:\/\/artificialintelligenceact.eu\/article\/10\/\" target=\"_blank\" rel=\"noreferrer noopener\">Article 10, paragraphs 2f) and 2g)<\/a>, concerning so-called high-risk systems. These sections demand &#8220;appropriate measures&#8221; to identify, prevent, and mitigate potential biases. The question of what these &#8220;appropriate measures&#8221; entail can only be answered through legal implementation. A corresponding definition is still pending, particularly given how significantly the systems in question differ. What conclusions can be drawn from this regulatory debate?<\/p>\n\n\n\n<p>To prevent discriminatory practices from being embedded in AI technologies, the focus must be on datasets that represent society as a whole. 
The AI Act only vaguely addresses this in <a href=\"https:\/\/artificialintelligenceact.eu\/article\/10\/\" target=\"_blank\" rel=\"noreferrer noopener\">Article 10, paragraph 3<\/a>, stating that data must be &#8220;sufficiently representative and, as far as possible, accurate and complete concerning the intended purpose.&#8221; However, these requirements allow for various interpretations.<\/p>\n\n\n\n<p>One potential solution involves the <a href=\"https:\/\/brighter.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">use of synthetic data<\/a>, which consists of <a href=\"https:\/\/publica-rest.fraunhofer.de\/server\/api\/core\/bitstreams\/f22502f5-5379-40b8-a420-006a2479a1ba\/content#:~:text=Bei%20der%20Anwendung%20von%20K%C3%BCnstlicher,generierte%20Daten%20bilden%20einen%20Ausweg\" target=\"_blank\" rel=\"noreferrer noopener\">artificially generated information<\/a> simulating real data to fill gaps in the dataset and avoid privacy issues. Simultaneously, diversity in development teams should be promoted to prevent one-sided biases. Whether the AI Act will achieve the desired effects remains to be seen in practice.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The God Trick<sup>[ii]<\/sup>&nbsp;<\/h2>\n\n\n\n<p>The EU and other regulatory bodies have recognised that data must be representative, but this is not sufficient. The problem is twofold: on one hand, essential data points from marginalised groups are often missing from training datasets; on the other, these groups are disproportionately represented in certain societal areas, such as social benefits allocation. This creates an intriguing paradox and a further divergence in intersectional feminist critique. It is not only the absence of data points that is problematic but also the excessive inclusion of data from marginalised groups into other systems. Governments frequently possess far more data about poor or marginalised populations than about resource-rich, privileged sectors. 
This excessive collection leads to injustices.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"has-text-align-right\">\u201cFar too often, the problem is not that data about minoritized groups are missing but the reverse: the databases and data systems of powerful institutions are built on the excessive surveillance of minoritized groups. This results in women, people of color, and poor people, among others, being overrepresented in the data that these systems are premised upon.\u201d (D\u2019Ignazio\/Klein 2020:39)<\/p>\n<\/blockquote>\n\n\n\n<p>An example of this is the system described by Virginia Eubanks in \u201c<a href=\"https:\/\/virginia-eubanks.com\/automating-inequality\/\" target=\"_blank\" rel=\"noreferrer noopener\">Automating Inequality<\/a>\u201d in Allegheny County, Pennsylvania. An algorithmic model was designed to predict the risk of child abuse in order to protect vulnerable children. However, while the model collects vast amounts of data on poorer parents reliant on public services\u2014including information from child protection, addiction treatment, mental health services, and Medicaid\u2014such data is often absent for wealthier parents who utilise private health services. The result is that low-income families are overrepresented and disproportionately classified as risk cases, often leading to the separation of children from their parents. Eubanks describes this process as a conflation of \u201cparenting in poverty\u201d with \u201cbad parenting.\u201d This serves as yet another example of how biased data reinforces inequalities and further oppresses the most disadvantaged members of society.<\/p>\n\n\n\n<p>In Europe, AI systems are increasingly deployed in sensitive areas that extend far beyond consumer protection. 
In Austria, the <a href=\"https:\/\/www.oeaw.ac.at\/en\/ita\/projects\/ams-algorithm\" target=\"_blank\" rel=\"noreferrer noopener\">AMS Algorithm<\/a> was intended to assist caseworkers in deciding which resources, such as training opportunities, should be allocated based on a calculated &#8220;integration chance.&#8221; Here, there is a risk that existing inequalities will be amplified by biased data, <a href=\"https:\/\/netzpolitik.org\/2021\/oesterreich-jobcenter-algorithmus-landet-vor-hoechstgericht\/\" target=\"_blank\" rel=\"noreferrer noopener\">disadvantaging individuals with lesser chances in the job market<\/a>. In Germany, the predictive policing system HessenData relies on the controversial software from the US company Palantir to <a href=\"https:\/\/netzpolitik.org\/2024\/hessendata-erneute-verfassungsbeschwerde-gegen-polizeiliche-big-data-analysen\/\" target=\"_blank\" rel=\"noreferrer noopener\">predict crime<\/a>. Such technologies are under scrutiny because they are often based on distorted datasets, which can result in marginalised groups, such as people of colour and low-income individuals, being disproportionately surveilled and criminalised. Both examples illustrate that the use of AI in such areas carries the risk of entrenching existing discrimination rather than reducing it.<\/p>\n\n\n\n<p>These biases make it particularly challenging to create fair and representative AI systems. The regulation of such systems is complicated by the fact that privileged groups are often less intensely monitored and can more easily resist access to their data. Conversely, marginalised groups have little means to contest the use of their data or the discriminatory outcomes of such systems. In many cases, the processes in which AI is deployed are opaque, and avenues for challenge remain unclear. 
All of this indicates that regulation often lags behind technological developments, further entrenching existing power structures through AI technologies.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Human in the Loop<\/h2>\n\n\n\n<p>In light of these challenges, the <a href=\"https:\/\/www.zew.de\/en\/zew\/news\" target=\"_blank\" rel=\"noreferrer noopener\">role of human oversight is increasingly brought to the fore<\/a> in regulatory debates. A central approach of the AI Act is the call for \u201cHuman in the Loop\u201d models, which are intended to ensure that humans are involved in the decision-making processes of AI systems. Article 14 of the AI Act stipulates that high-risk AI systems must be designed so that they can be effectively monitored during their use. It states:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"has-text-align-right\">\u201cHigh-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.\u201d (<a href=\"https:\/\/artificialintelligenceact.eu\/article\/14\/\" target=\"_blank\" rel=\"noreferrer noopener\">AI Act, Article 14<\/a>)<\/p>\n<\/blockquote>\n\n\n\n<p>This reveals an interesting turn. The use of AI was initially justified as a solution to eliminate discrimination and to ensure non-discrimination. Simplistically put, the idea was that machines would be much better equipped to judge impartially and make decisions free from all discriminatory logics and societal practices. However, the preceding argument of this article has made it clear why this promise has yet to be fulfilled. Reality shows that human intervention remains essential to ensure that automated decisions do not continue to reproduce biases. But what requirements must these humans meet to make a meaningful intervention? 
What information do they need? At which points should they be involved? How can we ensure that this interaction is not only legally sound but also transparent and comprehensible?<\/p>\n\n\n\n<p>The Alexander von Humboldt Institute for Internet and Society is actively contributing to addressing these challenges through its <a href=\"https:\/\/www.hiig.de\/en\/project\/human-in-the-loop\/\" target=\"_blank\" rel=\"noreferrer noopener\">\u201cHuman in the Loop?\u201d project<\/a>. It explores new approaches to meaningfully involve humans in AI decision-making processes. Here, a fascinating crystallisation point emerges between the past and the future: while AI was originally conceived as a solution to eliminate biases, reality teaches us that human intervention remains necessary. Simultaneously, research into \u201cHuman in the Loop\u201d models allows for the development of new approaches that redefine the interplay between humans and machines. This creates a tension at the intersection of old ways of working and future value creation, where humans and machines mutually vie for interpretative authority. This <a href=\"https:\/\/www.hiig.de\/en\/ai-under-supervision-human-in-the-loop\/\" target=\"_blank\" rel=\"noreferrer noopener\">blog post<\/a> offers a deeper insight into the still-open questions surrounding this human-machine interaction.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Remarks<\/h2>\n\n\n\n<p>[i] This contribution is based on reflections that I have already made <a href=\"https:\/\/www.helmut-schmidt.de\/en\/bkhs-magazine-remaking-globalisation\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a> and on the <a href=\"https:\/\/leibniz-hbi.de\/blog\" target=\"_blank\" rel=\"noreferrer noopener\">blog<\/a> of the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI).<\/p>\n\n\n\n<p>[ii] The God Trick by Donna Haraway (1988) refers to the paradox of a supposedly omniscient standpoint. 
According to Haraway, a position that is fundamentally biased (mostly male, white, heterosexual) is generalised here under the guise of objectivity and neutrality. This metaphor has often been referenced in discussions of technological development and data science, closely aligned with what AI suggests. Interesting thoughts on this context can be found in Marcus (2020).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">References<\/h2>\n\n\n\n<p>Ahmed, Maryam (2024): Car insurance quotes higher in ethnically diverse areas. Online:<a href=\"https:\/\/www.bbc.com\/news\/business-68349396\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/www.bbc.com\/news\/business-68349396<\/a> [20.03.2024]<\/p>\n\n\n\n<p>Biselli, Anna (2022): BAMF weitet automatische Sprachanalyse aus. Online:<a href=\"https:\/\/netzpolitik.org\/2022\/asylverfahren-bamf-weitet-automatische-sprachanalyse-aus\/\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/netzpolitik.org\/2022\/asylverfahren-bamf-weitet-automatische-sprachanalyse-aus\/<\/a> [20.03.2024]<\/p>\n\n\n\n<p>Buolamwini, Joy\/Gebru, Timnit (2018): Gender Shades. Online:<a href=\"https:\/\/www.media.mit.edu\/projects\/gender-shades\/overview\/\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/www.media.mit.edu\/projects\/gender-shades\/overview\/<\/a> [20.03.2024]<\/p>\n\n\n\n<p>Bundesregierung (2020): Strategie K\u00fcnstliche Intelligenz der Bundesregierung. Online:<a href=\"https:\/\/www.bundesregierung.de\/breg-de\/service\/publikationen\/strategie-kuenstliche-intelligenz-der-bundesregierung-fortschreibung-2020-1824642\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/www.bundesregierung.de\/breg-de\/service\/publikationen\/strategie-kuenstliche-intelligenz-der-bundesregierung-fortschreibung-2020-1824642<\/a> [20.03.2024]<\/p>\n\n\n\n<p>Deutscher Bundestag (2023): Sachstand. Regulierung von k\u00fcnstlicher Intelligenz in Deutschland. 
Online:<a href=\"https:\/\/bundestag.de\/resource\/blob\/940164\/51d5380e12b3e121af9937bc69afb6a7\/WD-5-001-23-pdf-data.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/bundestag.de\/resource\/blob\/940164\/51d5380e12b3e121af9937bc69afb6a7\/WD-5-001-23-pdf-data.pdf<\/a> [20.03.2024]<\/p>\n\n\n\n<p>Europ\u00e4ische Kommission (2020a): EU Whitepaper. On Artificial Intelligence \u2013 A European approach to excellence and trust. Online:<a href=\"https:\/\/commission.europa.eu\/document\/download\/d2ec4039-c5be-423a-81ef-b9e44e79825b_en?filename=commission-white-paper-artificial-intelligence-feb2020_en.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/commission.europa.eu\/document\/download\/d2ec4039-c5be-423a-81ef-b9e44e79825b_en?filename=commission-white-paper-artificial-intelligence-feb2020_en.pdf<\/a> [20.03.2024]&nbsp;<\/p>\n\n\n\n<p>K\u00f6ver, Chris (2019): Streit um den AMS-Algorithmus geht in die n\u00e4chste Runde. Online:<a href=\"https:\/\/netzpolitik.org\/2019\/streit-um-den-ams-algorithmus-geht-in-die-naechste-runde\/\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/netzpolitik.org\/2019\/streit-um-den-ams-algorithmus-geht-in-die-naechste-runde\/<\/a> [20.03.2024]<\/p>\n\n\n\n<p>Leisegang, Daniel (2023): Automatisierte Datenanalyse f\u00fcr die vorbeugende Bek\u00e4mpfung von Straftaten ist verfassungswidrig. Online:<a href=\"https:\/\/netzpolitik.org\/2023\/urteil-des-bundesverfassungsgerichts-automatisierte-datenanalyse-fuer-die-vorbeugende-bekaempfung-von-straftaten-ist-verfassungswidrig\/\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/netzpolitik.org\/2023\/urteil-des-bundesverfassungsgerichts-automatisierte-datenanalyse-fuer-die-vorbeugende-bekaempfung-von-straftaten-ist-verfassungswidrig\/<\/a> [20.03.2024]<\/p>\n\n\n\n<p>L\u00fctz, Fabian (2024): Regulierung von KI: auf der Suche nach \u201eGender\u201c. 
Online:<a href=\"https:\/\/www.gender-blog.de\/beitrag\/regulierung-ki-gender\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/www.gender-blog.de\/beitrag\/regulierung-ki-gender<\/a> [20.03.2024]<\/p>\n\n\n\n<p>&nbsp;Netzforma (2020): Wenn KI, dann feministisch. Impulse aus Wissenschaft und Aktivismus. Online:<a href=\"https:\/\/netzforma.org\/wp-content\/uploads\/2021\/01\/2020_wenn-ki-dann-feministisch_netzforma.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/netzforma.org\/wp-content\/uploads\/2021\/01\/2020_wenn-ki-dann-feministisch_netzforma.pdf<\/a>&nbsp; [20.03.2024]<\/p>\n\n\n\n<p>Pena, Paz\/Varon, Joana (2021): Oppressive A.I.: Feminist Categories to Unterstand its Political Effect. Online:<a href=\"https:\/\/notmy.ai\/news\/oppressive-a-i-feminist-categories-to-understand-its-political-effects\/\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/notmy.ai\/news\/oppressive-a-i-feminist-categories-to-understand-its-political-effects\/<\/a>&nbsp; [20.03.2024]<\/p>\n\n\n\n<p>Rau, Franziska (2023): Polizei Hamburg will ab Juli Verhalten automatisch scannen. Online:<a href=\"https:\/\/netzpolitik.org\/2023\/intelligente-videoueberwachung-polizei-hamburg-will-ab-juli-verhalten-automatisch-scannen\/\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/netzpolitik.org\/2023\/intelligente-videoueberwachung-polizei-hamburg-will-ab-juli-verhalten-automatisch-scannen\/<\/a> [20.03.2024]<\/p>\n\n\n\n<p>UN AI Advisory Board (2023): Governing AI for Humanity. Online:<a href=\"https:\/\/www.un.org\/techenvoy\/sites\/www.un.org.techenvoy\/files\/ai_advisory_body_interim_report.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/www.un.org\/techenvoy\/sites\/www.un.org.techenvoy\/files\/ai_advisory_body_interim_report.pdf<\/a> [20.03.2024]<\/p>\n\n\n\n<p>UNESCO (2021): UNESCO recommendation on Artificial Intelligence. 
Online:<a href=\"https:\/\/www.unesco.org\/en\/articles\/recommendation-ethics-artificial-intelligence\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/www.unesco.org\/en\/articles\/recommendation-ethics-artificial-intelligence<\/a> [20.03.2024]<\/p>\n\n\n\n<p>UNESCO (2022): Project Women4Ethical AI. Online:<a href=\"https:\/\/unesco.org\/en\/artificial-intelligence\/women4ethical-ai\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/unesco.org\/en\/artificial-intelligence\/women4ethical-ai<\/a> [20.03.2024]<\/p>\n\n\n\n<p>Legislative Entschlie\u00dfung des Parlaments vom 13.03.2024 zu dem Vorschlag f\u00fcr eine Verordnung des Europ\u00e4ischen Parlaments und des Rates zur Festlegung harmonisierter Vorschriften f\u00fcr K\u00fcnstliche Intelligenz und zur \u00c4nderung bestimmter Rechtsakte der Union (2024), TA (2024)0138. Online:<a href=\"https:\/\/www.europarl.europa.eu\/RegData\/seance_pleniere\/textes_adoptes\/definitif\/2024\/03-13\/0138\/P9_TA(2024)0138_DE.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/www.europarl.europa.eu\/RegData\/seance_pleniere\/textes_adoptes\/definitif\/2024\/03-13\/0138\/P9_TA(2024)0138_DE.pdf<\/a> [27.03.2024]<\/p>\n\n\n\n<p>D&#8217;Ignazio, Catherin\/Klein, Lauren F. (2020): Data Feminism. MIT Press.<\/p>\n\n\n\n<p>Eubanks, Virginia (2018): Automating Inequality &#8211; How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin\u2019s Press.<\/p>\n\n\n\n<p>Haraway, Donna (1988): Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. In: Feminist Studies, 14. Jg, Heft 3, S. 575-599.<\/p>\n\n\n\n<p>Koenecke, Allison et al. (2020): Racial disparities in automatic speech recognition. In: Proceedings of the National Academy of Sciences, Jg. 117, Heft 14, S. 7684-7689.<\/p>\n\n\n\n<p>Kolleck, Alma\/Orwat, Carsten (2020): M\u00f6gliche Diskriminierung durch algorithmische Entscheidungssysteme und maschinelles Lernen \u2013 ein \u00dcberblick. 
Online:<a href=\"https:\/\/publikationen.bibliothek.kit.edu\/1000127166\/94887549\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/publikationen.bibliothek.kit.edu\/1000127166\/94887549<\/a> [20.03.2024]<\/p>\n\n\n\n<p>Rehak, Rainer (2023): Zwischen Macht und Mythos: Eine kritische Einordnung aktueller KI-Narrative. In: Soziopolis: Gesellschaft beobachten.<\/p>\n\n\n\n<p>Varon, Joana\/Pe\u00f1a, Paz (2021): Artificial intelligence and consent: a feminist anti-colonial critique. In: Internet Policy Review. 10 (4). Online:<a href=\"https:\/\/policyreview.info\/articles\/analysis\/artificial-intelligence-and-consent-feminist-anti-colonial-critique\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/policyreview.info\/articles\/analysis\/artificial-intelligence-and-consent-feminist-anti-colonial-critique<\/a> [12.03.2024]<\/p>\n\n\n\n<p>West, Sarah Meyers, Whittaker, Meredith and Crawford, Kate (2019): Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Online:<a href=\"https:\/\/ainowinstitute.org\/discriminatingsystems.html\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/ainowinstitute.org\/discriminatingsystems.html<\/a> [20.03.2024] &nbsp; Zur intersektionalen feministischen Kritik am AI Act \u2013 FemAI (2024): Policy Paper &#8211; A feminist vision for the EU AI Act. 
Online:<a href=\"https:\/\/www.fem-ai-center-for-feminist-artificial-intelligence.com\/_files\/ugd\/f05f97_0c369b5785d944fea2989190137835a1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"> https:\/\/www.fem-ai-center-for-feminist-artificial-intelligence.com\/_files\/ugd\/f05f97_0c369b5785d944fea2989190137835a1.pdf<\/a> [08.04.2024]<\/p>\n","protected":false}}