{"id":114259,"date":"2026-04-10T15:11:36","date_gmt":"2026-04-10T13:11:36","guid":{"rendered":"https:\/\/www.hiig.de\/?p=114259"},"modified":"2026-04-10T16:28:27","modified_gmt":"2026-04-10T14:28:27","slug":"ai-observatories-as-democratic-infrastructure","status":"publish","type":"post","link":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/","title":{"rendered":"Algorithms under scrunity: AI observatories as democratic infrastructure"},"content":{"rendered":"\n<p><strong>Artificial intelligence systems now determine who receives a loan, which job applicants are interviewed and what information billions of people encounter online. Yet most citizens\u2014and even most policymakers\u2014have little insight into how these consequential decisions are made. AI observatories are an emerging solution: As independent institutions they monitor AI systems, assess their societal impacts and generate evidence to inform democratic governance. They ask not only &#8220;Does this algorithm work?&#8221; but &#8220;Who does it work for? Who benefits? Who is harmed?&#8221;. This article examines why AI observatories matter and what distinguishes them from existing governance mechanisms.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What are AI observatories?<\/strong><\/h2>\n\n\n\n<p>Imagine a public health observatory, but for algorithms. Just as epidemiologists track disease outbreaks to protect population health, AI observatories monitor algorithmic systems to safeguard democratic values and human rights. They are independent, multistakeholder institutions, typically involving researchers, civil society organisations, policymakers and affected communities. They systematically observe, document and assess artificial intelligence as it operates in the real world. 
AI observatories perform several critical functions:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Monitoring and mapping:<\/strong> They identify where AI systems are deployed, by whom, and for what purposes. In doing so, they create a public record that is often unavailable from governments or corporations.<\/li>\n\n\n\n<li><strong>Impact assessment:<\/strong> They evaluate how AI systems affect different populations, with particular attention to vulnerable groups and structural inequalities.<\/li>\n\n\n\n<li><strong>Knowledge production:<\/strong> They generate independent, evidence-based research that challenges both corporate narratives and regulatory blind spots.<\/li>\n\n\n\n<li><strong>Public engagement:<\/strong> They translate technical findings for diverse audiences, from policymakers to affected communities, fostering informed democratic discourse.<\/li>\n\n\n\n<li><strong>Epistemic justice:<\/strong> They contest the monopoly of technosolutionist frameworks by incorporating multiple ways of knowing, for example by including voices from the Global South and Indigenous epistemologies. Technosolutionism is the belief that technological innovation is the best or only solution to social, political and economic problems.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>AI observatories in practice: A global landscape<\/strong><\/h2>\n\n\n\n<p>AI observatories have emerged worldwide across diverse institutional contexts. A few examples: The <a href=\"http:\/\/oecd.ai\" target=\"_blank\" rel=\"noreferrer noopener\">OECD.AI Policy Observatory<\/a> tracks AI policies and national strategies across 69 countries, serving as a reference point for comparative governance. The <a href=\"http:\/\/ai-watch.ec.europa.eu\" target=\"_blank\" rel=\"noreferrer noopener\">European Commission&#8217;s AI Watch<\/a> monitors market developments, technological capabilities, and policy implementations across the EU, informing the AI Act&#8217;s evolution. 
UNESCO&#8217;s <a href=\"http:\/\/ircai.org\" target=\"_blank\" rel=\"noreferrer noopener\">International Research Centre on Artificial Intelligence<\/a> (IRCAI), hosted in Slovenia, focuses on AI for sustainable development, connecting research communities across continents. The <a href=\"http:\/\/gpai.ai\" target=\"_blank\" rel=\"noreferrer noopener\">Global Partnership on AI<\/a> (GPAI), established by the G7, brings together 29 member countries to support responsible AI development through working groups on key themes.<br><br>Regional and national observatories offer additional models. Canada&#8217;s <a href=\"http:\/\/observatoire-ia.ulaval.ca\" target=\"_blank\" rel=\"noreferrer noopener\">Observatoire international sur les impacts soci\u00e9taux de l&#8217;IA et du num\u00e9rique<\/a> (OBVIA) exemplifies university-led, multi-institution collaboration addressing social impacts. Brazil&#8217;s <a href=\"http:\/\/obia.nic.br\" target=\"_blank\" rel=\"noreferrer noopener\">AI Observatory<\/a> (OBIA), launched in 2024 as part of the Brazilian AI Plan (2024\u20132028), explicitly prioritises epistemic diversity and international South-South cooperation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why AI observatories matter: The governance gap<\/strong><\/h2>\n\n\n\n<p>Artificial intelligence systems increasingly shape life chances without democratic authorisation. Consider three illustrative cases:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Case 1: Algorithmic welfare allocation<\/strong><\/h3>\n\n\n\n<p>In several European countries, algorithmic systems now determine eligibility for unemployment benefits or disability support. When the Dutch government deployed SyRI (System Risk Indication) to detect welfare fraud, it disproportionately targeted low-income neighborhoods and migrant communities. A court eventually ruled the system violated human rights, but only after years of discriminatory impact.
(<a href=\"https:\/\/www.ohchr.org\/en\/press-releases\/2020\/02\/landmark-ruling-dutch-court-stops-government-attempts-spy-poor-un-expert\" target=\"_blank\" rel=\"noreferrer noopener\">Office of the United Nations High Commissioner for Human Rights 2020<\/a>) An AI observatory could have identified these disparities earlier, documented their patterns and equipped civil society with evidence for intervention.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Case 2: Hiring algorithms and structural bias<\/strong><\/h3>\n\n\n\n<p>Major corporations now use AI to screen job applications, with little transparency about evaluation criteria. Research by Upturn and other organisations has documented how these systems systematically disadvantage women, racial minorities and people with disabilities. (<a href=\"https:\/\/www.upturn.org\/static\/reports\/2018\/hiring-algorithms\/files\/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Bogen &amp; Rieke 2018<\/a>) This disadvantage often is by design, as AI systems optimise for historically biased hiring patterns. (<a href=\"https:\/\/www.hiig.de\/en\/why-ai-is-currently-mainly-predicting-the-past\/\" target=\"_blank\" rel=\"noreferrer noopener\">Mosene 2024<\/a>) Observatories can audit these systems independently, publish findings and pressure for accountability in ways isolated complaints cannot.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Case 3: Credit scoring and algorithmic redlining<\/strong><\/h3>\n\n\n\n<p>Fintech companies increasingly use alternative data\u2014social media activity, online behavior\u2014to assess creditworthiness. Studies suggest these systems reproduce historical patterns of redlining, denying loans to communities of color even when traditional credit scores are equivalent. 
(<a href=\"https:\/\/haas.berkeley.edu\/wp-content\/uploads\/Consumer-Lending-Discrimination-in-the-FinTech-Era.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Bartlett et al. 2022<\/a>) Without systematic monitoring, this new form of discrimination operates invisibly. Observatories make it legible and thus contestable.<\/p>\n\n\n\n<p>In each case, the problem is not merely technical failure but a democratic deficit. Decisions that profoundly affect people&#8217;s lives are made by systems that are proprietary, opaque and unaccountable. Traditional governance mechanisms prove inadequate. Regulators often lack technical capacity. Courts address individual cases but miss systemic patterns. Corporate self-regulation is, predictably, self-serving.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Informational asymmetry as power asymmetry<\/strong><\/h2>\n\n\n\n<p>This governance gap reflects a deeper structural problem: informational asymmetry. As Antoinette Rouvroy and Thomas Berns argue in their work on &#8220;algorithmic governmentality&#8221;, we inhabit an era where governance operates through data-driven prediction and behavioral modulation rather than explicit political debate. (Rouvroy &amp; Berns 2013) Big Tech corporations monopolise not only the algorithms but the data, infrastructure and expertise required to understand them.<br><br>Recent reports underscore this imbalance. The MIT Sloan Management Review&#8217;s <em>The Emerging Agentic Enterprise<\/em> (2025) finds that agentic AI is being deployed at scale faster than organisations are developing governance structures to oversee it. (<a href=\"https:\/\/sloanreview.mit.edu\/projects\/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Ransbotham et al. 
2025<\/a>) The AI Now Institute&#8217;s <em>Artificial Power<\/em> Landscape Report 2025 further documents how external audits, where they exist at all, are typically commissioned and controlled by the very companies under scrutiny, structurally precluding the independence they purport to offer. (<a href=\"https:\/\/ainowinstitute.org\/publications\/research\/ai-now-2025-landscape-report\" target=\"_blank\" rel=\"noreferrer noopener\">Brennan et al. 2025<\/a>)<\/p>\n\n\n\n<p>This asymmetry is most acute in the Global South, where populations become sites of data extraction without corresponding governance capacity or benefit-sharing. Nick Couldry and Ulises Mej\u00edas describe this dynamic as &#8220;data colonialism&#8221;: the continuation of historical patterns of resource appropriation through new technological means. (Couldry &amp; Mej\u00edas 2019) AI systems trained on Global South populations are governed by Global North institutions, with minimal accountability to affected communities. AI observatories represent a partial remedy. They build public-interest technical capacity, generate independent evidence and create forums where asymmetry can be challenged, if not fully overcome.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What AI observatories offer that others cannot<\/strong><\/h2>\n\n\n\n<p>To understand what AI observatories contribute, it helps to clarify what they are not and what they offer as alternatives to existing governance mechanisms.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Observatories vs. regulatory agencies<\/strong><\/h3>\n\n\n\n<p>Regulators like the European Data Protection Authorities possess enforcement powers but are often under-resourced, politically constrained and reactive rather than anticipatory. Observatories lack legal authority but gain agility, independence and capacity for proactive systemic analysis. They function as the &#8220;eyes&#8221; of democratic governance.
They identify problems regulators can then address.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Observatories vs. corporate audits<\/strong><\/h3>\n\n\n\n<p>Companies increasingly commission internal or contracted audits to demonstrate compliance. Yet as Meredith Whittaker and her colleagues at the AI Now Institute have argued, these audits are typically designed to minimise liability rather than maximise accountability. (Whittaker et al. 2018) Observatories, by contrast, answer to the public interest rather than shareholder value, and their findings cannot be suppressed or selectively disclosed.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Observatories vs. academic research<\/strong><\/h3>\n\n\n\n<p>Universities produce crucial AI scholarship, but academic incentives, such as publication in prestigious journals and theoretical innovation, do not always align with timely, policy-relevant intervention. Observatories bridge research and action, translating findings into formats accessible to policymakers, journalists and civil society.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Observatories vs. civil society advocacy<\/strong><\/h3>\n\n\n\n<p>Advocacy organisations like Access Now, the Algorithmic Justice League and the Digital Rights Foundation perform essential work holding power accountable. AI observatories complement advocacy by providing the empirical infrastructure that makes campaigns credible and durable.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Epistemic and cognitive justice<\/strong><\/h2>\n\n\n\n<p>One of the most important, yet often overlooked, contributions of AI observatories is epistemic: they expand who can produce, interpret and contest knowledge about artificial intelligence. Dominant AI discourse is overwhelmingly technosolutionist, presenting algorithmic systems as inevitable, neutral and beneficial by default.
This framing obscures power relations, forecloses alternatives, and marginalises dissenting voices, particularly those from communities most affected by AI harms.<br><br>AI observatories create space for epistemic diversity. By incorporating critical perspectives, like feminist technoscience, decolonial theory, disability justice or environmental humanities, they challenge narrow conceptions of what counts as &#8220;expertise&#8221;. By engaging with local and Indigenous knowledge systems, they recognise that understanding technology&#8217;s social impacts requires more than computer science. Brazil&#8217;s OBIA, for instance, explicitly seeks to integrate diverse stakeholders in assessing AI impacts, moving beyond technocratic metrics to consider cultural, relational and environmental dimensions often ignored in Global North frameworks. (<a href=\"https:\/\/indicelatam.cl\/obia-the-brazilian-ai-observatory\/\" target=\"_blank\" rel=\"noreferrer noopener\">Indice Latinoamericano de Inteligencia Artificial 2024<\/a>) This commitment to epistemic justice is not merely procedural. It changes what questions are asked, which harms are recognised and what futures are imaginable.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>AI observatories as democratic infrastructure<\/strong><\/h2>\n\n\n\n<p>Taken together, these distinctions point to something more than a functional niche. AI observatories do not merely monitor or simply fill gaps left by regulators, auditors or academics. They are doing something qualitatively different: maintaining the conditions under which algorithmic power can be made visible, contested and accountable to those it affects. This is what it means to call them democratic infrastructure. 
Like courts, a free press or public broadcasting, their value lies not in any single finding but in what they make possible: informed deliberation, meaningful accountability and the capacity of citizens to participate in decisions that shape their lives.<\/p>\n\n\n\n<p>That potential, however, is not self-fulfilling. Inclusion can become tokenistic if power imbalances remain unaddressed. Observatories risk reproducing inequalities, for instance, through algorithmic bias audits that identify racial disparities but offer no path toward reparative justice. Genuine epistemic justice requires not only diverse voices but redistributed power. AI observatories alone cannot achieve this. Whether observatories actually function as democratic infrastructure depends on how they are built, funded and governed \u2014 and on whether the conditions for genuine independence are met.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The challenge ahead<\/strong><\/h2>\n\n\n\n<p>AI observatories are not a universal cure. They face significant challenges:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Capture risks: <\/strong>Dependence on government or corporate funding may pressure observatories to soften criticism. The revolving door between industry and governance institutions can threaten independence. If corporate actors dominate, multistakeholder models can reproduce power imbalances.<\/li>\n\n\n\n<li><strong>Resource constraints: <\/strong>Effective monitoring requires significant technical capacity, data access and sustained funding. Many observatories operate on precarious budgets, limiting scope and longevity.<\/li>\n\n\n\n<li><strong>Legitimacy questions: <\/strong>Unlike elected regulators, observatories lack a direct democratic mandate.
Their authority rests on expertise and transparency, which can be contested as technocratic or elitist.<\/li>\n\n\n\n<li><strong>Limited enforcement power:<\/strong> Without regulatory authority, observatories rely on &#8220;soft power&#8221;\u2014naming and shaming, setting the agenda, providing evidence. When institutions ignore their findings, their impact is limited.<\/li>\n\n\n\n<li><strong>Geopolitical disparities: <\/strong>Most well-resourced observatories are based in the Global North, potentially perpetuating epistemic hierarchies even as they claim to challenge them.<\/li>\n<\/ul>\n\n\n\n<p>Addressing these challenges requires institutional design choices. This includes transparent governance structures, diverse funding streams, explicit principles for multistakeholder balance, South-led and South-focused initiatives and clear pathways connecting observatory findings to regulatory action.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What policymakers must do<\/strong><\/h2>\n\n\n\n<p>For AI observatories to fulfill their democratic potential, policymakers must:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Invest in genuine independence:<\/strong> Fund observatories through arm&#8217;s-length mechanisms that insulate them from political and commercial pressure.
Models exist: research councils, independent commissions, multi-donor funds.<\/li>\n\n\n\n<li><strong>Mandate data access:<\/strong> Require AI developers and deployers to provide observatories with the data and documentation necessary for meaningful oversight, with appropriate privacy safeguards.<\/li>\n\n\n\n<li><strong>Integrate findings into regulation:<\/strong> Create formal channels linking observatory research to policy-making\u2014expert testimony requirements, mandatory consultations and legislative review.<\/li>\n\n\n\n<li><strong>Support South-led initiatives:<\/strong> Prioritise funding and capacity-building for observatories in the Global South, ensuring governance reflects those most affected by data colonialism and algorithmic harm.<\/li>\n\n\n\n<li><strong>Foster transnational cooperation:<\/strong> AI systems cross borders; governance must too. Observatories should form networks enabling shared learning and coordinated responses to global platforms.<\/li>\n<\/ul>\n\n\n\n<p>Above all, policymakers must recognise that observatories are not a substitute for regulation but a complement\u2014democratic infrastructure that makes effective regulation possible.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion: Preserving space for democratic choice<\/strong><\/h2>\n\n\n\n<p>As Hannah Arendt observed, technological acceleration does not eliminate the necessity of judgment. It intensifies it. (Arendt 1971) The question is not whether AI systems will continue to grow in significance \u2014 they will. The question is whether their expansion remains subject to democratic deliberation, or whether it becomes something simply done to us, shaped by corporate strategy and technological momentum alone.<\/p>\n\n\n\n<p>AI observatories are one answer to that question. They represent a commitment to the proposition that algorithmic power should be visible, questionable and accountable to those it affects.
If we believe that democracy must be defended in the digital age, then investing in the institutions that make democratic AI governance possible is not optional. AI observatories are such institutions: incomplete and still evolving, but indispensable. The future of AI must remain a matter of collective decision. Observatories help ensure it does.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>References<\/strong><\/h2>\n\n\n\n<p>Arendt, H. (1971). <em>Thinking and Moral Considerations<\/em>. Social Research, 38(3), 417\u2013446.<\/p>\n\n\n\n<p>Bartlett, R., Morse, A., Stanton, R., &amp; Wallace, N. (2022). <em>Consumer-lending discrimination in the FinTech era<\/em>. Journal of Financial Economics, 143(1), 30\u201356. <a href=\"https:\/\/haas.berkeley.edu\/wp-content\/uploads\/Consumer-Lending-Discrimination-in-the-FinTech-Era.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/haas.berkeley.edu\/wp-content\/uploads\/Consumer-Lending-Discrimination-in-the-FinTech-Era.pdf<\/a><\/p>\n\n\n\n<p>Bogen, M., &amp; Rieke, A. (2018). <em>Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias<\/em>. Upturn. <a href=\"https:\/\/www.upturn.org\/static\/reports\/2018\/hiring-algorithms\/files\/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.upturn.org\/static\/reports\/2018\/hiring-algorithms\/files\/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf<\/a><\/p>\n\n\n\n<p>Brennan, K., Kak, A., &amp; Myers West, S. (2025). <em>Artificial power: AI Now 2025 landscape report<\/em>. AI Now Institute. <a href=\"https:\/\/ainowinstitute.org\/publications\/research\/ai-now-2025-landscape-report\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/ainowinstitute.org\/publications\/research\/ai-now-2025-landscape-report<\/a><\/p>\n\n\n\n<p>Couldry, N., &amp; Mej\u00edas, U. A.
(2019). <em>The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism<\/em>. Stanford University Press.<\/p>\n\n\n\n<p>Crawford, K. (2021). <em>Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence<\/em>. Yale University Press.<\/p>\n\n\n\n<p>Floridi, L. (2013). <em>The Ethics of Information<\/em>. Oxford University Press.<\/p>\n\n\n\n<p>Floridi, L. (2014). <em>The Fourth Revolution: How the Infosphere Is Reshaping Human Reality<\/em>. Oxford University Press.<\/p>\n\n\n\n<p>\u00cdndice Latinoamericano de Inteligencia Artificial (2024). <em>OBIA, the Brazilian AI Observatory<\/em>. <a href=\"https:\/\/indicelatam.cl\/obia-the-brazilian-ai-observatory\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/indicelatam.cl\/obia-the-brazilian-ai-observatory\/<\/a><\/p>\n\n\n\n<p>Ransbotham, S., Kiron, D., Khodabandeh, S., Iyer, S., &amp; Das, A. (2025). <em>The emerging agentic enterprise: How leaders must navigate a new age of AI<\/em>. MIT Sloan Management Review &amp; Boston Consulting Group. <a href=\"https:\/\/sloanreview.mit.edu\/projects\/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/sloanreview.mit.edu\/projects\/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai\/<\/a><\/p>\n\n\n\n<p>Rouvroy, A., &amp; Berns, T. (2013). <em>Algorithmic Governmentality and Prospects of Emancipation: Disparateness as a Precondition for Individuation through Relationships?<\/em> R\u00e9seaux, 177(1), 163\u2013196.<\/p>\n\n\n\n<p>United Nations Office of the High Commissioner for Human Rights. (2020, February 5). <em>Landmark ruling by Dutch court stops government attempts to spy on the poor \u2013 UN expert<\/em>.
<a href=\"https:\/\/www.ohchr.org\/en\/press-releases\/2020\/02\/landmark-ruling-dutch-court-stops-government-attempts-spy-poor-un-expert\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.ohchr.org\/en\/press-releases\/2020\/02\/landmark-ruling-dutch-court-stops-government-attempts-spy-poor-un-expert<\/a><\/p>\n\n\n\n<p>Whittaker, M., et al. (2018). <em>AI Now Report 2018<\/em>. AI Now Institute, New York University.<\/p>\n<div class=\"shariff shariff-align-flex-start shariff-widget-align-flex-start\"><ul class=\"shariff-buttons theme-round orientation-horizontal buttonsize-medium\"><li class=\"shariff-button linkedin shariff-nocustomcolor\" style=\"background-color:#1488bf\"><a href=\"https:\/\/www.linkedin.com\/sharing\/share-offsite\/?url=https%3A%2F%2Fwww.hiig.de%2Fen%2Fai-observatories-as-democratic-infrastructure%2F\" title=\"Share on LinkedIn\" aria-label=\"Share on LinkedIn\" role=\"button\" rel=\"noopener nofollow\" class=\"shariff-link\" style=\"; background-color:#0077b5; color:#fff\" target=\"_blank\"><span class=\"shariff-icon\" style=\"\"><svg width=\"32px\" height=\"20px\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 27 32\"><path fill=\"#0077b5\" d=\"M6.2 11.2v17.7h-5.9v-17.7h5.9zM6.6 5.7q0 1.3-0.9 2.2t-2.4 0.9h0q-1.5 0-2.4-0.9t-0.9-2.2 0.9-2.2 2.4-0.9 2.4 0.9 0.9 2.2zM27.4 18.7v10.1h-5.9v-9.5q0-1.9-0.7-2.9t-2.3-1.1q-1.1 0-1.9 0.6t-1.2 1.5q-0.2 0.5-0.2 1.4v9.9h-5.9q0-7.1 0-11.6t0-5.3l0-0.9h5.9v2.6h0q0.4-0.6 0.7-1t1-0.9 1.6-0.8 2-0.3q3 0 4.9 2t1.9 6z\"\/><\/svg><\/span><\/a><\/li><li class=\"shariff-button bluesky shariff-nocustomcolor\" style=\"background-color:#84c4ff\"><a href=\"https:\/\/bsky.app\/intent\/compose?text=Algorithms%20under%20scrunity%3A%20AI%20observatories%20as%20democratic%20infrastructure https%3A%2F%2Fwww.hiig.de%2Fen%2Fai-observatories-as-democratic-infrastructure%2F  via @hiigberlin.bsky.social\" title=\"Share on Bluesky\" aria-label=\"Share on Bluesky\" role=\"button\" rel=\"noopener nofollow\" 
class=\"shariff-link\" style=\"; background-color:#0085ff; color:#fff\" target=\"_blank\"><span class=\"shariff-icon\" style=\"\"><svg width=\"20\" height=\"20\" version=\"1.1\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 20 20\"><path class=\"st0\" d=\"M4.89,3.12c2.07,1.55,4.3,4.71,5.11,6.4.82-1.69,3.04-4.84,5.11-6.4,1.49-1.12,3.91-1.99,3.91.77,0,.55-.32,4.63-.5,5.3-.64,2.3-2.99,2.89-5.08,2.54,3.65.62,4.58,2.68,2.57,4.74-3.81,3.91-5.48-.98-5.9-2.23-.08-.23-.11-.34-.12-.25,0-.09-.04.02-.12.25-.43,1.25-2.09,6.14-5.9,2.23-2.01-2.06-1.08-4.12,2.57-4.74-2.09.36-4.44-.23-5.08-2.54-.19-.66-.5-4.74-.5-5.3,0-2.76,2.42-1.89,3.91-.77h0Z\"\/><\/svg><\/span><\/a><\/li><li class=\"shariff-button mailto shariff-nocustomcolor\" style=\"background-color:#a8a8a8\"><a href=\"mailto:?body=https%3A%2F%2Fwww.hiig.de%2Fen%2Fai-observatories-as-democratic-infrastructure%2F&subject=Algorithms%20under%20scrunity%3A%20AI%20observatories%20as%20democratic%20infrastructure\" title=\"Send by email\" aria-label=\"Send by email\" role=\"button\" rel=\"noopener nofollow\" class=\"shariff-link\" style=\"; background-color:#999; color:#fff\"><span class=\"shariff-icon\" style=\"\"><svg width=\"32px\" height=\"20px\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 32 32\"><path fill=\"#999\" d=\"M32 12.7v14.2q0 1.2-0.8 2t-2 0.9h-26.3q-1.2 0-2-0.9t-0.8-2v-14.2q0.8 0.9 1.8 1.6 6.5 4.4 8.9 6.1 1 0.8 1.6 1.2t1.7 0.9 2 0.4h0.1q0.9 0 2-0.4t1.7-0.9 1.6-1.2q3-2.2 8.9-6.1 1-0.7 1.8-1.6zM32 7.4q0 1.4-0.9 2.7t-2.2 2.2q-6.7 4.7-8.4 5.8-0.2 0.1-0.7 0.5t-1 0.7-0.9 0.6-1.1 0.5-0.9 0.2h-0.1q-0.4 0-0.9-0.2t-1.1-0.5-0.9-0.6-1-0.7-0.7-0.5q-1.6-1.1-4.7-3.2t-3.6-2.6q-1.1-0.7-2.1-2t-1-2.5q0-1.4 0.7-2.3t2.1-0.9h26.3q1.2 0 2 0.8t0.9 2z\"\/><\/svg><\/span><\/a><\/li><\/ul><\/div>","protected":false},"excerpt":{"rendered":"<p>Algorithms have a profound impact on people&#8217;s lives. 
This article explores why AI observatories are essential for democratic governance.<\/p>\n","protected":false},"author":313,"featured_media":114263,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1289,1576,1577,226,1145,224,52,221],"tags":[],"class_list":["post-114259","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-digital-society-blog","category-digital-so","category-knowledge","category-kuenstliche-intelligenz","category-policy-and-law","category-politik-und-recht","category-wissen"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>AI observatories as democratic infrastructure &#8211; Digital Society Blog<\/title>\n<meta name=\"description\" content=\"Algorithms have a profound impact on people&#039;s lives. This article explores why AI observatories are essential for democratic governance.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI observatories as democratic infrastructure &#8211; Digital Society Blog\" \/>\n<meta property=\"og:description\" content=\"Algorithms have a profound impact on people&#039;s lives. 
This article explores why AI observatories are essential for democratic governance.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/\" \/>\n<meta property=\"og:site_name\" content=\"HIIG\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-10T13:11:36+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-10T14:28:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/04\/Titelbild_Paola-\u2013-3.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1144\" \/>\n\t<meta property=\"og:image:height\" content=\"643\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Digital Society Blog\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Digital Society Blog\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"14 minutes\" \/>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"AI observatories as democratic infrastructure &#8211; Digital Society Blog","description":"Algorithms have a profound impact on people's lives. This article explores why AI observatories are essential for democratic governance.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/","og_locale":"en_US","og_type":"article","og_title":"AI observatories as democratic infrastructure &#8211; Digital Society Blog","og_description":"Algorithms have a profound impact on people's lives. 
This article explores why AI observatories are essential for democratic governance.","og_url":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/","og_site_name":"HIIG","article_published_time":"2026-04-10T13:11:36+00:00","article_modified_time":"2026-04-10T14:28:27+00:00","og_image":[{"width":1144,"height":643,"url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/04\/Titelbild_Paola-\u2013-3.png","type":"image\/png"}],"author":"Digital Society Blog","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Digital Society Blog","Est. reading time":"14 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/#article","isPartOf":{"@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/"},"author":{"name":"Digital Society Blog","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/a921ecfdfcb94cb9c718b90c3a5dedbd"},"headline":"Algorithms under scrutiny: AI observatories as democratic infrastructure","datePublished":"2026-04-10T13:11:36+00:00","dateModified":"2026-04-10T14:28:27+00:00","mainEntityOfPage":{"@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/"},"wordCount":2371,"publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"image":{"@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/04\/Titelbild_Paola-\u2013-3.png","articleSection":["Artificial Intelligence","Digital Society Blog","Digital Society Blog","Knowledge","K\u00fcnstliche Intelligenz","Policy and Law","Politik und Recht","Wissen"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/","url":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/","name":"AI observatories as democratic infrastructure &#8211;
Digital Society Blog","isPartOf":{"@id":"https:\/\/www.hiig.de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/#primaryimage"},"image":{"@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/04\/Titelbild_Paola-\u2013-3.png","datePublished":"2026-04-10T13:11:36+00:00","dateModified":"2026-04-10T14:28:27+00:00","description":"Algorithms have a profound impact on people's lives. This article explores why AI observatories are essential for democratic governance.","breadcrumb":{"@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/#primaryimage","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/04\/Titelbild_Paola-\u2013-3.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2026\/04\/Titelbild_Paola-\u2013-3.png","width":1144,"height":643,"caption":"AI observatories are not passive watchdogs, they are active democratic infrastructure. 
This article examines why they matter and what makes them distinct."},{"@type":"BreadcrumbList","@id":"https:\/\/www.hiig.de\/en\/ai-observatories-as-democratic-infrastructure\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.hiig.de\/en\/"},{"@type":"ListItem","position":2,"name":"Algorithms under scrunity: AI observatories as democratic infrastructure"}]},{"@type":"WebSite","@id":"https:\/\/www.hiig.de\/#website","url":"https:\/\/www.hiig.de\/","name":"HIIG","description":"Alexander von Humboldt Institute for Internet and Society","publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.hiig.de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.hiig.de\/#organization","name":"HIIG","url":"https:\/\/www.hiig.de\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","width":320,"height":80,"caption":"HIIG"},"image":{"@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/a921ecfdfcb94cb9c718b90c3a5dedbd","name":"Digital Society 
Blog"}]}},"_links":{"self":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/114259","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/users\/313"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/comments?post=114259"}],"version-history":[{"count":16,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/114259\/revisions"}],"predecessor-version":[{"id":114334,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/114259\/revisions\/114334"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media\/114263"}],"wp:attachment":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media?parent=114259"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/categories?post=114259"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/tags?post=114259"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}