
Algorithms under scrutiny: AI observatories as democratic infrastructure
Artificial intelligence systems now determine who receives a loan, which job applicants are interviewed and what information billions of people encounter online. Yet most citizens—and even most policymakers—have little insight into how these consequential decisions are made. AI observatories are an emerging solution: as independent institutions, they monitor AI systems, assess their societal impacts and generate evidence to inform democratic governance. They ask not only “Does this algorithm work?” but “Who does it work for? Who benefits? Who is harmed?” This article examines why AI observatories matter and what distinguishes them from existing governance mechanisms.
What are AI observatories?
Imagine a public health observatory, but for algorithms. Just as epidemiologists track disease outbreaks to protect population health, AI observatories monitor algorithmic systems to safeguard democratic values and human rights. They are independent, multistakeholder institutions, typically involving researchers, civil society organisations, policymakers and affected communities. They systematically observe, document and assess artificial intelligence as it operates in the real world. AI observatories perform several critical functions:
- Monitoring and mapping: They identify where AI systems are deployed, by whom, and for what purposes. In doing so, they create a public record that is often unavailable from governments or corporations.
- Impact assessment: They evaluate how AI systems affect different populations, with particular attention to vulnerable groups and structural inequalities.
- Knowledge production: They generate independent, evidence-based research that challenges both corporate narratives and regulatory blind spots.
- Public engagement: They translate technical findings for diverse audiences, from policymakers to affected communities, fostering informed democratic discourse.
- Epistemic justice: They contest the monopoly of technosolutionist frameworks by incorporating multiple ways of knowing, for example by including voices from the Global South and Indigenous epistemologies. Technosolutionism is the belief that technological innovation is the best or only solution to social, political and economic problems.
AI observatories in practice: A global landscape
AI observatories have emerged worldwide across diverse institutional contexts. A few examples: The OECD.AI Policy Observatory tracks AI policies and national strategies across 69 countries, serving as a reference point for comparative governance. The European Commission’s AI Watch monitors market developments, technological capabilities, and policy implementations across the EU, informing the AI Act’s evolution. UNESCO’s International Research Centre on Artificial Intelligence (IRCAI), hosted in Slovenia, focuses on AI for sustainable development, connecting research communities across continents. The Global Partnership on AI (GPAI), established by the G7, brings together 29 member countries to support responsible AI development through working groups on key themes.
Regional and national observatories offer additional models. Canada’s Observatoire international sur les impacts sociétaux de l’IA et du numérique (OBVIA), for example, exemplifies university-led, multi-institution collaboration addressing social impacts. Brazil’s AI Observatory (OBIA), launched in 2024 as part of the Brazilian AI Plan (2024–2028), explicitly prioritises epistemic diversity and international South-South cooperation.
Why AI observatories matter: The governance gap
Artificial intelligence systems increasingly shape life chances without democratic authorisation. Consider three illustrative cases:
Case 1: Algorithmic welfare allocation
In several European countries, algorithmic systems now determine eligibility for unemployment benefits or disability support. When the Dutch government deployed SyRI (System Risk Indication) to detect welfare fraud, it disproportionately targeted low-income neighbourhoods and migrant communities. A court eventually ruled the system violated human rights, but only after years of discriminatory impact. (United Nations Office of the High Commissioner for Human Rights 2020) An AI observatory could have identified these disparities earlier, documented their patterns and equipped civil society with evidence for intervention.
Case 2: Hiring algorithms and structural bias
Major corporations now use AI to screen job applications, with little transparency about evaluation criteria. Research by Upturn and other organisations has documented how these systems systematically disadvantage women, racial minorities and people with disabilities. (Bogen & Rieke 2018) This disadvantage is often by design, as AI systems optimise for historically biased hiring patterns. (Mosene 2024) Observatories can audit these systems independently, publish findings and pressure for accountability in ways isolated complaints cannot.
Case 3: Credit scoring and algorithmic redlining
Fintech companies increasingly use alternative data—social media activity, online behaviour—to assess creditworthiness. Studies suggest these systems reproduce historical patterns of redlining, denying loans to communities of colour even when traditional credit scores are equivalent. (Bartlett et al. 2022) Without systematic monitoring, this new form of discrimination operates invisibly. Observatories make it legible and thus contestable.
In each case, the problem is not merely technical failure but a democratic deficit. Decisions that profoundly affect people’s lives are made by systems that are proprietary, opaque and unaccountable. Traditional governance mechanisms prove inadequate. Regulators often lack technical capacity. Courts address individual cases but miss systemic patterns. Corporate self-regulation is, predictably, self-serving.
Informational asymmetry as power asymmetry
This governance gap reflects a deeper structural problem: informational asymmetry. As Antoinette Rouvroy and Thomas Berns argue in their work on “algorithmic governmentality”, we inhabit an era where governance operates through data-driven prediction and behavioral modulation rather than explicit political debate. (Rouvroy & Berns 2013) Big Tech corporations monopolise not only the algorithms but the data, infrastructure and expertise required to understand them.
Recent reports underscore this imbalance. The MIT Sloan Management Review’s The Emerging Agentic Enterprise (2025) finds that agentic AI is being deployed at scale faster than organisations are developing governance structures to oversee it. (Ransbotham et al. 2025) The AI Now Institute’s Artificial Power Landscape Report 2025 further documents how external audits, where they exist at all, are typically commissioned and controlled by the very companies under scrutiny, structurally precluding the independence they purport to offer. (Brennan et al. 2025)
This asymmetry is most acute in the Global South, where populations become sites of data extraction without corresponding governance capacity or benefit-sharing. Nick Couldry and Ulises Mejías describe this dynamic as “data colonialism”: the continuation of historical patterns of resource appropriation through new technological means. (Couldry & Mejías 2019) AI systems trained on Global South populations are governed by Global North institutions, with minimal accountability to affected communities. AI observatories represent a partial remedy. They build public-interest technical capacity, generate independent evidence and create forums where asymmetry can be challenged, if not fully overcome.
What AI observatories offer that others cannot
To understand what AI observatories contribute, it helps to clarify what they are not and what they offer as alternatives to existing governance mechanisms.
Observatories vs. regulatory agencies
Regulators like the European Data Protection Authorities possess enforcement powers but are often under-resourced, politically constrained and reactive rather than anticipatory. Observatories lack legal authority but gain agility, independence and capacity for proactive systemic analysis. They function as the “eyes” of democratic governance, identifying problems regulators can then address.
Observatories vs. corporate audits
Companies increasingly commission internal or contracted audits to demonstrate compliance. Yet as Meredith Whittaker and colleagues at the AI Now Institute have argued, these audits are typically designed to minimise liability rather than maximise accountability. (Whittaker et al. 2018) Observatories, by contrast, answer to the public interest rather than shareholder value, and their findings cannot be suppressed or selectively disclosed.
Observatories vs. academic research
Universities produce crucial AI scholarship, but academic incentives, such as publication in prestigious journals and theoretical innovation, do not always align with timely, policy-relevant intervention. Observatories bridge research and action, translating findings into formats accessible to policymakers, journalists and civil society.
Observatories vs. civil society advocacy
Advocacy organisations like Access Now, Algorithmic Justice League, and Digital Rights Foundation perform essential work holding power accountable. AI observatories complement advocacy by providing the empirical infrastructure that makes campaigns credible and durable.
Epistemic and cognitive justice
One of the most important, yet often overlooked, contributions of AI observatories is epistemic: they expand who can produce, interpret and contest knowledge about artificial intelligence. Dominant AI discourse is overwhelmingly technosolutionist, presenting algorithmic systems as inevitable, neutral and beneficial by default. This framing obscures power relations, forecloses alternatives, and marginalises dissenting voices, particularly those from communities most affected by AI harms.
AI observatories create space for epistemic diversity. By incorporating critical perspectives, like feminist technoscience, decolonial theory, disability justice or environmental humanities, they challenge narrow conceptions of what counts as “expertise”. By engaging with local and Indigenous knowledge systems, they recognise that understanding technology’s social impacts requires more than computer science. Brazil’s OBIA, for instance, explicitly seeks to integrate diverse stakeholders in assessing AI impacts, moving beyond technocratic metrics to consider cultural, relational and environmental dimensions often ignored in Global North frameworks. (Índice Latinoamericano de Inteligencia Artificial 2024) This commitment to epistemic justice is not merely procedural. It changes what questions are asked, which harms are recognised and what futures are imaginable.
AI observatories as democratic infrastructure
Taken together, these distinctions point to something more than a functional niche. AI observatories do not merely monitor or simply fill gaps left by regulators, auditors or academics. They are doing something qualitatively different: maintaining the conditions under which algorithmic power can be made visible, contested and accountable to those it affects. This is what it means to call them democratic infrastructure. Like courts, a free press or public broadcasting, their value lies not in any single finding but in what they make possible: informed deliberation, meaningful accountability and the capacity of citizens to participate in decisions that shape their lives.
That potential, however, is not self-fulfilling. Inclusion can become tokenistic if power imbalances remain unaddressed. Observatories risk reproducing inequalities, for instance, through algorithmic bias audits that identify racial disparities but offer no path toward reparative justice. Genuine epistemic justice requires not only diverse voices but redistributed power. AI observatories alone cannot achieve this. Whether observatories actually function as democratic infrastructure depends on how they are built, funded and governed — and on whether the conditions for genuine independence are met.
The challenge ahead
AI observatories are not a universal cure. They face significant challenges:
- Capture risks: Dependence on government or corporate funding may pressure observatories to soften criticism. The revolving door between industry and governance institutions can threaten independence. If corporate actors dominate, multistakeholder models can reproduce power imbalances.
- Resource constraints: Effective monitoring requires significant technical capacity, data access and sustained funding. Many observatories operate on precarious budgets, limiting scope and longevity.
- Legitimacy questions: Unlike elected regulators, observatories lack direct democratic mandate. Their authority rests on expertise and transparency, which can be contested as technocratic or elitist.
- Limited enforcement power: Without regulatory authority, observatories rely on ‘soft power’—naming and shaming, setting the agenda, providing evidence. When institutions ignore their findings, their impact is limited.
- Geopolitical disparities: Most well-resourced observatories are based in the Global North, potentially perpetuating epistemic hierarchies even as they claim to challenge them.
Addressing these challenges requires institutional design choices. This includes transparent governance structures, diverse funding streams, explicit principles for multistakeholder balance, South-led and South-focused initiatives and clear pathways connecting observatory findings to regulatory action.
What policymakers must do
For AI observatories to fulfil their democratic potential, policymakers must:
- Invest in genuine independence: Fund observatories through arm’s-length mechanisms that insulate them from political and commercial pressure. Models exist: research councils, independent commissions, multi-donor funds.
- Mandate data access: Require AI developers and deployers to provide observatories with the data and documentation necessary for meaningful oversight, with appropriate privacy safeguards.
- Integrate findings into regulation: Create formal channels linking observatory research to policy-making—expert testimony requirements, mandatory consultations, and legislative review.
- Support South-led initiatives: Prioritise funding and capacity-building for observatories in the Global South, ensuring governance reflects those most affected by data colonialism and algorithmic harm.
- Foster transnational cooperation: AI systems cross borders; governance must too. Observatories should form networks enabling shared learning and coordinated responses to global platforms.
Above all, policymakers must recognise that observatories are not a substitute for regulation but a complement—democratic infrastructure that makes effective regulation possible.
Conclusion: Preserving space for democratic choice
As Hannah Arendt observed, technological acceleration does not eliminate the necessity of judgment. It intensifies it. (Arendt 1971) The question is not whether AI systems will continue to grow in significance — they will. The question is whether their expansion remains subject to democratic deliberation, or whether it becomes something simply done to us, shaped by corporate strategy and technological momentum alone.
AI observatories are one answer to that question. They represent a commitment to the proposition that algorithmic power should be visible, questionable and accountable to those it affects. If we believe that democracy must be defended in the digital age, then investing in the institutions that make democratic AI governance possible is not optional. AI observatories are such institutions: incomplete and still evolving, but indispensable. The future of AI must remain a matter of collective decision. Observatories help ensure it does.
References
Arendt, H. (1971). Thinking and Moral Considerations. Social Research, 38(3), 417–446.
Bartlett, R., Morse, A., Stanton, R., & Wallace, N. (2022). Consumer-lending discrimination in the FinTech era. Journal of Financial Economics, 143(1), 30–56. https://haas.berkeley.edu/wp-content/uploads/Consumer-Lending-Discrimination-in-the-FinTech-Era.pdf
Bogen, M., & Rieke, A. (2018). Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. Upturn. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20–%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf
Brennan, K., Kak, A., & Myers West, S. (2025). Artificial power: AI Now 2025 landscape report. AI Now Institute. https://ainowinstitute.org/publications/research/ai-now-2025-landscape-report
Couldry, N., & Mejías, U. A. (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Floridi, L. (2013). The Ethics of Information. Oxford University Press.
Floridi, L. (2014). The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford University Press.
Índice Latinoamericano de Inteligencia Artificial (2024). OBIA, the Brazilian AI Observatory. https://indicelatam.cl/obia-the-brazilian-ai-observatory/
Ransbotham, S., Kiron, D., Khodabandeh, S., Iyer, S., & Das, A. (2025). The emerging agentic enterprise: How leaders must navigate a new age of AI. MIT Sloan Management Review & Boston Consulting Group. https://sloanreview.mit.edu/projects/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai/
Rouvroy, A., & Berns, T. (2013). Algorithmic Governmentality and Prospects of Emancipation: Disparateness as a Precondition for Individuation through Relationships?. Réseaux, 177(1), 163–196.
United Nations Office of the High Commissioner for Human Rights. (2020, February 5). Landmark ruling by Dutch court stops government attempts to spy on the poor – UN expert. https://www.ohchr.org/en/press-releases/2020/02/landmark-ruling-dutch-court-stops-government-attempts-spy-poor-un-expert
Whittaker, M., et al. (2018). AI Now Report 2018. AI Now Institute, New York University.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.