
AI & Society Lab
The AI & Society Lab is an interdisciplinary interface and experimental space within the Technology, Power and Values research focus. It brings together academic insights on artificial intelligence (AI) and puts them into practice, developing new formats that address social, political and cultural issues. Our work centres on the question of how AI systems can be designed, deployed and regulated in ways that align with societal values, democratic principles and human rights standards.

The Lab understands AI not as a purely technical innovation, but as a socio-technical phenomenon: technologies are embedded in social contexts, designed by humans, and continuously interact with human action. AI systems therefore do not emerge independently of society; rather, they are shaped by values, norms and political frameworks, and in turn reflect them.

Based on this understanding, the AI & Society Lab positions itself as a thought leader and driver of inclusive, human rights-compliant and sustainable AI development. It actively promotes dialogue between science and society. Workshops with various interest groups, public events and media formats are used to discuss, test and further develop scientific findings.
Explainability of AI
Understanding automated decisions
AI systems often lack transparency. Different approaches to explainability demonstrate how such systems function and the assumptions on which they are based. This enables us to critically examine AI and make informed decisions in politics, public administration and society.
AI, Sustainability and the Public Interest
Benefits for society
The use of artificial intelligence raises questions about its long-term social, ecological and institutional impacts. Public interest–oriented AI means that the development and deployment of AI systems are not primarily driven by profit maximisation or private interests, but by benefits for society as a whole.
AI-Compass
This card game offers a realistic insight into the use of active AI systems in society. Developed using a transdisciplinary approach with citizens and experts, it provides an accessible introduction to artificial intelligence and its societal implications.
The AI Compass is an outcome of the research project Artificial intelligence, explained in human terms.
Experiment: Facial Recognition
This open-source application uses a playful approach to demonstrate how facial recognition software works. It aims to make basic AI knowledge understandable and accessible, while highlighting the capabilities and limitations of such systems.
The experiment was developed as part of the research project Artificial intelligence, explained in human terms.
Workshop "Public Interest AI"
Roundtable "Human in the Loop: Kreditvergabe im Fokus" (Lending in Focus)
Public Interest AI Interface and Network
What constitutes public interest–oriented AI is both complex and socially relevant. Researchers and a network of diverse societal actors engage in a shared dialogue to clarify key concepts and distinctions. This interface presents the principles developed through this process, inviting further discussion and knowledge exchange.
BACKGROUND
The interface, the project map shown below, and the two AI prototypes are key results of the Public Interest AI research project.
Public Interest AI Project Map
This interactive map provides an overview of AI initiatives worldwide that are oriented towards the public interest. It illustrates how these projects function, how they present their work, and how they perceive themselves, while making information freely accessible to the public.
Simba Text Assistant
This AI prototype comprises two applications designed to help people understand German-language online texts. One application is a web app that selectively simplifies users' own texts, while the other is a browser extension that automatically summarises online content.
Claimspotting
This AI prototype is a monitoring application designed to support fact-checkers in reviewing content on Telegram. The words "claim" and "spotting" refer to the identification of statements that potentially contain misinformation.
Feels Fair?
These zines for teachers, students and professionals working with and on AI explore key aspects of fairness and artificial intelligence step by step. Across four chapters, we follow Techie, a data scientist, on their journey towards building fairer systems. The zines explain how AI models are developed and trained and examine which distortions, known as biases, can influence AI-driven decisions.
BACKGROUND
The zines are outcomes of the research project Public Interest AI.
Sustainable support of public interest AI
We have developed 16 recommendations aimed at fostering sustainable support for AI in the public interest. This work was commissioned by the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV) as part of the Civic Coding initiative, in collaboration with the Federal Ministry of Labour and Social Affairs (BMAS) and the Federal Ministry for Family Affairs, Senior Citizens, Women and Youth (BMFSFJ).
BACKGROUND
The policy paper is based on empirical findings from the study "Civic Coding – Grundlagen und empirische Einblicke zur Unterstützung gemeinwohlorientierter KI" (Civic Coding – Foundations and empirical insights into supporting public interest–oriented AI).
Diversity as a Future Factor
This position paper by the AI and Women* roundtable presents ideas and demands for more diverse AI development, with a particular focus on increasing the participation of women* in AI design and development processes. It is based on contributions and discussions involving 20 representatives from academia, industry, politics and civil society.
BACKGROUND
The roundtable took place on 13 October 2020 and was hosted by the Alexander von Humboldt Institute for Internet and Society, together with the Representation of the European Commission in Germany.
