
Human in the Loop?

Automated decisions and AI systems are becoming increasingly important in our digital world. A well-known example is the granting of loans, where banks use technological systems to automatically assess the creditworthiness of applicants. Content moderation on digital platforms such as Instagram, YouTube and TikTok is also increasingly automated: algorithms evaluate users' posts, images and videos and classify them as acceptable or inappropriate, depending on their content. The greater the responsibility, the more critical the role of automated systems becomes - in the cockpit of an aeroplane, for example, technologies such as the autopilot or warning mechanisms help pilots make safety-related decisions.

Such (partially) automated decisions are, however, often not error-free. For example, they can contain and reproduce unintended biases from their training data. In addition, machines lack human contextual understanding, so individual machine-based decisions often fail to do justice to people's individual situations. For this reason, there have long been calls for humans to be meaningfully involved in such processes, playing a specific role in monitoring and improving technological systems.

In this context, the research project Human in the Loop? Autonomy and Automation in Socio-technical Systems investigates to what extent the active and targeted involvement of humans can influence and change (partially) automated decision-making processes. The central questions are: How can meaningful human-machine interaction be designed in complex application contexts? What is the role of human judgement in the verification and quality assurance of automated processes? How can we ensure that the decision-making process is not only legally compliant, but also transparent and traceable? And what are the requirements for the design of human-machine interaction when, in addition to technical systems and human decision-makers, the specific context of use and the social and organisational environment are also taken into account?

Research focus and transfer

Four case studies

Analysis of human participation in automated decision-making processes through field analysis, workshops and dialogue formats in four selected scenarios.

Taxonomy of influencing factors

Examination of the factors that influence human decisions and identification of the errors, vulnerabilities and strengths of all technical systems and people involved in decision-making processes.

Recommendations for action

Development of practical solutions to optimise collaboration between humans and machines and improve the implementation and interpretation of existing legislation and regulations. This includes, for example, the General Data Protection Regulation (GDPR), the Artificial Intelligence Act (AI Act) and the Digital Services Act (DSA).


Automated content moderation: power, law and the role of human decisions

Governance generally refers to the control and setting of rules in social spheres, including the digital space. In social networks, this includes a variety of tasks: Platform governance describes how providers such as Meta, TikTok or YouTube organise their structures, content and interactions and create framework conditions for social exchange. A central sub-area is content governance, which refers specifically to the handling of user-generated content. Content moderation plays a key role here: it involves the control, evaluation and, if necessary, removal of contributions in order to comply with internal platform guidelines and legal requirements. The goal of effective content governance and moderation is to enable respectful and safe online discourse - for example, by identifying and mitigating hate speech, abuse or disinformation.

This moderation process is increasingly taking place in interaction between algorithmic systems and human moderators. Automated processes pre-filter content based on predefined criteria, identify problematic content or assign it to specific categories. However, humans often still make the final decision on the acceptability of content. 
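As an illustration of this division of labour, the workflow can be sketched as a simple pipeline in which an automated classifier pre-filters posts and routes uncertain cases to a human moderator. This is a hypothetical sketch, not the tooling of any real platform: the classifier, the banned-term criteria and the thresholds are all invented for the example.

```python
# Minimal sketch of a human-in-the-loop moderation pipeline.
# The classifier, criteria and thresholds are hypothetical, not
# the systems used by any real platform.

def automated_prefilter(post: str) -> float:
    """Stand-in for a trained classifier; returns a 'problematic' score in [0, 1]."""
    banned_terms = {"spam", "abuse"}  # toy criteria for the sketch
    hits = sum(term in post.lower() for term in banned_terms)
    return min(1.0, hits / len(banned_terms))

def route(post: str, approve_below: float = 0.2, remove_above: float = 0.8):
    """Auto-approve clear cases, auto-flag clear violations, and send
    everything in between to a human moderator for the final decision."""
    score = automated_prefilter(post)
    if score < approve_below:
        return ("approved", score)
    if score > remove_above:
        return ("flagged", score)
    return ("human_review", score)  # the human makes the final call

queue = [route(p) for p in ["nice photo!", "buy spam abuse now", "borderline spam?"]]
```

The thresholds encode the trade-off the case study examines: the wider the middle band, the more cases reach a human reviewer.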

This case study models the framework and success factors of this complex interaction. We analyse the advantages and disadvantages of automated and human moderation decisions. In collaboration with a group of experts, we also develop a 'code of conduct' that formulates normative and ethical guidelines for the design and regulation of moderation processes. Particular attention is paid to practical recommendations and to the tension between the current practices of large US platforms and the requirements of European regulatory approaches such as the Digital Services Act.

Field analysis

January to April 2025

Dialogue formats with experts and practitioners

April to June 2025

Workshops to develop the Code of Conduct

October 7th 2024
Opening event: Zukunft der Content Moderation durch effektive Mensch-Maschine-Kollaboration

Recommendations for action for automated content moderation

August 2025

Between autopilot and supervision: human control in automated aviation

In modern aviation, automated systems perform many safety-related tasks, from take-off to landing. However, despite all the technological advances, humans remain indispensable in the cockpit. This case study analyses how decision-making processes are designed to maintain human control even in highly automated environments - and whether this actually works in practice. Key questions arise: How is responsibility distributed? How does trust develop? And what is the role of human judgement in situations where technology, but also humans, reach their limits?

Literature research

March to July 2025

Field analysis

July to December 2025

Dialogue formats with experts and practitioners

Starting from August 2025

Recommendations for action

Starting from March 2026

Credit granting: between automation and ethical challenges

The (partially) automated granting of consumer loans by credit institutions brings efficiency benefits, but also raises ethical and trust issues. In this case study, we analyse the factors that influence human decision-makers in the lending process. In doing so, we identify both general framework conditions - such as the economic orientation or risk appetite of a credit institution - and specific aspects of the interaction between humans and machines, such as the understanding of the roles of the people involved (humans-in-the-loop) and the technical design of the systems. Our research questions include the impact of automated credit decisions on consumers' trust in their credit institutions. We are also investigating the importance of the principle of non-discrimination and other legal frameworks in this context. 

We are particularly interested in the responsibility of the humans-in-the-loop - the human decision-makers in the lending process - for the final credit decision. We look at the extent to which they can influence decisions that have been prepared automatically: for example, whether they can review them in full, may only make changes, or are bound by predefined review criteria. The effectiveness of these options depends heavily on the specific institutional and technical context of the lending process.
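The three degrees of influence mentioned above can be made concrete with a small sketch in which the human reviewer's role determines which parts of a machine-prepared decision they may change. All names here are hypothetical illustrations, not drawn from any real lending system.

```python
# Illustrative sketch of graded human review powers over an automatically
# prepared credit decision. Role names and fields are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ReviewPower(Enum):
    FULL_REVIEW = auto()     # human may re-decide outcome and terms
    AMEND_ONLY = auto()      # human may adjust terms, not the outcome
    CRITERIA_BOUND = auto()  # human may only confirm predefined criteria

@dataclass
class CreditDecision:
    approved: bool
    amount: int

def human_review(machine: CreditDecision, power: ReviewPower,
                 override_approved: Optional[bool] = None,
                 new_amount: Optional[int] = None) -> CreditDecision:
    """Apply a human decision within the limits of the assigned role."""
    if power is ReviewPower.FULL_REVIEW:
        return CreditDecision(
            approved=machine.approved if override_approved is None else override_approved,
            amount=machine.amount if new_amount is None else new_amount,
        )
    if power is ReviewPower.AMEND_ONLY:
        # the outcome stands; only the terms may change
        return CreditDecision(approved=machine.approved,
                              amount=machine.amount if new_amount is None else new_amount)
    # CRITERIA_BOUND: the human can confirm but not alter the decision
    return machine
```

The sketch makes visible why the case study treats these options separately: under `CRITERIA_BOUND`, human involvement exists formally but cannot change the result.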

Literature research

October 2023 to February 2024

Field analysis

May to September 2024

Dialogue formats with experts and practitioners

September to December 2024

April 10th 2024
Kick-off event: Human in the Loop: Kreditvergabe im Fokus

May 29th 2024
Digitaler Salon: Damage Control

January 20th 2025
Closing event: Human in the Loop: Human and machine in lending decisions 

"When automation – accelerated by artificial intelligence – creates risks, it is often pointed out that a human must ultimately be involved and make the final decision. But under what conditions does this 'human in the loop' really make a difference? That depends on many factors: their qualifications, the ability to influence the machine processes, liability regulations and much more. In the project that is now starting, we want to analyse these conditions in different areas of society. The results should help to enable the use of AI that is orientated towards rights and values."

Wolfgang Schulz

"We are fascinated by the question of how humans and AI systems interact in decision-making processes. What can machines control for us? When do humans have to make decisions? These are topics that are becoming increasingly relevant and help us to contribute to redefining the role of people in digital times. Humans are often seen as a panacea for the problems and sources of error in automated decision-making. However, it is often unclear exactly how such integration should work. With our case studies in the Human in the loop? research project, we are looking for solutions to this problem that also hold up in practice."

Matthias C. Kettemann

"In many areas, the supervision and final decision in the interaction between humans and AI systems and algorithms should remain with humans. We are asking ourselves how this interaction needs to be organised in order for this to succeed and what 'good' decision-making systems look like. Because in our society, the interaction between humans and machines plays a role in more and more decisions. In the Human in the loop? research project, we are asking ourselves how the interaction between humans and AI systems must be designed so that they can continue to safeguard democratic values and civil rights in the future."

Theresa Züger

"The integration of AI into decision-making processes raises not only organisational, technical and legal questions, but also social ones: What do we mean by a 'good decision' and who defines the criteria? It is often assumed that human intervention will automatically lead to better results. However, human decisions are not always rational, but embedded in a socio-cultural context and characterised by power relations. This is also true of AI technologies. We therefore need to look at the decision-making processes themselves: How are they organised, communicated and legitimised? Only if these are open, comprehensible and pluralistic will the outcome be democratically viable - whether it comes from a human, a machine or both."

Maurice Stenzel

Former employees

  • Lara Kauter
    Former student assistant: Human in the loop?

Other publications

Züger, T., Mahlow, P., Pothmann, D., & Mosene, K. (2025). Human in the Loop im Feld Kreditvergabe. Praxisbericht für den Sektor Finanzdienstleistung. HIIG Impact Publication Series.

Mahlow, P., Züger, T., & Kauter, L. (2024). KI unter Aufsicht: Brauchen wir ‘Humans in the Loop’ in Automatisierungsprozessen? Digital Society Blog.

Lectures and presentations

Recht und Ethik der Mensch-Maschine-Interaktion
Workshop: Zukunft der Content Moderation durch effektive Mensch-Maschine-Kollaboration. Humboldt Institut für Internet und Gesellschaft, Berlin, Germany: 07.10.2024

Matthias C. Kettemann

Organisation of events

Codes of Conduct Expert*innenrunde: Human in the Loop
02.04.2025. Humboldt Institute for Internet and Society, Berlin, Germany (International)

Daniel Pothmann, Sarah Spitz, Katharina Mosene

Human in the Loop: Mensch und Maschine in der Kreditvergabe
20.01.2025. Humboldt Institut für Internet und Gesellschaft, Berlin, Germany (National)

Philipp Mahlow, Lara Kauter, Daniel Pothmann, Sarah Spitz, Katharina Mosene, Matthias C. Kettemann, Theresa Züger

Human in the Loop: Content Moderation
Zukunft der Content Moderation durch effektive Mensch-Maschine-Kollaboration. 07.10.2024. Humboldt Institut für Internet und Gesellschaft, Berlin, Germany (National)

Philipp Mahlow, Ann-Kathrin Watolla, Lara Kauter, Daniel Pothmann, Sarah Spitz, Katharina Mosene, Matthias C. Kettemann, Wolfgang Schulz, Theresa Züger

Human in the Loop: Kreditvergabe im Fokus
10.04.2024. Humboldt Institut für Internet und Gesellschaft, Berlin, Germany (National)

Philipp Mahlow, Lara Kauter, Daniel Pothmann, Sarah Spitz, Vincent Hofmann, Katharina Mosene, Matthias C. Kettemann, Wolfgang Schulz, Theresa Züger

Funded by


Duration: October 2023 to September 2027
Funding: Stiftung Mercator


Contact

Sarah Spitz

Head of Dialogue & Knowledge Transfer | Project Coordinator Human in the Loop?

AI & Society Lab

The AI & Society Lab is a research group at HIIG. It functions as an interface between research, industry and civil society.
