08 December 2025 | doi: 10.5281/zenodo.17857675

The Human in the Loop in automated credit lending – Human expertise for greater fairness

Not every credit decision can be left to machines. Although banks use automated assessment systems to save time, real-life situations are often too complex for purely algorithmic models. This is precisely where human expertise becomes essential. In our report, ‘Human in the Loop in Credit Decision-Making’, we demonstrate how front-desk staff, risk analysts, and external agencies collaborate with automated models to evaluate creditworthiness. Our analysis highlights why human judgement is indispensable in credit lending, particularly in ambiguous cases. The question is: how can human and machine decision-making interact to make credit assessments fairer, more transparent and easier to understand?

In the digital age, automated processes that make decisions with or without human intervention are becoming increasingly common. These processes are based on algorithms, artificial intelligence (AI) or rule-based systems, and are used in areas such as healthcare and finance, including for loan approvals. The first case study of our research project ‘Human in the Loop? Autonomy and Automation in Socio-Technical Systems’ examines precisely this area. How do automated processes and human actors collaborate in the context of lending decisions? Who is responsible for oversight? Who ensures the quality of decisions?

The aim of lending is to evaluate applications efficiently and fairly. First, algorithms analyse data such as income, credit history, existing debts and repayment behaviour. This initial check is typically performed by third-party providers, such as credit bureaus. The resulting credit scores are then fed into the bank’s internal traffic-light model, which calculates the credit default risk. If the risk falls within a predefined range, the application is approved or rejected immediately. In unclear cases, risk analysts review the financial data and, if necessary, override the automated recommendation.
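To make this pipeline more concrete, the following minimal Python sketch shows how such a traffic-light model might classify an application. All field names, thresholds and the toy risk formula are our own illustrative assumptions, not the actual logic of any bank or credit bureau described in the report.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real banks use proprietary cut-offs.
APPROVE_BELOW = 0.05  # estimated default risk below this: automatic approval
REJECT_ABOVE = 0.20   # estimated default risk above this: automatic rejection

@dataclass
class Application:
    bureau_score: float   # external credit score, normalised to 0..1 here
    income: float         # monthly net income
    existing_debt: float  # monthly debt obligations

def estimate_default_risk(app: Application) -> float:
    """Toy risk estimate combining the bureau score with a debt-to-income ratio."""
    debt_ratio = app.existing_debt / max(app.income, 1.0)
    # A lower bureau score and a higher debt ratio both raise the estimated risk.
    return min(1.0, 0.5 * (1.0 - app.bureau_score) + 0.5 * debt_ratio)

def traffic_light(app: Application) -> str:
    """Map estimated default risk to the green/yellow/red signal described above."""
    risk = estimate_default_risk(app)
    if risk < APPROVE_BELOW:
        return "green"   # approve immediately
    if risk > REJECT_ABOVE:
        return "red"     # reject immediately
    return "yellow"      # ambiguous case: forward to a risk analyst

# Example: a solid score but noticeable existing debt lands in the yellow zone.
print(traffic_light(Application(bureau_score=0.95, income=3000.0, existing_debt=450.0)))
```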

Humans and machines working together

This interaction between humans and machines can bring considerable advantages. Automated systems can process large amounts of data quickly and consistently, while human decision-making allows for flexibility and the consideration of individual circumstances. At the same time, automation reduces emotional or subjective influences, which can lead to more objective lending decisions. Humans, in turn, can correct misjudgements by considering aspects that algorithms overlook, such as sudden changes in income due to parental benefits, or alternative collateral, such as property. This prevents overly strict system rules from leading to unjustified rejections. However, for this interaction to lead to fair and well-founded decisions, the algorithms used must be critically examined, distortions must be identified, and human assessments must be used in a targeted manner. Only then can the system remain both economically efficient and socially just.

The Human in the Loop project

The key question of our project ‘Human in the Loop? Autonomy and Automation in Socio-Technical Systems’, based at the Alexander von Humboldt Institute for Internet and Society (HIIG), is: how can humans and machines collaborate effectively to exploit the advantages of automation without losing important human skills and values? Funded by the Mercator Foundation, we are examining this topic through various case studies. In this blog post, we present key findings from a practical report on credit lending, analysing how lending decisions are made in practice, from the initial consultation to risk assessment, and examining the importance of human expertise and automation in this environment.

It is important to note that the procedures currently in widespread use are predominantly based on rigid if-then rule systems. Modern, adaptive AI solutions, by contrast, are not yet widely used in creditworthiness assessments. Instead, banks rely on deterministic systems supplemented by human experience and expertise. The human contribution remains indispensable, especially in special cases where automated processes reach their limits.

Humans in the Loop: Front-desk staff and risk analysts

The concept of a ‘human in the loop’, whereby a single person monitors and controls an automated system, does not reflect the reality of lending. Within a bank, several people are usually involved at different stages of the decision-making process. They often not only passively monitor the results of automated systems, but also actively intervene in the decision-making process. A key finding of the project is that human actors perform a variety of functions in lending; two stand out in particular:

1. Front-desk staff as the first point of contact

Front-desk staff are the first point of contact for customers. They accept loan applications and guide customers through the application process. Their responsibilities extend well beyond merely recording data: they advise and support applicants, help them avoid input errors and, if necessary, forward applications to risk analysts. In practice, creditworthiness checks often use a traffic-light system: green means a positive credit decision and red means rejection. If the signal is yellow, indicating an unclear recommendation, the case is forwarded to the risk analysts. Front-desk employees have access to credit scores and other relevant data, but they have no decision-making authority in cases where the data is clear. Their role is therefore primarily advisory and coordinating, and they only make decisions in exceptional cases involving individual special solutions.

The following example from our interviews illustrates this:

“I’ve had this happen: parents on parental leave. Looking back, I’d never have thought that we could offer them a mortgage. But then I spoke to a caseworker who said, ‘That’s understandable. Just process it manually.’ As long as a human factor is involved, decisions outside the standard are possible.”
— René Stephan, Business Customer Advisor

2. Risk analysts as experts in credit assessment

Risk analysts take over the review of loan applications when the automated system issues a yellow signal, i.e. an ambiguous recommendation. They review each case individually and make the final decision on whether to grant the loan. These experts have in-depth knowledge of financial data and often many years of experience in evaluating loan applications. This enables them to identify deviations from standardised assessments and, if necessary, correct them manually.

“Of course we use a rating system to assess creditworthiness. It’s required by regulation. But at the end of the day, it’s humans who make the decision.”
— Credit Risk Management Expert

Risk analysts play a key role in preventing bad decisions by ensuring that individual circumstances, which standardised systems cannot properly assess, are taken into account in the final decision.
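How such a manual correction might be recorded is sketched below, again in Python. The record structure and function names are hypothetical; they serve only to illustrate the idea that a human override of a yellow case should carry a documented justification, so that the deviation from the standardised assessment remains traceable afterwards.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical record structure; real systems log far more context
# and are subject to regulatory audit requirements.
@dataclass
class CreditDecision:
    applicant_id: str
    system_signal: str                   # "green", "yellow" or "red"
    final_outcome: Optional[str] = None  # "approved" or "rejected"
    decided_by: Optional[str] = None
    justification: Optional[str] = None
    decided_at: Optional[datetime] = None

def analyst_override(decision: CreditDecision, analyst: str,
                     outcome: str, justification: str) -> CreditDecision:
    """Record a human decision on an ambiguous ("yellow") case.

    Requiring a written justification keeps the override transparent
    and auditable after the fact.
    """
    if decision.system_signal != "yellow":
        raise ValueError("Clear cases are decided by the system; only yellow cases are escalated.")
    decision.final_outcome = outcome
    decision.decided_by = analyst
    decision.justification = justification
    decision.decided_at = datetime.now(timezone.utc)
    return decision

# Example: the parental-leave case from the interview above.
case = CreditDecision(applicant_id="A-001", system_signal="yellow")
analyst_override(case, analyst="risk_analyst_7", outcome="approved",
                 justification="Temporary income drop due to parental leave; stable employment history.")
```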

Factors influencing human decision-making

However, neither front-desk employees nor risk analysts operate in isolation when making decisions. Their assessments are influenced by various factors, ranging from economic conditions and the internal guidelines of credit institutions to individual experiences and judgements. Through various stakeholder interviews, we identified the factors that influence the quality of decisions; they can be divided into three dimensions:

1. External influencing factors

These include various social, economic and legal conditions. Shortages of skilled workers, banks’ economic goals, and legal requirements such as the General Equal Treatment Act (AGG) influence the decision-making process, affecting the scope, transparency, fairness and quality of decisions and processes. Data quality also plays a decisive role, in particular the completeness and traceability of the data sets that external credit agencies, such as Schufa, provide and that are used to train automated systems.

2. Influencing factors at the level of the actors

Several factors influence the quality of credit decisions at this level: the actors’ understanding of their roles and professions, personal contact, and the scope available for decision-making. Prejudices against specific life circumstances or subjective assessments by individual employees can impair objective evaluation. At the same time, experts emphasise that, when it comes to lending, the human ability to understand special cases and individual life situations represents significant added value.

3. Technical influencing factors

The available technology also influences how people make decisions. Decisive factors here include the underlying data, the transparency and traceability of the system, and the design of the user interface, i.e. whether it enables intuitive, efficient and aesthetically appealing interaction between humans and machines. A poorly designed interface can cause users to overlook important information or enter it incorrectly. Transparency of the algorithms is also essential, so that employees can understand how and why the system arrives at a particular recommendation and question it if necessary. Added to this is the error culture that a company cultivates when dealing with automated processes.

Challenges in the interplay between humans and machines

But how can we ensure that automated processes remain fair and transparent? Where do risks arise, and what measures are necessary to minimise them? Our analysis in the practical report reveals several challenges that need to be addressed.

Communication problems and lack of overall process knowledge

One of the key challenges is that neither individuals nor institutions have a complete overview of the entire automated decision-making process. Insufficient understanding of the system architecture — i.e. how decisions are made in individual cases, which algorithms are used and how data flows — makes it difficult to recognise important connections. This becomes particularly problematic when human intervention is necessary to correct special cases.

Lack of transparency

Another point of criticism is the lack of transparency, both at credit institutions and at credit agencies such as Schufa and Creditreform. Consumers often have little idea why a credit decision has been made. This makes it difficult for them to realistically assess their own creditworthiness and make the necessary adjustments. At the same time, employees often lack sufficient information about how the automated system works, leading to uncertainty and potential misjudgements.

Discrimination and biases

Discrimination also plays an important role in lending. Current legislation, such as the General Equal Treatment Act (AGG), does not offer consumers adequate protection against unfair discrimination. Furthermore, there is a lack of effective mechanisms for pursuing allegations of discrimination in court. There is a discrepancy between the theoretical neutrality of algorithms and the fact that their design and operation are shaped by human biases and experience.

Uncertainty in dealing with automation technologies

Many of the employees involved lack sufficient technical understanding of the systems used. Our interviews revealed that some employees are unclear as to whether rule-based systems or AI-based applications are used in lending, and what limitations these systems have. This lack of technical know-how carries the risk that the system’s recommendations will either be accepted uncritically or questioned excessively.

Recommendations for improving credit decision-making

Based on these findings, the practical report sets out specific suggestions on how to strengthen the role of the ‘human in the loop’ and improve the entire decision-making process.

1. Expanding anti-discrimination law (AGG)

To combat discrimination in the lending sector more effectively, the legal framework should be expanded. Extending the AGG to consumer credit would make it easier for affected individuals to take legal action against unjustified rejections. This could be achieved by reversing the burden of proof and providing increased support from anti-discrimination associations. This would incentivise banks to implement discrimination-sensitive processes.

2. Increasing transparency

Greater transparency on the part of credit institutions and credit agencies would be helpful. Consumers should be informed about the relevant decision-making factors in easily understandable language. From 2026, this will likely become legally binding due to the revision of the European Consumer Credit Directive. Additionally, banks could enhance internal communication to ensure that all stakeholders, from front-desk staff to risk analysts, clearly understand the decision-making logic and limitations of the system.

3. Improving financial literacy

In addition to institutional measures, educating consumers individually also plays an important role. Targeted educational programmes can help applicants to assess their creditworthiness more realistically and to enter their financial data correctly during the application process. Interactive explanatory formats and targeted training courses are useful for explaining how to deal with credit decisions in an understandable, everyday context.

4. Training and professional development

To increase the effectiveness of human involvement, front-desk employees and, in particular, risk analysts should receive regular training in technical and procedural issues. The aim is to impart a technical understanding of the systems used and to develop the ability to question their limitations critically.

5. User-friendly application design

The increasing automation of the application process must not mean that the success or failure of an application depends on specific skills that consumers are expected to acquire and then held responsible for lacking. Intuitive, accessible application interfaces and personal contact options are important in ensuring that people without in-depth technical knowledge can also successfully apply for a loan. Together, these factors would ensure that individual circumstances are adequately considered and that special cases are not overlooked due to rigid, automated processes.

Outlook: Human expertise as a guarantee of fair decisions

Automating lending processes can speed things up and reduce errors. However, human expertise remains indispensable, particularly in sensitive areas that directly impact people’s lives. Our case study clearly shows that the combination of human judgement and automated data processing creates real added value, provided the relevant influencing factors are understood and controlled effectively.

Although rule-based automated systems are now standard, humans remain an integral part of the decision-making process. Employees’ expertise and experience compensate for what algorithms cannot do. In special cases in particular, this difficult-to-quantify information and the human factor can mean the difference between a mechanical and a responsible decision.

At the same time, however, our study also reveals significant challenges. There is often a lack of comprehensive understanding of processes, both within banks and towards customers. Insufficient communication channels and technical knowledge can lead to automated recommendations being misinterpreted or implemented incorrectly. Furthermore, discrimination and biases pose risks that must be urgently addressed as automation increases.

Automation should not be an end in itself; rather, it should be understood as a tool that, when used correctly, can complement and strengthen human judgement. Only in this way can banks and credit institutions continue to provide responsible, non-discriminatory lending services in future.

This article was first published in German on April 15, 2025, on the eFin blog of the discourse project “Democracy issues of the digitalised financial sector” (eFin & Democracy) at the Center Responsible Digitality (ZEVEDI).

Full report

Züger, T., Mahlow, P., Mosene, K., & Pothmann, D. (2025). Praxisbericht: Human in the Loop im Feld der Kreditvergabe [Praxisbericht für den Sektor Finanzdienstleistung]. Alexander von Humboldt Institut für Internet und Gesellschaft (HIIG).

Further information

on the website of our research project Human in the Loop?

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Philipp Mahlow

Associate researcher

Katharina Mosene

Researcher: AI & Society Lab

