The use of AI in HR management – Curse or blessing?
During the first Pop-Up Lab of HIIG’s AI & Society Lab, an interdisciplinary team of researchers spent three intense days tackling pressing issues at the intersection of Artificial Intelligence (AI) and HR management, drawing on a variety of methods.
Hands-on research: What are the most pressing challenges regarding the use of AI in HR management?
There has been a growing interest in the use of algorithms in hiring, and more and more employers rely on AI-driven tools in human resource (HR) management. Automated decision-making (ADM) systems are used to perform time-consuming and tedious tasks such as shortlisting promising candidates out of large applicant pools.
At the same time, ADM systems are also increasingly used to ensure employee happiness by designing personalised programmes tailored to people’s individual needs, career objectives and learning preferences. Such specialised AI systems are promoted as being cost-effective and even key to ending discrimination in HR.
However, the “Amazon case”, in which an AI hiring tool inadvertently favoured male candidates, is only one prominent example of how this can go very wrong. One is therefore inclined to assume that a substantial gap exists between the promise and the reality of AI in HR management. Critics warn that automated systems can be just as biased as the humans who develop and train them, raising the concern that these tools may replicate and perpetuate existing patterns of discrimination.
The use of AI in HR thus raises urgent questions, for example about the impact of incomplete or unrepresentative data, or about the lack of transparency and explainability of AI systems. We tackled a number of these questions in our first Pop-Up Lab.
Pop-Up Labs – A new type of knowledge work
The increasing automation of infrastructures presents both opportunities and challenges to society at large. As described above, this holds particularly true for HR, an area with sensitive, bias-prone data, where far-reaching decisions can have a tremendous impact on individuals.
AI and society research always requires a 360-degree view. To see the full picture of how AI systems have already entered many domains of our lives, it needs to accommodate a variety of perspectives: those of different interest groups (alongside researchers, representatives from industry and civil society need to be heard) as well as those of various disciplines, brought together to enrich the urgent public debate on AI.
What is also needed, however, is a new way of approaching these phenomena. Pop-Up Labs are an innovative research format of HIIG’s AI & Society Lab that puts its applied research focus into practice. In each Pop-Up Lab, interdisciplinary teams of researchers work together with partners from industry and/or civil society over a defined period of time. Together they develop practical research questions and tackle them using a mix of methods tailored to that Pop-Up Lab. It is an open co-creation format that strongly encourages adversarial thinking and allows for ambiguity and learning journeys throughout the process.
The 1st Pop-Up Lab: Process, participants and contributors
Over the course of three intense days, 10 young researchers with backgrounds ranging from computer science and philosophy to law, public policy, sociology and (HR) management engaged in this first edition of the Pop-Up Lab, which focused on three dimensions:
- stimulating joint research on the topic of AI and HR;
- fostering interdisciplinary exchange and collaboration (participants had very diverse backgrounds and experiences);
- piloting the sequential approach (mixing different working methods).
On the first day, the participants encountered different perspectives on the topic through presentations by Dr. Bastian Lücke, data analyst and HR specialist at Haufe Group, Matthias Spielkamp, co-founder of AlgorithmWatch, and HIIG guest researcher Deniz Erden. At the end of day 1, the co-host of the Pop-Up Lab, Shlomi Hod, data scientist and educator at ethically.ai and computer science PhD candidate at Boston University, led a workshop on bias in machine learning.
The following two days of the Pop-Up Lab were dedicated to the mini hackathon “Build it, break it, fix it”, led by Shlomi, in which participants developed design principles for a system by incorporating “adversarial” thinking. In a ‘Share your work’ session, following a mini impact school run by Dr. Marcel Hebing from Impact Distillery, the participants collected feedback on their hackathon results and findings from HIIG researchers and the speakers from day 1.
Findings and next steps
Both formats, the Pop-Up Lab in general and the mini hackathon in particular, proved particularly suitable for exploring the topic of AI and HR management, and the variety of disciplines was an asset for approaching the challenges and pitfalls from all angles. On the basis of two exemplary case studies, one on hiring and one on retention, the participants explored the status quo and ongoing debates on AI in HR and developed hands-on, practice-oriented solutions, while simultaneously reflecting on the overall design and dynamics of the format, e.g. its mix of methods and its interdisciplinarity.
A main finding of the Pop-Up Lab is that AI in HR does not automatically mean an automation of HR: an AI system can instead be a helpful tool, much like a calculator, for which a ‘human in the loop’ is needed to check the final results. Consequently, the goal should be to demystify AI and to avoid solutionism by raising awareness of existing structural problems instead. More generally, it became visible that many questions concerning fairness and transparency in the context of AI systems remain unsolved, some of which future Pop-Up Labs will focus on.
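A ‘human in the loop’ check of this kind can be made concrete. The following is an illustrative sketch, not a tool from the Pop-Up Lab itself: it computes shortlisting rates per applicant group from an automated hiring step and flags the outcome for human review when the rates diverge strongly. The sample data and the 0.8 threshold (borrowed from the “four-fifths rule” used in US hiring audits) are assumptions for this example.

```python
# Hypothetical shortlisting outcomes from an automated system:
# each record is (applicant_group, was_shortlisted).
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(records):
    """Return the share of shortlisted applicants per group."""
    totals, selected = {}, {}
    for group, shortlisted in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if shortlisted else 0)
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(outcomes)
ratio = min(rates.values()) / max(rates.values())

print(rates)   # {'A': 0.75, 'B': 0.25}
print(ratio)   # 0.33..., well below the assumed 0.8 threshold
if ratio < 0.8:
    print("Disparate impact suspected: route decisions to a human reviewer.")
```

The point of the sketch is the workflow, not the metric: the system does the tedious counting, while the final judgement, whether the disparity is justified or the tool needs fixing, stays with a person.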
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact email@example.com.