15 May 2025 | doi: 10.5281/zenodo.15424362

Who hired this bot? On the ambivalence of using generative AI in recruiting

As generative artificial intelligence advances into various areas of organisational life, hiring processes are no exception. While the use of technology in this area has long been associated with hopes of freeing up resources for relational work, some evidence points to the contrary. As both recruiters and candidates lean into AI assistance, something essential risks being left behind: The human connection that recruiting depends on. This article explores how generative AI is reshaping recruitment and why we must pause to ask not just what we can optimise, but why we are optimising in the first place.

“When I prepared this guide for my […] interview […], I ended up with […] a great competency-based interview guide, with behavioral anchors and all sorts of things. And I almost forgot to ask [the candidate]: ‘What is actually important to you when looking for a new employer?’ […] because the AI didn’t generate these questions for me.” 

– from an interview with a recruiting manager, 2025

In many areas in and around the workplace, people are currently experimenting with ways of integrating generative AI tools like ChatGPT into their routines and workflows (Dell’Acqua et al., 2023; Retkowsky et al., 2024). The goal is often simple: Improve personal effectiveness and save time. For example, few job seekers enjoy writing lengthy cover letters for dozens of applications. In a similar way, few recruiters enjoy scanning dozens of those applications. In comes generative AI, promising to support both parties throughout the recruiting process. This can lead to significant shifts in a process that is meant to help job seekers and employers evaluate whether they will be a good fit for each other – a meaningful task for both (Hunkenschroer & Kriebitz, 2022). It also raises new questions: Do recruiters or hiring managers want to read generic AI-written cover letters? Do job seekers want their applications to be screened by AI? The answer to both may be no. Yet, as generative AI further advances into hiring practices, we are left to ask: What exactly are we optimising for? And at what cost?

The rise of generative AI in recruiting

To understand these shifts, it helps to distinguish between the types of AI typically used in people management. Although the line is blurry, scholars often differentiate between discriminative and generative AI. Discriminative AI systems make predictions and classifications, while generative AI systems produce seemingly new content (Feuerriegel et al., 2024; Jebara, 2004). In the context of people management, discriminative AI helps organisations make better people decisions (e.g., by predicting candidate-job fit), while generative AI can help to create more effective HR-related content (e.g., images or texts for job ads) (Andrieux et al., 2024). Generative AI differs qualitatively from discriminative AI because it can, among other things, be applied to a broad array of tasks. Thanks to tools like ChatGPT, it is also easily accessible to many (Krakowski, 2025). In people management in general and recruiting in particular, this opens up a wide range of applications (Budhwar et al., 2023; Chowdhury et al., 2024).

The allure of automation

The introduction of new technology comes with high hopes and (sometimes broken) promises (Garvey, 2018). In the workplace, such promises are often related to automating repetitive tasks to free up resources for more meaningful work. In the case of people management, hopes are often about reducing administrative tasks to free up time for relational or strategic work. For example, if recruiters can use AI to screen resumes more efficiently, they can spend more time on personal interactions with candidates. In interviews I conducted as part of our project on Generative AI in the Workplace, human resources professionals further mentioned using generative AI to develop interview questions and tasks for work samples, better tailor job ads to desired target groups, identify SEO keywords or generate images for job ads, and write rejection letters. The hope of reclaiming time for more personal interactions with candidates and employees is a consistent theme among human resources professionals. Yet in practice, these hopes often outpace reality, as these tools can disrupt human interactions in subtle but significant ways.

Hidden costs of efficiency

As the quote from the recruiting manager in the introduction suggests, using generative AI in hiring comes with notable risks. While the AI system did help them develop useful interview questions, things can also be lost along the way: They almost forgot to ask the candidate what their expectations were for a new employer! This question can be important for making candidates feel seen and for exploring whether mutual expectations align. Thus, while promising efficiency, generative AI can also diminish aspects of the process that matter deeply or render communication superficial. More broadly, frequent use of AI tools has been linked to declines in critical thinking (Gerlich, 2025). As Nyberg and colleagues (2025) note, verifying simple outputs is relatively easy (e.g., prompting ChatGPT to draft a rejection letter), but these are often the very tasks that were already largely automated before the introduction of generative AI (e.g., through templates or form letters). Verifying outputs requires domain knowledge, yet generative AI threatens the quality of knowledge in organisations (Retkowsky et al., 2024). Scholars warn that the very use of generative AI for people-related tasks may signal a lack of care for employees, potentially eroding perceptions of interactional justice (i.e., the sense that one has been treated with dignity and respect) (Narayanan et al., 2024; Nyberg et al., 2025). And even when time is saved, our interviews suggest that it remains unclear how that time is actually used.

An arms race of using generative AI?

Job seekers are also turning to generative AI in the hiring process, sometimes in ways that complicate the evaluation of their fit with the organisation and their true expertise. They can now use widely accessible tools like ChatGPT to produce polished resumes and cover letters, prepare ideal responses to likely interview questions, or even use AI teleprompters during virtual interviews that suggest ideal responses in real time (Kwok, 2025). This makes it much harder for recruiters and hiring managers to assess these candidates. As a result, some companies no longer expect cover letters or emphasise the importance of in-person interviews. There have also been reports of job seekers sending “AI note takers” to information sessions hosted by potential employers (Ellis, 2024) or using AI to auto-apply to hundreds of job openings at once (Demopoulos, 2024). The outcome can be a “bot versus bot war” in which job seekers use AI to send out hundreds of applications, while employers use AI to filter the thousands of similar-sounding applications they receive (Ellis, 2024). In the worst case, those screening bots may even give preference to AI-generated applications. The growing use of generative AI on both sides can feel increasingly absurd and raises the question of whether we are trying to out-automate each other at the cost of authenticity.

Rethinking the “human” in human resources

So, is generative AI a good bot or a bad bot for hiring practices? As with most new technologies, the answer is: It depends. Not on the tool itself, but on how we choose to use it. Generative AI’s impact depends on the kind of people management we aspire to build and whether we can align AI with that vision. While tools like ChatGPT can enhance efficiency, particularly in the early stages of hiring (e.g., to optimise job ads), they also risk alienating job seekers through impersonal or literally robotic interactions (e.g., during interviews).

Which brings us back to the question: Who hired this bot? In a sense, we all did: organisations, recruiters, and even candidates, often in pursuit of speed, convenience, and competitiveness. But in doing so, we may have overlooked the cost of delegating deeply human tasks to machines. The real challenge is not whether to use generative AI, but how to use it with intention and care. As HR leaders remind us, the guiding question should not just be what could be done with generative AI, but what should be done with it (Nyberg et al., 2025). Only then can we ensure that we are not just optimising for efficiency, but for the kind of environments we actually want to work in.

References

Andrieux, P., Johnson, R. D., Sarabadani, J., & Van Slyke, C. (2024). Ethical considerations of generative AI-enabled human resource management. Organizational Dynamics, 53(1), 1–9. https://doi.org/10.1016/j.orgdyn.2024.101032

Budhwar, P. et al. (2023). Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT. Human Resource Management Journal, 33, 606–659. https://doi.org/10.1111/1748-8583.12524

Chowdhury, S., Budhwar, P., & Wood, G. (2024). Generative Artificial Intelligence in Business: Towards a Strategic Human Resource Management Framework. British Journal of Management, 35(4), 1680–1691. https://doi.org/10.1111/1467-8551.12824

Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (No. Working Paper 24-013). Harvard Business School. https://www.ssrn.com/abstract=4573321

Demopoulos, A. (2024). The job applicants shut out by AI: ‘The interviewer sounded like Siri’. The Guardian. https://www.theguardian.com/technology/2024/mar/06/ai-interviews-job-applications 

Ellis, L. (2024). ‘You’re Fighting AI With AI’: Bots Are Breaking the Hiring Process. The Wall Street Journal. https://www.wsj.com/lifestyle/careers/ai-job-application-685f29f7 

Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111–126. https://doi.org/10.1007/s12599-023-00834-7

Garvey, C. (2018). Broken promises and empty threats: The evolution of AI in the USA, 1956-1996. Technology’s Stories, 6(1). https://doi.org/10.15763/jou.ts.2018.03.16.02

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 1–28. https://doi.org/10.3390/soc15010006

Hunkenschroer, A. L., & Kriebitz, A. (2022). Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring. AI and Ethics, 3, 199–213. https://doi.org/10.1007/s43681-022-00166-4

Jebara, T. (2004). Generative versus discriminative learning. In T. Jebara, Machine Learning (pp. 17–60). Springer US. https://doi.org/10.1007/978-1-4419-9011-2_2

Krakowski, S. (2025). Human-AI agency in the age of generative AI. Information and Organization, 35(1), 1–25. https://doi.org/10.1016/j.infoandorg.2025.100560

Kwok, N. (2025). When Candidates Use Generative AI for the Interview. MIT Sloan Management Review. https://sloanreview.mit.edu/article/when-candidates-use-generative-ai-for-the-interview/

Narayanan, D., Nagpal, M., McGuire, J., Schweitzer, S., & De Cremer, D. (2024). Fairness perceptions of artificial intelligence: A review and path forward. International Journal of Human–Computer Interaction, 40(1), 4–23. https://doi.org/10.1080/10447318.2023.2210890

Nyberg, A. J., Schleicher, D. J., Bell, B. S., Boon, C., Cappelli, P., Collings, D. G., Molle, J. E. D., Feuerriegel, S., & Gerhart, B. (2025). A brave new world of human resources research: Navigating perils and identifying grand challenges of the GenAI revolution. Journal of Management. Advance online publication, 1–42.

Retkowsky, J., Hafermalz, E., & Huysman, M. (2024). Managing a ChatGPT-empowered workforce: Understanding its affordances and side effects. Business Horizons, 67(5), 511–523. https://doi.org/10.1016/j.bushor.2024.04.009

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Sonja Köhne

Doctoral Researcher: Innovation, Entrepreneurship & Society
