Emotionless competition at work: When trust in Artificial Intelligence falters
In many companies, people are already working side by side with artificial intelligence (AI). Systems such as ChatGPT, DALL·E or specialised analytical tools support decision-making, provide creative input or take on complex tasks – often faster and more accurately than their human colleagues. Yet this very superiority comes at a price. Our research shows that trust in AI falters when people feel inferior in direct comparison. The reason is simple. People compare themselves – with colleagues and with machines. And this comparison is emotional. Those who find themselves outperformed by AI begin to doubt not only their own abilities but also the technology itself. Paradoxical as it may sound, the machine’s superiority can weaken trust in it. Rather than fostering collaboration with AI, the willingness to use it then declines. Companies introducing AI systems should be aware of this dynamic.
When AI systems take on tasks previously reserved for humans, they are expected above all to do one thing: make everyday work more efficient and ease the workload of employees. Expectations are high. Companies hope for more precise decisions, fewer mistakes and new ways to streamline processes. Algorithms already analyse data, screen applications and propose creative solutions.
At first glance, this sounds like clear progress – a powerful new tool to complement and ease our work. Many people and organisations therefore place their trust in the performance of AI. The underlying principle seems straightforward: the better the machine, the greater the trust.
This perspective has so far also dominated research. Studies show that people are willing to collaborate with AI when they perceive it as competent (Choung et al., 2023). However, most of the focus has been on absolute performance – in other words, on how good the technology is in itself. At first sight this seems plausible. Yet it overlooks an important aspect: people are not neutral observers. They assess their own performance in relation to others. And this applies even when the comparison is with a machine.
When AI becomes too powerful
Our study therefore takes a fresh look at this logic and shifts the perspective. It does not simply ask how capable AI is; it also considers how the AI performs in comparison with humans – and what consequences this “social comparison” entails.
To explore this, we conducted an experiment. At its core was the question of how people experience direct comparison with AI. We were particularly interested in one issue: how willing are employees to collaborate with AI as a colleague, especially once they realise the algorithm delivers better results than they do?
Relative comparison rather than absolute performance
To better understand how people respond to particularly powerful AI systems, we conducted a vignette experiment with 797 participants. This is a well-established method in behavioural research, in which individuals read short, realistic scenarios and imagine themselves in the given situation.
In our case, participants were asked to imagine they had just finished a poker tournament. They were then told that they had performed either as well as or worse than their opponent – who was either a human or an AI. Afterwards, they were asked how willing they would be to collaborate with this counterpart on a completely different, unrelated task.
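For readers who want to picture the setup more concretely, here is a minimal sketch in Python of the implied 2 × 2 between-subjects design – counterpart (human vs. AI) crossed with comparison outcome (equal vs. worse). The condition labels, the random-assignment procedure and the seed are illustrative assumptions, not the study’s actual materials.

```python
import itertools
import random

# Illustrative 2 x 2 between-subjects vignette design:
# counterpart (human vs. AI) x comparison outcome (equal vs. worse).
# Labels and procedure are assumptions for illustration only.
COUNTERPARTS = ["human", "AI"]
OUTCOMES = ["performed equally well", "performed worse"]
CONDITIONS = list(itertools.product(COUNTERPARTS, OUTCOMES))

def assign_conditions(n_participants: int, seed: int = 1) -> dict:
    """Randomly assign each participant to one of the four vignette conditions."""
    rng = random.Random(seed)
    counts = {condition: 0 for condition in CONDITIONS}
    for _ in range(n_participants):
        counts[rng.choice(CONDITIONS)] += 1
    return counts

if __name__ == "__main__":
    # 797 participants, as in the study described above; random assignment
    # spreads them roughly evenly across the four cells.
    for (counterpart, outcome), n in assign_conditions(797).items():
        print(f"vs. {counterpart}: participant {outcome} -> n = {n}")
```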
The study revealed two key effects:
- General enthusiasm for AI: Regardless of the comparison outcome, participants were overall more willing to imagine working with an AI than with a human. This general openness towards AI is an encouraging sign. In research, this phenomenon has been described as “algorithm appreciation” (Logg et al., 2019).
- Superiority of AI can undermine trust: However, as soon as it became clear that the AI had outperformed them, participants’ trust in the system dropped sharply. This was true even though the comparison took place in a game of poker that had no connection to the subsequent task. Particularly striking was that not only did trust in the AI’s goodwill and integrity decline, but so did perceptions of its competence (Wang & Benbasat, 2005) – even though the AI had demonstrably performed better.
Social competition with algorithms?
The explanation lies in social psychology. People do not interpret superiority – even that of a machine – in a neutral way. They react emotionally. Those who feel inferior often experience negative emotions such as envy, self-doubt or a threat to their self-esteem (Smith, 2000). These emotional reactions spill over into their evaluation of AI as an interaction partner, even though it is of course not human. The system is no longer perceived as a neutral tool but as a social actor – akin to a human competitor.
Social comparisons are deeply ingrained in human behaviour. We constantly measure ourselves against others, often unconsciously. When we fare poorly in such comparisons, we frequently perceive it as a threat to our self-esteem. To cushion this negative feeling, many people resort to a typical defence mechanism: they withdraw, emotionally or even literally, from the superior counterpart (Tesser, 1988).
What is new, however, is that this mechanism is also triggered by AI systems. Even though we know that AI assistants have no “feelings” or “intentions”, we treat them in social comparison much like human beings.
What companies can learn
If AI triggers not only technical but also psychological dynamics, companies need to adopt a sensitive approach to its introduction. Powerful systems offer enormous potential, but they can also have unintended side-effects – particularly when they are perceived less as tools and more as competitors.
To maintain trust in the technology and ensure successful collaboration between humans and machines, companies should take active steps.
Three approaches can help:
- Less triumph, more teamwork: Constantly emphasising AI’s superiority may provoke resistance. It is more effective to present AI systems as partners that complement human capabilities.
- Take social dynamics seriously: Managers should recognise that new technologies can also threaten employees’ sense of self-worth. Trust needs to be built deliberately.
- Adapt communication: Instead of portraying AI merely as a superior “super brain”, the focus should be on collaboration and support.
Adhering to these principles can help lay the foundation for genuine acceptance. In this way, AI can be used in the workplace not only efficiently but also in a way that fosters trust and sustainable collaboration.
No success without trust
These insights demonstrate that handling AI in companies requires far more than technical expertise. Modern systems, such as large language models, continually open up fascinating new possibilities – from data analysis to creative content generation. Yet one thing must be kept in mind: technical superiority alone is not enough to integrate AI sustainably into organisations. People generally do not want to work solely with the “best” technology. They want to work with colleagues – human or otherwise – whom they can trust.
To implement AI successfully, organisations must therefore take the psychology of social comparison seriously. They need strategies to build trust, reduce threats to self-worth and enable genuine collaboration between people and machines. Only then can the full potential of AI in the workplace be realised.
References
Asbach, S., Graf-Vlachy, L., Fuegener, A., & Schinnen, M. H. (2025). Can superior AI performance in unrelated tasks reduce people’s willingness to collaborate with the AI? Proceedings of the European Conference on Information Systems (ECIS). https://aisel.aisnet.org/ecis2025/human_ai/human_ai/1
Wang, W., & Benbasat, I. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6(3), 72–101.
Choung, H., David, P., & Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727–1739.
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.
Smith, R. H. (2000). Assimilative and contrastive emotional reactions to upward and downward social comparisons. In J. Suls & L. Wheeler (Eds.), Handbook of social comparison: Theory and research (pp. 173–200). Springer.
Tesser, A. (1988). Toward a self-evaluation maintenance model of social behavior. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 21, pp. 181–227). Academic Press.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
