
You can only be what you can see: How platforms and advertisers can make job ads algorithms fairer

22 March 2021

We successfully concluded the first virtual Clinic of the “Ethics of Digitalisation” project financed by Stiftung Mercator. Twelve international fellows developed innovative approaches to improving fairness in targeted job advertising. Looking back at two intense weeks of interdisciplinary collaboration, we share highlights and key outcomes.

Who gets to see what on the internet? And who decides why? These are among the most crucial questions regarding online communication spaces – and they apply especially to online job advertising. Sociologists and developmental psychologists assure us: you can only be and become what you see. Targeted advertising on online platforms offers advertisers the chance to deliver ads to carefully selected audiences. But what if the selection criteria – inadvertently or not – reinforce stereotypes? Optimizing job ads for relevance carries risks, from gender stereotyping to algorithmic discrimination. To make digitalization fairer, new approaches to ad delivery are necessary. The spring 2021 Clinic “Increasing Fairness in Targeted Advertising: The Risk of Gender Stereotyping by Job Ad Algorithms” examined the ethical implications of targeted advertising, with the aim of developing fairness-oriented solutions that are ready to be implemented.

The virtual Clinic brought together twelve fellows from six continents and eight disciplines. During two intense weeks in February 2021, they participated in an interdisciplinary solution-oriented process facilitated by a project team at the Alexander von Humboldt Institute for Internet and Society. The fellows also had the chance to learn from and engage with a number of leading experts on targeted advertising, who joined the Clinic for thought-provoking spark sessions.

The Clinic is part of the research project “The Ethics of Digitalisation – From Principles to Practices”, which aims to develop viable answers to challenges at the intersection of ethics and digitalisation. The project, led by the Global Network of Internet & Society Centers (NoC), is conducted under the patronage of the German Federal President Frank-Walter Steinmeier and is supported by Stiftung Mercator. In addition to the Alexander von Humboldt Institute for Internet and Society, the main project partners are the Berkman Klein Center at Harvard University, the Digital Asia Hub, and the Leibniz Institute for Media Research | Hans-Bredow-Institut.

The objective of the Clinic was to produce actionable outputs that contribute to improving fairness in targeted job advertising. To this end, the fellows developed three sets of guidelines, which cover the whole targeted advertising spectrum. While the guidelines provide concrete recommendations for platform companies and online advertisers, they are also of high interest to policymakers.

Read the guidelines here

The first set of guidelines focuses on ad targeting by advertisers. This stage of the targeted advertising process involves creating the ad, selecting the target audience, and choosing a bidding strategy. In light of the variety of targeting options, researchers have voiced concerns about potentially discriminatory targeting choices, which may exclude marginalized user groups from receiving, for example, job or housing ads, thus increasing marginalization in a “Matthew effect” of accumulated disadvantage. Although discrimination based on certain protected categories such as gender or race is prohibited in many jurisdictions, and even though platforms such as Google and Facebook restrict sensitive targeting features in sectors like employment and housing, problems persist due to problematic proxy categories (like language or location). The fellows address these challenges by calling for a legality-by-default approach to ad targeting and for a feedback loop that informs advertisers about potentially discriminatory outcomes of their ad campaigns.
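The proxy problem and the proposed feedback loop can be illustrated with a minimal sketch. All names and numbers below are hypothetical toy data, not any platform's API: an advertiser targets only a seemingly neutral attribute ("district"), which happens to correlate with gender, and a simple check flags the resulting skew in delivery.

```python
# Illustrative sketch of proxy discrimination: targeting a neutral-looking
# attribute reproduces a protected-attribute skew. Hypothetical data only.
from collections import Counter

# Toy audience: gender is balanced overall, but correlates with "district".
audience = (
    [{"gender": "f", "district": "north"}] * 70
    + [{"gender": "m", "district": "north"}] * 30
    + [{"gender": "f", "district": "south"}] * 30
    + [{"gender": "m", "district": "south"}] * 70
)

def gender_share(users, gender="f"):
    """Fraction of users with the given gender."""
    counts = Counter(u["gender"] for u in users)
    total = sum(counts.values())
    return counts[gender] / total if total else 0.0

# The advertiser never targets gender, only the proxy "district == south".
targeted = [u for u in audience if u["district"] == "south"]

baseline = gender_share(audience)   # 0.5 in the full audience
delivered = gender_share(targeted)  # 0.3 — women are under-represented

# A feedback loop in the spirit of the guidelines: warn the advertiser
# when the delivered audience deviates strongly from the baseline.
def skew_warning(baseline, delivered, tolerance=0.1):
    return abs(delivered - baseline) > tolerance

print(skew_warning(baseline, delivered))  # True: flag this campaign
```

The same comparison could be run against any protected category, which is why the guidelines frame the feedback loop as an outcome check rather than a restriction on individual targeting options.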

The second set of guidelines centers on ad delivery by platforms, which mainly refers to auctioning ads and optimizing them for relevance. Research has revealed that ad delivery can still be skewed along gender lines even where advertisers were careful not to exclude any user group from their ad campaign. This can be partially explained by market effects: younger women, for instance, are more likely to engage with ads and are therefore more expensive to reach in ad auctions. Another reason is that platforms optimize for relevance based on past user behavior, which makes gender stereotyping likely in historically male- or female-dominated employment sectors. Against this background, the fellows developed a user-centered approach in their guidelines that puts users in charge of their own advertising profiles.
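The relevance-optimization mechanism can be made concrete with a toy sketch. The data and scoring rule below are hypothetical, not a real delivery system: a platform that weights identical bids by predicted relevance learned from historical click-through rates will mirror the gender imbalance baked into that history, without any advertiser targeting gender at all.

```python
# Illustrative sketch: relevance optimization on historical engagement data
# reproduces historical gender skew. All figures are hypothetical.

# Toy historical click-through rates per (sector, gender), reflecting a
# historically male-dominated and a historically female-dominated sector.
historical_ctr = {
    ("software_engineering", "m"): 0.08,
    ("software_engineering", "f"): 0.03,
    ("nursing", "m"): 0.02,
    ("nursing", "f"): 0.07,
}

def auction_score(bid, sector, gender):
    """Auction score: bid weighted by predicted relevance (past CTR)."""
    return bid * historical_ctr[(sector, gender)]

# Two job advertisers bid the same amount for the same user slots.
bid = 1.0
shown = {}
for gender in ("m", "f"):
    shown[gender] = max(
        ("software_engineering", "nursing"),
        key=lambda sector: auction_score(bid, sector, gender),
    )

print(shown)
# Even with identical bids and no gender targeting, men win the engineering
# ad and women the nursing ad — the skew comes from the relevance model.
```

This is why the fellows' user-centered approach targets the profile itself: letting users inspect and edit the attributes feeding the relevance model intervenes at the point where the skew actually arises.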

The third set of guidelines addresses how ads are displayed to users. As of now, users usually cannot look behind the scenes of targeted advertising to understand why they see certain ads and not others. Existing transparency initiatives by platforms still fall short of providing users with meaningful transparency. The proposed Digital Services Act would impose online advertising transparency obligations on online platforms, but these provisions have yet to become law. In their guidelines, the fellows propose an avatar solution: a user-friendly, gamified tool that visually communicates the information collected by the platform and the attributes used to target the user with job ads.

For more details, read the report “Increasing fairness in targeted advertising – the risk of gender stereotyping by job ad algorithms” by researchers and fellows of the HIIG Clinic.

This post represents the views of the authors and does not necessarily represent the view of the institute itself. For more information about the topics of this article and associated research projects, please contact

Alexander Pirang

Former Associated Doctoral Researcher: AI & Society Lab

Matthias C. Kettemann, Prof. Dr. LL.M. (Harvard)

Head of Research Group and Associate Researcher: Global Constitutionalism and the Internet

