You can only be what you can see: How platforms and advertisers can make job ads algorithms fairer
We successfully concluded the first virtual Clinic of the “Ethics of Digitalisation” project financed by Stiftung Mercator. Twelve international fellows developed innovative approaches to improving fairness in targeted job advertising. Looking back at two intense weeks of interdisciplinary collaboration, we share highlights and key outcomes.
Who gets to see what on the internet? And who decides why? These are among the most crucial questions regarding online communication spaces – and they apply especially to job advertising online. Sociologists and developmental psychologists assure us: you can only be and become what you see. Targeted advertising on online platforms offers advertisers the chance to deliver ads to carefully selected audiences. But what if the selection criteria – inadvertently or not – reinforce stereotypes? Optimizing job ads for relevance carries risks, from gender stereotyping to algorithmic discrimination. To make digitalization fairer, new approaches to ad delivery are necessary. The spring 2021 Clinic “Increasing Fairness in Targeted Advertising: The Risk of Gender Stereotyping by Job Ad Algorithms” examined the ethical implications of targeted advertising with the aim of developing fairness-oriented solutions that are ready to be implemented.
The virtual Clinic brought together twelve fellows from six continents and eight disciplines. During two intense weeks in February 2021, they participated in an interdisciplinary solution-oriented process facilitated by a project team at the Alexander von Humboldt Institute for Internet and Society. The fellows also had the chance to learn from and engage with a number of leading experts on targeted advertising, who joined the Clinic for thought-provoking spark sessions.
The Clinic is part of the research project “The Ethics of Digitalisation – From Principles to Practices”, which aims to develop viable answers to challenges at the intersection of ethics and digitalisation. The project, led by the Global Network of Internet & Society Centers (NoC), is conducted under the patronage of the German Federal President Frank-Walter Steinmeier and is supported by Stiftung Mercator. In addition to the Alexander von Humboldt Institute for Internet and Society, the main project partners are the Berkman Klein Center at Harvard University, the Digital Asia Hub, and the Leibniz Institute for Media Research | Hans-Bredow-Institut.
The objective of the Clinic was to produce actionable outputs that contribute to improving fairness in targeted job advertising. To this end, the fellows developed three sets of guidelines, which cover the whole targeted advertising spectrum. While the guidelines provide concrete recommendations for platform companies and online advertisers, they are also of high interest to policymakers.
The first set of guidelines focuses on ad targeting by advertisers. This stage of the targeted advertising process involves creating the ad, selecting the target audience, and choosing a bidding strategy. In light of the variety of targeting options, researchers have voiced concerns about potentially discriminatory targeting choices, which may exclude marginalized user groups from receiving, for example, job or housing ads, thus increasing marginalization in a “Matthew effect” of accumulated disadvantage. Although discrimination based on certain protected categories such as gender or race is prohibited in many jurisdictions, and even though platforms such as Google and Facebook restrict sensitive targeting features in sectors like employment and housing, problems persist due to problematic proxy categories such as language or location. The fellows address these challenges by calling for a legality-by-default approach to ad targeting and for a feedback loop that informs advertisers about potentially discriminatory outcomes of their ad campaigns.
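To make the feedback-loop idea concrete, here is a minimal sketch of the kind of post-campaign check a platform could run for advertisers. All numbers and function names are hypothetical illustrations (not part of the fellows' guidelines); the heuristic used is the well-known “80% rule” disparate impact ratio.

```python
# Illustrative sketch with invented data: flag skewed ad delivery
# across demographic groups after a campaign has run.

def delivery_rates(impressions, audience):
    """Share of each group's eligible audience that actually saw the ad."""
    return {g: impressions[g] / audience[g] for g in audience}

def flag_skewed_delivery(impressions, audience, threshold=0.8):
    """Return groups whose delivery rate falls below `threshold` times
    the best-served group's rate (the '80% rule' heuristic)."""
    rates = delivery_rates(impressions, audience)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Hypothetical campaign: equal eligible audiences, unequal delivery.
audience = {"women": 10_000, "men": 10_000}
impressions = {"women": 1_200, "men": 3_000}

print(flag_skewed_delivery(impressions, audience))  # {'women': 0.4}
```

A real feedback loop would of course need privacy-preserving aggregate statistics from the platform, but even a simple ratio check like this could alert an advertiser that a facially neutral campaign reached one group far less than another.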
The second set of guidelines centers on ad delivery by platforms, which mainly refers to auctioning ads and optimizing them for relevance. Research has revealed that ad delivery can still be skewed along gender lines even where advertisers were careful not to exclude any user group from their ad campaign. This can be partially explained by market effects: younger women, for instance, are more likely to engage with ads and are therefore more expensive to reach in ad auctions. Another reason is that platforms optimize for relevance based on past user behavior, which makes gender stereotyping likely in historically male- or female-dominated employment sectors. Against this background, the fellows develop a user-centered approach in their guidelines that puts users in charge of their own advertising profiles.
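The market effect described above can be made tangible with a stylized back-of-the-envelope simulation. The prices and function below are assumptions for illustration only: with a fixed budget and a higher per-impression price for one group, even a gender-neutral budget split delivers fewer impressions to the more expensive group.

```python
# Stylized simulation (assumed numbers): a fixed budget split evenly
# across groups still yields unequal impressions when reaching one
# group costs more per impression in the ad auction.

def split_budget_evenly(budget, cost_per_impression):
    """Spend an equal budget share on each group and return how many
    impressions that share buys at each group's market price."""
    share = budget / len(cost_per_impression)
    return {g: int(share / cpi) for g, cpi in cost_per_impression.items()}

# Assumption for illustration: younger women engage more with ads,
# draw more competing bids, and so cost more per impression.
cost = {"younger_women": 0.05, "other_users": 0.02}

print(split_budget_evenly(100.0, cost))
# {'younger_women': 1000, 'other_users': 2500}
```

The point of the sketch is that no discriminatory intent is required: the skew emerges purely from auction economics, which is why the fellows focus on the platform's delivery stage rather than only on advertiser choices.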
The third set of guidelines addresses how ads are displayed to users. As of now, users usually cannot look behind the scenes of targeted advertising to understand why they see certain ads and not others. Existing transparency initiatives by platforms still fall short of providing users with meaningful transparency. The proposed Digital Services Act would impose online advertising transparency obligations on online platforms, but these provisions have yet to become law. In their guidelines, the fellows propose an avatar solution: a user-friendly, gamified tool that visually communicates the information collected by the platform and the attributes used to target the user with job ads.
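As a rough sketch of the information such a tool would surface, the snippet below uses an invented data model (the profile fields and function are illustrative assumptions, not the fellows' design) to answer the “why am I seeing this ad?” question: which of the user's inferred attributes matched the ad's targeting criteria.

```python
# Hedged sketch with an invented data model: surface which inferred
# attributes of a user were used to target a given job ad.

def explain_ad(profile, ad_targeting):
    """Return the user's inferred attributes that matched the ad's
    targeting criteria -- the 'why am I seeing this?' view."""
    return {attr: profile[attr] for attr in ad_targeting if attr in profile}

# Hypothetical inferred profile and targeting criteria.
profile = {"location": "Berlin", "interests": "engineering", "age_band": "25-34"}
ad_targeting = ["location", "interests"]

print(explain_ad(profile, ad_targeting))
# {'location': 'Berlin', 'interests': 'engineering'}
```

The avatar solution goes further by presenting this kind of mapping visually and playfully, but the underlying disclosure is the same: collected data on one side, targeting attributes on the other.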
For more details, read the report by researchers and fellows of the HIIG clinic on “Increasing fairness in targeted advertising – the risk of gender stereotyping by job ad algorithms”.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact email@example.com.