24 July 2018 | doi: 10.5281/zenodo.1327555

A risk worth taking: Studying content moderation on social media platforms

While social media platforms choose to present themselves as neutral, there is growing complexity in analyzing their structure, their operational dynamics and the regulatory frameworks that must accompany them. Guest author Sana Ahmad researches the content moderation industry in India.

The term ‘social media’ has come a long way. Information scientists and interpersonal communication researchers of the early 1980s spoke of ‘computer-mediated communication’ in the form of emails, forums and BBSs. The 1990s brought the evolutionary label ‘new media’ (distinguishing it from broadcast media), and in the mid-2000s ‘Web 2.0’ described the then-growing social technologies such as MySpace, Wikipedia and Reddit. The broader ‘digital media’ also took in video games, e-books and internet radio. Today, ‘social media’ has become commonly used jargon.

However, it was Tarleton Gillespie, a Microsoft researcher, who helped facilitate the definitional evolution of social networking sites into ‘social media platforms’. He sheds light on the growth of the digital intermediaries he identifies as ‘platforms’ and recommends looking at social media sites as platforms in terms of their ‘technical design, economic imperatives, regulatory frameworks and public character’.

This is a great development, to which other critical theorists of political economy have contributed as well. However, the prevalence of hate speech, violent content, fake news and other illicit material on social media platforms, as well as their role in manipulating democracy and influencing election outcomes, has sparked sudden public interest in how these otherwise leniently regulated platforms operate.

Countries such as Germany, Austria and now the USA (especially in the wake of the 2016 US election scandal and the Cambridge Analytica controversy) are using legal channels to prohibit the presence of hate speech and violent content on social media platforms. However, conceptual difficulties remain in defining hate speech, which at times blurs into users’ freedom of expression. Examples such as the GamerGate scandal, a misogynistic campaign of hate-driven harassment of women in the world of video games, the manifestations of online hate in the era of bulletin board systems, or the progression of 4chan and its infamous random board /b/ provide grounds for analyzing the complex interplay of power, history, culture, subjectivity and other factors in networked communicative practices.

Content moderation practices are treated as industrial secrets

While there is an ongoing discussion on the need to protect social media users from harmful content on these platforms and to enable stringent regulatory measures to do so, there is not enough information about the industrial-level processes of moderating and controlling illicit content. From what is known, social media companies treat content moderation practices as industrial secrets, on the grounds of protecting the identity of the workers (the moderators), guarding their technological property, or simply because disclosure would create further liability for the moderators.

Further, moderation on social media platforms is publicly understood through the lens of automation. Technologies such as PhotoDNA, an image detection tool used against child exploitation imagery, developments in Adaptive Listening technology to assess user intent, or even 3D Modeling technology modelled on industrial assembly-line moderation, assist in moderating the mammoth amount of content posted online. However, the question worth asking is whether these automated technologies are capable of detecting cases involving satire, awareness- or education-related material, or politically sensitive issues.
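To make concrete why this question matters, consider how known-image detection works in principle. PhotoDNA’s actual algorithm is proprietary, so the Python sketch below uses a toy ‘average hash’ as an illustrative stand-in (assuming the Pillow imaging library); the function names and threshold are hypothetical, not part of any real moderation system.

```python
# A minimal sketch of hash-based image matching, the general idea behind
# known-content detection tools such as PhotoDNA. PhotoDNA's real algorithm
# is proprietary; this toy "average hash" is only an illustrative stand-in.
from PIL import Image  # assumes the Pillow library is installed


def average_hash(path: str, size: int = 8) -> int:
    """Shrink an image to a tiny grayscale grid and encode each pixel as
    one bit: 1 if brighter than the grid's mean brightness, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


def is_known(path: str, known_hashes: set[int], threshold: int = 5) -> bool:
    """Flag an upload if its hash is close to any hash in a reference
    database of previously identified illicit images (hypothetical here)."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

The limitation the sketch exposes is structural: such a system only recognises images that resemble ones already in its database. It has no concept of satire, news value, educational framing or political context, which is precisely where human judgment, and hence human moderation labour, comes in.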

Much can be written about the discrepancies in assuming that machines are taking over human jobs, an assumption contradicted by the wealth of academic literature showing that humans still occupy menial service-sector jobs. However, the focus of this blog post remains on the importance of researching existing content moderation labour practices. Researchers such as Sarah Roberts and Gillespie have been occupied with shedding light on industrial-level commercial content moderation. However, these research pieces, along with sporadic media articles and carefully packaged audio-visual documentaries, have to peer through the closed doors of an industry that heavily guards its secrets.

Also read our issue in focus: Work in the digital age

My doctoral project looks at the content moderation industry’s production model, with a focus on labour practices in India. This research is exciting, especially because it enables me to learn about this invisible work, performed by moderators in exchange for low wages and without basic work standards. While social media companies also maintain small, highly skilled in-house moderation teams, the work is often outsourced across national borders, either to content management companies and/or online to a global pool of freelancers through international and domestic online labour markets. A large share of this work is outsourced to India, where dreams of belonging to the Information and Communication Technology sector run high. India’s pre-existing business connections, markedly lower wage rates and weak regulatory frameworks have made the country a popular destination for work offloaded from the Global North.

Studying this subject is not uncomplicated, especially due to the lack of access to companies’ policies and workers’ testimonials. Nevertheless, it is a risk worth taking in order to begin to understand the current production and consumption models of networked communication systems.


Sana Ahmad is a PhD student at the Freie Universität Berlin and is writing her thesis on the content moderation industry in India. She is currently affiliated as a guest researcher with the “Globalisation, Work and Production” unit at the Wissenschaftszentrum Berlin für Sozialforschung (WZB).

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.
