25 September 2025 | doi: 10.5281/zenodo.17201618

Counting without accountability? An analysis of the DSA’s transparency reports

The Digital Services Act aims to hold platforms more accountable for illegal content by demanding greater transparency. Platforms like Facebook, Instagram, TikTok or X must now publish detailed reports showing, for example, how many posts they removed and how quickly. These reports are meant to give regulators, researchers, and the public insight into how well platforms are enforcing the rules. But do the reports really deliver what they promise? Or is this measure just a new but ultimately useless addition to the flood of EU reports, without any actual improvement in practice?

Online platforms such as Instagram, TikTok or X have become an integral part of everyday life. Yet despite their societal relevance, the privately owned companies behind these platforms remain largely opaque about how they operate. They rarely explain how they choose which content to distribute, remove or suppress, even though their decisions determine what we see online.

Holding platforms accountable

One of the core aims of the new EU digital rulebook, including the Digital Services Act (DSA), is to regulate how platforms handle illegal content and to enable more effective action against it. What constitutes illegal content is not explicitly codified in the DSA, but ultimately depends on what is illegal under Union or Member State law. This often includes child sexual abuse material, incitement to terrorism, illegal hate speech, or infringement of intellectual property rights.

Platforms are to be held accountable for how they react once they are made aware of illegal content. For example, if a user reports a post or video that they believe contains illegal hate speech, the platform must review the report and decide on an action, such as deleting or restricting the content. Later, the platform must also disclose in a transparency report how many such cases it handled within a given timeframe and what it did about the reported content.

One of the DSA’s main aims is to increase the accountability of platforms by promoting transparency. A key assumption underlying the DSA is that digital services may pose systemic risks to society. Examples of such risks include widespread disinformation and the undermining of electoral integrity. To identify and limit these risks at an early stage, maximum transparency is necessary. This is supposed to enable public authorities, researchers and civil society to recognise potential systemic risks, and allow individual users to understand and assert their rights.

The DSA’s transparency reports

Transparency reports are therefore a key accountability measure under the DSA. Under Articles 15, 24 and 42, platforms must publish comprehensible reports on their content moderation activities. These reports must include information on the types of illegal content moderated and the actions taken. Content moderation actions can include deleting the content, demoting it or geo-blocking it in a specific country. The reports must be publicly available in a machine-readable format (Art. 15(1) DSA). Most platforms provide them as PDF documents or HTML pages on their websites. They are also collected and linked on an EU website.
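To illustrate what “machine-readable” could mean in practice, here is a minimal, hypothetical sketch of a single report entry as structured data. The field names and values are my own assumptions for illustration, not an official DSA schema or any platform’s actual format.

```python
import json

# Hypothetical sketch of a single machine-readable report entry.
# Field names and values are illustrative assumptions, not an official DSA schema
# and not any platform's actual format.
entry = {
    "platform": "ExamplePlatform",
    "reporting_period": {"start": "2024-04-01", "end": "2024-09-30"},
    "basis": "Art. 16 notice",                 # or "Art. 9 order", "own initiative"
    "alleged_illegality": "illegal hate speech",
    "member_state": "DE",
    "action_taken": "content removed",         # or "geo-blocked", "demoted", "no action"
    "ground_for_action": "terms of service",   # or "Union/Member State law"
}

print(json.dumps(entry, indent=2))
```

Structured entries along these lines could be aggregated and cross-tabulated automatically, whereas figures locked into PDF tables first have to be extracted by hand.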

But how much transparency do these reports actually provide? Are they really suited to uncover how platforms decide what to take down and what not to? And most importantly, are they an adequate measure to increase platform accountability? By analysing the transparency reports of selected online platforms, I argue that current transparency reports fall short of delivering true accountability with regard to the moderation of illegal content. Even though a new standardised reporting template has recently been introduced, and many hope it will improve the situation, I argue that the template can only address some of the inadequacies of the current reports while potentially creating new problems.

Transparency reports in practice: Three observations

What do the transparency reports actually reveal about how platforms handle illegal content? Based on a qualitative analysis of the two rounds of DSA transparency reports published in 2024 by seven very large online platforms (VLOPs, i.e. platforms with more than 45 million average monthly users), namely Instagram, Facebook, LinkedIn, Pinterest, Snapchat, TikTok and X, I examined how these VLOPs fulfil their transparency requirements. The analysis was conducted using MAXQDA, a software package for qualitative content analysis.

Although the European Commission (EC) provides guidance on the content of transparency reports, it does not specify their structure or level of detail. The general idea seems to have been, on the one hand, to give the platforms some leeway and, on the other, to see whether it would be possible to build on best practices and refine the specifications later.

The new template is intended to clarify the expected form, content and level of detail of the reports (European Commission, 2024). However, it has only applied since July 2025 and has not yet been used in practice. In the reports available so far, each platform has interpreted the DSA’s specifications independently. My analysis revealed three key findings, highlighting the variety of approaches platforms adopt towards their transparency reporting obligations and the limitations of the current reporting format.

Observation 1: Disconnected data points

One major issue is the lack of connection between different data points within the individual reports. For example, figures relating to moderation decisions, user complaints or automatic deletions are often presented only as individual values, with no reference to one another. This makes it hard to relate the figures to each other and, in turn, harder for researchers to interpret them or integrate them into meaningful analyses.

An example of this is the reporting of authority orders under Article 9 of the DSA, which sets out how Member State authorities, such as national courts or Digital Services Coordinators (in Germany, the Bundesnetzagentur), can request that platforms take action against illegal content. The term “order” can be misleading though, as platforms are not required to delete content that is referred to them by a Member State authority. Instead, they review the content independently and then decide whether to take action.

In its 2024 transparency reports, Facebook provides two separate tables: one lists the number of orders received per Member State, the other the number of orders by content type, such as terrorist content, illegal speech, etc. (Facebook, 2024, pp. 3-5; Facebook, 2025, pp. 4-6). However, these two tables are not linked. It is therefore impossible to find out, for example, how many orders to act against terrorist activity were issued to Facebook by Italian authorities. This lack of cross-referencing severely limits the analytical value of the data.
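The limitation can be made concrete with a small sketch. The figures below are invented for illustration only; they are not Facebook’s actual numbers. Given only the two marginal tables, the cross-tabulation a researcher would need cannot be reconstructed.

```python
# Invented, illustrative figures -- not Facebook's actual data.
orders_by_member_state = {"Italy": 40, "Germany": 60}                      # table 1: orders per Member State
orders_by_content_type = {"terrorist content": 30, "illegal speech": 70}   # table 2: orders per content type

# Both tables describe the same 100 orders ...
assert sum(orders_by_member_state.values()) == sum(orders_by_content_type.values()) == 100

# ... yet the question "how many terrorism-related orders came from Italy?" cannot be
# answered: any value between 0 and 30 is consistent with both tables. Only a linked,
# cross-tabulated table, e.g. {("Italy", "terrorist content"): n, ...}, would allow that.
```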

Observation 2: Arbitrary and inconsistent categories

There is no standardised categorisation of illegal content across platforms, sometimes not even within the individual reports. Each platform has created its own set of categories, loosely based on EU or Member State law, but ultimately inconsistent and often arbitrary.

LinkedIn, for example, uses the label “Illegal or harmful speech” (LinkedIn, 2024, p. 17), while Facebook uses terms like “Hate speech” and “Misinformation” (Facebook, 2024, p. 4). Pinterest takes a different approach and refers directly to the specific laws that a piece of content is said to violate.

Most platforms also include a vague catch-all category such as “Other illegal content”. For example, Instagram’s transparency report for April to September 2024 states that it received 91 orders from authorities to act against “other” types of illegal content, accounting for almost one third of all cases (Instagram, 2024, p. 4). Facebook received 113,638 user notices for “other illegal content”, accounting for around 45% of the total 248,748 notices received.
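For reference, the share of the catch-all category follows directly from the counts quoted above; a quick check:

```python
# Share of the catch-all category in Facebook's user notices, using the figures quoted above.
other_notices = 113_638    # notices filed under "other illegal content"
total_notices = 248_748    # all user notices received in the same period

print(f"{other_notices / total_notices:.1%}")   # -> 45.7%, i.e. roughly 45% of all notices
```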

Furthermore, the fact that some platforms use different categories for authority orders and user-submitted reports of illegal content adds unnecessary complexity. This makes it difficult to compare different platforms and further complicates the picture of how illegal content is handled.

Observation 3: Opaque decision-making 

The most striking issue, perhaps, is the lack of clarity surrounding what happens after a platform receives a user notice or an authority order. In many reports, it is unclear what action was taken, on what basis, and whether any action was taken at all. While most platforms report how many orders they have received from authorities, they do not report whether they responded by deleting the reported post, deleting the account, geo-blocking the content in a specific country, or reducing a post’s visibility. 

LinkedIn merely indicates whether “at least some action was taken” (LinkedIn, 2025, pp. 16-17) in response to an authority order, but does not provide further details. Pinterest is the notable exception: it clearly indicates whether it deactivated content, restricted it geographically, or limited its distribution (Pinterest, 2025).

Another issue is the considerable confusion surrounding the two types of reporting mechanisms: the specific mechanism for reporting illegal content under Article 16 of the DSA, and the general channels that platforms have in place for reporting any type of rule violation (e.g. content that violates a platform’s advertising policy but not any laws). Article 16 requires platforms to have a reporting mechanism through which users can report potentially illegal content in a precise and substantiated way. For example, if a user believes that a post incites terrorism, they must be able to report it to the platform in a way that clearly indicates that it is potentially illegal content.  

In practice, all user reports, regardless of the reason given, are first reviewed for violations of the platforms’ own rules, such as community guidelines or advertising policies. At the outset of the review process, it is therefore irrelevant whether a report is an Article 16 notice or a different kind of user report. If content reported via an Art. 16 notice is found to violate a platform rule and is deleted globally as a result, it is never checked against the law, even if it was originally reported for that reason. So, if the terrorism-inciting post in our example also violated a platform’s advertising policy, it would never be reviewed for breaching any anti-terrorism laws. The content would disappear globally, not just in the country where it might be unlawful.

While this approach is clearly efficient – why block a post in only one country if it violates a platform rule and would be deleted globally anyway? – it raises questions about who decides how public debate takes place and on the basis of which rules.
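The review sequence described above can be summarised as a simple decision flow. The sketch below is my own schematic reading of that process, not any platform’s documented pipeline; the helper checks are hypothetical stand-ins.

```python
# Hypothetical stand-ins for a platform's internal checks -- assumptions for illustration only.
def violates_platform_rules(content: str) -> bool:
    return "ad-policy-violation" in content

def violates_national_law(content: str) -> bool:
    return "incites-terrorism" in content

def review_report(content: str, reported_as_illegal: bool) -> str:
    """Schematic reading of the review sequence described above; not an actual platform pipeline."""
    # Step 1: every report, including Art. 16 notices, is first checked against the platform's own rules.
    if violates_platform_rules(content):
        # A rule violation leads to global removal; the legal assessment is never reached.
        return "removed globally (terms of service)"
    # Step 2: only content that passes the platform-rule check is assessed against the law.
    if reported_as_illegal and violates_national_law(content):
        return "geo-blocked where unlawful (legal basis)"
    return "no action"

# The example from the text: a post reported under Art. 16 for inciting terrorism that also
# breaches the advertising policy is removed globally; the legal question is never examined.
print(review_report("incites-terrorism ad-policy-violation", reported_as_illegal=True))
```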

Neither LinkedIn nor Snapchat explicitly distinguishes between actions taken following a user report on the basis of the law and those based on internal policies. Snapchat even argues that breaches of the law are automatically covered by its own rules, as a violation of its Community Guidelines includes “reasons of illegality” (Snapchat, 2024). This seems to violate Article 15 of the DSA, which clearly states that providers must specify whether an action was taken on the basis of the law or of their own terms and conditions.

In summary, the reports analysed here reveal major inconsistencies and blind spots. From disconnected data points and arbitrary categories to the opaque reasoning behind content moderation decisions, the reports currently fall short of offering real transparency. Against this backdrop, the following section outlines three key criticisms of the current reporting system and considers whether the new template might address some of these shortcomings.

Three key flaws in platform transparency reports 

  1. The provided data is borderline unusable. The big differences in the amount, level of detail, operationalisation and presentation of data across reports make it very difficult to compare the platforms and to assess which measures are most effective in combating illegal content. In many cases, it is not even possible to establish connections between data points within individual reports, which further hinders any meaningful evaluation. The new template, an Excel spreadsheet into which platforms enter their data, should help to address some of these problems. For one thing, comparability is likely to improve if all platforms provide their information in the same format (see the sketch after this list). The template also introduces fixed categories of illegal content and requires that the category “other” be described. However, the template only asks platforms to report the “number of items moderated” (European Commission, 2025), without specifying what type of content moderation action was taken. This is surprising, given that Article 15 of the DSA requires platforms to report how many notices they received under Article 16 and to categorise these by “any action taken pursuant to the notices” (Art. 15(1)(b) DSA). The new template, however, does not seem to include this requirement.
  2. The process of moderating illegal content remains largely opaque. Even when the data in the reports shows which actions followed which notices or reports, the underlying process remains a black box. Questions such as “What criteria are used to decide cases?” and “What legal expertise do content moderators have?” remain unanswered. Snapchat, for example, received 82,011 Art. 16 notices for content potentially violating rules against false information in the reporting period between January and June 2024. Out of these, 106 pieces of content were deleted, 255 accounts were issued warnings and 12 accounts were locked. Setting aside the absurdly low number of actual actions, it is impossible to know why Snapchat deleted content in some cases and not in others, or why it locked some accounts and merely issued warnings to others. The new template does not ask for this kind of information, so it is unlikely that we will see an improvement in this regard.
  3. Platform rules remain the gold standard. While this makes sense for efficiency reasons and is completely in line with the DSA, platforms still primarily check whether content violates their own rules rather than EU or Member State law. This renders the separate mechanism for reporting illegal content somewhat obsolete. It also raises the question of which rules are considered more important: those set by a private platform company or democratically legitimised laws. The new template cannot fundamentally challenge this hierarchy, nor does it make the rules and criteria that platforms use for content moderation decisions more visible. Instead, the template further entrenches the already limited amount of information that platforms make available.
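To illustrate point 1 above: once all platforms report in the same spreadsheet format, cross-platform comparison becomes scriptable. The sketch below is purely hypothetical; the file names, sheet name and column headers are assumptions, not the actual structure of the Commission’s Annex I template.

```python
import pandas as pd  # assumes pandas plus an Excel engine such as openpyxl is installed

# Hypothetical file names, sheet name and column headers -- not the actual layout
# of the Commission's Annex I template.
files = {
    "PlatformA": "platform_a_dsa_report.xlsx",
    "PlatformB": "platform_b_dsa_report.xlsx",
}

frames = []
for platform, path in files.items():
    df = pd.read_excel(path, sheet_name="notices")   # assumed sheet name
    df["platform"] = platform
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# With identical categories and columns across platforms, a comparison becomes a one-liner.
print(combined.groupby(["platform", "illegal_content_category"])["items_moderated"].sum())
```

Whether such analyses will be possible in practice depends, of course, on how completely and consistently platforms fill in the template.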

The template is not the saviour some make it out to be

The analysis of the 2024 transparency reports of seven very large online platforms (Instagram, Facebook, LinkedIn, Pinterest, Snapchat, TikTok and X) reveals significant shortcomings in the way these companies report on the moderation of illegal content. Key data points are not connected, categories of illegal content are applied inconsistently and arbitrarily, and the reasoning behind content moderation remains largely opaque.

These gaps make it difficult to assess how platforms actually respond to illegal content and to measure the effectiveness of their response. The newly introduced EU template for transparency reports is a step towards greater comparability and clarity, as it standardises reporting formats and categories. It may thereby help to reduce the inconsistency observed so far. However, the template leaves important blind spots unaddressed. Most notably, it does not require platforms to explain the reasoning behind their moderation decisions or to distinguish clearly between enforcement based on the law and enforcement based on internal rules. Platforms may also confine themselves to the template’s minimum requirements, which could reinforce existing shortcomings and further limit the availability of meaningful information. Thus, neither the transparency reports nor the new template currently achieve accountability through transparency.

References

Transparency reports 

Facebook. (2024). Regulation (EU) 2022/2065 Digital Services Act Transparency Report for Facebook. April—September 2024. Facebook.

Facebook. (2025). Regulation (EU) 2022/2065 Digital Services Act Transparency Report for Facebook. October—December 2024. Facebook.

Instagram. (2024). Regulation (EU) 2022/2065 Digital Services Act Transparency Report for Instagram. April—September 2024. Instagram.

Instagram. (2025). Regulation (EU) 2022/2065 Digital Services Act Transparency Report for Instagram. October—December 2024. Instagram.

LinkedIn. (2024). Digital Services Act Transparency Report. January—June 2024. LinkedIn. https://www.linkedin.com/help/linkedin/answer/a1678508

LinkedIn. (2025). Digital Services Act Transparency Report. July—December 2024. LinkedIn. https://content.linkedin.com/content/dam/help/tns/en/February-2025-DSA-Transparency-Report.pdf

Pinterest. (2024). Digital Services Act Transparency Report. January—June 2024. Pinterest. https://policy.pinterest.com/en/transparency-report-h1-2024

Pinterest. (2025). Digital Services Act Transparency Report. July—December 2024. Pinterest. https://policy.pinterest.com/en/digital-services-act-transparency-report-jul-2024-dec-2024

Snapchat. (2024). European Union Transparency | Snapchat Transparency. January—June 2024. Snapchat. https://values.snap.com/privacy/transparency/european-union

TikTok. (2024). TikTok’s DSA Transparency report. January—June 2024. TikTok. https://sf16-va.tiktokcdn.com/obj/eden-va2/zayvwlY_fjulyhwzuhy[/ljhwZthlaukjlkulzlp/DSA_H2_2024/TikTok-DSA-Transparency-Report-Jan-to-Jun-2024.pdf

TikTok. (2025). TikTok’s DSA Transparency report. July—December 2024. TikTok. https://sf16-va.tiktokcdn.com/obj/eden-va2/zayvwlY_fjulyhwzuhy[/ljhwZthlaukjlkulzlp/DSA_H2_2024/Corrected%20Data/TikTok%20-%20DSA%20Transparency%20report%20-%20July%20-%20December%202024%20-21.03.2025.pdf

X. (2024). DSA Transparency Report. April—September 2024. X. https://transparency.x.com/dsa-transparency-report.html

X. (2025). DSA Transparency Report. October 2024—March 2025. X. https://transparency.x.com/assets/dsa/transparency-report/dsa-transparency-report-april-2025.pdf

EU regulations and template

European Commission. (2022). Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act, DSA).

European Commission. (2024). Commission Implementing Regulation (EU) laying down templates concerning the transparency reporting obligations of providers of intermediary services and of providers of online platforms under Regulation (EU) 2022/2065 of the European Parliament and the Council.

European Commission. (2025). Annex I – Transparency reports template [Microsoft Excel file]. Publications Office of the European Union. 

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Jella Ohnesorge

Student assistant: DSA research network
