New EU Regulation: Upload filter against terrorist content?
A proposed EU Regulation aims to stop the spread of terrorist content on online platforms. Unfortunately, the misguided draft gravely threatens freedom of expression. HIIG researcher Alexander Pirang explains how the newly elected European Parliament can still curb the damage.
On March 15, 2019, a gunman used Facebook’s Live Stream function to broadcast his killing of 51 people in two mosques in Christchurch, New Zealand. Until content moderators stopped the stream, roughly 4,000 Facebook users had been watching the attack in real-time. Then copies of the video went viral. Facebook stated that it removed 1.5 million videos within the first day of the incident. Despite these efforts, the video material still circulates on the Internet.
In response, several governments and tech companies recently committed to eliminating violent extremist material online in the Christchurch Call. One significant step already taken in this direction is the EU’s draft Regulation on preventing the dissemination of terrorist content online. Proposed by the EU Commission in September 2018, it aims to ensure that online platforms are not abused to spread terrorist material. However, the draft Regulation suffers from severe shortcomings and will likely lead to the arbitrary removal of lawful content. The newly elected European Parliament will play a key role in mitigating the risks for freedom of expression.
Platforms as quasi-regulators
With the draft Regulation, the Commission followed an already familiar approach to platform regulation (think NetzDG). After the Commission had unsuccessfully pressed online platforms to voluntarily limit the dissemination of terrorist content, it introduced binding rules that devolve regulatory powers to private companies. Put differently, the proposal forces platforms to control online speech on the EU’s behalf.
The scope of the proposed Regulation is broad. It covers “hosting service providers” who offer services within the EU, regardless of their place of establishment or size. The proposal provides for two novel instruments: the first requires providers to remove terrorist material from their services within one hour of receiving a removal order from “competent authorities” to be designated by Member States. Systematic failure to meet this time frame may be sanctioned with a fine of up to 4% of the respective provider’s global turnover. The second instrument is a referral system, under which competent authorities and EU bodies such as Europol notify providers that certain content might be terrorist material. Providers are only obliged to expeditiously assess this content against their terms and conditions; they make the final decision whether to remove the material or not.
In addition, providers that are exposed to terrorist content are required to proactively deploy automated filters. The Commission’s proposal also foresees reporting obligations and safeguards, such as a complaints mechanism for affected users.
Grave concerns regarding freedom of expression
If the Commission’s proposal passed into law without substantial amendments, the Regulation would severely undermine freedom of expression. Needless to say, online terrorist content is a serious challenge, which needs to be countered with targeted and effective measures. It should be just as obvious, however, that any legislation to that end must adhere to EU law, including the Charter of Fundamental Rights. The draft Regulation fails to strike that balance, as Martin Scheinin, former UN Special Rapporteur on human rights and counter-terrorism, noted on the occasion of a recent talk at the HIIG.
Damage control by the European Parliament
The Council nevertheless largely endorsed the Commission’s proposal in December 2018, despite sharp criticism from human rights organizations. Fortunately, the European Parliament rose to the occasion and voted for a comprehensive overhaul of the proposal in its First Reading on April 19, 2019, including key improvements to the Commission’s proposal:
(i) The Parliament limited the Regulation’s scope to content disseminated to the public. This clarification was missing in the Commission’s proposal, which could be interpreted as also applying to private communication hosted by messenger services or cloud infrastructure providers. The Parliament also narrowed the Regulation’s definition of terrorist content by excluding educational, journalistic or research material, as well as “content which represents an expression of polemic or controversial views in the course of public debate.”
(ii) The Parliament set out that removal orders may only be given by a judicial or functionally independent administrative authority. As three UN Special Rapporteurs on human rights noted in a joint Report, “the [Commission’s] proposal does not specify whether the competent authorities designated by Member States would benefit from any level of institutional and substantive independence from the executive.” Considering the democratic backsliding in some EU countries, this amendment is crucial. On the other hand, the Parliament regrettably did not change the rigid one-hour time frame, which especially burdens smaller platforms.
(iii) The referral system was scrapped. Under the Commission’s proposal, platforms would be compelled to assess material that could be terrorist content. National authorities would be allowed to hide behind platforms’ decisions, thereby bypassing their obligations under the Charter of Fundamental Rights. This is irresponsible, given that platforms’ terms of service often lack clarity and do not reflect fundamental rights standards – the Parliament rightly rejected this provision.
(iv) The Parliament also removed the provision on proactive measures. It emphasized that obligating providers to proactively filter content and to prevent its re-upload is not compatible with EU law, namely Art. 15 of the E-Commerce Directive. This is fortunate – as Amélie Heldt pointed out in a recent paper, “[u]pload-filters still lack the ability to understand content in context or to identify satire in videos,” which means that they are “not fit for purpose in meeting the requirements of our common human rights framework.”
What happens next
These improvements may be fleeting, however, as they might be rolled back in the next stages of the lawmaking process. Whether this will be the case largely depends on how successfully the newly elected European Parliament handles the so-called trilogue negotiations with the Commission and the Council. These closed-door meetings are expected to start in September or October 2019. Their objective is to establish a common position between the EU’s co-legislators, based on which the Regulation can be formally adopted. Considering that the Council appears to take little issue with the Commission’s proposal, any compromise will likely require the Parliament to abandon some of the improvements described above.
The new Parliament therefore needs to continue to push for a Regulation that does not erode freedom of expression under the pretense of combating terrorism.
An abridged version of this commentary was first published by Verfassungsblog on May 22, 2019. Alexander Pirang is a research associate with the Humboldt Institute for Internet and Society. His interests focus on constitutional law, EU law, and media regulation.