28 May 2019 | doi: 10.5281/zenodo.3254822

New EU Regulation: Upload filters against terrorist propaganda?

A new EU Regulation is intended to prevent the dissemination of terrorist content on online platforms. In fact, it threatens to cause considerable collateral damage to freedom of expression online. HIIG doctoral researcher Alexander Pirang explains how the newly elected European Parliament can prevent the worst.

On March 15, 2019, a gunman used Facebook’s Live Stream function to broadcast his killing of 51 people in two mosques in Christchurch, New Zealand. Until content moderators stopped the stream, roughly 4,000 Facebook users had watched the attack in real time. Copies of the video then went viral. Facebook stated that it removed 1.5 million videos within the first 24 hours after the incident. Despite these efforts, the video material still circulates on the Internet.

In response, several governments and tech companies recently committed to eliminating violent extremist material online in the Christchurch Call. One significant step already taken in this direction is the EU’s draft Regulation on preventing the dissemination of terrorist content online. Proposed by the EU Commission in September 2018, it aims to ensure that online platforms are not abused to spread terrorist material. However, the draft Regulation suffers from severe shortcomings and would likely lead to the arbitrary removal of lawful content. The newly elected European Parliament will play a key role in mitigating the risks for freedom of expression.

Platforms as quasi-regulators

With the draft Regulation, the Commission followed an already familiar approach to platform regulation (think NetzDG). After the Commission had unsuccessfully pressed online platforms to voluntarily limit the dissemination of terrorist content, it introduced binding rules that devolve regulatory powers to private companies. Put differently, the proposal forces platforms to control online speech on the EU’s behalf.

The scope of the proposed Regulation is broad. It covers “hosting service providers” who offer services within the EU, regardless of their place of establishment or size. The proposal provides for two novel instruments. The first requires providers to remove terrorist material from their services within one hour of receiving a removal order from “competent authorities” to be designated by Member States; systematic failure to meet this time frame may be sanctioned with fines of up to 4% of the provider’s global turnover. The second is a referral system, under which competent authorities and EU bodies such as Europol notify providers that certain content might be terrorist material. Providers are only obliged to expeditiously assess this content against their terms and conditions; the final decision whether to remove the material rests with them.

In addition, providers that are exposed to terrorist content are required to proactively deploy automated filters. The Commission’s proposal also foresees reporting obligations and safeguards, such as a complaints mechanism for affected users.
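
To make concrete what such “proactive measures” typically amount to in practice, here is a minimal, purely hypothetical sketch of a hash-matching upload filter of the kind used to prevent re-uploads of known material. All names and the fingerprint database below are illustrative assumptions, not part of the Regulation or of any actual platform’s system:

```python
import hashlib

# Hypothetical database of fingerprints of files that have already been
# classified as terrorist content. Real-world systems rely on shared
# perceptual hashes so that re-encoded or slightly edited copies still
# match; a plain SHA-256 digest is used here only to keep the sketch
# short. The entry below is a placeholder, not a real fingerprint.
KNOWN_TERRORIST_FINGERPRINTS = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def fingerprint(upload: bytes) -> str:
    """Compute an exact cryptographic hash of the uploaded file."""
    return hashlib.sha256(upload).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block an upload if its fingerprint matches known material.

    Note what the filter does NOT see: who is uploading, why, and in
    what context. A news report or academic analysis embedding the
    same footage yields the same fingerprint and is blocked exactly
    like a propaganda re-upload.
    """
    return fingerprint(upload) in KNOWN_TERRORIST_FINGERPRINTS
```

Even this toy version exposes the structural problem: the decision rests purely on the content itself, so lawful journalistic, educational or satirical uses of flagged material are indistinguishable from the abuse the Regulation targets. The Parliament later picked up exactly this point, as described under (iv) below.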

Grave concerns regarding freedom of expression

If the Commission’s proposal were passed into law without substantial amendments, the Regulation would severely undermine freedom of expression. Needless to say, online terrorist content is a serious challenge that needs to be countered with targeted and effective measures. It should be just as obvious, however, that any legislation to that end must adhere to EU law, including the Charter of Fundamental Rights. The draft Regulation fails to strike that balance, as Martin Scheinin, former UN Special Rapporteur on human rights and counter-terrorism, noted on the occasion of a recent talk at the HIIG.

Damage control by the European Parliament

The Council nevertheless largely endorsed the Commission’s proposal in December 2018, despite sharp criticism from human rights organizations. Fortunately, the European Parliament rose to the occasion and voted for a comprehensive overhaul in its First Reading on April 17, 2019, adopting several key improvements:

(i) The Parliament limited the Regulation’s scope to content disseminated to the public. This clarification was missing in the Commission’s proposal, which could be interpreted as also applying to private communication hosted by messenger services or cloud infrastructure providers. The Parliament also narrowed the Regulation’s definition of terrorist content by excluding educational, journalistic or research material, as well as “content which represents an expression of polemic or controversial views in the course of public debate.”

(ii) The Parliament set out that removal orders may only be issued by a judicial or functionally independent administrative authority. As three UN Special Rapporteurs on human rights noted in a joint report, “the [Commission’s] proposal does not specify whether the competent authorities designated by Member States would benefit from any level of institutional and substantive independence from the executive.” Considering the democratic backsliding in some EU countries, this amendment is crucial. On the other hand, the Parliament regrettably did not change the rigid one-hour time frame, which especially burdens smaller platforms.

(iii) The referral system was scrapped. Under the Commission’s proposal, platforms would have been compelled to assess material that could be terrorist content, while national authorities could hide behind the platforms’ decisions, thereby bypassing their obligations under the Charter of Fundamental Rights. Given that platforms’ terms of service often lack clarity and do not reflect fundamental rights standards, this would have been irresponsible; the Parliament rightly rejected the provision.

(iv) The Parliament also removed the provision on proactive measures. It emphasized that obligating providers to proactively filter content and to prevent its re-upload is not compatible with EU law, namely Article 15 of the E-Commerce Directive. This is fortunate: as Amélie Heldt pointed out in a recent paper, “[u]pload-filters still lack the ability to understand content in context or to identify satire in videos,” which means that they are “not fit for purpose in meeting the requirements of our common human rights framework.”

What happens next

These improvements may be fleeting, however, as they might be rolled back in the next stages of the lawmaking process. Whether this happens will largely depend on how successfully the newly elected European Parliament handles the so-called trilogue negotiations with the Commission and the Council. These closed-door meetings are expected to start in September or October 2019. Their objective is to establish a common position between the EU’s co-legislators, on the basis of which the Regulation can be formally adopted. Considering that the Council appears to take little issue with the Commission’s proposal, any compromise will likely require the Parliament to abandon some of the improvements described above.

The new Parliament therefore needs to continue to push for a Regulation that does not erode freedom of expression under the pretense of combating terrorism.

An abridged version of this commentary was first published by Verfassungsblog on May 22, 2019. Alexander Pirang is a research associate with the Humboldt Institute for Internet and Society. His research interests focus on constitutional law, EU law, and media regulation.

This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de

Alexander Pirang

Former Associated Doctoral Researcher: AI & Society Lab
