{"id":51204,"date":"2018-08-09T09:00:49","date_gmt":"2018-08-09T07:00:49","guid":{"rendered":"https:\/\/www.hiig.de\/?p=51204"},"modified":"2023-03-28T17:12:53","modified_gmt":"2023-03-28T15:12:53","slug":"removals-of-online-hate-speech-numbers","status":"publish","type":"post","link":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/","title":{"rendered":"Removals of online hate speech in numbers"},"content":{"rendered":"<p><i><span style=\"font-weight: 400;\">Six months after a new German law <\/span><\/i><i><span style=\"font-weight: 400;\">\u2013<\/span><\/i><i><span style=\"font-weight: 400;\"> the Network Enforcement Act <\/span><\/i><i><span style=\"font-weight: 400;\">\u2013<\/span><\/i><i><span style=\"font-weight: 400;\"> came into full effect, social media platforms are required to report on illegal hate speech. Now that these figures have been made available, what can we learn from them? HIIG researcher <\/span><\/i><b><i>Kirsten Gollatz<\/i><\/b><i><span style=\"font-weight: 400;\">, HIIG fellow <\/span><\/i><b><i>Martin J. Riedl<\/i><\/b><i><span style=\"font-weight: 400;\"> as well as <\/span><\/i><b><i>Jens Pohlmann<\/i><\/b><i><span style=\"font-weight: 400;\"> took a closer look at the reports.<\/span><\/i><\/p>\n<p><strong>CORRECTION: An earlier version of this blog article incorrectly referred to YouTube being \u201cflooded with complaint notices, but [that it] only identified a need to take action in roughly 11 percent of reported pieces of content.\u201d The reference to YouTube ought to have been to Twitter. This has now been corrected in the text below.<\/strong><\/p>\n<p>In June 2017, the German parliament passed a federal law that requires social media platforms to implement procedures that allow users to report illegal content. 
According to the <em>Network Enforcement Act<\/em><sup><a id=\"anker1\" href=\"#fn1\">[1]<\/a><\/sup>, also known as NetzDG, platforms are tasked to publish reports on how they deal with these complaints. The law came into force on October 1, 2017 and took full effect on January 1, 2018 after a transition period. If platforms systematically fail to establish and enforce such complaint management systems, they can be fined up to 50 million euros under the law.<\/p>\n<p>At the end of July, <a href=\"https:\/\/cdn.cms-twdigitalassets.com\/content\/dam\/transparency-twitter\/data\/download-netzdg-report\/netzdg-jan-jun-2018.pdf\">Twitter<\/a>, <a href=\"https:\/\/fbnewsroomus.files.wordpress.com\/2018\/07\/facebook_netzdg_july_2018_english-1.pdf\">Facebook<\/a> and <a href=\"https:\/\/transparencyreport.google.com\/netzdg\/overview?hl=en\">Google<\/a> released their figures for the first time. While the numbers will bring public attention to platforms\u2019 content moderation practices in the short term, it is as yet unclear what effects the newly established infrastructure for removing problematic content will have in the long haul. Will the largest platforms act more proactively in detecting and removing user content and accounts? The question that lies at the core of these recent reports is this: Do the figures indicate \u2018overblocking\u2019, a concern previously articulated by the United Nations Special Rapporteur on freedom of opinion and expression as well as by non-governmental and civil society organisations such as Human Rights Watch? 
Or do the figures even suggest that the NetzDG establishes a form of institutionalised pre-censorship?<\/p>\n<p>TL;DR: The bare numbers don\u2019t tell us much, but they matter anyway.<\/p>\n<h2>NetzDG, the risk of \u2018overblocking\u2019 and other concerns<\/h2>\n<p>In the wake of the ongoing spread of hateful content on social media platforms, more and more governments are pressuring companies to take action and proactively detect and remove problematic content. In 2015, then German Federal Minister of Justice and Consumer Protection Heiko Maas formed a task force with representatives from large platforms to improve the ways in which companies handle illegal hate speech. However, the group\u2019s efforts could not stop the heated debate surrounding \u201cfake news\u201d and \u201chate speech\u201d. The two terms were at the center of a <a href=\"https:\/\/www.hiig.de\/hate-speech-fake-news-two-concepts-got-intertwined-politicised\/\">highly politicised discourse<\/a>, which eventually prompted a shift towards a legislative solution. In the spring of 2017, this culminated in a draft law. In June 2017, and in spite of harsh criticism, the draft was accepted by the German parliament \u2013 albeit in a watered-down version.<\/p>\n<p>Concerns have been raised on multiple levels since the law first appeared in draft form. Repeatedly, critics have expressed fears that the law would undermine freedom of expression by delegating the determination of whether certain speech is unlawful to private companies. Experts have argued that the draft law is at odds with human rights standards<sup><a id=\"anker2\" href=\"#fn2\">[2]<\/a><\/sup> and <a href=\"https:\/\/netzpolitik.org\/2017\/anhoerung-zum-netzdg-mehrheit-der-experten-haelt-gesetzentwurf-fuer-verfassungswidrig\/\">German constitutional law<\/a>. 
Civil and human rights organisations as well as industry bodies in the ICT sector <a href=\"https:\/\/edri.org\/eu-action-needed-german-netzdg-draft-threatens-freedomofexpression\/\">called on the EU Commission<\/a> to ensure compliance with EU law, fearing that outsourcing decisions about the legality of speech to private corporations would incentivise the use of automated content filters. <a href=\"https:\/\/www.ohchr.org\/Documents\/Issues\/Opinion\/Legislation\/OL-DEU-1-2017.pdf\">David Kaye<\/a>, United Nations Special Rapporteur on freedom of opinion and expression, raised further concerns: he argued that decisions about the legitimacy of content would in many cases require an in-depth assessment of the context of speech, something social media companies would not be able to provide. <a href=\"https:\/\/www.hrw.org\/news\/2018\/02\/14\/germany-flawed-social-media-law\">Human Rights Watch<\/a> and other international critics also opposed the NetzDG because, according to them, it would set a precedent for governments around the world to restrict online speech.<\/p>\n<p>The law has repeatedly triggered fears of \u2018overblocking\u2019, meaning that content that is lawful or should be considered acceptable is mistakenly and\/or preventively banned. Short turnaround times have been at the center of the critique, since time remains the most significant factor when determining the legality of content. According to Article 1, Section 3 of the NetzDG, platforms must remove or block access to content that is \u201cmanifestly unlawful\u201d within 24 hours of receiving the complaint, and furthermore, to \u201call unlawful content\u201d within a 7-day period. The final version of the law aims to minimise the risk of \u2018overblocking\u2019 by allowing platform operators to exceed the 7-day time limit in more complicated cases \u2013 particularly if the decision depends on the truth of factual allegations, or if decisions are referred to self-regulatory institutions for external review. 
Nonetheless, the basic principle remains: while the law imposes drastic fines in the event that platforms systematically fail to detect and remove illegal content within the given time frame, there are no legal consequences should platforms remove more content than legally necessary.<\/p>\n<h2>A closer look at the numbers<\/h2>\n<p>Do the reports that social media platforms released in July provide data that can substantiate the risk of \u2018overblocking\u2019? Intended as a measure to better hold platforms accountable and to increase transparency, Article 1 Section 2 of the law mandates all commercial social media platforms that receive more than 100 complaints about unlawful hate speech per year to publish a report every six months.<\/p>\n<h4><strong>Table 1: Reported numbers by selected platforms, January &#8211; June 2018<\/strong><\/h4>\n<table style=\"height: 226px;\" width=\"706\">\n<tbody>\n<tr>\n<td>\n<p style=\"text-align: center;\"><strong>Platform<\/strong><\/p>\n<\/td>\n<td>\n<p style=\"text-align: center;\"><strong>Total items reported<\/strong><\/p>\n<\/td>\n<td style=\"text-align: center;\">\n<p style=\"text-align: center;\"><strong>Reports that resulted in action<br \/>\n(removal rate)<\/strong><\/p>\n<\/td>\n<td style=\"text-align: center;\"><strong>Removal rate<br \/>\n<\/strong><strong>within 24h<\/strong><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/fbnewsroomus.files.wordpress.com\/2018\/07\/facebook_netzdg_july_2018_english-1.pdf\"><span style=\"font-weight: 400;\">Facebook<\/span><\/a><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">1,704<\/span><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">362 (21.2 percent)<\/span><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">76.4 percent (of reports<\/span><span style=\"font-weight: 400;\">)<\/span><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center;\"><a 
href=\"https:\/\/transparencyreport.google.com\/netzdg\/youtube?hl=en\"><span style=\"font-weight: 400;\">YouTube<\/span><\/a><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">214,827<\/span><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">58,297 (27.1 percent)<\/span><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">93.0 percent (54,199)<\/span><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/transparencyreport.google.com\/netzdg\/googleplus?hl=en\"><span style=\"font-weight: 400;\">Google+<\/span><\/a><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">2,769<\/span><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">1,277 (46.1 percent)<\/span><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">93.8 percent (1,198)<\/span><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/cdn.cms-twdigitalassets.com\/content\/dam\/transparency-twitter\/data\/download-netzdg-report\/netzdg-jan-jun-2018.pdf\"><span style=\"font-weight: 400;\">Twitter<\/span><\/a><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">264,818<\/span><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">28,645 (10.8 percent)<\/span><\/td>\n<td style=\"text-align: center;\"><span style=\"font-weight: 400;\">97.9 percent (28,044)<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The table above provides an overview of the total number of items mentioned in the respective transparency reports during the six-month reporting period as well as the percentage of cases in which platforms have complied with the complaints and subsequently removed or blocked content.<\/p>\n<p>What are the key takeaways? 
Since the law took full effect in January, Facebook, Twitter and Google\u2019s YouTube and G+ said they had blocked tens of thousands of pieces of content \u2013 mostly within 24 hours, as mandated by the law. Notably, all platforms responded to most of the reported content that required action within one day, deleting it or blocking access. However, platforms rejected the large majority of the complaints they received from users. Twitter in particular was flooded with complaint notices, but identified a need to take action in only roughly 11 percent of reported pieces of content.<\/p>\n<h2>Can we notice signs of \u2018overblocking\u2019? Answer: It depends on who you ask.<\/h2>\n<p>Critics have claimed the NetzDG would encourage social media platforms to err on the side of caution by blocking legal content in order to avoid fines. The numbers are now up for interpretation by various parties.<\/p>\n<p>After the reports had been released, Gerd Billen, State Secretary in the German Ministry of Justice, expressed satisfaction that the law was having its intended effect. &#8220;Nevertheless, we are only at the very beginning&#8221;, he added in a statement to the German news agency dpa. <a href=\"https:\/\/www.cducsu.de\/themen\/innen-recht-sport-und-ehrenamt\/netzwerkdurchsetzungsgesetz-wirkt\">Members of the governing parliamentary group CDU\/CSU<\/a> shared his opinion and did not see any evidence of \u2018overblocking\u2019 yet.<\/p>\n<p>However, previous opponents of the law see their concerns proven true and argue that the law indeed leads to false removals of content. Among them are industry representatives who consider the concerns justified because of the required speed of examining content, when it should rather be a matter of careful consideration, as a <a href=\"https:\/\/germany.googleblog.com\/2018\/07\/update-netzdg.html\">Google spokesperson<\/a> said. 
Further criticism addresses the lack of public oversight over platforms\u2019 decision-making and removal practices. Reporters Without Borders\u2019 managing director <a href=\"https:\/\/www.reporter-ohne-grenzen.de\/pressemitteilungen\/meldung\/netzdg-fuehrt-offenbar-zu-overblocking\/\">Christian Mihr<\/a> argues that the installation of \u201can independent control authority is necessary to recognise \u2018overblocking\u2019, that is, the deletion of legally permitted content.\u201d And there <a href=\"https:\/\/www.zeit.de\/digital\/internet\/2017-06\/hasskommentare-netzdg-bundestag-gesetz-verabschiedet\">continues to be even more fundamental criticism<\/a> of the law with regard to due process, since the law does not provide an effective mechanism for users to object to the deletion or blocking of their content or their account.<\/p>\n<h2>Discussion and trajectories for research<\/h2>\n<p>Pushing platforms for more transparency with regard to their content moderation practices has long been a major concern. As such, providing numbers on content takedown requests and complaints is useful and can help inform public discussion, instigate new advocacy projects and policy initiatives. The numbers are now up for debate, and may also prompt new research. Here are some critical questions worth pondering:<\/p>\n<p><em>What do the numbers in reports actually tell us?<\/em><br \/>\nFrom statistical reports, indexes and rankings, we know that bare numbers have flaws; yet they are interpreted and weaponised by various stakeholders. The aggregated numbers that Facebook, Twitter and Google have produced and disclosed are difficult to verify because researchers don\u2019t have access to the \u201craw data\u201d. Oftentimes, reports on numbers conceal more than they reveal, as they eliminate important information on context and intent. 
What\u2019s more, the current reports do not provide a valid measure for comparing performance between platforms, since each of the reports follows its own organising structure. By now, platforms have developed their own distinct reporting practices, which also shape what comes to be presented as transparent, knowable and governable. It is, and we should point this out, also a failure of German lawmakers that transparency reports look the way they do. Still, if no standardised reporting practices exist and users have to go through different hurdles on each platform to file a request, then what do the numbers actually reveal?<\/p>\n<p><em>How does the NetzDG affect the reporting behavior of users, and furthermore, the content moderation of platforms?<\/em><br \/>\nIf we want to better understand how companies make decisions about acceptable and unacceptable speech online, we need a more granular understanding of case-by-case determinations. This would allow us to learn more about the scale and scope of policies and practices, and consequently, it would enable us to understand the ways in which current content moderation systems are affected by the NetzDG. Who are the requesters of takedowns, and how strategic are their uses of reporting systems? How do flagging mechanisms affect user behavior? And furthermore, does the NetzDG lay the ground for new reporting mechanisms of content governance online, including a powerful rhetorical legitimation for takedowns? Platforms should team up with social scientists and let academics survey users on questions like perceived chilling effects and self-censorship prior to posting.<\/p>\n<p><em>How does NetzDG affect public perception and debates on online hate speech in the long run?<\/em><br \/>\nTransparency efforts in the form of quantitative measures are promoted as a solution to governance problems. 
Disclosing timely information to the public is believed to shed light on these problems for purposes of openness, accountability and the creation of trust. However, balancing free speech principles and the protection of users against hateful content requires more than mere numbers. It is a highly complex and contextual undertaking for society at large. Thus, making sense of the numbers is a shared responsibility among all actors. For instance, what would an increase in numbers in the next reports mean? We simply don\u2019t know. It could be that communication has become more hateful and that the law is not very effective in preventing this from happening. It could also indicate that more complaints have been filed, or that companies are simply getting better at their removal practices. Furthermore, what these reports do not indicate is whether the structure and intensity of public discourse on hate speech is changing. And this may be one of the larger effects of laws like the NetzDG on society.<\/p>\n<p><em>Does the NetzDG affect how platforms will formulate and enforce their own community standards globally?<\/em><br \/>\nFar from just publishing a few figures, these reports matter! We should be aware that the numbers they contain, and how they are produced, disclosed and interpreted, have governing effects by creating a field of visibility<sup><a id=\"anker3\" href=\"#fn3\">[3]<\/a><\/sup>. They may affect how users, policy makers and the platforms themselves perceive and define the problem, and what solutions they consider acceptable. The NetzDG directs attention to unlawful online hate speech in the German context, while at the same time leaving other issues in the dark. In this vein, there has long been a demand for platforms to become more transparent with regard to the extent to which they <em>themselves<\/em> shape and police the flow of information online. 
To date, <a href=\"https:\/\/rankingdigitalrights.org\/index2018\/report\/policing-speech\/\">not much is known<\/a>, for instance, about how platforms enforce their own rules on prohibited content. With the NetzDG, this becomes an even more salient question. Are we correct in assuming that platforms first and foremost control content on the basis of their own policies, and only then according to the NetzDG, as the recent debate has suggested? Such a practice would raise the question of whether the provisions the NetzDG entails will nudge platforms to adjust their own content policies accordingly, so that they can more easily, and without any public oversight, police content. Does the NetzDG also play a role in the convergence of community standards across platforms\u2019 global operations?<\/p>\n<p><em>Will the NetzDG set a precedent internationally?<\/em><br \/>\nFinally, numbers can easily \u201ctravel\u201d across borders. NetzDG reports provide figures that describe the current situation in the German context. While such transparency measures start locally, they often diffuse regionally and even globally. As such, the German NetzDG must be seen in the context of the transnational challenges of cross-border content regulation. In the absence of commonly agreed-upon speech norms and coherent regulatory frameworks, speech regulation has become a pressing issue on the Internet. Governments around the world are initiating regulatory policies to restrict online speech they deem unlawful, among them France, Vietnam, Russia, Singapore and Venezuela<sup><a id=\"anker4\" href=\"#fn4\">[4]<\/a><\/sup>. As Germany has provided a blueprint for a national legal solution that formalises content removal practices by private corporations, other countries may follow.<\/p>\n<p>To conclude, transparency reports about content removals under the German NetzDG are not the endpoint of this discussion. 
Instead, they inform debates that reach beyond Germany, politicise how global platforms govern speech online, and provoke new questions.<\/p>\n<p><sup id=\"fn1\"><a href=\"#anker1\">[1]<\/a> <\/sup> <a href=\"https:\/\/www.bmjv.de\/SharedDocs\/Gesetzgebungsverfahren\/Dokumente\/NetzDG_engl.pdf\">Act to Improve Enforcement of the Law in Social Networks<\/a> (Network Enforcement Act).<\/p>\n<p><sup id=\"fn2\"><a href=\"#anker2\">[2]<\/a> <\/sup> Schulz, W., <a href=\"https:\/\/ssrn.com\/abstract=3216572\">Regulating Intermediaries to Protect Privacy Online \u2013 the Case of the German NetzDG<\/a> (July 19, 2018).<\/p>\n<p><sup id=\"fn3\"><a href=\"#anker3\">[3]<\/a>\u00a0<\/sup><span style=\"font-weight: 400;\">Flyverbom, M., Transparency: Mediation and the Management of Visibilities,<\/span><i><span style=\"font-weight: 400;\"> International Journal of Communication<\/span><\/i><span style=\"font-weight: 400;\"> 10 (2016), 110-122.<\/span><\/p>\n<p><sup id=\"fn4\"><a href=\"#anker4\">[4]<\/a><\/sup> See for an overview of <a href=\"https:\/\/www.hrw.org\/news\/2018\/02\/14\/germany-flawed-social-media-law\">developments<\/a> <a href=\"https:\/\/medium.com\/@_cberger_\/will-germanys-approach-to-content-and-platform-regulation-prevail-in-2018-d7e6e2db5cb\">internationally<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Six months after a new German law \u2013 the Network Enforcement Act \u2013 has come into full effect, social media platforms are tasked to report on illegal hate speech. But as these figures have been made available, what can we learn from them? HIIG researcher Kirsten Gollatz, HIIG fellow Martin J. Riedl as well as&hellip;<\/p>\n","protected":false},"author":19,"featured_media":51279,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1579,224],"tags":[],"class_list":["post-51204","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ftif-plattformen-governance","category-policy-and-law"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Removals of online hate speech in numbers &#8211; Digital Society Blog<\/title>\n<meta name=\"description\" content=\"Six months after the Network Enforcement Act has come into full effect, Facebook &amp; Co. report on illegal hate speech. 
What can we learn from the figures?\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Removals of online hate speech in numbers &#8211; Digital Society Blog\" \/>\n<meta property=\"og:description\" content=\"Six months after the Network Enforcement Act has come into full effect, Facebook &amp; Co. report on illegal hate speech. What can we learn from the figures?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/\" \/>\n<meta property=\"og:site_name\" content=\"HIIG\" \/>\n<meta property=\"article:published_time\" content=\"2018-08-09T07:00:49+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-28T15:12:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.hiig.de\/wp-content\/uploads\/2018\/08\/IMG_295516x9.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1500\" \/>\n\t<meta property=\"og:image:height\" content=\"844\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kirsten Gollatz\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kirsten Gollatz\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutes\" \/>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Removals of online hate speech in numbers &#8211; Digital Society Blog","description":"Six months after the Network Enforcement Act has come into full effect, Facebook & Co. report on illegal hate speech. 
What can we learn from the figures?","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/","og_locale":"en_US","og_type":"article","og_title":"Removals of online hate speech in numbers &#8211; Digital Society Blog","og_description":"Six months after the Network Enforcement Act has come into full effect, Facebook & Co. report on illegal hate speech. What can we learn from the figures?","og_url":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/","og_site_name":"HIIG","article_published_time":"2018-08-09T07:00:49+00:00","article_modified_time":"2023-03-28T15:12:53+00:00","og_image":[{"width":1500,"height":844,"url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2018\/08\/IMG_295516x9.jpg","type":"image\/jpeg"}],"author":"Kirsten Gollatz","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kirsten Gollatz","Est. 
reading time":"12 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/#article","isPartOf":{"@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/"},"author":{"name":"Kirsten Gollatz","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/7394ca59dff0b97673a9e9a8aaa34d1d"},"headline":"Removals of online hate speech in numbers","datePublished":"2018-08-09T07:00:49+00:00","dateModified":"2023-03-28T15:12:53+00:00","mainEntityOfPage":{"@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/"},"wordCount":2402,"commentCount":0,"publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"image":{"@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2018\/08\/IMG_295516x9.jpg","articleSection":["Ftif Platform governance","Policy and Law"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/","url":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/","name":"Removals of online hate speech in numbers &#8211; Digital Society Blog","isPartOf":{"@id":"https:\/\/www.hiig.de\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/#primaryimage"},"image":{"@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/#primaryimage"},"thumbnailUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2018\/08\/IMG_295516x9.jpg","datePublished":"2018-08-09T07:00:49+00:00","dateModified":"2023-03-28T15:12:53+00:00","description":"Six months after the Network Enforcement Act has come into full effect, Facebook & Co. report on illegal hate speech. 
What can we learn from the figures?","breadcrumb":{"@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/#primaryimage","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2018\/08\/IMG_295516x9.jpg","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2018\/08\/IMG_295516x9.jpg","width":1500,"height":844},{"@type":"BreadcrumbList","@id":"https:\/\/www.hiig.de\/en\/removals-of-online-hate-speech-numbers\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.hiig.de\/en\/"},{"@type":"ListItem","position":2,"name":"Removals of online hate speech in numbers"}]},{"@type":"WebSite","@id":"https:\/\/www.hiig.de\/#website","url":"https:\/\/www.hiig.de\/","name":"HIIG","description":"Alexander von Humboldt Institute for Internet and 
Society","publisher":{"@id":"https:\/\/www.hiig.de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.hiig.de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.hiig.de\/#organization","name":"HIIG","url":"https:\/\/www.hiig.de\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/","url":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","contentUrl":"https:\/\/www.hiig.de\/wp-content\/uploads\/2019\/06\/hiig.png","width":320,"height":80,"caption":"HIIG"},"image":{"@id":"https:\/\/www.hiig.de\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.hiig.de\/#\/schema\/person\/7394ca59dff0b97673a9e9a8aaa34d1d","name":"Kirsten Gollatz"}]}},"_links":{"self":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/51204","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/users\/19"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/comments?post=51204"}],"version-history":[{"count":24,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/51204\/revisions"}],"predecessor-version":[{"id":51571,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/posts\/51204\/revisions\/51571"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media\/51279"}],"wp:attachment":[{"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/media?parent=51204"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/categories?post=51204"},{"taxonomy":"post_tag","
embeddable":true,"href":"https:\/\/www.hiig.de\/en\/wp-json\/wp\/v2\/tags?post=51204"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}