TikTok, a kaleidoscope of visuals, data and legal questions
Are social media platforms empowering people to connect beyond borders, or are they custodians of digital communication spaces where users have to comply with strict content moderation rules? The recent example of TikTok shows that platforms still struggle to combine user-friendly services with freedom of expression principles.
TikTok is the new social media platform “en vogue”. It hosts chatoyant user-generated pictures, GIFs, music, and videos, enhanced by filters and effects that set the user experience apart from familiar services. TikTok’s image is young, fun, and easy, and its pace is commonly described as so fast that it leaves users feeling slightly dizzy. The TikTok app currently leads downloads among teenagers and young adults, confirming a trend towards increasingly audio-visual content dissemination and, by extension, visual communication. As one user describes it: “The service is supposed to help you create and to use your cinematographic imagination to make others smile”. But this enthusiasm among users is not fully shared by experts and freedom of expression advocates, and here is why.
A platform like an amusement park
One of TikTok’s most important features is that it not only provides the tools to create exciting imagery and the platform infrastructure to publish it, but also operates a recommender system, based on human moderation and machine learning, that constantly supplies new content. If other large social media platforms are sometimes compared to a newsstand or a marketplace, TikTok is more like an amusement park: users have constant access to young, fun, and entertaining content. On its trending page, they find suggestions on what to watch and what to post: the system nudges them in both directions.

The app and its recommender system have come under scrutiny mainly over data protection questions, not least because the service is owned by the Chinese tech company ByteDance. Experts fear that user data from the U.S. or the E.U. might be collected and later analysed by Chinese authorities, a prospect even more troubling if the data were somehow combined with the Chinese social scoring system. There is a general concern about data protection on TikTok, especially because the app collects data from everyone, not only from registered users: the German newspaper Süddeutsche Zeitung reported that TikTok uses the ‘fingerprinting’ method to track users. U.S. authorities have now started investigating the company over data protection, and experts confirm that the Chinese authorities might indeed be looking into user data from abroad as they expand their media control globally.
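To make the fingerprinting concern concrete, here is a minimal sketch of how device fingerprinting works in general. This is an illustration of the technique, not TikTok’s actual implementation, and all attribute names are hypothetical: a handful of ordinary device attributes, none of which requires a cookie or a login, are combined and hashed into a stable identifier that can re-identify a visitor across sessions.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from device/browser attributes.

    No cookie or account is needed: the combination of mundane
    attributes is often unique enough to re-identify a device,
    because the same device yields the same hash on every visit.
    """
    # Serialise deterministically so key order never changes the hash.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attributes a page can typically read without a login.
visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
    "installed_fonts": ["Arial", "Calibri", "Noto Sans"],
}

print(fingerprint(visitor))
```

This is why fingerprinting affects unregistered users too: the identifier is derived from the device itself, not from anything the user consented to store.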
How user-generated content is perceived in China could explain the company’s criticised content moderation policy. The community guidelines sound similar to those of other large social media platforms, but their wording is much vaguer and more generic, and their enforcement is opaque to users. All in all, TikTok’s content moderation policy repeats the mistakes of previous platforms, both in what is removed and in how. First, TikTok’s community guidelines forbid the usual categories of unwanted content without describing what is actually meant. There are inconsistencies, too: the guidelines forbid copyright infringement without explaining how to legally include third-party content, although the app is built on the dissemination of visuals and music snippets (and users rarely compose the music themselves). They ban any type of content showing nudity without mentioning exceptions such as breastfeeding or art. For the sake of space, I will not go into every possible point of critique, but so far TikTok’s community guidelines are simplistic and lead to a shallow treatment of the issue. As to the German peculiarity, the Network Enforcement Act (NetzDG), TikTok provides an explanation which unfortunately contains misleading information: according to TikTok, content that was the subject of a complaint but does not meet the criteria for removal within 24 hours shall be removed within seven days. In reality, under the NetzDG, such content might not be removed at all if it is not punishable under German criminal law. In addition to a “NetzDG complaint” option next to each post (similar to YouTube), users can also access the complaint form via TikTok’s privacy and safety information page.
A secret change in moderation
A recent report by Netzpolitik.org gives unique insights into how content is moderated on TikTok. In a leaked spreadsheet, an anonymous source describes how the content moderation guidelines were changed after a critical piece by the Guardian (which, as a side note, demonstrates the importance of reporting bad practices), according to which the company instructed moderators to ‘censor’ content related to ‘Tiananmen Square, Tibetan independence, or Falun Gong’. Indeed, several users were locked out of their accounts for talking about detention camps in China. Another important takeaway of Netzpolitik’s report is how reviewers ‘censor’ content: not only by removing it, but mainly by downgrading it. According to the community guidelines, content that violates the rules shall be removed or the account closed. Decreasing the visibility of user-generated content is not mentioned as a sanction, but it is apparently used as an easier way of hiding unwanted content without inviting allegations of censorship. Investigations have shown that TikTok systematically downgraded images of people with disabilities or who are overweight. The company tried to justify this as a way of protecting users from cyber-bullying, but for a platform that describes itself as an ‘inclusive community’, this argument seems, again, contradictory. It also conflicts with its (presumed) legal status as a neutral platform: curating and ranking content to this extent increasingly resembles a publisher’s activity.
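The difference between removal and downgrading can be sketched in a few lines of Python. This is an illustrative model only, not TikTok’s code; the flag names and the score multiplier are invented for the example. The point it shows: a downranked post stays online, so its author sees no sanction, but its ranking score is scaled down so far that the recommender almost never surfaces it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    score: float    # base ranking score from the recommender
    flags: set      # labels assigned during review (hypothetical names)

def effective_score(post: Post) -> float:
    """Return the ranking score after moderation is applied.

    Removal is visible to the author and auditable; downranking is
    neither: the post remains online but is effectively buried.
    """
    if "removed" in post.flags:
        return 0.0                 # taken out of the feed entirely
    if "not_recommended" in post.flags:
        return post.score * 0.01   # downranked: online, but hidden
    return post.score              # unaffected

posts = [
    Post(1, 0.9, set()),
    Post(2, 0.9, {"not_recommended"}),
    Post(3, 0.9, {"removed"}),
]
for p in posts:
    print(p.id, effective_score(p))
```

Because the downranked post is still technically available, the platform can deny ‘censorship’ while achieving much the same effect, which is exactly the opacity the Netzpolitik report criticises.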
To conclude, it seems impossible to keep politics out of the TikTok equation. Be it the alarming situation in Hong Kong, the creeping social scoring system or the ongoing human rights violations against minorities in mainland China, one cannot help but wonder whether TikTok will be a space for more freedom of expression or a censorship agent, possibly controlled by a government (or its laws as a proxy). Its vague and yet restrictive content moderation policy is a source of concern for users, and not only because of the alleged link to the Chinese state. At the same time, the company is noticeably pressured by public opinion as well as by a rapidly changing market: if TikTok is unable to convince its users that it treats users worldwide respectfully, instead of removing and hiding unwanted content on the basis of opaque community standards, it might be outpaced by other services. Instagram (not to say that its policies are exemplary), for instance, has already launched a very similar in-app feature called Reels (Cenas in Brazil), and history might repeat itself, as it did when Instagram Stories took over Snapchat’s format.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact firstname.lastname@example.org.