Consent Under Pressure and the Right to Informational Self-Determination (1)
What Is the Future Value of Consent in Data Protection Law?
Since the German Constitutional Court's census decision1, the fundamental right to informational self-determination (informationelles Selbstbestimmungsrecht) has been one of the most important elements of the general right of personality (Allgemeines Persönlichkeitsrecht) guaranteed by art. 2 para. 1 in conjunction with art. 1 para. 1 of the German constitution. On the statutory level, informational self-determination is guaranteed in particular by the federal and state data protection acts. If informational self-determination is defined as "the authority of each individual to determine the disclosure and use of his personal data"2, it seems inevitable to require the affected person's consent for personal data processing to be admissible.3 Corresponding implementing rules can be found in § 4 para. 1 and § 4a para. 1 of the Federal Data Protection Act (Bundesdatenschutzgesetz) and in the corresponding Data Protection Acts of the States (Landesdatenschutzgesetze).
The ideal of prior, voluntary and informed consent to the processing of personal data has developed quite a few cracks since the spread of automated data processing and, especially, the advent of the age of constant data collection. This insight is, of course, anything but new. Nonetheless, the development has gathered speed with the spread of online communication services. It started with the use of e-mail providers (e.g. Gmail, Yahoo, Hotmail, GMX) and forums, and is today furthered by all kinds of social networks (Facebook, Google+) and blog hosting services (Blogger, Twitter). At this point, not only questions concerning the applicable law and its effective implementation have to be discussed: the ideal of well-informed consent itself, as well as our notion of the voluntariness of consent, is under considerable pressure.
It is anything but new that parties to modern-day legal relations find themselves facing contractual terms whose careful reading would be grossly disproportionate to the significance of the contract. In the present case, however, it is not primarily the financial capacities of the parties that are put at risk, but the general right of personality. Additionally, the use of personal data largely eludes precise peremptory norms, and while it is easily reversible by cancellation in theory, this is hardly controllable in practice. These observations apply in particular to accessing online communication platforms. The core of Facebook's data protection terms, for example, is, at about 9,000 words, roughly nine times as long as this blog post. Hardly any user can digest this flood of information when first signing up for the service. Consequently, only very few users really know what types of use of their data they have agreed to.4 The ideal of genuinely informed consent collides harshly with social reality. Legal limits on data protection terms are admittedly set by the data protection acts as well as by the laws on the admissibility of standard terms and conditions (Recht der Allgemeinen Geschäftsbedingungen). But while the data protection acts are rather general in design and permit almost all conceivable kinds of data use, the laws on the admissibility of standard terms and conditions are not tailored to protect the right to informational self-determination. This has serious consequences: users have poor knowledge, shaped mainly by hearsay, which results in insecure online behaviour. Contrary to the constitutional and statutory ideal, it remains obscure to (almost) all users for what exact purposes their personal data may be used. One might counter that every responsible user has it within his own power to inform himself properly about the use of his data when signing up for a service.
Those too lazy to read the data protection terms, the argument goes, should perhaps refrain from using the service. This argument, however, ignores social reality for several reasons: Firstly, data protection terms are often drafted in a very vague manner, and the specific possible uses of data will likely remain unknown to a new user who is not yet familiar with the functions of a given online communication platform. That means that even if the user reads through the data protection terms, he cannot precisely assess what will actually happen to his personal data. Furthermore, the protection of parties to everyday transactions by consumer law and the laws on standard terms and conditions has raised certain expectations of being protected by the law when it comes to the use of online communication platforms. Consumers who are used to enjoying extensive warranty and cancellation rights with almost every purchase will often assume that they do not have to act overcautiously online either.
Yet the biggest problems emerge when the alternatives are taken into consideration: the individual citizen is increasingly deprived of a free5 decision for or against online communication platforms, because these have become more and more indispensable for the modern exercise of fundamental rights.
Today, almost all fundamental rights are exercised online, especially those concerned with communication (Kommunikationsgrundrechte).6 Online communication sometimes substitutes for offline communication; in most cases, however, the former complements the latter. Calls for demonstrations and assemblies, for example, are prepared, spread and discussed online. In recent years, however, certain communication platforms have prevailed over others and attained a market-dominating position.7 Citizens exercising their fundamental rights through online communication are confronted with growing monopolistic or oligopolistic structures favouring corporations, which, in contrast to the state, are naturally not bound by fundamental rights.8 One could argue that citizens are free to exercise their fundamental rights outside the prevalent communication platforms. But the fundamental rights concerned with communication depend by their nature on each citizen's realistic chance of making his concerns heard by potential supporters. The prospects of being noticed seem considerably smaller outside the established communication platforms. If the chance of being heard by others diminishes to a very low level, the use of certain online platforms for communication becomes inevitable for the individual citizen. Not much is left of our notion of the voluntariness of consent when citizens face the choice between surrendering their data to dubious uses and effectively waiving their fundamental rights.
Consent is generally not voluntary, and therefore void, when it is granted under the influence of duress or deception.9 In this strict legal sense, consent to the use of online communication platforms is indeed a voluntary act and will consequently remain valid for the near future. It would, however, be a misconception of societal and constitutional developments to believe that consent in its present shape could permanently secure informational self-determination online, and thus the general right of personality.
The second part of this blog post deals with practical and legal approaches to the issue, taking account of a state's possible constitutional duty to protect.
4 Cf. Buchner, DuD 2010, p. 42.
5 On voluntariness offline Menzel, DuD 2008, p. 406; Schapper/Dauer, RDV 1987, p. 170; Schmidt, JZ 1974, p. 247; on voluntariness with regard to participation in social networks Buchner, DuD 2010, p. 41.
6 Of course, besides that the general right of personality is realised online especially by digital natives. This blog entry is however focused on the fundamental rights concerned with communication in the narrower sense.
7 On the one hand, the prevalence of fewer communication platforms makes the careful reading of each set of data protection terms more proportionate (see above); on the other hand, it thereby increasingly undermines the voluntariness of consent.
8 This causes separate issues, which cannot be addressed here.
9 Gola/Schomerus, Bundesdatenschutzgesetz Kommentar, 11th edition 2012, § 4a, marginal no. 19 et seqq.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact firstname.lastname@example.org.