Data protection and designing technology
In his dissertation Jörg Pohle uncovers the history of ideas and the historical construction of the data protection problem and data protection as an abstract solution – including the architecture of its legal implementation. The aim of his work is to critically evaluate this construction and to draw conclusions for the design of ICT systems. For our dossier on GDPR, we asked him a few questions:
What inspired you to write your dissertation?
At the beginning of the work on my dissertation I planned to investigate how legal requirements – for example from the Federal Data Protection Act – could be translated into technical requirements. A one-to-one implementation is, of course, not possible. During my research, however, I found that for data protection law – and, much more generally, for all privacy and surveillance theories – it has never been clarified what the (legal) good to be protected actually is. There is no consensus on what assumptions are made about information processing and use, and on what exactly constitutes the problem to be solved. I therefore changed my research question and examined how the problem was historically constructed in discourse; which legal, organisational and technical means or measures were proposed for its solution; and whether and to what extent this construction is still tenable from an IT and information science point of view.
…and what answer did you find?
The result of my work is that data protection – as a solution to the problem of data power created by the industrialization of social information processing – must be re-derived. The aim was to explain why and how information processing organisations threaten fundamental rights and freedoms, but also social values such as the rule of law and democracy. To this end, I have presented a data protection attacker model corresponding to the state of the sociological, legal and computer science debate and an analytical grid for a threat analysis based on it. Finally, I have shown how information technology systems can be developed on this basis to avert these threats and secure individual and social freedom.
For which target group are the outcomes particularly interesting?
Firstly, for all those interested in the history of ideas in the fields of privacy, surveillance and data protection; secondly, for lawyers, who thus receive a scientifically sound justification for data protection that is not based on traditional 19th century ideas of privacy; thirdly, for computer scientists who want to examine socio-technical systems for their individual and social effects or design information technology systems that protect existing spaces of freedom and at the same time create new freedoms.
What were your best and worst findings during the research?
The most exciting realization was that almost all of today’s debates about the computerization of society – both in science and in the public sphere – had already been held between the late 1960s and the early 1980s. The “worst” finding is that the work of that period was orders of magnitude better than what is churned out today – from “digitization” to “algorithms” to “AI”…
✨ … of the internet: Jörg Pohle presents his research on the history and theory of data protection from a computer science perspective. Important lesson: #datenschutzrecht (data protection law) is not #Datenschutz (data protection) pic.twitter.com/7M3BIX7mAt
— HIIG (@hiig_berlin) 15 December 2017
How is your publication related to GDPR?
The work shows at which points the GDPR simply misses the problem. Sometimes this is because it makes assumptions that are not correct: for instance its fixation on “personal data”, which presumes that fundamental rights and civil liberties cannot also be violated with anonymous data. Sometimes it is because it conjures up unrealistic or merely incidental dangers while ignoring essential problems altogether. On the other hand, my work can be used to fill the analysis and implementation processes laid down in the GDPR – from data protection impact assessment to data protection by design to data protection by default – with substance, so that in the end individuals, groups and society as a whole can actually be protected from the information power of states and large private organisations.
What new perspectives does this open up for you on the subject of data protection?
On the one hand, this can be a continuation of earlier discussions: for example on the objectives of data protection, as formulated by Adalbert Podlech, according to which data protection is the solution to the “technology-imparted social problem” of “determining and enforcing the conditions under which a society’s information conduct can be acceptable to the members of society”. On the other hand, I have found various, especially technical approaches in the literature that are worth following up and translating into IT systems – and I am currently working on them.
In your opinion, is GDPR an effective tool?
The GDPR is not an effective instrument, because there are only two things that the Regulation has effectively expanded: the documentation requirements and the rights of processors…
What can I do to protect my data effectively?
Encrypt your data and don’t give it to anyone! But the question itself is wrong: data protection serves to protect data just as little as sun protection serves to protect the sun, or disaster control serves to protect disasters…
Thundering applause, congratulations and salut! Last week, Jörg Pohle’s dissertation was awarded “magna cum laude” pic.twitter.com/V1X9P331Lh
— HIIG (@hiig_berlin) 17 November 2017
Jörg Pohle’s thesis uncovers the history of ideas and the historical construction of the data protection problem and of data protection as its (abstract) solution, including the architecture of its legal implementation, in order to critically assess this construction and to draw conclusions for the design of ICT systems. The thesis reveals the manifold aspects which underlie the analysis of the data protection problem – from concepts of humankind and society, organisations, information technology, and information processing, to concepts, schools of thought, and theories within informatics, information science, sociology, and law, to scientific and pre-scientific assumptions and premises and how they have influenced the specific solution to this problem.
History and theory of data protection from a computer science perspective and consequences for designing ICT systems
Based on a critical assessment of this historical construction, the thesis concludes that data protection must be rederived as a solution to the information power problem, which has arisen through the industrialisation of social information processing. To this end, the thesis presents an abstract, state-of-the-art data protection attacker model, an analytical framework for a data protection impact assessment, and a procedural operationalisation approach illustrating both the sequence and the substantive issues to be examined and addressed in this process.
The thesis then draws conclusions for the design of data-protection-friendly – and not just legally compliant – ICT systems. Further, the thesis clarifies the ways in which many concepts referred to in the privacy, surveillance, and data protection debate are invalid, outdated, or oversimplified. This includes the fixation on personally identifiable information, both in terms of the limitation of the scope of application and as a reference point for lawmaking and ICT design, the patently false but widespread assertion that sensitivity is a property of information, the naive public–private dichotomy, and the so-called privacy paradox.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact firstname.lastname@example.org.