Myth: AI Models are abstract and do not need personal data
In supervised machine learning, models are abstractions derived from training data. The models themselves, while structurally shaped by the training data, do not store the training records as such. It may therefore seem reasonable to treat trained models as (almost) anonymous. However, this is not true: research has shown that de-anonymization is possible under certain circumstances. Membership inference attacks, for example, allow an adversary with access to a model to determine whether a specific individual's record was part of its training set (Shokri et al., 2016). Models therefore have to be considered as potentially containing personal data, and data protection law has to be taken into account when developing AI models in order to safeguard data subjects.
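To make the risk concrete, the sketch below illustrates a simple confidence-based membership inference attack in Python. It is a minimal illustration under assumed choices (dataset, model, and threshold are picked for brevity), not the attack procedure from the cited papers. The underlying idea: an overfitted model tends to be more confident on records it was trained on, so an attacker can guess training-set membership from the model's outputs alone.

```python
# Minimal sketch of a confidence-based membership inference attack.
# Dataset, model, and threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Split into "members" (used for training) and "non-members" (unseen).
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Train the target model; an overfitted model leaks more.
target = RandomForestClassifier(n_estimators=50, random_state=0)
target.fit(X_member, y_member)

def confidence_on_true_label(model, X, y):
    """Probability the model assigns to each sample's true class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_member = confidence_on_true_label(target, X_member, y_member)
conf_nonmember = confidence_on_true_label(target, X_nonmember, y_nonmember)

# Attack rule: guess "member" when confidence exceeds a threshold.
threshold = 0.9  # assumed; a real attacker would calibrate this
tp = (conf_member > threshold).mean()     # members correctly flagged
fp = (conf_nonmember > threshold).mean()  # non-members wrongly flagged

print(f"members flagged as members:     {tp:.2%}")
print(f"non-members flagged as members: {fp:.2%}")
```

If the two rates differ noticeably, that gap is exactly the information leak: anyone who can query the model learns something about which individuals' records were used to train it, even though the model never outputs those records directly.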
AI models are abstractions which may or may not contain personal data; data protection law therefore needs to be taken into account.
Shokri, R., Stronati, M., Song, C. & Shmatikov, V. (2016). Membership Inference Attacks Against Machine Learning Models.
Al-Rubaie, M. & Chang, J. M. (2019). Privacy-Preserving Machine Learning: Threats and Solutions. IEEE Security & Privacy, 17(2), 49-58.
Liu, B., Ding, M., Shaham, S., Rahayu, W., Farokhi, F. & Lin, Z. (2021). When Machine Learning Meets Privacy: A Survey and Outlook. ACM Computing Surveys, 54(2), 1-36.
About the author
Professor, Saarland University (Chair of Legal Informatics), Saarbrücken, Germany
Christoph Sorge received his PhD in computer science from the Karlsruhe Institute of Technology. He then joined NEC Laboratories Europe's Network Research Division as a research scientist. From 2010, Christoph was an assistant professor ("Juniorprofessor") for Network Security at the University of Paderborn. He joined Saarland University in 2014, where he is now a full professor of Legal Informatics. While his primary affiliation is with the Faculty of Law, he is also a co-opted professor of computer science. He is an associated member of the CISPA Helmholtz Center for Information Security, a senior fellow of the German Research Institute for Public Administration, and a board member of the German Association for Computing in the Judiciary. His research area is the intersection of computer science and law, with a focus on data protection.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact firstname.lastname@example.org.