The Challenges of Social Robots
What are the key ethical, legal, and social challenges of social robots? And how can these challenges be addressed? To answer these questions, Christoph Lutz and two fellow researchers conducted four interactive and interdisciplinary workshops at leading robotics conferences. In the following blog post, we outline the central findings.
Social robots: what are they and what do they do?
Social robots are robots that interact with humans, for example by engaging us in conversation or acting as emotional companions. Examples include SoftBank Robotics’ Nao and Pepper as well as Paro, a cute robot seal developed in Japan and used in elderly care to help patients with dementia. Research has begun to examine the advantages and disadvantages of social robots. An upside of robots, including social robots, is that they can take over dangerous, dull, and dirty tasks, freeing humans for more fulfilling activities. At the same time, social robots can be used for nefarious purposes and raise ethical, legal, and social (ELS) concerns, for example regarding their privacy risks.
A solution-oriented approach
Philosophers in the field of robot and machine ethics have long discussed the ELS challenges of social robots. However, empirical ELS research remains scarce and discussions are fragmented across different scientific communities. To provide a more holistic and empirical understanding of the ELS challenges of social robots, particularly in therapy and education, Eduard Fosch Villaronga, Aurelia Tamò-Larrieux, and Christoph Lutz organized four workshops at leading robotics conferences in Europe and Japan. The workshops, held between 2015 and 2017, invited participants from all backgrounds, including academics and practitioners, to engage in open discussions on the key ELS challenges of social robots. In total, 43 participants from more than ten countries took part. Aiming for a solution-oriented format, we not only discussed the ELS challenges but also asked for recommendations on how they could be overcome. After the workshops, we synthesized the results into a working paper.
Based on the workshop discussions, ELS challenges can be grouped into five broad categories: (1) privacy and security, (2) legal uncertainty, including liability questions, (3) autonomy and agency, (4) economic implications, and (5) human-robot interaction, including the replacement of human-human interaction. Within each category, specific challenges emerged. For example, discussions on autonomy and agency centered on the question of legal personhood for social robots as well as hierarchies in decision-making processes (e.g., should a robot in a hospital be allowed to override an incorrect decision by a nurse?). Recommendations to address these ELS challenges were of both a legal and a technological nature. Within the privacy and security category, technological solutions included the removal of cameras and strategies of visceral notice, such as a robot making a noise whenever it takes a picture of its surroundings. Legal approaches stressed the importance of a more dynamic consent model and a potential revision of how privacy is understood. Across categories, living labs were mentioned as a promising approach, especially the Japanese Tokku zones, where robots are tested in realistic scenarios with concrete policy implications in mind.
In addition to community-building, the workshops also provided methodological insights, demonstrating the value of participant-focused research approaches. A key take-away was the importance of keeping the discussions open and allowing for flexibility. While we had prepared three case studies to structure the workshops, some categories and themes only emerged outside the boundaries of these case studies. A further insight was the usefulness of conducting the workshop at more than one conference and at different types of venues. This guaranteed a plurality of voices and a broader representation of different research cultures. The third workshop, held at the Japanese Society for Artificial Intelligence’s Annual Symposium on Artificial Intelligence (JSAI-isAI), was particularly fruitful in opening up new perspectives. Finally, documenting the workshops with notes and audio recordings (with the permission of the participants, of course) was important for preserving the conversations for further analysis. And one last tip: always bring enough Post-its.
Christoph Lutz is an Associate Professor at the Department of Communication and Culture and at the Nordic Centre for Internet and Society, BI Norwegian Business School (Oslo). The article was written in follow-up to the conference “AI: Legal & Ethical Implications” of the NoC European Hub taking place in Haifa.
This post reflects the opinion of its authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact firstname.lastname@example.org