The key challenges of social robots
What are the key ethical, legal and social (ELS) issues of social robots? How can these challenges be addressed? To answer these questions, Christoph Lutz and two research colleagues conducted four interactive and interdisciplinary workshops at leading robotics conferences. Here, he reports on some of the key findings and implications.
Social robots: what are they and what do they do?
Social robots are robots that interact with humans, for example by engaging us in conversation or acting as emotional companions. Examples include SoftBank Robotics’ Nao and Pepper as well as the cute robot seal Paro, which was developed in Japan and is used in elderly care to help patients with dementia. Research has started to look into the advantages and disadvantages of social robots. An upside of robots, including social robots, is that they can take over dangerous, dull and dirty tasks, freeing humans for more fulfilling activities. At the same time, social robots can be used for nefarious purposes and are a source of ethical, legal and social (ELS) concerns, for example in terms of their privacy risks.
A solution-oriented approach
Philosophers in the field of robot and machine ethics have long discussed the ELS challenges of social robots. However, empirical ELS research remains scarce and discussions are fragmented across different scientific communities. To provide a more holistic and empirical understanding of the ELS challenges of social robots, particularly in therapy and education, Eduard Fosch Villaronga, Aurelia Tamò-Larrieux, and Christoph Lutz organized four workshops at leading robotics conferences in Europe and Japan. The workshops, held between 2015 and 2017, invited participants from all backgrounds, including academics and practitioners, to engage in open discussions on the key ELS challenges of social robots. In total, 43 participants from more than ten countries took part. Aiming for a solution-oriented format, we not only discussed the ELS challenges but also asked for recommendations on how they could be overcome. After the workshops, we synthesized the results into a working paper.
Based on the workshop discussions, ELS challenges can be grouped into five broad categories: (1) privacy and security, (2) legal uncertainty, including liability questions, (3) autonomy and agency, (4) economic implications, and (5) human-robot interaction, including the replacement of human-human interaction. Within each category, specific challenges emerged. For example, discussions on autonomy and agency centered on the question of legal personhood for social robots as well as hierarchies in decision-making processes (e.g., should a robot in a hospital be allowed to override an incorrect decision by a nurse?). Recommendations to address these ELS challenges were of both a legal and a technological nature. Within the privacy and security category, technological solutions included the removal of cameras and strategies of visceral notice, such as a robot making a noise whenever it takes a picture of its surroundings. Legal approaches stressed the importance of a more dynamic consent model and a potential revision of privacy understandings. Across categories, living labs were mentioned as a promising approach, especially the Japanese Tokku zones, where robots are tested in realistic scenarios with concrete policy implications in mind.
In addition to community-building, the workshops also provided methodological insights, demonstrating the value of participant-focused research approaches. A key take-away was the importance of keeping the discussions open and allowing for flexibility. While we had prepared three case studies to structure the workshops, some emergent categories and themes evolved only outside the boundaries of these case studies. A further insight was the usefulness of conducting the workshops at more than one conference and at different types of venues. This guaranteed a plurality of voices and a broader representation of different research cultures. The third workshop, held at the Japanese Society for Artificial Intelligence’s Annual Symposium on Artificial Intelligence (JSAI-isAI), was particularly fruitful in opening up new perspectives. Finally, documenting the workshops with notes and audio recordings (with the permission of the participants, of course) was an important part of preserving the conversations for further analysis. So: make sure to always bring enough post-its.
Christoph Lutz is an Associate Professor at the Department of Communication and Culture and at the Nordic Centre for Internet and Society, BI Norwegian Business School (Oslo). The article was written in follow-up to the conference “AI: Legal & Ethical Implications” of the NoC European Hub taking place in Haifa.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact email@example.com.