5 Questions for the AI & Society Lab
Artificial intelligence has become a huge part of our daily lives – and its relevance will continue to grow, posing new questions to our societies. In this light, the HIIG founded the AI & Society Lab. This post is a digital introduction answering five questions about the Lab – since the actual launch drinks are postponed, cheers and to everyone's health!
This January, the HIIG founded the AI & Society Lab. Unfortunately, an official launch event with drinks is not an option in times of the corona crisis. Therefore, Wolfgang Schulz, one of the HIIG’s research directors, and Theresa Züger, Lead of the AI & Society Lab, took the time to answer five questions and introduce the Lab to the world digitally.
1. In a nutshell, what is the AI & Society Lab?
The AI & Society Lab sees itself as an interface between science, business, politics and civil society and tackles the questions that the increasing spread of artificial intelligence poses to our society. The goal of the Lab is to foster innovative research, interdisciplinary exchange, and knowledge transfer about artificial intelligence. Currently, AI is being discussed very differently in various social groups: the technical community, for example, deals with completely different questions than those that concern civil society. In the Lab, we develop formats that mediate between these perspectives and aim to find answers in our research on how our society can deal with the changes caused by AI in a self-determined manner.
2. There are more and more research centres emerging which focus on AI. What makes the AI & Society Lab different from others?
While we are happy to see all these great initiatives and research centres on AI emerging, the scientific discussion of social questions about AI has so far been rather isolated. What is often missing is an interdisciplinary approach that also takes social and political questions into consideration – for AI cannot be seen as a purely technical phenomenon. At HIIG, it is precisely this interconnection that interests us.
3. @Wolfgang Schulz and @Theresa Züger: What do you want to achieve in the next two years? What are your success criteria for the Lab?
Wolfgang Schulz: AI is a cross-sectional matter. We therefore measure our success by how well we succeed in establishing new connections, e.g. networking NGOs with technical AI experts. We also believe that there has been much talk about high-level ethical principles, but that there is a lack of ways to make them effective in development and application. We want to contribute to this. This concerns, for example, the explainability of AI technologies. First of all, it must be clarified what is to be explained; the information as to which artificial neuron has “flipped over” rarely helps the person concerned. We want to find out what actually helps to make AI decision-making explainable.
“We believe that there has been much talk about high-level ethical principles, but that there is a lack of ways to make them effective in development and application.” – Wolfgang Schulz
Theresa Züger: Another goal for the first years of the Lab is to develop and establish a new perspective on AI and society by focussing on the concept of Public Interest AI. We want to address the question of how AI can serve the public interest on different levels, ranging from its social consequences to AI’s technical design and user-experience design. This approach highlights a political perspective on AI – meaning a perspective that sees AI as a concern for society as a whole – and thereby broadens the often discussed ethical considerations on AI, which mainly address developers and their organizations as the main decision makers. Thinking about the public interest always implies considering the well-being of society and all its parts and concerns. The discussion about AI in terms of the public interest should be open to all who are affected, especially since AI will increasingly become a part of our daily lives and society’s infrastructure.
4. Why is it a Lab and not a research program of the HIIG?
We take the lab idea very seriously, and it was important for us to give this new research focus a more experimental character than usual research programs have. We believe that AI research must not only focus on new topics and questions but also find new ways of knowledge transfer and interdisciplinary exchange. This requires a spirit of experimentation and creativity, which we bring together in the Lab. Above all, it is also important for us to conduct research in a practical and application-oriented manner – and always in exchange with partners from politics, business and civil society. Reflecting on different perspectives on AI and society in this exchange has already become a great enrichment for our researchers.
5. How can others get in contact with the Lab for more info or collaboration?
Generally, the Lab can be a partner for any organization that works with or develops AI. Future partners might be foundations, technical research institutions or civil society organizations, as well as startups and tech companies. What matters is that there is a common socially relevant question that is exciting for both sides.
To get in touch with us, you can contact our core team or get to know the Lab better at our events (which might take place online or later this year due to the corona crisis).
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact firstname.lastname@example.org.