How we use art for academic knowledge transfer
As a group of academic researchers, machine learning engineers and artists, we believe that art is a powerful platform for discussing emerging societal issues with the broader public, especially when it comes to technology. The goal of our work is to raise awareness of the topics we address and to make their complexity more accessible to people outside the academic community by making it personally tangible. In our latest work, AI Oracle, we invite visitors to engage with ethical questions on machine learning bias in an immersive way.
The issue with discriminatory AI
With the rising use of artificial intelligence (AI) across different functions in society, bias within these systems is becoming more widespread, affecting more people with greater severity. AI refers to “a field of computer science dedicated to the creation of systems performing tasks that usually require human intelligence, branching off into different techniques” (Pesapane et al., 2018). Because an algorithm is only as good as the data it works with, it relies on large-scale data to detect patterns and make predictions. This data reflects human history, and therefore the biases and prejudices of prior decision makers, which reinforce the marginalisation of minorities. Reports of sexist and racist AI-based decision-making in recruitment, health care and criminal justice have alerted the research community. Our paper “Inclusive Design – Methods To Ensure A High Degree Of Participation In Artificial Intelligence (AI) Systems” (Ogolla & Gupta, 2018) proposes methods to mitigate machine learning bias. To make this rather abstract problem accessible to people outside the academic AI community, we translated the paper into an interactive art installation.
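To make the mechanism concrete, here is a deliberately naive sketch with invented numbers (the data and the "model" are purely illustrative, not drawn from any real system or from our paper): a model fitted to historically skewed hiring decisions simply learns and reproduces that skew.

```python
# Hypothetical illustration: a trivial "model" trained on biased history.
from collections import Counter

# Invented past hiring records: (group, hired) — group "A" was
# historically favoured, group "B" historically disadvantaged.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """'Learn' the majority outcome per group — a stand-in for any
    pattern-detecting model fitted to this data."""
    outcomes = {}
    for group, hired in records:
        outcomes.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(history)
print(model)  # the historical prejudice, now automated: A hired, B rejected
```

A real system would use far more sophisticated statistics, but the underlying problem is the same: without deliberate countermeasures, patterns in the training data — including discriminatory ones — become the model's predictions.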
AI ORACLE – the experience
AI Oracle is an immersive art installation that makes data-based systems tangible for its visitors. In the version planned for 2021, visitors entering the installation are scanned by a webcam: stripes of colour and body parts are captured pixel by pixel, and the data is streamed in real time to three screens surrounding the visitor. Each visitor thus creates their own unique art experience. The installation demonstrates to its audience not only the future possibilities of machine vision, but also the performative power of our public discourse around the opportunities and challenges of AI in our society. With this interactive installation, we invite the public to engage directly and immersively with the challenges of autonomous decision-making by AI. Within the framework of the installation, these themes are critically reflected upon, but also made playfully accessible. The aim of the art project is therefore to sharpen awareness of the topic in question and to make the associated complexity more accessible by making it personally experienceable.
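The stripe-by-stripe scan described above can be sketched in a few lines. This is a simplified stand-in, not the installation's actual code: a small synthetic greyscale "frame" replaces the live webcam image, and each vertical stripe of pixels is yielded in turn, as it would be handed off to the surrounding screens.

```python
# Hedged sketch: the real installation reads live webcam frames;
# here a 4x4 grid of greyscale values stands in for one camera image.

def scan_stripes(frame):
    """Yield the frame one vertical stripe (column) at a time,
    mimicking the pixel-by-pixel scan shown on the screens."""
    width = len(frame[0])
    for x in range(width):
        yield [row[x] for row in frame]

frame = [[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15]]

stripes = list(scan_stripes(frame))
print(stripes[0])  # first vertical stripe: [0, 4, 8, 12]
```

In the installation itself, each stripe would be rendered to the three screens as it is produced, so visitors watch their own image being assembled piece by piece in real time.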
About collective no:topia
Collective no:topia is an international, inclusive artist collective founded in the summer of 2017 during an AI & Art workshop at metaLab@harvard, during our stay at the Berkman Klein Center for Internet & Society at Harvard University in Cambridge (MA), USA. We are based in Berlin, Oslo, Rome and the U.S., with origins in the Dominican Republic, Palestine and Kenya. Each year we extend our collective and invite other artists to join us for new projects. We exhibited earlier versions of AI Oracle at, among other venues, the Futurium museum in Berlin, and received an award from the German Federal Ministry of Education and Research in 2019. Currently, we are working on an extension of AI Oracle for an overseas touring exhibition funded by Goethe Institute Australia.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact email@example.com.