Why, AI? – A new Online Learning Space
Unraveling Myths about Automation, Algorithms, Society and Ourselves
Everyone talks about AI. But how does it change society? And how can we use it to help society? For that, we need to understand how AI works and how we can make it work for us. We need to do it right, because there is so much to get wrong. This is why we have put together the online learning space “Why, AI?”. It is conceptualised as an open educational resource (OER), and all material is licensed under CC BY. Let’s continue a conversation about what we want AI to do – and what we want to do with AI.
Artificial intelligence has become a major topic of both public debate and regulation. But what is “AI”? What are its limits? Do we need AI ethics? AI laws? On a global scale? In Europe? In Germany? Is AI neutral? Can AI discriminate? Can AI take decisions? Can AI technologies and applications be explained – and do they have to be explainable? What are the limits to the use of AI in different societal settings? Automated cars exist, but what about automated judges?
Hosted by the Humboldt Institute for Internet and Society (HIIG) in the framework of the AI & Society Lab and the Research Group Global Constitutionalism and the Internet, “Why, AI?” is designed as a learning space to help you find out more about the myths and truths surrounding automation, algorithms, society and ourselves.
Answering misconceptions with science – by means of an open course format
A stellar team of experts has contributed their time and intellectual energy to help deconstruct a great variety of urban legends about AI: that AI algorithms and automated decision-making systems are inherently bad, that they will end discrimination, that they will take over the world.
The truth is usually a bit less dramatic – but just as interesting.
“Why, AI?” is useful for journalists, educators, students, politicians and the broader public – to learn a bit more about how algorithms work and how AI can be shaped for future use, rather than allowing it to shape the future without our consent and control.
Hans Jonas, a German philosopher of technology, reminded us to act so that the effects of our actions are compatible with the permanence of genuine human life. It is no exaggeration to say that we are at a key stage of technological development when it comes to AI. We as a society have to respond to progress in and with AI – but we need to do it right, because there is a lot to get wrong. We hope that with “Why, AI?” you will learn a little more about how AI impacts the world – and how we need to change AI before AI changes us.
“Why, AI?” is curated by Matthias C. Kettemann – head of the research group Global Constitutionalism and the Internet where he studies the rules of power and the power of rules in hyperconnected online spaces – and Daniela Dicks, Co-Lead of the AI & Society Lab, an interdisciplinary space exploring new perspectives on AI and mediating between different stakeholders in society that interact with AI.
Feedback? Ideas for a myth that should be busted? Get in touch at email@example.com.
The team behind this project: Frederik Efferenn, Lukas Fox, Christian Grauvogel, Katharina Mosene, Marie-Therese Sekwenz, Katrin Werner, Larissa Wunderlich.
It is hosted by the HIIG in cooperation with the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI), Junges Forum: Technikwissenschaften (JF:TEC), the Sustainable Computing Lab of the Vienna University of Economics and Business (SCL) and the Max Planck Institute for Comparative Public Law and International Law (MPIL) (Section International Law of the Internet) & OESA e.V.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact firstname.lastname@example.org.