
Artificial intelligence, explained in human terms

Technical systems that use so-called artificial intelligence are not only treated as a new stage of technical evolution; through their interaction with social processes, they also drive progressive social change. The EU Commission therefore formulates the following requirement for the use of AI systems: AI must be "explainable", "interpretable" and "comprehensible". This makes the comprehensibility of AI for a broad civil society - and not only its explainability for experts - a central necessity.

The project "Artificial Intelligence, explained in human terms" uses an interdisciplinary approach to develop explanatory models for artificial intelligence (AI) systems that are designed for direct interaction and cooperation with civil society. The research subject is therefore not confined to the technical level alone; its focus on civil society requires a distinctly application-oriented and participatory research design.

Activities

Workshop I Artificial thinking explained in human terms

The first workshop tackled how computers think and how we can explain this in human terms.
2 September 2022 – Alexander von Humboldt Institut für Internet und Gesellschaft

Workshop II Artificial thinking explained in human terms

The second workshop took place in Munich. Different groups from civil society, with and without AI expertise, were invited to discuss the comprehensibility of AI.
20 October 2022 – In the social room of the non-profit organisation neuland & gestalten gGmbH

Cooperation partner

Contact

Sarah Spitz

Project Manager: Events and Knowledge Transfer

Theresa Züger, Dr.

Research Group Lead: Public Interest AI | AI & Society Lab
