Artificial intelligence, explained in human terms
Technical systems that use artificial intelligence are not only regarded as a new stage of technological evolution; through their interaction with social processes, they also drive far-reaching social change. The EU Commission therefore formulates the following requirement for the use of AI systems: AI must be "explainable", "interpretable" and "comprehensible". This makes the comprehensibility of AI for a broad civil society, and not merely its explainability for experts, a central necessity.
The project “Artificial Intelligence, explained in human terms” uses an interdisciplinary approach to develop explanatory models for artificial intelligence (AI) systems that are designed for direct interaction and cooperation with civil society. The research subject is therefore not purely technical; the focus on civil society calls for a distinctly application-oriented and participatory research design.
The first workshop tackled how computers think and how this can be explained in human terms.
2 September 2022 – Alexander von Humboldt Institut für Internet und Gesellschaft
The second workshop took place in Munich. Different groups from civil society, with and without AI expertise, were invited to discuss the comprehensibility of AI.
20 October 2022 – In the social room of the non-profit organisation neuland & gestalten gGmbH
The hackathon focuses on developing two explanatory models for a better understanding of AI, together with researchers and experts.
3 February 2023 – Alexander von Humboldt Institut für Internet und Gesellschaft
June/July 2023 – Alexander von Humboldt Institute for Internet and Society and in the social room of the non-profit organisation neuland & gestalten gGmbH