23 April 2019

AI-infused decisions: "and a spoonful of dignity"

AI has the potential to take decisions off our hands and to optimise processes – for example in medical treatment. But this new kind of AI-infused decision-making often works in obscure ways for which we need comprehensible translations. In her blog post, Aviva de Groot describes how we should value the aspect of dignity – an elusive component of the "right to explanation" – in automated decisions.

The ability to make decisions is a salient shared feature of the manifold applications referred to under the umbrella term AI. Its use affects existing decisional practices and transforms experiences such as personal communication in the health and political domains. Where the decisional elements of input, analysis and output become harder to trace, or even begin to escape our human capacity for understanding, AI-infused decisions can no longer be explained with previous methods. And where such analysis inevitably produces only correlations, causes still need to be investigated before results can be understood. Technical-operational fixes are being developed, but researchers also call attention to human(e) ingredients. Some of these need some explanation themselves if they are to be used responsibly. This blog post briefly treats the volatile entry of dignity, preceded by some professed catalysers.[1]

Augmented Intelligence

Confusingly also abbreviated as ‘AI’, the A here stands for Augmented. The term communicates the understanding that in certain situations, the combination of human and artificial intelligence holds the greatest positive potential. It also scores lower in the ‘scary headlines’ department, which has gained it some industry popularity. Use responsibly: although it emphasises the distinct natures of human and machine thinking, it potentially obscures the human colour of the artificial input, as it becomes increasingly challenging to separate each intelligence’s contribution.

Raw data

Don’t be fooled: this does not exist. It is said to grow in parts of the AI landscape where the idea that technology is neutral still flourishes. Disagreeing, Feenberg and other scholars stress the importance of recognising our (possibly hidden) motives at play in the human–technology “co-construction” of reality, as what we design and implement in society shapes how we live and interact. These experiences in turn seed further designs.

Automation pessimism

This substance induces a heightened sense of the kind of awareness advised under the previous entry. Seen as characteristically European, and as inspiring legal restrictions on and safeguards for automated decision-making, it boosts calls for transparency and understandability. Administrative innovations in support of the destructive machinery of the Second World War are seen to have facilitated dehumanising decisional processes in an unacceptable way.

De-objectification

Often combined with automation pessimism, this element benefits both parties to the explanatory exchange. It is promoted to (re-)equip them with an understanding of how people are represented in the digital age and treated on that basis. AI is seen to exacerbate earlier upgrades for controlling humans: predicting their behaviour now depends even less on knowledge and understanding of them. Their choice environments are set on the basis of digitally ‘observed’ behaviour. It is a popular ingredient with those who oppose such treatment on principled grounds.

The capability approach

To (re-)instate people in the described way, they will need to be (re-)instilled with the right capabilities. A known supplement in the realisation of human rights, the approach centres on the idea that merely providing a resource – like a right to explanation – may ignore people’s actual possibilities to enjoy its functions. People will actually need to be able to provide and assess explanations in order to (re-)act as responsible decision makers. This is an ingredient to watch, as it is becoming very popular. Think of the problem of ‘deskilling’ in light of the declining demand for people’s own decision-making capabilities.

Care ethics

Not to be confused with the varieties of ‘AI ethics’ that currently spring up like mushrooms in industry, academic and political environments. Care ethics calls upon the virtues of humans, accepting them as co-dependent and vulnerable. Its primary principles, shared within the medical domain, harbour proven beneficial potential: ‘autonomy’, for example, contains a strong obligation to explain and inform patients. It is frequently used together with dignity, as these ethics activate the benign forces of the latter.

Dignity

The dignity-informed move from ‘doctor knows best’ to ‘informed consent’ has urged doctors to afford insight into what lies within and beyond the limits of their medical knowledge, in support of patients’ decisional capabilities. The ensuing challenges to the power relationship bring us to an important care-related value of dignity: its mutuality. Dignity is cultivated within us and feeds upon what we come to understand as proper, humane behaviour. The user should understand that withholding such treatment from another (or even from herself) will drain her own supply. Great misuses of the past and present are looked to for examples. Some progress has been made: slavery and genocide have been legally recognised as harmful to the shared value space we all depend on, and qualified as crimes against humanity. Grave harms are still inflicted where powerful players wield dignity as a conditional blessing, wrongfully conflating it with freedom or autonomy rights – which can be legally restricted for defensible reasons relative to age, state or behaviour. Progress in humanity’s appreciation of dignity continues to redefine the limits to these limitations. And so we develop …

A spoonful of dignity may serve to highlight the human relations that are seen to fade through puzzling uses of automation, and to act as a binding agent in developing prescriptions. It propels the need to identify proper understandings of augmented intelligence. As a bonus, it may relieve the exhausting calls on individual autonomy: increasingly disqualified as a universal fix, the latter may be nursed back into a healthy resource by a shift of focus to human dignity. But that is another story.


Aviva de Groot is a PhD researcher at the Tilburg Institute for Law, Technology and Society. Her research focuses on automated decision processes. The article was written as a follow-up to the conference “AI: Legal & Ethical Implications” of the NoC European Hub in Haifa.

This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information about the content of these posts and the associated research projects, please contact info@hiig.de

