AI-infused decisions: "and a spoonful of dignity"
AI has the potential to take over decisions and optimise processes – for example in medical treatment. Yet this new kind of AI-infused decision-making often works in obscure ways for which we need intelligible translations. In her blog post, Aviva de Groot describes how we should value the aspect of dignity – an elusive ingredient of the "right to explanation" – in automated decisions.
The ability to make decisions is a salient shared feature of the manifold applications referred to under the umbrella term AI. Its use affects existing decisional practices and produces transformative experiences, for example in personal communications in the health and political domains. Where the decisional elements of input, analysis and output become harder to trace, or even start to escape our human capacities for understanding, AI-infused decisions can no longer be explained with previous methods. And where such analysis inevitably produces only correlations, causation still needs to be investigated before results can be understood. Technical-operational fixes are being developed, but researchers also call attention to human(e) ingredients. Some of these need some explanation themselves to be used responsibly. This blog post briefly treats the volatile entry of dignity, preceded by some of its professed catalysers.
Augmented intelligence
Confusingly also abbreviated as "AI", the A here stands for Augmented. The term communicates the understanding that in certain situations, the combination of human and artificial intelligence holds the greatest positive potential. It also scores lower in the "scary headlines" department, which has gained it some industry popularity. Use responsibly: although the term stresses the distinct natures of human and machine thinking, it can obscure the human colour of the artificial input, as it becomes increasingly challenging to separate each intelligence's contribution.
Neutral technology
Don't be fooled: this does not exist. It is said to grow in parts of the AI landscape where the idea that technology is neutral still flourishes. Disagreeing, Feenberg and other scholars stress the importance of recognising our (possibly hidden) motives at play in the human-technology "co-construction" of reality, as what we design and implement in society shapes how we live and interact. These experiences in turn seed further designs.
Memory of the Second World War
This substance induces a heightened sense of the kind of awareness advised under the previous lemma. Seen as characteristically European, and inspiring legal restrictions and safeguards on automated decision-making, it boosts calls for transparency and understandability. Administrative innovations in support of the destructive machinery of the Second World War are seen to have facilitated dehumanising decisional processes in an unacceptable way.
Often combined with automation pessimism, this element benefits both parties of the explanational exchange. It is promoted to (re)instil in them an understanding of how people are represented in the digital age and treated on that basis. AI is seen to exacerbate earlier upgrades for controlling humans: predicting their behaviour now depends even less on knowledge and understanding of them. Based on digitally "observed" behaviour, their choice environments are set. It is a popular ingredient with those who oppose such treatment on principled grounds.
The capability approach
To (re)instate people in the described way, they will need to be (re)instilled with the right capabilities. A known supplement in the realisation of human rights, the capability approach centres on the idea that merely providing a resource – like a right to explanation – may ignore people's actual possibilities to enjoy its functions. People will actually need to be able to provide and assess explanations in order to (re)act as responsible decision makers. This is an ingredient to watch, as it is becoming very popular. Think of the problem of "deskilling" in light of the declining demand for people's own decision-making capabilities.
Care ethics
Not to be confused with the "AI ethics" varieties that currently spring up like mushrooms in industry, academic and political environments. Care ethics calls upon the virtues of humans, accepting them as co-dependent and vulnerable. Its primary principles, shared within the medical domain, harbour proven beneficial potential: "autonomy", for example, contains a strong obligation to explain and inform patients. Care ethics is frequently used together with dignity, as it activates the benign forces of the latter.
Dignity
The dignity-informed move from "doctor knows best" to "informed consent" has urged doctors to afford insight into what lies within and beyond the limits of their medical knowledge, in support of patients' decisional capabilities. The ensuing challenges to the power relationship bring us to an important care-related value of dignity: its mutuality. Dignity is cultivated within us and feeds upon what we come to understand as proper, humane behaviour. The user should understand that withholding such treatment from another (and even from herself) will drain her own supply. Grand misuses of the past and present are looked to for examples. Some progress has been made: slavery and genocide have been legally recognised as harmful to the shared value space we all depend on, and qualified as crimes against humanity. Grave harms are still inflicted where powerful players wield dignity as a conditional blessing, wrongfully conflating it with freedom or autonomy rights – which, unlike dignity, can be legally restricted for defensible reasons relative to age, state or behaviour. Progress in humanity's appreciation of dignity continues to redefine the limits to these limitations. And so we develop …
A spoonful of dignity
A spoonful of dignity may serve to highlight human relations that are seen to fade through puzzling uses of automation, and as a binding agent in developing prescriptions. It propels the need to identify proper understandings of augmented intelligence. As a bonus, it may relieve the exhaustive calls on individual autonomy: increasingly disqualified as a universal fix, autonomy may, with a shift of focus to human dignity, be nursed back into a healthy resource. But that is another story.
Aviva de Groot is a PhD researcher at the Tilburg Institute for Law, Technology and Society. Her research focuses on automated decision processes. This article was written as a follow-up to the conference "AI: Legal & Ethical Implications" of the NoC European Hub in Haifa.
This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information about the content of these posts and the associated research projects, please contact firstname.lastname@example.org.