AI – a metaphor, or will machines become persons in a digital society?
Is artificial intelligence a metaphor or a programme? Can machines be intelligent in the way humans are? This question has been contested ever since the term was coined. The so-called weak variant of artificial intelligence understands the term as a metaphor; the strong variant takes it literally. The answer to this question promises insights into the future role of intelligent machines in the digital society. This article is part of an ongoing series on the politics of metaphors in the digital society. HIIG researcher Christian Katzenbach and Stefan Larsson (Lund University Internet Institute) edit the contributions to this series.
Dossier: How metaphors shape the digital society
Will computers, robots and machines one day be considered intelligent persons? Will automatic agents imbued with artificial intelligence become members of our society? The concept of personality is fluid. There were times when slaves had no personality rights. Recently, there has been a growing movement arguing to confer legal personality upon animals. Hence, our concept of personality might change in the course of digitisation of society. This might be due to advances in artificial intelligence, but also to the way the term is framed.
Is artificial intelligence a metaphor or a descriptive concept? This question cannot be answered one way or the other, as there is a semantic struggle surrounding the concept of artificial intelligence. As will be shown, some treat artificial intelligence as a broad metaphor for the ability of machines to solve specific problems. Others take it literally and conceive of artificial intelligence as being of the same kind as human intelligence. Some researchers go as far as to reject the concept completely. To shed more light on the issue, it is worth going back to the time the term was coined.
The term artificial intelligence was first used in 1956 at Dartmouth College in Hanover, New Hampshire, where John McCarthy, Claude Shannon and Marvin Minsky organised a six-week summer workshop supported by the Rockefeller Foundation. They framed their grant application in the following terms:
…The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
Interestingly, even the organisers of the conference did not really approve of the term artificial intelligence. John McCarthy stated that “one of the reasons for inventing the term ‘artificial intelligence’ was to escape association with ‘cybernetics’”, as he disagreed with Norbert Wiener. Nevertheless, the term caught on and came to be used widely; today, artificial intelligence constitutes a subdiscipline of computer science.
The weak AI thesis vs the strong AI thesis
As can be seen from the statement above, there has always been an ambiguity in the term. The statement can be taken to mean that every aspect of human intelligence can be replicated. Yet it can also be read as a conjecture: the word simulate suggests that artificial and human intelligence remain distinct. These different interpretations have been conceptualised as the strong and the weak AI thesis. The strong AI thesis holds that such a simulation in fact replicates the mind, and that there is nothing more to the mind than the processes simulated by the computer. By contrast, the weak AI thesis holds that machines can merely act as if they were intelligent. The weak AI thesis transfers the concept of intelligence to a context in which it would not normally apply; the term intelligence is thus used metaphorically in the context of weak AI.
One of the active proponents of the concept of weak AI was Joseph Weizenbaum, a Jewish German-American computer scientist who was responsible for several important technical inventions but remained critical of the societal impact of computers. He programmed the famous chatbot ELIZA. Weizenbaum used a small set of formal rules to keep the conversation going: the chatbot analyses sentence structure and grammar, then rephrases the user's previous utterance as a question or falls back on a stock reply.
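The rule-based mechanism described above can be illustrated with a minimal sketch. The patterns, responses and pronoun reflections below are illustrative inventions, not Weizenbaum's original ELIZA script; they merely show the principle of matching a phrase and rephrasing it as a question.

```python
import re
import random

# Illustrative pattern rules: each regex captures a phrase from the user's
# utterance, which is then echoed back inside a templated question.
RULES = [
    (re.compile(r"\bi need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE),
     ["Is that the real reason?"]),
]

# Stock replies used when no rule matches, so the conversation keeps going.
FALLBACKS = ["Please tell me more.", "How does that make you feel?"]

# Simple first-/second-person swap, so "my computer" becomes "your computer".
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}


def _reflect(phrase: str) -> str:
    """Swap pronouns word by word using the REFLECT table."""
    return " ".join(REFLECT.get(word.lower(), word) for word in phrase.split())


def reply(utterance: str) -> str:
    """Return a response by matching the first applicable rule."""
    for pattern, responses in RULES:
        match = pattern.search(utterance)
        if match:
            captured = _reflect(match.group(1).rstrip("."))
            return random.choice(responses).format(captured)
    return random.choice(FALLBACKS)


print(reply("I am sad about my computer"))
print(reply("Good morning"))
```

Despite having no understanding of the conversation, such a few-line mechanism can keep a dialogue going surprisingly well, which is precisely what made ELIZA famous.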
The proponents of the strong AI thesis have tried to find ways to replicate processes in the brain, for example by designing neural networks. The strong AI thesis suggests that machines can be intelligent in the same way as human beings. One of its proponents, Klaus Haefner, once had a public exchange with Weizenbaum. They used arguments that Alan Turing had already anticipated in his seminal 1950 text “Computing Machinery and Intelligence”. Turing famously replaced the question “Can machines think?” with an “imitation game”. In this game, an interrogator holds a written conversation with one human being and one machine, both located in separate rooms. The task Turing describes is to design a machine that acts in such a way that the interrogator cannot distinguish it from the human being on the basis of its communication. The goal, then, is not to design a system that equals a human being, but one whose behaviour a human being cannot tell apart from a human's. Whether this is achieved by replicating the human brain, or in any other way, did not matter to Turing.
Flying Differently than Birds
In the literature on AI, the field's possible advances are compared to other technologies, such as aeroplanes. Early flying machines tried to imitate birds, yet aeroplanes ended up flying in a very different way. One aim of Turing's article was to shift the focus from a general and teleological debate to the actual problems to be solved. On this approach, there is no grand general answer to the question of what AI can achieve in the future, only many small improvements to machines.
There might be a day when we suddenly realize that in many respects the line between humans and machines is blurred. Games like chess or Go are examples of problems in which machines have surpassed humans. If this trend continues, it might give a completely different connotation to the term digital society. While we cannot say that we are there yet, does that mean it can never happen? Try “bot or not”, an adaptation of the Turing test for poems. You will find that even today, it can be tricky to distinguish machines from human beings.
Would you like to publish an article as part of this series? Then send us an email with your topic proposal.