21 January 2020 | doi: 10.5281/zenodo.3752941

What is ‘correct’ language in digital society? From the Gutenberg galaxy to the Alexa galaxy

How do notions of ‘correct’ language change in the context of digital communication? Britta Schneider examines shifting notions and practices of ‘correct’ language and finds that digital communication plays an ambivalent role: norms become more fluid and more fixed at the same time.


Who is a native speaker in digital society?

In Western cultures, the idea that there is ‘correct’ and ‘wrong’ language has high social relevance. Grammatical and orthographic norms have also been technically conditioned for a very long time. Clear and standardized language norms, as we know them today, are inconceivable without the technology of book printing, which contributed to certain language and writing practices being accepted as ‘correct’ in national, territorially based social spaces. How do notions of ‘right’ language change in the context of digital communication? What happens to our notions of language when auto-completion, AI translation tools, transnational and often multilingual interaction, and speaking with machines are part of our everyday lives? Linguistic-anthropological observations allow a first look at this and show both homogenizing and diversifying tendencies. At the same time, it seems that voice-controlled devices in particular are not only linguistic authorities but also partners in conversation and relationships.

The idea of ‘wrong’ and ‘right’ in language comes along so naturally in Western culture that it is seldom questioned. In the evaluation of ‘correct’ and ‘incorrect’ uses of language, linguistic research distinguishes between so-called prescriptive norms, that is, norms as they are taught in school, and descriptive norms, norms that linguists describe when they document the speech practices of native speakers of a particular language (which may or may not deviate from prescriptive norms). These speakers are defined as using language ‘grammatically’ (Chomsky 1957). And yet, the understanding of human speech as ‘naturally’ ordered, and the classification of some speakers as ‘native’ or ‘mother tongue’ speakers whose interaction practices can be objectively documented in written form, are challenged in an age of digital media. For example, we see that transnational communication spheres develop, in which genres of formal and informal, oral and written language use mingle and in which the concept of the ‘native speaker’ who uses language ‘correctly’ by nature becomes problematic. Additionally, it may be assumed that the logics of programming and self-learning algorithms interact with traditional written and nationally ordered norms. As I will discuss, an impact of technology is nothing new, however, as language has never appeared in isolation from the material mediations in which it occurs (that is, sounds, letters or print). In this blog post, I first introduce the thought that languages are social constructs that have always been influenced by technology and then scrutinize to what extent language use in digital societies questions or reproduces traditional conceptions of language norms. Finally, based on an interview study, I ask about the effects of human-machine interaction on notions of language correctness. Such interactions seem to have the effect that speakers not only perceive electronic devices as linguistic authorities but also develop emotional relationships with them.

Language and technology – old wine in a new bottle

Generally, it can be said that humans use sound patterns in more or less structured form. These forms overlap where people interact with each other regularly or have access to particular uses of language through, for example, school or media. Yet the idea that overlapping and homogenized structures are natural forms, based on an internal universal grammar or on the ‘nature’ or ‘soul’ of a group, and that these can be neutrally described by linguists has been critically discussed (see e.g. Cameron 1995). In more socially oriented and poststructuralist traditions of linguistics, the notion of a language as a systemic entity, such as English, German or Spanish, with stable and coherent structures, has been studied as an outcome of socio-historical processes (see e.g. Pennycook 2004), strongly linked to power hierarchies in which the speech practices of powerful groups have been declared ‘correct’ and other uses ‘false’, or minority deviations that are considered ‘dialects’ or ‘vernaculars’.

Clearly, besides the social prestige of powerful speakers and the social bonds that language norms create, material technologies, writing and printing in particular, have played a crucial role in establishing language forms as ‘correct’ or ‘standard’ (see also Schneider in preparation). The practice of writing is often interpreted as a one-to-one mapping of the sounds of language, and as secondary to speaking. This, however, overlooks that writing not only represents sounds but also reduces the complex activity of verbal and multimodal interaction (through e.g. facial expressions, intonation and gesture) to a linearly ordered form that gives the impression that human communication is primarily based on lexical-syntactic form.

The impact of literacy and of printing on culture and thought has been discussed from different angles. In critical research on literacy, the techniques of writing have been studied in relation to the development of social organization and hierarchy and in their epistemological effects on our perception of language and truth (see e.g. Coulmas 2013, Linell 2005, Ong 1982, Street 1995). Marshall McLuhan, a Canadian philosopher and communication theorist, argues that the appearance of Gutenberg’s printing press not only enforced the standardization of linear language form but also impacted cultural orders of perception and representation: abstraction, specialization, fragmentation, typologization and proportionation are said to have become leading logics in societies that make use of the printing press (see also Kloock 2008: 255). McLuhan calls the printed book the first mass product, creating huge and homogenized collective memories (McLuhan 1995). Printing, however, had effects not only on social hierarchies and on collective memories and perceptions but also on social space and on the networks in which knowledge was distributed (see also Giesecke 1991). With religious authorities no longer the main distributors of knowledge, the book printing industry allowed a far wider reach and a democratization of the ability to distribute information.

In this sense, we can detect similarities between the changes that the printing press and digital communication practices have brought about. Like the printed book, digital devices and the Internet have widened the circle of people who are technically able and socially legitimized to distribute knowledge. They were thus at first often celebrated as enabling democratization (see e.g. Grossman 1995) and as potentially related to the overcoming of national boundaries (for discussion see e.g. Beck 2002). Today, now that fake news, filter bubbles and bots are topics of everyday discourse, the effects of digital communication on democracy and on social and public order are discussed much more critically. This is not the place to evaluate general political developments in contemporary society. Yet the analysis of language norms and their potential reconfiguration can be an interesting point of comparison for theories on the reconfiguration of social order.

Diversities in digital language…

Thus, similar to the idea that national social orders may be weakened by digital communication, many observations hint at a destabilization of language norms in an age of digitalization. It has been discussed in academic as well as in public settings that language use in online settings and in communication via smartphones does not necessarily adhere to traditional orthography. Some studies show a dramatic decrease in orthographically ‘correct’ writing among primary school children between the 1970s and today and suggest that this may be related to new forms of creative language use in online and electronic communication (Steinig and Betzel 2014). The fact that new types of writing have emerged that blur the formerly rather strict divide between informal oral and formal written language has also been analyzed (e.g. Dürscheid 2003) and can be seen as another factor in the destabilization of language norms.

Furthermore, we can observe that the possibilities of digital communication lead to the emergence or reinforcement of transnational communities, which are based on common interests, goals or patterns of consumption, from football, hip hop or gaming to climate activism or organic coffee. These communities have different aims, different scales and different forms of engagement, but what typically unites them is that, when it comes to digital interaction, people of different language backgrounds interact. Very often, transnational digital interaction involves the use of English, and for many users this means English as a second or third language. Multilingualism thus has a wider public presence, and the idea that people and societies are ‘normally’ monolingual is now outdated. As an effect, English in particular is used in variable forms. Even though the idea that there is ‘correct’ and ‘native’ English is still dominant, we not only find an array of more or less officially sanctioned varieties of English (e.g. US or UK English, Australian English, Canadian English, but also Indian English or Nigerian English) but also many everyday uses of English that do not fit into such national framings and that are shaped by the socialization trajectories and experiences of individual users. It seems obvious that speakers in English-speaking countries use English correctly (which so far allows linguists to describe such uses as ‘new varieties of English’; see above). And yet, it will become increasingly hard to legitimize that forms used in officially English-speaking, highly multicultural cities in Canada, for example, are correct, while speakers of English in an equally multicultural but officially non-English setting like Berlin only use English ‘correctly’ if they have been raised in officially English-speaking countries. Clearly, we here see a pluralization of language forms that questions the territorial, national basis on which many of the traditional concepts of language correctness are founded.

… and new types of fixity

At the same time, transnational uses of English in digital spheres demonstrate that, besides the potential pluralization of norms, we see the emergence of new norms, in which the normalization and dominance of English is rather obvious. English is now established as the language of elites in politics, academia and digital business, but it is also the lingua franca of a large number of people who interact for more mundane purposes. Having taught in academia for almost fifteen years now, I have observed that German students who enter universities today are much more fluent and comfortable speaking English than when I started my career. The ability to speak English has become an almost normal asset in a very high number of jobs, at least on the German job market, to the extent that it may not even be mentioned in job ads.

The trend towards the normalization of English is found in many areas of digital life. English is the basis for most programming languages. English is dominant on websites: more than half of all websites online are in English, and the trend seems to be increasing. While German, for example, shows a decrease of 1.5% on websites from November 2018 to November 2019, English shows an increase of 2% in the same period. Due to the availability of English-language websites, and of course related to the Anglophone dominance of digital programming and business, English is the first candidate with which translation tools, grammar checkers and auto-completion tools are trained. And despite the pluralization of norms in human-to-human online interaction, it is written online language from websites that now often serves as a corpus for the linguistic study of grammatical patterns of language (as e.g. in iWeb with an estimated 14 billion words or in GloWbE with 1.9 billion words). Both corpora are designed for the study of English. From a cultural anthropological angle, it is a very typical development that practices that are not only demographically dominant but furthermore associated with cultural elites will, sooner or later, become understood as a model and thus as a norm.
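To make this ‘frequency becomes norm’ logic concrete, here is a minimal Python sketch. The corpus, the example sentences and the decision rule are invented for illustration; it simply shows how counting forms in web text can end up defining what a grammar checker or auto-completion tool treats as ‘normal’ usage.

```python
from collections import Counter

# A toy 'web corpus' (invented sentences). Real corpora such as iWeb contain
# billions of words, most of them written, formal and in English.
corpus = [
    "the team are meeting today",   # concord common in UK English
    "the team is meeting today",    # concord common in US English
    "the team is meeting today",
    "the team is meeting today",
]

def bigram_counts(sentences):
    """Count adjacent word pairs across all sentences."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        counts.update(zip(words, words[1:]))
    return counts

counts = bigram_counts(corpus)

def more_normal(variant_a, variant_b):
    """Treat the more frequent bigram as the 'norm': frequency stands in for correctness."""
    return variant_a if counts[variant_a] >= counts[variant_b] else variant_b

print(more_normal(("team", "is"), ("team", "are")))   # -> ('team', 'is')
```

Nothing in this sketch ‘knows’ anything about grammar; whichever form dominates the sampled texts is simply returned as the default, which is precisely how quantitative dominance turns into a de facto standard.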

Written online language from websites and from other documents available online, both typically reproducing traditional notions of formal and correct language use, is also the basis for successful and increasingly established digital translation tools. In this context, we see another trend towards homogenization, which has to do not only with the dominance of English but also with the logics of Big Data and programming. The highly successful online translation tool DeepL, for example, is an automated, self-learning tool that has been trained with the database of an online dictionary (Linguee). The database of the dictionary is based on documents that are available online in different languages, many of them from the context of EU administration. Forms that have been translated by human translators are here reproduced as correct. The results are impressive, even though linguists have noted that, for example, in German-English translation, English phrases appear that are not grammatically incorrect but that follow patterns much more common in German than in English, and thus have a slight German twist to them. This is most likely due to the fact that many official documents are first produced in English and then translated into German by German translators (the same procedure applies to the other languages involved). As such forms, in a kind of looping process, in all likelihood reappear in more and more texts and then continue to impact AI learning processes, they will co-define what is ‘correct’ English, at least in the contexts where people make use of the tools. Similar to the observation above on the role of English in transnational interaction, we see a simultaneous process of fragmentation and homogenization of norms.
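This looping process can be illustrated with a deliberately simplified simulation. All numbers and labels below are invented; the only assumptions are that a translation model reproduces whichever of two competing phrasings is more frequent in its training pool, and that its output flows back into that pool.

```python
# Toy model of the feedback loop: the model emits the currently dominant phrasing,
# and its output re-enters the pool of online documents it will later be trained on.
pool = {"phrasing_A": 55, "phrasing_B": 45}   # invented initial document counts

def retrain_and_translate(pool, new_documents=100):
    """Emit the majority phrasing and feed the newly produced documents back into the pool."""
    favoured = max(pool, key=pool.get)
    pool[favoured] += new_documents
    return favoured

for generation in range(1, 6):
    retrain_and_translate(pool)
    share_a = pool["phrasing_A"] / sum(pool.values())
    print(f"after generation {generation}: share of phrasing_A = {share_a:.2f}")
# A small initial advantage keeps growing with every round: 0.78, 0.85, 0.89, ...
```

Real translation systems are of course far more nuanced, but the sketch captures the basic dynamic described above: once machine output becomes part of the material the machine learns from, the favoured form gains weight with every iteration.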

Alexa as authority – and as a friend 

Finally, an ongoing qualitative interview study with users of Alexa and Siri that I started last year shows similar trends. When it comes to speaking with electronic devices, it is common that machines do not always ‘understand’ what people say. Users typically interpret this as their own inability to pronounce things ‘correctly’, so that a hyper-correct homogenization occurs at the very fine-grained level of pronouncing single sounds. At the same time, my interviewees report that the devices have become much better at reacting to what is conceived as ‘natural’ language, that is, language produced with variable intonation and speed. Nevertheless, some German users report that they use Alexa in the English version, as the ‘English Alexa’ is (at least currently) easier to use. And yet, as the databases with which AI tools are trained grow, it is likely that language processing will also function with more and more variable language, including (some) dialects. Again, there is a kind of ‘Matthew effect’ logic here, as those speaking practices that are quantitatively more common will have a greater effect on AI learning and thus become inscribed as ‘correct’. Besides, it remains to be observed whether the limited syntactic forms (that is, mostly imperatives) with which voice-controlled interfaces are used and the limited number of pragmatic functions (that is, mostly orders) will have an impact on humans’ perception of ‘normal’ language.
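The ‘Matthew effect’ mentioned here can likewise be sketched with invented numbers. The varieties, the hours of training data and the saturation curve below are purely illustrative assumptions, not real speech-recognition figures; the point is only that well-resourced varieties are recognized more reliably, and reliable recognition in turn generates more usable data.

```python
# Purely illustrative figures: training data per variety and a toy saturation
# curve standing in for recognition accuracy (not a real ASR metric).
training_hours = {"standard German": 10_000, "regional dialect": 200}

def recognition_rate(hours, saturation=5_000):
    """Toy assumption: accuracy grows with available data and levels off."""
    return hours / (hours + saturation)

for variety, hours in training_hours.items():
    rate = recognition_rate(hours)
    new_data = rate * 1_000   # successful interactions yield new training material
    print(f"{variety}: recognition ~ {rate:.0%}, new data ~ {new_data:.0f} hours")
# The well-resourced variety keeps accumulating data; the dialect barely gains any.
```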

The idea that language norms may change in digital society is not, however, only based on the dominance of English, the limited templates of voice-controlled devices and the logics of AI and Big Data. An observation that I made during the interview study, and that I had not anticipated, was that some users develop emotional relationships with their speaking tools. Consider the following interaction, which I recorded in an interview situation at the home of an Alexa user, where the interviewee wanted to show me how the tool works:

Interviewee: This is computer. Computer, what time is it?
Alexa: It is 16.45h.
Interviewee: Good girl. This is Alexa.
Britta: But do you also say this when I am not here, that you say “good girl” or “well done” or something?
Interviewee: Sometimes.
Britta: Yeah?
Interviewee: I really do like them all (laughs).

The fact that the interviewee openly speaks about her emotional attachment when she says ‘I really do like them all’ (she has up to three devices in each room) is interesting in itself and may point to a potentially universal role of sound in human emotions, relationships and ascriptions of agency. Note, for example, that all of my interviewees conceive of the tool as actively ‘doing’ something if they control it via sound, but not if they control it via their hands (where they perceive themselves as the actors). For the question of language homogenization or fragmentation, we can hypothesize that there is again a double development: humans adapt their language to conform to the demands of their devices. At the same time, an AI tool like Alexa will increasingly become accustomed to patterns that are not orders but, as in the case above, phatic language that serves not a referential but a social function, and may thus expand its abilities to include more variable language use.

Thus, in the end, it may be human programmers who will have to learn from Alexa that humans do not only use language to fulfill logical demands in rational, referential interaction, but that communication is an essentially creative, affective and, above all, social activity. The question of whether and how language norms remain the same or change is therefore a social question about whose voices are heard and legitimized, a question of power in a society in which humans and machines have collaborated for several centuries.


References

Beck, Ulrich. 2002. “The cosmopolitan society and its enemies.” Theory, Culture & Society 19:17-44.
Cameron, Deborah. 1995. Verbal Hygiene. London: Routledge.
Chomsky, Noam. 1957. Syntactic Structures. The Hague: Mouton.
Coulmas, Florian. 2013. Writing and Society. Cambridge: Cambridge University Press.
Dürscheid, Christa. 2003. “Medienkommunikation im Kontinuum von Mündlichkeit und Schriftlichkeit. Theoretische und empirische Probleme.” Zeitschrift für Angewandte Linguistik 38:37-56.
Giesecke, Michael. 1991. Der Buchdruck in der frühen Neuzeit. Eine historische Fallstudie über die Durchsetzung neuer Informations- und Kommunikationstechnologien. Frankfurt am Main: Suhrkamp.
Grossman, Lawrence K. 1995. The Electronic Republic: Reshaping Democracy in America. New York: Viking.
Linell, Per. 2005. The Written Language Bias in Linguistics. Its Nature, Origins and Transformations. London: Routledge.
McLuhan, Marshall. 1995 [1968]. Die Gutenberg-Galaxis. Das Ende des Buchzeitalters. Bonn: Addison-Wesley.
Pennycook, Alastair. 2004. “Performativity and language studies.” Critical Inquiry in Language Studies 1:1-19.
Ong, Walter J. 1982. Orality and Literacy. The Technologizing of the Word. London: Routledge.
Schneider, Britta. in preparation. Liquid Languages. Polymorphous Acts of Identity and the Fluidity of Language Categories in Linguistically Complex Belize. (Habilitation thesis, Europa Universität Viadrina)
Steinig, Wolfgang and Dirk Betzel. 2014. “Schreiben Grundschüler heute schlechter als vor 40 Jahren? Texte von Viertklässlern aus den Jahren 1972, 2002 und 2012.” In Sprachverfall? Dynamik – Wandel – Variation, edited by Albrecht Plewnia and Andreas Witt. Berlin: de Gruyter.
Street, Brian V. 1995. Social Literacies: Critical Approaches to Literacy in Development, Ethnography and Education. London: Longman.


Prof. Dr. Britta Schneider is junior professor for language use and migration at European University Viadrina.

This post reflects the opinion of the authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de

