There is not enough empirical data on the effects of AI
Many journal articles examining the application of artificial intelligence to knowledge work draw on data and insights from blogs, newspapers, or data collected by consulting firms. This practice carries the risk that information obtained through less reliable methods enters the research record. It can also create the impression that empirical research in this area is more extensive than it actually is. In this blog post, Miriam Feldman highlights the urgency of using primary data collection methods such as surveys, interviews, and case studies.
The significant time required to gather primary data or to work through well-established empirical methods can be a hindrance to researchers of emerging technology, a very fast-paced field. Journal articles also spend considerable time in peer review before publication. For this reason, authors may choose to draw their information from news reporting in order to produce academic work that engages with cutting-edge technology.
I am currently conducting a literature review which forms part of the Artificial Intelligence & Knowledge Work – Implications, Opportunities And Risks project at HIIG. The team is investigating how artificial intelligence, understood here as systems that use large amounts of data to solve complex problems, is impacting knowledge workers’ workplaces. The HIIG project focuses on these intra-organizational implementations of artificial intelligence from the workers’ perspective.
My literature review explores current academic approaches to understanding the use of artificial intelligence in knowledge work through research questions including “What methodologies are the relevant studies applying?” The review is conducted systematically using the Scopus database with search terms related to the larger project, such as “machine learning,” “chatbot,” and “workforce,” among others. Through this search, 1,122 articles were identified for abstract and keyword screening, with 51 chosen for review at the full-text level. Thirty-six were included in the final literature review corpus after this stage of review.
Over one third of the articles in this review—all published in peer-reviewed journals—cite popular sources. The library at the University of California Berkeley lists traits of popular sources, including a lack of scholarly peer review before publication, being written by “generalists, including bloggers, staff writers, and journalists,” and an absence of formal citations. These popular sources include news articles and some reports from consulting firms. The journal articles in my literature review which cite these sources often do so without specific discussion of the method or justification of their inclusion. These citations are particularly common when academics describe the recent activities of particular firms incorporating artificial intelligence into their operations.
For some of the articles in the literature review, these popular sources form the backbone of the authors’ empirical work. Two similar articles use data gathered by a variety of consulting firms and popular news sources as the basis for their secondary data analysis on artificial intelligence in business. These particular articles are certainly upfront about where they get the data. But, by using data from popular sources to the same extent as data from academic sources—including a major university and a survey conducted directly by the authors—they do little to make a distinction between the popular and the academic sources.
Other authors weave news reporting on the integration of artificial intelligence into the workplace into the text of their articles. One news article cited by a journal article in the project’s body of literature, for instance, reports on hospitals’ plans to employ certain emerging technology. The news article does not include a discussion of the reporter’s sources or methods. Despite this, the news outlet includes highly specific plans and figures about the hospital. This raises the question: why wouldn’t the authors of the journal article go straight to the hospital employing the technology and confirm these figures? This is far from an isolated occurrence; similar practices are common throughout the corpus.
Risk: Who’s reviewing?
There are significant disparities in how transparent the methodologies of popular sources are. As the website of the library at Simon Fraser University notes, “because grey literature (usually) does not go through a peer review process, the quality can vary a great deal.” It is therefore plausible that, when popular sources or grey literature are being used to support academic arguments, information acquired through less rigorous methodologies may be lent undue credibility. When repackaged as a part of a journal article, the popular sources’ lack of formal citations is concerning. This is particularly true when the journal articles in question, like those in our literature review, are all themselves peer-reviewed and therefore viewed as highly academic and rigorous once published.
Risk: What are the incentives?
There are further risks associated with the use of these popular sources, because for-profit institutions, consulting firms, and media companies have an inherently different set of incentives than academics do. The Poorvu Center at Yale University, in its discussion of scholarly and popular sources, puts it well: “Every source must be questioned for its stake in the material.” The grey area with the citations in the body of literature reviewed for Artificial Intelligence & Knowledge Work – Implications, Opportunities And Risks is that much citation of these popular sources occurs without explicit discussion of where the information is coming from and the publishing organization’s motivations.
Risk: How much original scholarship is there?
There is a further risk of inflating the apparent number of studies in the field of artificial intelligence and knowledge work. When large numbers of journal articles draw data or insights from popular sources, an illusion arises of a bigger body of original scholarship than actually exists, despite a low volume of original surveys, interviews, or similar studies.
As a deeply interdisciplinary area of study, the application of AI to knowledge work naturally attracts a wide range of stakeholders. From consultancies offering digital strategy services, to journalists interested in the future of labor, to academics in disciplines as wide-ranging as engineering and psychology, each party wishes to contribute its findings to the discourse and to stay on the cutting edge. Academics simply can’t report breaking tech news at the same rate as journalists, nor can journalists (usually) claim the degree of peer review that journal articles undergo; these fields operate with widely varying methodologies and approaches.
AI in knowledge work is still very much in its early stages as a field of research. The literature review finds that the frequent citation of popular sources by academic journal articles, together with the risks discussed above, illuminates a real need for more data-gathering and empirical work in the field. This is far from a claim that popular sources should never be cited in academic work. Considering the risks above, though, I call for more articles which investigate the ways that organizations are actually applying artificial intelligence, through case studies, surveys, and interviews. With more empirical research and data on the topic, the academic community will be much better poised to analyse the effects of this emerging technology on work.
Though knowledge work has many definitions, Pasi Pyöriä notes the common themes of significant formal education and use of information technology (Pyöriä, 2005).