There isn’t enough empirical data on the impact of AI
Many articles in peer-reviewed journals that investigate the application of artificial intelligence to knowledge work refer to data and insights from blogs, newspapers, or data collected by consulting firms. This practice carries the risk that information obtained through less reliable methods enters the research record, giving the impression that empirical research in this area is more extensive than it actually is. In this blog post, Miriam Feldman highlights the urgency of using primary data collection methods such as surveys, interviews, and case studies.
The significant time required to gather primary data or to apply more well-travelled empirical methods can be a hindrance to researchers of emerging technology, a fast-paced field. Journal articles also spend considerable time in peer review before publication. For these reasons, authors may choose to draw their information from news reporting in order to produce academic work that engages with cutting-edge technology.
I am currently conducting a literature review which forms part of the Artificial Intelligence & Knowledge Work – Implications, Opportunities And Risks project at HIIG. The team is investigating how artificial intelligence, systems that use large amounts of data to solve complex problems, is impacting knowledge workers’ workplaces. The HIIG project focuses on these intra-organisational implementations of artificial intelligence from the workers’ perspective. My literature review explores current academic approaches to understanding the use of artificial intelligence in knowledge work through research questions including “What methodologies are the relevant studies applying?” The review is conducted systematically using the Scopus database with search terms related to the larger project, such as “machine learning,” “chatbot,” and “workforce,” among others. Through this search, 1,122 articles were identified for abstract and keyword screening, of which 51 were chosen for review at the full-text level. Thirty-six were included in the final literature review corpus after this stage of review.
Over one third of the articles in this review, all published in peer-reviewed journals, cite popular sources. The library at the University of California, Berkeley lists traits of popular sources, including a lack of scholarly peer review before publication, authorship by “generalists, including bloggers, staff writers, and journalists,” and an absence of formal citations. These popular sources include news articles and some reports from consulting firms. The journal articles in my literature review that cite such sources often do so without specific discussion of the method behind them or justification for their inclusion. These citations are particularly common when academics describe the recent activities of particular firms incorporating artificial intelligence into their operations.
For some of the articles in the literature review, these popular sources form the backbone of the authors’ empirical work. Two similar articles use data gathered by a variety of consulting firms and popular news sources as the basis for their secondary data analysis of artificial intelligence in business. These particular articles are certainly upfront about where their data come from. But by using data from popular sources to the same extent as data from academic sources (including a major university and a survey conducted directly by the authors), they do little to distinguish the popular from the academic.
Other authors weave news reporting on the integration of artificial intelligence into the workplace directly into the text of the article. One news article cited by a journal article in the project’s body of literature, for instance, reports on a hospital’s plans to employ certain emerging technology. The news article does not discuss the reporter’s sources or methods, yet it includes highly specific plans and figures about the hospital. This raises the question: why didn’t the authors of the journal article go straight to the hospital employing the technology and confirm these figures? This is far from an isolated occurrence; similar practices are common throughout the corpus.
Risk: Who’s reviewing?
There are significant disparities in how transparent the methodologies of popular sources are. As the website of the library at Simon Fraser University notes, “because grey literature (usually) does not go through a peer review process, the quality can vary a great deal.” It is therefore plausible that, when popular sources or grey literature are used to support academic arguments, information acquired through less rigorous methodologies is lent undue credibility. The absence of formal citations in popular sources becomes especially concerning once their claims are repackaged as part of a journal article. This is particularly true when the journal articles in question, like those in our literature review, are themselves peer-reviewed and therefore regarded as rigorous once published.
Risk: What are the incentives?
There are further risks associated with the use of these popular sources, because for-profit institutions, consulting firms, and media companies have an inherently different set of incentives than academics do. The Poorvu Center at Yale University, in its discussion of scholarly and popular sources, puts it well: “Every source must be questioned for its stake in the material.” The trouble with the citations in the body of literature reviewed for Artificial Intelligence & Knowledge Work – Implications, Opportunities And Risks is that these popular sources are frequently cited without explicit discussion of where the information comes from or of the publishing organisation’s motivations.
Risk: How much original scholarship is there?
There is a further risk of inflating the apparent number of studies in the field of artificial intelligence and knowledge work. When large numbers of journal articles draw their data or insights from popular sources, they create the illusion of a larger body of original scholarship than actually exists, despite a low volume of original surveys, interviews, or similar studies.
As a deeply interdisciplinary area of study, the application of AI to knowledge work naturally attracts a wide range of stakeholders. From consultancies offering digital strategy services, to journalists interested in the future of labour, to academics in disciplines as wide-ranging as engineering and psychology, each party wishes to contribute its findings to the discourse and to stay on the cutting edge. Academics simply cannot report breaking tech news at the pace of journalists, nor can journalists (usually) claim the degree of peer review that journal articles undergo; these fields differ widely in their methodologies and approaches.
AI in knowledge work is still very much in its early stages as a field of research. The literature review finds that the frequent citation of popular sources by academic journal articles, together with the risks discussed above, illuminates a real need for more data-gathering and empirical work in the field. This is far from a claim that popular sources should never be cited in academic work. Considering the risks above, though, I call for more articles which investigate the ways that organisations are actually applying artificial intelligence, through case studies, surveys, and interviews. With more empirical research and data on the topic, the academic community will be much better poised to analyse the effects of this emerging technology on work.
Though knowledge work has many definitions, Pasi Pyöriä notes the common themes of significant formal education and use of information technology (Pyöriä, 2005).
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact firstname.lastname@example.org.