Prof. Jonathan Roberge and Prof. Michael Castelle are visiting researchers at the Alexander von Humboldt Institut für Internet und Gesellschaft in May. In their joint Brown Bag Lunch, they make the case for Critical AI Studies and ask which epistemic categories are needed to understand machine learning. The talks will be held in English; if you are interested, please register using the form below.
Lunch talk with Jonathan Roberge & Michael Castelle
Wednesday, 15 May 2019 · 1 pm · HIIG Kitchen
Critical AI Studies. Making the Case for CAIS
Automated technologies populating today’s online world rely on social expectations about how “smart” they appear to be. Algorithmic processing, as well as bias and missteps in the course of their development, all come to shape a cultural realm that in turn determines what they come to be about. It is our contention that a robust analytical frame can be derived from a culturally driven STS, focusing on Callon’s concept of translation. Excitement and apprehension must find a specific language to move past a state of latency. Translations are thus contextual and highly performative, transforming justifications into legitimate claims, translators into discursive entrepreneurs, and power relations into new forms of governance and governmentality. In this presentation, we discuss three cases in which AI was deciphered for the public: i) the Montreal Declaration for a Responsible Development of Artificial Intelligence, held up as a prime example of how stakeholders manage to establish the terms of the debate on ethical AI while avoiding substantive commitment; ii) Mark Zuckerberg’s 2018 congressional hearing, where he construed machine learning as the solution to the many problems the platform might encounter; and iii) the normative renegotiations surrounding the gradual introduction of “killer robots” in military engagements. Of interest are not only the rational arguments put forward, but also the rhetorical maneuvers deployed. By examining the ramifications of these translations, we intend to show how they are constructed in the face of and in relation to forms of criticism, thus revealing the highly cybernetic deployment of AI technologies.
Jonathan Roberge is Associate Professor of Cultural and Urban Sociology at the Institut National de la Recherche Scientifique, where he also holds the Canada Research Chair in Digital Culture. He is among the very first scholars in North America to have critically focused on the production of algorithms, a research agenda that culminated in a foundational text in this domain, Algorithmic Cultures (Routledge, 2016; German translation published by Transcript Verlag, 2017). He is currently working on a manuscript entitled The Cultural Life of Machine Learning, forthcoming in early 2020 from Palgrave Macmillan (together with Michael Castelle).
Experiment, Vector, and Loss: The Epistemic Ensemble of Deep Learning
The fast-growing research field of deep learning — the use of convolutional and/or recurrent neural network architectures in machine learning — has been hailed as a “revolution” by researchers and practitioners, one which is sometimes considered “unreasonably effective” and associated with a “black art” of internalist knowledge. At the same time, these models have been criticized by social scientists for their interpretative opacity, dependence on classification schemes, and capacity to reproduce social biases. Which methodologies are most appropriate for understanding these techniques and their increased deployment in everyday sociotechnical life? In this presentation, by focusing on three distinctive features of deep learning — its experimental method; its vectorial or ‘structuralist’ ontology; and the role of the ‘loss’ function through which model parameters are optimized — I will argue that a combination of historical, epistemological, semiotic, and interactional approaches is necessary for understanding deep learning. This allows one to understand this emergent field not as a revolutionary disruption but as a genre of technoscience which synthesizes aspects of past epistemic breaks — in this case behaviorism, cognitivism, structuralism, connectionism, and machine learning (as well as aspects of game theory and cybernetics) — into what I call an epistemic ensemble. The resulting perspective can permit richer engagements with these techniques on the part of social scientists and humanists, who can thus draw on a rich historiography of previous cross-disciplinary engagements and, I argue, apply their own existing theoretical apparatuses to contribute to this unstable field toward some future ‘neural’ social sciences and/or humanities.
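For readers unfamiliar with the ‘loss’ function mentioned in the abstract, the idea can be sketched in a few lines: a loss quantifies a model’s error on data, and parameters are updated step by step to reduce it. This is a minimal, illustrative example (the data, names, and learning rate are invented for the sketch, not drawn from the talk), fitting a one-parameter linear model by gradient descent on mean squared error:

```python
# Minimal illustration of a loss function guiding parameter updates:
# fit y ≈ w * x by gradient descent on mean squared error (MSE).
# All data and names here are illustrative only.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

def loss(w):
    """Mean squared error of the model y_hat = w * x over the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    """Derivative of the loss with respect to the parameter w."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w)  # step against the gradient to reduce the loss

print(round(w, 2))  # → 2.04, the slope that minimizes the MSE here
```

Deep learning applies this same optimization logic, at vastly larger scale, to models with millions of parameters — which is why the loss function occupies such a central place in the talk’s ‘epistemic ensemble’.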
Michael Castelle is Assistant Professor at the University of Warwick’s Centre for Interdisciplinary Methodologies. His work sits at the intersection of the sociohistorical studies of science and technology and the economic sociology of markets, and he is working to draw connections between present-day research developments in AI (such as generative network architectures and attention models) and 20th-century theories of language, learning, and creativity. He has written for the journals Philosophy & Technology, Economy and Society, and Computational Culture, and has presented at EMNLP (Empirical Methods in Natural Language Processing) and SIGCHI (Special Interest Group on Computer-Human Interaction). He holds degrees in both Sociology (Ph.D., University of Chicago) and Computer Science (Sc.B., Brown University), as well as professional experience in computer graphics, computational neuroscience, and neurology.
Registration for this event is currently not possible.