
Image courtesy of Charles Deluvio on Unsplash
Conversations on Machine Learning and Conceptual Change
Dr Eamonn Bell and Dr Alexander Campolo
When we began planning a series of interdisciplinary workshops on machine learning for the IAS, we hoped that recent developments in the field might resonate across scholarly disciplines. But we didn’t anticipate the huge interest—both scholarly and public—surrounding the release of applications like ChatGPT. By the time our series ended only a few months later, “generative AI” had become something of a buzzword. In fact, the velocity of change in the field of machine learning was one of the motivations for our series, “Conversations on Machine Learning and Conceptual Change,” which was supported by a Development Grant from the IAS. Amidst rapid advances and breathless hype, we saw an opportunity to slow down and ask deeper questions about the ways machine learning might shed light on perennial questions of knowledge, ethics, and politics.
Each conversation in the series took the form of a dialogue between a member of faculty in the natural sciences and engineering and a counterpart in the social sciences and humanities. These pairings encouraged participants to step outside the usual academic division of labor, where engineers and scientists develop technologies and humanists reflect on their effects and meanings. Our dialogues instead surfaced epistemological and ethical issues at every stage of the machine learning pipeline; developing and engineering these systems in fact involves deep, if not always explicit, engagements with ideas about society and ethics. Conversely, we found that when humanists engage closely with the technical operations of these systems, they open new perspectives on aspects of the human condition, like emotion. Against “two cultures” clichés, we found that it was difficult and, in any case, not desirable to separate scientific and engineering questions from those involving values and judgments.
Our four events were designed to deepen these entanglements by orienting each around a single concept that spans these spheres. The first dialogue, on the idea of “capture,” featured Professor Gerald Moore from the School of Modern Languages and Cultures and Dr Eamonn Bell of Computer Science. This conversation went beyond the important but well-known critique that machine learning systems expropriate our data and violate privacy, exploring instead the longer history of the co-construction of human beings and technology. In this sense, we explored how machine learning systems might themselves change how we understand human language and intelligence.
The second event concerned the idea of “recognition,” both a paradigm in applied machine learning research (pattern recognition, facial recognition, etc.) and a basic philosophical concept perhaps most famously expressed as Anerkennung in the tradition of German Idealism. Professor Stuart Reeves from the Department of Computer Science at the University of Nottingham and Professor Louise Amoore from the Department of Geography at Durham University discussed the embodied and contextual knowledges required to make facial recognition work. They also explored the ways that deep learning might transform our understanding of recognition itself, through the inductive identification of patterns in high-dimensional spaces that are unintelligible to ordinary human perception.
The third dialogue focused on a particular kind of recognition, that of “emotion.” Together, Professor Effie Lai-Chong Law of the Department of Computer Science and Dr Alexander Campolo of Geography discussed affective computing. This conversation connected recent machine learning techniques to the history of photography and medicine dating back to the nineteenth century, when the categories used to classify emotions emerged. Participants also discussed how machine learning systems operationalize theoretical assumptions about the relationship between exterior expressions and interior mental states.
The topic of “inference” provided a fitting capstone for the series, as ideas about causality and knowledge had played a role in all of the preceding dialogues. Professor Alex Broadbent of the Department of Philosophy and Dr Robert Lieck of Computer Science discussed the particular culture of inference in machine learning. “Inference” is sometimes used as a technical term for the phase in which a trained model is applied to new data, in contrast with the “training” phase of model development. However, the idea of inference also resonates across science and philosophy, wherever experiments are designed, models are made, or conclusions are drawn. In this dialogue the participants discussed why machine learning seems to value predictive accuracy over causal knowledge and theoretical explanations.
Looking back at this series, we are grateful that our participants and audiences were willing to engage in a truly interdisciplinary exercise. At the outset, we were well aware of the risks of this type of inquiry: ideas getting lost in translation, retreating to habitual roles, and simply the challenge of dealing with a highly technical topic. However, we came away greatly impressed by the willingness of all our speakers both to dive into the engineering details (which matter greatly in machine learning) and, at the same time, to make imaginative connections across disciplinary contexts and categories. There was truly a sense that both sides had something important to offer each other. We were also pleased that this series was quite local; Durham University faculty comprised the vast majority of speakers. It turns out that there is a wealth of both technical and social expertise on machine learning right at our doorstep. As we move forward, we hope to expand the network of scholarship in Durham with further events and interdisciplinary research programs at the intersection of machine learning and social thought. If you would be interested in pursuing this type of research further, please reach out to Eamonn and Alex by filling in this form:
https://forms.office.com/e/f99FwbZGC9