The Philosophy, AI, and Society (PAIS) network recently held its first Doctoral Colloquium at the University of Oxford on March 6-7, 2023.
The initiative was led in particular by the MINT Lab at ANU (headed by Professor Seth Lazar) and the Institute for Ethics in AI at Oxford (headed by Professor John Tasioulas). The colloquium featured excellent talks by doctoral students (and incisive commentary by faculty) within the burgeoning field of the ‘normative philosophy of computing’, which raises probing philosophical questions about computing, AI, and their impact on society.
On two typically grey-skied days in March (see above picture), the setting was the cosy Rainolds Room in Corpus Christi College, Oxford. Once each speaker, commentator, and guest was sufficiently caffeinated, the colloquium began with an introductory welcome by Seth Lazar.
He explained that the motivation for the colloquium was not just to learn from and give feedback on talks – the standard procedure of many an academic conference – but to build a community of scholars with shared interests in the normative philosophy of computing. Already on that score, the colloquium was a success: the students and commentators came from a range of excellent institutions, including Oxford, Leeds, LSE, Sheffield, Edinburgh, Arkansas, Stanford, and Michigan. Then, of course, there were the genuinely insightful student presentations. Let’s start with Day 1.
Austen McDougal (Stanford) opened with a lesser-discussed concern about social media, namely, how it can cheapen the virtue of considerateness (e.g., would we be displeased if our friends remembered our birthdays only because Facebook told them?). One 30-minute coffee break later, Cameron McCullough (Michigan) sceptically probed the privacy-related differences between human and machine inferences (e.g., is it okay for a human to infer that I am pregnant but wrong for Target to do so?). The talk evoked a feeling of puzzlement and doubt about the alleged moral differences, a feeling referred to as aporia by ancient Greek philosophers like Aristotle. Speaking of Aristotle, Ruby Hornsby (Leeds) then gave her talk on the moral significance of friendships between humans and robots, claiming that such friendships should be thought of as ‘imaginary friendships’. Building on Aristotle’s reflections on the nature of friendship, Ruby suggested that friendships with robots (like imaginary friends) are a watered-down case of the genuine friendships we have with human beings. Insightful comments by Liam Bright (LSE), Kate Vredenburgh (LSE) and Milo Phillips-Brown (Oxford) on the respective talks generated useful feedback for the speakers and sparked an excellent Q&A.
Following the first day of talks, Lazar convened a roundtable conversation about the prospects for the normative philosophy of computing. It gave the students an opportunity to discuss, and get advice about, working in this exciting field. After the roundtable, the speakers and commentators migrated (in peripatetic fashion) to the Pan Pan restaurant in Cowley. Great food and drink inspired ‘off-the-record’ philosophizing and provided a meaningful opportunity to chat and get to know one another. With that, we move to Day 2!
A.G. Holdier (Arkansas) discussed context-collapse worries on social media, seamlessly bringing together philosophy of language and communication studies. Context collapse occurs when information intended for one audience finds its way to another; in this respect, Holdier helpfully collapsed these academic contexts into an interesting interdisciplinary talk for the benefit of the audience. In a very topical talk, Felicity Fu (Sheffield) analyzed the interplay between emotions, moral judgment, and AI (in particular, large language models like ChatGPT). Felicity advanced the intriguing idea that AI may actually help us to make better moral judgments by exposing biases and heuristics in our reasoning and thereby expanding our moral imagination. Clever scheduling of the program led us to conclude the day’s talks with Jen Semler’s (Oxford) examination of the types of moral agency we might attribute to AI. With an excellently detailed taxonomy of moral agency, Jen conveyed the complexity of the issue and offered a good sceptical reply to those who might dismiss the potential moral agency of AI. Comments by Max Khan Hayward (Sheffield), John Zerilli (Edinburgh) and Seth Lazar (ANU) generated great feedback and sustained the vibrant Q&A energy of Day 1.
With the formal part of the colloquium concluded, one-on-one mentoring sessions took place between the speakers and their respective commentators. After those sessions, the remaining attendees made their way to the Bear Inn to celebrate the successes of the colloquium. I want to end by noting that there couldn’t be a more fitting name for this community than the PAIS network. In Spanish and other Romance languages, the word ‘país’ literally means country. But it’s not a stretch to use that word to refer to a community united by shared features, such as an interest in the normative philosophy of computing. I know I speak for many others when I say that we are eagerly awaiting the next PAIS Doctoral Colloquium.
Kyle Van Oosterum, DPhil student in Philosophy at Oxford, provided this overview of the event.
Images by Oxford Atelier