An overview of this event was written by Kyle Van Oosterum.
Normative philosophy of computing draws on the methods of analytic philosophy to engage with normative questions raised by computing and its role in society. It aspires to be technically and empirically grounded, and to move philosophy forward while remaining legible and useful to other fields. This workshop aims to foster excellence in this growing field by bringing PhD scholars together with one another and with faculty mentors. Our aim is not only to learn from and strengthen the research presented, but also to build community. It is an initiative of the PAIS Network, and in particular of the MINT Lab at ANU and the Institute for Ethics in AI at Oxford.
The PAIS Network comprises the following organizations: the Institute for Ethics in AI at the University of Oxford; the Schwartz Reisman Institute at the University of Toronto; the Institute for Human-Centered AI at Stanford University; the Edmond and Lily Safra Center for Ethics at Harvard University; the Center for Human Values at Princeton University; and the School of Philosophy at the Australian National University.
Students will present their papers in the following order:
- Student: Austen McDougal (Stanford)
- Title of paper: The Cheap Considerateness of Social Media
- Abstract: Spontaneously thinking of others—remembering their birthdays, thinking to check in on them—used to matter for our relationships. Philosophers have explained the significance of such "considerateness" for a variety of ethical frameworks, but I highlight the corrosive effect of recent technologies. The significance of considerateness has now been cheapened, in particular, by social media with its automatic reminders. One might think that the solution is to increase our voluntary efforts—say, by recording much more elaborate birthday messages for our friends—but I caution that this cannot replace the lost significance of spontaneous attention.
- Faculty Respondent: Dr Liam Bright (LSE)
- Student: Cameron McCulloch (Michigan)
- Title of paper: Are there principled and practicable moral constraints on machine inference?
- Abstract: Can a distribution of goods that arises from a just initial distribution and evolves through legitimate steps ever be unjust? Nozick (1974) said, "No." This paper asks a similar question about inferences made by machine learning algorithms: If an initial set of data, D, is acquired justly (by, e.g., a corporation), are there any inferences from D that are illegitimate? Widely shared moral intuitions suggest the answer must be "Yes." But it's surprisingly hard to defend this claim with arguments that are both morally non-arbitrary and practically tenable. I present five such attempts and argue that they fail.
- Faculty Respondent: Dr Kate Vredenburgh (LSE)
- Student: Ruby Hornsby (Leeds)
- Title of paper: My Imaginary Friend: The Robot
- Abstract: This paper will argue that we can best understand the relationship between humans and robots as an ‘imaginary friendship’, which occurs when a human imagines that they are engaged in genuine friendship with a robot (involving mutual love), though they are not (their love for the robot is in fact unidirectional). What differentiates the robot from many other kinds of imaginary friends is how realistic its friendly performances are; it imitates manifestations of love in a way that inspires the perceiver to imagine that the robot really can reciprocate feelings. As such, it requires substantially less imagination to befriend the robot than it does to befriend a stuffed toy or, say, Casper the Friendly Ghost.
- Faculty Respondent: Dr Milo Phillips-Brown
- Student: A G Holdier (Arkansas)
- Title of paper: Agentic and Algorithmic Context Collapse
- Abstract: Context collapse occurs when discursive spaces are crowded such that speech acts performed in one leak into another. Previous theorists have distinguished two kinds of collapse in terms of a speaker’s preferences (authorial collapse is intended, while adversarial collapse is not); in this paper, I articulate a second dimension of context collapse grounded in the phenomenon’s kinematics — agentic collapse is caused by a social agent, whereas algorithmic collapse occurs beyond the control of any individual. After analyzing the resultant fourfold taxonomy, I develop the notion of metacontextual (or ecological) context collapse and explain its role in perspectival conflicts.
- Faculty Respondent: Dr Max Khan Hayward (Sheffield)
- Student: Yuhan Fu (Sheffield)
- Title of paper: The Emotionless Machine and Its Rational Core: What Can AI Tell Us About the Role of Emotions in Human Moral Cognition
- Abstract: In this presentation, I argue that AI can show that human moral cognition involves emotions, but one cannot further conclude that moral principles are grounded in emotions, nor that moral beliefs are acquired via emotions. Furthermore, I argue that although the lack of emotions makes AI a poor moral agent, and we still need to pinpoint its moral responsibility, AI has the potential to help us ameliorate the biases and noise that emotions introduce into moral cognition.
- Faculty Respondent: Dr John Zerilli (Edinburgh)
- Student: Jen Semler (Oxford)
- Title of paper: Types of Artificial Moral Agency
- Abstract: In discussions of artificial moral agency, it’s not always clear what’s meant by the term “moral agent.” This paper explores two ways to interpret the concept: (1) simple moral agents are sources of moral action, and (2) complex moral agents are morally responsible. These senses of moral agency come apart, and the distinction has implications for the possibility of artificial moral agency. Moreover, in the context of AI applications, we must consider which type of moral agency, if any, is required—it’s not enough to say that AI shouldn’t fulfill a particular role because it’s not a moral agent.
- Faculty Respondent: Professor Seth Lazar (ANU)
Hosted by:
Professor Seth Lazar is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defence, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI, funded by the ARC, the Templeton World Charity Foundation, and Insurance Australia Group. He was General Co-Chair of the ACM Conference on Fairness, Accountability, and Transparency in 2022 and Program Co-Chair of the AAAI/ACM Conference on AI, Ethics, and Society in 2021, and is one of the authors of a study by the US National Academies of Sciences, Engineering, and Medicine that reported to Congress on the ethics and governance of responsible computing research. He gave the 2022 Mala and Solomon Kamm Lecture in Ethics at Harvard University and the 2023 Tanner Lecture on AI and Human Values at Stanford University.
Professor John Tasioulas is the inaugural Director of the Institute for Ethics in AI and Professor of Ethics and Legal Philosophy in the Faculty of Philosophy, University of Oxford. He was previously the inaugural Chair of Politics, Philosophy & Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy & Law at The Dickson Poon School of Law, King's College London. Professor Tasioulas has degrees in Law and Philosophy from the University of Melbourne and a DPhil in Philosophy from the University of Oxford, where he studied as a Rhodes Scholar. He was previously a Lecturer in Jurisprudence at the University of Glasgow, Reader in Moral and Legal Philosophy at the University of Oxford, where he taught from 1998 to 2010, and Quain Professor of Jurisprudence at University College London. He has also acted as a consultant on human rights for the World Bank and is a member of the International Advisory Board of the European Parliament's Panel for the Future of Science and Technology (STOA). He has published widely in moral, legal, and political philosophy.