The Oxford-Berlin Workshop on AI Ethics 2025: Day Two Write-up

A collage of photos showcasing speakers and participants at the Oxford Berlin Colloquium on AI Ethics
Oxford Berlin Colloquium on AI Ethics 2025 photography gallery. Photo credit: Ian Wallman.

Write-up by Helena Ward, Doctoral Candidate at the Institute for Ethics in AI. 

The second day of the Oxford-Berlin Colloquium on Ethics in AI, held on 21 January 2025 in the Beit Room at Rhodes House, brought together early-career researchers exploring the moral and political philosophy of AI. The workshop aimed to provide a forum for presenting work in progress and receiving valuable feedback in a constructive environment, and to facilitate collaboration with other world-leading institutions, including MIT, Stanford, Tsinghua University, and partners in Berlin.

This write-up will briefly summarise the topics presented. 

Freedom of Information Choice in the Age of Recommender Systems, Shira Ahissar (LSE)

Shira Ahissar introduced the concept of ‘freedom of information choice’ to explain how recommender systems harm us. While freedom of choice concerns autonomous decision making (having control over our actions), freedom of information choice concerns autonomy over our epistemic choices. Drawing on the economics literature, we can think of each agent as possessing a cognitive budget set (analogous to a monetary budget set): the amount of information we are able to process. Our cognitive budget sets are constrained by factors such as our cognitive limitations, the emotional costs of acquiring information, and time. Recommender systems harm our freedom of information choice because they warp and shrink our cognitive budget sets; overcoming these harms will therefore require expanding them.
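To make the budget-set analogy concrete, here is a minimal sketch in the style of consumer theory (the formalisation is illustrative and not drawn from the talk): a monetary budget set collects the bundles an agent can afford at given prices and wealth, and a cognitive budget set would, by parallel, collect the bundles of information whose processing costs fit within the agent's capacity.

\[
B(p, m) = \{\, x \in \mathbb{R}^{n}_{+} : p \cdot x \le m \,\}
\qquad\text{vs.}\qquad
C(c, k) = \{\, i \in I : c(i) \le k \,\}
\]

Here \(p\) and \(m\) are prices and wealth, while \(I\), \(c\) and \(k\) stand (hypothetically) for the space of information bundles, a cost function aggregating time, attention, and emotional costs, and the agent's processing capacity. On this picture, a recommender system that warps and shrinks the cognitive budget set is narrowing the feasible set \(C\) rather than changing the agent's preferences.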

The Moral Significance of AI Interests, Miriam Gorr (TU Dresden)

When we speak of a conscious or sentient entity as having a goal, we ordinarily take the satisfaction of that goal to be good for that entity. Entities with a good of their own might also be said to have interests, and if those interests generate obligations to respect them, they may be sufficient for moral status. Artificial agents also have goals, and if we think that the satisfaction of these goals is good for the artificial agent, might we also have grounds to ascribe moral status to artificial agents? Miriam Gorr argues that the teleological model of interests (which applies to artefacts), unlike the sentience-based and consciousness-based models (which apply to humans and non-human animals), is not morally relevant: while sentient and conscious entities have independent reasons or justifications for their goals, which ground their moral relevance, it is unclear what the independent reason for a merely goal-directed entity's goals might be.

Deepfakes at Face-Value: Self-Image and Authority, James Kirkpatrick (Oxford)

Deepfakes are forms of media—audio, video or photographic—which have been manipulated by deep learning models. They give us the means to depict people doing or saying things they have not done and would never do. James Kirkpatrick asks what, exactly, makes the creation of deepfakes wrong (and not merely harmful), and argues that current accounts overlook a fundamental issue. For Kirkpatrick, deepfakes are objectionable because they subvert a legitimate interest we have in possessing authority over our own image and likeness, and this wrong exists over and above psychological and structural harms.

AI and Moral Creativity: Advancing AI Ethics and Moral Philosophy, Jonas Bozenhard (TU Hamburg)

Jonas Bozenhard argues that moral creativity may be useful in approaching the challenge of value alignment: the challenge of designing artificially intelligent systems in ways that are consistent with human values. Dominant approaches to alignment tend to adopt either a principle-based approach (in which AI must be aligned with certain ethical principles such as privacy, fairness, or transparency), a preference-maximisation approach (under which AI should maximise the fulfilment of human preferences), or a learning approach (in which AI learns a moral theory, whether Kantianism, Utilitarianism, or Virtue Ethics, from the bottom up). Bozenhard argues that each of these approaches faces problems. An adequate alignment approach needs to cope with the complexity and unpredictability of novel concepts, to accommodate the diversity of human values, and to exhibit normative direction. Moral creativity, which combines value and novelty, may help us address the shortcomings of existing accounts.

Trusting Myself and Strangers: A Normative Approach to Trust in AI, Jennifer Munt (ANU)

Jennifer Munt presents an account of trust and considers how we might conceptualise our trusting interactions with LLMs. We trust when we believe others to be trustworthy with respect to some act, and we are trustworthy when we are responsive to the normative expectations associated with our relationships with others. Trust can be self-directed (self-trust), and it can also be directed at strangers. This dichotomy, Munt argues, is helpful for analysing interactions with LLMs. High levels of trust in LLMs are more akin to self-trust: LLMs can mirror not only the semantic content of our linguistic behaviour but our normative expectations too. Low levels of trust in LLMs are more akin to trust in strangers, where our minimal relationship with them generates only minimal normative expectations. Her account of trust raised important questions: whether LLMs are able to respond to normative expectations at all; whether reasons-responsiveness might require something like moral understanding (for which LLMs may lack the prerequisite cognitive and social capacities); and what we trust when we trust, whether the LLM itself or the interaction at some specified time.

Autonomy in AI-Driven Decision Support Systems, Timo Speith (Bayreuth)

In the many published AI principles and guidelines discussing the use of AI as a decision-making tool, it is often stressed that “if we’re going to utilise AI systems in decision making, then we need to keep the human in the loop”. Instead of being the origin of decisions, AI systems should be used collaboratively as a tool. Proponents of this position assume that putting humans in the loop increases human autonomy; Timo Speith doubts the plausibility of this assumption. If an artificial system is explainable and presents not only an output but reasons or justifications for that output, then explainability may help promote autonomy on an individualistic account. But can an agent who has been influenced by an artificial output be described as having full autonomy over that decision?

AI-facilitated Collective Judgment, Manon Revel (Harvard) & Theophile Penigaud (Yale)

It is difficult to aggregate the decisions of individuals in a society into a collective outcome that accurately reflects the collective judgement of that society. Manon Revel and Theophile Penigaud consider the potential for artificial intelligence to harness collective judgement. Rather than representing shared judgements along a constrained matrix with a constrained set of reasons for preferences, LLMs have the potential to capture the open-endedness of human reasoning. Constructing this new kind of information could help society reflect on itself and form adaptive, reflective preferences. If successful, AI-facilitated collective judgement may hold significant potential for representing political preferences compared with traditional voting systems.

The Politics of Algorithmic Decision Systems, Konstanze Möller-Janssen (TU Dresden)

Konstanze Möller-Janssen asks whether the increasing mediation of our relations through algorithmic decision-making is an affront to human freedom. To answer this, we need to get clear on what algorithmic decision systems are: what exactly are we dealing with when we deal with algorithmic decision systems? Möller-Janssen argues that we are dealing with systems of politics: systems that build order in our world by perpetuating norms. Algorithmic decision systems require structured, hierarchically organised data; they impose norms of unambiguousness and calculability, reducing complex life to calculable simplicity; and they exhibit a preference for conformity. All of these features reveal the danger of algorithmic systems implementing a social order and imposing non-neutral normative values on society.

Personal Information, Lauritz Munch (Aarhus)

Lauritz Munch asks how, if at all, the wrong of violating privacy is a wrong related to information. When we watch someone in the shower, unbidden, or record a sensitive conversation, we violate their privacy; but what is it that makes privacy violations morally wrong? Current accounts describe the wrong of privacy violations in terms of the moral goods associated with preventing access to information. Munch calls this Informationalism: the claim that the wrong of violating privacy essentially has something to do with information being obtained. Munch argues that Informationalism is false: the core wrong of privacy violations has to do with using a person without their consent, rather than with information.

This event was supported by the International Collaborative Fund and The Business, Civil Society and Policy Engagement Fund.

The International Collaborative Fund is adding significant value to the Initiative of the Institute for Ethics in AI by bringing tech companies into the conversation and leveraging work across the network of institutions supported by Stephen Schwarzman and the Patrick J. McGovern Foundation. The International Collaborative Fund will resource activities including international travel, workshops, and visiting speakers.

The Business, Civil Society and Policy Engagement Fund provides support for academics and graduate students at the Institute for Ethics in AI who are engaged with the Initiative to work with companies, think tanks, NGOs and policymakers. The funding will provide small grants for roundtable discussions and international travel. Both funds will be used over a three-year period.