
Write-up by Helena Ward, Doctoral Candidate at the Institute for Ethics in AI.
Ethics, as Professor Tasioulas describes it, is not a discipline but a domain of thought and action in which we all engage. It is an interdisciplinary activity, just as the problems raised by AI are interdisciplinary in nature. To reflect the need for interdisciplinary solutions to these problems, the Oxford Berlin Colloquium on AI Ethics brought together a number of early-career researchers and established professionals to discuss fundamental questions raised by AI: from the ethics of robotics to the project of implementing AI governance.
Hosted by Dr Caroline Green, Director of Research, Institute for Ethics in AI, and Dr Luise Müller, Freie Universität Berlin, the first day of the Oxford Berlin Colloquium on AI Ethics, held in the Beit Room at Rhodes House, welcomed:
Professor Alastair Buchan, Founding Director of the Oxford-Berlin Research Partnership, who outlined the rich history of links between Oxford and Berlin.
Professor Dr Alena Buyx, TUM School of Medicine and Health, who discussed when we ought, and ought not, to favour artificial over human intelligence in the context of alignment. Dr David Storrs-Fox, Institute for Ethics in AI, provided commentary.
Professor John Tasioulas, Director of the Institute for Ethics in AI, University of Oxford, discussed the need to dispel inaccurate hype surrounding AI and the overlooked compatibility of artificial intelligence and deliberative democracy. Professor Max Kiener, TU Hamburg Institute for Ethics in Technology, provided commentary.
Professor Dr Verena Hafner, Department of Computer Science, Humboldt University, discussed the requirements of self and agency in sensorimotor functioning, and recent research problems such as the development of a sensorimotor Turing test. Dr Linda Eggert, Institute for Ethics in AI, provided commentary.
The final session of the colloquium was a panel discussion with Professor Jeremias Adams-Prassl, Faculty of Law, University of Oxford; Professor Nathalie Smuha, Institute for Ethics in AI; Professor Yuval Shany, Institute for Ethics in AI; and Ms Victoria Adelmant, University of Oxford, who took stock of AI regulation, reflecting both on its progression thus far and on what is to come.
Each discussion shed light on a single question: how are we to leverage the benefits of artificial intelligence while minimising the risks? In her analysis of alignment, Professor Alena Buyx stressed that our main focus in designing and deploying artificial systems should be protecting the individuals affected by their use. But how are we best to approach this? Over the past few years, various forms of regulation with laudable goals have emerged, aimed at protecting fundamental human rights and values. We have also seen regulation at the level of corporations, which lacks the standards to which regulatory bodies conform (for example, transparency may be withheld under the guise of intellectual property). The insufficiency of corporation-level regulation, as Professor John Tasioulas notes, underscores the urgency of global regulation.
Drawing from discussions throughout the colloquium, this write-up explores the barriers to, and the promise of, risk-based approaches to AI regulation.
The Scope of AI
Professor Nathalie Smuha notes that one of the barriers facing policymakers is deceptively simple: defining the scope of artificial intelligence. What is AI? Is AI only machine learning? Does it include statistics? Basic algorithms? The question of definition is essential because it shapes what falls under the scope of AI regulation. The definition of AI in the EU AI Act is promising in this vein: recent regulatory measures in Korea already use the same definition, which may suggest the beginning of an international consensus.
What Risks?
What risks should we look out for? Some risks are technical, arising from errors in the technology itself; others are ethical, arising from a lack of ethical consideration or value; still others concern magnification, such as the risk that AI may exacerbate existing structural inequalities. Determining the full variety and weight of ethical risks will require a bottom-up approach, consulting not only experts but also the most vulnerable members of the population.
However, risk is not the only salient factor. Most recent regulations adopt risk-based approaches, which try to categorise systems according to the risks they pose. But human rights considerations are missing from many regulatory frameworks (Professor Yuval Shany). Another, related limit of risk-based approaches is that they ask only the negative question of how we avoid ethical risks. Asking the positive question, of what sorts of values we want to promote through AI design and use, is also central.
Which Ethical Principles?
Even if we come to a consensus on the kinds of ethical principles we want to align technology with, instating regulations around those principles will involve determining their precise bounds: what does it mean, exactly, to protect patient autonomy? What does a fair or unbiased system look like? Making decisions about the future of artificial intelligence will inevitably lead to trade-offs, and determining which trade-offs are worthwhile will involve making important decisions about what we value (Dr David Storrs-Fox).
In making trade-offs, how can we make sure the voices of individuals who are negatively impacted are heard? Responding to the diversity of individuals affected by AI will require recognising our diverse needs and interests. Settling on just a handful of ethical principles would make life easier for policymakers, but a pluralistic approach is essential if we hope to accommodate the heterogeneity of values and needs within our society.
Effective Governance Practices
Risk-based approaches to governance can be either reactive or preventative. Reactive approaches are backward-looking; they revise technological infrastructure retroactively in response to ethical mishaps. Here we see a 'move fast and break things' approach in which innovation outpaces legal liability. While some reactive responses are inevitable, minimising the negative impacts of AI will require a preventative approach.
One notable feature of the EU AI Act is that it centres on self-regulation. While it is preventative at the level of the company, in that each company performs its own regulatory checks before deployment, it is reactive at the level of governance: because there is no external vetting process, issues are addressed only retroactively, once a problem has arisen. There are practical reasons for this: a preventative governance process is costly, and no EU country has the capacity to externally vet every product entering the market. That said, a neutral third party might provide a more robust alternative.
Effective governance that adopts a preventative approach will also necessitate cyclical processes. If we are to proactively prevent foreseeable consequences, risk spotting should happen throughout the design, development and deployment process.
While some unintended consequences are inevitable, we can make some risk-based predictions prior to deployment. Some predictions, as Victoria Adelmant discussed during the colloquium, might be made through impact assessment tools, which have formed a key component of recent regulatory measures. Impact assessment tools are used to pre-emptively understand the reasonably foreseeable impact a system will have. Part of the problem with foreseeing risks is conceptual: we lack a comprehensive understanding of how AI systems affect human rights and what kinds of risks they introduce. But whether the benefits of an AI technology outweigh the costs is an empirical question, and the effect of a single system is incredibly hard to measure.
Despite the plurality of challenges facing AI regulation, Professor Jeremias Adams-Prassl ended the workshop on a more positive note: there is ample space for human agency to guide the future of technology, shaping a future that is least detrimental and most aligned with human values.
This event was supported by the International Collaborative Fund and The Business, Civil Society and Policy Engagement Fund.
The International Collaborative Fund is adding significant value to the Initiative of the Institute for Ethics in AI by bringing tech companies into the conversation and leveraging work across the network of institutions supported by Stephen Schwarzman and the Patrick J. McGovern Foundation. The International Collaborative Fund will resource activities including international travel, workshops, and visiting speakers.
The Business, Civil Society and Policy Engagement Fund provides support for academics and graduate students at the Institute for Ethics in AI who are engaged with the Initiative to work with companies, think tanks, NGOs and policymakers. The funding will provide small grants for roundtable discussions and international travel. Both funds will be used over a three-year period.
The write-up for the second day, the Oxford Berlin Workshop on AI Ethics, held on 21 January, can be read here.
