This talk explores two false assumptions about values that are embedded in machine learning design, and how correcting them might help technologists make progress on aligning machine outputs with human values. Can an AI be built that avoids these two mistakes? I sketch a possible design that is a hybrid of machine learning and symbolic systems.
You can register here for this colloquium.
The Institute for Ethics in AI will bring together world-leading philosophers and other experts in the humanities with the technical developers and users of AI in academia, business and government. The ethics and governance of AI form an exceptionally vibrant area of research at Oxford, and the Institute is an opportunity to take a bold leap forward from this platform.
Every day brings more examples of the ethical challenges posed by AI: from face recognition to voter profiling, brain-machine interfaces to weaponised drones, and the ongoing discourse about how AI will affect employment on a global scale. This is urgent and important work that we intend to promote internationally as well as to embed in our own research and teaching here at Oxford.
Professor Ruth Chang holds the Chair of Jurisprudence and is a Professorial Fellow of University College. Before coming to Oxford, she was Professor of Philosophy at Rutgers University, New Brunswick, in New Jersey, USA. Before that she was a visiting philosophy professor at the University of California, Los Angeles, and a visiting law professor at the University of Chicago. During this period she also held a Junior Research Fellowship at Balliol College, where she completed her D.Phil. in philosophy. She has held fellowships at Harvard, Princeton, Stanford, and the National Humanities Center and serves on the boards of a number of journals. She has a J.D. from Harvard Law School.
Her expertise concerns philosophical questions relating to the nature of value, value conflict, decision-making, rationality, the exercise of agency, and choice. Her work has been the subject of interviews by various media outlets in the U.S., Canada, the U.K., Germany, Taiwan, Australia, Italy, Israel, Brazil, New Zealand, and Austria, and she has been a consultant or lecturer for organisations ranging from video game companies and pharmaceutical firms to the CIA and the World Bank.
Commentator
Alex Grzankowski is a Reader in Philosophy at Birkbeck, University of London, and Associate Director of the Institute of Philosophy, where he leads the London AI and Humanity Project. He specialises in the philosophy of mind and language, exploring the foundational aspects of reference, representation, and emotion. He aims to shed light on the critical role these concepts play in the development and interpretation of AI.
Hosted by
Professor John Tasioulas, the inaugural Director of the Institute for Ethics in AI, and Professor of Ethics and Legal Philosophy, Faculty of Philosophy, University of Oxford. He was previously the inaugural Chair of Politics, Philosophy & Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy & Law at The Dickson Poon School of Law, King's College London. Professor Tasioulas has degrees in Law and Philosophy from the University of Melbourne and a D.Phil. in Philosophy from the University of Oxford, where he studied as a Rhodes Scholar. He was previously a Lecturer in Jurisprudence at the University of Glasgow, Reader in Moral and Legal Philosophy at the University of Oxford, where he taught from 1998 to 2010, and Quain Professor of Jurisprudence at University College London. He has also acted as a consultant on human rights for the World Bank and is a member of the International Advisory Board of the European Parliament's Panel for the Future of Science and Technology (STOA). He has published widely in moral, legal, and political philosophy.