On Thursday 2nd November, we held our first Ethics in AI Colloquium of this academic year, which focused on the Meta Oversight Board and the Self-Regulation of Tech Companies. Our main speaker, Paolo Carozza, is a member of Meta's Oversight Board, a Professor of Law and Political Science at the University of Notre Dame, and a former member of the Venice Commission. We were also joined by Kate O'Regan as a commentator. Professor O'Regan is a former judge of the South African Constitutional Court, the Inaugural Director of the Bonavero Institute of Human Rights at Oxford, and a Trustee of the Oversight Board. The discussion was chaired by the Inaugural Director of the Institute for Ethics in AI, Professor John Tasioulas.
In many ways, Meta's Oversight Board is the first institution of its kind: a quasi-judicial body that hears appeals from users of Meta's digital platforms on content moderation matters. In the three years since its inception, it has received over 2.5 million appeals. As of today, 67 cases are either pending before the Board or have already been decided. It is a good time to reflect on the state of the experiment and the challenges it faces: whether the Board can operate at scale, and whether it can meaningfully improve the experience of using social media platforms. There are also deeper questions about the Board's operation. Is the self-regulation model it exemplifies fit for purpose, or should states take a greater regulatory role in content moderation? And to the extent that the model is fit for purpose, what standards should the Board apply in deciding cases? As Professor Tasioulas noted, many have expressed doubts about applying existing standards, such as human rights, to corporate decision-making without adaptation, since those standards were designed with states in mind.
Professor Carozza's address engaged directly with these doubts. He argued that, while a rights-based framework may be helpful and perhaps even essential for the Board to reach appropriate decisions, it cannot be the sole basis for them. Human rights, he argued, have many obvious virtues relevant to decisions about content moderation. Firstly, embracing human rights places the inalienable dignity of all people at the heart of the Board's decisions. Secondly, the global appeal of human rights, together with their claim to universality, makes them an apt reference point for a body with global reach. Thirdly, human rights connect the work of the Board with a wider ecosystem of norms, institutions and stakeholders with whom it shares a vocabulary and ways of reasoning, making the Board's decisions more transparent and understandable to a wider audience.
Yet Professor Carozza felt that the Board's use of human rights as its underlying analytical framework also has undeniable limitations. Firstly, by its very nature, human rights law focuses on the individual, whereas the challenge of content moderation on social media largely concerns diffuse public goods and necessarily requires extensive automation. Secondly, human rights' claim to universality is often in tension with the particular contexts and traditions of different regions. Making global decisions about the rules of content moderation on the basis of human rights is quite different from decision-making in traditional human rights legal settings, and it faces a real challenge in seeking to reflect these local realities. Thirdly, it is unclear how the positive obligations under human rights treaties should apply to corporations, especially in the context of restricting freedom of expression. Fourthly, overreliance on human rights frameworks in the context of content moderation may encourage the misguided view that the problem requires legalistic and procedural solutions. Given these concerns, Professor Carozza suggested, the best course for the Board is to adapt existing human rights frameworks to its needs, and to think of human rights not merely as a system of positive rules but as universal moral claims grounded in human dignity.
In her comments, Kate O'Regan contextualised Professor Carozza's remarks within a discussion of the factors that make the problem of content regulation uniquely challenging. One such factor is novelty. Professor O'Regan pointed out that, until recently, most litigation concerning freedom of speech focused on the publishing industry. The rise of social media platforms has shaken up this paradigm, creating new problems and necessitating new approaches fit for the digital space. This challenge is compounded by the complex and contested state of the existing law on freedom of expression and harmful speech, and by the genuine divergence in approaches taken to the issue across the world. A further challenge is that the imperative to moderate content can run into tension with the business model of social media platforms, which is based on maximising engagement, given that people are often engaged by controversial or shocking content. Finally, content moderation decisions are made in an ex post facto fashion, but to be effective they must also be made at immense speed and scale.
Professor O'Regan's remarks were followed by a rich and animated Q&A, with questions from the floor and from our online audience returning to many aspects of the issues discussed by our speakers.
Written by Konrad Ksiazek
DPhil Student in Law and an Affiliated Student at the Institute for Ethics in AI
Image credit: Maciek, MT Studio/Oxford Atelier