
Written by Isabelle Ferreras[1]
Suggested citation: Ferreras, I., ‘Ethical and Safe AI Development: Corporate Governance is the Missing Piece’, AI Ethics At Oxford Blog (3 February 2025) (available at: https://www.oxford-aiethics.ox.ac.uk/blog/Ethical-and-Safe-AI-Development-Corporate-Governance-is-the-Missing-Piece )
____________________________________________________________________________________________________
Regulatory overreach or regulatory capture? This is not the crux. Ethical and safe AI development necessitates a fundamental shift in focus. As the first-of-its-kind “International AI Safety Report” is released, to be discussed next week at the AI Action Summit in Paris, this blog post argues that corporate structure is the missing piece. Empowering labour investors by granting them a critical voice in corporate decision-making is crucial for mitigating the potential harms of AI and ensuring equitable technological progress. Corporate design should take center stage in the conversation, moving decisively beyond mere protection for whistleblowers.
________________________________________________________________
Will over-regulating AI stifle innovation?
The introduction of any new technology causes society to wonder how to strike the right balance between enthusiasm and fear. AI is no different: it opens up many new possibilities… and it could go awfully wrong. Debate is raging in every free society over which directions AI should be allowed to take: governments are setting up expert panels and holding parliamentary hearings, and the UN has created a taskforce.
The European Union has been at the forefront of the regulatory approach with the AI Act, but even here, fears of government’s heavy hand choking innovation have made their way into the much-discussed Draghi Report on EU Competitiveness.[2] Mission letters to new Commissioners drafted in mid-September by EU Commission President Ursula von der Leyen invoked the Draghi Report as a key reference point, further underlining fears among regulators that regulation might hamstring innovation. No one wants to be responsible for a regulatory framework that might fail a country’s economy by quashing innovation.
What’s being regulated?
On 17 September 2024, the United States Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology, and the Law held a public hearing titled “Oversight of AI: Insiders’ Perspectives.”[3] Multiple experts testified on the so-called “arms race” that the field of AI is quickly becoming. Among those experts was Helen Toner, who directs strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology. Toner served on OpenAI’s board until late 2023 and played a crucial role in the storm that engulfed the company in November of that year, when Sam Altman was dismissed and then reinstated as CEO. Toner is currently one of the most informed voices speaking on AI development and is passionate about the importance of ensuring AI is developed ethically, in service of humanity. According to Toner, if AI is to be developed for the good of humanity, society requires a set of interventions that include government regulation, industry-wide standards, whistleblower protection, and public pressure. That she is pushing hard for solutions is laudable. Sadly, the ones she offered lack the force needed to confront this unprecedented situation adequately and effectively.
The problem lies with the object being regulated: AI is constantly evolving and characterized by uncertainty. This makes the epistemic foundations for regulation unstable; more importantly, it gives industry insiders a tremendous epistemic advantage, since they know so much more about AI, so much sooner. Regulating a field with such massive information asymmetries is a daunting challenge.
Epistemic uncertainty forces us to navigate between two bad alternatives: regulatory overreach or regulatory capture
States are trapped between a rock and a hard place when it comes to AI: regulatory overreach could very well stifle good innovation, while regulatory capture is a real risk, since fast-growing AI corporations will only accept regulation that comes with some degree of regulatory lag. Even in a best-case scenario where some helpful regulation is put in place, the time that elapses between the moment regulators understand the object of regulation and identify the right way to regulate it and the moment of implementation would likely allow corporations extraordinary latitude and power.
With so many of its applications and capabilities still unknown, regulating AI “from the outside” seems unhelpful at best and counterproductive at worst. Currently, there is no internal mechanism to right this imbalance of power. AI workers – joined by Geoffrey Hinton, considered by many to be the “godfather of AI,” who received the 2024 Nobel Prize in Physics for his seminal work in the development of AI – have written an open letter in which they assert, “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”[4] The letter continues, “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”[5] As of today, the only new mechanism that OpenAI has put in place since the turmoil of late 2023 is a new form of whistleblower protection, which, in the very words of its own employees and their peers, is an inadequate counterweight.
This is a no-win situation: the solutions currently on the table are unsatisfactory to friends and foes alike. They are inadequate (whistleblower protections will be offset by voluntary departure packages with generous compensation and non-disclosure agreements), hamper development (heavy-handed regulation suffocates unethical and ethical innovation alike), or miss the target entirely (the technology and its applications are constantly changing).
It is urgent that we find better solutions. To fail is to forfeit our future to a handful of oligarchs. As AI technologies are employed to govern more and more aspects of our world – starting with the workplace – we are at risk of losing democracy entirely, robbing the people of a meaningful voice in their own lives.[6]
The answer lies in the nature of the technology itself: in essence, AI is an open-ended process, where decisions about means and ends are constantly being made, unmade, and remade.[7] For this reason, we cannot approach “AI safety” as we did car safety (a clear object, a specific use, a narrow set of options – seat belts, passenger limits, etc.). Instead, firms must be redesigned to fit the deeply political nature of AI itself as an open-ended technology. We need corporate governance for ethical AI development.
The open-ended nature of AI demands that we tailor the corporate governance of AI firms by making them into sites of evolving deliberation. In other words, we must adapt corporate governance to the actual process of producing AI if we have any hope of producing AI in an ethical fashion. AI firm governance must be designed to foster quality reflection that brings workers’ critical intuitions to the table each and every time decisions are made. I do not mean a nice-to-have afterthought listed among choices considered at board level. Meaningfully harnessing this reflection requires quantitative significance as well: worker voice must have real impact on the choices made in AI development. This calls for what I refer to as democratizing corporate structure.[8] This means moving beyond the current arrangements of corporate governance, in which decisions are left almost entirely in the hands of capital investors.
So much of the talk about alignment today goes on without actually listening to the real experts in the field: not the boards of corporations, but the AI workers themselves. The – right – move away from profit-driven corporate designs was in fact a priority in the early phase of OpenAI, but it was quickly abandoned when the scale of the potential profit became clear.[9] This is, I have argued, a structural problem. Deliberation over a firm’s goals and means is not currently valued sufficiently in corporate bylaws; as things stand, the only stated fiduciary duty of the management team is maximizing shareholder value. This is a mistake – and, I repeat, a structural one: maximizing the quality of the product and the interests of all the firm’s stakeholders (including users, consumers, and society at large) must be built into the way the firm is governed. “It is a design problem,” as Josh Cohen has noted repeatedly.[10] Here, as the field of AI makes clear, workers’ voice is critical: they are our best experts, carrying the full range of ethical concerns that society wants to see injected into the design process that is AI development.
The legal form enabling the kind of structural change that would mandate AI firms to pursue ethical AI already exists: the worker cooperative. Contemporary firms, however, are mostly structured as corporations. Instead of the major overhaul required by a full transition to worker cooperatives, a key adjustment to capitalist corporations could be made today: workers in AI firms could be granted the ability to exercise a countervailing power against the representatives of the logic of profit-maximization – against capital investors, in other words. All this would take is a switch to a Dual Majority (or bicameral) board design.[11]
Tailoring corporate governance to the age of AI development
Today, it seems ridiculous to imagine a Great Britain ruled by property owners alone, gathered together in the House of Lords with no House of Commons to balance their power. The same logic applies to the contemporary firm: it is unreasonable – and, seen in the light of the House of Lords, hardly inevitable – that a company be governed by its shareholders alone. This is all the more true in an industry wielding such enormous influence over how society, and the economy in particular, will evolve. Whilst a valid case can be made for abolishing landowners (or shareholders) altogether, particularly in industries that serve the common good, the economic and political control they wield makes this an unrealistic objective in the short term. Right now, it is time to increase workers’ power within firms as they currently exist.
What I am proposing is a legally sanctioned and recognized body within firms, one that would operate in parallel to their corporate boards. This body would represent firm workers, functioning as a second representative chamber in the firm’s government. Hence my choice of name for this proposal – economic bicameralism. A firm’s two chambers of representatives – one of capital investors, one of labour investors – would function as the legislative branch of its government. Together, they would elect the firm’s executive branch – its top management, in other words. Continuing the historical tradition of bicameralism, this structure would grant veto power over a firm’s main strategic decisions to a previously disenfranchised constituency, that of labour investors. Capitalist firms would thus be run along the already familiar lines of a democratic political entity.
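For readers who like to see a rule made concrete, the decision logic at the heart of economic bicameralism can be sketched in a few lines of code. The sketch below is purely illustrative and rests on my own assumptions – the class names, chamber sizes, and simple-majority thresholds are mine, not a legal specification: it models a firm whose strategic decisions pass only with a majority in both chambers, so that labour investors hold the same veto that capital investors already enjoy.

```python
# A minimal, purely illustrative sketch of the dual-majority decision rule.
# Names and thresholds are assumptions for illustration, not a legal spec.
from dataclasses import dataclass

@dataclass
class Chamber:
    name: str
    members: int

    def approves(self, votes_in_favour: int) -> bool:
        # A chamber approves when a simple majority votes in favour.
        return votes_in_favour > self.members / 2

def decision_passes(capital: Chamber, labour: Chamber,
                    capital_votes: int, labour_votes: int) -> bool:
    # Under economic bicameralism, a strategic decision passes only
    # with a majority in BOTH chambers: each constituency holds a veto.
    return capital.approves(capital_votes) and labour.approves(labour_votes)

# Example: capital approves, but labour withholds its majority -> vetoed.
capital = Chamber("capital investors", members=9)
labour = Chamber("labour investors", members=9)
print(decision_passes(capital, labour, capital_votes=7, labour_votes=3))  # False
```

Under this rule, neither constituency can impose a strategic decision on the other; consent must be won twice.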
There is nothing new in the intuition of granting a voice to workers: labour laws around the world already provide for it in smaller ways, from guaranteeing at least minimal rights to organise through unions, to European Works Councils that guarantee workers the right to be informed and consulted on certain key issues, to large German companies governed by Mitbestimmung or ‘codetermination’, which grants representatives elected by workers an equal number of seats on the supervisory board (although the chair is still chosen by the capital investors, and casts the deciding vote). Yet nowhere in the world does any kind of worker organization carry the legal weight that the board of a corporation has in deciding the fate of a firm (with the exception of existing worker-owned businesses, of course). The challenge of ensuring that AI is developed ethically should place the task of restructuring corporations at its center: making room for worker voice should rise to the very top of the global political agenda. We can expect this debate to take center stage in 2025, as the new Economics Nobel laureates Daron Acemoglu and Simon Johnson have spoken eloquently to the crucial role that workers’ rights, and thus corporate structure, play in shaping the impact of technology.[12]

The brand-new and first-of-its-kind “International AI Safety Report” unfortunately misses the point: it contains no mention of corporate design as a fundamental obstacle to the delivery of ethical and safe AI. Only one insufficient mention speaks to the imbalance of power between for-profit corporations governed in the interests of shareholders and those who invest their labour in them – the workers, who know the potential risks best: “Whistleblowers can play an important role in alerting authorities to dangerous risks at AI companies due to the proprietary nature of many AI advancements. (…) Incentives and protections for whistleblowers are expected to be an important part of advanced AI risk governance.”[13]
If we care about workers’ ethical intuitions and judgments – and if we are serious about their knowledge – there is no reason why we wouldn’t want to empower them. The people who are actually developing AI should weigh in on AI alignment. If democratized, the corporation – the epicenter of the capitalist economy – could provide a key solution in the search for a future where the pursuit of open-ended endeavors goes hand in hand with ethical and safe growth and development.
What’s next after that? By diversifying the approaches and rationalities of those who sit at board tables,[14] by moving firms away from the monomaniacal – irresponsible – pursuit of profit maximization, we might start to have a plan for a better, democratic future, capable of valuing the common good. It is reasonable to hope that democratized corporations will relate to democratic public powers in ways that are far more mutually beneficial. This increased cooperation would strengthen public powers’ ability to set much-needed limits on the power of all firms, and initiate a new capability to value the common good of humanity – and the planet.[15] Instead of worrying about regulation that stifles innovation, let’s stop stifling innovation in the field of regulation itself. As we search for answers to the question of how to structure corporate governance in ways that nurture ethical and safe AI development, ensuring significant collective worker voice at the board level could unleash the creative capabilities of our democracies, placing power back where it should belong: in the hands of the people, not algorithms or Broligarchs.
[1] FNRS Research Director / Extraordinary Professor of Sociology, University of Louvain (CriDIS_TED); Distinguished Research Fellow, Institute for Ethics in AI, Oxford University; Senior Research Associate, Center for Labor and a Just Economy at Harvard Law School; Member of the Royal Academy of Belgium, Technology and Society section. The author thanks Leopoldo Moncada for research assistance and Miranda Richmond Mouillot for her outstanding editorial support.
[2] https://commission.europa.eu/document/download/97e481fd-2dc3-412d-be4c-f152a8232961_en
[3] https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives
[4] ‘A Right to Warn about Advanced Artificial Intelligence’ (open letter, June 2024) (available at: https://righttowarn.ai).
[5] Ibid.
[6] See our joint piece on this topic: Adams-Prassl, Jeremias, Ferreras, Isabelle, Block, Sharon, Miller, Michelle, ‘Current AI Challenges to the Future(s) of Work’, AI Ethics At Oxford Blog (15th November 2024) (available at: https://www.oxford-aiethics.ox.ac.uk/blog/current-ai-challenges-futures-work).
[7] See: Ferreras I., 2017, Firms as Political Entities. Saving Democracy through Economic Bicameralism (Cambridge University Press)
[8] See: Ferreras I., T. Malleson, J. Rogers ed., 2024, Democratizing the Corporation. The Bicameral Firm and Beyond (NYC/London: Verso)
[9] https://ssir.org/articles/entry/strengthening-hybrid-board-governance#
[10] Most recently, as he delivered the annual lecture of the Institute for Ethics in AI, Oxford, on 6 June 2024.
[11] For more on this proposal, see my work on Economic Bicameralism and: Battilana, Julie, and Isabelle Ferreras. "From Shareholder Primacy to a Dual Majority Board," The Aspen Institute Business & Society Program Report Series, August 2021. https://www.hks.harvard.edu/publications/shareholder-primacy-dual-majority-board
[12] See Acemoglu D. and Johnson S., Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity (London: Basic Books, 2023; paperback edition 2024).
[13] See page 164: Bengio Y. et al., “International AI Safety Report” (DSIT 2025/001, 2025) (available at: https://www.gov.uk/government/publications/international-ai-safety-report-2025). Yet the report comes close to identifying that workers themselves are the key, as it states on page 23: “For risk identification and assessment to be effective, evaluators need substantial expertise, resources, and sufficient access to relevant information. Rigorous risk assessment in the context of general-purpose AI requires combining multiple evaluation approaches. These range from technical analyses of the models and systems themselves to evaluations of possible risks from certain use patterns. Evaluators need substantial expertise to conduct such evaluations correctly. For comprehensive risk assessments, they often also need more time, more direct access to the models and their training data, and more information about the technical methodologies used than the companies developing general-purpose AI typically provide.” Finally, one critical bias of the Report is at least openly disclosed: the authors candidly acknowledge that hundreds of the references in its bibliography were “either published by a for-profit AI company or that at least 50% of the authors on a preprint (based on their listed affiliations) work for a for-profit AI company” (page 230).
[14] See J. Battilana’s work on hybrid organizations: J. Battilana and S. Dorado, 2010, "Building Sustainable Hybrid Organizations: The Case of Commercial Microfinance Organizations," Academy of Management Journal, Vol. 53, No. 6. https://doi.org/10.5465/amj.2010.57318391
[15] See I. Ferreras, “From the Politically Impossible to the Politically Inevitable” in Ferreras I., Battilana J., Méda D., 2022, Democratize Work. The Case for Reorganizing the Economy. Chicago: The University of Chicago Press. Pp. 23-46.