Beyond Regulation: Building AI in the Image of the Human

AI ethics can do more than help us to think about regulating AI systems. It should also think about the ways AI, like other new technologies, may change what we value. And it should remember that there’s such a thing as the ethics of imagination: if AI artificially recreates something we possess – intelligence – the way we imagine our own capacities may be critical to turning the AI future in a humane direction.

By Professor Edward Harcourt

As it’s usually presented, AI develops according to an internal logic – partly commercial, partly technical. Correspondingly, as in Yuval Shany’s illuminating piece on autonomous weapons,[1] or Caroline Green’s on the use of unregulated chatbots to develop care plans,[2] AI ethics is a downstream activity which aspires to regulate the application of the technology in the light of human values. This is an important aspiration. But here I want to explore two ways in which AI ethics can also aspire to more.

The first aspiration concerns the idea of moral change. When we say ‘but what about privacy/humans in the loop/rights/labour?’, we take ourselves to be articulating a timeless standard which, if only we could succeed in enforcing it, would constrain the application of AI. Of course we might fail to enforce it, and of course we might have misidentified the standard. But if we haven’t, it enables us to tell whether AI has made us better, or worse, or about the same. I am sceptical that that is really all we are doing.

That’s not because I think strategic or commercial interests sweep all before them: I’m not, that is, like Churchill, who apparently thought it was ridiculous to allow moral considerations in on the question whether to develop chemical weapons.[3] Consider the advent of literacy, or for that matter printing, railways, the phone, or television. For any of these innovations, questions rather similar to the ones we now ask about AI could be asked. Railways are in fashion now among progressive thinkers, but even independently of environmental considerations, you may puzzle as to what could possibly be wrong with them – or indeed with the telephone or the internet: they are just tools which enable one to get messages (or goods, or people) to places faster. That is, they are just more efficient means to various ends. What that answer conceals is the extent to which new technologies – the very word helps this concealment along, because it means or has come to mean ‘instruments’ – don’t just do that, but reshape our ends, partly by repositioning the boundary between ends and means.

When it was suggested to the then King of Prussia – Friedrich Wilhelm III, I think – that he might allow the construction of railways in his kingdom on the grounds that they enable you to get to your destination faster, the King’s reply was ‘What does it matter whether we get to Potsdam a few hours earlier or later?’ The example shows that it doesn’t go without saying that speed belongs on the ‘means’ side while ‘reaching a destination’ belongs on the ‘end’ side of things. The King wanted to get to Potsdam alright – some time or other. But he didn’t buy the idea that if you want that, then – on pain of being ‘instrumentally irrational’ – you must want to do it as fast as possible. Getting places fast is the end of the railway ‘technology’, and that wasn’t his end at all. But the railways came anyway. It’s an exaggeration to say that they swept a whole world away with them, because in some limited contexts people still try to get places as slowly as possible. But they did bequeath to us the then radically new end of getting places fast, in the light of which further innovations, like air travel or email, have come to seem simply more efficient ways of getting what we already want. The reason we need examples like the King of Prussia’s is to remind us that that is not all they are.

This reshaping of ends by new technologies reveals the limits of the standard menu of AI ethics questions – privacy, bias, humans in the loop and so on – important as they are. Like any transformative technology, AI doesn’t develop in isolation, leaving our landscape of values and preferences as it is while creating new means, or new obstacles, to realizing them: it changes the evaluative landscape in which it develops. We cannot yet know how, but history should teach us that the future world in which AI is as pervasive as literacy or telecommunications are in ours will be a world in which our current values and preferences have been transformed. Despite the fears of the Luddites, the first industrial revolution brought more jobs rather than fewer. But the jobs it brought were different, and with them came a transformation in the meaning of work, of craft, of the significance of nature and the countryside. (Who in a predominantly agrarian economy would have dreamt of seeing certain crops as an aspect of ‘heritage’?) We can expect the effect of AI on work to be similar. Again, some development within AI which enables less privacy may be welcomed because, by the time it comes on stream, privacy will be valued quite differently from the way we value it now. Just as the pre-railway value of leisure, or the pre-telecoms value which saw communication across a distance as a poor second best, has been partly obscured from view, so in the AI-fashioned world we will come to value goods that differ from what we now call privacy or security, albeit in ways we cannot now anticipate. After all, privacy has not always had the significance it now has for us. Or perhaps we can begin to anticipate these ways, by considering how these things are viewed by younger generations of ‘digital natives’ and, progressively, ‘AI natives’.

That is why many of today’s AI ethics questions seem less like the application of fixed standards than like proactive bids to shape the future, competing with the technology’s way of shaping it. I don’t think that makes them bad questions. But as we (rightly) see AI ethics as an attempt philosophically to grasp the meaning of convulsive change, we also need to be more aware of how the very ends against which to measure the success of AI ethics shift along with the technology we’re trying to evaluate.

My second aspiration for AI ethics also speaks to the worry that AI threatens ‘human values’, but in a contrasting way. Of course innovations have consequences unimaginable when they were first made. Still, we – that’s to say, we humans – invented AI. So before we wag our fingers at it, we should ask what was around in our evaluative landscape before the advent of the technology.

To depict the development of AI as driven by commercial considerations, with poor ethics struggling like a reed in the torrent, is to present a flattering picture of human values, even as we stress their fragility. Humans – though not all of them equally – like money: not just having it, but the competition involved in making it, and the excitement of innovation and entrepreneurship. It’s a rather basic feature of what humans do, and they do it not because they can’t help it, but because they value it. So don’t let’s pretend ‘human values’ only come on the scene to resist the pursuit of profit. The pursuit of profit is a human value, and if it sits uneasily with some others, that’s because of the complicated and divided kinds of being we are.

But there’s a deeper point here too, which goes not merely to something AI does – such as generating wealth while perhaps threatening other areas of human welfare – but to what it is. AI gives impressive effect to some long-standing visions which humans, at least in the ‘first world’, have had of themselves and of human society. Perhaps the most obvious example concerns the workplace. Consider the ideal of automation in production. We worry about the labour-market consequences once automation has AI behind it. But we should pinch ourselves every time we feel that worry. Fordism – the ideology of the production line – is the attempt to reduce human beings as far as possible to the status of machines, each drawing on only a tiny fraction of their intelligence in the service of producing more goods more cheaply. Robots can realize that vision much more fully than human workers, because they really are machines. But remember: we humans conceived of Fordism long before factories had robots, still less autonomous systems.

The example might suggest that the problem, even if it isn’t specially characteristic of AI, is specially characteristic of capitalism, or the pursuit of profit. But it connects with government too. Long before there were computer screens, the removal of judgment from decision-making and its reduction to the mechanical application of rules was already an ideal of bureaucratic administration. Again, a long-standing fantasy of liberal technocracy is of government as, in essence, a distribution mechanism. So, once powerful enough, AI will be able to do away with all the bulky, embodied manifestations of government – buildings, civil servants, documents, meetings – and replace all that with an all-encompassing cost-benefit analysis in which ‘values’ are fed in and sums of money, with instructions on how to spend them, come out at the other end. Or again, think of healthcare as a set of technical interventions on a body, in which ‘bedside manner’, the therapeutic relationship, plays no role. We aren’t there yet, but if we get closer and you don’t like the result, please don’t blame AI: quite unlike Friedrich Wilhelm and the railways, in the case of AI and automated production or automated decision-making, we really do have a case of a newly efficient means to an end we signed up for a while back, even if demonizing AI can help us to forget we did so.

Behind these imaginings stands, more fundamentally, a vision of human thought and language as computational mechanisms, perhaps realized in the as yet barely comprehensible chemical mechanisms of the brain. As consequentialism in ethics, Chomskyan linguistics, or computationalism in psychology attest in recent times, or, further back, as David Hume’s moral psychology attests already in the 18th century, the idea that we ourselves are machines – call it ‘mechanism’ – is a powerful vision of humanity which predates AI by a long way.

There are lessons here for how we think about AI ethics. One of these is a straightforwardly moral lesson. We all know the phrase ‘ethics-washing’ – a slapdash review by an ethics committee to rubber-stamp something you were going to do anyway. The phenomenon I’m getting at is also a kind of whitewashing, the kind we encounter in scapegoating, where we make a single sacrificial victim take the blame for something and end up feeling we are morally spotless. In other words, what gets whitewashed is ourselves. That is also what’s going on when we allow AI ethics to be structured by the narrative that before AI all was well, and that it’s only thanks to AI that ‘human values’ are under threat. That is wrong, because it is a form of self-blindness: to the extent that AI is bad, it reveals to us the bad in ourselves.

Sometimes when we engage in this form of whitewashing, the only thing to do is, uncomfortably, to get to know ourselves a bit better. But in this case there’s more to be done, and this ‘more’ makes a special but infrequently acknowledged demand on AI ethics. I suggest that the actual history of AI – including its development for industry, government, healthcare and the rest – seems partly owed to, because it helps to realize, a deeply rooted image of ourselves as machines. And AI made in the mechanical image of ourselves isn’t just bad because it threatens bad consequences if not properly regulated. It is bad because the mechanical image of ourselves is a false image: we are nothing like these familiar philosophical theories say we are. Given the huge weight of philosophical orthodoxy behind them, I can hardly start arguing that now. But at least these theories are contestable, and one thing AI ethics can do, in addition to thinking about how to regulate AI, is to contest them. Don’t let it be said that the truth or otherwise of mechanical theories of mind, or language, or moral choice is the business of some theoretical rather than practical department of philosophy. No less than how we regulate technologies, how we imagine ourselves is central to the business of AI ethics, as indeed of ethics generally. If AI is to be made in a better image of ourselves, and so pointed in a more humane direction, AI ethics needs to take on the task of dismantling those mechanical self-images and constructing more generous and more truthful ones.

[1] https://www.oxford-aiethics.ox.ac.uk/blog/red-herring-meaningful-human-control-and-autonomous-weapons-systems-debate

[2] https://www.theguardian.com/technology/2024/mar/10/warning-over-use-in-uk-of-unregulated-ai-chatbots-to-create-social-care-plans

[3] See R Harris and J Paxman, A Higher Form of Killing: The Secret Story of Gas and Germ Warfare (New York: Hill and Wang, 1982), p 127.