Ethics in AI Lunchtime Seminar - Wednesday 17th May 2023, 12:30pm (BST)
Attendance is by registration only; the registration form can be found here.
Human judgment is flawed and limited. Our reasoning degrades when we are tired or hungry; we are capable of thinking only in low dimensions; we process many forms of information slowly; and so on. Algorithmic judgment promises to correct our flaws and exceed our limits. Algorithms do not get hungry or tired; they can “think” in high dimensions; they process many forms of information at break-neck speed; and so on. This paper is concerned with a particular flaw in human judgment—noise, understood in Kahneman et al.’s (2021) sense, as unwanted variability in judgment. A judge is noisy, for example, if she sometimes hands down harsh sentences and sometimes lenient sentences—with no particular rhyme or reason—to defendants who ought to receive the same sentences. Her judgment exhibits unwanted variability. We ask: are algorithmic systems susceptible to noise? At first glance, the answer is no—and indeed, Kahneman et al. argue that it is no—since many algorithmic systems compute the same function every time, and so by definition are free from a certain kind of variability. This first glance, we argue, is misleading. The kind of variability that algorithms are free from can, and often does, come apart from the kind of variability that is unwanted in cases of noise. Algorithms are susceptible to noise, just like we are.