Machines Behaving Badly
There is a tendency in discussions about artificial intelligence to swing between gushing visions of a magical future and gloomy predictions of a dystopia. Toby Walsh’s newest offering, Machines Behaving Badly: The Morality of AI, navigates these twin tendencies constructively. At the outset, he notes how one of the causes for optimism in what otherwise might be a glum reality is that many people working in the AI industry are “waking up to their significant ethical responsibilities” and that “this book is a small part of my own awakening”.
It is encouraging to see Walsh, a highly qualified and well-respected thinker in the field, take responsibility for contending with ethical questions in engineering and computer science spaces that have too often been perceived – or dismissed – as beyond their scope. This is an accessible read, with a nuanced mix of optimism and steadfast, practical caution.
Walsh observes that industry is leading the development of AI, but that the values of many companies doing this work may not be ethical or responsible. How do we address this? It is an undeniably large question that extends well beyond computer science, provoking questions about law – including how it is made and by whom – the role and limits of cultural context, and knotty philosophical problems associated with the concept of fairness. While there is something deeply laudable about the task Walsh sets himself, it’s fair to say that some of these questions are explored with greater acumen and depth than others.
On one level, there are questions that must be explored before contending with how to make AI ethical, including whether we need it at all. The case for AI is rarely questioned and often merely assumed, leaving the opportunity costs unexamined. Perhaps understandably, Walsh is less inclined to explore these problems, assuming, for example, that self-driving cars are not only inevitable but desirable. This leaves unasked the question of whether we ought to take the opportunities presented by the digital revolution to discuss reducing dependency on personal vehicles, which have reshaped our urban landscapes at significant cost. It also evades broader social questions about the companies that are seeking to advance this technology and how they might be working to limit their own liability. If we are to make AI ethical, we need to ask questions not only about its applied functionality, but about who gets to decide where and when it should be deployed. Perhaps Walsh’s ethical awakening does not extend this far, but these are significant, foundational questions and it would be helpful to know his thoughts on them.
The risk in shifting the focus in this way is, of course, that it becomes hard to proceed into meaningful analysis of the ostensible topic of the book. Kate Crawford’s Atlas of AI, for example, conducts a meticulous and prudent examination of the material inputs that go into various AI systems. While Crawford’s work is invaluable, by way of contrast there is also something practical about Walsh’s determination to work through the realities of existing AI systems and the adequacy or otherwise of the limits placed upon them by laws and social conventions. As distinct and often complementary approaches to the same topic, it is useful to read both together.
One critical issue is that AI does not sit in a vacuum. It is a human creation and it is used and misused by humans – as Crawford neatly puts it, AI is “neither artificial nor intelligent”. Even the most autonomous weapons have been created using human labour and will be deployed by human decision-makers. Aiming to regulate the technology itself, separately from its political risks, misses this human context. This means that the problems presented by AI are less novel than they often appear and that traditional rules and laws are more relevant than we might assume.
Walsh is at his best when he talks about his own field and creates bespoke and granular tools for mapping the ethical conundrums that will likely face those working within it. On a few occasions he sets up lists or proposals for ethical problem-solving, and it makes for easy and sensible reading, with gravitas. He is less persuasive when discussing thinking he is understandably less familiar with – human rights, for example, are examined and dispensed with in barely two pages.
It seems a shame, because so many of the challenges that are faced in that field – balancing rights and negotiating ideas of fairness – have occupied human rights thinkers for decades. But overall there is a lot to admire about a person like Walsh, with his deep expertise, grappling meaningfully with the trials and tribulations of regulating human behaviour.
This book is perfect for readers who are immersed in the world of machine learning and are looking for a functional and engaging introduction to the importance of ethical thinking. It’s an important book because Walsh speaks from a position of authority about the benefits of caution and reflection, which serves as a critical counterweight to the breathless utopianism that is not uncommon in the field.
It is not a definitive work, but perhaps that is the point: hopefully it will encourage more interdisciplinary conversations about the social and political contexts in which AI operates. The task could not be more urgent given that technology will play a part in solving some of the biggest problems facing humanity, including climate change, wealth inequality and insecure work. Walsh’s contributions will invariably bring us closer to ethical solutions.
La Trobe University Press, 288pp, $32.99
La Trobe University Press is a Schwartz imprint.
This article was first published in the print edition of The Saturday Paper on June 4, 2022 as "Machines Behaving Badly, Toby Walsh".