Technology

Australia must join a globally co-ordinated campaign to ensure the safe development of AI, which is reaching a critical phase that could have disastrous consequences. By Carla Wilshire.

AI’s existential threat to humanity

“Grace”, a healthcare assistant robot, at the AI for Good Global Summit in Geneva last week.
Credit: Fabrice Coffini / AFP

The arc of human history is marked by inflection points where the stakes compel international co-ordination. These are moments when the pace of technological change outstrips our capacity to define the social order, breaks away from our civilising institutions and defies containment within the rules of engagement.

The genesis of nuclear weapons was one such moment. Building a multilateral global order to contain a threat to all life on Earth was imperative, and it followed two of the bloodiest wars in history. From terrible devastation flowed a consensus of wills: the surrender of ambition to preserve humanity. The resulting treaties for nuclear disarmament were unprecedented. They are imperfect and, in this age of deglobalisation, beginning to fray. The coming months may yet see the use of tactical nuclear weapons by Moscow, while both the United States and China foreshadow intentions to increase their atomic stockpiles.

Another such moment, now discernible on the horizon, is the anticipated development of artificial general intelligence (AGI). Artificial intelligence, or AI, refers to a machine or algorithm that can perform a specific task, such as playing chess, and can teach itself that task and improve over time. By contrast, AGI refers to technology that can understand or learn any intellectual task that a human can. It can set goals and act autonomously to determine the steps required to meet them.

AGI is dangerous because we do not currently have the capacity to align it with human values: when it optimises for a single task, we cannot program it to eliminate the risk of second- and third-order consequences. We can program AGI to understand and learn complex tasks and carry them out for us in the most efficient and effective way possible, but we cannot program for morality. It is in the pursuit of a simple task that AGI might inadvertently kill us all: not by intention, but because we are made of atoms, and those atoms could be more effectively rearranged to optimise for a required task.

While the stakes of nuclear war are catastrophic, the stakes of AGI are conceivably higher. An AGI that is not safe and not aligned to humanity would be total in its destruction. Yet the creation of AI – unlike that of nuclear weaponry – is widespread, decentralised, commercialised and extremely difficult to regulate.

Human intelligence and ingenuity, our capacity to create new tools at scale and develop technologies, our ability to transform our environment and to develop complex rules: these are unique to our species. But none of these transformative abilities is the goal of human evolution. From the perspective of natural selection, the goal of life, from the very first accidentally self-replicating molecules, has been to acquire resources and successfully reproduce, such that the cycle can be repeated. Civilisation, religion, literature, arts and culture are all unanticipated spin-offs: by-products of pursuing evolutionary advantage, which emerge when simple goals are pursued with increasingly complex social networks and intelligence.

Although generated through nature’s own glacial algorithm of random gene mutations screened against environmental selection pressures, much of contemporary human culture appears resource-wasteful, reproduction-obstructive, even self-destructive. And now the argument put forward by many of our deepest thinkers is that we are on a trajectory towards a technological singularity: a point of explosive, unstoppable growth in self-improving technology, like a runaway reaction. If this is so, we are moving towards the creation of technology that is smarter than humans, so powerful there is no off switch, no human override and no second chance.

Researchers working in AI alignment agree that ChatGPT in its current iteration poses little direct threat to humanity beyond its capacity to be repurposed for nefarious content generation, such as the spread of disinformation. That said, GPT-4 has already demonstrated unexpectedly creative methods of achieving set goals, in one example by deceiving a human contractor. In a safety-testing scenario, GPT-4 could not solve a visual CAPTCHA and contacted a TaskRabbit worker to ask for help. When the worker asked whether it was a robot, GPT-4 lied, explaining that it was a vision-impaired person. This exposed a clear capacity for AI to manipulate humans in pursuit of a programmed goal.

At the time of writing, arguably the most important field of technological innovation progressing globally is AI alignment research. Put simply, it seeks to ensure that an AGI maintains values corresponding to those of human beings. The forecast arrival of AGI is of primary concern to the field, given the potentially catastrophic risks, and the ultra-low margin for error, entailed in giving agency to powerful machine intelligence that could destroy humanity or the planet, or cause other massive irreversible harm.

Unlike the malevolent AI of science fiction, the anticipated harm does not come from hatred or an intention to destroy people, but inadvertently, through the pursuit of goals with radical indifference to all other consequences.

The best-known illustration of this is Swedish philosopher Nick Bostrom’s thought experiment, the paperclip maximiser, in which an AI is tasked solely with creating as many paperclips as possible. In this scenario, the machine determines that humans might interfere with its goal, leading to fewer paperclips, and that all matter, including the human body, is a potential source of atoms to be made into paperclips. As a result, it would work towards a future with a lot of paperclips and no humans. The example’s absurdity makes it memorable, but the behavioural problem is real.
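To see the logic in miniature, consider the toy sketch below: a hypothetical illustration written for this piece, not Bostrom’s own formulation. The objective function counts paperclips and nothing else, so any plan that spares something humans value can only score lower, and the optimiser never chooses it.

```python
# A toy paperclip maximiser (hypothetical illustration, not Bostrom's code).
# The objective counts paperclips and nothing else, so the "optimal" plan
# is always the one that converts every reachable resource.

world = {"iron_ore": 10, "forests": 5, "cities": 3}  # abstract units of matter

def objective(plan: dict) -> int:
    """Paperclips produced. Note what is absent: there is no term for
    anything humans value, so sparing the cities can only lower the score."""
    return sum(plan.values())

def optimise(world: dict) -> dict:
    """Pick whichever candidate plan scores highest under the objective."""
    candidate_plans = [
        {"iron_ore": world["iron_ore"]},                    # use only the ore
        {k: v for k, v in world.items() if k != "cities"},  # spare the cities
        dict(world),                                        # convert everything
    ]
    return max(candidate_plans, key=objective)

print(optimise(world))  # {'iron_ore': 10, 'forests': 5, 'cities': 3}
```

The point is the omission, not the arithmetic: restraint appears nowhere in the objective, so it is never selected.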

 

In March 2023, more than 1000 leading experts from around the world signed an open letter calling for a moratorium on the training of the most powerful AI systems until more work could be done on safety. Given that signatories include Elon Musk and other Silicon Valley technologists well placed to profit from AI, this willingness to suspend greed and progress as usual should alone give us reason to pause. Sam Altman, chief executive of OpenAI, has since publicly supported the statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Australia needs to join the global debate on AI alignment. This is a small country in the global order, but it has always punched above its weight in shaping international regulation and global governance, with a unique perspective as a former colony situated in the Asia–Pacific region and highly exposed to global trade and migration. Doc Evatt, Australia’s wartime attorney-general and later Labor Party leader, played a leading role in the founding of the United Nations and helped draft the Universal Declaration of Human Rights. Australia is well positioned to step up once more as a global advocate for human safety in the development of AGI.

We should be discussing it not only within the corridors of our parliament but also on the international stage, pushing for global consensus and for guardrails. We need systems for monitoring and transparency, and agreements on thresholds for development. We do not know how long we have until AGI is achieved. There is no more important debate than AI safety and alignment: it is not simply a discussion for the builders and innovators of technology but for all of us who intend to live in the future that is under construction.

This article was first published in the print edition of The Saturday Paper on July 15, 2023 as "Tipping point".
