
Algorithms and prejudice

As governments and police increasingly rely on algorithms and automation, legal experts warn these systems could undermine key discrimination protections. By Mike Seccombe.

Professor George Williams, dean of law at the University of New South Wales, at a Sydney inquiry into press freedom.
Credit: AAP IMAGE / Joel Carrett

Did your parents separate before you were five years old?

Have friends or family in your neighbourhood been the victims of crime? Have you an ability to sweet-talk people in order to get what you want? Do you believe that many people get into trouble or use drugs because society has given them no education, jobs or future?

Answer carefully, for your freedom may depend on it. Or it would, at least, if you were up before a court in a number of states in America.

The answers to these questions are deemed, respectively, to be indicators of “family criminality”, a risky “social environment”, a “criminal personality” and “criminal attitudes”. They, along with 133 other data points, are fed into a proprietary algorithmic system called COMPAS, the Correctional Offender Management Profiling for Alternative Sanctions, which purports to assess the overall “risk factor” of reoffending.
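The weightings COMPAS gives each answer are a trade secret, but the general shape of such a tool can be sketched in a few lines. Everything below, the field names, weights and thresholds, is invented for illustration; none of it is drawn from the actual instrument.

```python
# Purely illustrative: a toy risk score in the general style of
# questionnaire-based tools such as COMPAS. The real instrument's 137
# items and their weights are proprietary; these names, weights and
# thresholds are invented.
ILLUSTRATIVE_WEIGHTS = {
    "parents_separated_before_five": 1.0,   # "family criminality"
    "crime_victims_in_neighbourhood": 1.5,  # "social environment"
    "ability_to_sweet_talk": 0.5,           # "criminal personality"
    "blames_social_disadvantage": 0.5,      # "criminal attitudes"
}

def toy_risk_band(answers):
    """Map yes/no answers to a coarse risk band via a weighted sum."""
    score = sum(ILLUSTRATIVE_WEIGHTS[q] for q, yes in answers.items() if yes)
    if score >= 2.5:
        return "high"
    if score >= 1.0:
        return "medium"
    return "low"

print(toy_risk_band({
    "parents_separated_before_five": True,
    "crime_victims_in_neighbourhood": True,
    "ability_to_sweet_talk": False,
    "blames_social_disadvantage": False,
}))  # -> high
```

Opaque as the real weightings are, the structure matters: answers about family, neighbourhood and attitudes are converted into a number, and the number follows the defendant into court.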

It’s not hard to see why such a system might appeal to some in the judiciary and legislatures. In the United States, as in this country, decisions relating to bail, sentencing and parole are often controversial, with those on the left complaining they are too harsh and those on the right complaining they are too soft. How tempting to be able to lay responsibility on an apparently scientific, impartial, computer-based system of risk assessment.

Except that it isn’t impartial. Algorithms, like people, can be subject to bias, either built in – wittingly or otherwise, by those who program them – or learnt by the machines themselves.

An investigation by ProPublica found serious in-built biases within the COMPAS system – black defendants were 77 per cent more likely to be identified as being at higher risk for committing future violent crime and 45 per cent more likely to be predicted to commit a future crime of any kind.

The investigation documented various case studies in which white and black co-defendants were treated differently, or where people of colour were determined to be of higher risk than white people with worse criminal histories.

The leaking of the COMPAS questionnaire – although not the details about the weighting given to various items on it – suggests possible reasons for this bias. Family breakdown was deemed an indicator of potential criminality, but so too was poverty. Poor people are more likely to live in crime-ridden neighbourhoods, and black Americans are more likely to be poor.

As for the capacity to sweet-talk – it could be an indicator of dishonesty, or simply of charm. And the belief that social disadvantage is a factor in drug abuse and crime – that may correlate with liberal values but, unsurprisingly, COMPAS was mostly embraced by conservative states.

As yet, this form of algorithmic justice hasn’t leaked into Australian courts, though some experts see pressure coming from those who want to sell their proprietary systems.

But algorithmic decision-making is here already, in government, and has been for decades in the form of relatively simple data-matching. It went largely unnoticed by the public until the recent spectacular failure of the federal government’s so-called robo-debt system, which saw some 20 per cent of the people sent automated debt notices asked to repay money they did not owe. And that was a pretty simple system.

To those paying attention, robo-debt’s flaws were obvious from the start. They were in-built: a mismatch in the data going in, with annual tax-office income figures averaged evenly across fortnights and compared against fortnightly welfare payments; a reversal of the onus of proof, so it was incumbent upon recipients to prove they didn’t owe money, rather than on Centrelink to prove they did. And there was a lack of transparency about the whole process.
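The averaging problem at the heart of that mismatch can be shown with a simplified sketch. This is not Centrelink’s actual code and every figure is invented: it simply shows how smearing an annual tax-return figure evenly across 26 fortnights manufactures a debt for someone whose payments were all correct.

```python
# Simplified sketch of the robo-debt data mismatch; not Centrelink's
# actual system, and every figure here is invented for illustration.
FORTNIGHTS = 26
INCOME_CUTOFF = 1_075   # hypothetical fortnightly earnings cut-off
BENEFIT = 550           # hypothetical fortnightly payment

# Reality: 13 fortnights of work at $3,000, then 13 with no income,
# during which the benefit was correctly paid.
actual_income = [3_000] * 13 + [0] * 13
benefit_paid = [0] * 13 + [BENEFIT] * 13

# What the person truly owes: benefits paid in fortnights when they
# earned above the cut-off. Here, nothing.
true_debt = sum(b for inc, b in zip(actual_income, benefit_paid)
                if inc > INCOME_CUTOFF)

# The data-matching shortcut: smear the annual tax-return figure evenly
# across all 26 fortnights, then treat every benefit payment as an
# overpayment because the average sits above the cut-off.
averaged = sum(actual_income) / FORTNIGHTS   # $1,500 every fortnight
raised_debt = sum(benefit_paid) if averaged > INCOME_CUTOFF else 0

print(true_debt)    # 0
print(raised_debt)  # 7150 -- a debt that does not exist
```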

These problems could have been rectified much sooner, had the Morrison government been more concerned with ensuring the system worked fairly.

But things are rapidly becoming a whole lot more complex, says Victor Dominello, who, since April, has worn the unique title of minister for Customer Service in the New South Wales government. He sees his role as being largely about preparing his government for the “confluence of technology that’s about to explode on our doorstep”.

In the next five to 10 years, Dominello says, “the world is just going to be profoundly changed” by the arrival of 5G communication, quantum computing, artificial intelligence and advances in machine learning.

“The use of data and digital technologies is the fastest way that we can improve lives and outcomes for people across the world,” he says. “I have no doubt about that.

“But equally I understand the dystopian possibilities. We are coming to a massive inflection point in human history in my personal view, that if we don’t get it right, then it will have a serious deleterious impact.”

A 2014 decision by Amazon to build a system of artificial intelligence that could assess the résumés of job applicants illustrates how things can go wrong.

Amazon’s system was designed to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance in the tech industry. And thus the system taught itself to prefer male applicants, reinforcing the company’s pre-existing gender imbalance. Instead of increasing the diversity of staff being hired, it did the opposite. Dominello calls it an example of “bias upon bias”.
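The mechanism needs no sophisticated machine learning to demonstrate. In the toy sketch below, which is not Amazon’s system and uses invented data, a model that merely scores résumé keywords by how often they appeared among past hires ends up penalising a keyword associated with women, because so few women were hired in the first place.

```python
# Toy illustration of historical bias being learnt; not Amazon's system.
# The "model" scores resume keywords by how often they appeared in past
# hires. Past hires skew heavily male, so a keyword such as "women's
# chess club" inherits a low score through no fault of the candidate.
from collections import Counter

past_hires = (
    [["java", "golf club"]] * 80            # historical hires, mostly men
    + [["java", "women's chess club"]] * 5  # few women hired historically
)
rejected = [["java", "women's chess club"]] * 45

hired_counts = Counter(kw for cv in past_hires for kw in cv)
seen_counts = Counter(kw for cv in past_hires + rejected for kw in cv)

def keyword_score(kw):
    """Fraction of resumes containing this keyword that were hired."""
    return hired_counts[kw] / seen_counts[kw]

def cv_score(cv):
    return sum(keyword_score(kw) for kw in cv) / len(cv)

print(round(cv_score(["java", "golf club"]), 2))           # ~0.83
print(round(cv_score(["java", "women's chess club"]), 2))  # ~0.38
```

The data never records anyone’s sex; the skew is carried in by proxies, which is precisely why such bias is hard to see and harder to audit.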

This capacity of computers to “learn” biases presents a real problem, as Federal Court justice Melissa Perry pointed out in a presentation to a conference on immigration law run by the Law Council of Australia earlier this year.

“For example,” she said, “allegations of algorithmic bias have arisen in the United States in relation to algorithms used to offer jobs, determine loan applications and rank school teachers.”

Perry also referenced the practice of “predictive policing” as a concern.

The concept is simple enough: feed in data on crime rates in particular areas, and use the algorithm to determine where police resources should be concentrated. The complaint, though, is that this becomes self-reinforcing. More intensive policing means more crime detected, which in turn leads to a more intensive police presence – and so on.
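A toy simulation makes the loop visible. In the sketch below, which is not any police force’s actual model and uses invented numbers, two suburbs offend at exactly the same rate; the only difference is a slightly heavier initial patrol presence in one of them.

```python
# Toy feedback-loop simulation; not any force's actual system.
# Both suburbs have identical underlying offending, by construction.
TRUE_OFFENCES = 100          # offences per suburb, per year
DETECTION_PER_PATROL = 0.01  # share of offences each patrol detects (invented)

patrols = {"suburb_a": 55, "suburb_b": 45}

for year in range(1, 6):
    # Recorded crime depends on how hard you look, not just on offending.
    recorded = {s: TRUE_OFFENCES * DETECTION_PER_PATROL * p
                for s, p in patrols.items()}
    # Hotspot rule: shift patrols each year towards the suburb with the
    # higher recorded count.
    hot, cold = sorted(recorded, key=recorded.get, reverse=True)
    shift = min(5, patrols[cold])
    patrols[hot] += shift
    patrols[cold] -= shift
    print(year, recorded, patrols)

# The gap widens every year, even though both suburbs offend at the same
# rate: more patrols -> more recorded crime -> more patrols.
```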

In 2017, NSW Police came under scrutiny for its use of a “risk-assessment” tool called the Suspect Targeting Management Plan (STMP) to predict the likelihood of offending for children as young as 10. Once an individual was placed on the STMP, they were subject to intense police contact – randomly, repeatedly detained as they went about their lives and visited by police at home at all hours – even if they had never committed an offence. Overwhelmingly, the young people deemed risky by the tool were from an Aboriginal or Torres Strait Islander background.

A study of the STMP by Dr Vicki Sentas, senior lecturer at the University of NSW Law School, and Camilla Pandolfini, principal solicitor at the Public Interest Advocacy Centre, found the practice had no demonstrable impact on crime prevention and was counterproductive in that it increased hostility to police and undermined efforts to rehabilitate offenders.

That is not to deny that algorithms and machine learning can be useful – but the question of how much weight to give their findings in a decision-making process is a thorny one.

It is statistically true, for example, that men are more likely to commit crimes of violence, and more inclined to recidivism. Aboriginal Australians are statistically more likely to commit crimes.

“So,” says Professor George Williams, dean of law at UNSW, “should you weight an algorithm for that? If you do, you come up against discrimination law: race, sex, religion, ethnicity.”

Then there is the question of the extent to which decision-making is delegated.

Andrew Ray, a research assistant at the Australian National University College of Law, looked at the “authorisation provisions” in more than 20 different federal acts.

“They’re fairly standard in form,” he says. “They essentially say that the secretary of a department may use a computer process for any decision that the secretary could make under the relevant law. There’s very little limitation attached to that, and as government is automating processes … there’s actually no way of knowing if an individual decision has been made by a computer program, and no way to view the computer program itself.”

This raises the question posed by Perry, Williams and others: Who is the decision-maker, and to whom has authority been delegated?

As Perry asked in her presentation: “Is it the programmer, the policymaker, the human decision-maker or the computer itself?”

A further complication, says Professor Lyria Bennett Moses, director of the Allens Hub for Technology, Law and Innovation at UNSW, is that some statutes contain a provision requiring certification that an automated system is “functioning correctly”.

“And the meaning of that is problematic,” she says, because it’s not clear what rate of accuracy equates to correct functioning.

“You might have 99 per cent accuracy – that’s about as good as these things get,” says Bennett Moses. “But, nevertheless, there are going to be people whose data is matched incorrectly. You still need a mechanism in law to deal with errors that have a negative impact on individuals.”
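The arithmetic behind that caveat is simple. Assuming a hypothetical data-matching run across three million records, even 99 per cent accuracy leaves tens of thousands of people wrongly matched:

```python
# Back-of-the-envelope arithmetic with an invented caseload: even at
# 99 per cent accuracy, a large data-matching run leaves a lot of people
# needing an error-handling mechanism in law.
records = 3_000_000      # hypothetical number of matched records
accuracy = 0.99
wrongly_matched = records * (1 - accuracy)
print(int(wrongly_matched))  # 30000 people whose data is matched incorrectly
```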

There is no reason why these difficulties can’t be addressed but, ultimately, that means ensuring human accountability and values, says Victor Dominello. “Everything in the digital arena has to be based on four pillars, I believe: privacy, security, transparency – opening up data as much as you can, subject to privacy and security laws – and ethics. Together, those add up to trust.”

It sounds simple in theory but, as he concedes, it is not so simple in practice.

“Technology is moving fast, ever faster,” Dominello says. “But government’s not moving any faster. In fact, it’s getting slower. We face a regulation lag between technology and societal standards … And something’s going to have to give.”

Otherwise, he says, we will delegate our future to machines.

Algorithms can make decisions easier, but Dominello says that in the end “the decision has to be a human decision, and that human needs to be accountable”.

The thousands of robo-debt recipients would no doubt agree.

This article was first published in the print edition of The Saturday Paper on Dec 7, 2019 as "Algorithm and blues".


Mike Seccombe
is The Saturday Paper’s national correspondent.
