News

After the Christchurch massacre, the Coalition and Labor passed a bill prohibiting the sharing of abhorrent material on social media, but experts argue the new law will achieve little. By Martin McKenzie-Murray.

Controlling social media

Facebook CEO Mark Zuckerberg.
Credit: Niall Carson / PA Wire

In 2003, for the first time since the Nuremberg trials, three journalists were convicted of incitement to commit genocide – just as the Nazi publisher Julius Streicher had been in 1946. The three convicted men – two radio executives and the editor of an extremist magazine – had insistently and enthusiastically promoted the slaughter of 800,000 people during the 1994 Rwandan genocide.

In the months before the killings began, the radio station RTLM described Tutsi citizens as “cockroaches” requiring patriotic extermination. The magazine Kangura editorialised similarly. Days before the genocide began, it declared: “Let whatever is smouldering erupt … It will be necessary then that the masses and their army protect themselves. At such a time, blood will be poured. At such a time, a lot of blood will be poured.”

Both the radio station and the magazine were instruments in the government’s secret planning of the genocide and in the conscription of ordinary civilians to it. During the genocide, RTLM broadcast the locations of Tutsi hideouts and the personal details of Hutu moderates. “The graves are not yet full,” one broadcaster said. Samantha Power – a war correspondent, human rights lawyer and, later, President Barack Obama’s ambassador to the United Nations – wrote in her history of genocide, A Problem from Hell, that Hutu killers carried a machete in one hand and a transistor radio in the other.

The court’s judgement cited a witness: “I monitored the RTLM virtually from the day of its creation to the end of the genocide, and, as a witness of facts, I observed that the operation of the genocide was not the work done within a day … What RTLM did was almost to pour petrol, to spread petrol throughout the country little by little, so that one day it would be able to set fire to the whole country.”

Last year, almost a quarter-century after the Rwandan genocide, Facebook admitted it had failed to realise how extensively and influentially its platform had been used by the Myanmar military to inspire the ethnic cleansing of the country’s Rohingya minority. For years, accounts linked to the military – followed by almost 1.5 million people – had been inciting extreme violence. A Facebook executive admitted the platform had been used to “foment division and incite offline violence” and added, in a now familiar refrain, that “we can and should do more”.

 

It was Facebook’s role in another obscenity that prompted the hasty passage of the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill through Australia’s parliament: the killer’s live broadcast of his massacre of 50 Muslim worshippers in New Zealand. From Facebook, the video was copied and distributed elsewhere – and quickly became impossible to contain. But Prime Minister Scott Morrison said that Facebook was irresponsibly slow in responding – the video remained on the site for an hour – and that Australians demanded accountability.

“It will be a criminal offence for social media platforms not to remove abhorrent violent material expeditiously,” according to the government’s media release. Contravention will be “punishable by three years’ imprisonment or fines that can reach up to 10 per cent of the platform’s annual turnover”.

The legislation defines “abhorrent violent material” as recordings of terrorism, murder, attempted murder, rape, torture and kidnapping. The other salient word in the law is “expeditiously”, which is left undefined – it will fall to a jury to determine its meaning in the context of a trial.

Communications Minister Mitch Fifield said it was unacceptable that social media be “weaponised”; Attorney-General Christian Porter said the law was likely the first of its kind in the world – and that the government would encourage its consideration by other countries.

The bill passed with Labor’s support, even as the party voiced scepticism about the sophistication of the legislation. This is not the first time the opposition has helped pass a hastily drafted law about which it had doubts. Opposition Leader Bill Shorten said that if Labor were elected to government, it would review the legislation later this year, while the Law Council of Australia said that legislation conceived “as a knee-jerk reaction to a tragic event” rarely made good law.

“The major parties don’t want any daylight between them on issues of national security,” one tech expert told me. “So, things are rushed through, and we’ll fix it later. That’s not how democracy is meant to work.”

 

For much of this century, the heads of Silicon Valley have been the masters of the universe – the potent, seemingly untouchable shapers of modern culture and business. “Allow us our creativity,” went the mantra. “Regulate us, and the world shrinks.” Arguably, the tech giants were partly protected from regulation by public awe and faith in their benevolence, but they were most substantially protected by their leviathan size and financial influence. Only belatedly have governments begun considering the companies’ duty of care, as increasing attention is paid to their vast webs of surveillance and their indifferent responses to hate speech, incitement to violence and extreme manipulations of their platforms.

“We need to recognise that the role of the internet in society has changed,” says Dr Andre Oboler, a senior lecturer in cybersecurity at La Trobe Law School and the chief executive of the Online Hate Prevention Institute.

“This was a new space, free of censorship, and it would introduce a new age of freedom. Al Gore promoted this idea, and it was promoted by the founders of the internet,” Oboler says. “At that time, the internet was a platform that wasn’t open to the public at large. It was a network accessed by universities. Self-regulation and certain standards of behaviour went without saying.

“Nowadays, it’s a cornerstone of daily life. The uses of the internet cover the full gamut of behaviour in society. So that includes crime, racism, legitimate protest and terrorism. The question is not ‘Where do we draw the lines on the internet?’ The question is ‘Where do we draw the lines in society?’ Well, the idea that we shouldn’t have any censorship at all isn’t plausible. We have copyright laws, for example. And defamation law says you may have a right to freedom of speech, but not when it’s false and harming other people. These lines must apply on the internet also. Privacy, integrity of elections, hate speech and extremism – there’s a cost to this freedom. Now that the harm has been recognised, we need to ask how we might mitigate it.”

In the past, Facebook relied upon basic libertarian principles to argue against regulation. Faced with greater scrutiny and diminished faith, it is more likely today to argue that moderating hateful content is impractically burdensome. Every day, 1.5 billion people log on to the platform, generating unfathomable quantities of content; manual monitoring is a Sisyphean task, and scanning software is imprecise. But, Oboler says, the scale of the task is often an argument of convenience: companies simply have to be incentivised, by law and public exposure, to uphold their responsibilities.

“Is it too late to regulate these companies?” Oboler asks. “No, it’s not. Geoblocking means content can be regulated at a local level. A decade ago, I spoke with Facebook about Holocaust denial. Facebook prohibits hate speech, but that doesn’t include Holocaust denial. And that’s still true today. When I first raised the issue with them, I pointed out that it’s a criminal offence in a range of countries. Their response was: just because it’s illegal in one country, we can’t take it off. But we pointed out that they block content based on location. Ultimately, they accepted that. They began blocking things locally. Facebook is very good now at conforming with German law.

“The issue is: when they do the cost–benefit analysis in a country like Germany, where there are heavy penalties for not moving against hate speech, the costs are higher than the potential benefits of ignoring it. The way things happen in Germany is very different to how they happen here [in Australia]. That’s because of external influence. This was the result of concerted effort by the German government. They said to Facebook: ‘You are facilitating breaches of our laws.’ After broader incidents of hate speech, prosecutors threatened to go after executives personally. Eventually, Facebook said they would reach an agreement.”

 

If Scott Morrison was eager to declare his decisiveness after Christchurch, the reaction of New Zealand’s privacy commissioner, John Edwards, to Facebook betrayed that country’s rawer, unmediated disgust with the company. “They are morally bankrupt pathological liars who enable genocide,” he wrote on Twitter, a reference to the killing of the Rohingya in Myanmar. “[They] facilitate foreign undermining of democratic institutions … allow the live-streaming of suicides, rapes, and murders, continue to host and publish the mosque attack video, allow advertisers to target ‘Jew haters’ and other hateful market segments, and refuse to accept any responsibility for any content or harm.”

The posts were later deleted, but New Zealand Prime Minister Jacinda Ardern is now considering alternatives to Facebook’s live stream, which she had previously used to broadcast speeches.

One thing Australia’s new law is not is a reckoning with a culture far more tolerant of extreme speech. As Andre Oboler says, the bill was not only rushed but also exceptionally narrow. The Dangerous Speech Project, run out of Harvard University, has identified five “hallmarks” of dangerous speech: dehumanisation, threats to group purity, assertions of attacks on women and girls, questioning the loyalty of the in-group, and “accusation in a mirror” – accusing the “out-group” of the very violence the speaker intends to commit or encourage. Rwanda’s genocidal broadcasts contained all five. So did the Christchurch shooter’s manifesto. Elements can be found in the statements of white supremacist Blair Cottrell, a man once praised by the shooter and last year invited onto Sky News – not as a white supremacist but as a peer and commentator.

“Once the corporate and state medias grip on the zeitgeist of modernity was finally broken by the internet, true freedom of thought and discussion flourished and the overton window was not just shifted, but shattered,” the Christchurch shooter wrote in his manifesto. “All possibility of expression and belief was open to be taught, discussed and spoken.”

The Overton window is a term for the range of ideas considered acceptable in mainstream public discourse. Scarily, on the question of its shifting, it seems he was correct.

This article was first published in the print edition of The Saturday Paper on April 13, 2019 as "Post production".
