With a two-word Twitter missive, Elon Musk unleashed chaos: “Use Signal,” the Tesla boss told his 42.6 million followers. The tweet, sent on January 7, saw downloads of the encrypted messaging app surge 25-fold, causing temporary outages in the service. Shares of an entirely unrelated tech company, Signal Advance, skyrocketed 6350 per cent over three days.
“[Users] were offered a binary choice of ‘you accept these terms’ or ‘get off the service’ and this approach to users is wearing people down,” O’Shea tells The Saturday Paper. “Increasingly, I think, we will see people choose the latter.”
Digital rights activist and IT consultant Justin Warren says the degradation of Facebook’s reputation on privacy, coupled with high-profile recommendations such as Musk’s, is creating pressure for users to switch to other options.
But then there is a contingent of users who have been forced off social media alternatives as tech companies finally attempt to crack down on rampant misinformation and hate speech.
“They won’t all choose Signal,” Warren says, “but some will.” Earlier this month, Twitter suspended more than 70,000 accounts linked to the far-right QAnon conspiracy theory movement.
In the wake of the Capitol riots in the United States, the social media site Parler, which attracted millions of conservatives and extremists in 2020 with its refusal to police dangerous speech, has been denied web hosting services by Amazon and removed from app stores.
Some users jumped over to another unmoderated social media network, Gab, which has survived for years as a pariah to the mainstream tech giants by engaging with alternative service providers. Parler is following in Gab’s footsteps, landing a deal with a hosting service provider in Russia – although for now the new site only links to a landing page.
University of Technology Sydney professor Andrew Jakubowicz, an expert on cyber racism, says the rise of alternatives was entirely foreseeable in the absence of new regulation. “Years ago, we predicted that if platforms would tighten up, but governments did little, you’d see fragmentation of social media and people would seek the safest place to hate,” he says.
Jakubowicz would like to see laws change in Australia to strengthen users’ rights to take legal action against platforms if they are victims of hate speech, regardless of which social media option they use.
“People need to have the right to push back against the platform, and to move for damages so there are financial penalties,” he says.
But the challenge of regulating social media companies, even with a willing government, is formidable. Witness the flexing of the sector’s muscles in response to Australia’s move to force tech companies to share advertising revenues with local media outlets.
On Friday, Google told an Australian senate inquiry the company would remove search from Australia entirely if the plan moves ahead.
Under the proposed code, Google and Facebook would be subject to mandatory price arbitration if a commercial agreement on payment for Australian media cannot be struck.
The tech giants have pushed back strongly against any such code, bombarding users with messages lambasting the proposal, including pop-up windows appearing on the screens of visitors to the Google search page. Facebook has threatened to ban all news from its platforms, while Google is already experimenting with tweaking algorithms to bury Australian news media in search rankings.
The Australian Financial Review reported anecdotal incidents of users searching for the outlet’s stories only for the Google search engine to display old stories or content from other sources. Google confirmed it would be running experiments with Australian news results through to early February.
This week, the US government intervened in the stoush, with its trade representatives urging the senate inquiry to abandon the code and offering the ominous warning that it “may result in harmful outcomes”.
Will Easton, Facebook Australia’s managing director, confirmed on Thursday in a blog post that the company wouldn’t launch Facebook News in Australia unless the news bargaining code is changed, and would “prioritise other countries for investment”.
Meanwhile, Australian Competition and Consumer Commission (ACCC) chair Rod Sims has warned the code may just be the beginning of efforts to regulate the tech giants. “This bargaining code is a journey, if we see market power elsewhere, we can add them to the code,” he told Reuters in December.
Last month, the Coalition put a draft online safety bill out for consultation, proposing a range of new powers for the eSafety Commissioner. These include the power to force online service providers to remove harmful online content within 24 hours, and mandatory reporting requirements for social media platforms to ensure they are meeting online safety expectations. Breaches could see companies fined up to $550,000 and individuals up to $111,000.
But when it comes to misinformation, some of the politicians with the power to regulate are themselves part of the problem that needs regulation.
Earlier this month, Facebook restricted a post from Queensland MP George Christensen, which spread false claims that left-wing groups were behind the riot at the US Capitol. Liberal MP Craig Kelly has similarly received a caution from the social media giant over his claims regarding unproven Covid-19 treatments. Prime Minister Scott Morrison has been unwilling to condemn either Coalition backbencher.
The Covid-19 misinformation shared by Kelly in the midst of a global pandemic has provoked concern among health experts. Dr Claire Hooker, an expert in public health communication at the University of Sydney, says health misinformation spread via social media presents a real-world threat.
“It can become the basis for powerful advocacy against restrictions, and we’ve seen the consequences of that in the US, where the number of people dying from coronavirus each day can exceed the death toll on 9/11,” Hooker tells The Saturday Paper.
But she suspects the recent efforts of social media companies such as Twitter to control Covid-19 misinformation by labelling posts will fail to contain the problem. “The label isn’t very visible, and the posts still get shared widely. The algorithms aren’t good enough to identify misinformation,” she says.
Where algorithms, warnings and legislation fail, the lone backstop remains that social media sites can remove any user from their platform – as was dramatically done to former US president Donald Trump after the Capitol riot.
Christensen responded to the Trump ban and the sanctions against his own misinformation by launching a petition to lobby Communications Minister Paul Fletcher to introduce laws “to stop social media platforms from censoring any and all lawful content created by their users”.
For her part, Lizzie O’Shea does have some reservations about the power of tech companies to set the terms of public debate through the removal of politicians’ accounts, but she is more concerned with the co-ordinated effort by competitors to stamp out Parler. “There is a good case to be made that marketplaces should be neutral and this kind of co-operative deplatforming prohibited,” she says.
“Of course, that still leaves us with the bigger problem of far-right extremism and white nationalism … This is a technological issue, of course, but it’s actually a political problem, including with tolerance of extremist views among mainstream politicians,” she says. “It’s a bit rich for conservatives in Australia to complain about social media companies fanning the flames of disinformation while Craig Kelly and George Christensen are still members of the Coalition.”
Some people fear mainstream social media sites banning users will only drive them further underground, into private forums and sites that can’t be policed. But Andrew Jakubowicz says the line needs to be drawn somewhere, particularly for public figures.
“People like Christensen, Kelly and so on, unless they are held to account, then it goes on and people feel that’s where the new boundary is,” he says.
Digital privacy experts, anti-racism advocates, competition watchdogs and deplatformed conservatives may all have very different reasons for wanting to see the tech giants regulated, but they all agree it shouldn’t be left solely to the companies themselves.
“Mainstream platforms like Twitter and Facebook rely on a business model that incentivises the collection of personal information. That has had numerous negative consequences for our public discussions, as well as our sense of personal and private life online,” says O’Shea.
“These platforms must be better regulated to protect privacy and to ensure that this model does not continue to undermine our social institutions. These platforms need to move away from this approach to monetisation, because unless they do, viral extremism and disinformation will continue to be profitable for them, which is bad for us.”
This article was first published in the print edition of The Saturday Paper on January 23, 2021 as "Tangled web".