A senate report on foreign interference through social media channels in Australia is proposing a co-ordinated national approach to ensure online platforms work harder to combat false information. By Karen Middleton.
The government plan to combat social media misinformation

Twitter, now known as X, says it has blocked 6600 suspect overseas-based accounts in the past eight months for deliberately spreading false information targeting Australians online and trying to generate conflict.
The social media platform says those accounts were among 30 million it has blocked or suspended worldwide for violating its policies on information manipulation and spam since late last year.
The figures were contained in answers to questions from a senate committee, just ahead of the publication this week of its report on the prevalence of foreign interference through social media in Australia and what should be done about it. The responses form part of major social media platforms’ attempts to avoid further regulation and prove they take the issue seriously.
Since he took over Twitter in October last year, Elon Musk has been accused of stripping back the platform’s protections against false information. Despite what his and other platforms insist are active moves to weed out nefarious behaviour, Australian parliamentarians remain concerned that tech giants are doing too little to protect against activities designed to undermine democracy and foment unrest.
The multiparty senate report details the extent of foreign interference through disinformation via “coordinated inauthentic behaviour” and its threat to social harmony and national security in Australia. It says the current voluntary code for social media platforms is opaque and ineffective, urging government to do more to combat the growing problem.
“Given the declining trust in democratic institutions and leaders, the increasing polarisation of opinion in society and the dominant role of social media platforms in our contested information environment, it is clear from the evidence received over the course of this inquiry that a coordinated national approach is required,” the report says.
The senate inquiry was focused on disinformation – false material spread deliberately – as opposed to misinformation – which is spread innocently in the belief that it is true. The report was published on Tuesday, the same day the federal Coalition raised strong concerns about recently circulated draft government legislation aimed at curbing the influence of both.
The government’s Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill, still in exposure-draft form, would strengthen the powers of the Australian Communications and Media Authority (ACMA) to police the spread of false information and outlaw that which could cause “serious harm”.
It is not clear whether the government genuinely intends to legislate all of the bill’s contents or just use the threat of it to force the platforms to increase the standard, and transparency, of their self-regulation.
A spokesperson for Communications Minister Michelle Rowland said the Labor government’s move to tackle the information problem and boost ACMA’s powers echoed a promise the Coalition made while in government to do the same.
The spokesperson said the proposed bill provided a clear framework for holding digital platforms to account. It proposes stronger information-gathering powers for ACMA, because without them “Australians are none the wiser as to the systems and processes platforms have in place to protect users”.
The spokesperson said the draft framework “focuses on systemic issues which pose a risk of serious harm on digital platforms”. ACMA would not have the power to request specific content be removed but could set rules and would act as a “regulatory backstop” if self-regulation failed.
The minister would consider feedback, including on definitions, avenues of appeal and other matters, before a final version went before parliament by the end of the year.
The bill’s current definition of “harm” includes harm to any group in the Australian community, to democratic processes, the environment or the economy, or to Australians’ health or financial interests. It also includes harm arising from “disruption of public order or society”. It does not specify how many people would need to be potentially harmed or what constitutes “serious” harm.
If passed, the new law would authorise ACMA to compel companies or individuals who may hold documents or information that is evidence of misinformation on digital platforms to hand it over, or face a fine of $8000 a day.
Rather than adjudicate more specifically on what could be misinformation, the bill seeks to regulate according to who is providing it.
It exempts information coming from Australia’s governments, or from accredited educational institutions, including their accredited foreign partners. It also exempts material produced “in good faith for the purposes of parody, entertainment or satire” and “professional news content”. It refers to “quality journalism” but does not define it. Journalists are only exempt from the proposed law for content on their professional platforms, not for material posted elsewhere.
The shadow communications minister, David Coleman, calls the exclusions too arbitrary, the definitions too vague and the bill “a complete mess”. He says it means material from a government, a university professor or in some cases an entertainer would be protected, but not the views of anyone challenging them.
“It’s difficult to see how they can possibly salvage this,” Coleman tells The Saturday Paper. “It is such a bad bill and so far from the mark, it is going to be very difficult.”
He says there are “obviously a lot of people who say profound and interesting and controversial things who aren’t academics or journalists, and those people aren’t protected by the bill”. He adds: “Think of all the people in history who were outliers who were shunned for things they said and turned out to be right.”
Confined to foreign-sourced disinformation, the senate inquiry did not examine the bill, which would extend to all non-exempt material deemed false information and published on digital platforms in Australia. The inquiry asked major social media companies to appear, and all did except the Chinese-language platform WeChat.
The committee condemned WeChat, saying its answers to written questions, which insisted it was eager to co-operate with Australian authorities and downplayed the influence of the Chinese state, lacked credibility and were “widely contradicted by the evidence”.
The multiparty committee has urged the government to consider extending the current ban on having the TikTok video-sharing app on government devices to other China-linked platforms, including WeChat. Because of the apps’ data-harvesting properties and China’s requirement that companies pass such data on to its government, the committee also recommends applying the ban to government contractors and to designated systems of national significance.
In determining how best to tackle the threat of false information on social media, whether spread by mistake or malice, the central tension is whether to lean more heavily on transparency or on censorship.
The report expresses concern that platforms are succumbing to private pressure from authoritarian governments to remove some content in some locations, fuelling distortion and disinformation.
Overall, the report emphasises disclosing the sources of information and the motives of its proponents over blocking it. It calls for social media platforms to meet minimum transparency standards and for large companies to be required to have a physical presence in Australia.
“The committee recommends an approach that favours transparency over censorship,” the report says, adding that platforms should be required “to take reasonable steps to inform Australians about the origin, and, where possible, the intent of the content they are exposed to so they can make up their own minds about its merits”.
In additional remarks, Labor senators endorsed the direction of the government’s bill. The Greens called for policies to minimise unnecessary data retention – a hacking risk – and opposed the ban on TikTok and others as “a game of digital whack-a-mole”.
The whole committee called for moves to strengthen “independent”, “professional” media in non-English-speaking diasporas, to ensure their communities received accurate information. It made no specific recommendations on the role of traditional English-language media.
The report argues the voluntary code administered by tech industry association DIGI has not been effective enough.
In its evidence, YouTube reported that in the first three months of this year it had terminated more than 18,000 channels linked to China and 900 linked to Russia.
Facebook’s parent company, Meta, reported that between 2017 and 2022 it had disabled more than 200 covert operations, originating from more than 60 countries, which targeted other countries’ domestic public debate.
Also the owner of Instagram and the new Twitter-rival platform Threads, Meta pointed to its action in blocking material deemed to breach existing Australian law. In its public “transparency center”, not yet updated for 2023, Meta reported that in the first half of 2022 it had blocked access in Australia to 182 items. Of those, 88 related to providing academic cheating services, 52 were responding to reports of unsafe health products, 24 involved alleged violations of electoral law, 13 were in response to court orders and four were prompted by private reports of defamation.
Meta has also begun publishing what it calls “system cards” explaining how its artificial intelligence algorithms promote content to users based on whether other users have responded positively or negatively to it. Its AI tools assess responses both from accounts connected to a user and from those that are not, with the combined volume of likes and dislikes helping to propel or suppress posts in other users’ feeds.
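To make that mechanism concrete, the following is a minimal, hypothetical sketch of engagement-weighted ranking of the general kind the system cards describe. The function names, weights and data structures are illustrative assumptions for this article, not Meta’s actual implementation.

```python
# Hypothetical sketch of engagement-weighted content ranking.
# All names, weights and structures here are illustrative assumptions,
# not Meta's actual algorithm.

from dataclasses import dataclass


@dataclass
class Reaction:
    from_connected_account: bool  # reaction came from an account linked to the viewer
    positive: bool                # like/share vs. hide/dislike/report


def rank_score(reactions: list[Reaction],
               connected_weight: float = 2.0,
               unconnected_weight: float = 1.0) -> float:
    """Combine positive and negative signals into a single score.

    Positive reactions push a post up the feed; negative reactions
    suppress it. Reactions from accounts connected to the viewer are
    weighted more heavily (an assumed design choice for illustration).
    """
    score = 0.0
    for r in reactions:
        weight = connected_weight if r.from_connected_account else unconnected_weight
        score += weight if r.positive else -weight
    return score


# Example: two likes from connected accounts outweigh one negative
# signal from a stranger, so the post would be promoted.
example = [Reaction(True, True), Reaction(True, True), Reaction(False, False)]
print(rank_score(example))  # 3.0
```

The point of the sketch is simply that such a score is self-reinforcing: posts that attract early positive reactions are shown more widely, which gives them more chances to attract further reactions.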
The committee members argue these kinds of self-reinforcing systems are why greater vigilance is required against both automated accounts and malign actors who control large numbers of accounts, or highly influential ones, with the aim of spreading some messages widely and suppressing others.
Committee chair and Liberal senator James Paterson says the report proposes that platforms take greater responsibility for identifying and labelling propaganda and other manipulative material, especially linked to foreign authoritarian states.
“It ensures that consumers are informed about the source of their information and the platform they’re accessing it on, without in any way censoring their own speech,” Paterson says. “And we know that some of these practices, when they have been in place in the past on platforms, have helped improve the quality and reliability of information available.”
This article was first published in the print edition of The Saturday Paper on August 5, 2023 as "Social experiments".