Google’s defence of a popular YouTuber’s blatantly racist and defamatory content has cost it dearly, both in money and reputation. By Hannah Marshall.
Why did Google try to defend Friendlyjordies’ Barilaro videos?
This week’s Barilaro v Google decision is a dissertation on everything Google did wrong in its defence of former New South Wales deputy premier John Barilaro’s defamation case. It reveals a stark divide between the shiny, trustworthy exterior that Google sells us and its internal machine, which favoured profit over policy to hopelessly defend racist and defamatory content on its video platform YouTube.
Friendlyjordies, also known as Jordan Shanks, published a series of videos on YouTube about John Barilaro. Shanks accused the politician of corruption, perjury and adultery, and mocked his Italian heritage in a prolonged, calculated attack. He called Barilaro a “wog”, “greasy”, a “greasy little scrotum” and a “meatball”, and claimed, “I really like the thought of that man being upset.” Barilaro sued both Shanks and Google for defamation. The claim against Google was based on it providing the platform on which the videos were published. Barilaro claimed Google became liable after it became aware of the defamatory content and failed to remove it.
A lot of other narratives run around the edges of this case. The concerning regularity of politicians bringing defamation actions is one. The impact of those cases on public debate about political figures is another. In this instance, those concerns were heightened when the NSW Police Force’s Fixated Persons Investigations Unit arrested friendlyjordies producer Kristo Langker for allegedly stalking Barilaro, although it dropped those charges shortly before the Google trial began.
The focus of this judgement, however, is Google’s conduct. The videos were indefensibly defamatory and filled with content that very clearly violated Google’s own policies. The real question was why Google sought to defend them in the first place.
The Barilaro v Google decision doesn’t make new law. It barely even tests existing defamation laws. By the start of the trial, Google had given up all its defences. It was not arguing qualified privilege and it had dropped its public interest defence; it was only waiting to hear how much it should pay Barilaro in damages.
The decision comes at a critical time for Google. Regulators globally are trying to figure out how to regulate digital platforms. One of the key concerns is the role these platforms play in disseminating news, misinformation and hate speech. They are near-ubiquitous and they have absolute power to control the information we receive.
The standard line of defence by Google and other platforms is that they are doing their best. They have policies against bad content, which they invest heavily in enforcing. Sundar Pichai, the chief executive of Google’s parent company, Alphabet, told the United States Congress last year that “we strive to have clear and transparent policies and enforce them without regard to political party or point of view”. The YouTube website claims that, “At the heart of our approach are the four Rs – we Remove content that violates our policies, Reduce the spread of harmful misinformation and borderline material, Raise up authoritative sources for news and information and Reward trusted creators.” Trust us, Google is saying, we’re committed to looking after you.
In Australia, a big question about digital platforms is defamation: to what extent should Google be liable for the content that YouTube hosts? A law reform process is under way that is looking at precisely this question. At the moment the law is unsettled, but roughly speaking the court cases say the platform can be liable once it knows about the defamatory content and doesn’t take it down. The proposal is that defamation legislation should more clearly address internet intermediary liability. When and to what extent platforms such as YouTube should be liable for user content remains an open question.
How we legislate for platform liability in defamation will necessarily reflect the role we perceive platforms as playing in the dissemination of information, defamatory or otherwise. It’s artificial to regard them as mere conduits with no interest in the content that their users post. YouTube places ads in that content and the more people view it, the more ads YouTube sells. As YouTube’s owner, Google has a commercial interest in maximising its audience. If a video is defamatory but receiving high views, then Google’s commercial interests favour it staying up.
In May 2021 Google lodged a submission with the NSW government on the question of defamation reform. It argued that the real fight in a defamation action must be between the content originator and the person allegedly defamed. It proposed a safe-harbour system in which an internet intermediary could only be liable for user content when a plaintiff could not identify the original author – and after the platform had received notice of the defamatory content and failed to take it down.
The thrust of these arguments is that the platform itself has no meaningful part to play in the defamation. It is an innocent bystander. This rings hollow once you look at Google’s role in the Barilaro case, which was playing out at exactly the same time, and in which it doggedly refused to take down the friendlyjordies videos even after Barilaro had settled with Shanks. In fact, it held this position right up until the final hearing, by which time Google had abandoned all of its defences.
Google was liable in the first place because it didn’t take down the defamatory YouTube content after Barilaro complained about it. It was seriously defamatory content and would have been subject to a large damages award based on that alone, approaching the statutory cap of about $400,000. But then Google had to pay extra, by way of aggravated damages, because of its poor behaviour in conducting the case. That brought the total up to $675,000 plus $40,000 interest.
Aggravated damages aren’t awarded in many cases. You have to go a long way beyond mounting a reasonable defence to justify them. The judge here was utterly scathing of Google’s conduct. The heart of the issue was the vast divergence between what Google says it will do in its policies and the way it behaved in responding to Barilaro’s complaint. This manifested in several ways.
The judge saw friendlyjordies’ videos as “replete with racist, hate-filled rants that were calculated to bully and publicly hound” Barilaro. Google says it will take down malicious insults and racial slurs, even if used in satire or comedy. For clarity, Google says “if the video is dedicated to repeating slurs over and over, it likely crosses the comedic line we set”.
The judge saw no plausible basis to defend the videos against these policies. The only plausible explanation for Google leaving them up was a clear preference for its commercial interest in friendlyjordies’ popularity. By choosing to leave them up, Google facilitated Shanks’s “vitriolic, obsessional, hate-filled” campaign against Barilaro.
After Barilaro settled with Shanks, and as the Google case approached trial, friendlyjordies posted a series of further videos. They accused Barilaro’s lawyers of dishonesty and incompetence, with a palpable undertone of intimidation. Soon after, Google made a settlement offer, the terms of which were not in evidence. Google refused to remove the videos, which the judge regarded as an intentional act to pressure Barilaro towards a settlement, and as a possible contempt of court.
Google’s approach to its defence of the case showed a similar degree of antagonism. It adopted untenable positions, arguing against the imputations arising and relying on defences that were hopeless and that it progressively abandoned.
The legal consequence of Google’s response was the award of aggravated damages. That’s a serious indictment of a global business that is trying to convince the world it is a responsible custodian of the vast power it holds.
Stripped back, the case exposes the inescapable conflict that plagues the digital platforms. They claim they can self-regulate and protect their community of users from hate speech, defamation and other evils. They make rules about what content they will not allow. But content removal is at odds with their commercial interests. If content is popular, as is friendlyjordies’, then the platform has no commercial interest in taking it down.
This case will hurt Google’s credibility in its campaign to reduce its liability for user content. It will embolden plaintiffs. And it will demonstrate to legislators that Google cannot be relied on to safely regulate its own communities.
This article was first published in the print edition of The Saturday Paper on June 11, 2022 as "Down the (You)Tube".