Facebook and data harvesting
The Cambridge Analytica data scandal has all the elements of a viral Facebook post: Trump, “honey trap” sex workers, fake news, secret recordings and dirty tricks to steal elections. The only difference is that Facebook doesn’t want you to share this story.
As revelations continued this week into how a survey app harvested data from 50 million Facebook users to boost Donald Trump’s election campaign, Facebook is fast losing friends. Governments are floating regulations that could curb advertising revenue, investors have wiped off more than $US75 billion of the company’s value, and indignant users are threatening to #DeleteFacebook. Angry emojis abound.
But for social media marketers and insiders familiar with using Facebook advertising for political campaigns, a more common response is the verbal shrug “meh”. Almost every aspect of the scandal was previously reported or was a predictable outcome of Facebook’s business model. So why did outrage boil over this time?
Was it the perceived lack of consent? In 2013 a researcher named Aleksandr Kogan created an app called “thisisyourdigitallife” and paid American citizens to complete its survey. The catch was they needed to log in to Facebook, granting access not only to their own profile data but also to that of their friends. These loose rules remained in place until 2015. A Facebook employee responsible for policing third-party data breaches told The Guardian “tens or maybe even hundreds of thousands of developers” had taken advantage, but there was hardly a blip of controversy at the time.
Is it the mention of personality profiling? Kogan copied an earlier quiz app called “myPersonality” created by psychology student David Stillwell, now a lecturer in big data analytics at Cambridge University. Stillwell’s app went viral, collecting more than six million test results and four million Facebook profiles. Researchers then showed how these Facebook Likes could accurately estimate people’s private attributes, such as political preference, sexual orientation and personality.
That database still exists and has been used by Australian academics. Last year, Stillwell and others published a paper showing how they’d used it to target more than 3.5 million people with messages tailored to their personality traits, resulting in up to 40 per cent more clicks and 50 per cent more purchases of various commercial products.
Stillwell declined an interview, but tweeted that lots of digital footprints, not just Facebook Likes, can be used to predict psychology. IBM has a personality insights tool based on analysing text, advertisers can use web browsing information, and Google and YouTube have search term data. “All can be used to predict psychology and then do microtargeting,” he tweeted.
“Many, many companies already do what Cambridge Analytica are doing with people’s personal data,” psychologist Dr Michal Kosinski, probably the world’s leading expert on using Facebook Likes to predict personality traits, told The Guardian. “They just didn’t boast about it, and weren’t hired by Trump.”
Are people upset that data harvesting gave an unfair advantage to one side of politics, hijacking democracy?
That’s been going on for years. After the 2012 US election, Obama campaign manager Jim Messina boasted about inventing “targeted sharing” on Facebook to persuade people. They got six million people to log in to Facebook through an app and showed them a Michelle Obama video. “At the end of the 20 seconds we had matched our data with their data, and we gave them five of their best friends who were undecided voters, and said ‘click here to send them a video, click here to send them information’. Of those people, 78 per cent voted for Barack Obama.”
Messina has since claimed they had informed consent while Cambridge Analytica “obtained their data fraudulently”. But the point remains: elections have become a technological arms race, and both sides use increasingly sophisticated and invasive targeting tools to win.
What about the intrusion of billionaire American ideologues in our local politics? Far-right United States hedge fund billionaire Robert Mercer financed Cambridge Analytica’s start-up costs with $US15 million, so when the company visited Australia in 2017, there were concerns about wealthy Republican supporters exporting their culture war Down Under.
Both the major parties have now distanced themselves from Cambridge Analytica, but in the recent South Australian election the Liberal Party used the voter database platform i360, which is owned by the Koch brothers, the notorious far-right US billionaires.
SA Liberal Party state director Sascha Meldrum told The Saturday Paper: “i360 has confirmed that its product has no similarity to Cambridge’s business practices. i360 does not use social media data or any other data sources that are questionable for these purposes.” She would not confirm any other details of the platform.
Is it the sheer amount of information on people that’s worrying? Cambridge Analytica’s website claims to have “up to 5000 data points on 230 million US voters”, much of which would have come from commercial sources. Commercial data is scarcer in Australia, but big data companies such as Acxiom, Experian and Quantium do operate here, and they connect directly with Facebook through “partner categories”. You can access their data from within the social media platform and Facebook takes a percentage of the advertising spend.
Besides, political parties in Australia already have extensive databases of information about voters, which are far more useful for election campaigning than third-party commercial data. And if they want to know the hot-button issues or voting preferences in marginal electorates, the seats are small enough to canvass people individually – whether through telephone polling, robocalls or SMS.
David Vaile, chair of the Australian Privacy Foundation, says political parties have granted themselves an exemption from the Privacy Act, Spam Act and Do Not Call Register under the rubric of freedom of political communication. We don’t know what information they have on us, there’s no legal obligation for them to ensure it’s accurate, and in effect it’s impossible to opt out or stop them.
What is Facebook doing in response? A spokesperson told The Saturday Paper: “We will tell people affected by apps that have misused their data. This includes building a way for people to know if their data might have been accessed via ‘thisisyourdigitallife’. Moving forward, if we remove an app for misusing data, we will tell everyone who used it.”
However, a software expert familiar with using Facebook for research, Dr David Glance from the University of Western Australia, says even restricting data access by third-party apps won’t stop unscrupulous operators. It’s possible to “scrape” public Facebook profiles to obtain the kind of information Cambridge Analytica harvested – friends, Likes, gender, relationships, posts and comments. Doing so breaches Facebook’s terms of service, but, as we’ve seen, that won’t deter mercenary firms.
Other academics worry that Facebook’s response will make it harder to conduct genuine, ethical research using the site’s rich trove of data.
“If it’s only Facebook that has the ability to analyse the behaviour on their system then that has implications for accountability and transparency, and it also has implications for academic research into the big issues of today like misinformation, political polarisation and fake news,” says Robert Ackland, an online social networks researcher based at the Australian National University.
The fear is that Facebook will further restrict independent outside research while continuing its own internal experiments.
On the day of the 2010 US congressional elections, for example, Facebook conducted a 61-million-person experiment in voter turnout. Some users received a message encouraging them to vote and were shown profile pictures of their friends who’d clicked an “I Voted” button. The experiment increased turnout by an estimated 340,000 votes.
For a week in 2012, Facebook manipulated the news feeds of nearly 700,000 users, tweaking the amount of positive or negative content to see if emotions had a “contagious” effect.
So well before the current scandal, Facebook had already demonstrated that tiny changes to its feed could manipulate moods and shape elections. Psychologists had shown Likes could predict personality, thousands of apps had exploited loopholes to harvest data, companies had collected our intimate details for sale and previous election campaigns had pioneered the science of social media targeting. Cambridge Analytica just pulled all these threads together.
And that’s what recent revelations have done, too – collate all the dark truths we already knew about Facebook into a single moment of revelation. It cut through because a whistleblower finally put a face to allegations that had been doing the rounds for years, but also because it fitted the formula of a viral hit. There was a villain to hate (Cambridge Analytica CEO Alexander Nix), footage to share (secret recordings of him talking about bribes, spies and “honey trap” hookers) and a polarising political conspiracy (the Left’s insistence that Trump’s election win was somehow illegitimate).
The cynical marketers are right. We always suspected Facebook was a voracious machine to monetise our personal data, but we uploaded ourselves anyway, trading privacy for little dopamine hits of peer affirmation. The real question isn’t how this happened, it’s why we knew but didn’t care.
This article was first published in the print edition of The Saturday Paper on Mar 31, 2018 as "Facebook, unmasked".