
If Google is telling us what we want to know rather than surveying what is known, are its online search algorithms expanding our minds or reinforcing our biases? By Wendy Zukerman.

Google's searches narrowing our experience

Google’s EMEA Engineering Hub offices in Zurich, Switzerland.
Credit: Peter Wurmli

Surrounded by rolling hills and the achingly blue waters of Lake Michigan, Eric Schmidt, then Google's chief executive, put on a sharp black suit and told a room of US politicians how his company had bewitched the world.

“Google is really built around aha moments,” Schmidt said at the National Governors Association meeting in 2007. He was talking about those dazzling scenarios when you give Google almost nothing to work with, such as entering “pizza” into its query box, and the online search engine sifts through one trillion pages to deliver the phone number of your local pizzeria. It’s those aha moments, he said, that have created a remarkable and unprecedented trust in the company.

These days, Google probably knows more about its users than any company in history. Receiving 100 billion queries each month, it’s now worth $US395 billion.

But a few weeks ago, the European Court of Justice showed some disenchantment with the search engine. It ruled that European users now have the right to demand that Google remove links from its results pages that are “inadequate, irrelevant or no longer relevant”.

Commentators largely bashed the judgement – calling it a blight on freedom of speech and a move that could allow people to rewrite history. These arguments show that while the European court is no longer hypnotised by Google, many still are. But this is not the first time Google’s power over information has been questioned.

In 2011, concerns emerged that the search giant was inadvertently distorting information. The fear, sparked by US internet activist Eli Pariser, was that in the company’s mission to “give you exactly the information you want, right when you want it”, Google was ignoring a more difficult goal: to give you the unbiased information that you need. Pariser argued that through its search algorithms, which are largely kept secret, Google was creating so-called “filter bubbles” – spheres where users would only be fed information that they liked, rather than accurate results.

From the beginning, Google engineers shook their heads at the accusations. “Our algorithms are tremendously balanced to give you a mix of what you want, and what the world says you should at least know,” said senior Google engineer Dr Amit Singhal in 2011. The company line has stuck. A Google spokesperson told me: “Our engineers are very aware of the need for balance in providing search results.” But, Sriram Sankar, an engineer who has worked on Google’s search algorithm and is now at LinkedIn, says, “The filter bubble effect does tend to take place.”  

Key to the problem is the flourishing field of personalised search results, which Google started in the late noughties. “Knowing about you, in particular, can be our most valuable tool in delivering the results you actually want,” said a bright-faced Googlite in a 2007 official video. At first Google only tracked its users’ locations, which could be easily detected through IP addresses and domain names. But soon the search giant was logging past search histories and websites visited. More recently, Google has been scraping its social network, Google+, to capture who its users know, in a bid to better understand what they want. 

In many realms, Google’s personalisation is very useful. “Who wants to find out about a pizza parlour 1000 miles away?” says David Lazer, a political scientist at Northeastern University in Boston. Tracking your personal history can also make it easier for Google to know if you’re interested in fish or guitars, when you simply type “bass” into the search field. 

Yet, when it comes to politics, or any polarising issues, personalisation has the potential to create an invisible echo chamber. “We are at risk of becoming a global community where we only find facts that align with our own opinions,” says Lazer. 

While algorithms may spit out “relevant” results to a pepperoni fan about their local stores, they may also insulate us from opinions and information contrary to our personal understanding and beliefs. The algorithms may prevent us widening our understanding of important issues. For example, a person sceptical of climate change or vaccination might initially click on links supporting their preconceived ideas. Google’s algorithms will tally that behaviour, so that when the person later searches on “HPV” to learn more about it, anti-vaccination sites will rank higher in their results.
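The feedback loop described above can be sketched in a few lines of code. To be clear, this is a toy illustration, not Google’s actual algorithm: the site names, relevance scores and logarithmic click boost are all invented for the example, but they show how repeatedly clicking one kind of site can float it to the top of a personalised ranking.

```python
from collections import Counter
from math import log

# Invented base relevance scores for a search on "HPV".
# Sites, numbers and weighting are illustrative only.
results = {
    "health-agency.example":  1.00,
    "news-site.example":      0.90,
    "anti-vax-blog.example":  0.80,
}

clicks = Counter()  # the user's past click history, per site

def ranked(results, clicks, weight=0.3):
    """Order results by base relevance plus a personal-history boost."""
    score = lambda site: results[site] + weight * log(1 + clicks[site])
    return sorted(results, key=score, reverse=True)

print(ranked(results, clicks))        # no history yet: ordered by relevance
clicks["anti-vax-blog.example"] += 5  # the user keeps clicking one site...
print(ranked(results, clicks))        # ...which now outranks the rest
```

After five clicks, the boost (0.3 × log 6 ≈ 0.54) is enough to lift the lowest-relevance site above the health agency – the invisible tally doing exactly what the paragraph above describes.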

The issue is compounded by ignorance of how personalisation works, says Ancsa Hannak, a colleague of Lazer’s at Northeastern. She says users don’t realise that personalisation algorithms are altering the information they see, and think the website is providing some sort of “objective truth”. In fact, providing the “objective truth” was never Google’s intention.

The search monolith’s story begins in the mid-1990s when an incredibly bright and shy grad student arrived at Stanford University looking for a thesis topic. The kid, Larry Page, was drawn to the embryonic world wide web. Page noticed that while it was easy to follow hyperlinks from one site to another, it was tricky to work backwards and know what had linked to the site he was currently on. Page found this frustrating and set about finding a way to track the so-called “backlinks” of every site on the web.

Sergey Brin, a fellow Stanford student, cottoned on to the project and found it enticing. At the time, there were already millions of pages on the web and connecting their dots would be remarkable. When the algorithm was completed, the team quickly realised they had created a revelatory tool. 

A curious by-product of their algorithm was that it ranked sites by their importance. For example, websites with many links flowing towards them were generally classed as more important – and useful – than a loner’s blog post. Similarly, when an important site linked to another page it suggested that site was valuable, too, like a professor giving a nod to a colleague. Page and Brin figured their system, dubbed “PageRank”, could rank websites better than the main players in internet search around at the time, Yahoo, AltaVista and Excite. And it did.
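The idea is simple enough to sketch. Below is a toy version of the published PageRank scheme, run on four hypothetical pages (the names and link structure are invented): each page repeatedly shares its rank among the pages it links to, and a heavily linked-to “hub” ends up outranking a lone blog.

```python
# Toy PageRank: rank pages by link structure alone.
# Four hypothetical pages; "hub" is linked to by everyone.
links = {
    "hub":  ["blog"],
    "blog": ["hub"],
    "a":    ["hub"],
    "b":    ["hub", "blog"],
}

def pagerank(links, damping=0.85, iterations=50):
    n = len(links)
    ranks = {page: 1.0 / n for page in links}   # start everyone equal
    for _ in range(iterations):
        # each page keeps a small baseline, then receives shares
        # of rank from every page that links to it
        new = {page: (1 - damping) / n for page in links}
        for page, outlinks in links.items():
            share = ranks[page] / len(outlinks)
            for target in outlinks:
                new[target] += damping * share
        ranks = new
    return ranks

ranks = pagerank(links)
print(sorted(ranks, key=ranks.get, reverse=True))  # "hub" comes out on top
```

The professor-nodding-to-a-colleague effect is visible in the numbers: pages with no backlinks at all (“a” and “b”) sit at the baseline, while the pages everyone points to accumulate nearly all the rank.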

In 1998, the team published its results, which it described as “audacious”. Even then, the team predicted the slanted world of personalisation. “We can generate personalised PageRanks which can create a view of the web from a particular perspective,” they wrote. Later that year Google was incorporated and six years later Brin announced in a TED Talk that Google was being used “everywhere there is power”.

Much of the unease surrounding Google’s ability to skew results could be dispelled if the search giant were more open about what its personalisation algorithms are doing. According to computer scientist Hannak, if Google explained what percentage of results are being personalised, and on what basis, we could get a better picture of whether filter bubbles really are a problem. Google, however, refuses to provide this information, leaving independent academics struggling to understand how slanted the company’s results are.

This year, a study by Xinyu Xing at Georgia Institute of Technology in Atlanta estimated that three out of the 10 results on the first page of a typical Google query are affected by personalisation. But there’s much debate. Previous studies, including one by Lazer and Hannak, calculated that personalisation could affect as few as one result in the top 10, or as many as six.

In theory, six could be significant enough to start influencing opinions, but it depends on why those results are different. For example, Xing’s study estimated that most of the changes were related to a user’s location – such as providing information about pizza from a local store. That suggests fears of bias may be overplayed. But Lazer told me he has found evidence of personalisation in politics, which was “troubling”. 

“I’m not sure this has proven quite as bad as some worried,” says Ben Edelman, an internet scholar at Harvard Business School. Edelman points out that while the results are slanted, at least there are alternative perspectives. “Does a user on Google News receive less balanced information than a user watching Fox News?” he says. “I’d worry more about the latter.” 

While Google’s bias is more subtle, it’s also less visible. Someone choosing to watch Fox, for example, often knows there are other views, but decides not to engage with them. According to Martin Feuz at Zurich University of the Arts, when it comes to Google’s biases a user doesn’t know what information they’re missing out on. We’re in the “spectre of the ‘unknown unknowns’ ”, he says. 

As Google gets better at personalising data, it will more efficiently exploit our individual browsing histories, interests and social relationships. We are likely to be fed more and more information that reflects our existing biases. Our filter bubbles are set to become more defined and less permeable. How the European Court of Justice’s decision will affect Google’s goal of giving you “exactly the information you want” will remain uncertain for some time, but the judgement is a timely reminder that Google’s results are not gospel. They are merely the output of computer algorithms – and sometimes they are inadequate, or irrelevant.

This article was first published in the print edition of The Saturday Paper on Jun 7, 2014 as "Unknown unknowns".

Wendy Zukerman
is a science journalist and host of the Science Vs. podcast.
