
Google denies claim that it’s tracking internet users when incognito mode is on


Google is disputing a study released by competing privacy-focused search engine DuckDuckGo that claims the search giant is tracking users across the internet in order to deliver personalized search results, even when incognito mode is enabled. DuckDuckGo claimed that Google is using personal information — ranging from search and browsing history to online purchases — to tailor search results in what the competitor calls “Google’s filter bubble.”

“These editorialized results are informed by the personal information Google has on you (like your search, browsing, and purchase history), and puts you in a bubble based on what Google’s algorithms think you’re most likely to click on,” DuckDuckGo said in a blog post outlining its privacy research study. As part of its findings, DuckDuckGo noted that participants saw search results that were unique to them. Moreover, enabling private browsing mode did little to affect how search results could vary from user to user. “Private browsing mode and being logged out of Google offered very little filter bubble protection. These tactics simply do not provide the anonymity most people expect. In fact, it’s simply not possible to use Google search and avoid its filter bubble.

“We saw that when randomly comparing people’s private modes to each other, there was more than double the variation than when comparing someone’s private mode to their normal mode,” DuckDuckGo continued. “We often hear of confusion that private browsing mode enables anonymity on the web, but this finding demonstrates that Google tailors search results regardless of browsing mode.”

Of particular concern, DuckDuckGo highlighted, are the results delivered when users search for political topics. The company cited a 2012 Wall Street Journal article claiming that Google “often customizes the results of people who have recently searched for ‘Obama’ — but not those who have recently searched for ‘Romney,'” and explored how such tailored results may have influenced that year’s presidential election. The influence of technology companies on politics has been called into question in recent months, and Google faced a legislative inquiry after President Donald Trump accused the search engine of using its position to censor conservative voices, a charge the company continues to vehemently deny.

After 9to5Google covered DuckDuckGo’s privacy research, Google disputed the study, saying the research methods were flawed. “This study’s methodology and conclusions are flawed since they are based on the assumption that any difference in search results are based on personalization,” Google retorted. “That is simply not true. In fact, there are a number of factors that can lead to slight differences, including time and location, which this study doesn’t appear to have controlled for effectively.”

This isn’t the first time Google has come under fire over privacy confusion related to incognito mode. When Google launched its updated Chrome 69 browser, the company faced backlash after researchers pointed out that cookies may still allow Google to follow you around the internet even when private browsing mode is enabled, a claim that Google disputed.

Chuong Nguyen