Microsoft Shortens Bing Data Retention to Six Months

Microsoft has announced it will begin deleting potentially identifying information associated with queries through its Bing search engine after six months, rather than the 18-month retention window the company had been using. The move brings Bing into line with current data retention policies at other major Internet search operations, and it also goes a long way toward appeasing the concerns of privacy advocates, as well as the Article 29 Working Party, which advises European Union regulators on privacy issues.

“This new and significant step will be incorporated into our existing privacy practices, which already provide strong protections for Bing users,” said Microsoft’s Chief Privacy Strategist Peter Cullen, in a statement.

According to Microsoft, Bing already takes steps to “de-identify” search queries as soon as it receives them, separating the queries from account information (such as a Hotmail or Windows Live account) used to perform the search. Now, after six months, Microsoft will also delete the user’s IP address from its records about the query. However, Microsoft still plans to hold on to de-identified cookie information (which can be used to track search sessions) and any cross-session IDs and tracking information associated with the search for 18 months.
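For illustration only, the sketch below models the retention schedule described above as a simple data-handling routine: account details stripped on receipt, the IP address dropped after roughly six months, and cookie and cross-session identifiers dropped after roughly 18 months. The names and structure here are hypothetical and do not reflect Microsoft's actual systems.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical illustration of the retention windows described in the article.
IP_RETENTION = timedelta(days=182)       # ~6 months: IP address deleted
COOKIE_RETENTION = timedelta(days=548)   # ~18 months: cookie / cross-session IDs deleted

@dataclass
class QueryRecord:
    query_text: str
    received_at: datetime
    ip_address: Optional[str]
    cookie_id: Optional[str]
    cross_session_id: Optional[str]
    account_id: Optional[str] = None     # stripped immediately ("de-identified")

def deidentify_on_receipt(record: QueryRecord) -> QueryRecord:
    """Separate the query from account information as soon as it arrives."""
    record.account_id = None
    return record

def apply_retention(record: QueryRecord, now: datetime) -> QueryRecord:
    """Drop identifiers whose retention window has elapsed."""
    age = now - record.received_at
    if age >= IP_RETENTION:
        record.ip_address = None
    if age >= COOKIE_RETENTION:
        record.cookie_id = None
        record.cross_session_id = None
    return record
```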

Google currently anonymizes search data after nine months; a little over a year ago, Yahoo jumped in front of the pack, announcing it would anonymize search data after just 90 days.

Search engine companies like to retain identifiers with search queries so they can analyze how their search engines are being used, monitor for and protect against fraud, and evaluate how their advertising businesses are doing. Privacy advocates warn that retaining personally identifiable information with search queries can lead to abuses, particularly in an age where online identity theft is a major concern.
