
Microsoft wants Facebook user data to improve Bing

Microsoft and Facebook are in talks to strengthen their existing search relationship by using “public data” from Facebook to refine Bing search results, according to a report from AllThingsDigital. Facebook already allows its public status updates to appear on Bing Social, and Bing currently provides global Web search to Facebook, with branded results.

Microsoft is also a longtime investor in the social networking site.

The expanded relationship could include feeding data from Facebook’s “Like” button into Bing. Anonymized data from the button, which has popped up on sites all around the Web, would help personalize Bing’s search results. If you “liked” a particular politician’s Facebook page or website, for example, you might start seeing results skewed toward that political party.

When a user “likes” a page, the user’s Facebook friends are notified. Under this deal, Microsoft would also learn which pages users prefer. Since the data is anonymous, Bing won’t be able to tie your Justin Bieber obsession back to you. With this information, Bing wouldn’t need to rely solely on spiders scouring the Web to discover what people are looking at.
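Neither company has said how such a signal would be wired into ranking, but one plausible shape is a re-ranking step that blends aggregate like counts into an existing relevance score. Below is a minimal sketch of that idea, not Bing’s actual pipeline; the function names, the `like_counts` data, and the blending weight are all hypothetical:

```python
# Hypothetical sketch: blend anonymous, aggregated "Like" counts into
# an existing relevance score. Nothing here reflects Bing's real
# ranking system; names and weights are illustrative only.
import math

def rerank(results, like_counts, social_weight=0.2):
    """Re-order search results by mixing a social signal into the
    base relevance score.

    results     -- list of (url, base_score) pairs from the crawler index
    like_counts -- dict mapping url -> aggregate number of likes
    """
    def blended_score(url, base_score):
        likes = like_counts.get(url, 0)
        # Log-scale the likes so one viral page doesn't swamp relevance.
        social_score = math.log1p(likes)
        return (1 - social_weight) * base_score + social_weight * social_score

    return sorted(results,
                  key=lambda r: blended_score(r[0], r[1]),
                  reverse=True)

# Example: a heavily "liked" page climbs above a slightly more
# relevant but socially quiet one.
results = [("example.com/a", 0.90), ("example.com/b", 0.85)]
likes = {"example.com/b": 12000}
print(rerank(results, likes))
```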

Facebook appears to have learned from past user revolts over privacy: the search relationship will not include any information users have not agreed to make public.

Considering Facebook counted 500 million users at last tally, data from such a large pool would give Bing a boost over Google, since this is data the search giant presumably will not have access to.

Frankly, we don’t think this is entirely a good thing, as users would start finding the information Bing thinks they want rather than an objective set of results. Imagine searching for health information and getting results that Bing has decided are relevant based on its social data, not what you decide is relevant.

Microsoft entered a search-and-advertising agreement with Yahoo in 2009 for similar reasons: to obtain data from other sources and improve its search accuracy. At the moment, the companies are nowhere close to finalizing a deal, according to AllThingsDigital.

Fahmida Y. Rashid
Former Digital Trends Contributor