Microsoft Responds to Google’s Complaints

Sometimes the squeaky wheel really does get the grease. After Google lodged formal complaints about the anti-competitive nature of Microsoft's desktop search tool in Vista, Microsoft announced Wednesday that it will change the feature in response.

Microsoft will let PC users and manufacturers select third-party applications for their desktop searches, and provide Google with information that will help the company optimize its own search application. The software giant is obligated to help competing companies develop smooth-running software for its Windows operating system under a 2002 antitrust ruling.

Google’s legal team wasn’t completely satisfied with Microsoft’s concessions. “These remedies are a step in the right direction,” said Google’s chief legal officer, David Drummond, in a statement. “But they should be improved further to give consumers greater access to alternate desktop search providers.”

Google is no stranger to outside criticism itself. The company came under fire two weeks ago when a watchdog group ranked it last for privacy among top Web companies; Google subsequently cut its data retention period from 24 months to 18 months.

Microsoft's fix for desktop search will find its way into the next service pack for Vista, which is expected before the end of the year, possibly in beta form.

Nick Mokey