
EU Opens Antitrust Investigation of Google


The European Commission has launched an investigation into whether Internet giant Google has engaged in unfair trade practices and anticompetitive behavior. The investigation may come to nothing, but in recent years the EC’s competition regulators have given some huge technology companies a rough ride, most notably Microsoft and Intel.

According to the notice the EC gave Google, the commission is responding to complaints about Google’s practices from three firms: the UK price comparison site Foundem, the French legal search engine ejustice.fr, and Ciao from Bing, another price comparison site that was purchased by Microsoft back in mid-2008. The sites apparently believe they are not being ranked fairly in Google’s search results, and that they are being deliberately snubbed so their listings don’t receive the same prominence as Google’s own services and preferred partners.

The irony, of course, is that Microsoft was the subject of the EC’s most involved antitrust proceedings: it owns Ciao, and helps fund an organization called ICOMP, of which Foundem is a member.

Google notes that each case in the inquiry is somewhat different, but the company denies doing anything to reduce competition and says it endeavors to put its users’ interests first. “Our algorithms aim to rank first what people are most likely to find useful and we have nothing against vertical search sites,” Google’s Senior Competition Counsel Julia Holtz noted in a statement. “We are also the first to admit that our search is not perfect, but it’s a very hard computer science problem to crack. Imagine having to rank the 272 million possible results for a popular query like the iPod on a 14 by 12 computer screen in just a few milliseconds. It’s a challenge we face millions of times each day.”

Geoff Duncan
Former Digital Trends Contributor