Google greases the gears of government with $5.16M on lobbyists

According to the Lobbying Disclosure Act Database, Google spent $5.16 million on lobbyists in 2010 – a 28 percent increase over 2009. Rightly so: Google has had its hands full this year attempting to influence the right authorities. Between its privacy and online-tracking battles and the net neutrality ruling that has Internet companies on edge, it’s been an important year for Google to get its foot in the door with policy makers.

The drastic increase is also due in part to Google’s expansion. The Internet titan first hired lobbyists in 2006, when it was little more than a search engine. Now its repertoire has grown to include mobile phones, VoIP, a smartphone OS, Places…the list of Google products is never-ending. The amount spent on lobbying government officials reflects this growth. Its number of acquisitions has notably skyrocketed as well, which hasn’t gone unnoticed by the Department of Justice (Google’s purchase of ITA Software has yet to be approved by the government). The higher its aspirations, the more it has to shell out to protect them.

For comparison, Google spent more than Apple (which shelled out $1.61 million) but less than Microsoft, which spent a monstrous $6.91 million on lobbyists – and it shows: Microsoft brought in nearly 50 percent more revenue than Google in the fourth quarter.

Molly McHugh
Former Digital Trends Contributor