
Skyhook sues Google over location patents and contracts

Location service vendor Skyhook filed two lawsuits against Google, claiming the search giant infringed its patents and interfered with contracts Skyhook signed with Motorola and another company.

Skyhook said the search giant’s Google Location Service violated four of its location-technology patents. Skyhook’s XPS technology maps the locations of 250 million Wi-Fi access points. Skyhook claims that Google licensed the technology in 2005, then launched its own competing service two years later.

In an interesting twist, the harshly worded lawsuit also accuses Google of interfering with Skyhook’s contracts with Motorola. Skyhook says it signed a deal with Motorola in 2009 to launch a range of Android phones using the Skyhook location service. Google VP Andy Rubin allegedly called Motorola co-chief executive Sanjay Jha and demanded that Skyhook’s service be removed because the technology would make the phones incompatible with Android.

While wordy, the accusation is worth reading in its entirety:
“Google wielded its control over the Android operating system, as well as other Google mobile applications such as Google Maps, to force device manufacturers to use its technology rather than that of Skyhook, to terminate contractual obligations with Skyhook, and to otherwise force device manufacturers to sacrifice superior end user experience with Skyhook by threatening directly or indirectly to deny timely and equal access to evolving versions of the Android operating system and other Google mobile applications,” Skyhook wrote in the complaint.

Google also allegedly interfered with Skyhook’s contract with another company. The dates in the filing indicate the unnamed company is Samsung, which announced a deal with Skyhook around that time.

The two lawsuits were filed in Massachusetts, one in federal court and the other in Suffolk County, where Skyhook is based. Skyhook claims it suffered damages amounting to tens of millions of dollars.

Fahmida Y. Rashid
Former Digital Trends Contributor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
