
France fines Google over Street View privacy violations

France’s data privacy regulator has announced that it has fined Google $142,000 (€100,000) for unauthorized data collection carried out by the company’s controversial Street View photo-mapping project.

The Commission nationale de l’informatique et des libertés (CNIL) informed Google in May 2010 that it would face penalties if it did not stop the data collection and hand over the information it had gathered. CNIL said today that Google had failed to comply with its demands and would therefore be subject to a fine.

The Street View project aims to add panoramic photos to Google’s mapping service, Google Maps. Last year, it was revealed that Street View vehicles had collected personal information from unencrypted Wi-Fi networks. Several countries, including the U.S. and the U.K., announced investigations into the breaches. Google has since apologized and has maintained throughout that the data collection was accidental.

And while Google has stopped siphoning Wi-Fi data directly, CNIL alleges that the company has resorted to collecting data about Wi-Fi access points through smartphones running Google Latitude, a geo-social app that allows users to register and share their locations. CNIL claims that the Internet giant has failed to inform Latitude users about the practice — another reason behind its decision to fine the company.

Google is free to appeal the fine, but hasn’t yet indicated if it will do so.
