FCC investigates Google over Street View

The FCC announced this week that it was conducting its own probe into Google’s Street View project to determine if any privacy laws were breached when Street View vehicles collected personal information from unprotected Wi-Fi networks while photographing locations. Some of the data collected included account information, passwords, and e-mails.

Google’s Street View project launched in 2007 with the aim of adding street-level photographs to its Internet map service. The project has since expanded across the US and to around 30 other countries.

Earlier this month, UK regulators found that Google had violated privacy laws when it scooped up Wi-Fi data, but declined to impose any fines on the company so long as it promised not to repeat the offense. Canada’s government has also found that Street View violated privacy laws. In the US, the FTC announced two weeks ago that it had concluded its own investigation and decided not to take any action against the company.

Amid privacy concerns, nearly a quarter of a million households in Germany opted out of the project in advance of its launch. Google has blurred out those properties in its imagery.

Yesterday, Google issued an apologetic statement, as it has in the past when Street View has come under governmental scrutiny. “As we have said before, we are profoundly sorry for having mistakenly collected payload data from unencrypted networks,” Google said. “As soon as we realized what had happened, we stopped collecting all Wi-Fi data from our Street View cars and immediately informed the authorities.”

“As we assured the F.T.C., which has closed its inquiry, we did not want and have never used the payload data in any of our products and services,” Google said. “We want to delete the data as soon as possible and will continue to work with the authorities to determine the best way forward, as well as to answer their further questions and concerns.”

The FCC’s latest investigation is another Street View headache for Google, which has always maintained that the data siphon was accidental.

Aemon Malone
Former Digital Trends Contributor