Google Maps Street View Illegal In Canada?

In a pre-emptive strike, Canada’s Privacy Commissioner has warned Google that if it expands the Street View feature of Google Maps to the country, it could be breaking a law on individual privacy.

So far, Google has offered the tool in maps of just nine US cities. It provides 360-degree views at street level, including people informally captured on camera who can easily be recognized.

The Commissioner, Jennifer Stoddart, wrote to Google last month asking for clarification of its intent. The pictures of individuals are clear enough to be considered personal information, and under Canadian law, businesses disclosing personal information about individuals need their consent first. Immersive Media Corp., which produced the images, says it has street-level pictures of Canadian cities.

"The images … appear to have been collected largely without the consent and knowledge of the individuals who appear in the images," wrote Stoddart. "I am concerned that, if the Street View application were deployed in Canada, it might not comply with our federal privacy legislation. In particular, it does not appear to meet the basic requirements of (the law)."

People could ask for their image to be removed from Street View, but Stoddart said this opt-out approach would not satisfy Canada’s 2004 personal information protection act.

Digital Trends Staff