
Google goes off-road with Street View

Google yesterday posted a batch of new Street View images. But these were not pictures of busy intersections or suburban strip malls. Rather, the images depicted pastoral trails and well-tended gardens. Thanks to the off-road “trike,” Google’s Street View is no longer restricted to avenues and boulevards.

“In 2009 we introduced the Trike, a modified bicycle outfitted with Street View equipment, to visit these locations, from towering castles to picturesque gardens,” said Jeremy Pack, a Google software engineer, in a blog post. “The Trike team has been pedaling around the world, and today we’ve added more of these unique places to Street View in Google Maps.”

Google has traditionally relied on motorized vehicles to capture the 360-degree images that it posts to Street View. The trike, a three-wheeled pedal vehicle reminiscent of an ice cream cart, is outfitted to go places where cars can’t (or would otherwise be unwelcome). The first batch of trike images includes shots from France’s Château de Chenonceaux in Civray-de-Touraine, the National Botanic Gardens in Dublin and the gardens of the San Diego Art Institute in California.

Street View aims to add 360-degree photographs of locations within Google Maps. The program, now active in seven countries, has been mired in controversy of late, after it was revealed that Street View vehicles had hoovered up personal data from Wi-Fi networks. Google apologized and has maintained that the breaches were accidental. But several countries, including the U.S., have launched investigations into Google’s Street View practices. Israel recently said it was working with Google to allow Street View vehicles to photograph its streets in spite of security concerns.

Will Google stir up more controversy as it leaves the street and heads off-road?


Aemon Malone
Former Digital Trends Contributor