Google Yanks Street-Level Images for DOD

Maybe “all your base are belong to us,” but, according to the Department of Defense, not all their bases belong to Google. Or at least, not pictures of them: at the DOD’s request, Google is removing some images from its Street View street-level mapping service because the pictures could pose a security risk to U.S. military bases.

Street View is a feature of Google Maps that currently offers ground-level views of streets and areas in 30 U.S. cities. In some cases, users can experience full 360-degree views of streets, enabling them to get a sense of an area’s topography, development, signage, scenery, and even “vibe” that is otherwise unavailable from maps. But Street View has also engendered controversy over the images captured by Google’s photography teams: individuals who appear in some of Street View’s photos have asked that their images be removed, and, although the feature hasn’t been introduced in Canada, privacy laws there may bar Google from publishing identifiable images of people.

The Department of Defense’s concern is that images of military installations show the positions of guards, how vehicle barriers operate, and the locations of building entrances. In theory, such information could pose a security risk or be used to stage an attack on the bases.

Google says it photographs only from publicly accessible streets and roads, so it’s not clear that, in the United States, the Defense Department has any legal basis to require that Google remove the images.

The popular Google Earth application has also been criticized for offering images of potentially sensitive locations. Google Earth’s images are sourced from civilian releases of satellite maps and commercial satellite mapping services.

Geoff Duncan