
Enthusiast blog unearths China’s bizarre ‘street offset’ system

In recent years, digital mapping software has become rather ubiquitous. Google Earth was something of a novelty when it launched back in 2005, but these days it's completely normal to open up your preferred map app on a smartphone to chart your course in the car or on foot.

However, there’s an element of unease associated with our easy access to such maps. You can, after all, use a free piece of software to snoop on any location in the world, often zooming in to a street view for an even more detailed vantage point. And as it turns out, not everyone is too thrilled by that idea.

Enthusiast site the Google Earth Blog recently launched an investigation into last week's explosions in Tianjin, China, and in the process discovered an unusual alignment error in Google's maps of the country. On the mainland side of the border, the street map overlay doesn't quite line up with the satellite imagery, while the streets of Hong Kong are mapped as normal.

The reason behind this discrepancy seems to be a longstanding piece of Chinese legislation. In an effort to stop prying eyes, maps created in China must use the GCJ-02 coordinate system, which applies a variable offset to every coordinate, as opposed to the WGS-84 system used in the rest of the world.
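For a sense of what that offset looks like in practice, here is a minimal Python sketch of the community reverse-engineered approximation of the WGS-84 to GCJ-02 conversion that circulates among mapping developers. The official transform has never been published, so the constants and helper functions below (A, EE, wgs84_to_gcj02, and the example Tianjin coordinates) are illustrative assumptions based on that unofficial algorithm, not anything confirmed by Google or the Google Earth Blog.

import math

# Constants used by the commonly circulated (unofficial) GCJ-02 transform
A = 6378245.0                    # Krasovsky 1940 ellipsoid semi-major axis
EE = 0.00669342162296594323      # eccentricity squared

def _transform_lat(x, y):
    # Pseudo-random, smoothly varying latitude perturbation
    ret = (-100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y
           + 0.1 * x * y + 0.2 * math.sqrt(abs(x)))
    ret += (20.0 * math.sin(6.0 * x * math.pi) + 20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
    ret += (20.0 * math.sin(y * math.pi) + 40.0 * math.sin(y / 3.0 * math.pi)) * 2.0 / 3.0
    ret += (160.0 * math.sin(y / 12.0 * math.pi) + 320.0 * math.sin(y * math.pi / 30.0)) * 2.0 / 3.0
    return ret

def _transform_lon(x, y):
    # Pseudo-random, smoothly varying longitude perturbation
    ret = (300.0 + x + 2.0 * y + 0.1 * x * x
           + 0.1 * x * y + 0.1 * math.sqrt(abs(x)))
    ret += (20.0 * math.sin(6.0 * x * math.pi) + 20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
    ret += (20.0 * math.sin(x * math.pi) + 40.0 * math.sin(x / 3.0 * math.pi)) * 2.0 / 3.0
    ret += (150.0 * math.sin(x / 12.0 * math.pi) + 300.0 * math.sin(x / 30.0 * math.pi)) * 2.0 / 3.0
    return ret

def wgs84_to_gcj02(lat, lon):
    """Shift a true (WGS-84) coordinate onto the obfuscated GCJ-02 grid."""
    dlat = _transform_lat(lon - 105.0, lat - 35.0)
    dlon = _transform_lon(lon - 105.0, lat - 35.0)
    rad_lat = lat / 180.0 * math.pi
    magic = 1 - EE * math.sin(rad_lat) ** 2
    sqrt_magic = math.sqrt(magic)
    dlat = (dlat * 180.0) / ((A * (1 - EE)) / (magic * sqrt_magic) * math.pi)
    dlon = (dlon * 180.0) / (A / sqrt_magic * math.cos(rad_lat) * math.pi)
    return lat + dlat, lon + dlon

# Example: coordinates near central Tianjin (roughly 39.12 N, 117.20 E)
print(wgs84_to_gcj02(39.12, 117.20))

The offset this produces varies smoothly from place to place, typically amounting to a few hundred meters, which is exactly the kind of drift between road overlay and satellite imagery that the Google Earth Blog noticed.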

Countries are able to control information, such as street maps, that originates within their borders, but they find it harder to enforce legislation related to satellite imagery. That is what makes it difficult for Google to combine the two layers: the imagery is referenced to WGS-84, while the licensed Chinese street data is in GCJ-02. A partnership with AutoNavi, however, allows the company to offer its services in China without complications.

As the internet invades more areas of everyday life, we're only going to see situations like this arise more often. It's a global tool in a world that still enforces laws on a country-by-country basis, so there are bound to be more questions over which takes precedence in the years to come.

Brad Jones
Former Digital Trends Contributor