Are you well-traveled enough to recognize a location on sight? Google is. After years of depending on geotags to determine where in the world you snapped that stunning shot, the Internet giant's latest artificial intelligence system has evolved past that crutch. Instead, as a new study claims, Google's new deep-learning program knows what it's looking at by … well, just by looking at it.
A new program called PlaNet has been taught to determine locations purely from photographic details. It still depends on geotags in a sense, since it was trained on more than 90 million geotagged images from the Internet, but it has now learned to recognize landmarks on its own. That sounds remarkably human, but unlike our measly minds, PlaNet can also use its machine-learning muscle to figure out where a picture was taken even without distinctive features: show it arbitrary roads and buildings, and it can still tell where it is.
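The study doesn't walk through PlaNet's internals here, but the general trick behind this kind of system is to turn geolocation into a classification problem: divide the Earth's surface into cells, label every geotagged training photo with the cell it falls in, and train a deep network to predict the cell from pixels alone. The fixed 10-degree grid and function name below are illustrative assumptions for a minimal sketch, not Google's actual code (the real system reportedly uses a much finer, adaptive partition of the globe):

```python
# Illustrative sketch: converting a photo's geotag into a classification label.
# A PlaNet-style pipeline labels each training image with the grid cell that
# contains its geotag, then trains a deep network to predict that cell, so at
# inference time no geotag is needed at all.

def latlon_to_cell(lat, lon, cell_deg=10.0):
    """Map a latitude/longitude pair to an integer grid-cell index."""
    rows = int(180 / cell_deg)
    cols = int(360 / cell_deg)
    row = min(int((lat + 90.0) / cell_deg), rows - 1)   # clamp the lat = 90 edge
    col = min(int((lon + 180.0) / cell_deg), cols - 1)  # clamp the lon = 180 edge
    return row * cols + col

# Each (photo, geotag) pair becomes a (photo, cell_label) training example.
eiffel_cell = latlon_to_cell(48.8584, 2.2945)     # Paris
liberty_cell = latlon_to_cell(40.6892, -74.0445)  # New York
print(eiffel_cell, liberty_cell)
```

Framing the task this way is what lets the network output a probability for every region of the planet, which is how it can still make a reasonable guess when a photo shows nothing but anonymous roads and buildings.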
In a test of the new AI that employed 2.3 million images, PlaNet was able to figure out the country from which the picture originated 28.4 percent of the time, and the continent 48 percent of the time. Yes, it’s far from perfect, but it’s also an impressive step forward in the technology as a whole.
And perhaps more importantly, PlaNet is already doing better than humans. In a test against 10 “well-traveled humans,” the AI won 28 of the 50 challenges it posed to our species, or just over half. Again, it wasn’t a landslide victory, and the sample size was admittedly small, but according to researchers, PlaNet’s “ability to guess close to the location of a given picture was roughly twice as good as people.”
It’s currently unclear just how Google plans to utilize PlaNet in the coming days, months, and years. But if you thought you could protect your travel plans from the Interwebs by just turning off geotags, you may want to think again.