
Google Places removes third-party review snippets


Google has announced that it is dropping third-party review ratings and snippets from its Google Places pages, meaning that bits of information from services like Yelp and TripAdvisor will no longer get featured screen real estate in Google's location-based listings. Google plans to keep listing links to third-party review services, so users can still click through if they want, but the Places pages will only feature content from Google's own users.

“Based on careful thought about the future direction of Place pages, and feedback we’ve heard over the past few months, review snippets from other Web sources have now been removed from Place pages,” Google director of product management Avni Shah wrote on Google’s LatLong blog. “Rating and review counts reflect only those that’ve been written by fellow Google users.”

Google Places pages are available through Google Maps and Google Earth, and show a place’s location and address, maps and directions, along with potentially accurate information like business hours and categorization. Locations will also show reviews and ratings from Google users (aggregated into an average rating), and Google has made other changes to make it more obvious how to upload a photo of a particular location or write a review. The service also provides personalized recommendations to users if they’re logged in to their Google accounts.
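As a rough illustration of the aggregation described above, the sketch below shows how per-user star ratings might be rolled up into the single average a Place page displays. The Review type and the rounding to one decimal place are assumptions for illustration; Google has not published how its aggregation actually works.

from dataclasses import dataclass

@dataclass
class Review:
    """A single user review; a hypothetical stand-in for Google's internal data."""
    author: str
    stars: int  # 1 to 5, as shown on a Place page
    text: str

def aggregate_rating(reviews: list[Review]) -> tuple[float, int]:
    """Roll individual star ratings up into the (average, count) pair a
    Place page would display. Returns (0.0, 0) when there are no reviews."""
    if not reviews:
        return 0.0, 0
    average = sum(r.stars for r in reviews) / len(reviews)
    return round(average, 1), len(reviews)

# Example: three Google-user reviews for a cafe.
reviews = [
    Review("alice", 5, "Great espresso."),
    Review("bob", 4, "Friendly staff, slow Wi-Fi."),
    Review("carol", 3, "Crowded on weekends."),
]
avg, count = aggregate_rating(reviews)
print(f"{avg} stars ({count} reviews)")  # prints: 4.0 stars (3 reviews)

Under the policy change described here, only Google-user reviews would feed into such an average; third-party ratings would no longer be part of the count.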

The removal of third-party review snippets follows on the heels of a U.S. federal antitrust investigation into Google’s business practices. In Europe, Google faces complaints from the likes of Citysearch, Yelp, and TripAdvisor that it is exploiting their customer reviews in order to add value to its Google Places service.
