
Facebook Places checks in to UK

Facebook Places, a service that encourages users to publish their location, launched in the UK today. The controversial service debuted in the US last month.

Users “check in” via smartphone when they arrive at a location. To get started, Facebook bought lists of UK premises and locations from third-party providers. Much like Foursquare, a smaller service with three million users, and Gowalla, Places lets people broadcast where they are; it was considered controversial in the US because of its friend-tagging feature.

Facebook Places allows users to tag friends they are with, who will then be asked if they want to check in, too. Users were concerned that a friend could tag them at a location even when they weren’t there. Others simply don’t want their location publicized and objected to the fact that friends could do it for them.

Users can remove any check-in after the fact.

The privacy controls prevent users under 18 from sharing their location with anyone not on their friends list. Check-ins default to friends only, unless the global master privacy setting is set to “everyone.”

It’s unclear whether UK users will react to the friend-tagging feature the way US users did. The feature is not yet available to everyone, as Facebook is rolling it out gradually.

Immediately after the US launch, instructions sprang up on various sites explaining how to set the privacy controls so that no one could tag you on Places, and how to opt out of the service altogether.

A feature called “Here Now” displays a list of everyone at a given location who has agreed to share their location information widely.

At launch, Facebook has not attached any advertising to the service, but it is expected to display targeted ads based on location data at a future date.
