
Facebook to track user data to stop piracy?


Facebook is stepping up its efforts to combat piracy within its borders.

The social network was awarded a patent, called “Using social signals to identify unauthorized content on a social networking system,” that allows it to tap into its users’ profile information, including their interests, physical locations, and social relationships, to help determine whether files shared via Facebook are pirated.

Here’s what Facebook has to say on the matter:

“The social networking system may collect social signals about the content such as the diversity of the viewers of the content, the relationship between the viewers and another user or other entity that is featured or tagged in the content, and the relationship between the viewers and the user who posted the content. The social signals are then used to calculate a series of aggregated metrics to generate a prediction for whether the content is an unauthorized use of the social networking system.”
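In plain terms, the patent describes a simple pipeline: gather social signals about a piece of content, roll them up into aggregated metrics, and score how likely the post is to be unauthorized. As a rough, purely hypothetical illustration of that idea (the signal names, weights, and thresholds below are invented for the example and do not come from Facebook), such a scorer might look something like this in Python:

# Hypothetical sketch only -- not Facebook's implementation or the patent's actual method.
from dataclasses import dataclass

@dataclass
class SharedContent:
    viewer_count: int                 # distinct users who viewed the post
    viewer_regions: set               # rough geographic spread of those viewers
    viewers_tied_to_poster: int       # viewers with a friend/follow link to the uploader
    tagged_user_knows_viewers: bool   # tagged person/entity is connected to most viewers

def unauthorized_score(c: SharedContent) -> float:
    """Combine a few aggregated metrics into a 0-1 'likely unauthorized' score.
    Signals loosely mirror the patent's examples; weights are made up."""
    # Broad, geographically diverse reach with few social ties to the poster
    # looks more like redistribution than personal sharing.
    diversity = min(len(c.viewer_regions) / 10, 1.0)
    anonymity = 1.0 - min(c.viewers_tied_to_poster / max(c.viewer_count, 1), 1.0)
    tag_penalty = 0.0 if c.tagged_user_knows_viewers else 0.2
    return min(0.5 * diversity + 0.3 * anonymity + tag_penalty, 1.0)

post = SharedContent(viewer_count=5000,
                     viewer_regions={"US", "BR", "IN", "DE", "PH"},
                     viewers_tied_to_poster=12,
                     tagged_user_knows_viewers=False)
print(f"likely-unauthorized score: {unauthorized_score(post):.2f}")

In this toy example, a post viewed by thousands of people across many regions, few of whom have any connection to the uploader, would score high; a vacation video shared among friends would score low.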

What’s currently unclear is whether Facebook is already employing these methods to combat illegal file sharing. Nevertheless, considering how many people share and overshare personal information on Facebook, we suspect that if Facebook is indeed making use of this patent, users are already being identified as illegal file sharers.

Of course, people can always use dummy profile information to throw Facebook’s anti-piracy efforts off. It’ll be interesting to see whether these measures result in any kind of piracy crackdown on Facebook.

Konrad Krawczyk
Former Digital Trends Contributor
Konrad covers desktops, laptops, tablets, sports tech and subjects in between for Digital Trends. Prior to joining DT, he…