
Facebook Photo Memories won’t display your exes – but everyone else is fair game

In the midst of yet another privacy debacle, Facebook wants you to know it truly does care about your feelings (maybe not your privacy, but definitely your feelings). After backlash over its new Photo Memories sidebar dredging up not-so-fond memories of exes, Facebook has set the feature to automatically block photos of anyone you have listed yourself as being in a relationship with.

A group was created specifically to complain about Facebook’s lack of sensitivity after the feature went live back in May. Users complained that the sidebar lets you (really, forces you to) relive past moments captured on Facebook. Of course, this includes any and all photos of you with an ex. And not just your exes, but perhaps your current significant other’s exes as well. The possibilities for being confronted with partners of the past go on and on, especially if you and yours tend to date within the same social circles.

Of course, this still leaves the window open for photos of anyone whose relationship with you was never declared on Facebook. And if you were one of those people who found it funny to list yourself as in a relationship with your roommate or best friend, their smiling faces are now banished from your sidebar too. For the strong opposition to Photo Memories to be satisfied, Facebook might have to create a way for the feature to block exes, stepparents, former best friends – really, anyone we would rather not be reminded of.
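For the curious, here is roughly what that rule boils down to. This is purely a hypothetical sketch of the behavior as described, not Facebook's actual implementation; the Photo class, the declared_exes set, and the memories_sidebar function are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    url: str
    tagged_users: set = field(default_factory=set)

def memories_sidebar(photos, declared_exes):
    # Block any photo tagged with someone you formally listed a
    # relationship with; everyone else sails through.
    return [p for p in photos if not (p.tagged_users & declared_exes)]

photos = [
    Photo("beach.jpg", {"alex"}),    # alex: a declared ex, so blocked
    Photo("party.jpg", {"jordan"}),  # jordan: never declared, still shown
]
print([p.url for p in memories_sidebar(photos, declared_exes={"alex"})])
# ['party.jpg']
```

The loophole falls straight out of that one filter: anyone absent from the declared list, however painful the memory, still makes the cut.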

So for now, you can still look forward to rehashing plenty of memories better left buried in that album you created freshman year.

Molly McHugh