Celebrities hold Twitter accounts for ransom

If your day begins with a bowl of Cap’n Crunch and a scroll through celebrity tweets, this is not the news you want to read. Lady Gaga, Alicia Keys, Justin Timberlake, Usher, Jennifer Hudson, Ryan Seacrest, Kim Kardashian, Elijah Wood, and other celebrities (none of our favorites) have joined Digital Life Sacrifice, a crazy new campaign that asks participants to stay off all social media until $1 million is raised for Keys’ Keep a Child Alive charity. None of them will post or respond to anything on Facebook, Twitter, or any other social media site beginning Tuesday, the AP reports.

Though the campaign is built on the somewhat arrogant notion that people will pay to read Lady Gaga’s 140-character tweets again, Keys explained why she chose this route. “It’s so important to shock you to the point of waking up,” Keys said. “It’s not that people don’t care or it’s not that people don’t want to do something, it’s that they never thought of it quite like that. This is such a direct and instantly emotional way and a little sarcastic, you know, of a way to get people to pay attention.”

The foundation will accept donations via text message or barcode scan. The money will go toward supporting families affected by HIV/AIDS in Africa and India.

“We’re trying to sort of make the remark: Why do we care so much about the death of one celebrity as opposed to millions and millions of people dying in the place that we’re all from?” said Leigh Blake, the president and co-founder of Keep a Child Alive.

Of course, the entire effort is really a demand for $1 million in ransom from the celebrities’ fans. Though the money is going to a charity, all of the attention and glory will go to celebrities like Lady Gaga, who are doing little more than staying off Twitter until the goal is met.

If you’d like to bring back a celebrity from their social networking death, head to buylife.org.

What do you think of this charity? Do you like the idea?

Jeffrey Van Camp
Former Digital Trends Contributor