Anonymous says it breached SpecialForces.com

Members of the hacktivist collective Anonymous claim to have breached SpecialForces.com, a site selling military and law enforcement gear, and to have gathered more than 14,000 passwords and some 8,000 credit card numbers. Anonymous says it breached the site several months ago but is only now publicizing the attack as part of “LulzXmas,” the group’s current hacking campaign. A Twitter account associated with Anonymous posted a screenshot of a message SpecialForces.com sent to its customers warning of the breach and informing them that their passwords had been reset. According to that message, SpecialForces.com believes only encrypted credit card data may have been compromised.

Anonymous claimed to have targeted SpecialForces.com because its customers are mainly “military and law enforcement.”

The claim of responsibility for breaching SpecialForces.com comes in the wake of attackers associated with Anonymous breaching Strategic Forecasting (Stratfor) and obtaining more than 50,000 client email addresses, personal information, and credit card numbers, along with millions of email messages. The collateral damage from that attack has escalated: millionaire Australian MP Malcolm Turnbull and billionaire Australian business magnate David Smorgon have had their credit card information published on the Internet. Members of Anonymous have also posted images claiming to show receipts for donations made to charities using credit card information belonging to Stratfor clients, including the U.S. Department of Homeland Security and Department of Defense.

Geoff Duncan
Former Digital Trends Contributor