
Zynga hacker caught, but not before landing $12M in virtual poker chips

According to BBC News, a London IT businessman could face prison time after stealing $12 million worth of Zynga currency. Ashley Mitchell hacked into the company’s servers and pocketed 400 billion virtual poker chips, which he then sold on the black market – which is where he got caught.

The 29-year-old was only able to sell around £53,000 (or $86,000) worth of his stash. Had he managed to sell everything he had stolen, though, he’d be roughly $300,000 richer. Bought legitimately from Zynga, the chips would cost $12 million.

Mitchell has pled guilty, and through his lawyer insists he struggles with a gambling addiction. He faces four charges of converting criminal property as well as consequences for violating the Computer Misuse Act. Not helping his case? His record: Mitchell hacked into his local government’s network three years ago, and his latest actions breach a suspended sentence from that offense. Some people never learn.

Prosecutor Gareth Evans pointed out that because Zynga’s currency exists only in-game, the theft doesn’t directly hit the company’s revenue the way most customer thefts do – Zynga can always recreate the “money” Mitchell stole, and its value is difficult to pin down since it exists only online. However, he noted that Zynga could lose users who fear falling victim to hackers.

Molly McHugh