
Third-party Minecraft community lost 7 million user passwords and didn’t inform users

Minecraft community Lifeboat has been hit by a major data breach, exposing the hashed passwords and email addresses of some 7 million user accounts, which have since circulated on the internet. Worse still, the site’s careless handling of the incident delayed its discovery: affected users were never notified of the attack.

Although the breach occurred several months ago, Lifeboat staff deliberately avoided making an announcement so the culprits wouldn’t realize the stolen data had a limited shelf life, according to a report from Ars Technica. Some users, however, are questioning the site’s claim that it ever implemented a full password reset.

Unfortunately, Lifeboat’s list of errors doesn’t stop there. In a damning passage in its Getting Started guide, new users are instructed to keep their passwords “short, but difficult to guess.” The page goes on to state that “this is not online banking.”

It might not surprise you to learn that this casual attitude carried over to security efforts under the hood. Lifeboat hashed its plaintext passwords with the MD5 algorithm, a long-outdated choice for any modern site.

MD5 was designed in 1991, and by 1996 some experts were already warning that it was time to investigate alternatives. Worse, the hashes used by Lifeboat were not even salted — salting pairs each password with a randomized value before hashing, which defeats precomputed lookup (rainbow-table) attacks and forces attackers to crack each hash individually.
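To illustrate the difference, here is a minimal Python sketch — not Lifeboat’s actual code; the function names are illustrative, and SHA-256 stands in only to demonstrate salting. In production, a dedicated password-hashing function such as bcrypt, scrypt, or Argon2 would be the appropriate choice.

import hashlib
import os

def unsalted_md5(password):
    # Identical passwords always yield identical hashes, so a single
    # precomputed rainbow table cracks every matching account at once.
    return hashlib.md5(password.encode()).hexdigest()

def salted_sha256(password, salt=None):
    # A random per-user salt means two users with the same password get
    # different hashes, so precomputed tables are useless.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

print(unsalted_md5("hunter2"))    # same output for every user with "hunter2"
salt, digest = salted_sha256("hunter2")
print(salt.hex(), digest)         # differs per user thanks to the salt

Run twice with the same password and the unsalted hash never changes, while the salted one does — which is exactly why an unsalted database of 7 million MD5 hashes is so attractive to attackers.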

All in all, it’s clear that Lifeboat and its users have learned some tough lessons about online security. It’s true that a service that lets you play Minecraft with your friends is far from online banking — but for any account holders who tend to reuse passwords, the two might be intertwined in a rather troublesome manner.

Brad Jones