
New details reveal over 43M accounts were breached in 2012 Last.fm hack

The scale of a 2012 hack of music site Last.fm is now coming to light, revealing that more than 43 million accounts were affected.

LeakedSource, a data breach and hacking notification site, says it has obtained a copy of the hacked database. Last.fm was originally breached in March 2012, prompting the company to send a password reset notification to its users, but the full extent of the breach has only now become clear.

After analyzing and verifying the data, LeakedSource published its findings Thursday. It says the data includes usernames, hashed passwords, email addresses, and the date the user signed up to the site and/or the newsletter, as well as advertising data.

Perhaps most alarming is the hashed password data, which was secured with the MD5 hashing algorithm. MD5 has been considered outdated for a number of years. In 2012, the year of this hack, the original author of the algorithm wrote that it was no longer safe to use. As far back as 2005, a cryptographer wrote that MD5 was “broken.”

The case bears similarity to the Dropbox hack, details of which emerged Wednesday. In that breach, passwords were protected with SHA-1, another hashing algorithm that is becoming increasingly outdated as computing power grows.

In the case of Last.fm, LeakedSource was particularly alarmed by the use of MD5. “This algorithm is so insecure it took us two hours to crack and convert over 96 percent of them to visible passwords,” LeakedSource said, adding that it recently invested more into its own password-cracking capabilities for testing purposes.
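To see why unsalted MD5 falls so quickly, consider that identical passwords always produce identical hashes, and MD5 can be computed at enormous speed on modern hardware. The sketch below is a minimal dictionary attack in Python; the hashes and wordlist are hypothetical examples constructed for illustration, not data from the breach:

```python
import hashlib

# Hypothetical unsalted MD5 hashes, built here for demonstration.
# With no salt, every user who chose "123456" has the same hash.
leaked_hashes = {
    hashlib.md5(b"123456").hexdigest(),
    hashlib.md5(b"lastfm").hexdigest(),
}

# A tiny candidate wordlist; real cracking rigs try billions of
# guesses per second against far larger lists.
wordlist = ["password", "qwerty", "123456", "letmein", "lastfm"]

def crack(hashes, candidates):
    """Return a mapping of cracked hash -> plaintext password."""
    recovered = {}
    for word in candidates:
        digest = hashlib.md5(word.encode()).hexdigest()
        if digest in hashes:
            recovered[digest] = word
    return recovered

print(crack(leaked_hashes, wordlist))
```

Because the algorithm is this cheap to compute and the database used no per-user salt, cracking one hash effectively cracks every account that shares that password, which is how LeakedSource recovered the bulk of them in hours.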

The site also published a list of some of the most commonly used passwords it found and it doesn’t make for encouraging reading. The three passwords at the top of the list were “123456,” “password,” and “lastfm.”

Last.fm has yet to respond to the new details.

Jonathan Keane
Former Digital Trends Contributor
Jonathan is a freelance technology journalist living in Dublin, Ireland. He's previously written for publications and sites…