
Report: Hacker puts data from 167 million LinkedIn accounts up for sale

A hacker is attempting to sell a package containing account records for 167 million LinkedIn users. The data set reportedly contains user IDs, email addresses and SHA1 password hashes — and the asking price is a measly five bitcoins, which roughly converts to about $2,200 based on current market rates.

The sale is being advertised on a dark web marketplace known as TheRealDeal, according to a report from Macworld. The listing confirms that the package doesn't cover the entire LinkedIn userbase, which is more than double the 167 million entries being advertised.

It’s believed that this data was stolen during a major breach of LinkedIn’s security that took place in 2012. At the time, only 6.5 million passwords were released to the internet, but the administrators of security breach indexer LeakedSource have stated in a blog post that there is evidence the records now being offered for sale were obtained in that attack.

Of the 167 million accounts affected, some 117 million include hashed passwords; the rest are thought to belong to accounts created via Facebook login or a similar process. However, the protection applied to these passwords leaves a lot to be desired.

The passwords were hashed using the SHA1 function, which is fast to compute and has long been considered unsuitable for password storage. The hashes were also not salted, adding to the likelihood that whoever purchases the package will be able to crack the passwords in bulk and potentially access the affected accounts.
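The danger is straightforward to illustrate. The sketch below is a hypothetical Python example of a dictionary attack on unsalted SHA1 hashes, using well-known test values rather than any data from the breach: the attacker hashes each word in a list of common passwords and looks for matches among the leaked hashes.

    import hashlib

    # Hypothetical leaked records: unsalted SHA1 hashes mapped to account emails.
    # Without a salt, every user who picked the same password shares the same hash,
    # so a single lookup table cracks all of them at once.
    leaked = {
        "7c4a8d09ca3762af61e59520943dc26494f8941b": "alice@example.com",  # sha1("123456")
        "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8": "bob@example.com",    # sha1("password")
    }

    # A tiny stand-in for the multi-million-entry wordlists attackers actually use.
    wordlist = ["qwerty", "letmein", "123456", "password", "linkedin"]

    for guess in wordlist:
        digest = hashlib.sha1(guess.encode("utf-8")).hexdigest()
        if digest in leaked:
            print(f"{leaked[digest]} -> password is {guess!r}")

A per-user salt defeats this shortcut, since identical passwords no longer produce identical hashes, and a deliberately slow function such as bcrypt makes each individual guess far more expensive.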

If you haven’t changed your LinkedIn password in several years, now is the time to do so — and it might be worth enabling the site’s two-factor authentication process while you’re at it. Of course, if you use the same email address and password combination across several sites and services, you’ll need to make the change across the board.

Brad Jones
Former Digital Trends Contributor
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…