
Yahoo is warning users over state-sponsored cookie-forging attacks

Yahoo’s security woes continue with the company sending out a fresh warning to users over hacked accounts at the hands of allegedly state-sponsored actors.

In an email to users, Yahoo said it has identified evidence of cookie-forging attacks on some accounts, which would allow attackers to access an account without re-entering a password. The email was only sent to accounts Yahoo believes were affected by these intrusion attempts, so it's not known how many people have been impacted.

“Our outside forensic experts have been investigating the creation of forged cookies that could allow an intruder to access users’ accounts without a password,” the email reads. “Based on the ongoing investigation, we believe a forged cookie may have been used in 2015 or 2016 to access your account.”

It is believed that hackers obtained the Yahoo source code used to create cookies. The company's forensics team has invalidated the forged cookies it found.

It's not clear what evidence Yahoo has to suggest these cookie-forging attempts were state-sponsored. However, Yahoo has suffered at least two major hacks, disclosed in the last few months, that it attributed to hackers possibly acting on behalf of a government.

The numerous data breaches at the web firm included 500 million accounts compromised in 2014 and up to 1 billion accounts compromised in 2013. But it wasn't until last year that these mega breaches — as they've been dubbed — came to light. Yahoo is currently under investigation by the Securities and Exchange Commission over why it waited years before disclosing the details of the hacks.

The security blunders could be costly for Yahoo: Verizon, its purchaser, has since sought a price reduction of between $250 million and $350 million (off the original $4.83 billion offer), as it was unaware of these breaches when the offer was made.

Jonathan Keane
Former Digital Trends Contributor
Jonathan is a freelance technology journalist living in Dublin, Ireland. He's previously written for publications and sites…
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails — the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
