Email scam Petya locks down PCs until a ransom is paid

A new piece of malware making the rounds, using the popular cloud storage service Dropbox as its carrier, can reportedly lock users out of their systems entirely. The ransomware is known as Petya, and at present it demands more than $400 from victims to restore access to their computers.

Petya is being distributed via email, according to a report from Trend Micro. The malicious package arrives in messages crafted to look like correspondence from a professional seeking work, containing a Dropbox link that supposedly lets the recipient download the applicant's resume.

Unfortunately, that file is in fact a self-extracting executable designed to install a Trojan, which blocks any active security software and then downloads the Petya ransomware. Once that groundwork has been laid, the real attack can get underway.

Petya overwrites the master boot record of the infected system, causing a blue screen of death. Once the user tries to reboot, they’ll be greeted with a bright red screen emblazoned with an ASCII skull and crossbones — and there’s no way of escaping this, as safe mode will have already been disabled.
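To make the "master boot record" part concrete: the MBR is the first 512-byte sector of a disk, holding the bootstrap code the firmware runs at startup plus a mandatory two-byte signature. The sketch below (illustrative only, not Petya's actual code) shows how that sector is structured and how overwriting it leaves the disk unbootable.

```python
# The Master Boot Record occupies the first 512 bytes of a disk:
# bytes 0-445 hold bootstrap code, bytes 446-509 hold the partition
# table, and bytes 510-511 must contain the signature 0x55 0xAA for
# BIOS firmware to treat the disk as bootable.

MBR_SIZE = 512
BOOT_SIGNATURE = b"\x55\xaa"

def is_valid_mbr(sector: bytes) -> bool:
    """Return True if a 512-byte sector carries the BIOS boot signature."""
    return len(sector) == MBR_SIZE and sector[510:512] == BOOT_SIGNATURE

# A healthy MBR: bootstrap code, partition table, then the signature.
healthy = bytes(446) + bytes(64) + BOOT_SIGNATURE

# A sector clobbered by a Petya-style overwrite: the original bootstrap
# code and signature are gone, so normal startup is no longer possible.
overwritten = bytes(MBR_SIZE)

print(is_valid_mbr(healthy))      # True
print(is_valid_mbr(overwritten))  # False
```

This is why the attack survives a reboot: the malicious code runs before the operating system (and safe mode) ever gets a chance to load.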

The ransomware then informs the user that their system has been locked with a "military-grade encryption algorithm." The only way to reverse the process is to head to the dark web and pay for a key with bitcoin — the going rate is $431, and that figure doubles if the victim doesn't pay within a set deadline.

This is undoubtedly a very nasty piece of malware, and further evidence that online criminals are continually refining their methods of attack. At present, it's unclear what individuals can do to avoid being targeted, aside from remaining vigilant about the links they click in emails from unknown senders.

Brad Jones
Former Digital Trends Contributor
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…