
Malware can now detect virtual machines, and then go dark like a Cold War spy

One of the more effective ways to counter a malware infection is to ensure that it infects something with little influence on the rest of the system, such as a sandboxed virtual machine. However, as malware continues to evolve, its creators are discovering ways for it to detect when it is merely wasting its time inside a virtual machine, so it can save its efforts for real targets.

Discovered by Caleb Fenton of security firm SentinelOne (via ThreatPost), this new form of malware can sniff out that it currently resides on a virtual machine. It purportedly does this by analyzing the number of documents on the machine: a low count suggests some form of testing environment, tipping it off that it is sandboxed.

After making such a discovery, the malware goes dormant, hiding itself as best it can from security staff and automated detection tools. Although that particular copy of the malware becomes useless to its creator at that point, avoiding detection matters far more.

Related: Warning from police: Never plug in a USB stick you get in the mail

Since security researchers can use virtual machines to learn a lot about a piece of malware without risking any spread of infection, keeping the nefarious software under wraps allows its clones to proliferate in the wild for a little while longer.

In one specific example that Fenton discovered, the malware searches the machine for Microsoft Word documents using Windows' Recent Documents function. If it finds two or more, it proceeds to download its payload. If those files are not found, it shuts down and obfuscates its presence to avoid detection.
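The malware's own code is not public, but the check Fenton describes can be sketched along these lines. On Windows, shortcuts to recently opened files live under `%APPDATA%\Microsoft\Windows\Recent`; the function names and the directory parameter below are illustrative assumptions, not the actual implementation.

```python
import os

def count_recent_word_docs(recent_dir: str) -> int:
    """Count .doc/.docx shortcuts in the given Recent items folder."""
    try:
        entries = os.listdir(recent_dir)
    except OSError:
        return 0  # folder missing or unreadable: looks like a bare sandbox
    return sum(1 for e in entries
               if e.lower().endswith((".doc.lnk", ".docx.lnk")))

def should_deploy_payload(recent_dir: str, min_docs: int = 2) -> bool:
    """Mirror the reported logic: deploy only if two or more Word
    documents appear in the recent-files list."""
    return count_recent_word_docs(recent_dir) >= min_docs
```

A freshly provisioned analysis VM typically has an empty Recent folder, which is exactly why so crude a heuristic works.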

To defeat smart security researchers who might seed the system with a few Word documents to pass that check, the anti-sandbox malware also checks the system's IP address against a known blacklist of security-firm addresses. Again, if it finds itself in the belly of the IT security beast, it halts all activity and tries to hide.
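That blacklist check amounts to an address-in-range test. The real blacklist is not public; the ranges below are documentation-only placeholder networks standing in for security vendors' address space, and the function name is an assumption for illustration.

```python
import ipaddress

# Placeholder ranges (RFC 5737 documentation networks), standing in for
# the malware's real list of security-firm address blocks.
SECURITY_FIRM_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def on_security_firm_network(ip: str) -> bool:
    """Return True if the host's IP falls inside any blacklisted
    research-network range -- the cue for the malware to go dormant."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SECURITY_FIRM_RANGES)
```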

Although not exactly unique, these techniques are relatively new and represent the next step in the ongoing war between white hats and black hats the world over. Extending the life of a piece of malware can do more to improve its viability as an attack vector than simply making it harder to stop.

Jon Martindale