Chinese hackers used Microsoft TechNet platform to hide malware distribution

As companies and governments around the world continue to improve their security in response to the threat posed by individuals, groups, and state-sponsored hackers, the makers of the world's most malicious software need to evolve their game too. That is what appears to have happened in the case of Chinese hacking collective APT17, also known as Deputy Dog, which used Microsoft's own TechNet support network to hide its activity.

This wasn't a case of a man-in-the-middle attack against the site's members, though, nor was it a compromise of Microsoft's servers, but rather a use of ordinary public accounts to obfuscate the group's actions. According to a FireEye report, APT17 set up standard profiles on the TechNet website and then seeded them with encoded addresses pointing its malware to command-and-control servers.

This wasn't just an attack designed to go after TechNet members, though. What made this particular hack so dangerous is that it was able to stay hidden thanks to its use of the trusted support platform.

The particular malware the group operated through the TechNet site was a variant of BLACKCOFFEE. While that sort of nefarious software is detectable by botnet hunters, this campaign took some time to uncover, as most trackers treated TechNet as a trusted site that was unlikely to have been compromised.
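FireEye describes this pattern as a "dead drop resolver": an infected machine fetches an innocuous public page and decodes a command-and-control address hidden between marker strings, so the only network traffic it produces looks like ordinary browsing of a trusted site. The sketch below is a minimal illustration of that idea; the URL, the delimiter strings, and the reversed-string "encoding" are all hypothetical stand-ins, not BLACKCOFFEE's actual format.

```python
import re
import urllib.request

# Minimal sketch of a "dead drop resolver". Everything here is a
# hypothetical stand-in: the page URL, the delimiter strings, and the
# toy "encoding" are illustrative, not BLACKCOFFEE's real format.
PROFILE_URL = "https://example.com/profile/12345"  # stands in for a forum profile page
START, END = "@MARKER_A", "MARKER_B@"              # strings bracketing the hidden payload

def resolve_c2(url: str) -> str:
    """Fetch a public page and recover the C2 address hidden between markers."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    match = re.search(re.escape(START) + r"(.*?)" + re.escape(END), html, re.S)
    if match is None:
        raise ValueError("no embedded address found on the page")
    # Toy decoding step: assume the address was simply stored reversed.
    return match.group(1)[::-1]

if __name__ == "__main__":
    print(resolve_c2(PROFILE_URL))
```

Because the lookup is a single request to a reputable domain, it blends into legitimate traffic, which is exactly why trackers overlooked it for as long as they did.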

Fortunately, it was eventually discovered and stamped out by Microsoft and FireEye in late 2014. In a bit of poetic justice, the pair gave APT17 a taste of its own medicine, adding counter code to the TechNet profiles that allowed those chasing the hackers to learn about the malware being used and who it may have affected.
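The encoding scheme cuts both ways: once defenders understand the format, they can plant an address they control in the same profiles so that infected hosts report to monitored infrastructure instead. The snippet below sketches that defender-side counterpart under the same toy assumptions as the resolver example above.

```python
# Defender-side counterpart, using the same hypothetical markers and
# toy reversed-string encoding as the resolver sketch above.
START, END = "@MARKER_A", "MARKER_B@"

def embed_sinkhole(ip: str) -> str:
    """Build a profile snippet that points infected hosts at a monitored sinkhole."""
    return START + ip[::-1] + END

print(embed_sinkhole("203.0.113.7"))  # -> @MARKER_A7.311.0.302MARKER_B@
```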

Through its announcement and the accompanying breakdown of these techniques, FireEye hopes to warn other platform providers to be on the lookout for such malware-hiding tactics, though it's hardly a poor advert for the firm's own services either.

Jon Martindale