
Windows Creators Update to improve Defender’s detection and response

Microsoft is continuing to update its Windows Defender platform and will deliver a significant overhaul of several of its functions in the upcoming Creators Update. Specifically, it will improve how the anti-malware software detects, investigates, and responds to a range of threats from different actors.

Along with Windows Firewall, Windows Defender is seen by many as the baseline of defense for a Windows-based PC. It can go hand in hand with third-party antivirus and anti-malware products, but for many millions of systems the world over, Windows Defender is the first and last line of defense. So, keeping it updated and capable of tackling the latest threats is rather important.

In the Creators Update, Microsoft will improve Defender's ability to detect memory and kernel intrusions, where attackers have typically been able to hide from traditional detection methods. Microsoft claims to have already leveraged this ability to prevent new zero-day attacks on Windows and has used machine learning to counter changing trends in attack vectors.

Customers can even add their own indicators of intrusion to augment the detection dictionary, as in the sketch below.
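
As a rough illustration of what submitting such a custom indicator might look like, the Python sketch below builds a file-hash indicator and posts it over HTTPS. The endpoint URL, access token, and payload fields are hypothetical placeholders for illustration, not a documented Microsoft API.

```python
# Minimal sketch: submitting a custom indicator of intrusion to augment
# the detection dictionary. The endpoint URL, token, and payload fields
# below are hypothetical placeholders, not a documented API.
import requests

API_URL = "https://defender.example.contoso.com/api/indicators"  # placeholder
API_TOKEN = "<access-token>"                                      # placeholder


def submit_indicator(sha256: str, title: str, severity: str = "High") -> None:
    """Post a file-hash indicator so matching files are flagged on detection."""
    payload = {
        "indicatorType": "FileSha256",
        "indicatorValue": sha256,
        "action": "Alert",
        "title": title,
        "severity": severity,
    }
    response = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()


if __name__ == "__main__":
    # Example: flag a known-bad file hash (placeholder value) across the fleet.
    submit_indicator(
        sha256="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
        title="Known dropper observed in phishing campaign",
    )
```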

Opening up the anti-malware process to consumers is a major part of the changes Microsoft is making in the Creators Update. When it comes to threat investigation, Microsoft has added a “single pane of glass across the entire Windows security stack.” In essence, everyone will be able to see what Windows Defender is doing: what it’s blocking, what it’s quarantining and what it’s keeping an eye on.

All of that will be available within a single view to make it easier for security teams to analyse potential and historic threats to the system. This should give a deeper understanding of the types of attacks coming in, making it easier for security professionals and end users to prevent further attacks.

IT managers will be able to look at up to six months of logs for an entire organization’s cloud-connected systems, to provide historic context for any studied attacks.
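
The sketch below illustrates the idea behind that six-month window: filtering an organization's stored detection events down to roughly the last 180 days before analysis. The event structure and retention handling are assumptions made for illustration, not Microsoft's implementation.

```python
# Illustrative sketch of a six-month retention window: keep only the
# detection events recent enough to fall inside the window, newest first.
# The DetectionEvent structure here is a hypothetical example.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class DetectionEvent:
    machine: str
    threat: str
    timestamp: datetime


def events_in_retention(events: list[DetectionEvent],
                        days: int = 180) -> list[DetectionEvent]:
    """Return only events inside the retention window, newest first."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    recent = [e for e in events if e.timestamp >= cutoff]
    return sorted(recent, key=lambda e: e.timestamp, reverse=True)


# Example: one event inside and one outside the six-month window.
now = datetime.now(timezone.utc)
history = [
    DetectionEvent("LAPTOP-01", "Trojan:Win32/Example", now - timedelta(days=30)),
    DetectionEvent("SERVER-07", "Backdoor:Win64/Example", now - timedelta(days=300)),
]
print(events_in_retention(history))  # only the 30-day-old event remains
```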

To give those same security professionals additional power to combat ongoing attacks, Windows Defender's updated response system will provide manual controls for isolating machines, banning certain files from the network, and killing or quarantining certain processes or files.
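
The Python sketch below shows roughly what driving those manual response actions programmatically could look like: isolating a machine, blocking a file hash across the network, and stopping and quarantining a file. The endpoint paths, payloads, and token are hypothetical placeholders, not a documented API.

```python
# Sketch of manual response actions, assuming a hypothetical management
# endpoint. Paths, payloads, and credentials are placeholders.
import requests

BASE_URL = "https://defender.example.contoso.com/api"  # placeholder
HEADERS = {"Authorization": "Bearer <access-token>"}   # placeholder


def _post(path: str, payload: dict) -> None:
    """Send one response action to the (placeholder) management endpoint."""
    response = requests.post(f"{BASE_URL}{path}", json=payload,
                             headers=HEADERS, timeout=30)
    response.raise_for_status()


def isolate_machine(machine_id: str, comment: str) -> None:
    """Cut a compromised machine off from the network during investigation."""
    _post(f"/machines/{machine_id}/isolate", {"comment": comment})


def block_file(sha256: str, comment: str) -> None:
    """Ban a file hash so it can no longer run anywhere in the organization."""
    _post(f"/files/{sha256}/block", {"comment": comment})


def stop_and_quarantine_file(machine_id: str, sha256: str, comment: str) -> None:
    """Kill running instances of a file on a machine and quarantine it."""
    _post(f"/machines/{machine_id}/stopAndQuarantineFile",
          {"sha256": sha256, "comment": comment})
```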

All of that and more will be added as part of the upcoming Creators Update. If you'd like to try it out now, you can start a free trial of the Windows Defender Advanced Threat Protection system today.

Jon Martindale