
Windows 10 to offer Microsoft’s strongest security measures ever

While its competitors face virus problems of their own, it’s no secret that if you want to protect your Windows PC from malware, you need to use third-party security software. That will likely still be a good idea with Windows 10, but the upcoming OS at least offers a number of new security measures that should make it harder for malware makers to infect your pristine system.

Speaking at this year’s RSA Conference in San Francisco, Microsoft’s corporate VP of trustworthy computing, Scott Charney, discussed three features in particular that would help keep a Windows 10 PC safer than previous iterations of the OS.

The first one, Device Guard, is designed to compartmentalize the process of deciding whether an application or executable is trustworthy. To make sure that a piece of malware doesn’t trick the system, that decision is handed to a separate process, isolated from Windows itself using “hardware technology and virtualization.” This should prevent even those with administrative access from giving malware the thumbs up.

However, that doesn’t take users out of the driving seat. Microsoft has promised that Device Guard will only notify the user, who keeps the final say over whether an app is allowed through.

The other two new systems are Windows Hello and Microsoft Passport, which together give the OS support for password-free logins and biometrics such as fingerprint and iris scanning, as well as facial recognition through the likes of Intel’s RealSense 3D camera.
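For app developers, Windows 10 exposes this password-free model through the Windows Runtime’s KeyCredentialManager API, which asks the OS to create a key pair that can only be unlocked by the user’s Hello gesture or PIN. The C++/WinRT sketch below is only an illustration of that general flow, not anything Microsoft demonstrated at RSA; the credential name and console output are made up for the example, and it assumes the Windows.Security.Credentials APIs that ship with Windows 10.

```cpp
// Hedged sketch: check whether Windows Hello is set up, then ask the OS to
// create a Passport-style key pair protected by the user's biometric or PIN.
// Assumes a Windows 10 machine with the C++/WinRT projection available.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Security.Credentials.h>
#include <winrt/Windows.Storage.Streams.h>
#include <iostream>

using namespace winrt;
using namespace winrt::Windows::Security::Credentials;

Windows::Foundation::IAsyncAction DemoAsync()
{
    // Reports whether the user has enrolled in Windows Hello on this device.
    bool available = co_await KeyCredentialManager::IsSupportedAsync();
    if (!available)
    {
        std::cout << "Windows Hello is not set up on this device.\n";
        co_return;
    }

    // "ExampleApp login key" is a placeholder credential name for this sketch.
    auto result = co_await KeyCredentialManager::RequestCreateAsync(
        L"ExampleApp login key",
        KeyCredentialCreationOption::ReplaceExisting);

    if (result.Status() == KeyCredentialStatus::Success)
    {
        // The private key stays on the device; a service would only ever see
        // the public half, which is what makes the login password-free.
        auto publicKey = result.Credential().RetrievePublicKey();
        std::cout << "Created credential; public key is "
                  << publicKey.Length() << " bytes.\n";
    }
}

int main()
{
    init_apartment();
    DemoAsync().get();
}
```

In practice the public key would then be registered with a remote service so future sign-ins can be verified without a password, but that round trip is beyond the scope of this sketch.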

It’s through this combination of technologies that Microsoft believes it can shore up its next operating system’s defenses better than ever before. While no system can be made 100 percent secure, Microsoft is confident that organizations that choose to make use of the new features “will help eliminate some of the most common tactics that are being used against them,” according to the Microsoft blog.

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…