
Windows Hello can recognize the difference between identical twins

Image credit: Wikimedia
Right now it feels like we’re living through the ’90s again. Not just because the music and fashion choices seem to be making a comeback, but also because hackers are once again scary. While the ’90s may have portrayed them as dark and dangerous, today they’re out in the open, posting tweets and mocking law enforcement with abandon.

So what’s to be done? Fortunately, it seems like a new range of digital security systems is just around the corner. Along with biometric wristbands that can use your heartbeat as a signature instead of a password, Microsoft is looking to use your face as a way of logging you in to its Windows 10 operating system. And it works really well. In a recent demonstration, the 3D camera system was able to tell the difference between identical twins.

While some twins don’t look particularly alike, others, with the right styling and makeup, look truly identical. So it’s impressive to see Microsoft’s Hello system differentiate between them without difficulty.

The test involved several sets of twins, and while not every participant was recognized perfectly, in many cases Hello was able to tell the twins apart in seconds. To conduct it, The Australian and the Australian Twin Registry used a standard Windows 10 installation with Windows Hello and an Intel RealSense camera.

What’s encouraging about the tests, though, is that while in some cases the system struggled (and in one case failed outright) to recognize the twins and log them in to their accounts, in no instance was someone incorrectly recognized and logged in to the other’s account.

Microsoft has been quite adamant that, due to the multiple camera perspectives used in its facial recognition, hacking it should be incredibly difficult, if not outright impossible.

Though when it comes to hacking, we’d never say never.

Jon Martindale