
Speak and it shall be done: Intel’s Skylake can be awakened with your voice

Image: Microsoft Cortana beta iOS mockup. Michael Crider/Digital Trends
One unifying theme of sci-fi is that future technology will include computers controlled almost entirely by voice. Computer do this, computer do that. We can't wait to start telling our computers what to do, and to be freed from figuring out what to click.

As it turns out, we don't need to wait for teleporters to enjoy that sort of technology; it's already here, courtesy of the combination of Intel's Skylake chips and Windows 10.

As Gizmodo points out, Intel's Core M chips have technically been capable of this for a while, but only with the introduction of Windows 10 can software actually take advantage of it. What makes these recent CPUs special is a built-in, ultra-low-power digital signal processor that keeps listening even while the rest of the system sleeps, ready to power everything on when the time is right.

And thanks to Windows 10’s Cortana, the time is right when you say it is.


That means that, in theory, you can walk up to your PC, say "Cortana, wake up," and it will boot itself all the way to the login screen. If you have Windows Hello and a 3D camera, you can even log in without touching the mouse or keyboard.
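To make the division of labor concrete, here's a minimal, purely conceptual sketch in Python of how an always-on wake-word loop might hand off to the main system. None of these function names correspond to Intel's or Microsoft's actual interfaces, the audio input is simulated, and in reality the keyword spotting runs in firmware on the DSP itself.

```python
# A purely illustrative sketch, not Intel's firmware or Microsoft's API:
# a tiny loop (standing in for the low-power DSP) scans an audio stream
# for a wake phrase and only then "wakes" the rest of the machine.
# Every name here is hypothetical, and the audio is simulated as text.

import time

def heard_wake_phrase(chunk: str) -> bool:
    """Stand-in for keyword spotting; a real DSP runs a small acoustic model."""
    text = chunk.lower()
    return "cortana" in text and "wake up" in text

def wake_main_system() -> None:
    """Stand-in for telling the platform to power up to the login screen."""
    print("Wake event: powering the PC up to the login screen")

def dsp_listen_loop(audio_chunks) -> None:
    # The point of the design: this loop is cheap enough to run on a
    # milliwatt-class signal processor, so the main CPU stays asleep
    # until the phrase is actually heard.
    for chunk in audio_chunks:
        if heard_wake_phrase(chunk):
            wake_main_system()
            return
        time.sleep(0.01)  # idle cheaply between frames

if __name__ == "__main__":
    dsp_listen_loop(["keyboard clatter", "office chatter", "Cortana, wake up"])
```

The design choice that matters is that only this lightweight listening loop stays powered; everything else sleeps until the wake phrase is detected.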

Since there aren't any real reports of this being used in the wild just yet, we don't know how well it will perform in less-than-ideal conditions. Will you need a decent microphone? Will you have to wear a headset to make it work?

Perhaps more importantly, though, does this mean the PC is always listening? And if so, does that mean the NSA may be listening, too? We'd imagine so, and honestly, that dampens our excitement a little.

Jon Martindale