
Passwords are so last season, ‘pass-thoughts’ let you log in with your mind


We try to make our passwords as secure as possible, but sometimes even complicated character strings can be vulnerable. That’s why companies, researchers, and organizations continue to search for ways to make accessing devices and accounts more secure. Google, for one, wants to replace passwords with USB sticks and smart rings. A team of researchers from the UC Berkeley School of Information, on the other hand, devised a way to unlock gadgets and accounts with brain waves. 

To use “pass-thoughts,” as the team calls them, in place of passwords, the researchers turned to an affordable, readily available Bluetooth headset with a built-in electroencephalogram (EEG) sensor, the NeuroSky MindSet. Pass-thoughts were previously considered unfeasible because EEG devices are very expensive, but a $199 headset could make them a reality. Testing the MindSet on subjects, the team found that to capture a strong enough brainwave signal, users had to perform seven mental tasks, with the headset calibrated for each one, so that nobody else’s thoughts could unlock their devices and accounts.

The team determined that the most effective way to implement pass-thoughts is to ask users to perform a mental task that’s neither too complicated nor too boring. In tests, users were bored by imagining their finger sliding up and down to unlock something, but when asked to invent their own pass-thoughts, they came up with something too complicated and hard to recreate. The tasks researchers found most effective were singing a song of one’s choice, counting objects of the same color, and focusing on one’s breathing. With the headsets calibrated and these mental tasks serving as pass-thoughts, error rates fell below 1 percent.
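The article doesn’t describe the researchers’ actual matching algorithm, but the general idea of calibration-then-matching can be sketched. The toy code below is purely illustrative, assuming each EEG recording is reduced to a numeric feature vector: calibration recordings are averaged into a per-user template, and a login attempt is accepted only if its cosine similarity to the template clears a threshold. All function names, the vector size, and the threshold are invented for illustration.

```python
import numpy as np

def enroll(calibration_epochs):
    # Hypothetical enrollment: average calibration feature vectors into a template.
    return np.mean(calibration_epochs, axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(template, attempt, threshold=0.95):
    # Accept only if the attempt closely matches the enrolled template.
    return cosine_similarity(template, attempt) >= threshold

# Synthetic stand-ins for EEG features (not real brainwave data).
rng = np.random.default_rng(0)
user_signal = rng.normal(size=16)
# Seven noisy calibration recordings, echoing the seven mental tasks.
calibration = np.stack(
    [user_signal + rng.normal(scale=0.05, size=16) for _ in range(7)]
)
template = enroll(calibration)

genuine = user_signal + rng.normal(scale=0.05, size=16)   # same "thinker"
impostor = rng.normal(size=16)                            # someone else

print(authenticate(template, genuine))
print(authenticate(template, impostor))
```

The threshold is what trades off false accepts against false rejects; a system tuned this way is how one would drive error rates down toward the sub-1-percent figure the team reports.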

The team presented its findings at the 2013 Workshop on Usable Security, held alongside the 17th International Conference on Financial Cryptography and Data Security in Japan in early April. While the UC Berkeley team’s method sounds promising, more research is needed, and companies must be willing to invest in EEG-enabled headsets before pass-thoughts become widely used.

[Image via UC Berkeley School of Information]

Mariella Moon
Former Digital Trends Contributor