
Researchers are using neural networks to get better at reading our minds

Researchers are doing a remarkable job of scanning the human brain and extracting usable information from it. Known as brain decoding, this technology could help cure some forms of blindness and let the brain serve as an input device for PCs and other hardware.

One of the technologies used in brain decoding is functional magnetic resonance imaging (fMRI), which can determine brain states while certain mental functions are being carried out. One example is reconstructing visual stimuli, and a group of researchers has found a way to extract cleaner and more accurate data, as Engadget reports.

Essentially, researchers in China applied neural network algorithms to the process of mapping brain scan data to what a person sees. As the illustration below shows, different algorithms achieve varying degrees of accuracy in recreating what a person is seeing from real-time fMRI scans.


The researchers’ Deep Generative Multiview Model (DGMM) produces an uncanny representation of the letters being viewed by a test subject. In effect, the decoding process reads the subject’s mind and displays the result on-screen. While the technical details are incredibly complex, the overall concept is relatively simple: use neural network algorithms to make the mapping of real-time data vastly more accurate.
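To make the concept concrete, here is a deliberately simplified sketch of visual decoding. The real DGMM is a deep generative model; this toy version uses entirely synthetic data and a plain linear map to illustrate the core idea of learning to invert the brain's response, turning voxel activations back into the image that caused them. All sizes and variable names are illustrative assumptions, not figures from the research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper):
n_voxels, n_pixels, n_trials = 200, 64, 500

# Simulated "encoding": each stimulus image drives a pattern of
# voxel activations, plus measurement noise.
true_encoding = rng.normal(size=(n_pixels, n_voxels))
images = rng.normal(size=(n_trials, n_pixels))            # stimuli shown
voxels = images @ true_encoding + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Decoding: learn the inverse map (voxels -> image) by least squares,
# i.e. fit a linear decoder to the (scan, stimulus) training pairs.
decoder, *_ = np.linalg.lstsq(voxels, images, rcond=None)

# Reconstruct a held-out stimulus from its simulated brain response.
test_image = rng.normal(size=(1, n_pixels))
test_voxels = test_image @ true_encoding
reconstruction = test_voxels @ decoder

# How close is the reconstruction to what the "subject" actually saw?
corr = np.corrcoef(test_image.ravel(), reconstruction.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.3f}")
```

In this noiseless-by-construction setting the linear decoder recovers the stimulus almost perfectly; the hard part of real brain decoding, and the reason deep generative models like DGMM help, is that actual fMRI data is noisy, high-dimensional, and related to the stimulus in a highly nonlinear way.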

The applications for this kind of technology are mind-bogglingly exciting. While this particular research handled only the brain’s processing of simple visual data, more accurate systems could potentially handle more complex images and even video. Should the technology progress that far, applications could emerge for controlling devices with the brain, analyzing dreams, and treating some forms of blindness.

Future work will be aimed at perfecting the algorithms and neural networks with an eye to reconstructing dynamic vision. In addition, the researchers are looking at how to use the fMRI imaging measurements for multi-subject decoding. If they succeed, then it will not be too long before scientists can read our minds and act on that data — which is both a promising and terrifying proposition.

Mark Coppock
Mark has been a geek since MS-DOS gave way to Windows and the PalmPilot was a thing. He’s translated his love for…