
Wireless brain implant could make controlling PCs with your mind a reality


An increasing number of technologies are being developed to help paralyzed people communicate with the rest of the world. According to Geekosystem, Brown University is working on one such technology: the institution revealed today that it has created a wireless, rechargeable brain implant that could one day let people with paralysis control electronic devices with their thoughts.

Brown University describes the device, which measures 2.2 inches long, 1.65 inches wide, and 0.35 inches thick, as a “miniature sardine can with a porthole.” Its titanium shell houses low-power circuits, wireless radio and infrared transmitters, a lithium-ion battery, and a copper coil for charging. Electrodes pick up signals from the brain, and the sardine can-like implant transmits that data to an external receiver at 24 Mbps over 3.2 GHz and 3.8 GHz microwave frequencies. On a full battery, it can run for up to six hours, after which it must be recharged through the scalp for two hours via wireless induction.

Engineering professor and project head Arto Nurmikko says the device has “features that are somewhat akin to a cell phone, except the conversation that is being sent out is the brain talking wirelessly.”

While Brown’s technology sounds incredibly promising for the medical field, it will be some time before it is used on actual patients. The university has successfully tested the implants on three pigs and three rhesus macaque monkeys, but the device has yet to be approved for human clinical trials.

Image via GreenFlames09/Flickr

Mariella Moon
Former Digital Trends Contributor