New graphene nanotubes could bring faster, microscopic computers

Moore’s law may continue on its unyielding plod to make processors cheaper and faster, and computers may continue to shrink. But some have started to worry that, with current materials and structures, the curve will flatten out at some point and we won’t be able to do either.

A team at the University of Nottingham in the United Kingdom reports that it may have helped alleviate some of those worries. The researchers built nanotubes of graphene (a meshwork of only carbon atoms) and sulphur that have proven effective at encapsulating nanoscale reactions involving just a few molecules. Nanoribbons could end up playing a central role in the concept of nanoswitches, nanoactuators, and nanotransistors.

“Nanoribbons—far from being simple, flat and linear structures—possess an unprecedented helical twist that changes over time, giving scientists a way of controlling physical properties of the nanoribbon, such as electrical conductivity,” said Andrei Khlobystov of the University of Nottingham.

The basic idea is that the sulphur atoms, pointing towards the inside of the tube and carrying a charge, would facilitate the needed chemical reactions. If we could alter and manage a charge differential along these tubes, then, in theory, we could move and shift matter one atom at a time, given the proper controls.

To get a sense of how small a typical nanoribbon is: you would have to lay 80,000 of them side by side to reach the width of a human hair.

Nanotube research is still in a relatively early stage, but its potential benefits cannot be denied.

(Image courtesy of WikiCommons)

Caleb Garling
Former Digital Trends Contributor