Scientists design crash-proof computer based on nature’s chaos

You might think your souped-up computer is great at handling any task you throw at it without crashing, but University College London (UCL) has a computer that actually never crashes. According to New Scientist, the “systemic” computer, as it’s dubbed, was designed by UCL’s Peter Bentley and Christos Sakellariou to mimic nature’s chaos and randomness.

The systemic computer prevents an impending crash by quickly repairing corrupted data and carrying out several tasks simultaneously. Say you give the computer something to do. It divides that task and its data into several copies, or “systems,” which are executed all at once, so if one system crashes, the computer can simply turn to another copy to carry out your command. An ordinary computer, by contrast, works through its instructions in a linear fashion. It doesn’t create several copies the way the systemic computer does, so if it can’t access a part of its memory that it needs to execute a task, it crashes.
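To make that redundancy idea concrete, here is a minimal Python sketch: several copies of the same task run at once, and the result comes from whichever copy survives. The function names and the simulated failures are invented for illustration; this is not UCL's actual systemic architecture.

```python
# Toy sketch of the redundancy principle described above -- not UCL's design.
# Several copies ("systems") of the same task run at once; one crashed copy
# doesn't stall the job because another copy can still deliver the result.
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def flaky_task():
    """Stand-in workload: randomly 'crashes' to mimic corrupted data."""
    if random.random() < 0.4:
        raise RuntimeError("this copy crashed")
    return sum(range(10))

def systemic_run(task, copies=5):
    """Run several copies of the task in parallel; return the first good result."""
    with ThreadPoolExecutor(max_workers=copies) as pool:
        futures = [pool.submit(task) for _ in range(copies)]
        for future in as_completed(futures):
            try:
                return future.result()   # any surviving copy is good enough
            except RuntimeError:
                continue                 # a crashed copy is simply ignored
    raise RuntimeError("every redundant copy failed")

if __name__ == "__main__":
    print("result:", systemic_run(flaky_task))   # 45 unless all copies fail
```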

Bentley and Sakellariou are working on giving the computer the ability to rewrite its own code in response to environmental factors. In the future, this super-smart computer could be used for scientific research and mission-critical machines, like drones that can reprogram themselves in response to damage, or remote search-and-rescue robots that can make adjustments based on their environment.

Photo via billjacobus1/Flickr

Mariella Moon