
Phoenix Technologies Goes into HyperSpace

Phoenix Technologies, maker of the humble system BIOS, may not be accustomed to buzz over its product introductions, but it got plenty on Monday when it announced HyperSpace, a sort of operating system substitute.

Hoping to improve both boot speed and power efficiency in laptops, Phoenix developed HyperSpace to offer instantly available applications. Instead of booting into an operating system like OS X or Windows, HyperSpace will allow PC vendors to embed certain software programs into their computers that launch directly from system start-up, eliminating the need for a full operating system and the wasted time and energy that go with it.

"For most of us, today’s computing experience is a lot like air travel – offering tremendous possibilities, but plagued with security issues, delays and system failures," said Woody Hobbs, president and CEO of Phoenix Technologies, in a statement. "HyperSpace introduces a new framework to transform the personal computing experience through purpose-driven appliances that work within the HyperSpace environment.”

Besides Phoenix’s own proof-of-concept prototypes, no computers have yet been built using the HyperSpace platform, but the company is working with industry partners and manufacturers to develop such systems for consumer use.

Nick Mokey