
Hitachi, Phoenix Offer Remote-Erasable HDDs


Hitachi and Phoenix Technologies have announced a partnership to offer Phoenix FailSafe on notebook computers equipped with 2.5-inch Hitachi hard disk drives. FailSafe is intended as a safety net in the event a notebook (and its hard drive) is lost or stolen: users will be able to remotely track the machine and, if necessary, disable or cryptographically erase its hard drive, keeping their data safe.

The FailSafe agent can be installed on an encrypted Hitachi hard drive at the factory. If a thief removes the agent, the authorized owner will be able to remotely reinstall it and, if necessary, perform a cryptographic “erasure” of the drive by deleting the 128-bit encryption key. Without the key, the drive controller’s system-on-a-chip can no longer decrypt the drive’s contents, effectively rendering the drive unreadable.
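The principle behind cryptographic erasure can be sketched in a few lines of Python. This is a toy illustration only: it uses a SHA-256-derived keystream as a stand-in for the drive's hardware AES-128 engine, and the variable names (`media_key`, `keystream_cipher`) are hypothetical, not anything from Hitachi's or Phoenix's actual design. The point it demonstrates is simply that once the only copy of the key is destroyed, the bytes still physically on the platters are useless.

```python
import hashlib
import os

def keystream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream.

    A toy stand-in for the drive's AES-128 engine; the same call
    both encrypts and decrypts.
    """
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# The drive controller holds the only copy of the media key.
media_key = os.urandom(16)  # 128 bits, as in the article
plaintext = b"confidential spreadsheet"
on_disk = keystream_cipher(media_key, plaintext)  # what is physically stored

# Normal operation: the controller decrypts transparently.
assert keystream_cipher(media_key, on_disk) == plaintext

# Crypto-erase: destroy the key. The ciphertext on disk is untouched,
# but nothing can decrypt it anymore.
media_key = None
```

Because only the tiny key is deleted rather than the whole disk being overwritten, a crypto-erase completes in moments even on a large drive, which is what makes it practical to trigger remotely.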

“All of our 2007 and 2008 models of 2.5-inch mobile hard drives can be enabled at the factory to utilize a powerful AES-128 encryption engine,” said Hitachi Global Storage Technologies’ director of product planning Masaru Masuda, in a statement. “By incorporating Phoenix FailSafe technology with our mobile hard drives, OEMs can deliver advanced security to prevent data loss and theft. The ability to remotely and securely erase or disable disk drives on mobile PCs offers the next-generation protection our customers need to stand apart from competitors.”

FailSafe is probably most interesting to enterprises and organizations trying to manage fleets of notebook computers, but a few consumers—especially heavy travelers—might be interested in the technology as well.

Geoff Duncan
Former Digital Trends Contributor