
Sony USB Drives Open Security Hole

In a development eerily reminiscent of the Sony DRM rootkit fiasco of 2005 (in which the company tried to protect music CDs from copying with software that exposed users to security threats), computer security firms are warning that the fingerprint-reading software for Sony's MicroVault USM-F USB drives with integrated fingerprint readers may expose Windows users to security risks. Like the CD copy-protection software, the fingerprint-reader software hides key files from tampering by the user or by security programs; in doing so, it potentially creates a "safe zone" from which attackers and malware could run software or otherwise compromise a user's computer.
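The kind of hiding described here typically works by filtering a directory out of normal enumeration while its contents remain reachable by exact path. Security tools detect this with "cross-view" comparison: asking for a directory listing and separately probing suspected paths directly, then flagging any mismatch. The sketch below is a minimal, hypothetical illustration of that detection idea in Python (it is not Sony's software or any vendor's actual scanner, and on an uncompromised filesystem it will find nothing):

```python
import os

def cross_view_check(directory, suspected_names):
    """Flag files that exist on disk but are missing from the
    directory listing -- a mismatch characteristic of rootkit-style
    hiding, where enumeration is filtered but direct access works."""
    listed = set(os.listdir(directory))
    hidden = []
    for name in suspected_names:
        path = os.path.join(directory, name)
        # A direct probe succeeds, but enumeration omitted the name:
        # something between us and the filesystem is hiding it.
        if os.path.exists(path) and name not in listed:
            hidden.append(name)
    return hidden
```

Real detectors compare views at a much lower level (user-mode API results against raw volume structures), but the principle is the same: two views of the filesystem that should agree, and don't.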

Unlike the music CD software, the USB fingerprint-reader software is not installed clandestinely or without users' informed consent: to use the drive's fingerprint-reading functionality, users must explicitly install software to support it. The fingerprint-reading software also does not hide its components as deeply as the XCP copy-protection software did, and does not alter the Windows registry or run hidden processes. And unlike the music CD copy protection, the fingerprint-reader software is explicitly designed to help users protect their own data, rather than to regulate access to Sony-licensed content.

Summaries of the fingerprint-reader software's behavior are available from F-Secure and McAfee.

Sony typically doesn't develop driver software for its computer peripherals in-house, instead outsourcing the work to third-party developers. The MicroVault USM-F has been on the market for a few years, but still appears to be available from Sony.

Sony has not yet commented on reports that the fingerprint-reading software could be used as a vector of attack on Windows computers.

Geoff Duncan
Former Digital Trends Contributor