
Microsoft issues fix to address Windows USB vulnerability

[Image: USB stick. Credit: Vladim Molochnikov, via Flickr]

If you haven’t updated Windows in recent days, do so now, especially if you tend to plug random USB drives into your computer, or if someone else uses your computer at home. A recently issued Windows update contains a patch for a vulnerability that lets malware introduced via thumb drives compromise your system.

When a compromised flash drive is plugged into a computer, the system can automatically execute malicious code that installs viruses and keyloggers, giving attackers remote access to your sensitive files and data. Companies with large networks of interconnected computers are most at risk, since all it takes is one not-so-tech-savvy worker using a USB stick of unknown origin. A notable example is the 2008 virus outbreak at a U.S. military base in the Middle East: malware from an unknown thumb drive plugged into a laptop went on to infect the base’s entire network, including computers holding classified information.
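
Attacks like this lean on Windows automatically acting on newly inserted media. One widely recommended hardening step, which complements but does not replace the patch, is disabling AutoRun for all drive types through the registry. Below is a minimal Python sketch of that tweak; the registry key and value name are the standard documented ones, but treat the script as an illustration rather than official Microsoft guidance, and note it must be run with administrator rights.

import winreg

# Disable AutoRun for all drive types (0xFF sets every drive-type bit),
# a common hardening step against USB-borne malware. Run as administrator;
# this does not substitute for installing the security patch itself.
KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)

print("AutoRun disabled for all drive types; log off or reboot to apply.")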

If you have automatic updates enabled, you probably already have the patch. Otherwise, you can install it manually via the Microsoft Update service. You can check out the details of the patch in the security bulletin issued alongside its release.
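
If you’d rather confirm the patch is present than trust automatic updates, Windows lists installed hotfixes by KB number. Here is a minimal Python sketch that checks for one; the KB_ID below is a placeholder, not the real identifier for this patch, so substitute the KB number given in Microsoft’s security bulletin.

import subprocess

# Placeholder KB number for illustration only; look up the actual KB ID
# in the security bulletin that accompanies the patch.
KB_ID = "KB0000000"

# "wmic qfe get HotFixID" lists installed Windows hotfixes by KB number.
result = subprocess.run(
    ["wmic", "qfe", "get", "HotFixID"],
    capture_output=True, text=True, check=True,
)

installed = {line.strip() for line in result.stdout.splitlines() if line.strip()}
if KB_ID in installed:
    print(f"{KB_ID} is installed; you have the patch.")
else:
    print(f"{KB_ID} not found; run Windows Update to install it.")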


Mariella Moon
Former Digital Trends Contributor