Warning from police: Never plug in a USB stick you get in the mail

With recent surveys suggesting a good many people find it hard to resist popping a found USB stick into their computer, it’s no surprise that hackers are using them to try to spread malware.

Cops in Australia reported this week that a number of the diminutive storage drives have been left in the mailboxes of residents in a suburb of Melbourne.

Curiosity has clearly gotten the better of some of the recipients, a number of whom have learned to their cost that it’s really not a good idea to plug such an item into a computer when you have no idea where it came from.

Without offering much in the way of detail, police described the contents of the unlabeled sticks as “extremely harmful,” adding that residents who plugged them into their PCs experienced “serious issues” with their machines.

It’s not yet known who’s behind the mysterious deliveries.

A U.S. study earlier this year found that nearly half of 297 USB sticks placed randomly around a university campus were picked up and inserted into computers.

Hackers can use the sticks in a number of ways. They could load them with malware that infects a system without the user realizing it. Such malware could pull personal information from the computer and send it back to the hacker, or lock the machine until a ransom is paid.

A stick could also carry malicious software that, once activated, reads keystrokes, giving the hacker access to the computer owner’s usernames and passwords, as well as other personal information.

Either way, plugging a found USB stick into your computer, whether out of curiosity or in the hope of identifying its owner so you can return it, isn’t worth the potential hassle. As for an unmarked drive showing up in your mailbox … the only place you should stick that is straight in the trash can.

Trevor Mogg
Contributing Editor