
Hackers could seize robots with ransomware, costing companies millions


Security consultancy IOActive recently created a proof-of-concept ransomware attack aimed at big corporations. Instead of landing on corporate PCs and encrypting files for ransom, the attack targets robots, which are vital in markets such as automobile manufacturing and healthcare. Disrupting these robot-powered environments can cost businesses money every second the machines are offline.

One attack vector relies on how robots handle data. Although they typically include internal storage, most of the data a robot handles remains “in transit”: the robot receives data, processes it, and sends it back to be stored at the source. That data could include high-definition video, captured audio, payments received from customers, instructions for performing the current task, and so on.
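
As a rough, purely hypothetical sketch of that pattern (no vendor’s API is implied, and every name below is made up for illustration), a robot’s main loop might look like this:

    # Hypothetical sketch of the "in transit" data pattern: the robot
    # receives data, processes it, and ships the result back upstream,
    # keeping almost nothing at rest. All names here are illustrative.
    import json
    import time

    def capture_sensor_data():
        # Stand-in for camera, microphone, or payment input.
        return {"timestamp": time.time(), "payload": "sensor reading"}

    def process(sample):
        # Stand-in for on-board processing of the current task.
        return {"processed_at": time.time(), "input": sample}

    def send_to_backend(result):
        # Stand-in for the network hop back to the data's source.
        print("shipping upstream:", json.dumps(result))

    for _ in range(3):
        result = process(capture_sensor_data())
        send_to_backend(result)  # nothing of value stays on local disk

Encrypting such a robot’s local storage holds almost nothing hostage, which is why the researchers propose bricking the software itself instead.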

“Instead of encrypting data, an attacker could target key robot software components to make the robot non-operational until the ransom is paid,” the researchers state. 

To prove their theory, the researchers focused their attack on NAO, a widely used robot in the research and education fields with roughly 10,000 units in active duty across the globe. It has “nearly the same” operating system and vulnerabilities as SoftBank’s Pepper, a business-oriented robot with 20,000 units deployed across 2,000 businesses. Even Sprint uses Pepper to assist customers in its retail stores.

The attack starts by exploiting an undocumented function that allows anyone to remotely execute commands on the robot. From there, attackers can disable administration features, change the robot’s default functions, and route all video and audio feeds to a remote server on the internet. Other steps include elevating user privileges, disrupting the factory reset mechanism, and infecting all behavior files. In other words, attackers can make the robot very unpleasant, even physically harmful.
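
For a sense of how exposed that remote interface is, here is a minimal sketch using the official naoqi Python SDK. Everything in it is hedged: the IP address is a placeholder, the calls shown are ordinary documented module methods, and the undocumented command-execution function IOActive exploited is deliberately not reproduced. The point is simply that NAOqi’s default port accepts these calls with no authentication at all.

    # Minimal sketch: connecting to a NAO robot's NAOqi framework, which
    # listens on TCP port 9559 and, per IOActive's findings, requires no
    # authentication. The IP address is a placeholder for illustration.
    from naoqi import ALProxy

    ROBOT_IP = "192.168.1.42"   # placeholder: any reachable NAO on the LAN
    PORT = 9559                 # NAOqi's default port

    # Bind to documented modules exactly as a legitimate client would;
    # no credentials are ever requested.
    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
    behaviors = ALProxy("ALBehaviorManager", ROBOT_IP, PORT)

    # Enumerate the behavior files an attacker could later tamper with.
    print(behaviors.getInstalledBehaviors())

    # Harmless demonstration; the same unauthenticated channel is what
    # the proof of concept used to reach far more dangerous functions.
    tts.say("Anyone on this network can control me.")

From that same foothold, the proof of concept moved on to the privilege escalation and behavior-file infection described above.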

By hijacking robots, hackers could interrupt service altogether, costing corporations money with each passing moment. They could even force the robots to show explicit porn to customers, curse at them during one-on-one interactions, or perform violent movements. The only way to reverse the behavior is to give in to the hackers because, ultimately, paying the ransom could be cheaper than repairs.

That scenario even applies to sex robots given the privacy and intimacy aspects. Users will likely shell out money to hackers rather than call technical support, deal with customer care, and arrange for someone to get the unit for “repairs.” At least sex robots don’t have any moving parts … or rather, not yet. 

“They aren’t cheap,” the report states. “It’s not easy to factory reset them or fix software and hardware problems. Usually, when a robot malfunctions, you have to return it to the factory or employ a technician to fix it. Either way, you may wait weeks for its return to operational status.” 

The researchers compare disrupting robots in corporate environments to halting cryptocurrency mining farms. Interrupt those PCs with ransomware and miners lose money every second those devices aren’t online digging for digital coins. 
