
Hacking into your hotel room is easier than you might think

Old, insecure protocols could be giving unwanted guests access to hotel rooms across the globe, according to research carried out by one enterprising hacker. Spanish security researcher Jesus Molina has been speaking to Wired about several vulnerabilities he has discovered, which he plans to present at the Black Hat security conference next month.

Door locking mechanisms remained secure, but Molina was able to easily take control of thermostats, lights, TVs and window blinds across the hotel he stayed at. “I could have changed every channel in every room so everybody could watch soccer with me,” he says, “but I didn’t.”

The key to the hack was a 'digital butler' app running on an iPad and an ageing communications standard called KNX. The app lets guests control the various pieces of equipment in their rooms, but because its traffic carries no authentication, it can easily be taken over by someone in the next room or sitting in the lobby. If the right Trojan horse were installed, the app could even be controlled from the other side of the world.
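To see why unauthenticated control traffic is so easy to abuse, here is a minimal, illustrative sketch (not Molina's actual tooling) that builds a KNXnet/IP routing frame carrying a GroupValueWrite command, the kind of telegram used to switch a light or a blind. The field layout follows the published KNX standard; the source and group addresses below are hypothetical. Note what is absent: there is no authentication or encryption field anywhere in the frame.

```python
def knx_group_write(src: int, dst_group: int, value: int) -> bytes:
    """Build a KNXnet/IP routing frame carrying a 6-bit GroupValueWrite."""
    # cEMI payload: an L_Data.ind telegram addressed to a group address.
    cemi = bytes([
        0x29,                    # message code: L_Data.ind
        0x00,                    # additional info length: none
        0xBC,                    # ctrl1: standard frame, no repeat
        0xE0,                    # ctrl2: group-addressed, hop count 6
        src >> 8, src & 0xFF,    # source individual address
        dst_group >> 8, dst_group & 0xFF,  # destination group address
        0x01,                    # NPDU length
        0x00,                    # TPCI: unnumbered data
        0x80 | (value & 0x3F),   # APCI GroupValueWrite + 6-bit value
    ])
    total = 6 + len(cemi)
    header = bytes([
        0x06, 0x10,              # header length, protocol version 1.0
        0x05, 0x30,              # service type: ROUTING_INDICATION
        total >> 8, total & 0xFF,
    ])
    return header + cemi

# Hypothetical group address 1/2/3, value 1 ("on").
frame = knx_group_write(0x11FF, (1 << 11) | (2 << 8) | 3, 1)
print(frame.hex())
```

In a classic KNX installation, any device that can reach the bus (or the KNXnet/IP multicast group) and emit a well-formed frame like this is trusted; nothing in the protocol proves who sent it, which is exactly the assumption Molina exploited.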

“Guests make assumptions that the channel they are using to control devices in their room is secure,” explains Molina, but that isn’t necessarily the case. “I didn’t have to be in the hotel to do what I did. I could have done it from anywhere. I could use a very big antenna from the next building.”

The hotel that Molina was staying at was the five-star St Regis in Shenzhen, China, but he believes the same systems are installed at many other locations in Asia, Europe and the United States. When the problems were reported to the St Regis, staff immediately took action, although fixing the issue required a wholesale upgrade of the network.

The problem is made more urgent by the fact that KNX is increasingly used in home automation networks as well. “People are reusing protocols that are not meant for the Internet of Things,” says Molina. “Using protocols like KNX for home automation makes no sense for wireless. This guerrilla war we’re playing with the Internet of Things can get dangerous. This is not something I say lightly.”

[Image: Eviled / Shutterstock.com]

David Nield