A cyberattack in Dallas managed to set off all 156 alarms in the city

Image credit: rafaelbenari/Flickr
In what may have been the loudest cyberattack ever, a breach of Dallas' emergency warning system triggered an hour and a half of blaring sirens. The Texas city maintains 156 sirens meant to warn residents of danger; the sirens became the nuisance themselves when the entire system was compromised late Friday night into Saturday morning.

“At this point, we can tell you with a good deal of confidence that this was somebody outside of our system that got in there and activated our sirens,” city Emergency Management Director Rocky Vaz told reporters. The hack is believed to have been carried out by someone in the area, city spokeswoman Sana Syed revealed in an email statement.

[Video: Sirens going off in Dallas, Texas. Warning: contains adult language.]

Given that the hack is said to be the largest ever involving emergency sirens, experts are on high alert. “This is a very, very rare event,” Vaz said. While most such hacks manage to trigger only a couple of sirens, this breach was significantly more extensive.

As it stands, city engineers are resetting the alert system and are slated to complete their work by the end of the weekend. For now, that means Dallas residents (all 1.6 million of them) will have to rely on local media, 911 emergency calls, and a federal radio alert system should any serious situation arise. This attack goes far beyond an annoying prank.

This isn’t the first time an emergency system has been compromised. Indeed, cybersecurity officials have previously expressed concern over the entire 911 system, which has also proven vulnerable. Currently, the Dallas hack is being examined by system engineers. While the Federal Communications Commission has been contacted, police have not yet been called in.

Lulu Chang
Former Digital Trends Contributor