
Dell Streak publicity stunt ends in company arrests

This isn’t going to get you any cool points, Dell. In an attempt at viral marketing, company employees Bryan Chester and Daniel Rawson were arrested on Monday for interfering with public duties and deadly conduct. According to Round Rock, Texas police reports, multiple 911 calls reported a strange man carrying two metal objects inside Dell headquarters. Some callers even described the intruder as a “masked gunman.” Wearing all black and a skull mask, the man yelled at bystanders “to go to the lobby.” In the understandable panic and chaos that followed, two arrests were made. Read part of the report below:

“Essentially, a member of the marketing group (who wore dark clothing with a face-hiding, skull-pattern mask) held aloft small metallic items as he rushed through densely staffed areas while yelling ‘go to the lobby,’ believed by many of the 400-plus witnesses to be directives under armed threat (as was reported to police).”

Chester was apparently the masked man, dressed in biker attire in an attempt to promote the forthcoming Dell Streak’s Harley-Davidson syncing capabilities. Rawson, his supervisor, was arrested for his “unwillingness to cooperate with police and refusal to comply with police instructions.”

Sigh. Nice try, Dell. You’ve embarrassed motorcycle enthusiasts and advertisers everywhere.
