
Guy claims to delete his whole company — turns out it’s a hoax

Your week wasn’t worse than this guy’s — unless you also pretended to delete your entire company, and the work of hundreds of customers, with a single line of code, and were then found out after major news outlets picked up your fake woe-is-me story.

Updated on 4-17-2016 by Lulu Chang: Marco Marsala’s alleged company deletion was nothing more than a marketing hoax.

Marco Marsala, who runs a web hosting company, claimed to have made a fatal error that deleted not only his own service’s data, but that of his clients as well. His code included the command “rm -rf,” which effectively tells a computer to delete just about everything: “rm” is the remove command, “-r” makes it recursive so it descends into every directory, and “-f” forces it past the prompts and safeguards that would normally prevent such a disaster.

The small business owner then posted his alleged mistake to Server Fault, a question-and-answer forum for server administrators. “I run a small hosting provider with more or less 1,535 customers and I use Ansible to automate some operations to be run on all servers,” he wrote. “Last night, I accidentally ran, on all servers, a Bash script with a rm -rf {foo}/{bar} with those variables undefined due to a bug in the code above this line.”

“All servers got deleted and the offsite backups too because the remote storage was mounted just before by the same script (that is a backup maintenance script). How I can recover from a rm -rf / now in a timely manner?”
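
For readers unfamiliar with the failure mode Marsala described, here is a minimal sketch assuming a hypothetical backup-maintenance script; the variable names are illustrative and not taken from his actual code.

    #!/usr/bin/env bash
    # Hypothetical backup-maintenance snippet illustrating the claimed bug.
    # If an earlier error leaves BACKUP_ROOT and CLIENT_DIR unset, the path below
    # expands to "/", so rm recurses (-r) from the root and forces (-f) its way
    # past any prompts.
    rm -rf "$BACKUP_ROOT/$CLIENT_DIR"

    # Safeguards that would have stopped it:
    # set -u                                      # abort when expanding an unset variable
    # rm -rf "${BACKUP_ROOT:?}/${CLIENT_DIR:?}"   # fail with an error instead of expanding to "/"

It is also worth noting that modern GNU rm refuses to run recursively on “/” unless it is explicitly passed --no-preserve-root, which would make a bare “rm -rf /” less catastrophic than the post implied.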

Apparently, none of this was true.

On Friday, Stack Overflow, the organization behind the forum, told its members they’d been played. The cry for help “was actually just a hoax in some kind of viral marketing effort,” Stack Overflow said, explaining that an Italian newspaper picked up on the joke, and that Marsala told the publication, “it was just a joke.”

“The moderators on Server Fault have been in contact with the author about this, and as you can imagine, they’re not particularly amused by it,” Stack Overflow said in a statement.

We should have known it was too good to be true when Marsala claimed his original story had a happy ending. “We consulted a data recovery company who analyzed one of our 1,500 server disks for a reasonable fee, and after diagnoses, sent … a list of recoverable files. All files are here. Now we’re finding the money to pay [them] for all our servers,” Marsala wrote in an update.

Yeah, because they were never gone to begin with.
