Tumblr blames ‘human error’ for weekend security lapse

Popular blogging service Tumblr has cited “human error” as the cause of a security glitch that may have revealed users’ passwords, API keys, IP addresses, and other personal data.

The alarm was sounded Saturday morning via Twitter. “OMG…The Tumbeasts are spitting out passwords!” the tweet read. The news quickly spread, with armchair hackers taking to forums to debate the extent and cause of the glitch. As it turns out, a PHP coding error was likely to blame for 748 lines of information being made visible.
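
Tumblr never detailed the exact bug, but the kind of slip forum posters speculated about is easy to illustrate. The sketch below is hypothetical PHP, not Tumblr's actual code; the file names, paths, and values are invented. It shows two common ways a single mistake can spill server configuration onto a public page: a mistyped opening tag that causes a credentials file to be echoed as plain text, and verbose error reporting left enabled in production.

<?php
// Hypothetical illustration only -- not Tumblr's actual code.

// Slip #1: a credentials file whose opening tag is mistyped.
// If config.php starts with "<?hp" (or plain text) instead of "<?php",
// the interpreter treats its contents as literal output, so any script
// that includes it serves the raw passwords and API keys to the browser:
//
//     // config.php (imagined contents)
//     <?hp
//     $db_password = 's3cret';
//     $api_key     = 'abc123';
//
//     // index.php
//     require 'config.php';   // the file is echoed, not executed

// Slip #2: verbose error reporting left enabled on a production server.
// Any failure then prints internal details straight into the response.
ini_set('display_errors', '1');
error_reporting(E_ALL);

// Imagined path: the missing file triggers a warning that, with
// display_errors on, prints the full server path to every visitor.
$config = parse_ini_file('/etc/app/secrets.ini');

Either mistake on its own is mundane; what made the Tumblr incident notable was that the exposed output appeared on a high-traffic public page.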

Tumblr responded quickly to fix the problem and followed up with an official statement posted about five hours later. Here’s what Tumblr had to say for itself:

“A human error caused some sensitive server configuration information to be exposed this morning. Our technicians took immediate measures to protect from any issues that may come as a result.

We’re triple checking everything and bringing in outside auditors to confirm, but we have no reason to believe that anything was compromised. We’re certain that none of your personal information (passwords, etc.) was exposed, and your blog is backed up and safe as always. This was an embarrassing error, but something we were prepared for.

The fact that this occurred at all is still unacceptable, and we’ll be seriously evaluating and adjusting our processes to ensure an error like this can never happen again.”

The explanation was likely enough to assuage the fears of Tumblr loyalists, but on the Hacker News forum a contingent was left unconvinced that the breach was merely “an embarrassing error.”

Some commentators went so far as to accuse Tumblr of “criminal negligence.” Others were content to point a finger at the idiosyncrasies of the PHP programming language. A few defended Tumblr, saying that the breach wasn’t as severe as it was made out to be. Either way, Tumblr had dozens of sideline developers offering their debugging expertise pro bono.

In December, Tumblr was taken offline for almost a full day following an issue with its database cluster.
