
Ransomware attackers refuse to decrypt hospital's files after being paid off

Image: Brian A Jackson/Shutterstock
Negotiating with criminals doesn’t always work out, as Kansas Heart Hospital in Wichita learned last week. The hospital paid to get its files back after falling victim to ransomware, but received only “partial access” and a demand for more money, TechSpot reports.

That’s right: the criminals got their ransom, and then decided they wanted more money. The hospital’s president, Dr. Greg Duick, says the hospital is not paying up.

Duick won’t reveal which malware hit the hospital, or how much money was paid to the attackers.

“I’m not at liberty, because it’s an ongoing investigation, to say the actual exact amount,” said Duick. “A small amount was [paid].”

The hospital had a plan for this sort of attack, and it’s not clear why it didn’t work. Without more details from Kansas Heart, it’s hard to say. But there’s at least one bright side.

“The patient information never was jeopardized, and we took measures to make sure it wouldn’t be,” said Duick.

Still, this sort of thing is becoming way too common in America’s hospitals, and any money paid to criminals is money not spent on providing healthcare.

Ransomware encrypts files on the victim’s computers, then demands payment in exchange for restoring access. Typically, victims get their files back after paying up, but in this case it seems the attackers thought they could exploit the situation and squeeze out more money.
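To make the mechanics concrete, here’s a minimal conceptual sketch of the core technique using Python’s third-party cryptography package (pip install cryptography): files get overwritten with ciphertext under a symmetric key that, in a real attack, only the attacker holds. The filename and contents below are hypothetical stand-ins, not anything from the Kansas Heart incident.

```python
# Conceptual sketch of the core ransomware mechanic: symmetric encryption
# with a key only the attacker holds. Requires the third-party
# "cryptography" package. Filename and contents are hypothetical.
from cryptography.fernet import Fernet

# Stand-in victim file so the sketch runs end to end.
with open("records.db", "wb") as f:
    f.write(b"important hospital data")

key = Fernet.generate_key()   # in a real attack, only the attacker has this
cipher = Fernet(key)

# Overwrite the file with ciphertext; the on-disk copy is now useless.
plaintext = open("records.db", "rb").read()
with open("records.db", "wb") as f:
    f.write(cipher.encrypt(plaintext))

# Recovery requires the key -- which is exactly what the ransom is buying.
assert cipher.decrypt(open("records.db", "rb").read()) == plaintext
```

Without that key (or a clean backup), a modern cipher makes brute-force recovery practically impossible, and that’s the entire leverage behind the ransom demand.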

There’s been a rash of ransomware infections in the U.S. healthcare market for a while now, including hospitals in Kentucky and California. Some combination of high-value, irreplaceable information and lagging IT infrastructure makes hospitals a ripe target.

Regular, air-gapped backups could seriously dull the power of such software. If you’ve got another copy of your data, there’s no need to pay the ransom. For our money, that’s the solution hospitals, and every other organization, should be looking at.
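For a rough sense of what that can look like in practice, here’s a minimal Python sketch that snapshots a data directory to an external drive and verifies every copied file by hash. The paths and directory layout are assumptions for illustration, not a recommendation of specific tooling; the key idea is that the backup target should be disconnected once the copy is verified.

```python
# Minimal backup sketch: copy a data directory to an external (ideally
# offline) drive and verify each file by hash. All paths are hypothetical.
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("/var/hospital/records")   # data to protect (assumed path)
DEST_ROOT = Path("/mnt/backup_drive")    # mount point of the external drive

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup() -> Path:
    dest = DEST_ROOT / datetime.now().strftime("snapshot-%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, dest)
    # Verify: a backup you haven't checked is a backup you don't have.
    for src_file in SOURCE.rglob("*"):
        if src_file.is_file():
            copy = dest / src_file.relative_to(SOURCE)
            if sha256(src_file) != sha256(copy):
                raise RuntimeError(f"verification failed for {copy}")
    return dest

if __name__ == "__main__":
    print(f"backup written to {backup()}")
```

Run something like this on a schedule (cron or Task Scheduler) and physically disconnect the drive afterward; a backup that stays mounted can be encrypted right along with the originals.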

Justin Pot