
U.S. government can now sue companies that fail to protect customer data

The Apex Building, headquarters of the Federal Trade Commission in Washington, D.C. (Wikipedia/Carol M. Highsmith collection)
As politicians in various nations continue to debate TPP legislation that would let companies sue governments for lost business, the U.S. government now has a strong precedent to sue companies in return if they fail to protect their customers. It follows a landmark suit in which the Federal Trade Commission (FTC) successfully sued a hotel chain for failing to protect customer data.

In an action brought by the FTC against Wyndham Hotels and Resorts, a U.S. Court of Appeals ruled in favor of the FTC. The FTC claimed that Wyndham had used inadequate security measures to protect customer information; Wyndham, for its part, took the tack of arguing that the FTC was not authorized to bring the lawsuit at all.

The Court of Appeals found that the FTC could bring such an action, which opens the door to potentially many more cases in the future, as several big-name companies have let customer financial and other information slip into the hands of hackers in recent years.

One of the biggest was also one of the most recent, with Ashley Madison, the infidelity ‘dating’ website, having its entire customer database pilfered. The hackers have already released customer names, ages, pictures, emails, passwords and some credit card transaction data to the world.

The FTC would have quite a case there.

However, that and other similar lawsuits might never actually be filed, as Wyndham has one last chance to overturn the ruling. As the DailyDot explains, if Wyndham seeks review by the U.S. Supreme Court, it may be able to have the ruling in favor of the FTC overturned. If that does happen, there will be many CEOs throughout America breathing a sigh of relief.

What companies would you like to see the FTC go after if this ruling stands?

Jon Martindale