
Pro-ethical hacking group hacked, used as malware front

The administrators of the Certified Ethical Hacker (CEH) program, which aims to spread knowledge and know-how about measures to prevent being hacked, have come under fire from security organizations for their own lax security and their slow response to related warnings. Despite being given ample notice, the CEH ignored reports that its site was distributing malware to some of its visitors.

Admittedly, the hack in question was an elusive one. According to a FOX IT report, the Angler exploit kit that had infected the site would only infect visitors who arrived from a major search engine and were using Internet Explorer, a combination likely suggesting a less-than-stellar knowledge of internet security.
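Exploit kits like Angler commonly sit behind a server-side "gate" that filters victims before redirecting them to the landing page, which is why the infection was hard to reproduce. As a purely illustrative sketch (not actual Angler code, and the matching rules are assumptions based on the criteria reported above), such a gate checking the standard HTTP Referer and User-Agent headers might look like this:

```python
# Illustrative sketch of an exploit-kit "gate": redirect only visitors
# who arrived from a major search engine AND are using Internet Explorer.
# The header names are standard HTTP; the matching logic is an assumption.

SEARCH_ENGINES = ("google.", "bing.", "yahoo.")

def should_redirect(referer: str, user_agent: str) -> bool:
    """Return True only for visitors matching the reported infection criteria."""
    from_search = any(s in referer.lower() for s in SEARCH_ENGINES)
    # "MSIE" (IE 10 and earlier) or "Trident" (IE 11) mark Internet Explorer.
    is_ie = "MSIE" in user_agent or "Trident" in user_agent
    return from_search and is_ie
```

Anyone who fails either check simply sees the normal page, so casual testing (or a visit from Chrome, or by typing the URL directly) would reveal nothing amiss.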

What’s heartbreaking is that the visitors were themselves clearly trying to learn — they were, after all, looking up courses on improving security.

But unfortunately, the very site they visited to learn about security was the one making them vulnerable. So a security firm has now gone public (via Ars) with the information, in the hope that it encourages action and discourages people from visiting the site until it's safe again.


The notice states that visitors who meet the criteria for infection may find themselves redirected to the Angler landing page, which then exploits the Flash or Silverlight plugins to load further malware onto the victim's local machine.

Most worrisome is the malware it then dumps on the user's system: TeslaCrypt, a ransomware program that immediately encrypts the user's files and demands a 1.5 bitcoin ransom (equal to around $624) to decrypt them. Visitors to the CEH site could potentially lose all of their important personal files and images.

The malware itself is a very traditional strain, too: it offers only the payment option, with none of the affiliate sign-up schemes that some other ransomware families have adopted.

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…