
Netragard stops selling exploits for fear of how they may be used

Digital security firm Netragard has announced that it is halting its controversial Exploit Acquisition Program (EAP) after it was discovered to have sold exploits to Italian firm Hacking Team, which was recently found to be supplying the same kinds of tools to regimes guilty of human rights violations. Although Netragard still believes zero-day exploits are important, it says it cannot continue to sell them without knowing how they will ultimately be used.

“Our motivation for termination revolves around ethics, politics, and our primary business focus,” said Netragard CEO Adriel Desautels in a blog post. While he argued that it wasn’t a seller’s responsibility to determine what customers would do with its products, in light of what Hacking Team was found to be doing, Netragard could no longer ethically continue selling exploits.

Netragard can afford to make this move because, as Desautels points out, the EAP isn’t the company’s main focus, even though it has proved a strong revenue stream.

While Desautels wants to distance Netragard from Hacking Team’s immoral sales of exploits to countries headed by decried regimes, he did take a moment to defend the development and use of zero-day exploits. Highlighting how the FBI used a flaw in Adobe’s Flash Player in 2013 to help shut down a child pornography ring, he suggested that those who oppose the use of such ‘tools’ are merely uneducated about them.

Moving forward, Netragard will only reintroduce the EAP if a framework is put in place to better regulate it. Desautels did add the caveat, however, that he didn’t want to see the practice of discovering these exploits restricted, as that would hurt those striving to improve software security around the world.

Jon Martindale