
Cybersecurity can't rely too much on artificial intelligence, report says

Cybersecurity pros shouldn’t rely on artificial intelligence and machine learning just yet, according to a new report.

The report, from security firm Carbon Black, surveyed 410 cybersecurity researchers: 74 percent said that AI-driven security solutions are flawed, citing "high false-positive rates," while 70 percent claimed attackers can bypass machine learning techniques.
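
The concern about false positives follows from simple base-rate arithmetic. As a purely illustrative sketch (the rates below are assumptions, not figures from the report), even a highly accurate detector produces mostly bad alerts when genuine attacks are rare:

```python
# Illustrative only: why "high false-positive rates" undermine AI-driven
# alerting. The numbers below are assumptions, not figures from the report.
malicious_rate = 0.001      # assume 1 in 1,000 events is actually malicious
detection_rate = 0.99       # assume the model catches 99% of real attacks
false_positive_rate = 0.05  # assume 5% of benign events trigger an alert

# Probability that a raised alert is a real attack (Bayes' rule):
p_alert = (malicious_rate * detection_rate
           + (1 - malicious_rate) * false_positive_rate)
p_real_given_alert = (malicious_rate * detection_rate) / p_alert

print(f"Share of alerts that are genuine: {p_real_given_alert:.1%}")
# Prints roughly 1.9%: under these assumptions, even a 99%-accurate detector
# buries analysts in noise, which is why respondents want humans in the loop.
```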

The respondents did not write off AI and machine learning as unhelpful; rather, they said the technologies are not yet mature enough to be relied on alone for major security decisions. AI and machine learning should be used "primarily to assist and augment human decision making," the report said.

Eighty-seven percent of those surveyed said it will be more than three years before they feel comfortable trusting AI to make any significant cybersecurity decisions.

AI and machine learning have become more prominent in cybersecurity research and commercial products as a way to keep up with an ever-evolving threat landscape.

Among these new threats are non-malware, or fileless, attacks. As the names suggest, these attacks do not rely on a malicious file or program. Instead, they abuse software already present on a system, making them largely invisible to traditional antivirus programs, which work by detecting suspicious-looking files before acting.
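
To make the contrast with file scanning concrete, here is a minimal, hypothetical sketch of the behavior-based approach that fileless attacks push defenders toward: inspecting what already-installed programs are being asked to do, rather than scanning files on disk. It assumes the third-party psutil package, and the indicator strings are simplified illustrations, not a real rule set:

```python
# Minimal sketch of behavior-based detection, the kind of approach fileless
# attacks force. Assumes the third-party psutil package; real endpoint tools
# use far richer telemetry. The patterns below are illustrative only.
import psutil

# Command-line fragments often associated with abuse of legitimate,
# pre-installed tools (hypothetical, simplified indicators):
SUSPICIOUS_FRAGMENTS = [
    "powershell -enc",        # encoded PowerShell payloads
    "rundll32 javascript:",   # script execution via rundll32
    "regsvr32 /i:http",       # remote scriptlet execution
]

def scan_running_processes():
    """Flag processes whose command lines match known-abuse patterns,
    since there is no malicious file on disk for a scanner to find."""
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or []).lower()
        for fragment in SUSPICIOUS_FRAGMENTS:
            if fragment in cmdline:
                print(f"ALERT pid={proc.info['pid']} "
                      f"name={proc.info['name']} matched '{fragment}'")

if __name__ == "__main__":
    scan_running_processes()
```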

Sixty-four percent of Carbon Black’s respondents said that they had seen an increase in such tactics since early 2016.

“Non-malware attacks will become so widespread and target even the smallest business that users will become familiar with them,” one respondent said. “Most users seem to be familiar with the idea that their computer or network may have accidentally become infected with a virus, but rarely consider a person who is actually attacking them in a more proactive and targeted manner.”

Non-malware attacks will be the scourge of organizations over the next year, the report said, and will continue to demand a human-led response.

Perhaps AI is overpromising what it can do for security. The professionals surveyed in this report point to a future where cybersecurity becomes a battle of "machine versus machine," but for now, it very much remains "human versus human."

Jonathan Keane