Is anti-virus enough? Security professionals say preventative measures are much stronger

As cyber threats become increasingly ubiquitous, confidence in traditional detection-based software like anti-virus is declining. The industry is moving toward preventative measures rather than eliminating threats only after they have been detected.

The results come from Bromium’s Enterprise Security Confidence Report, for which the security firm surveyed 125 professionals to gauge the state of cyber threats and the security industry.

The relentless pace of hacks and data breaches has eroded trust in anti-virus software, said Clinton Karr, senior security strategist at Bromium. “Information security professionals are turning instead to technologies that provide proactive protection, such as threat isolation, as the foundation of their security architecture.”

The survey found that a staggering 92% of respondents are losing confidence in legacy solutions like anti-virus and whitelisting. “That confidence has now been decimated,” said Bromium’s survey. Meanwhile, 78% of those interviewed said that anti-virus software is not effective against general attacks.

Many of the infosec professionals surveyed said they believe endpoint threat isolation solutions to be the most effective. Other respondents said they are placing their faith in intrusion detection or prevention solutions. Elsewhere, 27% of respondents said that network sandboxes are effective. “Detection-based solutions cannot provide the adequate level of protection,” said the survey, a sign of how drastically expectations for security products have shifted in recent years.

At the end of the day, a robust and diverse information security setup is the key to staying safe. Being able to deal with issues as they arise is important, but so is stopping threats from ever getting into your systems in the first place.

Jonathan Keane
Former Digital Trends Contributor
Jonathan is a freelance technology journalist living in Dublin, Ireland. He's previously written for publications and sites…