Hackers are using AI to create vicious malware, says FBI

The FBI has warned that hackers are running wild with generative artificial intelligence (AI) tools like ChatGPT, quickly creating malicious code and launching cybercrime sprees that would have taken far more effort in the past.

The FBI detailed its concerns on a call with journalists, explaining that AI chatbots have fueled all kinds of illicit activity, from scammers and fraudsters perfecting their techniques to terrorists consulting the tools on how to launch more damaging chemical attacks.


According to a senior FBI official (via Tom’s Hardware), “We expect over time as adoption and democratization of AI models continues, these trends will increase.” Bad actors are using AI to supplement their regular criminal activities, they continued, including using AI voice generators to impersonate trusted people in order to defraud loved ones or the elderly.

It’s not the first time we’ve seen hackers take tools like ChatGPT and twist them to create dangerous malware. In February 2023, researchers from security firm Check Point discovered that malicious actors had been able to modify a chatbot’s API to generate malware code, putting virus creation at the fingertips of almost any would-be hacker.

Is ChatGPT a security threat?


The FBI takes a very different stance from some of the cybersecurity experts we spoke to in May 2023. They told us that the threat from AI chatbots has been largely overblown, with most hackers finding better code exploits through more traditional data leaks and open-source research.

For instance, Martin Zugec, Technical Solutions Director at Bitdefender, explained that “The majority of novice malware writers are not likely to possess the skills required” to bypass chatbots’ anti-malware guardrails. On top of that, Zugec noted, “the quality of malware code produced by chatbots tends to be low.”

That offers a counterpoint to the FBI’s claims, and we’ll have to see which side proves to be correct. But with ChatGPT maker OpenAI discontinuing its own tool designed to detect chatbot-generated plagiarism, the news has not been encouraging lately. If the FBI is right, there could be tough times ahead in the battle against hackers and their attempts at chatbot-fueled malware.
