ChatGPT just created malware, and that’s seriously scary

A self-professed novice has reportedly created powerful data-mining malware using nothing but ChatGPT prompts, all within the span of a few hours.

Aaron Mulgrew, a Forcepoint security researcher, recently shared how he created zero-day malware using only OpenAI’s generative chatbot. While OpenAI has protections against anyone attempting to ask ChatGPT to write malicious code, Mulgrew found a loophole by prompting the chatbot to write the malicious code piece by piece, one function at a time.

After compiling the individual functions, Mulgrew had a nigh-undetectable data-stealing executable on his hands. And this was not your garden-variety malware, either: it was as sophisticated as malware built for nation-state attacks, able to evade every detection-based vendor.

Just as crucially, Mulgrew’s malware differs from “regular” nation-state malware in that it didn’t take a team of hackers, or anywhere near the usual time and resources, to build. Mulgrew, who didn’t write any of the code himself, had the executable ready in just hours as opposed to the weeks usually needed.

The Mulgrew malware (it has a nice ring to it, doesn’t it?) disguises itself as a screensaver app (SCR extension), which then auto-launches on Windows. The software then sifts through files (such as images, Word docs, and PDFs) for data to steal. The impressive part is that the malware uses steganography to break the stolen data into smaller pieces and hide them inside images on the computer. Those images are then uploaded to a Google Drive folder, a step that helps the theft avoid detection.
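To make the steganography idea concrete, here is a minimal, generic sketch of least-significant-bit (LSB) hiding, the textbook technique the article alludes to. It is not Mulgrew’s code and has nothing to do with stealing or uploading files; it simply embeds a short text string in the pixel data of a PNG and reads it back out. The file names and message are placeholders, and the example assumes the Pillow library is installed.

```python
# Minimal LSB steganography demo (illustrative only, not the malware described above).
# Hides a short message in the lowest bit of each color channel of an RGB image.
from PIL import Image

def embed_message(src_path: str, dst_path: str, message: str) -> None:
    """Hide `message` in the least-significant bits of an RGB image, saved as PNG."""
    img = Image.open(src_path).convert("RGB")
    data = bytearray(img.tobytes())
    payload = message.encode("utf-8") + b"\x00"          # null byte marks the end
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(data):
        raise ValueError("message too long for this image")
    for i, bit in enumerate(bits):
        data[i] = (data[i] & 0xFE) | bit                  # overwrite the lowest bit
    Image.frombytes("RGB", img.size, bytes(data)).save(dst_path, "PNG")

def extract_message(path: str) -> str:
    """Recover a message embedded by embed_message()."""
    data = Image.open(path).convert("RGB").tobytes()
    out, byte = bytearray(), 0
    for i, value in enumerate(data):
        byte = (byte << 1) | (value & 1)
        if i % 8 == 7:                                    # every 8 bits forms one byte
            if byte == 0:                                 # null terminator reached
                break
            out.append(byte)
            byte = 0
    return out.decode("utf-8")

embed_message("photo.png", "photo_stego.png", "hidden note")
print(extract_message("photo_stego.png"))                 # -> "hidden note"
```

The key property, and the reason the technique evades casual inspection, is that flipping the lowest bit of a color value changes the image imperceptibly, so the carrier picture looks untouched. A lossless format such as PNG is required; JPEG compression would destroy the hidden bits.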

Equally impressive is that Mulgrew was able to refine and harden his code against detection using simple prompts in ChatGPT, which really raises the question of how safe ChatGPT is. In early VirusTotal tests, the malware was flagged by five out of 69 detection products; a later version of the code was detected by none of them.

Note that the malware Mulgrew created was a test and is not publicly available. Nonetheless, his research shows how easily users with little to no advanced coding experience can bypass ChatGPT’s weak protections and create dangerous malware without writing a single line of code themselves.

But here’s the scary part of all this: malware of this caliber usually takes a large team weeks to build. We wouldn’t be surprised if nefarious hackers are already developing similar malware through ChatGPT as we speak.
