
Malware is spreading through Google Bard ads — here’s how to avoid them

As the public grows accustomed to trusting artificial intelligence, a perfect environment is brewing for hackers to trick internet users into downloading malware.

The latest target is the Google Bard chatbot, which is being used as a decoy to get people online to unknowingly click on ads infected with nefarious code. The ads are styled as if they are promoting Google Bard, making them seem safe. Once clicked, however, they direct users to a malware-ridden webpage instead of an official Google page.

Malware posing as a Google Bard ad. ESET Research

Security researchers at ESET first observed discrepancies in the ads, including several grammar and spelling errors in the copy, as well as a writing style not up to Google’s standards, according to TechRadar.

The ad directs users to the webpage of a Dublin-based firm called rebrand.ly rather than to a Google-hosted domain where you would actually learn more about the Bard chatbot. Researchers have not confirmed it, but they warn that accessing such pages while logged into your browser accounts could leave your private data susceptible to being hacked.

Additionally, the ad includes a download button that delivers a file disguised as a personal Google Drive space; the file, GoogleAIUpdate.rar, is actually confirmed malware.

ESET researcher Thomas Uhlemann noted that, as of Monday, the “campaign was still visible in different variations.”

He added that this is one of the larger cyberattacks of its kind he has seen, with some variations featuring fake ads for Meta AI or other marketing posing as Google AI products.

Bard is currently the biggest competitor to OpenAI’s ChatGPT chatbot. ChatGPT experienced a similar cyberattack in late February, when security researcher Dominic Alvieri observed an info-stealing malware called Redline. The malware was hosted on the website chat-gpt-pc.online, which featured ChatGPT branding and was advertised on a Facebook page as a legitimate OpenAI link to lure people into accessing the infected site.

Alvieri also found fake ChatGPT apps on Google Play and various other third-party Android app stores, which could install malware on devices if downloaded.

ChatGPT has been a major target for bad actors, especially since it introduced its $20-per-month ChatGPT Plus tier in early February. Bad actors have even gone as far as using the chatbot itself to create malware; however, that involved a rigged version of OpenAI’s GPT-3 API programmed to generate malicious content, such as text for phishing emails and malware scripts.
