
Bing Chat’s ads are sending users to dangerous malware sites

Since it launched, Microsoft’s Bing Chat has been generating headlines left, right, and center — and not all of them have been positive. Now, there’s a new headache for the artificial intelligence (AI) chatbot, as it’s been found to have a tendency to send users to malware websites that can infect their PCs.

The discovery was made by antivirus firm Malwarebytes, which discussed the incident in a blog post. According to the company, Bing Chat is serving up malicious advertisements that send users to dangerous websites, rather than filtering them out.

A malicious advert served in Bing Chat.
Malwarebytes

When using Bing Chat, you can ask the chatbot to find information, websites, apps, and other things for you. Sometimes, it will provide a link in the chat. Almost since Bing Chat’s first release, Microsoft has been inserting adverts into these links, much like how a Google search places ads above the initial results.

The problem, though, is that it is very easy for bad actors to buy an advert in order to promote a website that masquerades as a legitimate destination. If you’re not careful, you can end up falling victim to this bait and switch.

Advertising malicious websites

Bing Chat shown on a laptop.
Jacob Roach / Digital Trends

Here’s how it works. In the blog post, Malwarebytes detailed how you could ask Bing Chat to download a popular IP scanning app used by system admins. Bing Chat provided a link to the app’s official website, but hovering over it revealed two results: the genuine website, with a malicious advert placed directly above it.

If you didn’t look too closely — or weren’t familiar with the app’s official website address — you might not realize that the first result would take you to a deceptive website.

On further analysis, Malwarebytes found that the fake website redirected visitors to a second site whose web address closely resembled the real app’s official URL. It then prompted users to download malware that could damage their computers.
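The lookalike-domain trick described above can, in principle, be caught programmatically by comparing a link’s hostname against the expected official domain. Here is a minimal sketch of that idea; the domain names are hypothetical, and this is an illustration of the general typosquatting pattern, not the actual domains involved in the incident:

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

def looks_like_spoof(url: str, official_host: str, threshold: float = 0.75) -> bool:
    """Flag URLs whose hostname closely resembles, but does not exactly match,
    an official domain -- a common typosquatting pattern."""
    host = urlparse(url).hostname or ""
    # Exact match or a legitimate subdomain of the official host: not a spoof.
    if host == official_host or host.endswith("." + official_host):
        return False
    # A near-miss (high similarity but not identical) is suspicious.
    similarity = SequenceMatcher(None, host, official_host).ratio()
    return similarity >= threshold

# Hypothetical examples: one character dropped from the real domain.
print(looks_like_spoof("https://advanced-ip-scaner.com/download",
                       "advanced-ip-scanner.com"))  # True: lookalike
print(looks_like_spoof("https://www.advanced-ip-scanner.com",
                       "advanced-ip-scanner.com"))  # False: real subdomain
```

Real ad-filtering systems are far more involved (reputation databases, redirect-chain analysis, certificate checks), but the core insight is the same: a domain that is almost, but not exactly, the official one deserves suspicion.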

The incident suggests that Microsoft could be doing a lot more to protect its users from malicious adverts that are served up through Bing Chat. For the time being, you should be very careful when clicking links provided by Bing Chat. It might be best to stick to a standard search engine and install an ad blocker to prevent malicious adverts from ever reaching you.

Alex Blake
A dangerous new jailbreak for AI chatbots was just discovered
the side of a Microsoft building

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
