
Google flags torrent site Demonoid for spreading malware


According to TorrentFreak, Google has flagged Demonoid, a popular torrent site, as potentially dangerous to visitors due to malware discovered on the site. The flag comes after a hiatus of nearly 20 months for Demonoid, which relaunched this past March.

TorrentFreak reports that if you search for and visit the Demonoid site, Google presents an advisory notice that reads "Warning – visiting this web site may harm your computer!" Google allows users to continue through to Demonoid if they wish, and also provides links on how to protect your computer from malware, along with a detailed report on the issues it discovered with Demonoid.

Google found that of the 59 pages on Demonoid's site it checked over the past month and a half, seven contained "malicious software" that was "being downloaded and installed without user consent."

“We run content from a lot of ad networks in our ad banners, and a lot of banners from each,” a statement from Demonoid says. “One of those banners started serving malware, so we disabled all ads until we are 100% sure of the culprit and get it removed. We are also taking the proper steps to get us out of all the blacklists.”

It's worth noting that, when we opened Demonoid, we were shown no such warning notice from Google. This may support Demonoid's claim that the malware was being spread via ads on the site, with Google removing the warning flag once the ads were disabled.
