Fake IRS emails are delivering dangerous new malware this tax season

Tax season is upon us, which is creating ample opportunity for scammers. Researchers at security firm Heimdal have found a malware campaign that uses phony IRS emails to hit its targets.

The scam email purports to be about a tax refund but instead comes loaded with the Kovter trojan and CoreBOT malware. Kovter is often used by cybercriminals to deliver ransomware, and it is a little different from typical malware because, once downloaded, it can sit in the registry rather than on your disk. “The threat is also memory resident and uses the registry as a persistence mechanism to ensure it is loaded into memory when the infected computer starts up,” said a Symantec blog post last year that detailed the malware’s features.

Meanwhile, CoreBOT is a well-known banking malware strain that can steal crucial login details. It largely targets online banking credentials in the U.S., Canada, and the U.K.

According to Heimdal, users need to keep an eye out for the email subject line “Payment for tax refund # 00 [6 random numbers]” and any zip attachment named “Tax_Refund_00654767.zip -> Tax_Refund_00654767.doc.js,” which recipients are of course advised never to download.
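For readers who manage their own mail filtering, the indicators above can be turned into a simple heuristic. This is a minimal sketch, not Heimdal's detection logic: the subject regex and the list of double extensions are illustrative assumptions based solely on the patterns reported here.

```python
import re

# Subject pattern reported by Heimdal: "Payment for tax refund # 00 [6 random numbers]".
# The optional space is an assumption, since the exact formatting may vary.
SUBJECT_RE = re.compile(r"Payment for tax refund # 00\s?\d{6}")

def looks_like_tax_scam(subject: str, attachments: list[str]) -> bool:
    """Flag a message whose subject matches the reported scam pattern or
    whose attachments use a double extension like .doc.js, which disguises
    a JavaScript dropper as a document."""
    if SUBJECT_RE.search(subject):
        return True
    for name in attachments:
        # "Tax_Refund_00654767.doc.js" is the example from the campaign;
        # the other extensions are illustrative additions.
        if name.lower().endswith((".doc.js", ".docx.js", ".pdf.js")):
            return True
    return False
```

A real mail filter would combine checks like this with sender reputation and attachment scanning; on its own, a subject-line match is only a hint.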

“But don’t let your curiosity get the best of you: not only is it a fake email, but it also carries plenty of danger within,” said Heimdal’s Andra Zaharia.

IRS scams are nothing new; they have traditionally involved scam phone calls in which targets, believing they are being questioned by the agency, hand over their personal details. The IRS has been warning users for years about phishing threats from fake IRS emails, but this new discovery marks a slightly more dangerous threat.

The IRS is keen to remind people that it will not contact anyone via email, social media, or text message. Be extra wary of any IRS emails that land in your inbox this tax season.

Jonathan Keane
Former Digital Trends Contributor
Jonathan is a freelance technology journalist living in Dublin, Ireland. He's previously written for publications and sites…