Microsoft is taking on the tech support scammers

You may have had first-hand experience of it: an unsolicited call comes through from someone claiming to be from Microsoft or Windows tech support, offering step-by-step advice on how to fix your ‘computer problem’. Of course, there is no problem, and the scammers are looking to extort money from you, install dangerous malware on your machine, or both.

It’s a criminal endeavor that’s been going on for years, but now Microsoft is taking direct action against the scammers. The Redmond firm is filing a lawsuit in the U.S. against one of the offending businesses and promising further action in the U.K. and India, following up on lawsuits brought by the Federal Trade Commission last month. Microsoft says it has logged more than 65,000 complaints from concerned users since May.

Microsoft’s lawsuit targets Omnitech Support, a division of California-based firm Customer Focus Services, alleging trademark infringement, domain squatting, and unfair and deceptive business practices. According to the complaint, a support technician from Omnitech ran a few simple tools on an investigator’s computer and then charged a total of $859.99 to fix issues that didn’t exist in the first place.

“Tech support scammers don’t discriminate; they will go after anyone, but not surprisingly senior citizens have been among the most vulnerable,” writes Microsoft Senior Attorney Courtney Gregoire. “According to the FBI, senior citizens are often more trusting and con artists exploit these traits. The holiday season is a popular time for scammers as more people engage in online activities, including shopping, donating to charity and searching for travel deals. Still, our customers must be vigilant to protect themselves.”

If you get a call over the holiday season purporting to come from Microsoft tech support, don’t follow any of the instructions you’re given, don’t pay for any services, and don’t reveal any personal information. Take down the caller’s information and report the issue through the channels listed on Microsoft’s blog post.

[Header image courtesy of Nikita Starichenko / Shutterstock.com]

David Nield