
AI is making a long-running scam even more effective

You’ve no doubt heard of the scam where the perpetrator calls up an elderly person and pretends to be their grandchild or some other close relative. The usual routine is to sound distressed, claim to be in a sticky situation, and ask for an urgent cash transfer to resolve it. While many grandparents will realize the voice isn’t that of their grandchild and hang up, others won’t notice and, only too keen to help their anxious relative, will go ahead and send money to the caller’s account.

A Washington Post report on Sunday reveals that some scammers have taken the con to a whole new level by deploying AI technology capable of cloning voices, making it even more likely that the target will fall for the ruse.

[Image: An elderly person holding a phone. Ono Kosuki / Pexels]

To launch this more sophisticated version of the scam, criminals require “an audio sample with just a few sentences,” according to the Post. The sample is then run through one of many widely available online tools that use the original voice to create a replica, which can be made to say whatever the scammer wants simply by typing in phrases.

Data from the Federal Trade Commission suggests that in 2022 alone, there were more than 36,000 reports of so-called impostor scams, with more than 5,000 of these happening over the phone. Reported losses reached $11 million.

The fear is that with AI tools becoming more effective and more widely available, even more people will fall for the scam in the coming months and years.

The scam still takes some planning, however, with a determined perpetrator needing to find both an audio sample of the voice and the phone number of the relative they intend to target. Audio samples, for example, could be pulled from popular sites like TikTok and YouTube, while phone numbers can also be tracked down on the web.

The scam can take many forms, too. The Post cites an example in which someone pretending to be a lawyer contacted an elderly couple, telling them their grandson was in custody for an alleged crime and that they needed more than $15,000 for legal costs. The bogus lawyer then pretended to hand the phone to the grandson, whose cloned voice pleaded for help to pay the fees, which the couple duly did.

They only realized they’d been scammed when their grandson called them later that day for a chat. It’s thought the scammer may have cloned his voice from YouTube videos that the grandson posted, though it’s hard to be sure.

Some are calling for the companies that make voice-cloning AI technology to be held responsible for such crimes. But before any such measure takes effect, it seems certain that many more people will lose money to this nefarious scam.

To hear just how close a cloned voice can get to the original, check out this Digital Trends article.

Trevor Mogg
Contributing Editor