
Spammer Alan Ralsky, 10 Others Indicted

A federal grand jury in Detroit has returned an indictment against so-called "spam king" Alan Ralsky and ten others on charges of international spamming and engaging in pump-and-dump stock fraud schemes. Others named in the indictment include Ralsky’s son-in-law, citizens of Russia, Canada, and Hong Kong, residents of California and Arizona, and one dual national of Hong Kong and Canada.

The indictment covers 41 counts and follows a three-year investigation that uncovered an extensive spamming operation built around a stock "pump-and-dump" scheme, in which the suspects allegedly promoted Chinese penny stocks to drive up their prices, then reaped profits by selling the shares at those artificially inflated prices. The indictment alleges the defendants used illegal methods such as falsely registered domains, proxy servers, a botnet of compromised Windows computers, falsified headers, and false claims to get their spam past filters and spam-blocking services, as well as to deceive recipients into acting on the messages.

"Today’s charges seek to knock out one of the largest illegal spamming and fraud operations in the country, an international scheme to make money by manipulating stock prices through illegal spam e-mail promotions," said United States Attorney Stephen J. Murphy, in a statement (PDF).

Investigators estimate Ralsky and other participants in the scheme earned about $3 million from their spamming operation during the summer of 2005. Three of the individuals charged in the indictment have been arrested, including Ralsky’s son-in-law and How Wai John Hui, the dual national alleged to have acted as the go-between for Chinese companies seeking to have their stocks pumped up. The other defendants are still being sought; Ralsky himself is believed to be in Europe, although his attorney has said he will surrender in the next few days.

(Image by John Greilick/The Detroit Free Press.)

Geoff Duncan
Former Digital Trends Contributor