Notorious Spammer Soloway Arrested

So-called "spam king" Robert Soloway has been arrested in Seattle on charges including mail, wire, and email fraud, identity theft, and money laundering. Soloway was notorious in anti-spam circles as one of the Internet's most prolific spammers, sending billions—perhaps tens of billions—of spam messages a day via his company Newport Internet Marketing Corporation.

Soloway's company would often prey on naive businesses that were led to believe they were hiring a legitimate online marketing company; he also used spam to promote Web sites under his own company's control. Soloway used false headers, falsified return addresses, and so-called botnets (collections of compromised Windows computers) to obscure the origin of the spam and often cast blame on innocent Internet users and organizations; he also used overseas registrars and ISPs to obscure the true ownership of sites under his control. The 35-count indictment includes charges that Soloway made false claims about products and services he offered and refused to refund money to victims; he is also accused of violating the U.S. CAN-SPAM Act. In total, Soloway could face over 65 years in prison and a fine of $250,000 if convicted on all counts. Federal authorities are also trying to seize over $750,000 they claim Soloway earned from his activities. Soloway apparently claims to have no money.

Soloway will spend at least the next few days at a federal detention center, pending a detention hearing on June 4.

The antispam community generally believes Soloway has ties with other top-level spammers, raising the possibility Soloway might parlay information about their operations into reduced charges or leniency in sentencing.

Geoff Duncan
A dangerous new jailbreak for AI chatbots was just discovered
Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
