
Spam King Soloway Gets Four Years

A judge has sentenced Seattle-based "Spam King" Robert Soloway to just under four years in prison on charges related to fraud and failing to file a tax return. Prosecutors had asked that Soloway be sentenced to nine years in prison for sending tens of millions of spam messages through a "zombie" network of compromised Windows PCs; however, Judge Marsha Pechman noted that legislation governing spam is still very new territory, and the federal CAN-SPAM Act allows a maximum sentence of five years.

Soloway was a notorious spammer who sent untold millions of spam messages from late 2003 to mid-2007 through his Newport Internet Marketing Corporation, promoting a wide range of products and services, many of them fraudulent. He also peddled his own spamming services, offering to send spam on customers' behalf or to sell them software to do it themselves. In 2005, Microsoft won a case in which Soloway was ordered to pay some $7.8 million in damages for spamming MSN and Hotmail addresses, and in the same year an Oklahoma ISP won a $10 million judgment against him.

Soloway was arrested by federal authorities over a year ago and, in a deal with prosecutors, whittled the original 35-count indictment down to guilty pleas on charges of mail fraud, email fraud, and failing to file a tax return. In addition to prison time, Soloway must pay $704,000 in restitution.

Soloway is now one of the first spammers to be convicted on criminal charges for his activity. In court, he claimed prosecutors were not interested in striking a deal that, in his words, could have led to a 50 percent reduction in spam.

Geoff Duncan