
Porn Spammers Get Prison Sentences

In the first case prosecuted under the United States’ CAN-SPAM Act to be tried before a jury, spammers Jeffrey Kilbride of Venice, California, and James Schaffer of Paradise Valley, Arizona, were sentenced to five and a quarter and six years in federal prison, respectively, for sending pornographic spam, as well as for fraud and money laundering. Each defendant was also fined $100,000 and ordered to pay AOL $77,500 in restitution, and the U.S. government is seizing $1.1 million in revenue earned by the operation.

Kilbride and Schaffer’s spamming operation dates back to 2003, when they began sending millions of unsolicited messages promoting pornographic Web sites; the messages often included pornographic images themselves. When the CAN-SPAM Act was passed in late 2003, Kilbride and Schaffer shifted their operations to servers in Amsterdam, hoping to put themselves beyond the reach of U.S. law enforcement, and used accounts in Mauritius and the Isle of Man to conceal the operation’s revenue.

Three co-conspirators of Kilbride and Schaffer—Jennifer Clason, Andrew Ellifson, and Kirk Rogers—pleaded guilty to their roles in the operation and testified against the pair.

The U.S. CAN-SPAM Act was designed to curb deceptive and fraudulent email by barring falsified headers, fake addresses, and misleading subject lines; commercial email senders are also required to provide an opt-out mechanism for their mailings. The act has been widely criticized as ineffective, however, because it cannot do anything about spam originating outside the U.S. (Kilbride and Schaffer, while distributing spam from Amsterdam, were originating the messages in Arizona). The act has also been criticized for its opt-out provisions, which could require everyday computer users to “opt out” of hundreds of mailing lists a day, and for weaknesses in the law’s terminology and enforcement provisions. Nonetheless, the act has been used as leverage to extract guilty pleas from prolific spammers—including one from notorious spammer Adam Vitale—and now to secure the successful prosecution of Kilbride and Schaffer.

Geoff Duncan