
Spam storm clogs the Kindle self-publishing platform

The Kindle’s ebook store has become a new outlet for self-publishing spammers in the past few months, forcing users to wade through a growing volume of low-value, subpar content to get to the titles they want. This recent trend may damage Amazon’s push into self-publishing and may even dent the Kindle’s reputation, hurting the 10 percent of Amazon’s business that Citigroup analysts say the product will account for in 2012.

Spammers are exploiting something known as PLR, or Private Label Rights, content. Though such work can be of high quality, PLR allows someone to grab informational content for free or very cheaply on the internet and reformat it as a digital book. The form of PLR these spammers use tends to be poorly written and generic, and it lets them put anyone’s name on it, slap on a catchy title, and churn it out for 99 cents. Amazon then pays out 30 to 70 percent of the revenue.

Sometimes these ebooks are simply stolen content from genuine works. Reuters points to the case of a New Zealander who found her debut historical novel being sold on the platform under a different author’s name. The case was resolved by Amazon’s British team, but it points to a larger issue. Reuters cited internet marketer Paul Wolfe, who explained that a common tactic involves copying a bestselling ebook and repackaging it with a new title and cover.

The problem has not yet hit Google eBooks or Barnes & Noble’s Nook, though the ebook publisher Smashwords has been seeing a trickle of spam. The spam on Amazon’s platform may become a more widespread problem. The rise in ebook sales over the past year has given many people who couldn’t publish their work traditionally an outlet to get their voices heard. Amazon needs to wake up and either manage its submissions more aggressively, require a fee, or set up some sort of social-networking weeding process in order to keep the platform untarnished.

Jeff Hughes
Former Digital Trends Contributor
I'm a SF Bay Area-based writer/ninja that loves anything geek, tech, comic, social media or gaming-related.
A dangerous new jailbreak for AI chatbots was just discovered
Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
