BlueBeat to pay nearly $1 million for illegal online music sales

BlueBeat logo

Back in 2009, online music store BlueBeat made headlines by offering Beatles songs for sale for prices as low as $0.25 a track—long before the Beatles finally succumbed to the lure of online music sales and let their music be sold via Apple’s iTunes. A court order (followed by an injunction) took Beatles music off BlueBeat in short order, but the record labels concerned over BlueBeat’s activities—EMI, Capitol, and Virgin—didn’t stop there, and have just reached a $950,000 settlement with the company resolving copyright infringement claims.

To most people’s ears, BlueBeat was simply ripping tracks from the Beatles and other artists and offering them for sale without authorization—and without paying royalties to the artists, copyright holders, or publishers. BlueBeat, however, contended that it wasn’t selling the original recordings, but rather “psycho-acoustic simulations” of Beatles music—which were copyrighted by BlueBeat. That claim didn’t hold water with the court, which had no difficulty issuing a restraining order preventing BlueBeat from distributing the recordings.

Under the terms of the settlement (PDF), BlueBeat and parent company MRT will pay $950,000 and agree to stop distributing or linking to any copyrighted works controlled by the labels involved—that covers not just Beatles material, but a broad range of other artists. BlueBeat may also be on the hook for additional damages and attorneys’ fees.

Geoff Duncan
Former Digital Trends Contributor
Geoff Duncan writes, programs, edits, plays music, and delights in making software misbehave. He's probably the only member…