Injunction Puts BlueBeat Out of the Beatles Business

Beatles (MBE presser; photographer unknown)

U.S. District Judge John F. Walker has issued a preliminary injunction barring online music sites BlueBeat and BaseBeat from streaming or selling any tracks from EMI artists—and that includes the Beatles, along with popular acts like Coldplay.

Record label EMI had filed suit against BlueBeat and its owner Hank Risan for streaming and selling tunes by the Beatles and other EMI artists without the label’s permission. While the vast majority of recordings from EMI’s artists are available for purchase and streaming via other online music services, the Beatles are perhaps the most famous holdout from the digital music scene: although the Fab Four are now the stars of their own video game, their music catalog is not legally available for purchase through any online music store. BlueBeat drew EMI’s ire when it began offering Beatles recordings for $0.25 a pop—substantially below the going rate for single tracks from most online music services.

BlueBeat apparently justified its actions by claiming it was selling “psycho-acoustic simulations” of Beatles tracks, rather than the original recordings, using a process that allegedly improved the quality of the recordings and simulated how a live listener might perceive a performance. BlueBeat also claimed that by embedding images in the digital downloads, it created new works in which it held the copyright.

Most industry watchers have dismissed BlueBeat’s claims as completely implausible.

Judge Walker’s injunction will remain in place until the case goes to trial…if it ever gets to trial.

Geoff Duncan
Former Digital Trends Contributor