
Illegal music downloads peak in the UK

Illegal music downloading hit an all-time high this year in the UK, a new report from the British Recorded Music Industry (BPI) has announced. And, the report suggests, the British music industry’s growth has stalled as a direct result.

The report found that 75 percent of all music downloaded in the UK is pirated, and that the number of consumers using P2P networks rose 7 percent over the past six months. All this in spite of the numerous legal outlets available to consumers.

These legal services were created in response to the issue, and while they’ve helped increase music sales this year, their effect has been minimal. BPI CEO Geoff Taylor claims that “this growth is a fraction of what it ought to be. Illegal downloading continues to rise in the UK. It is a parasite that threatens to deprive a generation of talented young people of their chance to make a career in music, and is holding back investment in the fledgling digital entertainment sector.”

In addition to singling out various P2P applications as sources of the problem, BPI also takes issue with search engines. The report declares that searching with “neutral terms” like “mp3” or “download” yields an overwhelming number of illegal download options. The researchers found that “On average, 17 of the first 20 Google results for singles and 14 out of 20 search results for albums were links to known illegal sites.”

Predictably, BPI’s research found that the most common reason consumers use illegal downloading services is that the music is free.

As for a solution, the organization will continue its efforts to raise piracy awareness and improve music education. But if the UK’s current trends are any indication, we’re betting some legal tactics may follow in the future. A US woman was recently found guilty of illegally downloading 24 songs and fined $1.5 million, a verdict she will appeal. It seems like it’s only a matter of time before a similar case unfolds in the UK.

Molly McHugh
Former Digital Trends Contributor