
Firm Issues Threats for Ignoring Their DRM

The California company Media Rights Technologies announced today that it has sent cease-and-desist letters to Microsoft, Adobe, Real Networks, and Apple, alleging that the companies’ media player products—including Windows Media Player, QuickTime, iTunes, RealPlayer, and Flash—violate the Digital Millennium Copyright Act (DMCA) by (get this) failing to incorporate Media Rights Technologies’ proprietary X1 SeCure Recording Control rights management technology.

“Together these four companies are responsible for 98 percent of the media players in the marketplace; CNN, NPR, Clear Channel, MySpace, Yahoo, and YouTube all use these infringing devices to distribute copyrighted works,” wrote MRT CEO Hank Risan in a statement. “We will hold the responsible parties accountable. The time of suing John Doe is over.”

MRT’s argument goes something like this: software applications capable of playing back streamed media over the Internet are often vulnerable to so-called “stream rippers,” which capture the audio and/or video content from a stream and save it to a user’s hard disk, often in an unrestricted format. However, the DMCA prohibits any product that circumvents access controls on a copyrighted work. Therefore, MRT concludes, products like Windows Media Player, Windows Vista, iTunes, and other applications are actually illegal under the DMCA, since copyrighted content played back with them can be captured without access restrictions. Moreover, MRT asserts that failing to implement effective copyright protection is itself a violation of the DMCA, and that Apple, Real, Adobe, and Microsoft are therefore all guilty of breaking the law for failing to license MRT’s X1 SeCure Recording Control technology.
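To be clear about what MRT is describing, the sketch below is a minimal, purely illustrative example of what the simplest kind of stream ripper does: read an unprotected HTTP audio stream and write the bytes to disk. The URL and output filename are hypothetical, and encrypted or DRM-protected streams cannot be captured this way.

```python
# Minimal conceptual sketch of "stream ripping": saving an *unprotected*
# HTTP audio stream to disk. The stream URL is hypothetical; this does not
# bypass encryption or any actual DRM scheme.
import urllib.request

STREAM_URL = "http://example.com/live/audio.mp3"  # hypothetical unprotected stream
OUTPUT_FILE = "captured_stream.mp3"

with urllib.request.urlopen(STREAM_URL) as stream, open(OUTPUT_FILE, "wb") as out:
    while True:
        chunk = stream.read(8192)  # pull the stream down in small chunks
        if not chunk:              # stream ended or connection closed
            break
        out.write(chunk)           # the audio lands on disk in an unrestricted format
```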

MRT claims its X1 SeCure Recording Control has been “proven effective” in blocking stream ripping, yet Microsoft, Apple, Adobe, and Real have been “actively avoiding” the technology. MRT says failure to comply with its cease-and-desist letters could result in a federal court injunction barring the sale of the infringing products, as well as fines of $200 to $2,500 for each infringing product distributed or sold. If MRT’s highest-dollar scenario were to come to pass, the company could be due billions of dollars.

MRT’s assertion that stream ripping is a source of copyright infringement is not altogether new; in fact, stream ripping was one of the factors cited by industry organizations and music publishers in seeking to increase royalty rates paid by Internet-based broadcasters. (Although broadcasters are trying to reverse the ruling, the new rates are currently due to go into effect July 15.)

Industry opinions vary on the merits of MRT’s claims, but there seems to be little support for the notion that the law would require companies to license MRT’s technology under the DMCA merely because it exists. Copyright holders are licensing their material for use in the very media that are vulnerable to stream ripping, presumably with full knowledge of how the technology operates. Although the DMCA may be predicated upon protecting the rights of copyright holders, it is within the rights of those copyright holders to license their content for use in any medium they like—or even (gasp!) without any DRM at all.

Geoff Duncan
Former Digital Trends Contributor