Flash Flaw Exposes Amazon Video to Piracy

This is exactly the reason record labels and movie studios tried to avoid offering their material online for years: a security issue in Adobe’s Flash media servers potentially enables users of Flash-based video services like Amazon’s Video on Demand service to download and copy as much video as they like.

The issue affects sites that use Adobe’s media encryption technology and video player verification: in some cases, a Flash video stream is not truly encrypted on its way from the video server to a user’s Flash-based player, enabling users to capture the stream. The vulnerability in Amazon’s Video on Demand service stems from the free two-minute previews it offers before purchase: the previews stop playback in a user’s Web browser, but the entire video stream remains accessible to stream-catching software.

Popular stream catchers include Replay Media Capture from Applian Technologies.

Adobe said in a statement that it is committed both to protecting Flash users from vulnerabilities and to protecting the rights of content providers and producers. Last month, the company published a security note outlining techniques content providers can use to validate that video is being viewed by a “real” Flash player rather than a stream catcher.
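To illustrate the general idea behind that kind of player validation, here is a minimal challenge-response sketch: the server picks a random byte range of the official player binary plus a nonce, and only a client that actually holds the genuine binary can compute the matching digest. This is not Adobe’s actual protocol; every name here (`PLAYER_BINARY`, `make_challenge`, and so on) is hypothetical and stands in for the real SWF-verification scheme.

```python
import hashlib
import hmac
import os

# Hypothetical player binary; the server keeps its own reference copy
# (stands in for the SWF file of a "real" Flash player).
PLAYER_BINARY = b"example-player-bytes-" * 64

def make_challenge():
    """Server picks a random byte range of the player binary plus a nonce."""
    start = int.from_bytes(os.urandom(2), "big") % (len(PLAYER_BINARY) // 2)
    length = 128
    nonce = os.urandom(16)
    return start, length, nonce

def player_response(binary, start, length, nonce):
    """Client proves it holds the genuine player by HMAC-ing the challenged slice."""
    return hmac.new(nonce, binary[start:start + length], hashlib.sha256).digest()

def verify(start, length, nonce, response):
    """Server recomputes the digest over its reference copy and compares."""
    expected = hmac.new(nonce, PLAYER_BINARY[start:start + length],
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

start, length, nonce = make_challenge()
genuine = player_response(PLAYER_BINARY, start, length, nonce)
tampered = player_response(b"x" * len(PLAYER_BINARY), start, length, nonce)
print(verify(start, length, nonce, genuine))   # genuine player passes
print(verify(start, length, nonce, tampered))  # modified player fails
```

A stream catcher that merely speaks the wire protocol would fail this check because it cannot answer challenges over arbitrary slices of the real player binary, which is why such schemes raise the bar without being airtight.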

Although industry watchers expect Adobe will integrate more robust security technology soon in order to discourage casual piracy, many note that pirated videos, movies, and television shows are readily available via file-sharing services and other venues, and that using a stream catcher is more trouble and complication than most computer users will tolerate. However, stream catching technology is not, in and of itself, necessarily a bad thing, and can certainly be used within generally-accepted realms of fair use under existing copyright law.

Geoff Duncan
Former Digital Trends Contributor