FBI takes down alleged King’s Speech, Black Swan pirate

According to confidential court records obtained by Wired, the FBI has raided the home of Wes DeSoto, who agents believe was the first to share The King’s Speech and Black Swan on The Pirate Bay in January. DeSoto is a member of the Screen Actors Guild and is being investigated for possible involvement with TiMPE, a group that has made a name for itself by pirating movies before their release dates. According to the sealed documents, the FBI searched DeSoto’s home for “records, documents, programs, applications or materials relating to ‘TiMPE’ and ‘thepiratebay.org.’”

DeSoto told Wired he has no connection to the group. “I’m nobody in the online file-sharing world,” he claims. “This investigation is excessive and a waste of tax dollars.”

If convicted, DeSoto could face up to three years in prison. His SAG membership is what made him a prime suspect: the guild sends members download codes so they can screen films before their release dates, and production studios place watermarks on these pre-release copies. When someone using the handle mf34inc told Pirate Bay commenters that “SAG now sends out iTunes download codes for screens…I’m a SAG member and I thought I’d share these,” suspicions were aroused and authorities began paying attention.

When mf34inc began sharing Rabbit Hole, investigators traced the IP address behind the uploads, which led them to DeSoto.

DeSoto is an actor and graphic and apparel designer living in Los Angeles. According to IMDb, he recently appeared in an episode of CSI, playing the role of “Werewolf Enthusiast.”

A dangerous new jailbreak for AI chatbots was just discovered
Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called “Skeleton Key.” Using this prompt injection method, malicious users can effectively bypass a chatbot’s safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
