Warner Bros Hires Student Pirate Hunters

If you thought being a resident advisor and writing up fellow college students for underage drinking put students at odds with their peers, Warner Bros. has a job that might make a close second. The studio is now hiring college students in the U.K. to ferret out pirated WB properties and report back to Big Brother (or more appropriately, Big Bros.) with info on infringers.

A job listing at the University of Manchester for an “anti-piracy intern” describes a 12-month job that would include monitoring Internet forums and IRC for pirated content, infiltrating private piracy hubs, and even performing trap purchases of pirated products. Students would also be called upon to write automated bots for link scanning and to send infringement notices.

According to the listing, students would be instructed to look specifically for Warner Bros. and NBC Universal content, which could include anything from ripped Harry Potter DVDs to the latest episodes of The Office captured off the air.

What chops does it take to navigate the seedy online underworld of such things? Warner Bros. wants a student studying a computer-related discipline with a host of skills including experience with IRC, FTP, newsgroups, and programming experience with languages like Java, JSP, PHP, Perl or Python.

Compensation for rooting around on the Web actually stacks up pretty well against other internships. Warner Bros. will toss its rat £17,500, or about $26,000, for the total one-year run.

Nick Mokey