News Corp. offers $4.7 million to phone hacking victims

Rupert Murdoch recently approved an initial settlement offer to the family of Milly Dowler, a 13-year-old girl who was murdered in 2002. The payment stems from the alleged hacking of Milly Dowler's mobile phone by representatives of the Murdoch-owned News of the World. After the phone hacking scandal became public, several high-ranking employees at the newspaper were arrested and eventually lost their jobs when the scandal forced the News of the World to shut down after 168 years in business. In addition to the newspaper's closure, parent company News Corp. was forced to withdraw from negotiations to purchase the remaining shares of the BSkyB network.

Designed to forgo litigation, the initial settlement offer stands at 3 million pounds, or roughly $4.7 million. Two million pounds would be paid directly to the Dowler family, and the remaining 1 million pounds would go to charity. Rupert Murdoch is said to be personally involved in the negotiations and visited the Dowler family in July to apologize in person for the hacking at News of the World. News Corp. shareholders were recently told that taking the various hacking cases to trial would cost the company approximately 20 million pounds; analysts, however, had estimated a typical court award at around 120,000 pounds.

After London's Metropolitan Police opened a criminal investigation into the phone hacking and began arresting staff, officers started notifying roughly 4,000 potential victims that their phones may have been hacked. This has spawned dozens of lawsuits against News Corp. from people claiming their privacy was violated. In June, News Corp. announced a system that would let victims contact the company directly for a faster route to financial compensation, but the planned system has yet to process any hacking claims.

By Mike Flacy