
Woman Seeks $54 Million for Missing Laptop

Some people value their computers more than others. For Raelyn Campbell, a Washington, D.C. resident whose laptop went missing while it was in for repairs at Best Buy last May, the computer and the battles she fought trying to be reimbursed for it were enough to begin a crusade. Campbell has filed a suit for $54 million against Best Buy, not to recoup the true value of the computer, but to draw attention to what she calls the “reprehensible state of consumer property and privacy protection practices at America’s largest consumer electronics retailer.”

According to the blog that Campbell has dedicated to the issue, the problem began when Best Buy refused to reimburse her adequately for the theft of her computer, offering $750 for the machine (which Campbell claims originally cost $1,110.35) and $150 for its contents. In August 2007, Campbell demanded the original purchase price plus $1,000 to cover the music, pictures, software and other content on it. Best Buy denied her proposal, sparking an escalating legal to-and-fro that led to a suit for $54,592,146.54 in November.

The breakdown of the dollar amount includes $24,146.54 in compensatory damages for expenses that arose from the loss and replacement of the laptop, $500,000 in treble damages for Best Buy’s alleged negligence, and $54,000,000 in punitive damages, which Campbell hopes will draw the attention of both Best Buy and the media. The case flew under the radar until this week, when media outlets ranging from the Wall Street Journal to Fox News picked up the story.

Although Campbell has chronicled her communications with Best Buy on the blog, the company has not yet commented publicly on the case despite its recent publicity.

Nick Mokey