
DRAM price-fixing lawsuit settles for $310M


It’s over! The class-action lawsuit against DRAM manufacturers over sales of DRAM, and of the tech it powered, around the turn of the century has been settled for $310M. The suit developed after evidence suggested that a large number of DRAM makers, including Samsung, Toshiba, Mitsubishi, and Hitachi, may have colluded to fix their products’ prices, inflating the cost of a wide range of consumer products including computers, printers, graphics cards, servers, and other tech. Most of the settlement ($200M) will be used to pay damages to businesses and consumers who purchased items built using DRAM.

Long story short: Anyone who bought anything with DRAM in it between January 1, 1998, and December 31, 2002, may be entitled to some money.

Interestingly, the settlement applies only to customers who purchased products with DRAM in them and to people who purchased DRAM for their PCs from a third party. DRAM purchased directly from a manufacturer does not qualify. According to the DRAM Claims website, consumers don’t need proof of purchase to file a claim, but may be asked for it eventually. The site advises consumers to “save any documentation/proof that you may still have.”

It’s worth noting that the average consumer’s payout could vary widely depending on how many people come looking for their share. If fewer than 2.5 million claims are filed, each person or business will get at least $10, depending on what they bought.

The deadline to file for a piece of the settlement pie is August 1.
