
Microsoft Settles With Iowa for $180 Mln

Judge Scott Rosenberg of Iowa’s Polk County District Court has granted preliminary approval to a settlement of antitrust charges against Microsoft in the state of Iowa. The settlement, originally announced in February, has Microsoft paying up to $179.5 million to businesses and individual computer users who purchased selected Microsoft applications and operating systems between May 18, 1994, and June 30, 2006.

The class action antitrust suit alleged Microsoft abused its monopoly position in the computing industry to overcharge for its software; Microsoft claimed its prices were fair and that, even if the plaintiffs somehow proved they weren’t, no actual harm had come to Iowa residents as a result of inflated prices. The suit alleged damages and claims that could have totaled more than $1 billion.

Under the terms of the settlement, individuals and businesses in Iowa who purchased Microsoft Word, Excel, and Office, or a wide variety of operating systems (including Windows 2000 Professional, NT Workstation, Windows for Workgroups, Windows Millennium Edition, Windows 95, Windows 98, and, yes, even MS-DOS) are eligible to receive rebates of $16 per operating system license, $29 for Microsoft Office, $25 for Excel, and $10 for Word. Individuals will receive cash, while volume licensees will receive vouchers toward products from Microsoft and other companies. The four class representatives who brought the suit will also receive $10,000 each.

As outlined in the original settlement agreement, Microsoft plans to give half of any unclaimed funds to Iowa schools in the form of vouchers towards the purchase of computer hardware and software.

Geoff Duncan
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
