Skip to main content

Microsoft Sued Over Vista-Capable Claims

Software giant Microsoft has been targeted by a lawsuit accusing the company of letting PC makers label computers as "Vista capable" when they’re only able to run the most basic version of the operating system.

The suit, brought by Dianne Kelley through the Seattle law firm Gordon Murray Tilden, filed in the U.S. District Court for the Western District of Washington, alleges a Microsoft engaged in "bait and switch" marking with a marketing campaign launched before the release of Windows Vista. The campaign was intended to encourage sales of new PCs while the industry was largely sitting on its hands waiting for the (much-delayed) Windows Vista to ship: PC makers could use a sticker to identify new machines as "Vista capable" so consumers would know their new machine would be able to take advantage of new Vista technologies once the operating system shipped.

However, according to the suit, a "large number" of those PCs were not designed to run anything but Vista Home Basic, the most stripped-down version of Windows Vista. However, Microsoft’s campaign advertised enhanced features and interface capabilities (such as the Aero interface) as part of Vista—but those features do not operate on these "Vista capable" machines. The suit is seeking class action status, and estimated more than 10,000 people have been defrauded with damages totaling more than $5 million.

To carry a "Vista Capable" sticker, PCs had to offer at least 512 MB of RAM and DirectX 9 graphics support; these systems cannot run features in Windows Vista Home Premium, or run them poorly.

Microsoft later introduced a "Premium Ready" designation for new PCS capable of running Vista’s advanced features.

Geoff Duncan
Former Digital Trends Contributor
Geoff Duncan writes, programs, edits, plays music, and delights in making software misbehave. He's probably the only member…
A dangerous new jailbreak for AI chatbots was just discovered
the side of a Microsoft building

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keeps ChatGPT from going full Taye.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.

Read more