Microsoft Rebuffs Poor Vista Benchmarks

When testing firm Devil Mountain Software showed Windows XP ripping apart Windows Vista in benchmark tests two weeks ago, it looked like a black eye for Vista, to say the least. Not surprisingly, a spokesman at Microsoft caught wind of the testing and fired back in a blog post, and Devil Mountain has issued its own rebuttal as well.

Microsoft’s attack came from Nick White, a product manager for Windows Vista. While he didn’t call out Devil Mountain by name in his Vista blog entry, the post came just days after Devil Mountain released its results, and the type of testing he referred to was clearly that employed by OfficeBench, the software Devil Mountain used. According to White, OfficeBench creates an unrealistic test of performance by measuring tasks performed at “superhuman” speed, which can exaggerate minuscule delays that would never be felt by the end user in everyday usage. He called the test a “window-open, window-close routine” and dismissed it as no real measure of performance.

The crew over at Devil Mountain took issue with White’s description. In a post days later titled “When Microsoft Attacks!” the team directly addressed White’s critique of OfficeBench. Their first pet peeve: White used the high-speed operation of the benchmark to explain away its results, posting a video of the test in action to demonstrate how ridiculous it looks. But the Devil Mountain crew claimed the video was “ridiculously accelerated” for just that purpose.

Also at issue was White’s allegation that OfficeBench merely opens and closes windows. The Devil Mountain crew posted a summary of the software’s scripted routine to demonstrate that it generates presentations in PowerPoint, builds charts in Excel, and performs a variety of other tasks in all the programs it uses.

Although the waters have settled for the moment, we wouldn’t be surprised to see the imminent release of Vista Service Pack 1 stir up even more tension between the two companies.

Nick Mokey