
Vigor Offers Dual Core 2 Extreme Colossus

Boutique computer maker Vigor Gaming wants to make sure no one can blame a lame system for poor performance in arena tournaments, or for failing to crack the top levels of single-player campaigns: the company’s new Colossus desktop sports two 3.2 GHz Core 2 Extreme QX9775 quad-core processors—and that’s before gamers crank things up even further with Vigor’s "complimentary" overclocking service.

Vigor is building the Colossus systems on the Intel D5400XS mainboard, which means the Colossus also offers multi-GPU configurations—in addition to the eight processing cores, the system can support up to four graphics cards driving two 2,560 by 1,600-pixel displays. Stock Colossus configs sport 4 GB of RAM, over 2 TB of storage in a RAID 0 configuration, a 1,000-watt power supply, and two overclocked Nvidia GeForce 8800 GT graphics controllers. The Colossus can also be configured with 64-bit Windows Vista and up to 8 GB of RAM; Vigor is also offering a config with two ATI Radeon 3870 X2 graphics cards, 8 GB of RAM, a Creative Sound Blaster X-Fi audio card, and other high-end components.

All this power doesn’t come cheap: the base Colossus configuration starts at $6,799, with that dual ATI/8 GB config coming in at just over $8,100.

Geoff Duncan
Former Digital Trends Contributor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
