Skip to main content

Acer Ferrari 3200 Laptop Review

Quote from the review:

“designed and built to stand out from the crowd. While the system’s general performance isn’t as impressive as its Ferrari F1-endorsed partner, gamers and high-end graphics users will relish the performance from ATI’s Mobility Radeon 9700 graphics chipset and the high-resolution 15in. screen. Battery performance is mediocre at around 2h 20m, but the Ferrari 3200 is still a refreshing piece of kit that will appeal to those who want a laptop that’s different – if you don’t mind paying the price premium for the Ferrari branding.

Pros: Unique branding; high-resolution screen; 64-bit CPU
Cons: No 64-bit OS yet; faster systems available; left side gets very hot

Acer’s second official Ferrari F1-endorsed laptop just screams for attention. Succeeding last year’s Ferrari 3000, the updated model is also encased in a shiny red-and-grey chassis along with the famous gold-and-black prancing stallion. The Ferrari 3200 (330x272x31mm, 3kg) includes some high-end features too, such as AMD’s Athlon 64 2800+ processor, 512MB of DDR333 SDRAM (upgradable to 2048MB with dual SODIMM modules), an 80GB hard disk, as well as ATI’s Mobility Radeon 9700 graphics chipset with 128MB of memory. The 15in. TFT display is also impressive and has a native resolution of 1400×1050 pixels (1600×1200 pixels maximum), although it has a 4:3 aspect ratio rather than a preferred widescreen variant.”

Read the full review

Ian Bell
I work with the best people in the world and get paid to play with gadgets. What's not to like?
A dangerous new jailbreak for AI chatbots was just discovered
the side of a Microsoft building

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keeps ChatGPT from going full Taye.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.

Read more