
Acer stuffs a Core i3 CPU into its C720 Chromebook

Acer just announced a new version of its C720 Chromebook, which wears an 11.6-inch, 1,366 × 768 display and is powered by a 1.7GHz Intel Core i3-4005U dual-core processor, Intel HD Graphics 4400, and a 32GB SSD. The Chromebooks we’ve come across have usually been equipped with Celeron or Exynos processors, so seeing an Intel Core i3 CPU in one is unusual.

There are two variants of the C720. One is called the C720-3871, while the other is dubbed the C720-3404. As far as we can tell, the only difference between the two, aside from price, is the amount of RAM that each contains. The former is outfitted with 2GB of DDR3 memory, while the latter sports 4GB of DDR3. Of course, these notebooks run Chrome OS.

Port selection consists of one USB 3.0 port, one USB 2.0 port, and an HDMI output. Wireless connectivity comes courtesy of 802.11n Wi-Fi and Bluetooth 4.0. Shipping in Granite Gray, the C720 measures 11.8 x 8 x 0.8 inches and weighs 2.76 pounds.

Acer hasn’t pinned down an exact release date, but it says the C720 should hit the market sometime this month. It looks like Amazon will be one of the retailers carrying the C720 once it drops.

The 2GB model will sell for $349.99, while the 4GB C720 will run you $379.99.

Konrad Krawczyk
Former Digital Trends Contributor