Quanta Confirms One Mln OLPC Orders

Taiwan’s Quanta Computer says it has confirmed orders for one million notebook computers for the One Laptop Per Child (OLPC) project, and may be able to ship between five and ten million OLPC systems this year as new nations sign up for the project. In addition to confirmed countries Argentina, Brazil, Libya, Nigeria, and Thailand, Rwanda and Uruguay have recently announced their participation in the project.

The OLPC initiative is intended to put laptop computers in the hands of children in developing nations around the world, in an effort to bridge the “digital divide” between rich and poor. In developing economies, lack of infrastructure and high costs keep many children from the educational and developmental possibilities offered by software, modern communication technology, and the Internet.

The goal of the OLPC project is to offer rugged, inexpensive laptop computers especially designed for education in developing nations—and offer them cheaply, with a target price of $100 per system. Currently, the OLPC laptops cost about $130 apiece, but with mass production those costs can come down—and the project is now described as the “pet project” of Quanta chairman Barry Lam, who is eager to cut costs further. The current OLPC design sports 128 MB of RAM, 512 MB of flash storage, a 266 MHz AMD Geode GX-500 processor, a 7.5-inch dual-mode 1,200 by 900 pixel LCD display, 802.11b/g Wi-Fi, and an integrated VGA camera. The system runs a custom-developed Linux-based interface dubbed “Sugar.” A limited number of OLPC units have been built and distributed for testing.

Quanta is the world’s largest contract manufacturer of notebook PCs, making products for Apple, Dell, Hewlett-Packard, and others.

Geoff Duncan
Former Digital Trends Contributor