OLPC Fires Up Give One, Get One Round Two

The One Laptop Per Child project is getting set to pull the trigger on its Give One, Get One campaign, this time launching on November 17 in conjunction with Amazon.com. Readers may recall that last year OLPC offered its XO notebook to customers in developed nations…on the condition they also buy a second XO laptop to be distributed to a child in a developing nation. The program generated enough response that OLPC actually extended it past its termination date; however, production delays coupled with delivery and fulfillment problems turned the whole operation into something of a black eye for the OLPC project.

This year, OLPC is partnering with online mega-retailer Amazon.com to power the Give One, Get One program. Although pricing has not yet been confirmed, the program will launch with an accompanying advertising campaign to raise awareness of the XO program. While OLPC has been exploring running Windows XP on the XO laptop, systems offered through Give One, Get One will run OLPC's customized Linux OS with the Sugar interface; it's not known whether XOs offered through the program could be configured to dual-boot. This time around, Amazon will handle distributing the laptops, as well as all payment and fulfillment.

OLPC also hasn’t revealed how long it intends to run the Give One, Get One program this time around. The original program was conceived as a one-shot effort to help build volume for OLPC manufacturing; with stiff competition from Intel’s Classmate PC, internal squabbles, and difficulties lining up large orders from developing countries, the OLPC project faces significant challenges in maintaining and building scale as it moves toward its next-generation notebook.

Geoff Duncan
Former Digital Trends Contributor