OLPC Sued Over Multilingual Keyboard

Sometimes even the best intentions can’t avoid patent troubles: Nigerian-owned Lagos Analysis Corporation (a.k.a. LANCOR) has filed a patent infringement lawsuit against the One Laptop Per Child project and its head, Nicholas Negroponte, over the multilingual keyboard design used in the low-cost OLPC XO notebook, which is intended to benefit educational systems in developing nations. The suit accuses the OLPC project of willfully infringing on LANCOR’s design patent for multilingual keyboards and of reverse-engineering the company’s software drivers. The initial suit has been filed in Nigeria, and LANCOR says it plans to bring a similar lawsuit in a U.S. federal court.

“LANCOR treats its intellectual property as one of the Company’s most important resources,” said LANCOR CEO Adé G. Oyegbola, in a release.

“The willful infringement of our client’s intellectual property is so blatant and self-evident in the OLPC’s XO Laptops,” said Ade Adedeji of Adedeji & Owotomo, the Lagos-based Nigerian law firm retained by LANCOR. “We will have no problem establishing the facts of our client’s case against OLPC in any court of law.”

LANCOR is seeking damages as well as a permanent injunction barring the OLPC project from manufacturing or selling infringing products.

Lest this seem like a bolt from the blue, LANCOR has a substantial history of developing multilingual, region-specific keyboards for European, African, South American, and U.S. markets, and its Konyin Multilingual Keyboards are currently on sale globally. LANCOR’s suit alleges the OLPC organization purchased two Konyin keyboards (its Nigerian and U.S. models) expressly to reverse engineer the keyboard drivers and infringe on the company’s “Shift2” technology. Although Nigerian-owned, LANCOR was founded in 1994 and is incorporated in Natick, Massachusetts, just down the road from the OLPC’s birthplace at MIT.

Geoff Duncan