
German firm demos super-accurate gesture control technology

[Image: CamBoard pico]

Gesture control isn’t exactly a widespread technology yet, but competition is already getting fierce. German firm pmdtechnologies has spent the past 10 years developing its gesture control design and believes it has the technology to take on its better-known rivals. The product it demos in the video below is the CamBoard pico, a device around the size of a thumb drive that the company claims is more accurate than competing devices.

The CamBoard pico is a 3D depth sensor, which means it can identify gestures made within a “3D interaction volume” – and that, the company says, is what gives the device its accuracy. On the video’s YouTube post, the company explains that with this technology “you can move your hands freely, and [it can still] detect hands, fingertips, and gestures.” According to TechCrunch, pmdtechnologies claims the pico is more accurate than Leap Motion because the latter can only identify fingertips to measure spatial distance. The CamBoard pico is an improved follow-up to the company’s earlier design, the CamBoard nano.
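For the curious, here is a rough idea of what working with a “3D interaction volume” can look like in code. This is purely a hypothetical sketch using NumPy, not pmd’s SDK: the volume bounds, the fake point cloud, and the “closest point = fingertip” heuristic are all assumptions made for illustration.

```python
import numpy as np

# Assumed interaction volume bounds in metres (x, y, z) in front of the sensor.
VOLUME_MIN = np.array([-0.15, -0.10, 0.10])
VOLUME_MAX = np.array([0.15, 0.10, 0.50])

def points_in_volume(points: np.ndarray) -> np.ndarray:
    """Return the subset of (N, 3) points that fall inside the interaction volume."""
    inside = np.all((points >= VOLUME_MIN) & (points <= VOLUME_MAX), axis=1)
    return points[inside]

def nearest_point(points: np.ndarray):
    """Crude fingertip guess: the in-volume point closest to the sensor (smallest z)."""
    candidates = points_in_volume(points)
    if candidates.size == 0:
        return None
    return candidates[np.argmin(candidates[:, 2])]

if __name__ == "__main__":
    # Fake point cloud standing in for one depth frame from a 3D sensor.
    frame = np.random.uniform(-0.3, 0.6, size=(1000, 3))
    print(nearest_point(frame))
```

A real depth sensor would of course track whole hands and gesture trajectories over time, but the basic appeal is the same: because every point carries full x, y, z coordinates, the system isn’t limited to spotting fingertips on a single plane.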

The company doesn’t plan to release the pico on its own; it’s a reference design pmdtechnologies intends to sell to manufacturers. So in the future, you might see an assortment of cars, computers, or even robots with gesture control technology based on this teensy device.

Mariella Moon
Former Digital Trends Contributor