Crowdfunded Play-i programmable robots reach fundraising goal


Play-i, a developer of programmable and customizable robots for kids aged five and up, has reached its $250,000 fundraising goal. That’s a big achievement considering that, just two days ago, the company had raised only about $141,000. The sharp spike in funding suggests there’s plenty of demand for the lovable contraptions Play-i hopes to offer today’s youth.

Play-i was co-founded by Vikas Gupta, a Silicon Valley-based entrepreneur. Its robots, dubbed Bo and Yana, are both controllable and customizable: they can be programmed to create sounds, sing songs, play games like tag, and more. Children will be able to control them from Android and iOS devices over Bluetooth 4.0.

According to the official Play-i site, the programming kids do is visually based and combines music, stories, and animation. Think of the programming tasks as interactive cartoons that give your kids the building blocks of programming. Once kids grasp those basics, Play-i claims, they’ll be able to write their own code.

Play-i states that, while Bo and Yana work great together, they’re also just as fun to play with and program independently of each other. However, Play-i says that having both Bo and Yana allows for “more advanced gameplay and new programming challenges.”

Based in Mountain View, Calif., Play-i hopes to start shipping its programmable robots in summer 2014. Though it’s unclear how much it will cost to get Bo and/or Yana, it’ll be interesting to see whether programmable robots aimed at kids catch on by the time next year’s holiday shopping season rolls around.

Konrad Krawczyk
Former Digital Trends Contributor
Konrad covers desktops, laptops, tablets, sports tech and subjects in between for Digital Trends. Prior to joining DT, he…