
Thalmic Labs giving away MYO armbands through #ifihadMYO campaign


Earlier this year, we heard about the MYO, a new gesture-control armband from Thalmic Labs. The device allows users to control their Mac or PC through motion alone. Just recently, Thalmic Labs took its project to the next level, raising an impressive $14.5 million in funding to help make its vision a reality. Now the Waterloo-based team is taking things one step further by offering five of its upcoming devices to some seriously lucky people.

The giveaway is called #ifihadMYO, and if that looks familiar, it's because it's a direct (although flattering) clone of Google's recent #ifihadGlass Twitter-based contest, in which people shared why they should get their hands on a free Google Glass headset. Thalmic fans get to choose the winners of this giveaway. The company posts a live stream of all the entries on its website and then allows readers to up-vote the contestants they think are most deserving of a free MYO armband.

Currently, there are only a few entries, but some of them really stand out. One contestant is planning to use a MYO armband to help get kids interested in coding. The proposed project uses the MYO in conjunction with RC cars to give kids a platform where coding is both fun and functional.

The contest runs from now until August 31. Thalmic Labs has already racked up more than 30,000 pre-orders for the device, so there is clearly a lot of interest in the MYO.

If you think you have what it takes to win one of these innovative devices, then be sure to enter the contest on Thalmic Labs’ website. 

Russ Boswell
Former Digital Trends Contributor