
Has the Kinect already been hacked?

Adafruit Industries, the MIT-backed, open-source-loving, DIY electronics company, has found a winner for its hack-the-Kinect competition. Formally called the Open Kinect (or OK) project, the contest launched shortly before the release of Microsoft’s new motion-sensing camera for the Xbox 360, offering $2,000 to the first person who could hack the Kinect (under an open source license) and run it on Windows.

Why, you ask? As Adafruit writes, “Imagine being able to use this off the shelf camera for Xbox for Mac, Linux, Win, embedded systems, robotics, etc. We know Microsoft isn’t developing this device for FIRST robotics, but we could! Let’s reverse engineer this together, get the RGB and distance out of it and make cool stuff!”

And now it looks like Adafruit will have to pony up that $2,000 bounty to programmer “AlexP,” a member of the Natural User Interface (NUI) Group, a research community focused on open source. Adafruit has posted his video to YouTube showing the Kinect’s motor and accelerometer being controlled by a PC running Windows 7, and if AlexP’s code checks out (all signs point to yes), it could become public very soon.
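For readers curious what “controlling the motor and accelerometer” involves at the USB level, here is a minimal, hypothetical sketch in Python using pyusb. The device IDs and vendor control-request codes (0x31 to set tilt, 0x32 to read the joint-state report) are those later documented by the OpenKinect/libfreenect community; they are assumptions here, not details from AlexP’s still-unpublished code.

```python
# Sketch of Kinect motor/accelerometer access over USB, under the assumption
# that the device speaks the vendor requests later documented by the
# OpenKinect/libfreenect project. Requires pyusb and a libusb backend.
import usb.core

VID, PID = 0x045E, 0x02B0  # Microsoft vendor ID; Kinect motor subdevice

dev = usb.core.find(idVendor=VID, idProduct=PID)
if dev is None:
    raise RuntimeError("Kinect motor device not found")

def set_tilt(degrees):
    """Tilt the sensor head; wValue is twice the angle in degrees."""
    # bmRequestType 0x40 = host-to-device, vendor request, device recipient.
    # Mask to 16 bits so negative angles encode as two's complement.
    dev.ctrl_transfer(0x40, 0x31, (int(degrees) * 2) & 0xFFFF, 0, [])

def read_accelerometer():
    """Read the 10-byte joint-state report and unpack x/y/z counts."""
    # bmRequestType 0xC0 = device-to-host, vendor request, device recipient.
    buf = dev.ctrl_transfer(0xC0, 0x32, 0, 0, 10)
    # Per libfreenect, bytes 2-7 hold signed 16-bit big-endian counts.
    x = int.from_bytes(buf[2:4], "big", signed=True)
    y = int.from_bytes(buf[4:6], "big", signed=True)
    z = int.from_bytes(buf[6:8], "big", signed=True)
    return x, y, z

set_tilt(10)                 # nod the camera upward
print(read_accelerometer())  # raw counts; roughly 819 counts per g
```

Nothing exotic is happening here: the motor and accelerometer live on their own USB device with simple vendor-specific control transfers, which is why this part of the Kinect fell first.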

On an interesting side note, AlexP is the same person who hacked the PS3 Eye in 2008; he lists it in his background on the NUI Group site.

Adafruit’s primary interest in unlocking this code is robotics, but the hack could be applied to almost anything (phones, cameras, computers) as long as the user is the correct distance from the camera.

Of course, Microsoft is less than thrilled about this. In an e-mail to CNET when Adafruit first announced the competition, the company said, “Microsoft does not condone the modification of its products. With Kinect, Microsoft built in numerous hardware and software safeguards designed to reduce the chances of product tampering. Microsoft will continue to make advances in these types of safeguards and work closely with law enforcement and product safety groups to keep Kinect tamper-resistant.”

Seems like the tampering has already been done.

Molly McHugh