Exoskeletons with autopilot: A peek at the near future of wearable robotics

Automation makes things easier. It also makes things potentially scarier as you put your well-being in the hands of technology that has to make spur-of-the-moment calls without first consulting you, the user. A self-driving car, for instance, must be able to spot a traffic jam or swerving cyclist and react appropriately. If it can do this effectively, it’s a game-changer for transportation. If it can’t, the results may be fatal.

At the University of Waterloo, Canada, researchers are working on just this problem — only applied to the field of wearable robot exosuits. These suits, which can range from industrial wearables reminiscent of Aliens’ Power Loader to assistive suits for individuals with mobility impairments resulting from age or physical disabilities, are already in use as augmentation devices to aid their wearers. But they’ve been entirely manual in their operation. Now, researchers want to give them a mind of their own.

To that end, the University of Waterloo investigators are developing A.I. tools like computer vision that will allow exosuits to sense their surroundings and adjust movements accordingly — such as being able to spot flights of stairs and climb them automatically or otherwise respond to different walking environments in real time. Should they pull it off, it will forever change the usefulness of these assistive devices. Doing so isn’t easy, however.

The biggest challenge for robotic exoskeletons

“Control is generally regarded as one of the biggest challenges to developing robotic exoskeletons for real-world applications,” Brokoslaw Laschowski, a Ph.D. candidate in the university’s Systems Design Engineering department, told Digital Trends. “To ensure safe and robust operation, commercially available exoskeletons use manual controls like joysticks or mobile interfaces to communicate the user’s locomotor intent. We’re developing autonomous control systems for robotic exoskeletons using wearable cameras and artificial intelligence, [so as to alleviate] the cognitive burden associated with human control and decision-making.”

Image: The wearable camera used in the exoskeleton research (University of Waterloo)

As part of the project, the team had to develop an A.I.-powered environment classification system built on the ExoNet database, which it claims is the largest-ever open-source image dataset of human walking environments. The data was gathered by having people walk around local environments with a chest-mounted camera recording their movement and locomotion. That footage was then used to train neural networks.
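As a rough illustration of what working with such a dataset might look like, the snippet below sketches how frames from a wearable camera, sorted into folders by walking environment, could be loaded for training. The folder names, image size, and PyTorch tooling are assumptions for illustration only, not details of the actual ExoNet pipeline.

# A minimal sketch (not the ExoNet pipeline itself) of loading chest-camera
# frames grouped into hypothetical walking-environment folders for training.
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed input size
    transforms.ToTensor(),
])

# e.g. frames/level_ground/, frames/incline_stairs/, frames/decline_stairs/ ...
dataset = datasets.ImageFolder("frames", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

print(dataset.classes)  # environment labels inferred from the folder names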

“Our environment classification system uses deep learning,” Laschowski continued. “However, high-performance deep-learning algorithms tend to be quite computationally expensive, which is problematic for robotic exoskeletons with limited operating resources. Therefore, we’re using efficient convolutional neural networks with minimal computational and memory storage requirements for the environment classification. These deep-learning algorithms can also automatically and efficiently learn optimal image features directly from training data, rather than using hand-engineered features as is traditionally done.”
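To make the idea of a lightweight, learned classifier concrete, here is a minimal sketch that repurposes an efficient off-the-shelf backbone (MobileNetV2) for environment classification. The backbone choice and the number of classes are assumptions for illustration; the Waterloo team’s actual architecture may differ.

# A minimal sketch, assuming a lightweight off-the-shelf backbone stands in
# for the kind of efficient CNN described above.
import torch.nn as nn
from torchvision import models

NUM_ENV_CLASSES = 6  # hypothetical: level ground, stairs up/down, ramps, etc.

# Pretrained features are learned from data, not hand-engineered.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
# Swap the classification head for the walking-environment classes.
model.classifier[1] = nn.Linear(model.last_channel, NUM_ENV_CLASSES)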

John McPhee, a professor of Systems Design Engineering at the University of Waterloo, told Digital Trends: “Essentially, we are replacing manual controls — [like] stop, start, lift leg for step — with an automated solution. One analogy is an automatic powertrain in a car, which replaces manual shifting. Nowadays, most people drive automatics because it is more efficient, and the user can focus on their environment more rather than operating the clutch and stick. In a similar way, an automated high-level controller for an exo will open up new opportunities for the user [in the form of] greater environmental awareness.”

As with a self-driving car, the researchers note that the human user will possess the ability to override the automated control system if the need arises. While it will still require a bit of faith to, for instance, trust that your exosuit will spot a flight of descending stairs prior to launching down them, the wearer can take control in scenarios where it’s necessary.
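A toy sketch of that override logic, purely for illustration: the classifier proposes a locomotion mode from the current camera frame, but any manual command from the wearer takes precedence. The mode names and mapping here are hypothetical, not drawn from the researchers’ controller.

# Hypothetical high-level mode selection with a manual override.
def select_mode(predicted_env: str, manual_command: str | None) -> str:
    """Map the classified environment to a locomotion mode unless overridden."""
    if manual_command is not None:   # the wearer always wins
        return manual_command
    mode_map = {
        "level_ground": "walk",
        "incline_stairs": "stair_ascent",
        "decline_stairs": "stair_descent",
    }
    return mode_map.get(predicted_env, "stand")  # default to a safe mode

# Example: the classifier sees descending stairs, but the user forces a stop.
print(select_mode("decline_stairs", manual_command="stop"))  # -> "stop"
print(select_mode("decline_stairs", manual_command=None))    # -> "stair_descent"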

Still prepping for prime time

Right now, the project is a work in progress. “We’re currently focusing on optimizing our A.I.-powered environment classification system, specifically improving the classification accuracy and real-time performance,” said Laschowski. “This technical engineering development is essential to ensuring safe and robust operation for future clinical testing using robotic exoskeletons with autonomous control.”

Image: A wearable robot exoskeleton in use (University of Waterloo)

Should all go to plan, it hopefully won’t be long before such algorithms can be deployed in commercially available exosuits. These are already becoming more widespread, thanks to innovative companies like Sarcos Robotics, and are being used in ever more varied settings. They can also greatly enhance human capabilities beyond what the wearer could manage without the suit.

In some ways, it’s highly reminiscent of the original conception of the cyborg, not as some nightmarish Darth Vader or RoboCop amalgamation of half-human and half-machine, but, as researchers Manfred Clynes and Nathan Kline wrote in the 1960s, as “an organizational system in which … robot-like problems [are] taken care of automatically, leaving [humans] free to explore, to create, to think, and to feel.” Shorn of its faintly hippy vibes (this was the ’60s), the idea still stands: By letting robots autonomously take care of the mundane problems associated with navigation, the human users can focus on more important, engaging things. After all, most people don’t have to consciously think about the minutiae of moving one foot in front of the other when they walk. Why should someone in a robot exosuit have to do so?

The latest paper dedicated to this research was recently published in the journal IEEE Transactions on Medical Robotics and Bionics.

Luke Dormehl