In “Secondhand Spoke,” the 15th episode of the 12th season of Family Guy, teenage son Chris Griffin is being bullied. With Chris unable to come up with responses to the verbal gibes of his classmates, his smarter baby brother, Stewie, hops in a backpack so that Chris can surreptitiously carry him around. Prompted by Stewie, Chris not only manages to get back at the bullies, but even winds up getting nominated for class president for his troubles.
That Family Guy B-plot bears only the most passing of resemblances to a new project carried out by Intel and the University of Georgia. Nonetheless, it’s an intriguing one: a smart backpack that helps its wearer navigate a given environment, all through the power of speech.
What researcher Jagadish Mahendran and his team have developed is an A.I.-powered, voice-activated backpack designed to help its wearer perceive the surrounding world. The backpack, which could be particularly useful as an alternative to a guide dog for visually impaired users, uses a camera worn in a vest jacket and a fanny pack containing a battery pack, coupled with a computing unit, so that it can respond to voice commands by audibly describing the world around the wearer.
That means detecting visual information about traffic signs, traffic conditions, changes in elevation, and crosswalks, along with location information, and then turning it all into useful spoken descriptions delivered via Bluetooth earphones.
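To make that last step concrete, here is a minimal sketch of how detections might be turned into speech using the off-the-shelf pyttsx3 text-to-speech library; the detection format, phrasing, and the describe helper are illustrative assumptions, not the project’s actual code.

```python
# Hypothetical sketch: turning object detections into spoken descriptions.
# The detection tuples and wording below are illustrative assumptions.
import pyttsx3  # offline text-to-speech engine

def describe(detections):
    """Compose a short spoken summary from (label, position) pairs."""
    if not detections:
        return "Path is clear."
    phrases = [f"{label} ahead, {position}" for label, position in detections]
    return ". ".join(phrases) + "."

engine = pyttsx3.init()
# Example detections as they might come from the vision stage
frame_detections = [("crosswalk", "ten feet in front"), ("stop sign", "to your left")]
engine.say(describe(frame_detections))
engine.runAndWait()  # block until the audio has been spoken
```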
A useful assistive tool
“The idea of developing an A.I.-based visual-assistance system occurred to me eight years ago in 2013 during my master’s,” Mahendran told Digital Trends. “But I could not make much progress back then for [a] few reasons: I was new to the field and deep learning was not mainstream in computer vision. However, the real inspiration happened to me last year when I met my visually impaired friend. As she was explaining her daily challenges, I was struck by this irony: As a perception and A.I. engineer I have been teaching robots how to see for years, while there are people who cannot see. This motivated me to use my expertise, and build a perception system that can help.”
The system contains some impressive technology, including a Luxonis OAK-D spatial A.I. camera that leverages OpenCV’s Artificial Intelligence Kit with Depth, which is powered by Intel. It is capable of running advanced deep neural networks while also providing high-level computer vision functionality, complete with a real-time depth map, color information, and more.
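For a sense of how an OAK-D is typically driven, here is a minimal sketch that pulls a color preview and a depth map from the camera using Luxonis’ open-source depthai Python API; node and method names follow the depthai 2.x releases, and this is not the project’s own pipeline.

```python
# Minimal depthai 2.x sketch: stream RGB preview and stereo depth from an OAK-D.
import depthai as dai

pipeline = dai.Pipeline()

# Color camera node with a small preview suitable for feeding a neural network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Stereo depth computed on-device from the two mono cameras
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Stream both outputs back to the host
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam.preview.link(xout_rgb.input)
xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

with dai.Device(pipeline) as device:
    rgb_q = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    depth_q = device.getOutputQueue("depth", maxSize=4, blocking=False)
    rgb_frame = rgb_q.get().getCvFrame()    # numpy BGR image
    depth_frame = depth_q.get().getFrame()  # per-pixel depth in millimeters
```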
“The success of the project is that we are able to run many complex A.I. models on a setup that has a simple and small form factor and is cost-effective, thanks to [the] OAK-D camera kit that is powered by Intel’s Movidius VPU, an A.I. chip, along with Intel OpenVINO software,” Mahendran said. “Apart from A.I., I have used multiple technologies such as GPS, point cloud processing, and voice recognition.”
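As an illustration of the voice-recognition piece, the following sketch captures a spoken command from a microphone using the general-purpose speech_recognition Python package; the package choice and the command keywords are assumptions made for illustration, not details of Mahendran’s system.

```python
# Illustrative voice-command sketch using the speech_recognition package.
# The command keywords and the resulting actions are hypothetical.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio).lower()  # online recognizer
except sr.UnknownValueError:
    command = ""  # speech was unintelligible

# Hypothetical mapping from commands to system actions
if "describe" in command:
    print("Run the scene-description pipeline")
elif "locate" in command:
    print("Read out the current GPS position")
```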
Currently in testing phase
As with any wearable device, a big challenge involves making it something that people would actually want to wear. Nobody wants to look like a science-fiction cyborg outside of Comic-Con.
Fortunately, Mahendran’s A.I. vest does well under these parameters. It conforms to the standard that the late Xerox PARC computer scientist Mark Weiser said was necessary for ubiquitous computing: receding into the background without calling attention to itself. The components are all hidden from view, with even the camera (which, by design, must be visible in order to record the necessary images) looking out at the world through three tiny holes in the vest.
“The system is simple, wearable, and unobtrusive so that the user doesn’t get unnecessary attention from other pedestrians,” Mahendran said.
Currently, the project is in the testing phase. “I did the initial [tests myself] in downtown Monrovia, California,” Mahendran said. “The system is robust, and can run in real time.”
Mahendran noted that, in addition to detecting outdoor obstacles ranging from bikes to overhanging tree branches, it can also be useful in indoor settings, such as spotting kitchen cabinet doors that have been left open. In the future, he hopes that members of the public who need such a tool will be able to try it out for themselves.
“We have already formed a team called Mira, which is a group of volunteers from various backgrounds, including people who are visually impaired,” Mahendran said. “We are growing the project further with a mission to provide an open-source, A.I.-based visual assistance system for free. We are currently in the process of raising funds for our initial phase of testing.”