
A.I. headphones could warn distracted pedestrians when there’s traffic around

Headphones can seal us inside our own isolated sound bubbles, putting an invisible wall around wearers even in public spaces. At least, that’s how it can feel. In reality, the world doesn’t actually disappear when you put on your fancy AirPods Pro, as walking across a busy street without paying attention would quickly remind you.

Could machine intelligence help where human intelligence fails us?

That’s certainly what researchers from Columbia University hope. They have developed a Pedestrian Audio Warning System (PAWS) that seeks to alert headphone wearers to the threat posed by passing vehicles. The smart headphone technology works by using machine learning algorithms to interpret vehicle sounds from up to 60 meters away. It can then provide information about the location of those vehicles. The result could be a major boon for pedestrian safety at a time when, tragically, more pedestrians than ever are being killed on roads in the United States.

PAWS prototype headphones
Electrical Engineering and Data Science Institute/Columbia University

The headphones used for the prototype system feature an array of low-cost microphones located in different parts of the headset. A custom onboard integrated circuit extracts the relevant sound features from the incoming audio and sends them to a paired smartphone app. The smartphone then uses machine learning algorithms to determine what is and is not a vehicle sound. The neural network it relies on was trained on audio from a wide range of vehicles and environmental conditions.
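To make that pipeline concrete, here is a minimal sketch, not the Columbia team’s actual model, of what a vehicle-versus-background sound classifier could look like in PyTorch. It assumes log-mel spectrogram patches as input; every layer size, name, and shape here is an illustrative assumption.

```python
# A minimal sketch (NOT the Columbia team's actual model): a binary
# "vehicle vs. background" classifier over log-mel spectrogram patches.
# Input shape, layer sizes, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class VehicleSoundClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local spectro-temporal patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # average over time and frequency
            nn.Flatten(),
            nn.Linear(32, 2),  # logits: [not vehicle, vehicle]
        )

    def forward(self, log_mel):
        # log_mel: (batch, 1, n_mels, n_frames), e.g. 64 mel bands x ~1 s of frames
        return self.net(log_mel)

model = VehicleSoundClassifier()
dummy_batch = torch.randn(8, 1, 64, 101)  # stand-in for real microphone features
print(model(dummy_batch).shape)  # torch.Size([8, 2])
```

In the real system, the feature extraction happens on the headset’s custom circuit, so only compact sound features, rather than raw audio, would reach a model like this on the phone.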

The system is still far from complete. For one thing, it can only identify the approximate position of vehicles, not their trajectory. Knowing where a vehicle is headed would be far more useful than a static snapshot, since the vehicles in question are anything but static. Secondly, the researchers are still working out the best way to signal this information to wearers. One possibility would be to play warning beeps on different sides of stereo headphones to make clear exactly where a sound is coming from.
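As a rough illustration of that last idea, here is a hypothetical sketch of a directional warning beep: a short tone rendered into a stereo buffer and panned toward the vehicle’s estimated bearing using an equal-power pan law. The function name and parameters are our own inventions, not part of PAWS.

```python
# Hypothetical sketch of the stereo-beep idea; the function name and
# parameters are ours, not part of PAWS.
import numpy as np

def directional_beep(bearing_deg, sample_rate=44100, freq_hz=880.0, duration_s=0.2):
    """Render a short warning tone panned toward a bearing.

    bearing_deg: -90 (hard left) through 0 (center) to +90 (hard right).
    Returns a (samples, 2) float array of left/right channels.
    """
    t = np.linspace(0, duration_s, int(sample_rate * duration_s), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * freq_hz * t)
    # Equal-power pan law: total loudness stays roughly constant as the
    # beep moves between the ears.
    pan = np.clip(bearing_deg, -90, 90) / 90.0   # -1 .. +1
    theta = (pan + 1) * np.pi / 4                # 0 .. pi/2
    left = np.cos(theta) * tone
    right = np.sin(theta) * tone
    return np.stack([left, right], axis=1)

stereo = directional_beep(-45)  # a vehicle roughly off the wearer's left shoulder
print(stereo.shape)  # (8820, 2)
```

In practice, the bearing estimate would presumably come from comparing timing and level differences across the headset’s microphone array; here it is simply passed in as an argument.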

The PAWS project has already received a grant of $1.2 million from the National Science Foundation. According to IEEE Spectrum, the team is hoping to pass a “more refined” version of the tech over to a company that could bring it to market.

Google’s LaMDA is a smart language A.I. for better understanding conversation

Artificial intelligence has made extraordinary advances when it comes to understanding words and even being able to translate them into other languages. Google has helped pave the way here with amazing tools like Google Translate and, recently, with its development of Transformer machine learning models. But language is tricky -- and there’s still plenty more work to be done to build A.I. that truly understands us.
Language Model for Dialogue Applications
At Tuesday’s Google I/O, the search giant announced a significant advance in this area with a new language model it calls LaMDA. Short for Language Model for Dialogue Applications, it’s a sophisticated A.I. language tool that Google claims is superior when it comes to understanding context in conversation. As Google CEO Sundar Pichai noted, that might mean intelligently parsing an exchange like “What’s the weather today?” “It’s starting to feel like summer. I might eat lunch outside.” The reply makes perfect sense as human dialogue, but would befuddle many A.I. systems looking for more literal answers.

LaMDA also has a stronger grasp of learned concepts, which it is able to synthesize from its training data. Pichai noted that its responses never follow the same path twice, so conversations feel less scripted and more natural.

Read more
How the USPS uses Nvidia GPUs and A.I. to track missing mail

The United States Postal Service, or USPS, is relying on artificial intelligence powered by Nvidia's EGX systems to track the more than 100 million pieces of mail a day that go through its network. The world's busiest postal service is relying on GPU-accelerated A.I. systems to help solve the challenge of locating lost or missing packages and mail. Essentially, the USPS turned to A.I. to help it locate a "needle in a haystack."

To solve that challenge, USPS engineers created an edge A.I. system of servers that can scan and locate mail. They created algorithms for the system that were trained on 13 Nvidia DGX systems located at USPS data centers. Nvidia's DGX A100 systems, for reference, each pack five petaflops of compute power and cost just under $200,000. They are based on the same Ampere architecture found in Nvidia's consumer GeForce RTX 3000 series GPUs.

Read more
Digital Trends’ Top Tech of CES 2023 Awards

Let there be no doubt: CES isn’t just alive in 2023; it’s thriving. Take one glance at the taxi gridlock outside the Las Vegas Convention Center and it’s evident that two quiet COVID years didn’t kill the world’s desire for an overcrowded in-person tech extravaganza -- they just built up a ravenous demand.

From VR to AI, eVTOLs to QD-OLED, the acronyms were flying, and fresh technologies populated every corner of the show floor -- and even the parking lot. So naturally, we poked, prodded, and tried on everything we could. They weren’t all revolutionary. But they didn’t have to be. We’ve watched enough waves of “game-changing” technologies that never quite arrive to know that sometimes it’s the little tweaks that really count.

Read more