Could unsupervised A.I. enable autonomous cars to learn as they go?

Cortica Automotive

Most autonomous vehicle tech ventures, such as Waymo, GM Cruise, and Nvidia, rack up miles of deep learning experience to build reliably safe systems for self-driving cars. Cortica and Renesas Electronics' entirely different approach focuses on helping cars learn on their own.

Cortica, an Israeli company with roots in predictive artificial intelligence based on visual perception, is embedding its latest “Autonomous A.I.” solution on the Renesas R-Car V3H system-on-chip (SoC) solution for self-driving cars.

Referred to by the companies as “unsupervised learning,” Cortica’s autonomous A.I. enables a vehicle to make predictions based on visual data received from forward-facing cameras. According to Cortica, the system uses “‘unsupervised learning’ methodology to mimic the way humans experience and incorporate the world around them.”
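Cortica has not published how its system works, but "unsupervised learning" generally means finding structure in data without human-provided labels. A minimal, purely illustrative sketch of that idea is k-means clustering: given unlabeled feature vectors, the algorithm discovers groupings on its own. The data and cluster count here are made up for the example.

```python
# Illustrative sketch only: a toy "unsupervised learning" step.
# Cortica has not published its algorithm; this shows the general idea of
# grouping unlabeled feature vectors into clusters with no human labels.
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 2D feature vectors into k groups without any labels."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

# Unlabeled "features": two obvious groups the algorithm finds by itself.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers = sorted(kmeans(data, k=2))
print(centers)  # one center near (0.1, 0.1), the other near (5.0, 5.0)
```

A supervised deep-learning system would instead need every training example labeled by a human; here the structure emerges from the data alone, which is the property the companies are emphasizing.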

The goal is for the car to react to any situation, whether or not the objects or circumstances were previously converted into rules by deep-learning A.I. Suppose a mattress flies off the back of a pickup truck ahead of you at highway speed. Would you rather be in an autonomous vehicle managed by a system of rules based on specific prior experiences, or one that observes and reacts to objects in motion based on how various object classes are likely to move?
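The mattress scenario can be sketched in a few lines: rather than looking up a rule for "flying mattress," a perception system can extrapolate an object's observed motion using a prior for how that class of object tends to move. The class names and drag values below are hypothetical, chosen only to illustrate the idea; this is not Cortica's method.

```python
# Illustrative sketch only: predicting where a moving object will be,
# using a simple per-class motion prior. The classes and drag factors
# are hypothetical; Cortica has not published how it models motion.

# Per-class prior: fraction of velocity retained each time step
# (a light mattress sheds speed to air drag far faster than a vehicle).
CLASS_DRAG = {"vehicle": 0.98, "mattress": 0.70, "pedestrian": 0.95}

def predict_position(cls, pos, vel, steps, dt=0.1):
    """Extrapolate an (x, y) position `steps` time steps ahead."""
    x, y = pos
    vx, vy = vel
    keep = CLASS_DRAG.get(cls, 1.0)  # unknown class: constant velocity
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        vx *= keep  # class-specific slowdown per step
        vy *= keep
    return x, y

# Both objects start at 25 m/s; the mattress is predicted to fall
# well short of where a vehicle would be one second later.
mattress = predict_position("mattress", (0.0, 0.0), (25.0, 0.0), steps=10)
vehicle = predict_position("vehicle", (0.0, 0.0), (25.0, 0.0), steps=10)
print(mattress, vehicle)
```

The point of the example is the contrast the article draws: a rules-based system needs to have seen a flying mattress before, while a class-conditioned motion model can anticipate an unfamiliar object's trajectory from how its kind generally behaves.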

According to Cortica, its autonomous A.I. can leverage the system’s relatively low computing demand compared to deep learning systems to achieve greater perception accuracy and performance. Referring to the collaboration with Renesas prior to a demonstration at CES 2019, Cortica stated in a news release:

For the first time, the collaborative effort will introduce a more robust and scalable open-platform perception solution featuring unmatched accuracy and performance rates, faster reaction time, and overall safety upgrades for ADAS. The solution demo by Cortica at CES will demonstrate a new generation of safer, smarter, and more ‘aware’ auto running directly on the Renesas chip with unparalleled execution times. 

One hundred percent predictability is a specious goal for autonomous vehicle systems. Everyone wants error-free performance, but no system will ever have a perfect record. Still, as with horseshoes and hand grenades, close counts: the closer a system gets, the better.

Nearly all fatal accidents in the United States involve human error. In the 2016 U.S. Department of Transportation Fatal Traffic Crash Data report, the National Highway Traffic Safety Administration (NHTSA) stated, “NHTSA continues to work closely with its state and local partners, law enforcement agencies, and the more than 350 members of the Road to Zero Coalition to help address the human choices that are linked to 94 percent of serious crashes.”

The NHTSA is an active force in self-driving vehicle development. In September 2016 the agency released its Federal Automated Vehicles Policy.

The DOT fatal crash data report refers to traffic safety goals of autonomous vehicle systems, stating that, “NHTSA also continues to promote vehicle technologies that hold the potential to reduce the number of crashes and save thousands of lives every year, and may eventually help reduce or eliminate human error and the mistakes that drivers make behind the wheel.”

Bruce Brown
Digital Trends Contributing Editor Bruce Brown is a member of the Smart Homes and Commerce teams. Bruce uses smart devices…