Cornell University researchers have developed a way to help autonomous vehicles create “memories” of past experiences and use them in future navigation, especially during adverse weather conditions when the car cannot safely rely on its sensors.
Cars using artificial neural networks have no memory of the past and are constantly seeing the world for the first time, no matter how many times they have previously driven down a particular road.
The researchers produced three concurrent papers with the aim of overcoming this limitation. Two will be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), being held June 19-24 in New Orleans.
“The fundamental question is, can we learn from repeated traversals?” said senior author Kilian Weinberger, a computer science professor. “For example, a car might mistake an oddly shaped tree for a pedestrian when the laser scanner first detects it from a distance, but once it gets close enough, the object category becomes clear. So the second time you drive past the same tree, even in fog or snow, you would hope that the car has now learned to recognize it correctly.”
Led by PhD student Carlos Diaz-Ruiz, the group compiled a dataset by repeatedly driving a car equipped with Light Detection and Ranging (LiDAR) sensors along a 15-kilometer loop in and around Ithaca, 40 times over an 18-month period. The traversals capture different environments (highway, urban, campus), weather conditions (sunny, rainy, snowy), and times of day. The resulting dataset contains more than 600,000 scenes.
“It deliberately exposes one of the main challenges in self-driving cars: bad weather conditions,” Diaz-Ruiz said. “When the street is covered in snow, people can rely on their memories, but without memories a neural network is at a severe disadvantage.”
HINDSIGHT is an approach that uses neural networks to compute descriptors of objects as the car passes them. It then compresses these descriptors, which the group calls Spatial-Quantized Sparse History (SQuaSH) features, and stores them on a virtual map, similar to a “memory” stored in a human brain.
The next time the self-driving car traverses the same location, it can query the local SQuaSH database of every LiDAR point along the route and “remember” what it learned last time. The database is continuously updated and shared across vehicles, enriching the information available for recognition.
“This information can be added as features to any LiDAR-based 3D object detector,” said PhD student Yurong You. “Both the detector and the SQuaSH representation can be trained jointly without additional supervision or human annotation, which is time- and labor-intensive.”
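In rough terms, the pattern is to key compressed descriptors to coarse spatial cells so that a later drive can look them up by position. The sketch below illustrates that general idea only; it is not the published HINDSIGHT/SQuaSH implementation, and the voxel size, descriptor dimensionality, and class names are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of a spatially quantized "memory" of past descriptors,
# loosely in the spirit of SQuaSH. Voxel size, descriptor dimension, and the
# aggregation rule are assumptions, not the paper's actual design.

VOXEL_SIZE = 0.5  # meters; assumed spatial quantization step


def voxel_key(xyz: np.ndarray) -> tuple:
    """Map a 3D point (in a shared world frame) to a coarse voxel index."""
    return tuple(np.floor(xyz / VOXEL_SIZE).astype(int))


class SpatialMemory:
    """Stores per-voxel descriptors so later traversals can query them by position."""

    def __init__(self, dim: int = 32):
        self.dim = dim
        self.features = {}  # voxel key -> running-mean descriptor
        self.counts = {}

    def write(self, points: np.ndarray, descriptors: np.ndarray) -> None:
        """Fold descriptors from the current drive into the map (running mean)."""
        for p, d in zip(points, descriptors):
            k = voxel_key(p)
            n = self.counts.get(k, 0)
            prev = self.features.get(k, np.zeros(self.dim))
            self.features[k] = (prev * n + d) / (n + 1)
            self.counts[k] = n + 1

    def query(self, points: np.ndarray) -> np.ndarray:
        """Look up the stored descriptor for each LiDAR point on a later traversal."""
        out = np.zeros((len(points), self.dim))
        for i, p in enumerate(points):
            out[i] = self.features.get(voxel_key(p), np.zeros(self.dim))
        return out
```

In a detector, the queried vectors would simply be concatenated with each point’s current features, which is what makes the memory usable by any LiDAR-based 3D object detector.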
HINDSIGHT is a precursor to additional research the team is conducting, MODEST (Mobile Object Detection with Ephemerality and Self-Training), which goes a step further by allowing the car to learn the entire perception pipeline from scratch.
While HINDSIGHT still assumes that the artificial neural network is already trained to detect objects and extends it with the ability to create memories, MODEST assumes that the artificial neural network in the vehicle has never been exposed to objects or streets. Through repeated traversals of the same route, it can learn which parts of the environment are stationary and which objects are moving. Over time, it learns on its own which objects are other road users and what is safe to ignore.
The algorithm can then reliably detect these objects even on roads that were not part of the initial repeated traversals.
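The ephemerality idea can be illustrated with a simple computation: LiDAR points that are not supported by geometry in the other traversals of the same route are probably transient, and therefore candidates for mobile objects. The sketch below is a loose illustration under that assumption, not the MODEST pipeline itself; the neighborhood radius and scoring rule are invented for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative sketch of an ephemerality score: points from the current drive
# that have no nearby geometry in earlier drives of the same route are treated
# as likely mobile objects, which could seed pseudo-labels for self-training.
# The radius and scoring rule are assumptions, not the paper's algorithm.

RADIUS = 0.3  # meters; neighborhood used to check support in other traversals


def ephemerality(query_scan: np.ndarray, other_traversals: list) -> np.ndarray:
    """Fraction of past traversals with no geometry near each point.

    query_scan: (N, 3) points from the current drive, in a shared world frame.
    other_traversals: list of (M_i, 3) point clouds from earlier drives.
    Returns scores in [0, 1]; high means rarely seen before, i.e. ephemeral.
    """
    trees = [cKDTree(t) for t in other_traversals]
    misses = np.zeros(len(query_scan))
    for tree in trees:
        # Nearest-neighbor check: is there any past point within RADIUS?
        dists, _ = tree.query(query_scan, k=1)
        misses += (dists > RADIUS).astype(float)
    return misses / max(len(trees), 1)


# High-ephemerality points could then be clustered into boxes and used as
# pseudo-labels, while persistent points are treated as static background.
```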
The researchers hope the approaches can dramatically reduce the development costs of autonomous vehicles (which currently still rely heavily on expensive human-annotated data) and make such vehicles more efficient by learning to navigate in the locations where they are used most.
Article courtesy of Cornell University.
By Tom Fleischman, Cornell Chronicle