
New AI Model Can Help AVs Drive Better in Bad Weather

by Charles Choi
Image: Safe, reliable autonomous vehicles will need to be able to position themselves precisely at all times, including during adverse weather. Credit: Yasin Almalıoğlu.

A new artificial intelligence (AI) system can help autonomous vehicles (AVs) drive more safely and reliably in bad weather and in GPS-denied scenarios, according to a new study.

Currently, AVs struggle to pinpoint their location on a map to within a few centimeters, especially during bad weather. GPS radio signals, for example, provide only meter-level accuracy and can be lost or degraded in many environments by obstacles or reflections; AV operation in urban areas surrounded by high-rise buildings remains particularly challenging for this reason. In addition, cameras and lidars operate in the visible and infrared bands, so bad weather dramatically disrupts their sensory data.

Millimeter-wave radars, by contrast, are insensitive to scene illumination and largely unaffected by adverse weather such as fog, rain and snow. Furthermore, they do not require optical lenses and can therefore be sealed inside plastic housings, making them highly resilient to water and dust. However, their data are much coarser and noisier than those of lidars and cameras.

This positioning problem is a major reason why AVs “have been limited to relatively small-scale trials up to now,” study lead author Yasin Almalıoğlu at the University of Oxford in England said in a statement. “For instance, weather such as rain or snow may cause an AV to detect itself in the wrong lane before a turn, or to stop too late at an intersection because of imprecise positioning.”

To overcome this problem, Almalıoğlu and his colleagues developed a new AI model for “ego-motion estimation,” a crucial component of AV navigation in which the driving system estimates the car’s motion relative to objects observed from the car itself. The model, dubbed geometry-aware multimodal ego-motion estimation (GRAMME), combines data from visual sensors, which adverse conditions can disrupt, with information from weather-immune sources such as radar, so that the strengths of each can be exploited under different weather conditions.
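The article does not describe GRAMME’s internal architecture, but the core idea, regressing the car’s motion from fused camera and radar inputs, can be sketched in a few lines of PyTorch. Everything below, from the module names to the layer sizes, is an illustrative assumption rather than the paper’s actual design:

```python
# Illustrative sketch only: the study's actual architecture is not given in
# this article, so all names, shapes and layer choices here are assumptions.
import torch
import torch.nn as nn

class MultimodalEgoMotion(nn.Module):
    """Fuses camera and radar features to regress a 6-DoF relative pose."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Separate encoders: vision degrades in bad weather, radar does not.
        self.cam_encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.radar_encoder = nn.Sequential(
            nn.Conv2d(2, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # Fused features -> 6-DoF motion (3 translation + 3 rotation terms).
        self.pose_head = nn.Linear(2 * feat_dim, 6)

    def forward(self, cam_pair: torch.Tensor, radar_pair: torch.Tensor) -> torch.Tensor:
        # cam_pair: two stacked RGB frames, shape (B, 6, H, W);
        # radar_pair: two stacked radar scans, shape (B, 2, H, W).
        fused = torch.cat(
            [self.cam_encoder(cam_pair), self.radar_encoder(radar_pair)], dim=1
        )
        return self.pose_head(fused)  # (B, 6) relative motion between frames
```

In a design along these lines, the radar branch keeps producing usable features when fog or snow corrupts the camera branch, which is the kind of complementarity the researchers describe.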

The researchers trained the AI model on several publicly available AV datasets that include data from multiple sensors, such as cameras, lidar and radar. The data were collected in diverse settings, including varying levels of light, darkness and precipitation.
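The article does not name the datasets or their formats, so the sketch below is purely hypothetical: it shows the general shape of a synchronized multi-sensor sample and one way consecutive frames might be paired for ego-motion training.

```python
# Hypothetical sketch: field names and layout are assumptions, not the
# structure of any specific public AV dataset used in the study.
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorSample:
    """One time-synchronized multi-sensor frame."""
    camera: np.ndarray   # RGB image, shape (H, W, 3)
    lidar: np.ndarray    # point cloud, shape (N, 3)
    radar: np.ndarray    # radar scan, e.g. a range-azimuth grid (R, A)
    timestamp: float     # seconds; used to pair consecutive frames
    condition: str       # e.g. "day", "night", "rain", "snow"

def consecutive_pairs(samples: list[SensorSample], max_gap: float = 0.2):
    """Yield frame pairs close enough in time to estimate motion between."""
    for prev, curr in zip(samples, samples[1:]):
        if curr.timestamp - prev.timestamp <= max_gap:
            yield prev, curr
```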

Using these datasets, the model learned to reconstruct scene geometry and calculate the car’s position from new data. The researchers demonstrated that it performed robustly in all weather, including rain, fog and snow, by day and by night, outperforming state-of-the-art ego-motion estimation approaches.
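Ego-motion estimators are typically evaluated by chaining their per-frame motion estimates into a full trajectory and comparing it against ground truth. The article does not state which metrics the paper used; the sketch below uses mean translation error, a common choice, purely for illustration.

```python
# Minimal sketch, assuming per-frame motion is expressed as 4x4 homogeneous
# transforms; the metric below is a common choice, not necessarily the paper's.
import numpy as np

def compose_trajectory(relative_poses: list[np.ndarray]) -> list[np.ndarray]:
    """Chain 4x4 relative transforms into absolute poses from the origin."""
    poses = [np.eye(4)]
    for rel in relative_poses:
        poses.append(poses[-1] @ rel)
    return poses

def mean_translation_error(est: list[np.ndarray], gt: list[np.ndarray]) -> float:
    """Average Euclidean distance between estimated and true positions."""
    errs = [np.linalg.norm(e[:3, 3] - g[:3, 3]) for e, g in zip(est, gt)]
    return float(np.mean(errs))
```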

“The precise positioning capability provides a basis for numerous core functionalities of AVs, such as motion planning, prediction, situational awareness, and collision avoidance,” study co-senior author Niki Trigoni at the University of Oxford said in a statement. “This study provides an exciting complementary solution for the AV software stack to achieve this capability.”

The researchers anticipate that their work will bring AVs one step closer to safe and smooth all-weather autonomous driving, and ultimately to broader adoption across society.

“Estimating the precise location of AVs is a critical milestone to achieving reliable autonomous driving under challenging conditions,” study co-senior author Andrew Markham at the University of Oxford said in a statement. “This study effectively exploits the complementary aspects of different sensors to help AVs navigate in difficult daily scenarios.”

The scientists detailed their findings online Sept. 8 in the journal Nature Machine Intelligence.
