Simulated terrible drivers can cut the time and costs of autonomous vehicle testing by a factor of a thousand or more, a recent study finds.
Currently, the safety performance of autonomous vehicles remains significantly worse than that of human drivers. Average drivers in the United States face roughly a one in 2 million chance of a crash per mile driven. In contrast, state-of-the-art autonomous vehicles need to disengage their automated driving systems and let human safety drivers take control about every 200,000 miles, according to the 2021 Disengagement Reports from California.
A major challenge in the development of fully autonomous vehicles is the cost and time associated with safety testing. This is because the most dangerous emergencies faced during driving, such as accidents or near misses, “are very rare in the real world,” study senior author Henry Liu, a professor of civil engineering at the University of Michigan, Ann Arbor, said in a statement.
This problem, which the scientists dubbed the “curse of rarity,” makes safety testing autonomous vehicles highly inefficient, akin to looking for a needle in a haystack. To encounter safety-critical events in real life, test vehicles would have to drive hundreds of millions to hundreds of billions of miles to repeatedly experience these situations and collect the data needed to overcome them.
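To see why real-world testing runs into this wall, a rough back-of-envelope calculation helps. The sketch below uses the per-mile rates quoted in this article as illustrative inputs; the event counts and the exact scenario rate are assumptions for the example, not figures from the study.

```python
# Back-of-envelope: expected test mileage needed to observe rare
# safety-critical events at a given per-mile rate. The numbers below
# are illustrative assumptions, not results from the study.

def miles_to_observe(events_needed: int, events_per_mile: float) -> float:
    """Expected miles of driving to encounter a target number of
    safety-critical events, assuming they occur independently at a
    constant per-mile rate."""
    return events_needed / events_per_mile

# Suppose a dangerous scenario occurs about once per 2 million miles
# (roughly the human crash rate quoted above), and a developer wants
# 1,000 examples of it to train and validate against:
naturalistic = miles_to_observe(1_000, 1 / 2_000_000)
print(f"{naturalistic:,.0f} miles")  # 2,000,000,000 miles

# A 99.99 percent reduction, as the study reports, shrinks that to:
accelerated = naturalistic * (1 - 0.9999)
print(f"{accelerated:,.0f} miles")  # 200,000 miles
```

Even under these generous assumptions, naturalistic testing demands billions of miles, which is the gap the study's accelerated approach aims to close.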
Now a new system from Liu and his colleagues shows that artificial intelligence (AI) could reduce the number of miles needed for testing by 99.99 percent. They detailed their findings in the March 23 issue of the journal Nature.
In essence, the scientists trained AI programs to become terrible drivers. The researchers took real-world traffic data containing rare safety-critical events and got rid of everything reflecting the long spans when drivers behaved in responsible, expected ways. These AI programs were then trained to perform dangerous maneuvers, such as running red lights, forcing autonomous vehicles to make the kinds of emergency decisions that confront human drivers only rarely on the road but are needed to train the machines.
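The data-thinning step can be sketched roughly as follows. The record format, event labels, and example log are hypothetical stand-ins for illustration; the study's actual pipeline then learns adversarial driving policies from data filtered in this spirit.

```python
# Sketch of thinning driving logs down to their safety-critical moments.
# The LogSegment format and event labels are hypothetical assumptions,
# not details from the paper.

from dataclasses import dataclass

@dataclass
class LogSegment:
    start_mile: float
    end_mile: float
    event: str  # e.g. "nominal", "hard_brake", "near_miss", "red_light_run"

SAFETY_CRITICAL = {"hard_brake", "near_miss", "red_light_run"}

def keep_critical(log: list[LogSegment]) -> list[LogSegment]:
    """Discard the long uneventful stretches, keeping only segments
    with safety-critical behavior for adversary training."""
    return [seg for seg in log if seg.event in SAFETY_CRITICAL]

# A toy log: two brief dangerous moments buried in ~120 miles of
# ordinary, responsible driving.
log = [
    LogSegment(0.0, 54.9, "nominal"),
    LogSegment(54.9, 55.0, "near_miss"),
    LogSegment(55.0, 119.9, "nominal"),
    LogSegment(119.9, 120.0, "red_light_run"),
]
critical = keep_critical(log)
print([seg.event for seg in critical])  # ['near_miss', 'red_light_run']
```

Training adversarial agents only on the surviving segments concentrates the learning signal on exactly the rare behaviors, such as red-light running, that the test vehicles need to confront.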
In experiments, the scientists went beyond testing the autonomous vehicle software against these AI adversaries in computer-simulated worlds. Instead, they had autonomous vehicles drive in circles on real tracks where they faced virtual AI bad drivers.
The aim of these “augmented reality” environments was to capture the complexity of the real world, better accounting for challenges such as dirt on camera lenses that may push self-driving cars to act in an unsafe manner. At the same time, this approach was cheaper than having the AI bad drivers handle actual cars, and much safer than having humans attempt to act as bad drivers to test the autonomous vehicles.
Testing was conducted at Mcity’s urban test environment in Ann Arbor, Michigan, as well as the highway test track at the American Center for Mobility in Ypsilanti, Michigan. The researchers employed a vehicle capable of Level 4 autonomy — that is, one able to act without any human intervention in the vast majority of situations.
The scientists found these AI adversaries could speed up the testing of the autonomous vehicle by 1,000 to 100,000 times. This could help automakers quickly verify whether their autonomous vehicle technology can reduce crashes and save lives.