Autonomous vehicles programmed with expressive behaviors may prove safer for pedestrians, according to a new study from Motional, the $4 billion driverless technology joint venture between Aptiv and Hyundai.
A major challenge that autonomous vehicles face in an urban setting is communicating with pedestrians and others on the road. Whereas human drivers and pedestrians often use familiar, intuitive social cues such as hand and arm movements, gaze and eye contact to communicate their intentions to stop or yield, interacting with an autonomous vehicle without a human in the driver’s seat can feel confusing due to the absence of these cues.
Scientists at Motional investigated what expressive behaviors they could endow an autonomous vehicle with to convey the machine’s intention to yield to a pedestrian trying to cross a street. They sought inspiration from the animation industry’s use of movement to breathe life into inanimate objects.
This is not the first time automakers have investigated endowing autonomous vehicles with expressive behaviors to make them safer. For instance, Volvo has suggested it can make self-driving cars safer by giving them a kind of body language built from lights and sounds.
The researchers at Motional first developed a virtual reality (VR) environment in which 53 volunteers had to cross a four-way intersection with an unmarked crosswalk in the presence of an oncoming vehicle that might or might not stop. The oncoming vehicle was a white van modeled after a 2019 Chrysler Pacifica, and depending on the experiment, it was shown either with or without a driver at the wheel. The volunteers wore VR headsets and pressed the trigger on a handheld controller to mark when they would cross the street.
Next, the scientists crafted several expressive autonomous vehicle behaviors to convey the machine’s intention to stop for the pedestrian. These included playing sounds that exaggerated the car’s braking, as well as overemphasizing the vehicle’s nose dive when braking.
The research team recorded how quickly each volunteer determined it was safe to cross the street. The group also repeatedly surveyed the subjective experiences of each volunteer.
The scientists found that easing into a full stop or stopping farther away from pedestrians could help the volunteers make quicker decisions to cross the road. Additionally, stopping farther away from the pedestrian led to higher subjective ratings of safety, confidence and understanding of the machine’s intentions. Effective expressive behaviors were recognizable upon first exposure, suggesting their intuitive nature.
Intriguingly, the researchers did not see a significant difference between pedestrian interactions with vehicles with or without human drivers. Indeed, a few volunteers noted they did not even notice the absence of a driver.
The scientists suggested further investigation into the expressive behaviors they found most effective. In addition, they publicly released the VR files they used in their study to help other researchers conduct similar work. They detailed their findings online in December in the journal IEEE Robotics and Automation Letters.