The timing of a sound matters more than the type of sound when it comes to how autonomous buses should use sound to communicate with people in traffic, a recent study finds.
Scientists focused on a small fleet of autonomous electric buses that carries riders along a predetermined route in the town of Linköping, Sweden. The self-driving vehicles have one main problem: pedestrians and cyclists regularly get too close to them, causing the buses to brake suddenly and making riders late for work.
The researchers designed a number of sounds the autonomous buses could use to communicate with people. They played the sounds through a waterproof Bluetooth speaker mounted on the outside of each bus, analyzed video recordings of the resulting interactions, and used that information to select new sounds to test. The sounds were triggered to warn pedestrians and cyclists either by the researchers or by the safety drivers, who sat behind the wheel of each bus in case a human needed to take over.
Initially, the researchers tried humming sounds that became louder as people got closer. However, low-pitched humming blended in with road noise, while a high-pitched version irritated the safety drivers. The repeated sound of a person saying “ahem” also proved ineffective.
The scientists found the song “The Wheels on the Bus” and a similar jingle successfully made cyclists aware of the buses before their brakes engaged. The song also elicited smiles and waves from pedestrians. In addition, standard vehicle noises such as beeps and dings worked to grab people’s attention, and repeating or speeding up these sounds let pedestrians know they needed to move farther away.
When the researchers analyzed the videos, they saw that regardless of which sound they played, its timing and duration were most important for signaling the bus's intentions, much as the honk of a car horn can be either a warning or a greeting depending on when and how long it sounds.
“If we want to create sounds for social engagement, it’s really about shifting the focus from ‘what’ sound to ‘when’ sound,” study senior author Malte Jung, an associate professor of information science at Cornell University in Ithaca, New York, said in a statement.
This focus on timing and duration is a new way of designing sound that is applicable to any autonomous system or robot, the researchers said.
“We’ve approached sound design all wrong in human-robot interaction for the past decades,” Jung said in a statement. “We wanted to really rethink this and bring in a new perspective.”
Study lead author Hannah Pelikan, a doctoral student at Linköping University, detailed their findings on March 15 at the 2023 ACM/IEEE International Conference on Human-Robot Interaction in Stockholm.