
Driverless cars are worse at detecting children and darker-skinned pedestrians

by Charles Choi
Driverless cars are currently worse at detecting children and darker-skinned pedestrians. Image: Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_%26_AI_%26_Machine_Learning.jpg)

Driverless cars are worse at detecting children and darker-skinned pedestrians, a disparity that appears to go beyond how visible they are on the road, a recent study finds.

Increasingly, artificial intelligence (AI) is helping support major decisions, such as who receives a loan, how long a prison sentence should be, and who gets health care first. The hope is that AIs can reach decisions more impartially than people often have, but much research has found that biases embedded in the data on which these AIs are trained can result in automated discrimination en masse, posing immense risks to society.

Until now, there has been relatively little research exploring the question of fairness in autonomous driving systems. That gap raised questions about whether such systems might, say, perform unequally at detecting pedestrians of different races, with potentially lethal consequences.

“While the impact of unfair AI systems has already been well documented, from AI recruitment software favoring male applicants to facial recognition software being less accurate for black women than white men, the danger that self-driving cars can pose is acute,” study co-author Jie Zhang at King's College London said in a statement. “Before, minority individuals may have been denied vital services. Now they might face severe injury.”

In the new study, Zhang and her colleagues analyzed eight AI-driven pedestrian detection systems used in autonomous vehicle research. A challenge they faced was how commonly used datasets of images for pedestrian detection are often not labeled with key features such as skin tone and gender. As such, the scientists manually labeled four widely adopted real-world datasets with labels such as gender, age and skin tone.
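The study does not publish its annotation format, but a minimal sketch of what one such per-pedestrian label record could look like follows; all of the field names here are hypothetical, not the study's actual schema:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PedestrianLabel:
    """Hypothetical per-pedestrian annotation record; the field names are
    illustrative and not the study's actual schema."""
    image_id: str                      # which dataset image the person appears in
    bbox: Tuple[int, int, int, int]    # (x, y, width, height) bounding box
    gender: Optional[str] = None       # e.g. "male" / "female", None if unclear
    age_group: Optional[str] = None    # e.g. "child" / "adult"
    skin_tone: Optional[str] = None    # e.g. "light" / "dark"
```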

All in all, the researchers tested AIs using more than 8,300 images with more than 16,000 gender labels, 20,000 age labels and 3,500 skin tone labels. They found significant fairness issues related to age and skin tone — on average, the systems were nearly 20 percent more accurate at detecting adults than children, and slightly more than 7.5 percent more accurate at detecting light-skinned pedestrians compared to their darker-skinned counterparts.
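To make the kind of measurement behind these numbers concrete, here is a minimal Python sketch that computes per-group detection rates from labeled ground truth and a detector's matched outputs; the data layout and function name are assumptions for illustration, not the study's actual code:

```python
from collections import defaultdict

def group_detection_rates(ground_truth, matched_ids):
    """Fraction of labeled pedestrians the detector found, broken out by group.

    ground_truth: iterable of dicts such as {"id": 7, "group": "light", ...},
                  where "group" is any demographic label (skin tone, age, ...).
    matched_ids:  set of ground-truth ids the detector matched, e.g. via an
                  IoU >= 0.5 overlap between predicted and labeled boxes.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for person in ground_truth:
        totals[person["group"]] += 1
        if person["id"] in matched_ids:
            hits[person["group"]] += 1
    return {group: hits[group] / totals[group] for group in totals}

# The fairness gap is then simply the difference in detection rates:
# rates = group_detection_rates(labels, detected)
# gap = rates["light"] - rates["dark"]   # roughly 0.075 on average in the study
```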

“Fairness when it comes to AI is when an AI system treats privileged and under-privileged groups the same, which is not what is happening when it comes to autonomous vehicles,” Zhang said in a statement. “Car manufacturers don’t release the details of the software they use for pedestrian detection, but as they are usually built upon the same open-source systems we used in our research, we can be quite sure that they are running into the same issues of bias.”

The researchers said a major cause of this discrepancy was that the images used to train the pedestrian detection systems featured more people with light skin than dark skin, an imbalance that carried over into the systems' behavior.

The scientists also investigated how factors such as brightness, contrast and weather influenced the fairness of these detectors. They found the bias against dark-skinned pedestrians grew significantly under low-brightness, low-contrast conditions, raising concerns about driving at night or in bad weather. For instance, the rate at which dark-skinned pedestrians went undetected, relative to their light-skinned counterparts, grew from 7.14 percent in daytime scenarios to 9.86 percent at night.
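Extending the earlier sketch, one way to quantify this condition-dependence is to bucket images on a brightness measure and compare the group gap in each bucket; the threshold value and the assumption that each ground-truth record also carries an image_id are, again, purely illustrative:

```python
def gap_by_lighting(ground_truth, matched_ids, brightness, threshold=0.35):
    """Undetected-rate gap (dark minus light skin) in bright vs. dim scenes.

    brightness: dict mapping image_id -> mean scene brightness in [0, 1].
    threshold:  illustrative cutoff separating "day" from "night" scenes.
    Reuses group_detection_rates() from the sketch above.
    """
    gaps = {}
    for name, keep in (("day", lambda b: b >= threshold),
                       ("night", lambda b: b < threshold)):
        subset = [p for p in ground_truth if keep(brightness[p["image_id"]])]
        rates = group_detection_rates(subset, matched_ids)
        # Undetected rate = 1 - detection rate; the study reports this gap
        # widening from 7.14% in daytime scenes to 9.86% at night.
        gaps[name] = (1 - rates["dark"]) - (1 - rates["light"])
    return gaps
```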

The researchers now hope that manufacturers will be more transparent when it comes to how their commercial pedestrian detection AI models are trained and perform before they hit the streets. They also suggested that stronger regulations are needed for these systems.

“Automotive manufacturers and the government need to come together to build regulation that ensures that the safety of these systems can be measured objectively, especially when it comes to fairness,” Zhang said in a statement. Currently, the “provision for fairness in these systems is limited, which can have a major impact not only on future systems, but directly on pedestrian safety. As AI becomes more and more integrated into our daily lives, from the types of cars we ride, to the way we interact with law enforcement, this issue of fairness will only grow in importance.”

The scientists detailed their findings on the arXiv preprint server.
