In Brief

Self-driving cars ‘less likely to spot black pedestrians’

Researchers call for action to ensure automated systems can identify different demographics

Self-driving cars may have more difficulty detecting and avoiding darker-skinned pedestrians than those with lighter skin, a new study has found.

Researchers at the Georgia Institute of Technology, in Atlanta, divided images of pedestrians into groups based on the Fitzpatrick scale, which classifies skin tone, reports news site The Daily Dot.

The object-detection systems used by self-driving cars, which rely on sensors and cameras, were found to recognise images of paler-skinned people more reliably, proving “5% better at predicting pedestrians with lighter skin tones than darker ones”.

Vox reports that this “disparity persisted even when researchers controlled for variables like the time of day in images or the occasionally obstructed view of pedestrians”.
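To illustrate the kind of comparison the researchers describe, the sketch below tallies detection rates per Fitzpatrick group and reports the gap between lighter and darker skin tones. The data structure, field names and the lighter/darker grouping (types 1-3 versus 4-6) are illustrative assumptions, not the study's actual code or data.

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    fitzpatrick_type: int   # 1 (lightest) to 6 (darkest) on the Fitzpatrick scale
    detected: bool          # whether the object detector found the pedestrian

def detection_rate(results, types):
    """Fraction of pedestrians in the given Fitzpatrick types that were detected."""
    subset = [r for r in results if r.fitzpatrick_type in types]
    if not subset:
        return float("nan")
    return sum(r.detected for r in subset) / len(subset)

def accuracy_gap(results):
    """Detection-rate gap between lighter (1-3) and darker (4-6) skin-tone groups."""
    lighter = detection_rate(results, {1, 2, 3})
    darker = detection_rate(results, {4, 5, 6})
    return lighter - darker

# Toy example: a gap of 0.05 would correspond to the roughly 5% disparity reported.
sample = [DetectionResult(2, True), DetectionResult(2, True),
          DetectionResult(5, True), DetectionResult(5, False)]
print(f"gap: {accuracy_gap(sample):.2f}")
```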

This makes the cars “less likely to spot black people and to stop before crashing into them”, The Independent says.

The discovery has triggered calls for action to tackle the problem.

“Engineers responsible for the development of these systems need to place more emphasis on training the systems with higher accuracy for this group,” says specialist website Interesting Engineering, which adds that the findings are “another reminder of the general lack of diversity in the AI world”.

Vox notes that the study, titled Predictive Inequity in Object Detection, has not been peer-reviewed and did not use the same image-detection systems or image sets featured in current self-driving vehicles.

However, “the researchers had to do it this way because companies don’t make their data available for scrutiny - a serious issue given that this is a matter of public interest”, the site adds.

The researchers have recommended that tech firms increase the number of images of dark-skinned pedestrians in the data sets used to train self-driving car computer systems.
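As a rough illustration of that recommendation, the sketch below shows one simple way a training set could be rebalanced by oversampling under-represented images. The grouping function, the 50% target share and the duplication strategy are assumptions for illustration, not methods prescribed by the study.

```python
import random

def oversample_minority(images, group_of, target_share=0.5, seed=0):
    """Duplicate images from the under-represented group until it makes up
    roughly `target_share` of the training set. `group_of(img)` should return
    'darker' or 'lighter'; both the function and the 50% target are
    illustrative choices."""
    rng = random.Random(seed)
    darker = [img for img in images if group_of(img) == "darker"]
    if not darker:
        return list(images)
    augmented = list(images)
    while len(darker) / len(augmented) < target_share:
        extra = rng.choice(darker)   # duplicate a randomly chosen darker-skin image
        darker.append(extra)
        augmented.append(extra)
    return augmented

# Usage: rebalance a toy set of labelled image identifiers.
images = ["img_a", "img_b", "img_c", "img_d"]
labels = {"img_a": "lighter", "img_b": "lighter", "img_c": "lighter", "img_d": "darker"}
balanced = oversample_minority(images, labels.get)
print(len(balanced), "images after oversampling")
```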

Jamie Morgenstern, one of the study authors, said that the Georgia team’s findings were also applicable to other kinds of recognition technology.

“The main takeaway from our work is that vision systems that share common structures to the ones we tested should be looked at more closely,” Morgenstern concluded.
