Facebook apologises after black men branded 'primates' by its AI system

Artificial intelligence has a history of gender and racial bias at technology companies.

Facebook has apologised after users who watched a video featuring black men were asked if they would like to "keep seeing videos about primates".

The New York Times reported the social media platform said it was sorry for "an unacceptable error" and was looking at ways to ensure it never happened again.

In an email to website The Verge, Facebook said it had disabled the topic recommendation feature as soon as it realised what was happening.

"As we have said, while we have made improvements to our AI we know it's not perfect and we have more progress to make. We apologise to anyone who may have seen these offensive recommendations," a company spokesperson said.

The video, from UK tabloid the Daily Mail, showed a group of black men who were celebrating a birthday. A white man present then allegedly called 911 because he was "being harassed by a bunch of black men".

Artificial intelligence (AI) has a history of gender and racial bias at technology companies, and Facebook has run into such problems before.

Last year Chinese President Xi Jinping's name appeared as 'Mr Shithole' on the platform when translated from Burmese, an error Facebook said was specific to its own service, Reuters reported.

And last month research showed Twitter's image-cropping algorithm favours younger and slimmer people with feminine features and lighter skin.

The microblogging platform apologised in 2020 after its algorithm was found to favour white faces over black faces, repeatedly selecting US senator Mitch McConnell over former US President Barack Obama when cropping images.

Google's Photos app previously tagged photos of black people as "gorillas", drawing an apology from the tech giant and the removal of the labels 'gorilla', 'chimp', 'chimpanzee' and 'monkey' from the app, Wired reported.

The Federal Trade Commission (FTC) in the US said earlier this year that AI tools showing racial and gender biases may be in violation of consumer protection laws.

Companies were warned to hold themselves accountable, or the FTC would step in.