Twitter launches competition to find biases in its image algorithm

The company was criticised last year after image previews excluded Black people's faces. Photo credit: Getty Images

Twitter has launched a competition for computer researchers and hackers to identify biases in its image-cropping algorithm, after a group of researchers previously found the algorithm tended to exclude Black people and men.

The competition is part of a wider effort across the tech industry to ensure artificial intelligence (AI) technologies act ethically. 

The social networking company said in a blog post that the bounty competition was aimed at identifying "potential harms of this algorithm beyond what we identified ourselves."

Following criticism last year about image previews in posts excluding Black people's faces, the company said three of its machine-learning researchers found an eight percent difference from demographic parity in favour of women, and a four percent difference in favour of white individuals.

Twitter publicly released the computer code that decides how images are cropped in the Twitter feed, and asked participants to find how the algorithm could cause harm, such as by stereotyping or denigrating any group of people.

Competitors will have to submit a description of what they've found as well as a dataset that will be run through Twitter's algorithm to demonstrate the fault.

Points will be awarded based on the type of harm and its potential impact to determine the winners.

The winners will receive cash prizes ranging from US$500 to US$3500, with US$1000 prizes for the most innovative work and the work that applies to the most types of algorithms.

They will be invited to present their work at a workshop hosted by Twitter at DEF CON in August, one of the largest hacker conferences, held annually in Las Vegas.

Much like other technologies, AI can be used for both good and ill. A University of Otago report in May suggested that Kiwis could one day work less but be paid the same amount because of it.

But in June, it emerged that global shopping giant Amazon was firing drivers via automated emails based on its algorithm's judgement calls.

And last week the potential for harm was again highlighted when a grieving Canadian man, struggling to come to terms with the death of his ex-fiancee eight years earlier, created an AI chatbot of her. The experience led to "more pain" and subsequent warnings over the technology's use.