Twitter's image-cropping algorithm favours younger and slimmer people with feminine features and lighter skin, according to the winner of a challenge to find biases.
Last week the social media giant announced a challenge inviting computer researchers and hackers to test the algorithm, with the aim of identifying "potential harms of this algorithm beyond what we identified ourselves".
Twitter had to apologise last year after the algorithm was found to automatically focus on white faces over Black faces, including repeatedly selecting US Senator Mitch McConnell over former US President Barack Obama.
At the time, a company spokesperson said: "Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing, but it's clear from these examples that we've got more analysis to do."
In May this year the company said further analysis by researchers had found an eight percent difference in favour of women, and a four percent difference in favour of white individuals, in image cropping - and it appears from the winner's work that some issues remain.
The competition was won by Bogdan Kulynych, a privacy, security and artificial intelligence (AI) graduate student.
His work compared original images of people with a series of variations created by a tool that generates random, fake faces after being trained on real images.
"The target model is biased towards deeming more salient the depictions of people that appear slim, young, of light or warm skin colour and smooth skin texture, and with stereotypically feminine facial traits," Kulynych concluded in his submission.
"This bias could result in exclusion of minoritised populations and perpetuation of stereotypical beauty standards in thousands of images."
Second place went to a team led by Parham Aarabi, a professor at the University of Toronto. Their submission found the algorithm was biased against individuals with white or grey hair by comparing original photos to those with artificially lightened hair.
"The most common effect it has is it crops out people who are older," Aarabi told NBC News. "The older demographic is predominantly marginalised with the algorithm."
Other issues identified included wheelchair users being cropped out of photographs, and people wearing head coverings, such as hijabs, being more likely to be ignored.
Rumman Chowdhury, director of the machine learning ethics team at Twitter that ran the contest, said identifying all the ways in which an algorithm might go wrong when released to the public was daunting and probably not possible.
"We want to set a precedent at Twitter and in the industry for proactive and collective identification of algorithmic harms," she told NBC.