Facebook could clean up its act, but probably won't - professor


There's doubt Facebook and other social media giants will be able to spot and remove hateful content as efficiently and comprehensively as governments would like.

Facebook is undergoing an overhaul this week, rolling out a revamped app that puts more of the site's focus on groups and privacy.

"I get that a lot of people aren't sure that we are serious about this," chief executive told developers at a conference this week, triggering laughter.

Facebook's past record on privacy and hate speech is spotty at best. University of Canterbury law professor Ursula Cheer says that, while some of its latest promises are "puffery", the proof will be in the pudding.

"I think these tech companies can pretty much do anything they want," she told The AM Show on Thursday, and that the only real barrier is cost.

"It just depends how they set out to do that. They might want to do this with a bit of AI and combine it with some real human beings doing some filtering. It's going to cost them to do it, which is largely why they haven't done it in the past.

"It'll come down to what exactly they go looking for, how they do it, how soon they're able to remove material if they find it and how efficient those processes are."

Zuckerberg said the company is "very focused on making sure that our recommendations and discovery surfaces aren't highlighting groups where people are repeatedly sharing misinformation or harmful content".

"We are working hard to completely remove groups if they exist primarily to violate or policies or do things that are dangerous."

So far it's been left largely to Facebook itself to decide what counts as harmful content, but governments are starting to weigh in - most significantly Australia, which can now fine Facebook millions if it doesn't remove harmful content "expeditiously" and inform the authorities. The move "puts Australia at the forefront of a global movement to hold companies like Facebook and YouTube accountable", as the New York Times put it.

Cheer says the problem won't be easy to fix, with different ideas on what constitutes harmful speech in different parts of the world.

"Ultimately each state must have the ultimate say as to how its citizens are dealt with and how harmful material is dealt with in each country."

Ursula Cheer. Photo credit: The AM Show

Facebook might also find its algorithms aren't up to the task.

"If they want to remove groups that have harmful content, what about the groups that want to get together and talk about harmful content? Whatever tool you're using has to be able to differentiate between legitimate speech and hateful speech, however they define it."

But even humans might have trouble figuring that out.

"All governments face this problem if they're trying to define hate speech laws… at the same time you have to balance and allow for legitimate discussion."

Prime Minister Jacinda Ardern will host a forum with French President Emmanuel Macron later this month aimed at coming up with a global approach to social media's hate problem.

"Whether that'll work or not, I don't know. It's a good idea, but it's the first time we've seen some sort of global move to try and get consistency across the approach to it," says Cheer.

"Censorship is never entirely 100 percent successful and it will often take in too much material or even the other way, too little material, it seems remarkable to think that they might come up with some sort of definition or process that will be valid in every single state that it's operating in, and everybody's got different ideas about what might be hateful and what might not be."
