Green MP Golriz Ghahraman called for Facebook's self-governing days to end during the social media giant's first appearance before a parliamentary committee in New Zealand.
"I have some grave concerns," Ghahraman told Mia Garlick, Facebook's Regional Director of Public Policy for Australia, New Zealand and the Pacific, during her appearance before the Justice Select Committee on Thursday.
Ghahraman said her "dual concerns" centred on data privacy and the spread of disinformation on Facebook. She pointed to the 2019 incident in which the personal data of over 500 million Facebook users was exposed, and to the Cambridge Analytica data-mining scandal.
Ghahraman also touched on Twitter suspending then-US President Donald Trump's account earlier this year after the storming of the Capitol, and asked the Facebook representatives what standards they're applying when making these decisions.
"When are we going to see Facebook present us with a transparent, accountable system of where you will intervene, where your standards actually lie and why we should leave you to be self-governing as we have largely done until now?" Ghahraman said.
"I don't think you should be left self-governing anymore."
Garlick said Facebook's Community Standards are publicly available and regularly updated. Facebook also releases its Community Standards Enforcement Report every quarter, which details the enforcement action the company has taken.
Facebook also made it mandatory in April 2020 for New Zealand political parties to join its ad transparency tool, which shows how much an ad costs, which demographics it targets and where it is running.
"We absolutely hear you on the broader concern of platforms making so many important decisions by themselves and that's why we've been actively calling for regulatory frameworks, in particular in relation to election integrity and political advertising, because we do recognise the broader importance for our society," said Garlick.
Ghahraman said while she appreciated that Facebook is "more transparent than you used to be", her concern is that Facebook's enforcement action isn't known until after it has happened.
"I guess my concern is, that we're yet to know - and we should know - what standard you're actually applying before you apply it, so we know how to create regulation [to know] whether you're falling short of infringing on freedom of speech, which is really important, or whether you're falling short when it comes to things that actually endanger us," she said.
"I think the time of us treating you as the postman rather than the publisher has passed - you are the publisher and we hold publishers to account in a very different way in online spaces."
Garlick said Facebook recognises the need to be more transparent.
"Obviously after Cambridge Analytica we made a number of changes... in terms of the data that developers could use and we also audited all of the apps and made a number of changes in that respect," she said.
"As we get better at detecting and removing fake accounts or other malicious activity on our services, people will try to game the system and seek new ways around that, and so it's a constantly evolving process for us to identify new ways in which people are attempting to misuse our services so we can get ahead of that both in terms of whether we need to update policies or invest more in technology."
Garlick revealed 2.6 billion fake accounts had been removed from the site, which she described as "quite normal and everyday for us".
Ghahraman said she remained "dissatisfied".
"I know that in the European system, particularly in Germany, they have a far more effective way of ensuring safety when users access Facebook, when it comes to extremism. That's because they have a 50 million euro fine for breaches that will apply to Facebook if their hate speech laws are breached," she said.
"We haven't heard anything from you that is using all of the tools at your disposal to protect us in the way that you protect Europeans because that fine hangs over your head. Will you do that for us in New Zealand?"
Garlick said the only difference was that Germany has a hate speech code of conduct requiring platforms to send suspected criminal content directly to the federal police at the point a user reports it.
"The main difference with Germany is we're required to determine illegality of content, which I think is not a place we're comfortable with being. But there's certainly no difference in terms of how we develop our policies and enforce them," Garlick said.
"The way it works is we have global standards, and then if there's a law in a particular country that sets a different standard - the most obvious example would be the much greater restrictions in Europe in relation to some symbols from the Second World War, which might be allowed in America - we'll restrict access to content that complies with our standards but doesn't comply with local law.
"Similarly in New Zealand, if there's content that meets our standards but doesn't meet New Zealand law, we'll stop it from being accessible in New Zealand out of respect for local law."
Ghahraman remained unconvinced.
"We've got a multinational, multibillion-dollar company making decisions about public safety, freedom of speech and our democracy but we don't have any clear guidelines about how they're applying those standards.
"I'd like to see legislation that sets out how we treat these online spaces because we live in these online spaces now."
She described Facebook's submission as "quite waffly".