Experts concerned over the rise of deepfake technology

Experts are calling for more protection for New Zealanders from the harms of deepfakes and other synthetic media.

Deepfakes, artificially generated mimicries of real people, have been prominent online since 2018. However, recent advances in the underlying technology mean they are easier to make and harder to detect.

“Anybody who wants to do a deepfake can probably do one from home,” says Andrew Chen, a research fellow at the University of Auckland.

While deepfakes are currently identifiable as reproductions, as the technology improves, “there’s just not going to be a way for us to be able to distinguish that this is a real image versus one that's been generated by a computer,” says Chen.

In recent years deepfakes have been used in elections with relatively low impact; however, their potential for harm increases as they become more convincing.

Politics lecturer Dr Sarah Bickerton observes, “We are a high-trust society, but that can’t be taken for granted. Losing that trust has massive ramifications.”

While we are past the point of preventing the production of deepfakes, Dr Bickerton argues that we should work to stem their spread, much as New Zealand led the way with the Christchurch Call.

“The virality of it is the problem. It’s tens of them, hundreds of them, thousands of them, being similarly influenced and taken in by deepfakes. And that’s where we can do something.”

Bickerton notes that “we are already seeing gendered targeting of politicians. So we're going to see deepfakes doing exactly the same thing.”

The gendered nature of deepfake targeting is already visible in the prevalence of deepfaked pornography featuring women, from celebrities to private individuals.

Netsafe’s Sean Lyons says there has certainly been an "increase in reports of this type of content" in New Zealand, but it is "hard to ascertain the degree or extent to which images are altered or synthesised", so it is "not possible to quantify."

Arran Hunt, a lawyer who specialises in technology, says the Harmful Digital Communications Act is ambiguous and it is currently unclear whether creating sexualised deepfakes is an offence.

However, deepfaked videos are only one part of the emerging technological field that is synthetic media. 

New AI tools such as DALL·E 2 and GPT-3, used in combination, will soon be able to write convincing news stories, academic essays, or cabinet documents, accompanied by entirely fake video and imagery, all in a way that is practically impossible to detect.

While the potential consequences of this technology are alarming to some, others believe current legislation is adequate, and that tighter restrictions could block pathways for creative expression while alarmism risks unnecessary censorship.

"That issue is already here and we’ve dealt with it," says Tom Barraclough, author of an extensive 2019 report for the law foundation on deepfakes. He argues regulation risks limiting freedom of expression.

"If we're intervening too heavily from a legal perspective, we risk undermining people's ability to do that."

But Bickerton fears inaction would be worse than risking censorship. "We have a unique opportunity here. But if we don't take it, there are some massive ramifications down the track, particularly over the next few years in terms of the rapidity of elections that are occurring over that period of time."

Regardless of where you stand, the ramifications for how media is produced and consumed will be wide-ranging.

Chen observes, "Seeing is believing, but if now we can no longer rely on that to actually believe, then that undermines what it means to be a human in some sense. Our eyes now betray us."

Watch the full video for more.