'They're going to get smarter': How to spot if you're talking to a bot

If you're getting information about the coronavirus pandemic, politics and other contentious topics from social media, be careful - there's a good chance you're reading tweets written by bots that don't have the best intentions.

Twitter and other social media sites have been cracking down on fake accounts and bots lately. In June Twitter targeted nearly 180,000 accounts linked to authoritarian regimes, and in July it removed or limited the reach of about 150,000 accounts spreading lies linked to the QAnon conspiracy movement.

On Wednesday, the social media giant announced it had removed several accounts falsely posing as African-American supporters of US President Donald Trump. 

Twitter says it's the primary news source for one in three Kiwi users.

"In New Zealand and around the world, Twitter has consistently worked to protect the public conversation around elections - the cornerstone of our democracies," spokesperson Kara Hinesley said on Tuesday. 

But bots are likely to be out there, says NortonLifeLock researcher Daniel Kats - some helpful, others not. 

"Usually when people say bots, they mean one of two things - on one hand, you have what are called sock puppets," he told The AM Show on Wednesday.

"This is when a single entity or person sets up a group of accounts that are meant to look like distinct people, but they're really controlled by one person or entity. You may be most familiar with this type of phenomenon if you're on Instagram - you know about fake followers.

"On the other hand, [there are] fully-automated accounts that can interact with you without human interaction."

He says a 2018 study found half of all democracies were being targeted by bots on social media. The US in 2016 is perhaps the most notable example, with Russia taking advantage of lax rules on Facebook in particular to sow division amongst American voters and tip the election in Trump's favour. The Brexit vote in the UK was targeted in a similar way.

But it's happening here too. In February about 40 accounts simultaneously posted identical messages backing Winston Peters. Asked if it had anything to do with it, his party NZ First didn't respond.
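Coordinated campaigns like that one are detectable precisely because the posts are identical and near-simultaneous. As an illustration only - this is not how Twitter actually polices its platform - here is a minimal Python sketch that flags any message posted by many distinct accounts within a short time window (the function name, thresholds and input format are all assumptions made for this example):

```python
from collections import defaultdict

def flag_coordinated_posts(posts, window_seconds=60, min_accounts=10):
    """Flag messages posted verbatim by many distinct accounts
    within a short time window.

    posts: iterable of (account, timestamp, text) tuples,
    with timestamps in seconds.
    Returns a dict mapping each flagged message to the sorted
    list of accounts that posted it.
    """
    # Group posts by their exact text.
    by_text = defaultdict(list)
    for account, timestamp, text in posts:
        by_text[text].append((timestamp, account))

    flagged = {}
    for text, entries in by_text.items():
        entries.sort()  # order by timestamp
        timestamps = [t for t, _ in entries]
        accounts = {a for _, a in entries}
        # Many distinct accounts, all posting within the window.
        if (len(accounts) >= min_accounts
                and timestamps[-1] - timestamps[0] <= window_seconds):
            flagged[text] = sorted(accounts)
    return flagged
```

In practice the thresholds would need tuning - genuinely viral content is also reposted widely, so real detection systems look at far more signals than identical text alone.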

Daniel Kats. Photo credit: The AM Show

Bots aren't always so easy to spot though. AI researchers have made huge strides since 2016, particularly in language. In February 2019, AI company OpenAI announced it had created a system - GPT-2 - so good at writing like a human they initially refused to make it publicly available, fearing it would be used to fill the internet with fake news. It was eventually released in November 2019. 

A new version, GPT-3, is out in beta form - with a model more than 100 times the size of GPT-2. In September UK newspaper the Guardian posted an op-ed on the dangers of AI written entirely by GPT-3, saying it was so well-written it "took less time to edit than many human op-eds".

Both GPT-2 and GPT-3 have been used for humour on social media - there's an entire subreddit populated by bots running GPT-2 whose posts straddle the uncanny valley, and Twitter bots have been created which generate everything from fake New Zealand news to Star Trek plot outlines.

But not everyone uses the technology to amuse. 

"I think unfortunately as we get more advances in machine learning and AI technology, you'll start seeing fully-automated bots get much more sophisticated and it will become a lot harder to tell them apart from humans in how they interact," said Kats. "They're going to get smarter."

So how can we spot them? NortonLifeLock has created a browser plugin called BotSight which it claims can tell you, with 96 percent accuracy, whether an account is human or robot. 

Newshub tested it out - it correctly identified two bots created by the author - @NzNewsByABot and @GamebookBot400 - but failed to spot a third, @kiwinewsbot, which it said was 59 percent likely a real person. 

Judith Collins' Twitter account is 98 percent likely to be run by a human, BotSight said. Other politicians Newshub tested had similar results. 

If the technology lets you down, Kats says there are other clues to look out for.

"The actors that are creating these bots are usually creating them in really big numbers - think thousands, tens of thousands, hundreds of thousands. Therein lies their strength. They're not taking the individual effort to really try and personalise each account - you can see little tells. 

"For example, you have digits in the account handle, or you can have a stock profile photo or no photo at all. Of course when you're dealing with bots proper - fully automated accounts - they're really not very smart, at least not yet. You can tell by how limited their interaction is that it's a fully automated account." 

With recent studies finding people who rely on social media for information about COVID-19 among those most susceptible to misinformation, being able to tell when someone's trying to trick you is a vital skill.