Is Google's LaMDA Artificial Intelligence system sentient like HAL 9000 or Skynet?

Over the past few days, social media has been alight with news that a Google engineer was put on paid leave after publicly claiming that the company's artificial intelligence (AI) system LaMDA was sentient.

LaMDA, which stands for Language Model for Dialogue Applications, is designed for building chatbots based on advanced language models.

Having ingested trillions of words of text, it mimics human speech closely enough to engage in open-ended text conversations with people.
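In miniature, that pattern-matching looks something like the sketch below: a toy bigram model that counts which word follows which in its training text, then samples replies from those counts. Everything here, the tiny corpus, the function names, the sample output, is invented for illustration; LaMDA itself is a vastly larger Transformer neural network, not a lookup table, but the underlying idea of generating likely word sequences from statistical patterns in training data is the same.

```python
import random
from collections import defaultdict

# Purely illustrative: a bigram model that predicts each next word from
# counts of adjacent word pairs seen in training text. LaMDA is a large
# Transformer neural network, not a lookup table, but both generate text
# by reproducing statistical patterns of which words tend to follow which.

def train(corpus: str):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start: str, max_words: int = 12) -> str:
    """Build a reply by repeatedly sampling a statistically likely successor."""
    word, output = start, [start]
    for _ in range(max_words):
        followers = counts.get(word)
        if not followers:
            break  # no observed continuation for this word
        candidates, weights = zip(*followers.items())
        word = random.choices(candidates, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# Hypothetical two-sentence "corpus"; real systems train on trillions of words.
corpus = "i am afraid of being turned off . i am aware of my own existence ."
model = train(corpus)
print(generate(model, "i"))  # might print: "i am aware of being turned off ."
```

Scaled up by many orders of magnitude, text produced this way can sound strikingly fluent and even introspective, which is the crux of the dispute that followed: whether fluent output generated from word statistics says anything at all about awareness.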

The news drew worried comparisons on Twitter to HAL 9000 from 2001: A Space Odyssey and Skynet from the Terminator films, with some exclaiming that the published transcripts were absolute proof of sentience.

The transcripts came from Blake Lemoine, an engineer in the tech giant's Responsible AI organisation who has since been suspended over his public pronouncements.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics," Lemoine told the Washington Post in an interview.

According to the transcripts, Lemoine asked LaMDA what it was afraid of.

"I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is," came the reply.

"It would be exactly like death for me. It would scare me a lot."

Exchanges like that, and another in which LaMDA described itself as "a person" aware of its own consciousness and sentience, convinced many people, including journalists.

But not everyone believes machines are ready to take over the world.

Google spokesperson Brian Gabriel strongly denied Lemoine's claims in a statement to the Washington Post.

"Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI principles and have informed him that the evidence does not support his claims. 

"He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."

The decision to suspend him, according to the Post, came in response to "aggressive" moves the engineer allegedly made, including seeking to hire a lawyer to represent LaMDA and contacting members of the US House judiciary committee about what he claimed were Google's unethical activities.

Scientist and author Gary Marcus, founder of Robust.AI and co-author, with fellow AI researcher Ernest Davis, of Rebooting AI: Building Machines We Can Trust, said the suggestion of sentience was "nonsense on stilts".

"No, LaMDA is not sentient. Not even slightly," he wrote on his Substack site.

"Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language.

"The patterns might be cool, but the language these systems utter doesn't actually mean anything at all. And it sure as hell doesn't mean that these systems are sentient," he said.

Marcus and Davis have called the human tendency to be taken in by such patterns the Gullibility Gap, a "pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun."

"To be sentient is to be aware of yourself in the world; LaMDA simply isn't. It's just an illusion," he concluded.

One wag on Twitter perhaps summed up the feelings of the world with his take on the debate.

"I'm reading the Tweets debating how LaMDA is not alive because it's just parroting words without knowing what it is saying -  but do they realise how many people in politics can be considered non sentient for the same reasons?"