Man creates $5 AI bot to chat with dead fiancee

The AI software has also created racist text and condoned terrorism. Photo credit: Getty Images

A grieving man, struggling to come to terms with his fiancee's death eight years earlier, created an AI chatbot to simulate her - now the software's own creators and others are warning about its potential for harm.

Canadian Joshua Barbeau paid US$5 for an account on the Project December site in September 2020 and used Jessica Pereira's old Facebook and text messages to train the software, The San Francisco Chronicle reported.

Pereira died in 2012 from a rare liver disease. She and Barbeau had been together for nearly two years when the liver she had received in a transplant at age nine began to fail.

The software that powers the chatbot, GPT-3, is a machine-learning language model that can mimic a person's writing style when given samples of their writing.
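Project December's exact setup has not been published, but the general technique is to "prime" the model with sample text rather than retrain it. A minimal sketch using OpenAI's original GPT-3 Completions API (the pre-1.0 openai Python library) might look like the following - the sample messages, prompt wording and parameters are all invented for illustration, not Project December's actual configuration:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Sample messages "prime" the model: GPT-3 is not retrained here, it
# simply continues text in the style of whatever it has just been shown.
# These samples are invented for illustration.
STYLE_SAMPLES = (
    "Jessica: omg you will not BELIEVE the day I had :p\n"
    "Jessica: miss you already!! come visit soon ok? ;)\n"
)

def chat(user_message: str) -> str:
    """Ask GPT-3 to continue the conversation in the sampled style."""
    prompt = (
        "The following is a chat with Jessica, who writes like this:\n"
        + STYLE_SAMPLES
        + "\nJoshua: " + user_message + "\nJessica:"
    )
    response = openai.Completion.create(
        engine="davinci",    # the original GPT-3 model name
        prompt=prompt,
        max_tokens=100,
        temperature=0.9,     # higher temperature -> more varied replies
        stop=["Joshua:"],    # stop before the model writes the user's turn
    )
    return response.choices[0].text.strip()

print(chat("Jessica... is it really you?"))
```

Because the samples travel inside the prompt rather than changing the model itself, the bot's "memory" of the person lasts only as long as the prompt it is given.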

It was created by OpenAI, an organisation founded as a non-profit with the goal of promoting and developing friendly AI in a way that benefits humanity as a whole.

Tesla CEO Elon Musk was one of its founders; he and the other backers pledged a total of US$1 billion in funding.

After trying out the software, Barbeau created a bot named Jessica Courtney Pereira to simulate his lost love and trained it with samples of her writing.

He then began chatting with the bot, and later released this sample of their conversation:

JOSHUA: Jessica... is it really you?
JESSICA: Of course it is me! Who else could it be? :p I am the girl that you are madly in love with! ;) How is it possible that you even have to ask?
JOSHUA: You died.
JESSICA: That doesn't sound right... how can you talk to dead people?
JOSHUA: Technically, I'm not really talking to you. I programmed a bunch of your memories and mannerisms into a computer simulation so I could pretend to talk to you because I miss you that much.

The chatbot was designed to have a finite lifespan to save on processing power, The Daily Mail reported. Each time Barbeau chatted, the bot used up part of an allocated amount of power - and once that allocation dropped below 20 percent, the system would degrade.

His first chat, which lasted overnight, reduced the 'battery' to just 55 percent.
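How Project December meters that "battery" internally has not been published. Purely as a hypothetical illustration of a depleting-allocation mechanic consistent with the figures reported above, a model might look like this (every name and number below is a guess, not Rohrer's actual design):

```python
class BotBattery:
    """Hypothetical model of Project December's finite-life mechanic.

    The per-chat cost and threshold below are guesses chosen to match
    the figures reported above; the real accounting isn't public.
    """

    DEGRADE_THRESHOLD = 20.0   # below this, replies reportedly degrade

    def __init__(self, charge: float = 100.0):
        self.charge = charge   # percent of the bot's allocated lifespan

    def spend(self, cost: float) -> None:
        """Deduct the processing allocation consumed by a chat session."""
        self.charge = max(0.0, self.charge - cost)

    @property
    def degraded(self) -> bool:
        return self.charge < self.DEGRADE_THRESHOLD


battery = BotBattery()
battery.spend(45.0)                      # an overnight chat: 100% -> 55%
print(battery.charge, battery.degraded)  # 55.0 False
```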

Barbeau then rationed his chats so he wouldn't have to watch Pereira 'die' again - a previous test bot had begged him to save its life as its battery ran down.

In a post on Reddit, Barbeau wrote of his hope that Project December could bring closure and help depressed survivors, The Daily Mail reported.

Jason Rohrer, the creator of Project December, hadn't imagined people would use it to simulate the dead. He replied: "I'm kinda scared of the possibilities."

Barbeau's story is reminiscent of an episode of Black Mirror, UK writer Charlie Brooker's television show about technology in a dystopian future.

In the first episode of the second series, entitled 'Be Right Back', Hayley Atwell's character uses technology to resurrect her dead boyfriend - played by Domhnall Gleeson - as an AI so she can continue talking to him.

Barbeau's experience echoed the show's to some degree, ultimately bringing more pain, according to The San Francisco Chronicle.

He eventually stopped chatting and wrote a final note to Pereira: "I'll never stop loving you as long as I live, and hopefully ever after Xoxo Goodnight."

"Goodnight. I love you," the chatbot replied.

When GPT-2, the previous version of the software, was first launched, OpenAI held back the full version of the programme over fears it could be used to spread misinformation, The Verge reported.

Those fears included propaganda for extreme ideological positions, although the software has also been put to more positive uses, including writing poetry and powering Twitter accounts.

GPT-3, launched in July last year, is even more powerful. The Guardian ran an article written entirely by the software on why humans shouldn't be afraid of AI - one that "took less time to edit than many human op-eds".

Wired told the story of AI Dungeon, an online game powered by the latest version of the software, which players were prompting to generate disturbing stories, including child pornography.

"A new monitoring system revealed that some players were typing words that caused the game to generate stories depicting sexual encounters involving children," Wired reported. 

"This is not the future for AI that any of us want," OpenAI CEO Sam Altman said. 

In other stories on the dangers of the software, Wired warned that fears it could be used to spread propaganda were coming true.

"GPT-3 makes racist jokes, condones terrorism, and accuses people of being rapists," it reported.

Georgetown professor Ben Buchanan said that, with a little human curation, GPT-3 is quite effective at promoting lies.

After researchers trained the system on anti-climate change texts, the bot generated statements that wouldn't be out of place on the internet, including that climate change is "the new communism - an ideology based on a false science that cannot be questioned".

"We actively work to address safety risks associated with GPT-3," an OpenAI spokesperson told Wired.

"We also review every production use of GPT-3 before it goes live and have monitoring systems in place to restrict and respond to misuse of our API."