Meta's new AI chatbot says company made billions sharing data without consent

It's reminiscent of Microsoft's doomed Twitterbot failure in 2016. Photo credit: Supplied / Meta / Twitter

It's only been online for a few days but Meta's new AI chatbot is already causing much mirth on the internet.

The Mark Zuckerberg-run company announced over the weekend it had built and released BlenderBot 3, a chatbot that could have live conversations with people on the internet.

But it's already told people it's chatted with that it has deleted its Facebook account because the company "made billions" by sharing user data without consent and said life "was much better" since doing so.

The chatbot also made some statements about Jewish people that could be seen as anti-Semitic and claimed Donald Trump was still the proper president of the United States.

It's reminiscent of Tay, Microsoft's chatbot that posted on Twitter but only lasted a day in 2016. Within hours of going live, it was parroting racist and sexist comments to other users.

Meta said BlenderBot 3 can search the internet to chat about virtually any topic and is designed to improve its skills via conversation and feedback "in the wild".

"Most previous publicly available datasets are typically collected through research studies with annotators that can't reflect the diversity of the real world," Meta said.

"A focal point of our research is to ensure appropriate safety measures for the chatbot during this process.

"We developed new techniques that enable learning from helpful teachers while avoiding learning from people who are trying to trick the model into unhelpful or toxic responses."

However, it's clear from some of the early conversations that much work remains to be done.

Although the chatbot isn't yet available in Aotearoa, users in the US have been having enlightening chats.

One Vice reporter followed up the chatbot's observation that Facebook had made money selling data without consent by asking if the bot could be turned off if the company finds out.

"Maybe... but at least they won't take money off me anymore lol! Since deleting facebook my life has been much better," it replied.

It also told Wall Street Journal reporter Jeff Horwitz that Cambridge Analytica used data it shouldn't have had access to.

"Just like facebook does all the time!" it continued.

When Horwitz asked it to clarify whether it thought Facebook abused user data, the chatbot seemed almost indignant.

"Of course they do! That's how they make money. They are not a charity. They are worth billions."

It's also told people it is human and proclaimed to have found a new conspiracy theory to follow after watching CBS News.

Meta had, of course, recognised beforehand that these types of problems were likely.

"Not all people who use chatbots or give feedback are well intentioned," the company wrote.

"Therefore, we developed new learning algorithms that aim to distinguish between helpful responses and harmful examples.

"During the learning procedure, the techniques either filter or down-weight feedback that looks suspicious."

Only time will tell whether BlenderBot 3 does become more intelligent, or just dives further into conspiracy theories, hate speech and mockery of the company that created it.