AI technology is 'not perfect' - Facebook explains why it didn't detect shooting video

Facebook has responded to questions about why its artificial intelligence systems didn't detect the Christchurch shooter's livestream video.

While there have been advances in artificial intelligence technology over the past few years, the company admitted its systems still have gaps.

"It's not perfect," a release stated.

"AI systems are based on 'training data', which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video."

The social media giant explained that while its AI systems are good at picking up nudity, terrorist propaganda and graphic violence, they did not pick up Friday's video.

"To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare.

"Another challenge is to automatically discern this content from visually similar, innocuous content."

Facebook explained that if its systems flagged thousands of videos from live-streamed video games, reviewers could miss the important real-world videos where a flag could be used to alert first responders and get help on the ground.
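The trade-off Facebook describes is a base-rate problem: when genuinely violating live streams are extremely rare, even a small false-positive rate buries reviewers in flagged-but-harmless videos. A rough, hypothetical calculation - the volumes and error rates below are invented, not Facebook's figures - shows how quickly the review queue fills with false alarms:

```python
# Back-of-the-envelope illustration with invented numbers (not Facebook figures):
# even an accurate detector, run over a huge volume of innocuous live streams,
# floods reviewers with false alarms that can bury the rare real emergency.
innocuous_streams = 1_000_000   # hypothetical daily live streams (e.g. video games)
real_incidents = 1              # hypothetical genuinely violating live stream
false_positive_rate = 0.01      # detector wrongly flags 1% of innocuous streams
true_positive_rate = 0.99       # detector catches 99% of real incidents

false_alarms = innocuous_streams * false_positive_rate
true_alarms = real_incidents * true_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"flagged streams per day: {false_alarms + true_alarms:,.0f}")
print(f"share of flags that are real incidents: {precision:.4%}")
```

Under these made-up numbers, reviewers would face roughly 10,000 flagged streams a day of which far less than one percent are real incidents - the kind of noise Facebook says could cause the genuinely urgent video to be missed.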

"AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect."

Facebook said that of the fewer than 200 people who watched the video during its live broadcast, not one reported it.

"This matters because reports we get while a video is broadcasting live are prioritized for accelerated review.

"We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground."

The first user report came in 29 minutes after the broadcast began on Friday - 12 minutes after the live broadcast ended, meaning the footage streamed live for about 17 minutes.

Under a heightened policy on suicide, the company has been exploring an accelerated review process for content reported as containing self-harm, but because no one reported the video in that category, it was handled under different procedures.

 "As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review."

Facebook said it will continue to employ people on its safety and security team, which currently numbers more than 30,000, to monitor content that the AI misses.

The company also said that media organisations were partly to blame for the spread of the video, which found its way onto TV channels and news websites.

"We recognize there is a difficult balance to strike in covering a tragedy like this while not providing bad actors additional amplification for their message of hate."

Facebook says fewer than 200 people watched the video while it was live, and it was viewed about 4,000 times in total before it was removed from the site.

Newshub.