Facebook Executive Warns AI Video Screening Still a Long Way Off

(Bloomberg) -- Facebook Inc.’s chief artificial intelligence scientist said the company is years away from being able to use software to automatically screen live video for extreme violence.

Yann LeCun’s comments follow the March livestream of the Christchurch mosque shootings in New Zealand.

"This problem is very far from being solved," LeCun said Friday during a talk at Facebook’s AI Research Lab in Paris.

Facebook was criticized for allowing the Christchurch attacker to broadcast the shootings live, without the oversight that could have taken the video down more quickly. It also struggled to prevent other users from re-posting the attacker's footage.

LeCun said livestreams of violence present numerous problems for automated systems, in particular the disturbing audio that accompanies videos of extreme violence, such as shootings or beheadings. A system would need to be trained on both picture and sound, he said, and would likely have to incorporate information about the individual posting the video and the content they had recently published.
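To make the shape of such a system concrete, here is a minimal sketch in PyTorch of the kind of multimodal fusion model LeCun describes: separate encoders for video, audio, and poster metadata, combined into a single score. The module names, feature sizes, and fusion strategy are illustrative assumptions, not a description of Facebook's actual system.

```python
# Hypothetical sketch of a multimodal livestream classifier.
# Assumes video/audio/user inputs arrive as fixed-size feature vectors
# (in practice these would come from pretrained backbones).
import torch
import torch.nn as nn

class LivestreamScreener(nn.Module):
    def __init__(self, frame_dim=2048, audio_dim=128, user_dim=32, hidden=256):
        super().__init__()
        # One small encoder per modality: picture, sound, and poster history.
        self.frame_encoder = nn.Sequential(nn.Linear(frame_dim, hidden), nn.ReLU())
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.user_encoder = nn.Sequential(nn.Linear(user_dim, hidden), nn.ReLU())
        # Fusion head produces one logit: probability the clip is prohibited.
        self.head = nn.Linear(hidden * 3, 1)

    def forward(self, frames, audio, user):
        fused = torch.cat([
            self.frame_encoder(frames),
            self.audio_encoder(audio),
            self.user_encoder(user),
        ], dim=-1)
        return torch.sigmoid(self.head(fused))

# Score one clip summarized as random placeholder features.
model = LivestreamScreener()
p = model(torch.randn(1, 2048), torch.randn(1, 128), torch.randn(1, 32))
print(float(p))  # untrained, so near chance
```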

Another problem is a lack of data on which to train an AI to reliably detect such videos. "Thankfully, we don’t have a lot of examples of real people shooting other people," he said.

While there are plenty of examples of simulated violence in movies that could be used for training, LeCun said a system would have trouble differentiating between real violence and action movies, and would block both, even though posting something from a movie is allowed.

Jerome Pesenti, Facebook’s vice president of AI, said it was the company’s goal to use a combination of human reviewers and automated systems to remove prohibited content -- such as videos of extreme violence -- as soon as possible.

But Pesenti said that if automated software had never encountered a particular video before and its confidence in classifying the content as prohibited was low, a human reviewer would have to screen the video and make a determination.
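The routing Pesenti describes can be summarized in a few lines: auto-remove only when the model is highly confident, and send ambiguous, never-before-seen footage to a human reviewer. The threshold values and names below are illustrative assumptions, not Facebook's actual parameters.

```python
# Hedged sketch of confidence-based human-in-the-loop routing.
from dataclasses import dataclass

REMOVE_THRESHOLD = 0.95   # auto-remove only when the model is very sure
REVIEW_THRESHOLD = 0.50   # anything ambiguous goes to a human reviewer

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "allow"
    score: float

def route(score: float) -> Decision:
    if score >= REMOVE_THRESHOLD:
        return Decision("remove", score)
    if score >= REVIEW_THRESHOLD:
        return Decision("human_review", score)
    return Decision("allow", score)

for s in (0.99, 0.7, 0.1):
    print(route(s))
```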

Facebook has made progress in automatically detecting and blocking certain sub-categories of extremist content, the company has said, and can now spot and block the posting of 99% of content linked to the terrorist group al-Qaeda.

But detecting and blocking all extremist content -- regardless of origin -- is a "very hard problem," LeCun said.

To contact the reporter on this story: Jeremy Kahn in London at jkahn21@bloomberg.net

To contact the editors responsible for this story: Giles Turner at gturner35@bloomberg.net, Nate Lanxon

©2019 Bloomberg L.P.