(Bloomberg) -- Artificial intelligence is helping Facebook Inc. tackle problems of extremist propaganda, fake accounts and hate speech, but is still not sophisticated enough to handle many of the most pressing issues facing the social network, the company’s leading AI researcher said Wednesday.
Yann LeCun, Facebook’s chief AI scientist and a pioneer in the development of deep learning, said that “AI is part of the answer, but only part” of the solution to the problems facing the company.
In the wake of recent scandals over suspected Russian election meddling, the misuse of user data and extremist content, Facebook Chief Executive Officer Mark Zuckerberg has told U.S. and European lawmakers that AI will eventually help the company solve such issues. But the social network co-founder has been vague about when such systems might be ready or how exactly they will work.
LeCun, speaking at Bloomberg’s Sooner Than You Think technology conference in Paris, said that machine learning software was now fairly effective at blocking extremist videos from being posted, flagging potential hate speech and identifying fake accounts. But he said that when it came to issues such as “fake news,” or some kinds of extremist content, the systems could not understand enough context to be effective.
For example, such systems can’t easily tell the difference between an extremist video used for terrorist propaganda and the same video shown to illustrate a legitimate television news broadcast, or between speech used to spread hate and the same speech excerpted by a civil rights group to highlight the danger of hate groups. He also said today’s machine learning systems generally don’t understand irony or humor.
The hardest problem to tackle was false news, LeCun said. “AI is nowhere near being able to solve that problem.”
While computer vision and natural language processing have made great strides through the kinds of deep learning algorithms LeCun and other researchers have developed, artificial intelligence still struggles in situations where a system needs outside knowledge not directly contained in its training data to make a correct assessment.
LeCun said he isn’t concerned about people with bad intentions using machine learning to generate fake videos or photos that could more easily mislead people. In such cases, he said, AI itself could be used to help detect AI-generated images. “There is quite a bit of research on training a system to actually detect if an image has been artificially generated or manipulated,” he said.
©2018 Bloomberg L.P.