
Tech Companies Push Back on U.K.'s Claim They Are Lax on Terror

Tech companies dispute Theresa May’s claim that the internet is a ‘safe space’ for terrorists.

Theresa May, U.K. prime minister, leaves 10 Downing Street to make a statement on the terror attack in London on Sunday. (Photographer: Chris Ratcliffe/Bloomberg)

(Bloomberg) -- Technology companies are pushing back against U.K. Prime Minister Theresa May’s claim that they provide a "safe space" for terrorists.

The British leader’s call for greater regulation of the internet on Sunday followed the terrorist attack in London, which killed seven and left dozens wounded.

Facebook Inc. condemned Saturday’s attacks and said that it wanted to provide a service where law-abiding users -- not terrorists -- felt safe. "We want Facebook to be a hostile environment for terrorists," Simon Milner, Facebook’s policy director for Europe, the Middle East and Africa, said in a statement.

Milner said that Facebook did not allow terrorist content on its platform and already "worked aggressively," using a combination of automated technology and human reviewers, to remove it as soon as the company became aware of it.

Facebook has also been helping anti-extremist groups produce counternarratives -- content designed to undermine terrorist propaganda and help dissuade people from joining terrorist groups.

Google -- the main division of Alphabet Inc. -- said it shared the government’s commitment to "ensuring terrorists do not have a voice online." The company said it already employed thousands of people and invested "hundreds of millions of pounds" to fight abuse on its platforms. It said it was working on an "international forum to accelerate and strengthen our existing work in this area."

For its part, Twitter Inc. said "terrorist content has no place on" its platform.

But some tech companies were more explicit in their criticism of May’s calls for greater regulation.

Telegram Messenger LLP, maker of a messaging app that has been used by terrorist groups because it is perceived to be more secure, said governments should not "shoot the messenger" by going after social media companies or trying to outlaw the strong encryption that apps such as Telegram employ.

The messaging service said it removed any public channels -- which allow a Telegram user to broadcast a message to a large group of followers -- distributing terrorist content "within hours" of being made aware of their existence.

Markus Ra, Telegram’s spokesman, referred journalists seeking comment on Saturday’s attacks to a lengthy blog post he authored in late March following a terrorist attack on the U.K. Parliament and Westminster Bridge. In the post, Ra said that seeking to ban a particular messaging service or end-to-end encryption, such as Telegram’s, would do nothing to stop terrorism. Terrorists would simply switch to other forms of communication or build their own fully encrypted messaging apps using publicly available technology, he wrote.

Ra also blamed the news media for magnifying the risk terrorism poses and creating an atmosphere of hysteria that enables politicians to erode civil liberties. "Terror spreads on the wings of the click-hungry press, spurred on by those politicians who are looking for more power and less accountability," he wrote.

NordVPN, a company that provides virtual private networks -- secure, encrypted connections to the internet that make it hard for governments to track someone’s online activity -- said there was no evidence that increased regulation would be effective in stopping terrorist plots. Any attempt to force companies to install "backdoors" that would enable governments to bypass encryption would make most people less safe, Marty Kamden, NordVPN’s chief marketing officer, said in a statement.

"The essence of the internet is to be a free space --it was not built to have regulation, censorship or administrators," he said.

Most social media companies rely primarily on their own users to flag extremist material, which is then reviewed by humans, often low-paid contractors, and removed if it violates that company’s policies.

Politicians and anti-extremist groups have called on the tech companies to do more -- whether that means employing many more people to screen posts or investing heavily to develop artificial intelligence capable of blocking extremist content automatically.

Facebook uses technology to identify photos and videos that it has previously removed for violating its rules against terrorist content. It then sends these matches to human reviewers, who can assess context -- for instance, a news program might have a legitimate reason to use a portion of a terrorist video -- and decide whether the content should be removed.

Google, Facebook, Microsoft and Twitter have also begun working on a shared database of photos and videos that any one of the companies has removed for violating its rules on terrorist content; the database automatically flags such material for review if someone tries to post it to another company’s platform.
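To make the mechanism concrete, the sketch below is a hypothetical illustration -- not any company’s actual system, which relies on more robust perceptual-hashing techniques -- of how a shared database of fingerprints for previously removed files could flag a re-upload for human review. The function names and sample data are invented for illustration.

```python
import hashlib

def fingerprint(file_bytes: bytes) -> str:
    """Return a content fingerprint for an uploaded file (exact-match only)."""
    return hashlib.sha256(file_bytes).hexdigest()

# Fingerprints of files previously removed for violating terrorist-content
# rules, contributed by participating companies (illustrative stand-in data).
previously_removed = [b"example-removed-video-bytes"]
shared_hash_db = {fingerprint(f) for f in previously_removed}

def flag_for_review(file_bytes: bytes) -> bool:
    """True if an upload matches known removed content and should be routed
    to human reviewers, who assess context before deciding on removal."""
    return fingerprint(file_bytes) in shared_hash_db

print(flag_for_review(b"example-removed-video-bytes"))  # True: send to reviewers
print(flag_for_review(b"some-other-upload"))            # False: no automatic flag
```

In practice, exact hashing as shown here only catches byte-identical re-uploads; matching edited or re-encoded copies requires perceptual fingerprints, which is why the companies pair such databases with human review rather than removing content automatically.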

The companies have also been researching the use of artificial intelligence to identify terrorist content automatically, without the need for a human user to flag it. But so far, the technology is not reliable enough.

To contact the reporter on this story: Jeremy Kahn in London at jkahn21@bloomberg.net.

To contact the editors responsible for this story: Giles Turner at gturner35@bloomberg.net, Molly Schuetz