Tech Firms Face Hefty Fines Under New EU Terror Rules

EU unveils proposals requiring tech to remove terror content.

In this file photo, Skype Technologies SA, Facebook Inc. and Google Inc. logos are displayed on computer screens for a photograph in New York, U.S. (Photographer: Jin Lee/Bloomberg)

(Bloomberg) -- Alphabet Inc.’s Google, Twitter Inc., Facebook Inc. and other tech firms could be slapped with fines as high as 4 percent of annual revenue if they fail to remove terror propaganda from their sites quickly enough under new European Union legislative proposals unveiled Wednesday.

The European Commission, the bloc’s executive body, proposed new legislation forcing internet companies to wipe Islamic State videos and other terror content from their services within an hour of notification for removal by national authorities. Companies would be fined by national governments in the event of systematic failures to remove content.

Wednesday’s proposal follows similar guidelines the EU issued in March. The EU at the time threatened to issue the regulation should the tech firms fall short of expectations.

Large tech platforms have made rapid improvements in their efforts to tackle terror content, partly thanks to automated tools, but the EU says some of the platforms have failed to meet the one-hour deadline and need to do more.

"While we have made progress on removing terrorist content online through voluntary efforts, it has not been enough," said Julian King, European commissioner for security policy. "We need to prevent it from being uploaded and, where it does appear, ensure it is taken down as quickly as possible -- before it can do serious damage."

A 4 percent fine would apply only in the event of "systematic failures" to remove content. Based on 2017 revenues, that would amount to more than $4.4 billion for Google parent Alphabet and more than $1.6 billion for Facebook.

Spokeswomen for Google’s YouTube and Facebook both said the companies share the commission’s goal of combating the spread of terrorist content on their platforms.

Automated and machine-learning tools can help tech firms catch malicious posts. But web firms typically also use human reviewers to ensure they don’t remove terror-related content when it’s used in a neutral context, such as by news outlets, whistle-blowers or non-governmental organizations. That can make tight turnaround times a challenge for companies that want to avoid over-censoring users.

Calling the commission’s proposal "troublesome," Raegan MacDonald, head of EU public policy at Mozilla, said "it would force private companies to play an even greater role in defining acceptable speech online."

The move is part of a wider shift by legislators in Europe and the U.S. to hand more legal responsibility to tech firms for the content that appears on their sites. Also on Wednesday, the European Parliament voted to back copyright rules that would help video, music and other rights holders seek compensation for use of their content online. In the U.S., President Donald Trump signed a law in April making websites liable if they knowingly facilitate sex trafficking.

The EU on Wednesday also called on the companies, member states and Europol to increase their cooperation, including by ensuring that a point of contact at each company and each national authority is reachable 24/7. The tech firms and the EU’s member states will also be required to report back to the commission regularly on the removals of terror content.

The commission proposal still needs approval from the EU’s member states and the European Parliament before it becomes law. EU member states said in late June they welcomed the commission’s intentions to present a legislative proposal in the area.

To contact the reporter on this story: Natalia Drozdiak in Brussels at ndrozdiak1@bloomberg.net

To contact the editors responsible for this story: Giles Turner at gturner35@bloomberg.net, Peter Chapman

©2018 Bloomberg L.P.