How to Prevent Another Attack Like Christchurch

(Bloomberg Opinion) -- New Zealand Prime Minister Jacinda Ardern has ordered a top-level inquiry into whether police and intelligence services could have done more to prevent the March 15 terrorist attack that left 50 people dead. The rest of the world can no doubt also learn from the New Zealand inquiry’s findings, but all democracies should be putting measures in place to counter far-right extremism before it reaches the radar of intelligence services or police.

The Christchurch attack shocked the world, but not those of us who have studied the far-right and its flourishing transnational online ecosystem. Representing nearly half of all referrals to the U.K. government’s counter-extremism program in 2017-2018 and the vast majority of extremist-related killings in the U.S. in recent years, the far-right has inspired violence within its ranks while pushing its ideas out to mainstream audiences largely unchecked.

A 2017 analysis of public Facebook pages in the U.K. by the Institute for Strategic Dialogue unearthed some 40,000 accounts engaging with hateful far-right views, compared with only 2,000 engaging with Islamist extremist views. But despite the clear threat, efforts by governments and the private sector to stem Islamist violent extremism online simply have not been matched by efforts against far-right extremism.

The U.K., for example, has public and private sector frameworks, technology and personnel in place to deal with a range of harmful content, including child pornography and Islamist violent extremism. But these failed to cope with the thousands of attempts to repost modified versions of the Christchurch terrorist’s video in order to bypass filters. While tech giants have come under increased pressure from European governments to remove illegal content, the platforms have struggled to keep pace with the scale of the problem.
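To see why filters buckle against modified re-uploads, consider the difference between exact and perceptual hashing. The sketch below is purely illustrative – it is not any platform’s actual system, and the tiny 8x8 “frames” are stand-ins for real video frames – but it shows how a trivial brightness tweak defeats a cryptographic fingerprint while a perceptual hash still matches.

```python
import hashlib

def exact_hash(frame_bytes: bytes) -> str:
    """Cryptographic hash: any one-byte change produces a completely new digest."""
    return hashlib.sha256(frame_bytes).hexdigest()

def average_hash(pixels: list[list[int]]) -> int:
    """Perceptual 'aHash': one bit per pixel, set if the pixel is above the mean.
    Small brightness shifts or re-encodes flip few or no bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")

original = [[(x * y) % 256 for x in range(8)] for y in range(8)]
# A re-upload with a slight, uniform brightness shift: enough to change every byte.
modified = [[p + 3 for p in row] for row in original]

print(exact_hash(bytes(sum(original, []))) == exact_hash(bytes(sum(modified, []))))  # False
print(hamming(average_hash(original), average_hash(modified)))  # 0 bits differ: still a match
```

This is, in simplified form, why platforms need perceptual matching and shared hash databases rather than exact fingerprints alone – and why even those can be defeated by heavier edits.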

One urgent need is for the companies to work with experts to train their human and machine-learning systems to better identify far-right content and accounts against the backdrop of this fast-evolving threat. However, this will not address the problem posed by the wider online ecosystem.
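By way of illustration only, the kind of classifier such expert input would feed might look like the toy sketch below. Everything in it is a placeholder of our own invention – the example texts, the labels and the model choice – and real moderation systems are vastly larger, but the principle is the same: labeled examples from domain experts teach the model what to flag for human review.

```python
# Toy sketch only: the texts, labels and model here are invented placeholders,
# not real training data or any company's actual moderation pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical expert-annotated examples (placeholder strings).
texts = [
    "example of a coded extremist slogan",
    "ordinary political commentary",
    "example of a dehumanizing meme caption",
    "news report about an attack",
]
labels = [1, 0, 1, 0]  # 1 = flag for human review, 0 = benign

# Word and bigram features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; a threshold would route borderline items to human moderators.
print(model.predict_proba(["another coded slogan variant"])[0][1])
```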

Extremists have increasingly migrated to alternative online spaces where moderation is limited or non-existent. Imageboards such as 4chan and 8chan, messaging apps such as Telegram, social networks such as Gab and gaming-focused chat platforms like Discord act as virtual safe havens for hateful propaganda and even the mobilization and planning of illegal activities. Governments need to ensure that offline laws are applied effectively to all of these online spaces.

When it comes to existing laws addressing hate speech, harassment and even terrorism, enforcement focuses heavily on explicit incitement to violence or affiliation with proscribed groups – both of which account for a relatively small part of the problematic behavior that drives extremist sub-cultures and ecosystems. So what do we do about content that sits in a legal grey zone – such as the New Zealand attacker’s manifesto – hateful and objectionable but not necessarily illegal?

Firstly, our educational programs need to be fit for the digital age. Improving digital literacy and enabling people – young and old – to identify propaganda and understand how the platforms serve them content and use their data is essential. There are some promising programs that do this, but they are not delivered at the scale we need. Governments should move to integrate digital citizenship education into national curriculums.

Our second line of effort must focus on equipping civil society with the tools and skills to compete more effectively with extreme voices in our new digital ecosystem. Civil society organizations like ours have worked in partnership with tech companies and experts to trial innovative community-based solutions. These include tools that help local authorities analyze online hate speech in their communities and its impact on hate crimes; one-to-one engagement with people expressing violent extremist views; and the promotion of alternative content to those searching for extremist material. Such projects, however, struggle to get past the pilot phase and need to be scaled up significantly to have an impact on the problem.

The virtual elephant in the room, though, is the set of products and algorithms deployed by tech companies that tip the scales toward hateful, extreme voices and away from the majority of us who oppose them. Engagement-driven newsfeeds still disproportionately shape public discourse. Facebook founder Mark Zuckerberg has admitted that users engage most with the most controversial content on his platform, and its architecture still encourages this.
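The dynamic is easy to see in miniature. The toy model below is our own construction, with made-up posts and made-up weights rather than Facebook’s actual ranking formula; it simply shows that once a feed is sorted purely by predicted engagement, and provocation predicts engagement better than quality does, the most inflammatory post rises to the top.

```python
# Assumption: this is an invented caricature of engagement ranking,
# not any platform's real algorithm or real weights.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    quality: float      # how informative/civil the post is (0..1)
    provocation: float  # how outrage-inducing it is (0..1)

def predicted_engagement(p: Post) -> float:
    # Assumed weighting: provocation drives clicks and comments more
    # than quality does, echoing Zuckerberg's borderline-content observation.
    return 0.3 * p.quality + 0.7 * p.provocation

posts = [
    Post("careful policy explainer", quality=0.9, provocation=0.1),
    Post("inflammatory conspiracy post", quality=0.1, provocation=0.9),
    Post("local community notice", quality=0.6, provocation=0.2),
]

feed = sorted(posts, key=predicted_engagement, reverse=True)
for p in feed:
    print(p.text)  # the inflammatory post ranks first
```

Changing the weights changes the feed: that is why the authors argue regulators should scrutinize ranking architecture, not just content removal.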

European governments are moving closer to regulating social media companies: Germany has already enacted legislation (NetzDG, which fines platforms that fail to remove flagged illegal content within 24 hours), and the U.K. is due to publish its plans in the coming weeks. Their focus so far has been only on big tech and on content moderation. They must also look at ways to encourage changes to the distortive impact of the platforms’ technological architecture, as well as the wider tech and media ecosystem.

Extremism is, and always has been, a pan-ideological phenomenon. We need to apply existing laws more effectively to the online world and redirect the resources and solutions built up to counter Islamist-inspired extremism toward the far-right threat. Without a more robust effort by democratic countries to reach a consensus on internet governance, virtual hate will continue to have devastating real-world consequences.

To contact the editor responsible for this story: Therese Raphael at traphael4@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Sasha Havlicek is the founder and CEO of the Institute for Strategic Dialogue.

Zahed Amanullah is a resident senior fellow at the Institute for Strategic Dialogue.

©2019 Bloomberg L.P.