The Backlash Against “Techlash” Is Unfair

(Bloomberg Opinion) -- It was almost inevitable that “techlash” – the growing dislike of tech platforms and, in particular, of social media for its role in undermining democracy – would attract a backlash of its own. Key theories such as the filter bubble and the echo chamber are themselves now being challenged.

In a Medium post this week, Jeff Jarvis, a journalism professor and blogger, reviewed some of the recent academic work on the subject. In particular, he looked at a paper by Axel Bruns of Queensland University of Technology in Australia, provocatively titled “It’s Not the Technology, Stupid: How the ‘Echo Chamber’ and ‘Filter Bubble’ Metaphors Have Failed Us.”

In it, Bruns argues that we don’t select friends solely on ideological criteria. Instead, “contacts from the many facets of the user’s personal life – family, friends, acquaintances, workmates, and others – connect and communicate with each other in an unruly and often uncontrollable mêlée.” Because of this, users encounter a greater variety of views than non-users do. They aren’t locked into watertight “bubbles” by the social networks’ content-selection algorithms, Bruns says.

So social networks shouldn’t be held responsible for the proliferation of fake news and hyper-partisan commentary. In fact, Bruns argues, this debate distracts us from a much more important question: why are people getting more intolerant when confronted with opposing opinions?

He isn’t the only academic to question echo chambers (a term coined by my Bloomberg Opinion colleague Cass Sunstein) and filter bubbles (a concept developed by Upworthy co-founder Eli Pariser). In 2016, Seth Flaxman from Oxford University, Sharad Goel from Stanford University and Justin Rao, a Microsoft Corp. employee, noted that social networks and search engines increased people’s exposure to material from their less preferred side of the political spectrum, even as they increased “the mean ideological distance between individuals.” Both effects, though, were relatively modest.

Perhaps coincidentally, these findings are similar to those of Facebook Inc.’s own researchers. In a 2015 Science article, Eytan Bakshy and collaborators wrote that social networks’ algorithms expose users to “cross-cutting viewpoints” – but that users themselves tend not to click on such links.

“Our work suggests that the power to expose oneself to perspectives from the other side in social media lies first and foremost with individuals,” Bakshy wrote.

These are all valid points. Some social network users – and not just journalists – make a conscious effort to follow people with opposing views as a reality check. And the vast majority of people have friends from across the political spectrum, which exposes them to differing views.

But it would be misguided to dismiss the idea of the filter bubble as “the dumbest metaphor on the internet,” as Bruns does, because the way social networks work very likely shapes the behavior of ideologically rigid individuals and how they react to the opposing views they encounter.

A large body of academic literature points to the role social networks play in organizing political action in real life. Troublingly, this goes for political violence, too. Last year, Karsten Mueller from Princeton University and Carlo Schwarz from the University of Warwick published a paper showing that violence against immigrants increased in German towns with more active Facebook users. In April 2019, Mattias Wahlstrom and Anton Tornberg from the University of Gothenburg took those findings further by describing the mechanisms that translate social media interactions into real-world xenophobic violence in Sweden.

At the time the Wahlstrom paper was written, the largest political group on the Swedish segment of Facebook was the ultra-nationalist “Stand Up for Sweden” group, with almost 170,000 members. Such large online communities, the Swedish researchers wrote, lend moral legitimacy to violent actions by providing individuals with “feedback, mutual recognition and emotional responses that motivate action.” Collectively, they also form an alternative discourse of news and analysis that contradicts whatever mainstream or “cross-cutting” views their members do encounter.

In the pre-social network world, it was hard for hyper-partisan, potentially violent people to find each other, and any groups such people formed were small. Now, anyone who has ever had a Twitter mob descend on them knows how easy it is to meet like-minded people and have your hatred reinforced and legitimized. Echo chambers and filter bubbles don’t need to be perfectly insulated to produce the mob effect – and to provide opportunities for paid trolls to incite harassment and, ultimately, violence. The greatest danger posed by social media isn’t insulation, it’s amplification.

Of course, as Jarvis correctly points out in his Medium post, any journalist (myself included) works for an industry that competes with the social networks. Collectively, we can be seen as seeking to preserve our monopoly on content mediation. To me, however, there’s no problem with being open about this. That mediation monopoly used to be a moderating factor. It kept public discourse civil and made sure out-of-control hatred was marginalized. Now, the online mobs have proliferated. I’m not certain this genie can be chased back into the bottle, but if the public backlash against social media grows rather than recedes, the chances of that happening will be greater.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Leonid Bershidsky is Bloomberg Opinion's Europe columnist. He was the founding editor of the Russian business daily Vedomosti and founded the opinion website Slon.ru.
