Social Media Fails to Curb Racist Emojis Aimed at Soccer Stars

Twitter, Facebook Struggle to Control Racist Use of Emojis

A wave of online racism aimed at some of England’s Black soccer players has highlighted how social media companies’ content moderation systems fail to catch abuse delivered through emojis.

On Sunday, England’s men’s soccer team, playing in their first major tournament final since 1966, fell to Italy on penalties. In the aftermath, a wave of racist abuse was leveled at three Black England players -- Marcus Rashford, Jadon Sancho and Bukayo Saka -- and messages on social networks like Twitter, Facebook and Instagram included monkey and banana emojis.

Saka on Thursday posted a statement to his Twitter followers to thank fans for their support, but also to call out technology companies for failing to curb abuse.

“To the social media platforms Instagram, Twitter and Facebook, I don’t want any child or adult to have to receive the hateful and hurtful messages that me, Marcus and Jadon have received this week,” he said. “I knew instantly the kind of hate that I was about to receive and that is a sad reality that your powerful platforms are not doing enough to stop these messages.”

The digital abuse isn’t a new phenomenon. In a 2020 study of tweets sent to selected players, the Professional Footballers’ Association and data science company Signify found more than 3,000 explicitly abusive messages; 29% of the racially abusive posts took the form of emojis, the small images and symbols used in texts, emails and other digital communications.

“Twitter’s algorithms were not effectively intercepting racially abusive posts that were sent using emojis,” the study found. “This highlights a glaring oversight.”

But despite the long-standing problem, abuse via emojis has continued. A more recent analysis published Monday flagged almost 2,000 potentially abusive tweets targeting Black players during the European tournament, and found that although a number of the tweets were deleted, Twitter Inc. didn’t permanently suspend the accounts behind them.

Social media companies such as Facebook Inc., Twitter and Google, which owns YouTube, have spent years developing algorithms to detect offensive speech so that it can be removed. But experts say they have invested less effort, and built less expertise, in analyzing the language of emojis, and that has left an opening.
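The gap is mechanical: a system tuned to match offensive words sees nothing objectionable in a message that carries the same insult as a pictograph. One common mitigation is to expand each emoji into its Unicode character name before classification, so that existing text rules apply. Below is a minimal Python sketch of that idea; the ABUSIVE_TERMS lexicon, the helper names and the heuristic for spotting emoji are illustrative assumptions, not any platform’s actual pipeline.

```python
import unicodedata

# Hypothetical abuse lexicon, for illustration only; production systems
# pair far larger lists with learned classifiers.
ABUSIVE_TERMS = {"monkey", "banana"}

def expand_emoji(text: str) -> str:
    """Replace pictographic codepoints with their Unicode names,
    e.g. the monkey emoji (U+1F412) becomes ' monkey '."""
    out = []
    for ch in text:
        name = unicodedata.name(ch, "")
        # Rough heuristic: most emoji sit above the Basic Multilingual
        # Plane; a real system would check the Unicode emoji property.
        if ord(ch) > 0xFFFF and name:
            out.append(" " + name.lower() + " ")
        else:
            out.append(ch)
    return "".join(out)

def is_abusive(text: str) -> bool:
    """Match the emoji-expanded text against the lexicon."""
    words = set(expand_emoji(text).lower().split())
    return bool(words & ABUSIVE_TERMS)

print(is_abusive("you monkey"))      # True: the written word is caught
print(is_abusive("you \U0001F412"))  # True only because of the expansion
```

In practice a platform would feed the expanded text to a learned classifier rather than a word list, but the normalization step, mapping pictographs back into the vocabulary the model already understands, is the same.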

“It’s OK to send a monkey emoji to someone, but if you call someone a monkey, you get banned so that’s the contradiction,” said Vyvyan Evans, a linguistics expert who wrote a book on the subject. “Insufficient effort to date has been focused on policing emojis.”

Spokespeople for Twitter and Facebook said the companies have been removing posts and disabling accounts since Sunday’s final; Twitter said it proactively removed more than 1,000 tweets and permanently suspended accounts in the hours after the game.

“Using emojis, like monkey or banana emojis, to racially abuse someone is completely against our rules,” said a Facebook company spokesperson. “We use technology to help us review and remove harmful content, but we know these systems aren’t perfect, and we’re constantly working to improve.”

U.K. leaders condemned the hate speech, with Prime Minister Boris Johnson saying he warned executives from Facebook, Twitter, ByteDance Ltd.’s TikTok, Snap Inc.’s Snapchat and Instagram at a Tuesday meeting that they need to crack down on online abuse.

Players and officials also spoke out, including Rashford in a widely shared statement on social media. “I’ve grown into a sport where I expect to read things written about myself,” he wrote. “I can take critique of my performance all day long, my penalty was not good enough, it should have gone in, but I will never apologise for who I am and where I came from.”

Bertie Vidgen, a research fellow in online harms at the Alan Turing Institute, has been working with colleagues from Oxford University to test how speech detection models, including Perspective, a tool from Google’s Jigsaw unit, respond to offensive emojis. The findings so far have not been encouraging, and Vidgen said it’s not because emojis necessarily pose a more difficult technical challenge.

“They have really low performance. You can say something hateful, which if you wrote out in text would definitely be picked up,” Vidgen said. “There’s zero justification for having that loophole. They just need to enforce their policies.”
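The kind of audit Vidgen describes is straightforward to approximate. The sketch below scores a worded insult and its emoji counterpart against the publicly documented Perspective API, the Jigsaw tool mentioned above. The endpoint, payload and response fields follow Perspective’s public documentation as best understood here; the API key is a placeholder, and the scores returned are something to measure, not results reported by the researchers.

```python
import json
import urllib.request

# Probe a toxicity model with text vs. emoji variants of the same insult.
# Assumes access to the Perspective API; replace API_KEY with a real key.
API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for text."""
    body = json.dumps({
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    req = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # The worded insult versus the emoji version of the same message.
    for msg in ("you are a monkey", "you are a \U0001F412"):
        print(repr(msg), toxicity(msg))
```

If the loophole Vidgen describes is present, the emoji variant will score markedly lower than the worded one even though both carry the same racist message.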

©2021 Bloomberg L.P.