
The Petition:

Social media corporations are once again being slammed over racist abuse aimed at Black footballers.

Following England’s defeat to Italy in the Euro 2020 final on Sunday, Rashford, Sancho, and Saka were racially abused on Twitter and Instagram. None of the three scored in the game’s penalty shootout.

After the game, racist abuse featuring monkey and banana emojis and racial slurs was directed at the players by both anonymous and real-named accounts.

Facebook and Twitter said they removed the abusive comments swiftly. However, numerous users noticed that racist comments remained accessible hours later, even after being reported.

The Football Association said it was “appalled” by the online racism directed at some of England’s players.

Major English football bodies have already criticised Facebook and Twitter for their alleged systemic failure to stop racism and harassment.

The Premier League shunned Facebook, Twitter, and Instagram for three days in April to “demand change,” arguing it is high time social media platforms did more to combat online hatred.

Here’s why, despite public warnings, social media networks still struggle to stop racist abuse, and what it might take to fix the problem.

Why is abuse rampant?

In April, the Premier League and other English footballing authorities wrote to the CEOs of Twitter and Facebook, urging them to adopt specific measures. They suggested filtering and blocking racist posts before they are delivered, and removing content that slips through using “robust, transparent, and swift” techniques.

Facebook and Twitter explicitly prohibit racial abuse, yet the fact that so much of it still gets through says something about their enforcement.

Platforms are wary of outright banning words, emojis, or phrases, since users can easily change their vocabulary. Additionally, victims of racism may choose to reclaim or publicize slurs to draw attention to the mistreatment. Language is context-dependent, and outright censorship ignores this.

However, social media sites with billions of users are too big to rely solely on human reviewers or moderators to check every post. As a result, massive platforms like Twitter and Facebook have developed sophisticated algorithms to detect and remove hate speech.

But those algorithms aren’t always reliable. Instagram’s moderation system responded to reports of posts containing racist slurs and emojis by saying its technology had found that the posts probably did not violate its Community Guidelines.

Moreover, the filtering process is limited to what the algorithms already know. Users can invent new substitute characters or epithets, and existing slurs can appear in contexts that don’t spread hate, for example when a victim quotes an abusive message while seeking support. The sketch below illustrates how easily a simple blocklist is evaded.
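As a deliberately simplistic illustration, the Python sketch below implements a blocklist-style filter of the kind described above. The blocked terms, emojis, and example messages are invented stand-ins, not any platform’s real moderation rules; the point is only that exact matching is trivially evaded by character substitutions or spacing.

```python
# A minimal sketch of a blocklist-style filter and why it is easy to evade.
# The terms and messages are invented placeholders, not real moderation rules.

BLOCKED_TERMS = {"slur1", "slur2"}              # stand-ins for real abusive terms
BLOCKED_EMOJIS = {"\U0001F412", "\U0001F34C"}   # monkey and banana emojis

def is_flagged(message: str) -> bool:
    """Flag a message containing a blocked word (exact match) or emoji."""
    words = message.lower().split()
    if any(term in words for term in BLOCKED_TERMS):
        return True
    return any(emoji in message for emoji in BLOCKED_EMOJIS)

print(is_flagged("slur1"))    # True  -- caught by the exact-match list
print(is_flagged("s1ur1"))    # False -- a one-character substitution slips through
print(is_flagged("sl ur1"))   # False -- inserted whitespace slips through
```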


While social media companies have successfully blocked terrorist content and photographs of child exploitation, these are different problems from a technological standpoint.

Fortunately, the pool of known abuse imagery is relatively limited. Sadly, that number is rising, but because much of this content has already been posted elsewhere, it has been fingerprinted, making it far easier to detect and remove immediately. Fingerprinting images and interpreting messages written in English are vastly different technological problems.
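To give a rough sense of how image fingerprinting works in principle, the sketch below uses the open-source Pillow and imagehash Python packages to compare an upload against perceptual hashes of previously removed images. The file paths and the known-hash store are illustrative assumptions, not a description of any platform’s actual pipeline, which relies on far more robust fingerprints.

```python
# A rough sketch of fingerprint-based image matching. Uses the open-source
# Pillow and imagehash packages; file paths and the known-hash store are
# illustrative assumptions, not any platform's real pipeline.

from PIL import Image
import imagehash

# Perceptual hashes of images that moderators have already removed
# (hypothetical files used purely for illustration).
known_hashes = [
    imagehash.phash(Image.open("removed/abuse_001.png")),
    imagehash.phash(Image.open("removed/abuse_002.png")),
]

def matches_known_abuse(path: str, max_distance: int = 8) -> bool:
    """Return True if an upload is perceptually close to a known removed image.

    Perceptual hashes change only slightly under resizing or recompression,
    so a small Hamming distance still counts as a match.
    """
    upload_hash = imagehash.phash(Image.open(path))
    return any((upload_hash - known) <= max_distance for known in known_hashes)

print(matches_known_abuse("uploads/new_post.png"))
```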

Most natural language processing (NLP) software does not grasp the context a human would, although several companies claim that theirs does.
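The short sketch below, again with an invented stand-in term and made-up messages, shows what that context blindness looks like in practice: a naive term check gives the same verdict to an abusive message and to a victim quoting that abuse while asking for support.

```python
# Illustrates context blindness: a simple term check cannot distinguish abuse
# from a victim quoting that abuse. "slur1" is a stand-in for a real slur.

BLOCKED_TERMS = {"slur1"}

def contains_blocked_term(message: str) -> bool:
    return any(term in message.lower().split() for term in BLOCKED_TERMS)

abusive = "you are a slur1"
quoting = "someone just called me a slur1 and I need support"

print(contains_blocked_term(abusive))  # True
print(contains_blocked_term(quoting))  # True -- same verdict, opposite intent
```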

Should platforms no longer accept anonymous users?

The Premier League also asked social media sites to put all users through an enhanced verification process, which would help law enforcement identify the people behind racist profiles.

In general, players, match officials, managers, and coaches of any background and origin, at any level of football, should be free to take part in the sport without suffering abuse. The footballing bodies argued this will only be possible if social media companies remove offenders’ ability to remain anonymous.

This is something some platforms already do. People who want to publish political adverts on Facebook, for example, must prove their identity with official documents and confirm their location by receiving a code in the mail.

Other analysts have warned that requiring users to verify their identities would not halt racial abuse, because much of it already happens openly under real names. For instance, one real estate brokerage suspended an employee after tweets containing racial slurs aimed at Black footballers were sent from an account bearing his real name. According to reports, the employee claimed his account had been hijacked.

According to some experts, removing the ability to stay anonymous online could harm vulnerable users. Ending anonymity is not the simple solution it may appear to be: vulnerable groups, far more than racists, depend on it. Sensitive subjects such as mental health, end-of-life experiences, and sexual and gender identity are often ones people feel able to discuss only through anonymous accounts, and that must be safeguarded.

What do social media sites have to say about it?

Without addressing the Premier League’s specific policy proposals, a Twitter representative said in a statement that the company had removed over 1,000 tweets relating to abuse of England players and had permanently suspended “a handful” of accounts. The representative added that the heinous racist abuse hurled at England players has no place on Twitter.

The representative also said that Twitter will continue to take action whenever accounts or tweets break its policies, that it has engaged and will keep communicating with its trusted partners across the football community to find ways to tackle the issue collectively, and that it will keep playing its part in curbing this reprehensible behavior, both online and offline.

A Facebook representative, meanwhile, said that no one should have to face racist abuse or discrimination anywhere, and that the company does not want it on Instagram. Facebook said it had swiftly removed comments and accounts directing abuse at England’s footballers and would continue taking action against individuals who violate its policies.

In addition to its efforts to remove harmful content, the company encouraged all players to turn on Hidden Words, a feature that automatically hides comments and direct message requests containing offensive words, phrases, or emojis. The representative ended by saying that “Nothing will solve this problem overnight, but we are dedicated to maintaining the safety of our community.”
