April 14, 2021

How Content Moderation Efforts are Failing the Needs of an Interconnected World

In the early days of social media, no one gave much thought to the immense power these tools could hold, least of all the tech giants themselves. The central idea was to connect people and make the platforms as easily accessible as possible, prioritizing content volume over quality. Over time, however, it has become increasingly obvious that apps like Facebook, Twitter and Instagram are now important tools of protest, information dissemination and mobilization. This brings with it new issues surrounding fake news, misinformation and disinformation, hate speech and online harassment, along with the ever-growing realization that more access comes with more risk of real-world damage.

Facebook and Instagram have over 2 billion and roughly 1 billion monthly active users, respectively, while Twitter reports 192 million daily active users. The problem facing these tech giants is unprecedented: billions of voices from every corner of the world, each steeped in its own cultural context and language. The challenge is to ensure that nothing causing real harm remains on the platform, so each platform maintains its own rules, often called “community guidelines,” governing what can stay up and what will be taken down. The platforms have certainly made efforts to moderate content, hiring human moderators and using automated tools to monitor what is posted.

A multi-lingual approach to content moderation

In Pakistan, language cannot be reduced to a single default, i.e. Urdu. The country has many different languages and dialects that vary from region to region, and the situation is similar in other countries, like India. However, most big social media platforms emerged from the free speech tradition of the United States, where the most popular tech giants are based. They took a hands-off approach to content moderation in the beginning, but slowly began to see the consequences of increased access across varying political, legal and social systems, and in war-torn or deeply divided countries and environments.

Nowhere is this more evident than in Myanmar, where Facebook was used to spread hateful propaganda and violent hate speech against the already marginalized and targeted Rohingya Muslims. This added to the ongoing Rohingya genocide in the region, with Facebook being used to spread hate speech about the community since as far back as 2013. In 2015, only two people at Facebook reviewing problematic posts could speak Burmese; before that, most people reviewing Burmese content spoke English, not Burmese.

As of June 2019, Facebook said it had hired 100 Burmese-speaking moderators, including some who knew regional dialects. However, around 20 million of Myanmar’s 53 million people use the platform, making the addition of a hundred or so moderators seem insufficient for a problem of this scale.

In February 2021, Twitter relaunched an experiment on iOS that prompts users to review replies that may be harmful or offensive before posting them. The experiment was originally launched in May 2019 on web, iOS and Android. The relaunch is limited to iOS users, which means people in Pakistan and other countries without official Apple stores cannot access it, with few exceptions.

Not only is this exclusionary, it also does nothing to counter the growing problem of disinformation and hate speech in countries like Pakistan, where harmful content in Urdu and other local languages takes longer to be detected and removed, if it is detected at all. Hashtags that promote violence or hatred often trend throughout the day until they fade out on their own.

One of the reasons for this lack of moderation in languages other than English is that most languages from the Global South, like Sinhala in Sri Lanka, are resource poor, meaning little of the fundamental data and few of the algorithms needed to process the language exist. Unlike English, which linguists consider resource rich, hundreds of Global South languages lack these resources, making effective processing algorithms at best difficult and at worst impossible to build.
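To make the gap concrete, here is a minimal sketch of how a moderation pipeline might route posts by language. The registry of “supported” models is a hypothetical illustration, not how any platform actually works; the point is that posts in unsupported languages fall back to weaker, generic handling, or to none at all.

```python
# A minimal sketch, not any platform's actual pipeline: identify the language
# of a post, then route it to a language-specific moderation model if one
# exists. SUPPORTED_MODELS is a hypothetical registry for illustration only.
from langdetect import detect  # pip install langdetect

SUPPORTED_MODELS = {"en": "english-hate-speech-model"}  # hypothetical registry

def route_post(post_text: str) -> str:
    """Return the name of the model that would score this post."""
    lang = detect(post_text)
    return SUPPORTED_MODELS.get(lang, "generic-multilingual-fallback")

print(route_post("This post is written in English."))   # routed to a dedicated model
print(route_post("یہ پوسٹ اردو میں لکھی گئی ہے۔"))          # Urdu: falls back to the generic model
```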

The necessity of context

Since most of the popular tech companies like Facebook, Twitter, YouTube and Instagram are U.S.-based, their policies are defined or shaped by the legal, political, cultural and social norms of the country of incorporation, without due consideration of the context in specific regions.

In an attempt to localize its policies, Facebook has included caste in its hate speech policy in India, which is a step in the right direction. However, it took the company a decade of operating in India to make the change, illustrating the glacial pace at which tech companies move to counter online harm in developing, non-English-speaking countries.

Additionally, subjectivity regarding ethics and morals also plays a role in the problems of moderation. Content that a group within the United States considers offensive or problematic could be perceived far more positively in another part of the world, and vice versa. Ideas of right and wrong, legal and illegal, vary from country to country. Indeed, that is why social media apps like TikTok have been getting into trouble with Pakistani authorities: the content posted there allegedly is not in line with Pakistan’s much more conservative legislation and society. That does not mean the content is inherently immoral or wrong. Perceptions shift from society to society, which is why social media companies now find themselves having to choose between the freedom of expression of their content creators and governmental pressure from different countries to remove certain content.

Then, of course, there is content that is not offensive within a certain region or group but may seem that way to an outsider. This is why it is important to have content moderators with an intimate, in-depth understanding of how in-country cultural and political norms differ from the norms of the greater diaspora.

Progress in content moderation

Most social media apps use both algorithms and human moderators to sift through the millions of posts uploaded to their platforms. Although human moderators understand nuance and context far better than algorithms, the job can be traumatizing and disturbing: it carries a significant emotional toll and often comes with low wages and poor working conditions, calling into question how highly companies like Facebook and Google actually value the work.
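As a rough illustration of this hybrid setup, an automated score typically decides whether a post is removed outright, queued for a human moderator, or left alone. The thresholds and the scoring below are assumptions made for the sake of the example, not any company’s actual policy.

```python
# A minimal sketch of the hybrid model described above. The toxicity score
# would come from an automated classifier; the thresholds are illustrative
# assumptions, not any platform's real policy.
def route_for_moderation(toxicity_score: float) -> str:
    if toxicity_score >= 0.95:
        return "auto-remove"    # near-certain violations handled by the algorithm alone
    if toxicity_score >= 0.60:
        return "human review"   # ambiguous cases where nuance and context matter
    return "leave up"           # low-risk content is never reviewed by a person

# Example: a borderline post gets escalated to a human moderator
print(route_for_moderation(0.72))  # -> "human review"
```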

As for algorithms, the fact that they are not capable of the nuanced thinking of human moderators does not make them useless. In Sri Lanka, a group of researchers published two large text corpora for open analysis by local researchers. One was composed of 28 million words covering all three of the major languages spoken in Sri Lanka; the other was a single-language corpus more easily accessible to researchers. The researchers argued that releasing such datasets decentralizes power, allowing local linguists to collaborate with computer scientists in a mutually beneficial relationship with technology corporations.

The goal of making content moderation inclusive of all languages and cultures may seem impossible, but with the growth of global internet accessibility, we do not have much of a choice. Collaborations with linguists and researchers can help tailor algorithms to a single language, making them more effective at quickly identifying hate speech. More diverse algorithms would mean that options to review or flag problematic content are available to most social media users, regardless of their country.
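A minimal sketch of what such a collaboration enables, assuming a hypothetical corpus of posts labelled by local researchers: once labelled data in a single language exists, even a simple classifier can begin flagging likely hate speech for human review.

```python
# A minimal sketch, assuming a hypothetical labelled corpus built by local
# linguists. Character n-grams are used so the approach works even for
# languages without mature tokenizers; real systems are far more complex.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "post flagged by local researchers as hate speech",
    "another flagged post",
    "an ordinary, harmless post",
    "another harmless post",
]
labels = [1, 1, 0, 0]  # 1 = flag for human review, 0 = leave up

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["a new post in the local language"]))
```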

On the other hand, moderation teams should be composed of a diverse group of people, each with in-depth knowledge of the culture and language of the country they have been tasked to monitor. During times of crisis or elections in a country, content moderation efforts for that country should ideally also increase. Additionally, content moderators should be provided with mental health resources, as well as better wages and working conditions.

It may not be possible to get rid of every problematic or offensive piece of content on social media, especially not across the whole world. However, as the products of major tech corporations continue to shape and define so much of our socio-political fabric and online sphere, the work of content moderation becomes increasingly crucial.

Written by

Romessa Nadeem is a Project Coordinator at Media Matters for Democracy, which runs the Digital Rights Monitor.
