10/23/2021

Facebook Updates its Policies in Myanmar to Curb Ongoing Violence in the Country

Facebook on April 14 said it is implementing a Myanmar-specific policy on its platform to “remove praise, support and advocacy of violence” by Myanmar security forces and protestors. The move is a response to the ongoing Rohingya Muslim genocide in the country.

Facebook said the new policy falls under its existing Coordinating Harm and Publicizing Crime policy, a set of guidelines that prohibits people from facilitating, organising, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals. Under the new policy, Facebook will remove posts that praise or support violence committed against civilians, the military or security forces in Myanmar, as well as posts that praise, support or advocate for the arrests of civilians by the Myanmar military and security forces.

[Image: Facebook explains what kind of content it will remove in its most recent update]

The tech giant said it will continue to “closely monitor” the situation in Myanmar and take the necessary steps to keep the platform safe.

Facebook’s previous update, on March 31, introduced a new safety feature in Myanmar that allows users to lock their profile. With a locked profile, people outside a user’s friends list cannot enlarge, share or download full-sized profile pictures or cover photos, and cannot see photos and posts on the user’s timeline (both historic and new).

Prior to this, on February 24, Facebook banned the remaining Myanmar military, referred to as the Tatmadaw, and military-controlled state and media entities from both Facebook and Instagram. It also blocked ads from military-linked commercial entities. This was in response to events since the military coup on February 1, including protests and deadly violence.

The UN’s findings about the genocide

The United Nations’ Fact-Finding Mission on Myanmar released a report in 2018 stating that Facebook had been used as a tool to spread mis/disinformation and bigotry against the persecuted Rohingya Muslims. Posts that incited hatred included graphic content, as well as derogatory and dehumanizing language used to refer to Rohingya Muslims in particular. The report also noted that the measures taken by Facebook were ineffective in preventing the ongoing genocide and socio-political turmoil in the country.

The Mission examined documents, publications, statements, Facebook posts and audio-visual materials that contributed to shaping public opinion on the Rohingya community, as well as the Muslim population in general. It found a “carefully crafted hate campaign” pushing a negative perception of Muslims among the Myanmar population. The campaign was attributed to nationalistic political parties and politicians, leading monks, academics, prominent individuals and members of the government. It portrayed the Rohingya and other Muslims as “an existential threat to Myanmar and to Buddhism,” with the Rohingya community branded “illegal Bengali immigrants.”

“Although improved in recent months, the response of Facebook has been slow and ineffective. The extent to which Facebook posts and messages have led to real-world discrimination and violence must be independently and thoroughly examined. The mission regrets that Facebook is unable to provide country-specific data about the spread of hate speech on its platform, which is imperative to assess the adequacy of its response,” an advance edited version of the report said.

Content moderation in Myanmar

On April 5, 2018, a group of six civil society organisations wrote an open letter to Mark Zuckerberg criticizing Facebook’s content moderation efforts as inadequate. The letter said that Facebook’s methods of moderating hate speech in Myanmar reveal “an over-reliance on third parties, a lack of a proper mechanism for emergency escalation, a reticence to engage local stakeholders around systemic solutions and a lack of transparency.”

In 2015, Facebook had only two Burmese-speaking employees in charge of reviewing problematic posts on the platform; before that, most of the people reviewing Burmese-language content spoke only English.

As of June 2019, Facebook said it had hired 100 Burmese-speaking moderators, including some who knew regional dialects. However, an estimated 20 million of Myanmar’s 53 million people use the platform, which works out to roughly one moderator for every 200,000 users, making the addition of a hundred or so moderators seem insufficient for a problem of this magnitude.

The situation in Myanmar reflects the broader state of content moderation on Facebook, where efforts still fail to adequately cover the full range of languages and cultures around the world. In Pakistan, for example, conversations about recent protests by the proscribed organization Tehreek-e-Labaik Pakistan (TLP) were prevalent on social media. Even though the party was banned after causing a national shutdown and lethal violence against police officers, its supporters and members were still using Twitter to advocate for the protests, and Twitter’s trending page featured hashtags that propagated pro-TLP activities and ideologies. Consequently, access to social media and mobile data was temporarily restricted to keep party members from mobilizing and spreading further information. In the wake of the protests, a hashtag began trending that spread disinformation claiming a civil war was underway in Pakistan.

The conversation about content moderation is now more crucial than ever, with Myanmar serving as the worst-case scenario of what can happen without proper monitoring. It’s important to have content moderators with an intimate, in-depth understanding of in-country cultural and political norms so they can identify hate speech, harassment and mis/disinformation.
