
Twitter adds labels to combat COVID-19 related misinformation; introduces strike system to control spread

March 2, 2021 – Twitter has started applying labels to tweets containing misleading information about the COVID-19 vaccine, effective March 1. The measure is an expansion of the approach the social media company unveiled in December 2020 to combat misinformation about COVID-19, which prioritised the removal of the most “harmful misleading” information.

In a blogpost published on its website, Twitter said that labels will initially be applied by team members when they find content in breach of Twitter’s policy, and that these labels will be used to further train automated tools to identify similar content in the future. “Our goal is to eventually use both automated and human review to address content that violates our COVID-19 vaccine misinformation rules,” Twitter says. English-language content will be the first to be scrutinised, with other languages and cultural contexts to be added over time. Labeled tweets will also contain links to curated content, official public health information, or the Twitter Rules.

Twitter further said it has removed over 8,400 tweets and challenged 11.5 million accounts globally since first introducing its COVID-19 guidance.

Strike system

In addition to this update, Twitter is introducing a strike system to determine when further enforcement action is necessary, with the aim of educating the public about its policies and further reducing the spread of misleading information, especially in cases of repeated moderate- and high-severity violations of its rules. Under this system, users will be notified directly when enforcement action is taken against their account after a label is added to one of their tweets or a tweet is required to be removed.

Twitter explains in the blog that one strike will not result in account-level action, while the second and third strikes will each result in a 12-hour account lock. A fourth strike will lead to a 7-day lock, and five or more strikes will result in permanent account suspension. Users who believe an enforcement action was taken in error can appeal it.

Facebook fights back against COVID-19 misinformation

On February 8, Facebook said in a blogpost that it is expanding its efforts to remove false claims about COVID-19, and about all vaccines, effective immediately. The decision, which will also apply to Instagram, came after years of controversies over vaccine misinformation thriving on the social media platforms Facebook owns. The list of claims subject to removal, drawn up in consultation with health organisations like the World Health Organization (WHO), now includes additional debunked claims about COVID-19 and vaccines.

Specifically, the measure covers claims that COVID-19 is man-made, that vaccines are ineffective, and that it is safer to get the disease than to get the vaccine. Additional misinformation being targeted includes claims that vaccines are toxic, dangerous, or cause autism.

Groups, pages and accounts on both Facebook and Instagram that repeatedly share misinformation may be removed altogether. Claims about COVID-19 that do not breach Facebook policies will still be eligible for review by third-party fact-checkers, but if they are rated false, they will be labeled and demoted. The social media giant claimed that, since December, it has removed false claims about COVID-19 vaccines that had been debunked by public health experts.

In September 2020, the Center for Countering Digital Hate (CCDH) published a report estimating Instagram’s anti-vaccination audience at roughly 7.8 million, and finding that the growth of the anti-vax audience on Instagram accounted for 42 percent of the total growth across Facebook, Instagram, YouTube, and Twitter. In December 2020, Instagram announced that it would remove “widely debunked claims about the COVID-19 vaccines, when people search for terms related to vaccines or COVID-19,” and would direct users to credible information sources. Despite the announcement, many users reported anti-vax pages appearing on Instagram when they searched for the term “vaccine”.

At the time of writing, the top search results for the term “vaccine” on Instagram continue to show anti-vax pages after the three featured credible sources.

Social media doubling down against misinformation

Facebook said it is continuing to improve search results on its platforms, particularly for vaccine- and COVID-19-related queries, aiming to promote verified results and authoritative third-party resources that offer expert information about vaccines and the pandemic. Instagram, in addition to promoting authoritative results in search, will make it harder to find accounts that discourage people from getting vaccinated.

Similar to the approach taken by Twitter, Facebook in December 2020 began sending notifications to users who had interacted with posts that violated its policies against COVID-19 misinformation. Although the posts would be deleted, users would be notified about why they were false. Additionally, Facebook said that in April 2020 it put warning labels on about 50 million pieces of COVID-19-related content, based on around 7,500 articles by its independent fact-checking partners.

As for Twitter, the measure is not the first time the platform has attempted to curb misinformation about important events: during the 2020 U.S. elections, it labeled tweets with misleading and false claims from American political figures and parties and hid them behind warning screens. This rule applied to US-based accounts with over 100,000 followers. To access a labeled tweet, users had to click past the warning, which said, “some or all of the content shared in this Tweet is disputed and may be misleading.”

Specifically, the rules applied to claims by users, including candidates for office, announcing election results before they were authoritatively called. Tweets aiming to interfere with the election process or the implementation of results, such as through violent action, were also subject to removal.

Users could neither reply to nor retweet such tweets, limiting their spread, although quote-tweeting remained enabled. The content was also not recommended by Twitter’s algorithms, meaning users would not see it in their main timelines.
