09/25/2021

Twitter Introduces New Autoblock Feature to Curb Harassment

Twitter is currently experimenting with a feature called Safety Mode, through which the platform will automatically block accounts that engage in potentially abusive or spammy behaviour. The autoblock will remain active for seven days.

“Autoblock is Twitter’s way of helping people control unwelcome interactions,” a blog post said. 

Twitter will use automated technology to look at the content of a tweet and the relationship between the tweet author and the replier to determine if a block is warranted.
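
Twitter has not disclosed how this determination is actually made. Purely as an illustration, here is a minimal sketch of how a decision combining a content signal with a relationship signal could look; the signal names, word list, weights, and threshold below are hypothetical assumptions, not Twitter's real system.

```python
# Hypothetical sketch: combine a content signal and a relationship signal
# to decide whether a reply warrants a temporary autoblock. All names,
# weights, and thresholds here are illustrative, not Twitter's actual logic.

def should_autoblock(reply_text: str, author_follows_replier: bool,
                     replier_follows_author: bool,
                     prior_interactions: int) -> bool:
    ABUSIVE_TERMS = {"idiot", "trash", "loser"}  # placeholder word list
    words = reply_text.lower().split()

    # Content signal: fraction of words matching the abusive-term list.
    content_score = sum(w.strip(".,!?") in ABUSIVE_TERMS for w in words) / max(len(words), 1)

    # Relationship signal: replies from strangers are treated as riskier
    # than replies from accounts the author already interacts with.
    is_stranger = not (author_follows_replier or replier_follows_author
                       or prior_interactions > 0)
    relationship_weight = 1.5 if is_stranger else 0.5

    return content_score * relationship_weight > 0.2  # illustrative threshold


# Example: an insulting reply from a stranger is flagged,
# while the same text from a mutual follower is not.
print(should_autoblock("you are an idiot", False, False, 0))  # True
print(should_autoblock("you are an idiot", True, True, 5))    # False
```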

The feature is currently rolling out in beta to a “small feedback group” on Android, iOS, and twitter.com, with English-language settings enabled. Twitter plans to expand the group over the next few months.

Twitter clarified that autoblocks come from the platform, not from individuals. Although autoblocks last for a week, the account owner can undo them at any time.

Autoblocking can only occur if a user is in Safety Mode, but there is no limit on how long someone can stay in Safety Mode. An autoblocked account cannot see the user’s tweets, follow them, send them Direct Messages, or otherwise interact with them.

All existing replies from autoblocked accounts move to the bottom of the conversation. Even after an autoblock expires or is undone, the user can still manually block the account.

Twitter first previewed the feature during an Analyst Day presentation in February.

Twitter has long been battling abusive behaviour on its platform and has introduced several features to curb it, including the option to hide replies, controls over who can reply to a tweet, and prompts warning users before they post a potentially abusive reply.

How effective can it be?

So far, the feature is only available in English. To meaningfully curb abusive behaviour on the platform, such features need to account for languages and cultural contexts beyond the English-speaking world. The diversity of languages and cultures on Twitter has long been a hurdle for effective content moderation.

Digital rights activist Hija Kamran said, “While the new feature might help in curbing harassment on the platform, it wouldn’t be surprising if it fails to give the same protection to people receiving hate and harassment in languages other than English, and with a more local context.”

She noted that Twitter has consistently ignored calls for better content moderation on its platform, particularly in developing countries where English isn’t widely spoken or understood. She pointed to similar indifference from Facebook and Google when it comes to moderating content in local languages and local contexts.

“It will be especially significant to see how women and gender minorities in the developing countries are constantly impacted by the hate on social media platforms, and how many new tools, policies and features protect or fail to protect them,” she said.

What counts as abusive or spammy behaviour?

Users who tweet repetitive content, send unsolicited mentions or replies, use unrelated hashtags, or follow accounts indiscriminately risk being autoblocked.

“We see accounts that engage in bulk or aggressive activity as potentially abusive because it disrupts your experience,” Twitter said.

Twitter also said that it is working to improve its detection of which accounts should be autoblocked.

Twitter further stressed that interactions on the platform should be civil and non-repetitive, and advised against name-calling and insults.
