Twitter on Wednesday launched Safety Mode, a feature that temporarily and automatically blocks harmful or abusive accounts.
The feature will block, for a period of seven days, accounts that use potentially harmful language, such as insults or hateful remarks, or that send repetitive and uninvited replies or mentions, Twitter said in a statement.
The microblogging platform said its systems will assess the likelihood of a negative engagement by considering both a tweet’s content and the relationship between its author and the recipient. Existing relationships are taken into account, so accounts the user follows or frequently interacts with will not be auto-blocked, the company noted.
Once an account is auto-blocked, it will temporarily be unable to follow the user’s account, see their tweets or send them direct messages. Users can disable the safety feature at any time, Twitter added.
“Our goal is to better protect the individual on the receiving end of tweets by reducing the prevalence and visibility of harmful remarks,” Senior Product Manager Jarrod Doherty said in a statement.
The California-based social network has introduced several measures in the past to protect users from abusive accounts, including a feature to hide replies and warning prompts shown before a user tweets with “strong language”.