‘At Twitter, no one is above rules’

March 25, 2021 11:14 pm | Updated March 26, 2021 12:19 pm IST

Kathleen Reen, Senior Director, Public Policy & Philanthropy, APAC, Twitter, talks to The Hindu in an email interview about the new intermediary guidelines, the micro-blogging platform's preparedness for the upcoming Assembly elections, and navigating between freedom of expression and hate speech. Edited excerpts:

Recently, the Indian government notified new guidelines for social media intermediaries, mandating them to identify the originator of certain messages, take down content within a specific time frame, and set up grievance redressal mechanisms. What is your take on these? And what steps are you taking to be compliant?

Twitter supports a forward-looking approach to regulation that protects the Open Internet, drives universal access, and promotes competition and innovation. We believe regulation is beneficial when it safeguards citizens' fundamental rights and reinforces online freedom. We are studying the updated intermediary guidelines and engaging with a range of organisations and entities impacted by them. We deeply appreciate that engagement from government, civil society, activists and academic experts alike. We look forward to continued engagement with the Government of India and hope to strike a fair and dynamic balance between transparency, freedom of expression, and privacy for everyone using Twitter.

Many other countries also seem to be contemplating similar regulations. Your thoughts on the need to regulate social media platforms and the best way to go ahead with it?

We are in a highly dynamic global regulatory environment, and discussions around privacy, online content and self-regulation are happening all around the world. Regulating online content requires striking a careful balance between protecting people from harm and preserving human rights, including freedom of expression, privacy and procedural fairness for everyone.

The technology industry, on many fronts, has committed its support to self-regulation models for content moderation. Our wider efforts on countering violent extremism are a strong example of what industry coalitions and self-regulation can achieve together: We are members and signatories of many coalitions and organisations, including but not limited to, the Global Internet Forum to Counter Terrorism (GIFCT), the Aqaba Process, the Christchurch Call to Action, and the Australian Taskforce to Combat Terrorism and Extreme Violent Material Online. We also invested in the Global Research Network on Terrorism and Technology (GRNTT) to develop research and policy recommendations designed to prevent terrorist exploitation of technology.

Similarly, in the case of child sexual exploitation, we have strong technology coalitions to stay ahead of bad-faith actors and to ensure we're doing everything we can to remove content, facilitate investigations, and protect minors from harm — both online and offline. Our partnership with the National Center for Missing & Exploited Children (NCMEC) highlights the work we do to fight online child sexual exploitation. When we remove content, we immediately report it to NCMEC, and reports are made available to the appropriate law enforcement agencies around the world to facilitate investigations and prosecutions.

Our approach to regulation and public policy issues is centered on protecting an Open Internet that is open to all and promotes safety, inclusion, diversity, competition, and innovation. Together with governments around the world, civil society and academia, we are committed to building an adaptable future Internet that people trust, that empowers open public conversation, and that is a global force for good.

What role does Twitter see itself playing in the upcoming elections in India? What lessons have you drawn from past initiatives?

Every year is an election year on Twitter and we are committed to providing a service that fosters free and open civic discourse. We recognise our role as that of an essential service where people come for credible information. That includes where, when and how to vote, to learn about candidates and their platforms, as well as to engage in healthy civic debate and conversation — much as they did during the 2019 Lok Sabha election and previous Assembly elections.

We are seeking continuous improvement and adaptation to achieve these goals: Drawing on insights and lessons from previous elections, both in India and globally, we are implementing significant product, policy, and enforcement updates to protect and support the multilingual conversation taking place during the upcoming Assembly elections. A global cross-functional team with local, cultural and language expertise is in place and has been tasked with keeping the service safe from attempts to incite violence, abuse, and threats that could trigger the risk of offline harm.

Our goal is to make it easy to find credible information on Twitter, while limiting the spread of potentially harmful and misleading content. We have prioritised our approach to tackle misinformation based on the highest potential for harm in the context of these elections, which is why we focus on ‘Synthetic and manipulated media’ and ‘Civic integrity’.

For content to be labelled or removed under 'synthetic and manipulated media', we must have reason to believe that the media, or the context in which the media are presented, have been significantly and deceptively altered or manipulated. We will label 'synthetic and manipulated media' and link it to a 'Twitter Moment' to give people additional context, and we will surface related conversations so they can make more informed decisions about the content they want to engage with or amplify. When people attempt to retweet Tweets with a 'synthetic and manipulated media' label, they will see a prompt pointing them to credible information. These labelled Tweets won't be algorithmically recommended by Twitter, further reducing the visibility of the misleading information and encouraging people to reconsider whether they want to amplify these Tweets.

Twitter, along with other social media platforms, has been at the centre of debate over bias towards certain content or accounts. How do you address these issues, particularly in light of the upcoming elections?

At Twitter, no one is above the rules, and we enforce our policies judiciously and impartially for everyone. Our products and policies are never developed or implemented on the basis of political ideology. We use a combination of machine learning and human effort and expertise to review reports and determine whether they violate the Twitter rules.

We take a behaviour-first approach, meaning we look at how accounts behave before we review the content they are posting. Twitter’s open nature means our enforcement actions are plainly visible to the public, even when we cannot reveal the private details of individual accounts that have violated our rules.

We have also worked to build better in-app notices where we have removed Tweets for breaking our rules: We communicate with both the account that reports a Tweet and the account that posted it, providing additional detail on our actions. We will continue to improve our product, policies and processes to further earn the trust of the people using Twitter.

There is also a renewed debate about how such platforms navigate between freedom of expression and hate speech. How does Twitter negotiate between the two?

Twitter's purpose is to serve the public conversation. We believe that public conversation is at its best when as many people as possible can participate. Participation is a function of a free and Open Internet: an Internet that is global, not walled off, that does not censor critical or vulnerable voices, and that is safe and promotes diversity, competition and innovation.

We want to make sure conversations on Twitter are healthy and that people feel safe to express their points of view. We do our work recognising that free speech and safety are deeply interconnected and can sometimes be at odds. We must ensure that all voices can be heard, and we continue to make improvements to our service so that everyone feels safer participating in the public conversation. As a principle, our enforcement is judicious and impartial for everyone, regardless of their political beliefs and background.

At Twitter, we use a combination of machine learning and people with expertise to review reports and determine whether they violate the Twitter rules. We take a behaviour-first approach, meaning we look at how accounts behave before we review the content they are posting. We are also continually updating our policies, and seeking feedback, to address emerging and evolving behaviours online.

Our most recent update to our hateful conduct policy is one example of how we have cooperated with the public and experts to keep evolving our policies. In the case of hateful conduct, we prohibit inciting behaviour that targets individuals or groups belonging to protected categories.
