In the midst of the massive protests that erupted in the U.S. following the death of an African-American man in police custody, social media platform Twitter decided to do something it had avoided for several years: flag some of President Donald Trump’s tweets as containing incorrect information and as violating its policies. This prompted Mr. Trump to warn social media platforms of stricter controls via an Executive Order. It also prompted another social media giant, Facebook, to state that it did not want to take similar action since it did not want to be an ‘arbiter of truth’. These developments bring into focus how deeply the social media giants are involved in shaping public discourse, and raise questions on how these platforms should regulate content. Mishi Choudhary and Rishab Bailey discuss these issues in a conversation moderated by P.J. George. Edited excerpts:
Twitter and Facebook go to great lengths to avoid the ‘arbiter of truth’ label. But they apply their terms of use selectively to erring users. Even with Mr. Trump, Twitter did not act on his tweets for a long time. In this sense, are they not arbiters of truth already?
Mishi Choudhary (MC): I think truth is an accident on these platforms. They are actually creators of engagement, who wish to usurp all human attention. Their entire business model feeds on that. But inevitably the question will arise about whether you are for truth or against it. I think the action which prompted this entire brouhaha is Mr. Dorsey [Twitter CEO Jack] giving what I would call ‘ice in wintertime’. Saying that we’re going to look for more facts on a problematic tweet is only a label telling people who want junk food, “hey, come here, eat spinach.” But what most platforms want to say is what Mr. Zuckerberg [Facebook CEO Mark] has said; that they really don’t care. And that all they are here for is to serve their business model, which is based on reactions to posts. If they are forced to take responsibility for something, they just want to bow out and tell us that they are private platforms that [have their] own rules and regulations and community guidelines. That is why society has to decide what it wants to do, now that information distribution is in the hands of people who don’t care, and who prioritise engagement over truth.
Rishab Bailey (RB): Their behaviour as “arbiters of truth” arises out of four factors. First, the Communications Decency Act in the U.S. empowers intermediaries to make decisions regarding content, which is seen as avoiding government intervention and, therefore, protecting speech. Second, these entities are typically seen as private platforms that have the right to choose the content they host. Third, the user-platform relationship is governed by contracts that give platforms a great deal of power in deciding what they will permit. Fourth, there are practical considerations. Often, decisions need to be made in real time, such as when illegal or harmful content, including terrorist attacks or suicide attempts, is being streamed. Also, the sheer volume of content being exchanged on these platforms makes external decision making difficult or impractical.
Facebook and Twitter have a say over the self-expression of billions of people. Never before in history has so much control rested with so few people, and that too private entities. Has their clout grown too much?
RB: The power of the big technology companies can be problematic in a variety of ways. In the media business, for example, the advertising market is moving towards specific platforms. So, it is important to regulate online platforms. However, it is important to do this without hurting the benefits that the Internet has brought us, whether in promoting civil liberties or in enhancing efficiency. While there is clearly a problem with the way platforms handle content moderation and how they self-regulate, having the government involved in censorship is also far from ideal. This can lead to over-censorship or politically motivated censorship, particularly in a country with relatively weak rule-of-law standards such as India. We’ve seen, for example, how even a relatively independent body such as the CBFC [Central Board of Film Certification] has performed in India.
The primary issues appear to be a lack of transparency and accountability in decision making, and allegations of bias and discrimination. These problems are exacerbated by the monopoly that many platforms enjoy, and there has to be broader thinking on the structural problems of the digital economy. But censorship is a different issue, and it is important not to try a one-size-fits-all approach, like the generic regulations in the Indian government’s draft intermediary guidelines of December 2018. It makes better sense to regulate procedural aspects, like the German law that requires social media companies with over two million registered users in Germany to put in place processes to receive user complaints and disable access to manifestly illegal content. Companies are also required to improve their transparency mechanisms and make public disclosures of how they handle complaints. The law fines companies not for failing to remove content, but for not having robust grievance redressal mechanisms.
So we need to look at targeted or proportionate measures that clarify what intermediaries are required to do from a procedural perspective, with the aim of making them more transparent and accountable to users. This can be done through supervised self-regulation or even co-regulatory processes. Here, it is important to understand how experiments such as Facebook’s Oversight Board will work.
MC: We all just want to tweak and fix the current system itself, whether it is the liability issues under Section 79 of the Information Technology Act in India, or the counterparts in the U.S., the DMCA [Digital Millennium Copyright Act] and Section 230 of the Communications Decency Act (CDA). All of us are now discussing whether it will be antitrust, or whether something will be broken up, or whether there could be regulation of some kind. I want to say that the companies recognise that something must change. They know that the jig is up, and that the period of unregulated behaviour is over. They know that denial is no longer going to work. And that is why they are also in negotiation mode. Everyone now knows that this is not just innovation; it is also harmful. And I do not think there are simple answers such as saying we want self-regulation, or that we don’t want self-regulation. That is why Facebook has appointed an oversight board; because there has to be something more than just self-regulatory behaviour. But if you try to introduce the government into the picture, it is going to be mostly political censorship. It is also going to start a process of self-censoring, because there is fear.
I don’t have a simple answer, but I don’t think anybody does. And that is what leads us to this Executive Order from Mr. Trump.
Section 230 of the Communications Decency Act (CDA) in the U.S. has been credited with having spurred innovation in the technological sphere. Do you think taking it away will throttle innovation?
MC: Despite the fact that a vast majority of Facebook’s users are now outside the United States, the discussion is still concentrated on Section 230 and what it did in the United States for the platform companies. Now, there is definitely a feeling that the companies have gone too far to the other side, and that they are not really innovating, but are mostly acquiring new products or simply squeezing out smaller players. At the time Section 230 was enacted, we needed such a thing. But today we live in a very different world, where surveillance is the only economy or business model that the Internet knows. People are addicted to free data. People are addicted to free services in return for free spying by the commercial platform companies. This is not the world we lived in when Section 230 was enacted. Something has to shift, and everybody recognises it. But nobody quite knows where they want to land.
In India, the similar Section 79 of the IT Act has been continuously challenged since its enactment. We can start with the Avnish Bajaj case, where the industry realised you can’t have any e-commerce if you keep arresting CEOs of companies. With Section 79, the vision is the same: that we want the platform companies to do new things, to come and have new, innovative ideas in India, and we will have to protect them in some way or the other.
But we only get to talk about the larger players, like a Facebook or a Google, which can afford litigation teams. The smaller players cannot. Police harass platform companies all the time, and the smaller trial courts are filled with such cases. So, I will say it is much more complicated.
RB: Before answering the question, I think it is important to keep in mind that there are differences between the intermediary liability frameworks in India and in the U.S. Section 230 of the CDA does two things. First, it recognises that intermediaries are not publishers; second, it explicitly sets up a self-regulating system for intermediaries. In India, on the other hand, the law was primarily brought in to extend the common law of distributor’s liability to the Internet. So the law just clarifies that intermediaries are not liable for illegal third-party content if they don’t actively participate in the commission of the offence, or if they take down content once they have actual knowledge of it. Now, the original version of Section 79, when it was enacted in 2000, had language under which an intermediary basically had to show that it took all relevant measures to prevent illegal content from being exchanged. This led to the CEO of Baazee.com being arrested because users had exchanged an obscene clip online. The provision was changed in 2008, and now intermediaries just have to show that they carried out due diligence under the guidelines laid down by the government. These guidelines lay down self-regulatory frameworks that companies are required to adhere to. There is, however, no specific requirement to censor content, and so the decision on how to enforce its terms broadly remains with the intermediary. So, changing these provisions by removing or significantly diluting safe harbour protections will certainly have a negative effect on the digital ecosystem, both in terms of innovation and, importantly, in terms of civil liberties protections. It has been broadly accepted in academic literature and by courts in India and abroad that holding an intermediary responsible for third-party content could have a chilling effect on free speech. As Mishi mentioned, smaller platforms in particular could face significant harassment if protections were removed. We would then move to a system where companies are incentivised to over-censor.