What is the right way of regulating social media?

Policy discussions involving the public, and not tech solutions alone, would help fight fake news.

August 30, 2019 12:15 am | Updated 08:03 am IST

FILE PHOTO: Silhouettes of laptop and mobile device users are seen next to a screen projection of the WhatsApp logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/File Photo

The Supreme Court recently stressed the need to find a balance between the right to online privacy and the right of the state to detect people who use the web to spread panic and commit crimes. Are current regulations and the nature of Internet platforms tuned to find this balance? In a conversation moderated by Srinivasan Ramani, Arun Mohan Sukumar, Head of the Cyber Security and Internet Governance Initiative at the Observer Research Foundation, and Raman Chima, Asia Policy Director and Senior International Counsel at Access Now, take stock of the issues involved and offer some suggestions. Excerpts:

Arun, in the last few years there has been an explosion in the use of messaging apps such as WhatsApp. Concomitantly, there has been an increase in fake news and rumour-mongering leading to lynchings. Are the steps taken by WhatsApp to combat this enough or should it do more?

Arun Mohan Sukumar: When you ask what steps have been taken, we should also ask whether these are the steps we should be taking in the first place. I think many would agree that some of these problems have nothing to do with the platforms themselves and cannot be resolved by technological solutions.

Fake news is not something that has been catalysed in the digital age alone; it has been a long-standing problem. We have had very little success in trying to persuade people not to believe certain stuff. And I’m not entirely sure whether the solution to this problem necessarily lies in technology.

WhatsApp, to its credit, tried to limit forwards to five people, and that norm has been tested and piloted in other parts of the world as well. WhatsApp is looking at India not just as a booming market but also as a place where it can pilot some of these solutions and test them out in other emerging markets.

Take an uncomfortable situation developing in another part of the world: Facebook and Twitter were fairly quick to acknowledge the disinformation operations backed by the Chinese government in Hong Kong. The two platforms made their disclosures simultaneously, documenting instances where state-sponsored elements were perpetrating fake news and sophisticated disinformation campaigns against protesters in Hong Kong. That happened because, one, the extent of the commercial engagement of both these platforms in China is fairly limited. Two, there is an element of geopolitics in this which we can’t ignore. The fact is that both of these are American platforms. The orchestrated disclosure, I believe, could have had the blessings of the American government. That is the extent to which these platforms are prepared to take cognisance of fake news. In other economies, it’s quite selective.

While WhatsApp has been trying to resist this idea of message traceability, it is also trying to maintain the integrity of the platform. Many regulators in India believe that technological fixes are solutions even if they weaken end-to-end encryption. I’m not sure that is the right way to go.

Raman, while technology per se is not the problem, the virality of messages makes fake news spread very fast. Would you agree with some of the solutions that have been propounded — for example, Professor V. Kamakoti’s idea of tracing the origin of WhatsApp messages?

Raman Chima: Firstly, on virality: communication virality has been there right from the invention of the Gutenberg printing press. Mass circulation has always created tension between people in power and others.

When messaging services were introduced in India and in other emerging economies, they were not used just for messaging. They were, for many people, information discovery platforms. They often do not relate or refer to the World Wide Web: much of the information consumed in messages is images and videos that may not actually be hosted on the web. The problem, therefore, is that messaging platforms haven’t done a good job of ensuring that people have access to good, accurate information. For example, if you sign up to a messaging service, say WhatsApp, are you informed in your local language about how you could report disinformation and messages that are malicious or abusive? Sadly, the reality is that during sign-up there is not even a splash screen in the local language telling you what you can or cannot say.

Also, fact-checking websites, fake news busters and government sources don’t get the support they need to distribute their content to local users in interior areas. So the messaging service companies could do more to fight disinformation. I agree with Arun that they cannot be held liable for everything and that they shouldn’t implement technological solutions as a panacea. You mentioned the suggestions by Professor Kamakoti of IIT Madras to the Madras High Court. First of all, there is the argument that the Madras High Court should not be going into an area which is a legislative issue. Even if that is set aside, his proposals have been critiqued by other computer scientists. Professor Manoj Prabhakaran of IIT Bombay, for example, has argued against such models of imparting traceability.

Both of you seem to agree that the solution doesn’t lie in technology, and that there is no need to add an extra layer of liability for social media platforms and websites. The Shreya Singhal judgment in 2015 was along these lines, right? Some provisions on intermediary liability for published content were read down. But last year, the Ministry of Electronics and Information Technology published new draft rules for intermediaries and called for public comments. What level of liability would you set for social media platforms?

AMS: There has been a raft of litigious activity and, concurrently, fairly explosive growth in regulatory guidelines as well. These guidelines have been trying to enhance the agency of the government over technology companies. For instance, there is a debate between government and industry today about data localisation, something that will affect the working of most of these big technology companies. The fundamental tension at work is that most of the technology companies which are in the bread-and-butter business of communication are based abroad, while the consumer base of a WhatsApp or Facebook or Twitter is clearly here. WhatsApp has effectively made encrypted communication a mass-market phenomenon here, which is great for correspondence generally. But on the other hand, the government has very little agency to make these companies do what it wants in terms of adhering to certain intermediary guidelines. Of course, the reason these guidelines were lampooned was that the government imposed a high degree of liability, and takedown requirements in many cases were selectively followed. The fact is that if you were to take a step back and look, the government has very limited agency over these companies at the moment. On the one hand, there is a great deal of adoption by a wide user base, which is only increasing as Internet connectivity grows in India. And WhatsApp did not even have an office in India until very recently!

And the same thing goes for Internet shutdowns. Nobody would say that Internet shutdowns are a desirable phenomenon. But if you speak to local law enforcement agencies and district magistrates, they tell you that they have very limited avenues by which they can prevent the proliferation of malicious content on the Internet through social media platforms at a time of crisis, whether that crisis is a natural calamity or man-made. So they have resorted to shutdowns in a ham-handed fashion. Of course, you can’t justify these measures. But the fact is that at the local level or at the federal level, government officials seem to have very little agency to do what they should do.

RC: On intermediary liability, our judiciary has already recognised that making platforms liable for the content posted by users impacts free speech. The basic premise is that by making platforms liable for all the content users post, you put pressure on them to over-censor, or even perhaps to harm the privacy of users.

When Parliament legislated these provisions, there were some ambiguities over what the executive branch could regulate via rules. The rules were criticised when they were released in 2011 and, ultimately, as you mentioned, they were read down in the Shreya Singhal judgment. The court basically said that content takedowns can be demanded only via a court order or through a legal process. The government’s proposed amendments to the rules, for example the requirement that web platforms deploy self-censoring or auto-filtering of user content, could definitely fly in the face of the court’s judgment.

More importantly, on some issues such as identifying the origin of messages by breaking encryption, the government seems to be using rule-making as a way to patch things up. It would be better off holding substantive legislative policy discussions, in public, on such knotty issues. Also, as Arun says, there is a lack of agency for the government to receive information from the platforms, as there is no clear privacy law in place.

Government agencies lack sufficient agency and often take a ham-handed approach to enforcing takedowns or shutdowns. In some cases this means a total communication shutdown, as we see in Kashmir today, invoked in the ‘national interest’. What kind of mechanisms would you suggest instead of this approach?

AMS: We did a capacity-building workshop a couple of years ago with law enforcement agencies from across States. Some States clearly did better because they had, for lack of a better word, good cops. They were interested in pursuing these sorts of “finesse” measures and not merely relying on takedowns. Telangana, for instance, has a cadre of officers who dedicate themselves to preventing the propagation of fake news through channels like WhatsApp. Some of these steps require serious investment, and I am not sure all States have the capacity.

So perhaps the Prime Minister or the Centre sending a message down to folks at the district level may well produce some results. But the fact is that they resort to these ham-handed measures because they do not have any other tools.

I also agree that India often tops the list in the number of takedown requests, but it’s not the only country or the only government that is interested in data from users. Facebook’s reports indicating the number of takedown requests governments have made show that India’s are up there with those of the U.S. government or any Western European government.
