How do we find a middle ground between accountability and freedom of expression?

Earlier this week, a Delhi court issued summons to top Internet firms named in a petition for hosting “offensive and hurtful” content on the Web, social media in particular. The case has revived a controversial debate triggered this month when Union IT and Telecom Minister Kapil Sibal placed the onus of “pre-screening” content on Internet companies, or intermediaries, only to backtrack later. Critics argue that such a requirement would stifle “freedom” on the Internet as we know it.

Internet companies, in response, have taken the high road as custodians of a “free society” wanting to protect “differing views”, as long as they are legal. They have also argued that it is virtually impossible to monitor the enormous troves of data that enter their servers every second.

Those batting for these companies argue that the sheer scale involved (34 million Facebook users in India, 250 million tweets a day, and 48 hours of video uploaded to YouTube every minute) makes it “impossible” to filter content that may be offensive.

Merely a conduit?

But are Internet companies merely a conduit (some commentators have likened them to a phone network), which absolves them of any responsibility for the content they host?

On the contrary, one need only recall how Web-hosting companies chose to safeguard their economic interests over “Internet freedom”, when many Web services — cloud hosting and Web payment services — pulled the plug on WikiLeaks, apparently buckling under pressure from powerful governments.

Screening of Web content is not rocket science.

Firewalls (a popular example is China's “Great Firewall”, which blocks politically sensitive keywords) are only a small slice of the pie.

Complex algorithms

Typically, Internet companies use a set of sophisticated algorithms to offer several features and services to consumers.

For instance, when Facebook “recommends” people you are likely to know on the social network, or tells you who else is talking about a topic you have just posted on, algorithms are doing the homework for you. To offer these features, and to an extent to monetise content and usage, Internet companies (or their code) continuously monitor what we all do on the Web. By default, then, companies that offer services on the Web are continuously crawling it: software layers go through every page uploaded, reading the content with data-parsing algorithms, looking for meta tags and isolating the keywords. This information then feeds a plethora of elementary functions, for instance, assigning a page rank.
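The crawl-and-parse step described above can be sketched in a few lines. The following is a minimal illustration, using only Python's standard library, of how a crawler-style parser might pull out meta tags and crude keyword frequencies from a page; the sample HTML and the word-counting heuristic are made up for illustration, and real indexing pipelines are far more elaborate.

```python
from html.parser import HTMLParser
from collections import Counter

class PageIndexer(HTMLParser):
    """Toy parser: collects meta tags and visible text for keyword extraction."""
    def __init__(self):
        super().__init__()
        self.meta = {}          # meta name -> content
        self.words = Counter()  # crude keyword frequencies
        self._skip = 0          # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.words.update(w.lower() for w in data.split() if w.isalpha())

# Illustrative page; a crawler would fetch this over HTTP instead.
html = """<html><head>
<meta name="keywords" content="news, policy, internet">
</head><body><p>Internet policy debate continues as policy makers meet.</p></body></html>"""

indexer = PageIndexer()
indexer.feed(html)
print(indexer.meta)                   # meta tags found on the page
print(indexer.words.most_common(3))   # most frequent visible words
```

A production crawler would of course add fetching, deduplication, stemming and stop-word removal on top of this skeleton.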

So filtering content that contains offensive keywords, or combinations of words, should be easy, and by no means impossible; after all, that is what the algorithms are doing anyway. Things get a little more complicated with social media, but even sites such as Facebook already run complex algorithms that deliver features and classify your live feed using keywords, all in real time.
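The keyword-matching form of filtering the article refers to reduces to set intersection. The sketch below is a deliberately crude illustration; the blocklist entries are placeholders, and real systems pair much larger curated lists with context-aware models to limit false positives.

```python
import re

# Hypothetical blocklist: placeholder terms standing in for a curated list.
BLOCKLIST = {"badword", "offensiveterm"}

def flag_post(text, blocklist=BLOCKLIST):
    """Return the blocked words found in a post (an empty set means it passes)."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return words & blocklist

print(flag_post("A perfectly ordinary status update"))  # passes
print(flag_post("this one contains badword"))           # flagged
```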

Much bigger challenge

Filtering video, of course, is a much greater challenge. Detecting pornographic imagery has long been difficult, but research teams have reportedly developed systems that analyse pixel content to determine, to an extent, whether an image is pornographic.

This becomes more complex with moving pictures. However, there are algorithms that sample video into still images, and others that capture the audio track and convert it to text, which can then be used for filtering. Even so, experts admit that screening content for hate speech or other offensive material is very difficult and would require fresh interventions.
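The video pipeline described above (sample still frames, transcribe the audio, then screen the text) can be outlined very simply. In this sketch the frame decoding and speech-to-text steps are assumed to be done by external tools; only the scheduling and the final text screen are shown, and the blocklist term is a placeholder.

```python
def sample_frame_times(duration_s, every_s=2.0):
    """Pick the timestamps at which to grab still frames from a video.
    (Actual frame extraction would be delegated to a decoder such as FFmpeg.)"""
    t, times = 0.0, []
    while t < duration_s:
        times.append(round(t, 2))
        t += every_s
    return times

def screen_transcript(transcript, blocklist):
    """Flag a speech-to-text transcript that contains any blocked term."""
    words = set(transcript.lower().split())
    return bool(words & blocklist)

# A 10-second clip sampled every 2.5 seconds yields four frames to inspect.
print(sample_frame_times(10, every_s=2.5))
# The transcribed audio is then screened like any other text.
print(screen_transcript("an ordinary clip about cooking", {"badword"}))
```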

“It is a myth that this cannot be done. And technically speaking, this will not incur huge overheads either. Filtering content that could be offensive and hate speech is not difficult considering the sophisticated algorithms that run various functions on these very sites. Companies such as Google have done it in other countries such as China,” points out Hemanth H.M., a software engineer who specialises in networking.

Though pre-screening is “not necessary in a democratic country like India”, he says that economic interests are behind the stance taken by Internet firms. “The stakes are higher here as the user base is big, and they stand to lose people if they block certain types of content,” he asserts.

The central dilemma

Mr. Sibal's comments have brought to the fore the larger question of applying the principle of ‘reasonable restrictions' to free speech, as provided by our Constitution, on what is posted on the Web.

For those caught in the crossfire between companies with a significant commercial stake in the Web and the ham-handed efforts of public authority to stifle genuine freedom of expression, the central dilemma is this.

How do we make those who post content responsible for what they put on the Web, while not allowing the Web's potential as a democratic medium to be stifled by political authority?
