YouTube, Telegram and X (formerly Twitter) have been told to proactively remove child sexual abuse material (CSAM) from "the Indian Internet", the Ministry of Electronics and Information Technology said on Friday. The notices, the government said, warned the platforms that they would lose intermediary liability protections if they did not act, meaning the companies themselves would be open to legal action alongside users who posted CSAM.
An editor at a financial daily said the notices were sent a day after the paper had reached out to the government about such content being available on these platforms.
In a statement, a YouTube spokesperson said: “We have a zero-tolerance policy on child sexual abuse material. No form of content that endangers minors is acceptable to us. We have heavily invested in the technology and teams to fight child sexual abuse and exploitation online and take swift action to remove it as quickly as possible. In Q2 2023, we removed over 94,000 channels and over 2.5 million videos for violations of our child safety policies. We will continue to work with experts inside and outside of YouTube to provide minors and families the best protections possible.”
The other two companies did not immediately have a comment to offer on the notice.
YouTube uses an automated tool called Child Sexual Abuse Imagery (CSAI) Match to proactively weed out CSAM, and the company says on its website that it licenses this technology to other firms free of charge.
X does not appear to make any claims about what tools it uses to proactively remove CSAM. The Stanford Internet Observatory said in a report in June that it ran PhotoDNA, a tool developed by Microsoft similar to CSAI Match, to scan public content on X, and found dozens of CSAM images “bypassing safeguards that should have been in place to prevent the spread of [CSAM].”
As for Telegram, the messaging app offers end-to-end encryption for its optional 'secret chats', but not for 'channels,' which allow a user to broadcast documents, images and video to a large audience. The company running the app has complied with deletion requests from the Delhi High Court in the past. It is unclear whether Telegram uses any proactive CSAM detection technologies.
‘Safe harbour’
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 contain the grounds on which social media platforms can lose the 'safe harbour' they enjoy for content posted by users. The government cited Rule 3(1)(b) and Rule 4(4) of these Rules as such grounds.
The former says that platforms must “make reasonable efforts” to prevent users from posting content that is “paedophilic” or “harmful to child”. The latter requires large social media platforms to “endeavour to deploy technology-based measures, including automated tools … to proactively identify information that depicts … child sexual abuse.”
“Sections 66E, 67, 67A, and 67B of the [Information Technology Act, 2000] impose stringent penalties and fines for the online transmission of obscene or pornographic content,” the government added.