Google removed about 59,350 pieces of content from its social media platforms in April, following over 27,700 complaints from individual users in India, according to the company’s maiden monthly transparency report.
The report follows the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which came into force on May 26. The rules require social media platforms with more than 50 lakh users in India to publish a compliance report every month detailing the complaints received and the action taken. The platforms must also state the number of specific communication links or parts of information they have removed or disabled access to through proactive monitoring using automated tools.
In an emailed statement, a Google spokesperson said the company has a long history of providing transparency into the different types of requests it receives from around the world, and how it responds. All of these requests have been tracked and included in the company’s existing Transparency Report since 2010.
“This is the first time we will publish a monthly transparency report in accordance with the new IT Rules, and will continue to publish more details as we refine our reporting processes for India,” the spokesperson stated.
As per the report, the company received a total of 27,762 complaints from individual users located in India via designated mechanisms, relating to third-party content believed to violate local laws or personal rights on Google's significant social media intermediary (SSMI) platforms, including YouTube. This figure also includes individual user complaints accompanied by a court order.
About 96% of the complaints received were related to issues of copyright, followed by trademark (1.3%), defamation (1%), legal (1%), counterfeit (0.4%) and circumvention (0.1%).
It further stated that 59,350 removal actions were taken during April. Each unique URL in a specific complaint is counted as an individual ‘item’, and a single complaint may specify multiple items relating to the same or different pieces of content.
“...When we receive complaints from individual users regarding allegedly unlawful or harmful content, we review the complaint to determine if the content violates our community guidelines or content policies, or meets local legal requirements for removal. The figure...shows the total number of removal actions taken during the one month reporting period,” it said.
The report noted a two-month lag in reporting to allow sufficient time for data processing and validation. “In future reports, data on removals as a result of automated detection, as well as data relating to impersonation and graphic sexual content complaints received post May 25, 2021, will be included. We are committed to making improvements in the upcoming iterations of the report based on feedback from all stakeholders, including providing more granular data,” it said.
On Tuesday, another U.S.-headquartered platform, Facebook, said it would publish an interim report this week with details of content removed proactively from its platforms. A final report, to be published on July 15, would contain details of user complaints received and the action taken, along with data related to WhatsApp.