Regulating deepfakes and generative AI in India | Explained

Updated - December 04, 2023 08:14 pm IST

Published - December 04, 2023 06:57 pm IST

Why are deepfakes dangerous? Is there a legal vacuum? What are the international best practices? What should be the regulatory response according to experts?

The story so far: Last month, a video featuring actor Rashmika Mandanna went viral on social media, sparking shock and horror among netizens. The seconds-long clip, which featured Mandanna's likeness, showed a woman in a bodysuit entering a lift. The original video was of a British-Indian influencer named Zara Patel and had been manipulated using deepfake technology. Soon after, the actor took to social media to express her dismay, writing, "Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused."

Deepfakes are digital media: video, audio, and images edited and manipulated using Artificial Intelligence (AI). Because they involve hyper-realistic digital falsification, they can be used to damage reputations, fabricate evidence, and undermine trust in democratic institutions. The phenomenon has also entered political messaging, a serious concern in the run-up to the general elections next year.

Back in 2020, in the first-ever use of AI-generated deepfakes in political campaigns, a series of videos of Bharatiya Janata Party (BJP) leader Manoj Tiwari were circulated on multiple WhatsApp groups. The videos showed Tiwari hurling allegations against his political opponent Arvind Kejriwal in English and Haryanvi, before the Delhi elections. In a similar incident, a doctored video of Madhya Pradesh Congress chief Kamal Nath recently went viral, creating confusion over the future of the State government’s Laadli Behna Scheme.

Other countries are also grappling with the dangerous consequences of rapidly evolving AI technology. Recently, the presidential polls in Argentina became a testing ground for deepfake politics: Javier Milei was portrayed as a cuddly lion, while his opponent, Sergio Massa, was depicted as a Chinese communist leader. In May last year, a deepfake video of Ukrainian President Volodymyr Zelenskyy asking his countrymen to lay down their weapons went viral after cybercriminals hacked into a Ukrainian television channel.

Deepfakes and their gendered impact

Deepfakes are created by altering media (images, video, or audio) using technologies such as AI and machine learning, blurring the line between fiction and reality. Although they have clear benefits in education, film production, criminal forensics, and artistic expression, they can also be used to exploit people, sabotage elections, and spread large-scale misinformation. While editing tools like Photoshop have been in use for decades, the first use of deepfake technology is reportedly traced to a Reddit contributor who, in 2017, used publicly available AI-driven software to create pornographic content by superimposing the faces of celebrities onto the bodies of ordinary people.

Now, deepfakes can easily be generated by semi-skilled and even unskilled individuals by morphing audio-visual clips and images. "The tools to create and disseminate disinformation are easier, faster, cheaper, and more accessible than ever," the Deeptrust Alliance, a coalition of civil society and industry stakeholders, cautioned in 2020.

As deepfakes and other allied technology become harder to detect, more resources are now accessible to equip individuals against their misuse. For instance, the Massachusetts Institute of Technology (MIT) created a Detect Fakes website to help people identify deepfakes by focusing on small intricate details.

The use of deepfakes to perpetrate technology-facilitated online gendered violence has been a rising concern. A 2019 study conducted by AI firm Deeptrace found that a staggering 96% of deepfakes were pornographic, and 99% of them involved women.

Highlighting how deepfake technology is being weaponised against women, Apar Gupta, lawyer and founding director of the Internet Freedom Foundation (IFF), says, "Romantic partners utilise deepfake technology to shame women who have spurned their advances causing them psychological trauma in addition to the social sanction that they are bound to suffer."

Existing laws

India lacks specific laws to address deepfakes and AI-related crimes, but provisions under several existing laws could offer both civil and criminal relief. For instance, Section 66E of the Information Technology Act, 2000 (IT Act) applies to deepfake crimes that involve the capture, publication, or transmission of a person's images in mass media, thereby violating their privacy. Such an offence is punishable with up to three years of imprisonment or a fine of ₹2 lakh. Similarly, Section 66D of the IT Act punishes individuals who use communication devices or computer resources with malicious intent, leading to impersonation or cheating. An offence under this provision carries a penalty of up to three years of imprisonment and/or a fine of ₹1 lakh.

Further, Sections 67, 67A, and 67B of the IT Act can be used to prosecute individuals for publishing or transmitting deepfakes that are obscene or contain sexually explicit acts. The IT Rules also prohibit hosting 'any content that impersonates another person' and require social media platforms to quickly take down 'artificially morphed images' of individuals when alerted. If they fail to take down such content, they risk losing the 'safe harbour' protection, a provision that shields social media companies from regulatory liability for third-party content shared by users on their platforms.

Provisions of the Indian Penal Code, 1860 (IPC) can also be invoked for cybercrimes associated with deepfakes, including Sections 509 (words, gestures, or acts intended to insult the modesty of a woman), 499 (criminal defamation), and 153(a) and (b) (spreading hate on communal lines), among others. The Delhi Police Special Cell has reportedly registered an FIR against unknown persons in the Mandanna case by invoking Sections 465 (forgery) and 469 (forgery to harm the reputation of a party).

Apart from this, the Copyright Act, 1957 can be invoked if any copyrighted image or video has been used to create a deepfake. Section 51 prohibits the unauthorised use of any work in which another person enjoys an exclusive right.

Is there a legal vacuum?

"The existing laws are not really adequate given the fact that they were never sort of designed keeping in mind these emerging technologies," says Shehnaz Ahmed, fintech lead at the Vidhi Centre for Legal Policy in Delhi. She, however, cautions that piecemeal legislative amendments are not the solution. "There is sort of a moral panic today which has emanated from these recent high profile cases, but we seem to be losing focus from the bigger question — what should be India's regulatory approach on emerging technologies like AI?" she says.

She highlights that such a regulatory framework must be based on a market study that assesses the different kinds of harm perpetrated by AI technology. “You also need to have a very robust enforcement mechanism because it is not a question of designing laws only, you need the institutional capacity to be able to implement those laws,” she adds.

Pointing out a lacuna in the existing IT Rules, she says that they only address instances in which illegal content has already been uploaded and the resultant harm has already been suffered; instead, there should be more focus on preventive measures, for instance, making users aware that they are looking at a morphed image.

Agreeing that there is a need to revamp the existing laws, Mr. Gupta points out that the current regulations only focus on either online takedowns in the form of censorship or criminal prosecution but lack a deeper understanding of how generative AI technology works and the wide range of harm that it can cause.

"The laws place the entire burden on the victim to file a complaint. For many, the experience that they have with the local police stations is less than satisfactory in terms of their investigation, or the perpetrator facing any kind of penalty," he asserts.

Proposed reforms — Centre’s response

Following the outrage over Mandanna's deepfake video, Union Minister of Electronics and Information Technology Ashwini Vaishnaw on November 23 chaired a meeting with social media platforms, AI companies, and industry bodies where he acknowledged that "a new crisis is emerging due to deepfakes" and that "there is a very big section of society which does not have a parallel verification system" to tackle this issue.

He also announced that the government will introduce draft regulations, which will be open to public consultation, within the next 10 days to address the issue.

The rules would impose accountability on both creators and social media intermediaries. The Minister also said that all social media companies had agreed that it was necessary to label and watermark deepfakes.

However, the Minister of State for Electronics and Information Technology (MeitY) Rajeev Chandrasekhar has maintained that the existing laws are adequate to deal with deepfakes if enforced strictly. He said that a special officer (Rule 7 officer) will be appointed to closely monitor any violations and that an online platform will also be set up to assist aggrieved users and citizens in filing FIRs for deepfake crimes. An advisory was also sent to social media firms invoking Section 66D of the IT Act and Rule 3(1)(b) of the IT Rules, reminding them they are obligated to remove such content within stipulated timeframes in accordance with the regulations.

Mr. Gupta points out, “The advisory issued by the MeitY does not mean anything, it does not have the force of law. It is essentially to show some degree of responsiveness, given that there is a moral panic around generative AI sparked by the Rashmika Mandanna viral clip. It does not account for the fact that deepfakes may not be distributed only on social media platforms.”

Judicial intervention

The Delhi High Court on December 4 expressed reservations over whether it could issue any directions to rein in the use of deepfakes, pointing out that the government was better suited to address the issue in a balanced manner. A bench of Acting Chief Justice Manmohan and Justice Mini Pushkarna was considering a Public Interest Litigation (PIL) petition to block access to websites that generate deepfakes.

During the proceedings, Acting Chief Justice Manmohan remarked, "This technology is now available in the borderless world. How do you control the net? Can't police it that much. After all, the freedom of the net will be lost. So there are very important, balancing factors involved in this." Taking into consideration that the government has already taken cognisance of the issue, the Court posted the matter for further hearing on January 8.

International best practices

In October 2023, US President Joe Biden signed a far-reaching executive order on AI to manage its risks, ranging from national security to privacy. The Department of Commerce has been tasked with developing standards to label AI-generated content to enable easier detection — also known as watermarking. States such as California and Texas have passed laws that criminalise the publishing and distribution of deepfake videos that intend to influence the outcome of elections. In Virginia, the law imposes criminal penalties for the distribution of nonconsensual deepfake pornography.

Additionally, the DEEP FAKES Accountability Bill, 2023, recently introduced in Congress requires creators to label deepfakes on online platforms and to provide notifications of alterations to a video or other content. Failing to label such ‘malicious deepfakes’ would invite criminal sanction.

In January, the Cyberspace Administration of China rolled out new regulations to restrict the use of deep synthesis technology and curb disinformation. The policy ensures that any doctored content using the technology is explicitly labeled and can be traced back to its source. Deep synthesis service providers are required to abide by local laws, respect ethics, and maintain the ‘correct political direction and correct public opinion orientation.’

The European Union (EU) has strengthened its Code of Practice on Disinformation to ensure that social media giants like Google, Meta, and Twitter start flagging deepfake content or potentially face multi-million dollar fines. The Code was initially introduced as a voluntary self-regulatory instrument in 2018 but now has the backing of the Digital Services Act which aims to increase the monitoring of digital platforms to curb various kinds of misuse. Further, under the proposed EU AI Act, deepfake providers would be subject to transparency and disclosure requirements.

The road ahead

According to Mr. Gupta, AI governance in India cannot be restricted to just a law and reforms have to be centered around establishing standards of safety, increasing awareness, and institution building. “AI also provides benefits so you have to assimilate it in a way that improves human welfare on every metric while limiting the challenges it imposes,” he says.

Ms. Ahmed points out that India's regulatory response cannot be a replica of laws in other jurisdictions such as China, the US, or the EU. "We also have to keep in mind the Indian context which is that our economy is still sort of developing. We have a young and thriving startup ecosystem and therefore any sort of legislative response cannot be so stringent that it impedes innovation," she says.

She says that we could also learn from other sectors, proposing that such a law “should perhaps also have a provision for some innovative policy tools like regulatory sandboxes — this is something that works for the financial sector. It is a framework that allows companies and startups to innovate and also helps the legislature to design laws.” There should also not be any curtailment of free speech under the garb of regulating AI technology, she further outlines.
