On November 6, Indian actor Rashmika Mandanna tweeted, “[..] if this happened to me when I was in school or college, I genuinely can’t imagine how could I ever tackle this.” Her tweet was in response to a viral deepfake video that showed her face on another woman’s body.
Made with AI tools, these morphed media samples - or deepfakes - can take the form of videos such as the one Mandanna condemned, sexually explicit photos of real people (as in the case of musician Taylor Swift), or even fake audio clips of victims seemingly asking people to do harmful things (as in the case of U.S. President Joe Biden).
The availability of free AI tools means almost anyone can create a deepfake video. Powered by accessible generative AI models, deepfakes are no longer messy cut-and-paste edits one dismisses at first glance, but are so realistic they could fool even experts.
But what can you do if a deepfake video features you? Fear and distress are natural responses, but it is crucial to stay calm and remember that legal protections are available.
Whether the videos were made with images you privately shared, images of you taken from public forums, or intimate media stolen from your phone or cloud-based storage, you can file a complaint with the cyber crime division.
If you are under 18, reach out to a safe and supportive adult - such as a parent or a teacher - to let them know and get their help in reporting the deepfake. You can also reach out to social media companies or the police directly.
The complaint framework
Deepfakes are treated as morphed photos, which are already covered under the IT Rules, 2021. Affected users can approach a social media company’s grievance officer to have the deepfakes removed.
“All you have to do is flag the content and submit the offending URLs, and the intermediary in question will have to take expeditious action and disable access to such content,” Radhika Roy, litigation counsel at the Internet Freedom Foundation (IFF), said.
For example, if a deepfake featuring you was first shared on platform X (formerly Twitter) before being cross-posted to Instagram and Facebook, you should reach out to X’s grievance officer as well as Meta’s officer.
Per Rule 3(2)(b) of the IT Rules, 2021, the intermediaries - here, the social media companies - have 24 hours to pull down the offending material after receiving a complaint, Roy said.
Links through which to report a deepfake
- Meta Grievance Officer
- X (Twitter) Grievance Officer
- Google (and YouTube) Grievance Officer
- National Cybercrime Portal
Affected people can approach the Cyber Crime Cell of their local police station, and/or report the content on the National Cybercrime Reporting Portal online. One can also lodge a complaint against those circulating the images, through certain provisions of the Information Technology Act, 2000, Roy added.
While several Indian celebrities have already spoken out against deepfakes, so the police should be familiar with this form of cyber crime, Roy urges affected persons to be ready with a definition in case they have to explain the issue.
How to explain a deepfake to the police?
Know your rights
“Additionally, you must let them [police] know about the IT Rules and how such content is prohibited. Most importantly, while filing a complaint, refer to your rights under Sections 66E, 67, 67A, 67B of the IT Act. These provisions stipulate punishments for violation of privacy, for impersonation for the purpose of cheating, for publishing or transmitting obscene/sexually explicit content,” Roy said.
Important sections of the IT Act
“While all of this seems very straightforward, actual implementation takes time and can be a bit harrowing. This definitely changes for marginalised communities due to structural impediments in just how our society functions - right from being a victim of such content to even getting a complaint lodged and taken seriously,” Roy explained.
A person being targeted with deepfakes might naturally wonder if they will be named in the news and if the deepfake will be published. This depends on the nature of the deepfake, and whether it involves sexually abusive content. Under the Indian Penal Code (IPC), victims of sexual offences have a right to anonymity. The same could apply to a person who is featured in an abusive or sexually explicit deepfake, according to Roy.
But despite the available protections, the Indian government is also struggling to update its regulations amidst surging deepfake incidents.
“The IT Ministry had also come out with an advisory to nominate a ‘Rule 7’ officer who would assist the aggrieved to file an FIR against intermediaries and the offender, but there has been no movement on that front,” explained Roy.
“Additionally, that is also an incorrect interpretation of the provision and nominating an officer for the same would be beyond the contours of the law. However, the MoS [Union Minister of State for Electronics and IT, Rajeev Chandrasekhar] himself has encouraged the filing of FIRs at the nearest police station if one is aggrieved by deepfakes,” she added.
How are deepfakes made?
The ubiquity of deepfake making
A few seconds of Googling turns up not one but entire pages of websites claiming to face-swap photos and videos for free or at nominal prices, to create convincing deepfakes. While some platforms specialise in celebrity deepfakes, others let users make videos with their own images or the faces of others.
One particular website checked by The Hindu claimed that it would not store users’ data, would not watermark the resulting videos, and would not even use content filters. This means someone could use the tool to create child abuse material or media with non-consensual nudity in a way that would make it harder for the police or cybersecurity experts to find the offender. The same site even accepted cryptocurrency payments, which are difficult to trace.
The face-swapping website seen by The Hindu used a demo video in which a partially nude model in a tight swimsuit had her face switched with another person’s. An app version of the same deepfake maker was available for download on both the Google and Apple app stores.
Experts have pointed out how deepfakes are often used to specifically attack female-presenting people, by undressing them or putting them in explicit situations to elicit shame and objectify them. In fact, a simple Google search for celebrity deepfakes showed websites that openly claimed to host “deepfake pornography” featuring female Bollywood and Hollywood celebrities.
Recovering and moving on
If you are ever a target of digital violence, know that you are not alone and that many have spoken up before you in order to make the reporting process easier.
After you have made your report to the social media companies and the police, it is important to process the traumatising experience and ask for professional help where needed.
If you experience debilitating fear, panic attacks, flashbacks, nightmares, anxiety, or just find it impossible to return to your normal schedule, reach out to a doctor or a therapist. Setting boundaries online and upgrading your data security can also help you move forward with confidence.
In case you are unable to seek out therapy, stay in touch with respectful friends or family members who are ready to listen and offer their support.