The story so far: On 8 November, the Indian government instructed “social media intermediaries” to remove morphed videos or deepfakes from their platforms within 24 hours of a complaint being filed, in accordance with a requirement outlined in the IT Rules 2021. The instructions came as deepfake videos of actors Rashmika Mandanna and Katrina Kaif surfaced online within the span of one week.
What are deepfakes?
Deepfakes have been around since 2017 and refer to videos, audio or images created using a form of artificial intelligence called deep learning. The term became popular when a Reddit contributor used publicly available AI-driven software to impose the faces of celebrities onto the bodies of people in pornographic videos.
Fast forward to 2023: deepfake tech, aided by readily available AI tools, allows even semi-skilled and unskilled individuals to create convincing fake content by morphing audio-visual clips and images.
Researchers at Cyfirma, a cybersecurity company, have observed a 230% increase in deepfake usage by cybercriminals and scammers, and predict the technology will replace phishing within a couple of years.
Deepfake tech can also be used to create fictional material from scratch, unlike the morphing of an existing video seen in Rashmika Mandanna’s case.
How does deepfake technology work?
The technology involves modifying or creating images and videos using a machine learning technique called generative adversarial network (GAN). The AI-driven software detects and learns the subjects’ movements and facial expressions from the source material and then duplicates these in another video or image.
To make a deepfake as close to real as possible, creators use a large database of source images, which is why public figures, celebrities and politicians are the most common targets. One piece of software uses this dataset to generate a fake video, while a second piece of software tries to detect signs of forgery in it. Through the interplay of the two programs, the fake video is refined until the detector can no longer spot the forgery. This is a form of “unsupervised learning”, in which machine-learning models teach themselves, and it makes the resulting deepfakes difficult for other software to identify.
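The generator-versus-detector loop described above can be sketched in miniature. The toy below (an illustrative sketch, not a real deepfake system) trains a one-parameter “generator” against a “discriminator” on 1-D numbers using NumPy: a real deepfake GAN pits deep neural networks against each other on images, but the adversarial dynamic is the same. All names and parameter values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a 1-D Gaussian the generator must learn to mimic.
REAL_MEAN, REAL_STD = 4.0, 1.25

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b turns random noise z into fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0

lr, batch = 0.05, 128
for step in range(5000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w, c = w - lr * grad_w, c - lr * grad_c

    # Generator update: nudge fakes so the discriminator scores them as real.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    err = (sigmoid(w * fake + c) - 1) * w  # gradient of -log D(fake) w.r.t. fake
    a, b = a - lr * np.mean(err * z), b - lr * np.mean(err)

# After training, the generator's output should sit near the real distribution.
samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(f"fake mean = {samples.mean():.2f} (real mean = {REAL_MEAN})")
```

By the end of training, the generator produces numbers centred near the real mean, at which point the discriminator can no longer reliably tell real from fake; this is the “rendered until the forgery is undetectable” stopping point the article describes.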
What do laws in India say about deepfakes?
Deepfakes are fast becoming a problem and are used by threat actors to spread misinformation online. However, there are laws that can be invoked to deter threat actors from creating such videos. India’s IT Rules, 2021 require that all content reported to be fake or produced using deepfake tech be taken down by intermediary platforms within 36 hours.
Since the deepfake videos of Rashmika Mandanna went viral, the Indian IT ministry has also issued notices to social media platforms stating that impersonating someone online is illegal under Section 66D of the Information Technology Act of 2000. The IT Rules, 2021 also prohibit hosting any content that impersonates another person and require social media firms to take down artificially morphed images when alerted.
Why do people create deepfake content?
Apart from morphed images or videos made in jest, deepfakes have also been used to create pornographic content. The technology could potentially be used to incite political violence, sabotage elections, unsettle diplomatic relations, and spread misinformation. It can also be used to humiliate and blackmail people, or to attack organisations by presenting false evidence against leaders and public figures.
However, as is the case with all new tech, deepfakes have positive uses as well. The ALS Association, for instance, has collaborated with a company on voice-cloning technology to help people with ALS digitally recreate their voices in the future.
How have other countries reacted to the threat from deepfakes?
Different countries around the globe have passed legislation to curb the misuse of deepfake tech. The EU has issued guidelines for the creation of an independent network of fact-checkers to help analyse the sources and processes of content creation. The EU’s code also requires tech companies, including Google, Meta, and X, to take measures to counter deepfakes and fake accounts on their platforms.
China has issued guidelines to service providers and users to ensure that any doctored content using deepfake tech is explicitly labelled and can be traced back to its source.
The United States of America has also introduced the bipartisan Deepfake Task Force Act to counter deepfake technology.