The story so far:
The Cyberspace Administration of China, the country’s cyberspace watchdog, is rolling out new regulations, effective January 10, to restrict the use of deep synthesis technology and curb disinformation. Deep synthesis is defined as the use of technologies, including deep learning and augmented reality, to generate text, images, audio and video and to create virtual scenes. One of the most notorious applications of the technology is deepfakes, in which synthetic media is used to swap the face or voice of one person for another. Deepfakes are becoming harder to detect as the technology advances, and are used to generate celebrity porn videos, produce fake news and commit financial fraud, among other wrongdoings. Under China’s new rules, companies and platforms using the technology must first obtain consent from individuals before editing their voice or image.
What is a deepfake?
Deepfakes are synthetic media, artificial images and audio stitched together with machine-learning algorithms, that replace a real person’s appearance, voice, or both with a convincing artificial likeness, often to spread misinformation. The technology can create people who do not exist, and it can make real people appear to say and do things they never said or did.
The term deepfake originated in 2017, when an anonymous Reddit user who called himself “Deepfakes” used Google’s open-source deep-learning technology to create and post pornographic videos. The videos were doctored with a technique known as face-swapping, in which the user replaced real faces with celebrity faces. Deepfake technology is now being used for nefarious purposes such as scams and hoaxes, celebrity pornography, election manipulation, social engineering, automated disinformation attacks, identity theft and financial fraud, cybersecurity company Norton said in a blog.
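Schematically, the face-swapping technique described above is often built on a pair of autoencoders that share one encoder: the encoder learns identity-agnostic structure (pose, expression, lighting) while each decoder learns to reconstruct one specific face. The sketch below is a minimal, untrained illustration of that architecture; the dimensions and random weights are placeholders, not a working deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a "face" here is a flattened 8x8 grayscale patch.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder, two identity-specific decoders (untrained placeholders).
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.1
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1  # reconstructs person A
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1  # reconstructs person B

def encode(face: np.ndarray) -> np.ndarray:
    # Shared encoder: compresses any face to an identity-agnostic latent code.
    return np.tanh(W_enc @ face)

def decode(latent: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    # Identity-specific decoder: expands the latent code back to a face.
    return W_dec @ latent

# The swap: encode a frame of person A, then decode with B's decoder,
# yielding B's face in A's pose. Real systems train these weights on
# thousands of frames of each person before the swap looks convincing.
frame_of_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)
```

The key design point is the shared encoder: because both decoders read the same latent space, a code extracted from person A is a valid input to person B’s decoder, which is what makes the swap possible.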
Deepfake technology has been used to impersonate notable personalities like former U.S. Presidents Barack Obama and Donald Trump, India’s Prime Minister Narendra Modi, Facebook chief Mark Zuckerberg and Hollywood celebrity Tom Cruise, among others.
What is China’s new policy to curb deepfakes?
The policy requires deep synthesis service providers and users to ensure that any content doctored with the technology is explicitly labelled and can be traced back to its source, the South China Morning Post reported. The regulation also requires anyone using the technology to edit a person’s image or voice to notify and obtain the consent of that person. News generated with the technology may be reposted only if it comes from the government-approved list of news outlets. Deep synthesis service providers must also abide by local laws, respect ethics, and maintain the “correct political direction and correct public opinion orientation”, according to the new regulation.
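One way a provider could meet the labelling-and-traceability requirement described above is to attach a disclosure record to each piece of synthetic media, keyed to a cryptographic hash of the file so the label is bound to those exact bytes. This is a hedged sketch: the field names and scheme are illustrative assumptions, not part of the Chinese regulation or any published technical standard.

```python
import hashlib
import json

def label_synthetic_content(content: bytes, provider: str, model: str) -> dict:
    """Build a disclosure record for a piece of synthetic media.

    The SHA-256 digest ties the label to the exact bytes of the file,
    so tampering with the media invalidates the record. All field
    names here are hypothetical.
    """
    return {
        "synthetic": True,                               # explicit disclosure label
        "provider": provider,                            # who ran the synthesis service
        "model": model,                                  # which model produced it
        "sha256": hashlib.sha256(content).hexdigest(),   # provenance fingerprint
    }

record = label_synthetic_content(b"<video bytes>", "example-provider", "faceswap-v1")
print(json.dumps(record, indent=2))
```

A verifier who receives the media and the record can recompute the hash and check that it matches, which gives the traceability the regulation asks for without altering the media itself.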
Why has such a policy been implemented?
China’s cyberspace watchdog said it was concerned that unchecked development and use of deep synthesis could lead to its use in criminal activities such as online scams or defamation, according to a report by the South China Morning Post. The move aims to curb risks arising from platforms that use deep learning or virtual reality to alter online content. If successful, China’s new policies could set an example and lay down a policy framework for other nations to follow.
What are other countries doing to combat deepfakes?
The European Union has an updated Code of Practice to stop the spread of disinformation through deepfakes. The revised Code requires tech companies, including Google, Meta and Twitter, to take measures to counter deepfakes and fake accounts on their platforms, and gives them six months to implement those measures once they have signed up to the Code. Non-compliant companies can face fines of up to 6% of their annual global turnover, according to the updated Code. Introduced in 2018, the Code of Practice on Disinformation was the first instrument worldwide to bring industry players together to commit to countering disinformation.
The Code of Practice was signed in October 2018 by online platforms Facebook, Google, Twitter and Mozilla, as well as by advertisers and other players in the advertising industry. Microsoft joined in May 2019, and TikTok signed in June 2020. An assessment of the Code, however, revealed important gaps, and the Commission issued guidance on updating and strengthening the Code to bridge them. The revision process was completed in June 2022.
In July last year, the U.S. introduced the bipartisan Deepfake Task Force Act to assist the Department of Homeland Security (DHS) in countering deepfake technology. The measure directs the DHS to conduct an annual study of deepfakes: assess the technology used, track its use by foreign and domestic entities, and propose available countermeasures.
Some U.S. states, such as California and Texas, have passed laws criminalising the publishing and distribution of deepfake videos intended to influence the outcome of an election. Virginia imposes criminal penalties on the distribution of nonconsensual deepfake pornography.
India has no specific legal rules against the use of deepfake technology. However, misuse of the technology can be addressed under existing laws covering copyright violation, defamation and cybercrime.
What role can Canada play in countering deepfakes?
While Canada does not have any regulations to tackle deepfakes, it is uniquely positioned to lead the effort to counter them. Some of the most cutting-edge AI research is being conducted within Canada by the government alongside a number of domestic and foreign actors. Canada is also a member of, and a leader in, many related multilateral initiatives, such as the Paris Call for Trust and Security in Cyberspace, the NATO Cooperative Cyber Defence Centre of Excellence and the Global Partnership on Artificial Intelligence, and it can use these forums to coordinate with global and domestic actors on deepfake policy.