Countering deepfakes, the most serious AI threat

Disinformation and hoaxes have evolved from mere annoyance into high-stakes information warfare, creating social discord, increasing polarisation, and in some cases even influencing election outcomes. Deepfakes are a new tool for spreading computational propaganda and disinformation at scale and with speed.

Access to commodity cloud computing, algorithms, and abundant data has created a perfect storm that has democratised the creation and manipulation of media. Deepfakes, a form of synthetic media, are digital video, audio, and images manipulated or generated using Artificial Intelligence.

A cyber Frankenstein

Synthetic media can create possibilities and opportunities for all people, regardless of who they are, where they are, and how they listen, speak, or communicate. It can give people a voice, purpose, and ability to make an impact at scale and with speed. But as with any new innovative technology, it can be weaponised to inflict harm.

Deepfakes, hyper-realistic digital falsifications, can inflict damage on individuals, institutions, businesses, and democracy. They make it possible to fabricate media, to swap faces, alter lip movements and speech, and puppeteer a subject's body, mostly without consent, threatening psychological well-being, security, political stability, and business continuity. Nation-state actors with geopolitical aspirations, ideological believers, violent extremists, and economically motivated enterprises can manipulate media narratives using deepfakes with ease and at unprecedented reach and scale.

Targeting women

The very first malicious use of deepfakes was in pornography, inflicting emotional and reputational harm on victims and, in some cases, inciting violence against them. Deepfake pornography almost exclusively targets women; it can threaten, intimidate, and inflict psychological harm, and it reduces women to sexual objects.

Deepfakes can depict a person indulging in antisocial behaviour or saying vile things they never did. These fabrications can have severe implications for their reputation, sabotaging their professional and personal lives. Even if the victim can debunk the fake through an alibi or otherwise, the correction may come too late to remedy the initial harm. Malicious actors can also exploit unwitting individuals, using audio and video deepfakes to defraud them for financial gain or to extract money, confidential information, or favours.

Deepfakes can cause short-term and long-term social harm and accelerate the already declining trust in news media. Such erosion can contribute to a culture of factual relativism, fraying the increasingly strained fabric of civil society. Distrust in social institutions is perpetuated by the democratised nature of information dissemination and by social media platforms' financial incentives: falsity is profitable and spreads faster and farther than truth on social platforms. Combined with this distrust, existing biases and political disagreements help create echo chambers and filter bubbles, sowing discord in society.

Imagine a deepfake of a community leader denigrating a religious site of another community. It could trigger riots that, along with property damage, cost lives and livelihoods. A deepfake could also act as a powerful tool for a nation-state seeking to undermine public safety and create uncertainty and chaos in a target country. Insurgent groups and terrorist organisations can use deepfakes to show their adversaries making inflammatory speeches or engaging in provocative actions, stirring up anti-state sentiment among the people.

Undermining democracy

Deepfakes can also alter democratic discourse, undermine trust in institutions, and impair diplomacy. False information about institutions, public policy, and politicians, powered by a deepfake, can be exploited to spin a story and manipulate belief.

A deepfake of a political candidate can sabotage their image and reputation. A well-executed deepfake, released a few days before polling, showing a candidate spewing racial epithets or indulging in an unethical act, can damage their campaign; there may not be enough time to recover even after effective debunking. Voters can be confused and elections disrupted. A high-quality deepfake can inject compelling false information that casts a shadow of illegitimacy over the voting process and election results.

Deepfakes contribute to factual relativism and enable authoritarian leaders to thrive. For authoritarian regimes, they are a tool to justify oppression and disenfranchise citizens. Leaders can also use them to stoke populism and consolidate power. Deepfakes can become a very effective tool for sowing the seeds of polarisation, amplifying division in society, and suppressing dissent.

Another concern is the liar's dividend: an undesirable truth can be dismissed as a deepfake or fake news. Leaders may weaponise deepfakes, using fake-news and alternative-facts narratives to dismiss an authentic piece of media and the truth it conveys.

Major solutions

To defend the truth and secure freedom of expression, we need a multi-stakeholder and multi-modal approach. Collaborative actions and collective techniques across legislative regulations, platform policies, technology intervention, and media literacy can provide effective and ethical countermeasures to mitigate the threat of malicious deepfakes.

Media literacy for consumers and journalists is the most effective tool to combat disinformation and deepfakes. Media literacy efforts must be enhanced to cultivate a discerning public. As consumers of media, we must have the ability to decipher, understand, translate, and use the information we encounter (https://bit.ly/2HFlUs8). Even a short intervention that teaches media understanding, and the motivations and context behind what we see, can lessen the damage. Improving media literacy is a precursor to addressing the challenges presented by deepfakes.

Meaningful regulations, shaped by collaborative discussion among the technology industry, civil society, and policymakers, can disincentivise the creation and distribution of malicious deepfakes. We also need easy-to-use and accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources.

Deepfakes can create possibilities for all people, irrespective of their limitations, by augmenting their agency. However, as access to synthetic media technology increases, so does the risk of exploitation. Deepfakes can be used to damage reputations, fabricate evidence, defraud the public, and undermine trust in democratic institutions.

To counter the menace of deepfakes, we must all take responsibility: be critical consumers of media on the Internet, pause and think before we share on social media, and be part of the solution to this infodemic.

Ashish Jaiman is the Director of Technology and Operations in the Customer Security and Trust organization at Microsoft, focusing on the Defending Democracy Program

Printable version | Dec 2, 2020 7:03:31 AM | https://www.thehindu.com/opinion/lead/countering-deepfakes-the-most-serious-ai-threat/article32966578.ece