Disinformation and hoaxes have evolved from a mere annoyance to a weapon of warfare that can create social discord, increase polarisation, and, in some cases, even influence election outcomes. Nation-state actors with geopolitical aspirations, ideological believers, violent extremists, and economically motivated enterprises can manipulate social media narratives with ease and at unprecedented reach and scale. The disinformation threat now has a new tool in the form of deepfakes.
What are deepfakes?
Deepfakes are digital media (video, audio, and images) edited and manipulated using Artificial Intelligence: hyper-realistic digital falsification. Deepfakes are often created to inflict harm on individuals and institutions. Access to commodity cloud computing, publicly available AI research algorithms, and an abundance of data and media have created a perfect storm, democratising the creation and manipulation of media. Such synthetic media content is referred to as deepfakes.
Artificial Intelligence (AI)-generated synthetic media, or deepfakes, have clear benefits in certain areas, such as accessibility, education, film production, criminal forensics, and artistic expression. However, as access to synthetic media technology increases, so does the risk of exploitation. Deepfakes can be used to damage reputations, fabricate evidence, defraud the public, and undermine trust in democratic institutions. All this can be achieved with fewer resources, at scale and speed, and can even be micro-targeted to galvanise support.
Who are the victims?
The first cases of malicious use of deepfakes were detected in pornography. According to a report by Sensity AI, 96% of deepfakes are pornographic videos, with over 135 million views on pornographic websites alone. Deepfake pornography exclusively targets women. Pornographic deepfakes can threaten, intimidate, and inflict psychological harm. They reduce women to sexual objects, causing emotional distress and, in some cases, financial loss and collateral consequences such as job loss.
A deepfake can depict a person indulging in antisocial behaviour or saying vile things they never did. Even if the victim can debunk the fake with an alibi or otherwise, that fix may come too late to remedy the initial harm.
Deepfakes can also cause short-term and long-term social harm and accelerate the already declining trust in traditional media. Such erosion can contribute to a culture of factual relativism, fraying the increasingly strained fabric of civil society.
Deepfakes can serve as a powerful tool for a malicious nation-state seeking to undermine public safety and create uncertainty and chaos in a target country. They can also undermine trust in institutions and diplomacy.
Deepfakes can be used by non-state actors, such as insurgent groups and terrorist organisations, to depict their adversaries making inflammatory speeches or engaging in provocative actions, stirring anti-state sentiment among the public.
Another concern arising from deepfakes is the liar's dividend: an undesirable truth is dismissed as a deepfake or fake news. The mere existence of deepfakes lends credibility to denials. Leaders may weaponise deepfakes, using fake-news and alternative-facts narratives to dismiss genuine media and the truth it conveys.
What is the solution?
Media literacy efforts must be enhanced to cultivate a discerning public; media literacy among consumers is the most effective tool to combat disinformation and deepfakes.
We also need meaningful regulation, developed through collaborative discussion among the technology industry, civil society, and policymakers, to craft legislative solutions that disincentivise the creation and distribution of malicious deepfakes.
To counter the menace of deepfakes, we must all take responsibility for being critical consumers of media on the Internet, pause and think before we share on social media, and be part of the solution to this 'infodemic'.