Many elections, AI’s dark dimension

With a series of elections to be held across the world in 2024, the potential of AI to disrupt democracies cannot be dismissed

March 18, 2024 01:02 am | Updated 08:34 am IST

‘The year 2024 would be a test case as to whether AI’s newer models could alter electoral behaviours and verdicts’


The rapid development of Artificial Intelligence (AI) models suggests that we are at an inflection point in the history of human progress. The speed at which newer capabilities are emerging suggests that the day is not far off when Generative Artificial Intelligence (GAI) gives way to Artificial General Intelligence (AGI), which can mimic the capabilities of human beings. Such a development could revolutionise our ideas about what to expect from machines. Breakthroughs in the AI domain will open a new chapter in human existence, including in the way people react to both facts and falsehoods.

The potential of AI is already clear. Many, such as Sam Altman of OpenAI in the United States, believe that it is the most important technology in history. AI protagonists further believe that AI is set to turbocharge, and dramatically improve, the standard of living of millions of human beings. It is, however, unclear as of now whether, as many Doomsday sayers aver, AI would undermine human values, and whether advanced AI could pose ‘existential risks’.

AI and the electoral landscape

With the seven-phase general election in India having been announced, to be held from April 19 to June 1, 2024, political parties and the electorate cannot afford to ignore the AI dimension. This year, elections are also scheduled to be held (according to some reports) in as many as 50 other countries across the globe, including Mexico, the United Kingdom (where, by law, the last possible date for a general election is January 28, 2025) and the United States.

These elections are set to alter the fate of millions of people, and policymakers and the electorate need to ponder over the positive and negative impacts of this new technology. Rapid technological breakthroughs in AI (especially its latest manifestation, Generative AI, which provides dynamic simulations and mimics real-world interactions) carry their own burdens. It may be too early to fully contemplate the possible impact of AGI, that is, AI systems that simulate the capabilities of human beings, but all this is indicative of yet another dimension to electoral dynamics that cannot be ignored.

It may, hence, not be wrong to consider the elections of 2024 as a curtain-raiser to whether AI and its offerings (such as Generative AI) will prove to be a game changer. The world is by now aware that AI models such as ChatGPT, Gemini and Copilot are being employed in many fields, but 2024 will be a test case of whether AI’s newer models can alter electoral behaviours and verdicts as well. The good news, perhaps, is that those wishing to employ Generative AI to try and transform the electoral landscape do not have adequate time to fine-tune their AI models. It would, however, still be a mistake to underestimate the extent to which AI could impact the electoral landscape this time. What does not happen in 2024 may well happen in the next round of elections, both in India and worldwide.

A recently published Pew survey (if it can be treated as reliable) indicates that a majority of Indians support ‘authoritarianism’. Those employing AI could well have a field day in such a milieu, further confusing the electorate. As it is, many people are already referring to the elections of 2024 worldwide as the ‘Deep Fake elections’, with the fakes created by AI software. Whether or not this is wholly true, the Deep Fake syndrome appears inevitable, given that each new election lends itself to ever newer techniques of propaganda, all aimed at confusing and confounding the electorate.

Tackling AI ‘determinism’

AI technology makes it easier to amplify falsehoods and entrench mistaken beliefs. Disinformation is hardly a new methodology, and has been employed in successive elections previously. What is new is that sophisticated AI tools can confuse the electorate to an extent not previously known or even envisaged. The use of AI models to produce reams of misinformation, apart from disinformation, accompanied by near-realistic images of things that do not exist, will be a whole new experience. What can be said with some degree of certainty is that in 2024, the quality and quantity of disinformation are set to overwhelm the electorate, and that the vast majority of such information will be incorrect. Hyper-realistic Deep Fakes employed to sway voters, and micro-targeting, are set to scale new heights.

The potential of AI to disrupt democracies is, thus, very considerable. Simply being aware of the disruptive nature of AI and AI fakes is not enough. It may be necessary, for democracies in particular, to prevent such tactics from distorting the ‘thought behaviour’ of the electorate. AI-deployed tactics will tend to make voters more mistrustful, and it is important to introduce checks and balances that would obviate efforts at AI ‘determinism’. Notwithstanding all this, and while being mindful of the potential of AGI, panic is not warranted: safeguards are available that could be employed to negate some of AI’s more dangerous attributes.

The wide publicity given to a spate of recent inaccuracies associated with Google’s AI models is a timely reminder that AI and AGI cannot be trusted in every circumstance. There has been public wrath worldwide, including in India, over these models’ portrayal of persons and personalities in a malefic manner, mistakenly or otherwise. These episodes reflect the dangers of ‘runaway’ AI.


Inconsistencies and undependability still stalk many AI models and pose inherent dangers to society. As AI’s potential and usage increase in geometric proportion, threat levels are bound to go up. As of now, even as the potential of AI remains very considerable, it tends to be undependable; more so, its ‘mischief potential’ cannot be ignored.

As nations increasingly depend on AI solutions for their problems, it is again important to recognise what many AI experts label as AI’s ‘hallucinations’. In simple terms, what these experts are implying is that ‘hallucinations’ make it hard to accept and endorse AI systems in many instances. What they further imply, especially in the case of AGI, is that it tends at times to make things up in order to solve new problems. Such outputs are often probabilistic in character and cannot be accepted ipso facto as accurate. The implication of all this is that too much reliance on AI systems at this stage of development may be problematic. The stark reality, though, is that there is no backtracking from what AI or AGI promises, even if the results are less dependable than one would like.

We also cannot afford to ignore other existential threats associated with AI. The dangers on this account pose an even greater threat than harm arising from bias in design and development. There are real concerns that AI systems oftentimes tend to develop certain inherent adversarial capabilities, and suitable concepts and ideas have not yet been developed to mitigate them. The main types of adversarial capability, overshadowing other inbuilt weaknesses, are: ‘poisoning’, which typically degrades an AI model’s ability to make relevant predictions; ‘backdooring’, which causes the model to produce inaccurate or harmful results; and ‘evasion’, which results in a model misclassifying malicious or harmful inputs, detracting from its ability to perform its appointed role. There are possibly other problems as well, but it may be too early to enumerate them with any degree of certainty.
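To make the first of these concrete, here is a minimal, invented sketch of ‘poisoning’ in its simplest form (a toy example, not drawn from any real system or attack described above): an attacker who can tamper with a model’s training labels can wreck its predictions without touching the model’s code at all.

```python
# Toy illustration of training-data "poisoning" (all data and names invented):
# a nearest-centroid classifier is trained twice, once on clean labels and
# once on labels partly flipped by a hypothetical attacker.

def train_centroids(points, labels):
    """Compute the mean of each class's points (the toy 'model')."""
    centroids = {}
    for lbl in set(labels):
        members = [p for p, l in zip(points, labels) if l == lbl]
        centroids[lbl] = sum(members) / len(members)
    return centroids

def predict(centroids, point):
    """Assign the point to the class whose centroid is nearest."""
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - point))

def accuracy(centroids, points, labels):
    hits = sum(predict(centroids, p) == l for p, l in zip(points, labels))
    return hits / len(points)

# Two well-separated classes: class 0 clusters near 0.0, class 1 near 10.0.
train_x = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
train_y = [0, 0, 0, 1, 1, 1]
test_x, test_y = [0.5, 1.5, 8.5, 9.5], [0, 0, 1, 1]

clean_model = train_centroids(train_x, train_y)

# The attacker flips four of the six training labels (indices 0, 1, 3, 4),
# dragging each class centroid toward the other class's region.
poisoned_y = [1, 1, 0, 0, 0, 1]
poisoned_model = train_centroids(train_x, poisoned_y)

print("clean accuracy:   ", accuracy(clean_model, test_x, test_y))
print("poisoned accuracy:", accuracy(poisoned_model, test_x, test_y))
```

On this toy data the clean model classifies every test point correctly, while the poisoned model’s accuracy collapses; real poisoning attacks are subtler, but the mechanism, corrupting training data rather than the model itself, is the same.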

India’s handling of AI

Elections apart, India, being one of the most advanced countries in the digital arena, needs to treat AI as an unproven entity. While AI brings benefits, the nation and its leaders should be fully aware of its disruptive potential, especially in the case of AGI, and act with due caution. India’s lead in digital public goods could be both a boon and a bane: while AGI provides many benefits, it can be malefic as well.

M.K. Narayanan is a former Director, Intelligence Bureau, a former National Security Adviser, a former Governor of West Bengal, and a former Executive Chairman of CyQureX Private Limited, a U.K.-U.S. cyber security joint venture


