It is abundantly clear that the 2019 General Election is going to be fought via social media and messaging platforms like WhatsApp. To tackle the sheer volume of information and misinformation that is floating around the Internet, artificial intelligence is increasingly being used. But is it enough to verify and differentiate between fact and fiction? Five people in the know tell us what to expect.
Govindraj Ethiraj, founder of fact-checking website BOOM: The nature of misinformation is that it is so carefully and cleverly designed that there is no way a computer can pick it up. Verification requires on-ground reporting; can artificial intelligence (AI) do that independently, without human intervention?
Krish Ashok, techie, blogger, musician: What's happened now is that everyone trusts their own sources of news. It's really a weakness with human society more than a problem that has been created with AI or will be solved by AI.
Sanjana Hattotuwa, founding editor of Groundviews.org, and senior researcher at the Centre for Policy Alternatives, Sri Lanka: [In countries like India], AI language training has comparatively little to learn from, since the digitised text in the public domain available for training and sentiment analysis is far smaller than for, say, English or French.
Vinay Anand, co-founder of Pipes, an AI-based news aggregator app: Most of AI is pattern recognition and deep learning. So if you feed it incorrect data, the bots can also add to the problem by using generative techniques to produce as much authentic-looking content as they can.
Sorabh Pant, comedian: A friend is not a legitimate source of information just because you like that friend. Google is free: always verify what you receive before you pass it on. What we need is more human intelligence.