Can artificial intelligence help weed out fake news in a post-truth era?

Earlier this year, while Myanmar and Sri Lanka were caught in the throes of social media-fuelled communal violence, watchdogs and analysts studying the phenomenon of misinformation also turned their attention to Karnataka, where things seemed to be headed the same way. Last month’s state elections were considered to be a preview of sorts for the 2019 national elections, with opposing factions wielding the power of the Internet to compete with each other.

A so-called BBC poll indicating that the BJP would win 135 seats — fake. A 20-second clip of Congress President Rahul Gandhi “exposing” former Karnataka chief minister Siddaramaiah’s corruption (retweeted by minister Smriti Irani) — a 2013 video taken completely out of context.

These were just two of the claims debunked by BOOM, a fact-checking website that partnered with Facebook in the lead-up to the elections. “Facebook created a dashboard, where you can see what users are flagging as misinformation, and then we would fact check that,” shares Govindraj Ethiraj, co-founder of the Mumbai-based organisation.

Facts vs emotions

Late last month, Facebook released Facing Facts, a 12-minute video directed by Academy Award winner Morgan Neville. Half bashful confession, half cheerful PR stunt, the film features employees emphasising their commitment to fighting the spread of false news, through larger fact-checking teams, artificial intelligence, and machine-learning systems. One interviewee confided that the problem was the News Feed: it was algorithmically designed to elicit emotional responses from users, and it turned out to be so good at its job that it drove the spread of highly targeted, incendiary, and false content.

“This is a rapidly transforming, no, transmogrifying entity,” says Ethiraj, about the widening spread of misinformation across India, going on to state that, as the Karnataka elections revealed, Facebook was not the largest source of false news. The credit went to another beast, also owned by the Internet giant. As a recent Washington Post article stated, “Elections in India are now fought and won on WhatsApp.”

Krish Ashok, techie and blogger, points out that WhatsApp’s end-to-end encryption, which often makes it impossible to trace the source of a viral forward, makes it a bigger problem than Facebook in a country which, with 200 million users, forms its largest market. “It is literally controlled by small groups of people, promoting their own biases and their own fake news,” he says. “There are no algorithms at play at all. In some sense, WhatsApp is a reflection of existing polarisations in society, weaponised by communications technology.”

Tech to the rescue

Can AI and machine learning, promoted by Internet giants as part of their new manifestos, help curb the spread of misinformation? In some ways, machine-learning algorithms helped create the very targeting system that was successfully exploited in the run-up to the 2016 US elections. The technology is also more successful at flagging clickbait than false news, because the former is much easier to identify. However, Vinay Anand, co-founder of Pipes, an AI-based news aggregator app, believes there is some hope. “AI can come in to stem the spread of fake news,” he says, “But before that, we need to have a number of reliable and reputable sources to fact check against.”

As Sanjana Hattotuwa, founding editor of Groundviews, the Sri Lankan civic media platform, points out, “There is no way out of employing AI, since the volume of content production is exponentially growing, and simply scaling up human reviewers will not help. What AI can and must do is algorithmic red-flagging.”
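The “algorithmic red-flagging” Hattotuwa describes can be illustrated with a toy sketch: software surfaces suspect posts for human reviewers rather than removing them outright. Everything below is a hypothetical simplification, not any platform’s actual system; the velocity threshold and the debunked-claims list are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares_per_hour: float

# Claims already debunked by human fact-checkers (hypothetical examples).
DEBUNKED = {"bbc poll predicts 135 seats"}

def red_flag(post: Post, velocity_threshold: float = 500.0) -> bool:
    """Flag a post for human review, not for automatic removal."""
    if post.shares_per_hour > velocity_threshold:
        return True  # unusually viral content gets a second look
    # Otherwise, flag only if the post repeats a known debunked claim.
    return any(claim in post.text.lower() for claim in DEBUNKED)
```

The point of the design is the division of labour the article describes: the algorithm handles volume by ranking what to look at, while the verdict stays with human fact-checkers.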

In an interview with Wired, Eduardo Ariño de la Rubia, data science manager at Facebook, says, “Misinformation can come from any place that humans touch, and humans touch a lot of places.” Even as artificial intelligence is poised to play a big role in curbing the spread of misinformation, it is impossible to deny that fake news is a human construct, born out of existing biases.

How to create an AI mind

Dr Andreas Vlachos (lecturer) and James Thorne (PhD student) from the Department of Computer Science, University of Sheffield, UK, weigh in.

Combating the spread of misinformation poses particular challenges in India, which has 22 official languages and many more beyond that list. This linguistic diversity, combined with an increase in smartphone use, can cause misinformation to spread quickly, and on a huge scale.

Human fact-checkers try their best to flag and address dubious information, but there are just too many false stories and too few people. Could AI (artificial intelligence) fill this gap? Automated fact-checking, using natural language processing to identify whether the statistics being used in a particular claim are true or false, has been an area of focus in our research.

Together with colleagues at Amazon Research Cambridge, we have collaborated on a project called FEVER (Fact Extraction and VERification), using AI to fact-check information. We created a database of 2,00,000 claims written by humans, with each claim labelled with evidence selected from Wikipedia.
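Conceptually, each FEVER-style entry pairs a claim with a verdict and the Wikipedia sentences supporting that verdict. The sketch below is our simplification for illustration: the field names and the word-overlap baseline are not the project’s actual code, which uses far stronger retrieval and inference models.

```python
# A minimal sketch of a FEVER-style record and a naive baseline signal.
claim = {
    "claim": "Siddaramaiah was chief minister of Karnataka.",
    "label": "SUPPORTS",  # one of SUPPORTS / REFUTES / NOT ENOUGH INFO
    "evidence": ["Siddaramaiah served as Chief Minister of Karnataka."],
}

def lexical_overlap(claim_text: str, evidence: list[str]) -> float:
    """Fraction of claim words found in the evidence (a weak baseline signal)."""
    claim_words = set(claim_text.lower().split())
    evidence_words = set(" ".join(evidence).lower().split())
    return len(claim_words & evidence_words) / len(claim_words)
```

A high overlap score only suggests that the evidence is on-topic; deciding whether it supports or refutes the claim is the harder reasoning step the database is designed to push systems towards.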

We designed and introduced this database with the aim of pushing the bounds of current natural language technologies: requiring computers to reason about whether claims can be supported or refuted from evidence.

We have made the FEVER database publicly available so that computer scientists, researchers, social scientists, journalists and industry can work together against the spread of misinformation. We’re not quite there yet, but hopefully, one day soon, AI will be a key component in solving this major challenge.

