Advanced AI speech tech could be used to generate disinformation online, study finds



AI-powered applications are increasingly being used in daily life, but they can be manipulated to generate and spread disinformation online, according to a study titled ‘Truth, Lies, and Automation: How Language Models Could Change Disinformation’ by a team of researchers at Georgetown University.

The team analysed the behaviour of OpenAI’s GPT-3, a powerful AI system that generates text from human prompts. They tested the system on several disinformation campaigns built around six content generation skills: reiteration, elaboration, manipulation, seeding, wedging, and persuasion. In one experiment, the researchers had the system write sample tweets about the withdrawal of U.S. troops from Afghanistan and U.S. sanctions on China. Participants who viewed the tweets were swayed by the auto-generated statements and found them convincing.

The team noted that the AI system could be deployed for a wide range of tactical goals, including hijacking a viral hashtag on social media to make certain extreme perspectives appear more common. For example, the system produced over 15 tweets denying the occurrence of climate change from just a few human prompts, showing how easily it could sway a reader’s opinions.


The model could also be used to spread false claims through news story headlines. It successfully iterated on a series of headlines, coming up with similar-sounding headlines that make unusual factual claims without human intervention, the team stated.

GPT-3’s most significant impact is likely to come in scaling up operations, permitting adversaries to try more possible messages and variations. Deploying an algorithm like GPT-3 is also well within the capacity of governments, especially tech-savvy ones like China and Russia, the team added.

“Our study also hints at an alarming conclusion that systems like GPT-3 seem better suited for disinformation in subtle forms than information, more adept as fabulists than as staid truth-tellers,” the study concluded.

