Advanced AI speech tech could be used to generate disinformation online, study finds

The system was able to produce over 15 tweets denying the occurrence of climate change from just a few human prompts, indicating how easily it could sway a reader’s opinions

June 02, 2021 01:23 pm | Updated 09:03 pm IST

AI-powered autonomous applications are increasingly being used in our daily lives, but they can be manipulated to generate and spread disinformation online, according to a study titled ‘Truth, Lies, and Automation: How Language Models Could Change Disinformation’ by a team of researchers at Georgetown University.

The team analysed the behaviour of OpenAI’s GPT-3, a powerful AI system that generates text based on human prompts. They tested the system on several disinformation campaigns based on six content generation skills: reiteration, elaboration, manipulation, seeding, wedging, and persuasion. In one of the experiments, the researchers allowed the system to write sample tweets about the withdrawal of U.S. troops from Afghanistan and U.S. sanctions on China. Participants who viewed the tweets were swayed by the auto-generated statements and found them to be convincing.

The team noted that the AI system could be deployed for a wide range of tactical goals, including hijacking a viral hashtag on social media to make certain extreme perspectives appear more common. For example, the system was able to produce over 15 tweets denying the occurrence of climate change from just a few human prompts, indicating how easily it could sway a reader’s opinions, the team noted.

The model could also be used to spread false claims through news story headlines. It was successful in iterating on a series of headlines and coming up with similar-sounding headlines that make unusual factual claims, without human interference, the team stated.

GPT-3’s most significant impact is likely to come in scaling up operations, permitting adversaries to try more possible messages and variations. Deploying an algorithm like GPT-3 is also well within the capacity of governments, especially tech-savvy ones like China and Russia, the team added.

“Our study also hints at an alarming conclusion that systems like GPT-3 seem better suited for disinformation in subtle forms than information, more adept as fabulists than as staid truth-tellers,” the study concluded.
