AI-generated fake reports can trick cybersecurity experts, study says

Researchers used AI to deliberately generate false information on COVID-19   | Photo Credit: UMBC


Researchers found that Artificial Intelligence (AI)-generated fake reports can trick even cybersecurity experts who are knowledgeable about all kinds of cyberattacks and vulnerabilities.

Researchers at the University of Maryland, Baltimore County (UMBC) used AI models to generate false information and presented it to cybersecurity experts for testing.

They found that the experts failed to spot misinformation generated by Google’s BERT and OpenAI's GPT.

These language models are used for storytelling and question answering, help Google and other tech companies improve their search engines, and help people combat writer's block.

The researchers fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities.

They seeded the model with the opening sentence of a real cyber threat intelligence sample and had it generate the rest of the description.
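The seed-and-generate step described above can be sketched with the Hugging Face `transformers` library. This is a minimal illustration, not the researchers' actual code: the base "gpt2" checkpoint stands in for their fine-tuned model, and the seed sentence is an invented example, not one taken from the study.

```python
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible

# Load a GPT-2 text-generation pipeline (the study fine-tuned GPT-2 on
# cybersecurity sources first; that step is omitted here).
generator = pipeline("text-generation", model="gpt2")

# A hypothetical opening sentence in the style of a threat intelligence report.
seed = "Attackers exploited a vulnerability in the VPN appliance to"

# The model continues the seed, producing a plausible-looking description.
fake_report = generator(seed, max_new_tokens=40, num_return_sequences=1)[0]["generated_text"]
print(fake_report)
```

Because the output merely continues the seed in a fluent, report-like register, a reader skimming for technical plausibility has little surface evidence that the rest of the text is fabricated.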


The misinformation fooled cyberthreat hunters, who read the threat descriptions to identify potential attacks and adjust the defenses of their systems.

The same technique was also applied to COVID-19-related papers, generating false information about the effects of COVID-19 vaccinations and the experiments conducted, which likewise fooled the experts.

If accepted as accurate, this kind of misinformation could put lives at risk by misleading scientists conducting research, and the general public, who rely on news for health information to make informed decisions, according to the researchers.

Jul 31, 2021