Malware can be injected into AI models, research shows

Researchers have shown that malware can be injected and hidden in neural network models | Photo Credit: Reuters

Researchers at Cornell University have shown that malware can be injected into and hidden in neural network models, and delivered covertly while evading detection mechanisms.


Neural networks are a foundation of artificial intelligence (AI). They are designed to simulate the way the human brain analyses and processes information.

The research team says embedding malware into a neural network has little or no impact on the network's performance.


These models can pass antivirus scans because their structure remains unchanged even after the malware is injected into them, the researchers said.


They have shown through experiments that 36.9MB of malware can be embedded into a 178MB AlexNet model with under 1% accuracy loss and without raising any suspicion from antivirus engines.
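To illustrate the general idea behind this kind of weight steganography, here is a minimal sketch of hiding arbitrary bytes inside float32 model weights. This is not the researchers' actual method: the sketch simply overwrites the two lowest mantissa bytes of each little-endian float32 value (a harmless text payload stands in for malware), which changes each weight by less than about 1% while leaving the array's shape and data type, and hence the model's structure, untouched.

```python
import numpy as np

BYTES_PER_WEIGHT = 2  # overwrite only the two lowest mantissa bytes per float32


def embed_payload(weights, payload):
    """Hide payload bytes in the low-order mantissa bytes of float32 weights.

    Assumes little-endian byte order, where bytes 0-1 of each 4-byte word
    hold the low mantissa bits, so the sign and exponent are untouched and
    each weight shifts by at most ~0.8% of its value.
    """
    buf = bytearray(np.asarray(weights, dtype=np.float32).tobytes())
    capacity = (len(buf) // 4) * BYTES_PER_WEIGHT
    if len(payload) > capacity:
        raise ValueError("payload exceeds tensor capacity")
    for k, b in enumerate(payload):
        word, offset = divmod(k, BYTES_PER_WEIGHT)
        buf[word * 4 + offset] = b
    return np.frombuffer(bytes(buf), dtype=np.float32).reshape(np.shape(weights))


def extract_payload(stego_weights, length):
    """Recover `length` payload bytes from weights written by embed_payload."""
    buf = np.asarray(stego_weights, dtype=np.float32).tobytes()
    out = bytearray()
    for k in range(length):
        word, offset = divmod(k, BYTES_PER_WEIGHT)
        out.append(buf[word * 4 + offset])
    return bytes(out)
```

A receiver who knows the payload length (and the embedding convention) can recover the bytes exactly, while a scanner that only inspects the file's structure sees an ordinary weight tensor. The real attack described in the research works on full pretrained models at a much larger scale, but the principle, trading a sliver of numeric precision for hidden storage, is the same.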

The researchers reckon that, with the widespread adoption of AI, using neural networks to deliver malware could become a new way to run malicious campaigns.



Sep 29, 2021
