HP threat intelligence finds gen AI being used to craft malware  

HP in its latest threat intelligence report shared that cybercriminals are bypassing restrictions and leveraging generative AI to craft sophisticated malware

Updated - September 26, 2024 12:23 pm IST

Threat actors are using generative AI to write malware with increased speed and efficiency. | Photo Credit: Reuters

HP, in its latest threat intelligence report, shared that cybercriminals are using generative AI to craft malware. The report highlights a malicious campaign targeting French-speaking users, where the malware was developed using artificial intelligence.

First detected in June, the use of AI in developing the malware was identified through the presence of comments within the malicious code, something AI models typically add when asked to write lines of code.

The campaign reportedly used HTML smuggling to deliver a password-protected ZIP archive, which researchers unlocked using brute force. When the code within the ZIP file was analysed, researchers found that the attackers had commented the entire code, a rarity for code written by a human.
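The brute-force step described above can be illustrated with a short sketch. This is not HP's tooling; it is a minimal, hypothetical example using Python's standard `zipfile` module, which can attempt decryption of legacy ZipCrypto-protected archives by trying candidate passwords from a wordlist (the function name `brute_force_zip` and the wordlist are assumptions for illustration).

```python
import io
import zipfile


def brute_force_zip(data: bytes, wordlist):
    """Try each candidate password against an in-memory ZIP archive.

    Returns the first password that successfully decrypts every member,
    or None if no candidate works. Note: the stdlib zipfile module only
    supports the legacy ZipCrypto scheme, not AES-encrypted archives.
    """
    for pwd in wordlist:
        try:
            with zipfile.ZipFile(io.BytesIO(data)) as zf:
                # read() raises RuntimeError on a wrong password
                for name in zf.namelist():
                    zf.read(name, pwd=pwd.encode())
            return pwd
        except (RuntimeError, zipfile.BadZipFile):
            continue
    return None
```

In practice, researchers would feed such a routine a large dictionary of common passwords; a real analysis pipeline would also handle AES-encrypted archives via a third-party library.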

The structure of the code, the comments explaining each line, and the use of the attackers' native language for function names and variables further point to the use of AI in writing the malware.

Security researchers have increasingly warned that cybercriminals may be using gen AI to write phishing emails. Low-level threat actors are also leveraging AI to write malware and customise it for attacks targeting various regions and platforms.

Threat actors are also using AI to speed up malware creation when building more advanced threats.

Earlier, in 2023, reports emerged that threat actors were using OpenAI's ChatGPT to write code and launch cyberattacks. At the time, the company updated the chatbot's safeguards to ensure threat actors could not use it to write malicious emails or code. However, threat actors appear to have found ways to bypass the security restrictions placed on gen AI models, leveraging the technology to craft malware and widen the scope of their malicious campaigns.
