How cybersecurity firms are using AI to mitigate online threats 

Artificial Intelligence tools are being used in cybersecurity to improve threat prediction, fill gaps in human expertise, and build data repositories for integrated cybersecurity platforms.

Updated - September 22, 2023 02:52 pm IST

Published - September 22, 2023 01:09 pm IST

Cybersecurity firms have been investing in machine learning, a subset of AI, for quite some time now to counter such threats. | Photo Credit: Reuters

As artificial intelligence (AI) tools are increasingly used in content generation, work applications, and even web search, hackers have figured out ways to misuse the technology. AI-generated deepfakes are a key area of concern for cybersecurity experts.

Cybersecurity firms have been investing in machine learning (ML), a subset of AI, for quite some time now to counter such threats. These investments are coming to fruition with the launch of AI models that can predict vulnerabilities and warn users about threat actors. The level of hacking threats, as predicted by an AI model built by cybersecurity firm Tenable, is “at about 25%,” said Glen Pendley, the company’s CTO.

Technologies like Security Data Retention (SDR) and ML are used to predict and identify threats, especially those that are anomalous, Pendley added.
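To illustrate what ML-based identification of anomalous threats can look like in practice, here is a minimal sketch using an off-the-shelf isolation forest; the feature set, values, and thresholds are hypothetical and not drawn from Tenable’s or any other vendor’s product.

```python
# Illustrative sketch of ML anomaly detection on security telemetry.
# Feature names and numbers below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per network event: bytes transferred,
# failed logins, and distinct ports touched in a time window.
normal_events = rng.normal(loc=[500, 1, 3], scale=[100, 1, 2], size=(1000, 3))
suspicious_events = rng.normal(loc=[5000, 15, 40], scale=[500, 3, 5], size=(5, 3))

# Train only on traffic assumed to be benign.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

# Score new events: -1 flags an anomaly for an analyst to review.
labels = model.predict(np.vstack([normal_events[:5], suspicious_events]))
print(labels)  # mostly 1 for normal traffic, -1 for the injected outliers
```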

“ML is critical, and implementation of AI in cybersecurity takes their use to a different level,” said Vishal Salvi, CEO of Quick Heal Technologies.


Apart from threat prediction, cybersecurity firms are also deploying AI where there is a lack of experienced human resources.

Despite the apparent benefits, “it [AI] should be used with caution,” said Pendley. “Like any other tool, while it can be useful, it can also be dangerous. I wouldn’t recommend people shy away from it. I would just say treat it like you would any other tool and try to maximize efficiency through its use.”

An example of why caution is recommended is companies releasing AI-based tools to the public without taking moral or social responsibility. These products, which lack proper security measures, can be misused by criminals. And while it is difficult to anticipate every misuse of a tool, companies should limit what can be fed into prompts in order to control the output. “From a responsibility perspective, it is absolutely necessary,” said Pendley.

AI does not change the nature of the cybersecurity industry, which has always been engaged in a cat-and-mouse game with attackers. According to experts, AI is not a new vector in cybersecurity.

Industry players are using AI to build additional models that give insights into attacks, provide quick responses to known vulnerabilities, and conduct comprehensive, automated tests that reduce human error, Salvi said.

But the cybersecurity industry is playing catch-up with threat actors who use generative AI to create deepfakes. “I don’t think the world is ready for that,” Pendley said.

While the AI tools used by the cybersecurity industry are adept at recognizing text and images, existing tools are not yet designed to reliably identify deepfake voices and videos.

On consolidation in the industry, there appears to be consensus among industry leaders: they envision that consolidating data could help them apply different generative AI models to the amalgamated data set. Consolidation is therefore seen more in the form of integrated platforms and merged databases, “but there will never be one product to rule them all,” Pendley said.

