This tool can help detect data vulnerabilities in AI-powered systems

According to NUS researchers, AI models are vulnerable to inference attacks that allow hackers to extract sensitive information about their training data.

November 12, 2020 06:28 pm | Updated 06:52 pm IST

A new open-source tool called ‘Machine Learning Privacy Meter’ has been developed to help detect data vulnerabilities in artificial intelligence (AI)-powered systems and protect them from possible attacks.

A team of researchers at the National University of Singapore (NUS) has developed the tool, along with a general attack formula that provides a framework to test different types of inference attacks on AI systems.

“When building AI systems using sensitive data, organisations should ensure that the data processed in such systems are adequately protected. Our tool can help organisations perform internal privacy risk analysis or audits before deploying an AI system,” Reza Shokri, Assistant Professor at NUS, said in a release.

AI models used in various services are trained on data sets that include sensitive information. According to NUS, the models are vulnerable to inference attacks that allow hackers to extract sensitive information about training data.

In such an attack, hackers repeatedly query the AI service and analyse its outputs to identify patterns. From these patterns, they can infer whether a specific type of data was used to train the AI programme, and can even reconstruct the original dataset that was most likely used to train it, the NUS release explained.
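To make the mechanics concrete, below is a minimal sketch of a confidence-threshold membership inference attack against a deliberately overfitted classifier. The model, dataset and 0.9 threshold are illustrative assumptions for this article, not the NUS team's method or the Privacy Meter's API; the idea is simply that the attacker flags a record as a training member when the model is unusually confident about its true label.

    # Minimal sketch of a confidence-threshold membership inference attack.
    # The model, dataset and threshold are illustrative assumptions, not the
    # NUS tool's API.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Train a model on "sensitive" data that the attacker wants to probe.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_out, y_train, y_out = train_test_split(
        X, y, test_size=0.5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    def membership_guess(model, X, y, threshold=0.9):
        """Guess 'member' when the model is unusually confident on the true label.

        Overfitted models tend to be more confident on records they were
        trained on; that confidence gap is the pattern the attacker exploits.
        """
        true_label_conf = model.predict_proba(X)[np.arange(len(y)), y]
        return true_label_conf >= threshold

    # Training records should be flagged far more often than held-out ones.
    in_rate = membership_guess(model, X_train, y_train).mean()
    out_rate = membership_guess(model, X_out, y_out).mean()
    print(f"flagged as members: train={in_rate:.2f}, held-out={out_rate:.2f}")

With scikit-learn defaults, the overfitted forest is near-certain on its own training records, so the attack typically flags far more training records than held-out ones; that asymmetry is exactly the leakage an attacker, posing as a regular user, can observe from the outside.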

“Inference attacks are difficult to detect as the system just assumes the hacker is a regular user while supplying information,” Shokri said.

The tool can simulate such attacks and quantify how much a model leaks about individual data records in its training set. It also highlights the vulnerable areas in the training data and suggests techniques that organisations can adopt in advance to mitigate an inference attack, the release noted.
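Continuing the illustrative sketch above (it reuses the model and data splits defined there), one hedged way to express per-record leakage is to score how cleanly each training record's confidence separates from the confidences of held-out records. This is a stand-in for the kind of per-record analysis the release describes, not the Privacy Meter's actual reporting API.

    # Hedged sketch of per-record leakage scoring; reuses model, X_train,
    # y_train, X_out and y_out from the previous sketch. Not the Privacy
    # Meter's actual API.
    import numpy as np

    def per_record_risk(model, X_train, y_train, X_out, y_out):
        """Score each training record by how cleanly an attacker could
        separate it from non-members using the model's confidence in the
        true label.

        A score near 1.0 means the record is more confidently predicted than
        almost every held-out record, i.e. it is easy to expose as a member.
        """
        train_conf = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
        out_conf = model.predict_proba(X_out)[np.arange(len(y_out)), y_out]
        # Fraction of non-member confidences that each training record exceeds.
        return (train_conf[:, None] > out_conf[None, :]).mean(axis=1)

    risks = per_record_risk(model, X_train, y_train, X_out, y_out)
    print("most exposed record indices:", np.argsort(risks)[-5:])

Records scoring near 1.0 correspond to the "vulnerable areas" the release mentions; mitigations such as stronger regularisation or differentially private training would typically pull those scores back towards 0.5, where members and non-members are indistinguishable.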

“Data protection regulations such as the General Data Protection Regulation mandate the need to assess the privacy risks to data when using machine learning. Our tool can aid companies in achieving regulatory compliance by generating reports for Data Protection Impact Assessments,” Shokri said.
