This tool can help detect data vulnerabilities in AI-powered systems



A new open-source tool called ‘Machine Learning Privacy Meter’ has been developed to help detect data vulnerabilities in artificial intelligence (AI)-powered systems and protect them against possible attacks.

A team of researchers at the National University of Singapore (NUS) developed the tool, along with a general attack formula that provides a framework for testing different types of inference attacks on AI systems.

“When building AI systems using sensitive data, organisations should ensure that the data processed in such systems are adequately protected. Our tool can help organisations perform internal privacy risk analysis or audits before deploying an AI system,” Reza Shokri, Assistant Professor at NUS, said in a release.

AI models used in various services are trained on data sets that include sensitive information. According to NUS, the models are vulnerable to inference attacks that allow hackers to extract sensitive information about training data.


In such an attack, hackers repeatedly query the AI service and analyse its responses for patterns. From those patterns, they can infer whether a specific type of data was used to train the AI programme, and can even reconstruct the original dataset that was most likely used to train the AI engine, the NUS release explained.
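The release does not detail the attack mechanics, but one well-known variant is a confidence-threshold membership inference attack: because overfitted models tend to be more confident on records they were trained on, an attacker who can query the model for class probabilities can guess membership from confidence alone. The sketch below is illustrative only and is not the Privacy Meter's actual interface; `predict_proba` is assumed to be a callable returning class probabilities.

```python
# Illustrative sketch of a confidence-threshold membership inference
# attack. NOT the Privacy Meter's API; `predict_proba` is an assumed
# callable that returns per-class probabilities for a batch of records.
import numpy as np


def membership_scores(predict_proba, records):
    """Score each record by the model's confidence in its top class.

    Overfitted models are typically more confident on their own training
    records, so a higher score suggests training-set membership.
    """
    probs = predict_proba(records)
    return probs.max(axis=1)


def infer_membership(predict_proba, records, threshold=0.9):
    # Guess "member" for records the model is unusually confident about.
    return membership_scores(predict_proba, records) >= threshold
```

Real attacks refine the threshold, for example by training "shadow models" on similar data to learn what member vs. non-member confidence looks like, but the core signal is the same.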

“Inference attacks are difficult to detect as the system just assumes the hacker is a regular user while supplying information,” Shokri said.

The tool can simulate such attacks and quantify how much the model leaks about individual data records in its training set. It also highlights the vulnerable areas in the training data and suggests techniques that organisations can adopt in advance to mitigate a possible inference attack, the release noted.
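One common way such leakage is quantified in auditing, sketched below under that assumption (the release does not specify the tool's metrics), is to run the attack against known training records and held-out records, then measure how well the attack scores separate the two groups. An AUC near 0.5 means the attacker does no better than random guessing, i.e. little leakage.

```python
# Rough sketch of quantifying leakage by comparing attack scores on
# known training ("member") records vs held-out ("non-member") records.
# Illustrative only; not the Privacy Meter's actual metric.

def attack_auc(member_scores, nonmember_scores):
    """Mann-Whitney AUC: the probability that a random member record
    outscores a random non-member record under the attack.
    0.5 = attacker no better than guessing; 1.0 = total separation."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            wins += 1.0 if m > n else (0.5 if m == n else 0.0)
    return wins / (len(member_scores) * len(nonmember_scores))
```

An auditor could compute this per record class or data region to locate the "vulnerable areas" the release mentions.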


“Data protection regulations such as the General Data Protection Regulation mandate the need to assess the privacy risks to data when using machine learning. Our tool can aid companies in achieving regulatory compliance by generating reports for Data Protection Impact Assessments,” Shokri said.


Jan 25, 2021
