A new approach has been developed to quickly assess the certainty of a neural network's predictions. The method could improve efficiency in real-world systems that rely on AI-assisted decision making.
A team of researchers at Massachusetts Institute of Technology (MIT) and Harvard University has developed the method, and detailed it in a paper titled ‘Deep Evidential Regression’.
The researchers trained their neural network to analyse images and estimate each point's distance from the camera lens, a task similar to what an autonomous vehicle performs when judging its distance to pedestrians or other vehicles.
They also tested the network with slightly altered images; it was able to spot the changes, which could help detect manipulations such as deepfakes, MIT noted in a release.
“By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” Daniela Rus, Director of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), said in a release.
Neural networks are deployed to recognise patterns in large, complex datasets to help make decisions.
The team designed the new approach to generate a 'bulked up output': in addition to making a decision, the network also gives evidence to support that decision, all from a single run of the neural network.
The evidence produced by the neural network directly captures the model's confidence in its prediction, and includes any uncertainty present in the input data as well as in the model's final decision, an MIT release explained.
It can also indicate whether the uncertainty could be reduced by adjusting the neural network itself, or whether the issue lies with the input data, it added.
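To give a rough sense of how a single pass can yield both kinds of uncertainty: in the 'Deep Evidential Regression' paper, the network's final layer emits four parameters of a Normal-Inverse-Gamma distribution, from which the prediction and both uncertainties fall out in closed form. The sketch below uses that parameterisation, but it is a simplified illustration, not the researchers' actual code, and the example parameter values are made up.

```python
def evidential_uncertainties(gamma, nu, alpha, beta):
    """Turn the four evidential parameters a network head emits
    (Normal-Inverse-Gamma parameterisation) into a prediction plus
    two separate uncertainty estimates, with no extra network runs."""
    prediction = gamma                      # the predicted value (e.g. depth)
    aleatoric = beta / (alpha - 1)          # noise inherent in the input data
    epistemic = beta / (nu * (alpha - 1))   # uncertainty in the model itself
    return prediction, aleatoric, epistemic

# Hypothetical output for one pixel: low evidence (small nu) makes the
# model uncertainty large even when the data noise estimate is modest.
pred, aleatoric, epistemic = evidential_uncertainties(
    gamma=2.5, nu=0.1, alpha=3.0, beta=1.0)
```

Because the epistemic term shrinks as the evidence parameter grows, the same output also signals whether gathering more or better training data would reduce the uncertainty.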
“We need the ability to not only have high-performance models, but also to understand when we cannot trust those models,” Alexander Amini, a researcher at MIT, said in a release.
According to MIT, earlier approaches to estimating uncertainty relied on running, or sampling, a neural network many times over to gauge its confidence, making the process computationally expensive and too slow for split-second decisions.
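The sampling-based approaches the release refers to can be sketched as follows: the same input is fed through a stochastic network many times, and the spread of the outputs serves as the confidence estimate. Here `noisy_forward` is a hypothetical stand-in for one stochastic pass (for instance, a network with dropout left on), not any real model.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x):
    # Stand-in for one stochastic pass of a network: the output
    # varies slightly from run to run.
    return 2.0 * x + rng.normal(scale=0.3)

def sampled_uncertainty(x, n_samples=100):
    # Earlier approaches: run the same input through the network many
    # times; the mean is the prediction and the standard deviation is
    # the confidence estimate. Cost scales with n_samples.
    outputs = [noisy_forward(x) for _ in range(n_samples)]
    return float(np.mean(outputs)), float(np.std(outputs))

mean, std = sampled_uncertainty(1.5)
```

The n_samples forward passes are exactly the overhead the single-run evidential approach avoids, which is why it suits split-second decisions.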