A new study from the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc) explores how deep neural networks compare to the human brain in visual perception.
According to an IISc release, deep neural networks are machine learning systems inspired by the networks of neurons in the human brain. They can be trained to perform specific tasks and have played a pivotal role in helping scientists understand how our brains perceive what we see. Despite having evolved significantly over the past decade, the release said, they are still nowhere close to matching the human brain at perceiving visual cues.
Deep networks work differently from the human brain. “While complex computation is trivial for them, certain tasks that are relatively easy for humans can be difficult for these networks to complete,” it said.
In the recent study, published in Nature Communications, S.P. Arun, Associate Professor at CNS, and his team have compared various qualitative properties of these deep networks with those of the human brain. The team studied 13 different perceptual effects and uncovered previously unknown qualitative differences between deep networks and the human brain.
“An example is the Thatcher effect, a phenomenon where humans find it easier to recognise local feature changes in an upright image, but this becomes difficult when the image is flipped upside-down. Deep networks trained to recognise upright faces showed a Thatcher effect when compared with networks trained to recognise objects. Another visual property of the human brain, called mirror confusion, was tested on these networks. To humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis,” explained the release.
The researchers, it said, found that deep networks also show stronger mirror confusion for vertically reflected images than for horizontally reflected ones.
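The mirror-confusion comparison described above amounts to a simple measurement: mirror an image about the vertical and the horizontal axis, extract features from each version, and compare similarities. A minimal sketch in Python, assuming a toy coarse-pooling feature extractor as a stand-in for a trained deep network (the function names `extract_features` and `mirror_similarities` are illustrative, not from the study):

```python
import numpy as np

def extract_features(img, pool=4):
    # Toy stand-in for a deep network's feature layer:
    # average-pool the image into coarse blocks and flatten.
    h, w = img.shape
    f = img[:h - h % pool, :w - w % pool]
    f = f.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    return f.ravel()

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mirror_similarities(img):
    # A mirror about the vertical axis is a left-right flip;
    # a mirror about the horizontal axis is an up-down flip.
    feats = extract_features(img)
    v = cosine(feats, extract_features(np.fliplr(img)))
    h = cosine(feats, extract_features(np.flipud(img)))
    return v, h

rng = np.random.default_rng(0)
img = rng.random((64, 64))
v_sim, h_sim = mirror_similarities(img)
print(f"vertical-mirror similarity: {v_sim:.3f}, horizontal: {h_sim:.3f}")
```

With a random image and this toy extractor the two similarities need not differ; the effect reported in the study emerges only with trained networks and natural images.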
Another phenomenon peculiar to the human brain is that it focuses on coarser details first. This is known as the global advantage effect. For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of its leaves.
Georgin Jacob, first author and PhD student at CNS, said that, surprisingly, neural networks showed a local advantage. This means that, unlike the brain, the networks focus on the finer details of an image first.
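The global-versus-local contrast can be quantified with a simple normalised index: positive values indicate a global advantage (human-like), negative values a local advantage (as reported for the networks). A minimal sketch, where `advantage_index` and the sample distances are purely illustrative and not the study's actual formulation or values:

```python
def advantage_index(d_global, d_local):
    # Normalised contrast between responses to global vs local changes:
    # positive -> global advantage (human-like),
    # negative -> local advantage (as found for deep networks).
    return (d_global - d_local) / (d_global + d_local)

# Illustrative numbers only: suppose a network's feature distance is
# larger when small local elements change (0.45) than when the overall
# global shape changes (0.30).
d_global, d_local = 0.30, 0.45
idx = advantage_index(d_global, d_local)
print(f"advantage index: {idx:.3f}")  # negative -> local advantage
```

Here the index comes out negative, matching the local advantage the networks showed; swapping the two distances would flip its sign.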
“Lots of studies have been showing similarities between deep networks and brains, but no one has really looked at systematic differences,” Mr. Arun was quoted as saying.
The IISc release said identifying these differences can push us closer to making these networks more brain-like and help researchers build more robust neural networks that not only perform better, but are also immune to “adversarial attacks” that aim to derail them.