Technology

What determines how, when and where we perceive life in a face

ANIMATED: Kathakali dancers paint and mask their faces so heavily that they look like computer-generated images, but their eyes are not hidden. That is where the mood and the action are. Photo: K. Murali Kumar

Thanks to computer animation, science fiction movies and cartoon films (Avatar, The Matrix, The Polar Express) have become the trend of the day. Film directors and animation artists try very hard to make the characters as human and life-like as possible. Yet many viewers are not satisfied with such computer-generated imagery. Some have criticized the faces of these characters as “lacking in humanity”, complaining that “you see the cladding but not the soul”. People are tough critics when it comes to animacy.

I have taken these lines, and also the title, from a recent paper by Christine Looser and Thalia Wheatley of Dartmouth College, entitled “How, when and where we perceive life in a face”, published in the journal Psychological Science.

They write: “Faces capture humans' attention; yet, beyond aesthetic appreciation, it is presumably not the face itself that interests people but the mind behind it”.

How true! Is this why, when we see fashion models cat-walking on the ramp, we wonder why they look so lifeless, like Barbie dolls? No smile, no expression of any kind: lacking animacy, just bored and boring.

On the other hand, look at Shabana Azmi or Vidya Balan (my favourites, you may have others); they capture you with their eyes. Even more to the point are the Kathakali dancers of Kerala. They paint and mask their faces so heavily that they look like computer-generated images, but their eyes are not hidden. That is where the mood and the action are.

Poets have said that the eyes are the key to the mind. Look someone ‘in the eye’ and you get an idea of whether they are bluffing or telling the truth. Eye-to-eye contact appears essential for direct and honest interpersonal contacts and transactions. Even a baby seems to know this when you play hide-and-seek with her, first covering your eyes with your hands and then uncovering them. Every time she sees your eyes, she smiles. It could be any baby and any adult stranger, yet it works.

The mechanism

The Dartmouth researchers wanted to understand the mechanism behind this ‘eyes as key to the mind’ idea, using a set of 60 student volunteers. Using computer software called FantaMorph (made in China), they created a sequence of images of human and doll faces.

For every human face, there was a corresponding doll face that closely resembled it. Using the morphing software, they created a series, a continuum, of overlapping or morphed images blending each human and doll pair. The resulting spectrum of 20 morphs ranged from fully human, to part human and part doll, to purely doll.
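The morphing step can be pictured as a simple cross-dissolve between two aligned photographs. The sketch below is only a minimal illustration of that idea, not the actual FantaMorph algorithm (which also warps facial geometry); the toy arrays stand in for real face images.

```python
import numpy as np

def morph(doll_img, human_img, alpha):
    """Pixel-wise blend of two aligned images.
    alpha = 0.0 gives the pure doll face, alpha = 1.0 the pure human face."""
    return (1.0 - alpha) * doll_img + alpha * human_img

# A toy continuum of 20 morph levels, as in the study.
doll = np.full((4, 4), 100.0)    # stand-in for a doll-face photo
human = np.full((4, 4), 200.0)   # stand-in for a human-face photo
continuum = [morph(doll, human, a) for a in np.linspace(0.0, 1.0, 20)]
```

Each step in the continuum shifts the image a little further from doll towards human; the question the experiment asks is where, along that smooth ramp, perception suddenly flips to ‘alive’.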

First, each of the 60 students was asked to rate a series of 220 images for two attributes: animacy and pleasantness. For animacy, they ranked each image on a 7-point scale (1: definitely alive; 7: definitely not alive). Next, as the same images were scrolled on the screen, they had to say where the ‘animacy boundary’ was: that is, which image was ‘just noticeably alive’. The scrolling went both ways: from pure doll to pure human and vice versa.
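One way to read a boundary off such two-way scrolling is the classic ‘method of limits’: find the crossing point scanning from the doll end, find it again scanning from the human end, and average the two. This is a hypothetical sketch, assuming the judgements have been converted to a proportion of ‘alive’ responses at each morph level; the paper’s exact analysis may differ.

```python
def animacy_boundary(p_alive, threshold=0.5):
    """p_alive: proportion of 'alive' judgements at each morph level,
    ordered from pure doll (index 0) to pure human (last index).
    Returns the boundary as an index into the continuum, averaging the
    ascending (doll -> human) and descending (human -> doll) crossings."""
    ascending = next(i for i, p in enumerate(p_alive) if p >= threshold)
    descending = 1 + max(i for i, p in enumerate(p_alive) if p < threshold)
    return (ascending + descending) / 2

# Toy ratings for a 5-level continuum: the boundary falls at level 3.
boundary = animacy_boundary([0.1, 0.2, 0.4, 0.6, 0.9])
```

For clean, monotonic ratings the two crossings coincide; averaging them simply guards against small asymmetries between the two scrolling directions.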

The second experiment was done two months later, using 29 students from the same participant list. Each of them was asked to view the same images, but to judge whether the face (a) is thinking or planning to do something, (b) is able to feel pain, or (c) has a mind.

Now the results. In the animacy test, most students found the image that was 67 per cent human and 33 per cent doll to be just noticeably alive. Faces were judged pleasant only when they were 90 per cent human and 10 per cent doll. The least pleasant was the 100 per cent doll face.

And in the mind test (plan, pain, mind), the participants’ ratings were consistent with the animacy test results. Whether a face had a mind or not correlated well with whether the face was animate (alive) or not. Thus, there appears to be a ‘tipping point of animacy’ around which subtle perceptual differences determine whether a face has life or not.

But what is it in the face that is perceived as ‘alive’ and as having a ‘mind’? The eyes, nose, mouth, skin?

This was the third experiment. Now, the same 220 images used earlier were ‘cropped’ to reveal only one feature of interest: eyes, nose, mouth or skin. Invariably, it was the eyes that let the volunteers determine where life appears in a face. Next was the mouth, then the nose and, last, the skin.

The implications

What do these results suggest? First, animacy is carried disproportionately in the eyes. “A rock with eyes is cuter than a rock without”. Sculptors seem to know this. Second, unless the tipping point is reached, close approximations will not do.

Realism may be continuous but life is not. Next time you are at Madame Tussauds wax museum in London, check this out. This is referred to by psychologists as categorical perception.

A rainbow is a continuum of wavelengths, yet it is seen as seven distinct colours. Third, although the eyes are the most informative single feature, other features are used in a holistic fashion to judge a face – the upturned lips, the glower of anger. But decoding these cues makes sense only in terms of a mind, a plan of action. Lastly, we humans are latecomers to this world. Animals that came earlier seem to have this ability too. The dog is an outstanding example.

D. BALASUBRAMANIAN, dbala@lvpei.org


Source: https://www.thehindu.com/sci-tech/technology/What-determines-how-when-and-where-we-perceive-life-in-a-face/article15612994.ece (retrieved Nov 27, 2020)
