Regulating the future

February 26, 2018 12:15 am | Updated October 12, 2018 07:57 pm IST

It is time to work on methods of moderation that can help with AI-human interaction

HYDERABAD, TELANGANA, 20/02/2018: The world's first humanoid citizen, Sophia, enthralls the audience on the second day of the World Congress on Information Technology and Nasscom India Leadership Forum in Hyderabad on February 20, 2018. Photo: Nagara Gopal

Alan Turing speculated in 1950 that around the turn of the century it would be possible to build computers with a storage capacity of about a billion bits that could imitate human thinking. He predicted that if such a machine were pitted against a human interrogator in what is now known as the Turing test, the interrogator would have no more than a 70% chance of correctly identifying the machine after a few minutes of questioning. Nearly 70 years on, no machine has convincingly passed the Turing test, nor have humans succeeded in creating an artificial brain of this capacity. This is not to say that such an event may never come about; rather, the question is, how do we handle that eventuality?

More recently, David Hanson, founder of Hanson Robotics, the company that made the humanoid Sophia, invoked the possibility, while speaking at the World Congress on Information Technology and Nasscom India Leadership Forum in Hyderabad, that robots will be alive and conscious within 25 years. This may appear a far-fetched goal at the outset, judging by our success, or lack of it, with the Turing test. In particular, programming the human adeptness to learn is one of the most crucial challenges facing developers of artificial intelligence (AI) that could stand up to human competition. We need to see a face only once to recognise the person the next time; AI, powered by neural networks and deep learning, must be trained on many exposures to a face before it can recognise it.

An arXiv paper by Delahunt et al. (2017) describes a biological neural network experiment that goes some way towards surmounting this. The researchers mimic a moth's olfactory system and use neurotransmitters to generate a synthetic neural network that can detect odours and learn relatively quickly. If this single task takes so much thought and effort to reproduce, building an AI that can match human behaviour and be self-aware is likely to take a long time. The route to self-aware AI that can challenge humans will therefore be arduous, dotted with milestones in related areas such as robotics. The ongoing rise of AI will also challenge the human condition, for example through displacement from jobs, the threat of machine error, and the threat of hacking that can damage a robot or even divert it from its assigned duties.

The 21st century has seen major breakthroughs in numerous fields, touching what we believe is the core of our humanness: from gene editing methods that can, in principle, produce designer babies to robots that assist in surgery and computer programs that defeat humans at various games, drive cars, and write news reports. Rather than respond with fear or suppression, it is time we started working on methods of regulation and moderation that can deal with the inevitable AI-human interaction.

