Regulating the future

It is time to work on methods of moderation that can help with AI-human interaction

Alan Turing speculated in 1950 that by the turn of the century it would be possible to programme computers, with a storage capacity of about a billion bits, to play what is now known as the Turing test so well that an average human interrogator, after five minutes of questioning, would have no better than a 70% chance of telling machine from human. It is now nearly 70 years since then, and neither has the Turing test been convincingly passed by any machine, nor have humans succeeded in creating artificial brains of comparable capacity. However, this is not to say that such an event may never come about; rather, the question is, how do we handle that eventuality?

More recently, David Hanson, founder of Hanson Robotics, the company that built the humanoid Sophia, speaking at the World Congress on Information Technology and the Nasscom India Leadership Forum in Hyderabad, raised the possibility that robots will be alive and conscious within 25 years. Judging by our success, or lack of it, with the Turing test, this may appear a far-fetched goal. In particular, reproducing the human adeptness at learning is one of the most crucial challenges facing developers of artificial intelligence (AI) that could stand up to human competition. We need to see a face only once to recognise the person the next time. AI powered by neural networks and deep learning, by contrast, must be trained on many exposures to a face before it can recognise it.

An arXiv paper by Delahunt et al. (2017) describes a biological neural network experiment that goes some way towards overcoming this. The researchers mimic a moth's olfactory system, using modelled neurotransmitters to generate a synthetic neural network that can detect odours and learn comparatively fast. If this single task takes that much thought and effort to reproduce, building an AI that can match human behaviour and be self-aware is likely to take far longer. The route to self-aware AI that can challenge humans will therefore be arduous, dotted with milestones in related areas such as robotics. The ongoing rise of AI will also challenge the human condition, for example through displacement from jobs, the threat of inhuman errors, and the threat of hacking that can damage robots or even hijack them from their assigned duties.

The 21st century has seen major breakthroughs in numerous fields, touching what we believe is the core of our humanness: from gene-editing methods that can, in principle, produce designer babies, to robots that assist in surgery, and computer programs that defeat humans at games, drive cars and write news reports. Rather than respond with fear or suppression, it is time we started working on methods of regulation and moderation that can deal with the inevitable AI-human interaction.

Printable version | Apr 6, 2020 4:47:43 AM