CACHE Technology

The changing AI landscape

Against the machine: South Korean professional Go player Lee Sedol plays his first stone against Google's artificial intelligence program, AlphaGo, during the third game of the Google DeepMind Challenge Match on March 12, 2016 in Seoul, South Korea. Lee Sedol played a five-game match against AlphaGo. | Photo Credit: Getty Images

French philosopher René Descartes, in his Discourse on the Method, made the famous philosophical statement: Cogito, ergo sum (I think, therefore I am). The inventor of analytic geometry was among the first to link thought and intelligence to existence. But machines, he argued, cannot think the way humans do.

Nearly three centuries later, British mathematician and computer scientist Alan Turing proposed a way to put that claim to the test. In his seminal 1950 paper, ‘Computing Machinery and Intelligence’, Turing reframed the meaning of “think” and “machine” with a thought experiment he called the “imitation game”.

The imitation game is played with three players: A, B, and C, the interrogator. C stays in a separate room and puts questions to A and B, whose responses come back as type-written messages. C knows A and B only by the labels X and Y, and at the end of the game must say either “X is A and Y is B” or “X is B and Y is A”. A’s goal is to mislead the interrogator; B’s goal is to help the interrogator with truthful answers.

In his paper, Turing suggests that one of the human players be replaced with a computer that imitates the human, and asks: if the interrogator cannot tell the machine’s answers from a person’s, can the machine be considered intelligent?

His idea reframed the ‘Can machines think?’ question and broke human intelligence down into specialised capacities. The intellectual capacities of an engineer, for instance, differ from those of a doctor, so the two cannot be compared on the same knowledge parameter.

Turing’s prediction and Deep Blue

After reframing the question, Turing predicted that within half a century computers would play the imitation game well enough to fool human interrogators.

“I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well, that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning,” Turing predicted in the paper.

“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

His prediction came to pass in just under fifty years, albeit through a different machine in a different game.

In 1997, Deep Blue, IBM’s chess-playing supercomputer, defeated the reigning world champion Garry Kasparov in a six-game match in New York City. The victory showed that computers were fast catching up with human intelligence.

Deep Blue won using brute-force search, an algorithmic method of examining vast numbers of possible continuations before choosing a move. The method works when the space of possibilities, however large, can be searched systematically: in chess, every piece has a constrained set of legal moves, so the game tree yields to deep search combined with sheer computing speed.
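The idea can be sketched in a few lines of Python. This is a toy illustration, not Deep Blue’s actual algorithm: it exhaustively searches every line of play in a game far smaller than chess, the take-away game Nim, where players alternately remove one to three stones and whoever takes the last stone wins.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """Return True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player took the last one and won
    # Brute force: try every legal move; the position is winning if any move
    # leaves the opponent in a losing position.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Pick a move that leaves the opponent in a losing position, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists: any legal move will do
```

In a game this small, the search visits every reachable position; Deep Blue applied the same principle to chess with vastly more computing power, searching many moves deep rather than to the end of the game.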

The DeepMind paradigm

While IBM’s supercomputer mastered chess, another board game had long baffled AI scientists. The rules of the ancient game of Go are simple, but its complexity is prohibitively high: it is played on a 19 × 19 board with 361 intersections, giving an upper bound of roughly 361! possible move sequences. That means brute-force computing alone won’t help.
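A quick back-of-the-envelope check of that scale (an illustrative calculation: 361! counts move orderings on an empty board, ignoring captures and illegal positions):

```python
import math

# 361! is far too large for ordinary floating-point numbers, so compute its
# order of magnitude via the log-gamma function: log10(361!) = lgamma(362) / ln(10).
log10_games = math.lgamma(362) / math.log(10)
print(f"361! is about 10^{log10_games:.0f}")
```

The result is on the order of 10 to the power of several hundred, dwarfing the number of atoms in the observable universe, which is why no amount of raw search speed can enumerate Go the way Deep Blue searched chess.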

In 2016, DeepMind, a London-based subsidiary of Alphabet Inc., made a breakthrough when its AlphaGo programme defeated Lee Sedol, the Garry Kasparov of Go, four games to one in a five-game match.

Unlike earlier Go programs, AlphaGo used neural networks to train itself. It combined learned judgement with brute-force search to accomplish specialised tasks that have known rules and clear criteria for success.

To learn the game and improve its odds, the programme played against itself over a million times, an exercise referred to as reinforcement learning through self-play.
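A minimal sketch of that idea, assuming a toy game rather than Go (the learning setup and names here are illustrative, not AlphaGo’s): a single value table plays both sides of the take-1-to-3-stones game against itself and nudges each move’s value toward the final result.

```python
import random

random.seed(0)
ACTIONS = (1, 2, 3)
Q = {}  # Q[(stones, action)]: learned value of that move for the player making it

def choose(stones, eps):
    """Epsilon-greedy selection: explore a random move sometimes, else the best known."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

def train(episodes=20000, alpha=0.2, eps=0.3, start=10):
    """Self-play: one shared value table plays both sides and learns from outcomes."""
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            a = choose(stones, eps)
            history.append((stones, a))
            stones -= a
        # The player who took the last stone won. Walk the game backwards,
        # moving each move's value toward the final result for its player.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward  # alternate sign: the previous move was the opponent's

train()
```

After training, the table’s greedy policy rediscovers the game’s known winning strategy, which is to leave the opponent a multiple of four stones. AlphaGo applied the same feedback principle, at enormously greater scale, with deep neural networks in place of a table.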

“The AlphaGo is unique in its ability of relating the immediate action with the final outcome of the game through its value network, providing a mechanism for closed-loop feedback in decision-making,” according to a research paper titled ‘Where Does AlphaGo Go: From Church-Turing Thesis to AlphaGo Thesis and Beyond’.

DeepMind’s venture into science

More importantly, what AlphaGo accomplished isn’t confined to a single game. In November 2020, DeepMind’s AlphaFold2 predicted the 3D structures of proteins from their amino-acid sequences, the long-standing protein-folding problem in biology.

“AlphaFold2 has taken structure-prediction strategies to the next level,” Nature said about the discovery in its blog.

The algorithm’s predictions reduced the number of human proteins for which no structural data was available from 4,800 to just 29.

Just as Descartes combined algebra and geometry to create a coordinate system, DeepMind’s AI has fused diverse fields of study into a single algorithmic system to solve sector-agnostic problems. Its influence has grown beyond traditional board games and spread into fundamental science research.



Printable version | Jan 27, 2022 9:15:24 AM | https://www.thehindu.com/sci-tech/technology/the-changing-ai-landscape/article37874297.ece
