IIT Bombay faculty takes a shot at a long-standing problem in computer science

This is the first case in which a super-polynomial lower bound has been established for constant-depth circuits in the algebraic domain.

July 15, 2021 08:17 pm | Updated July 16, 2021 04:57 am IST - Chennai:

What differentiates humans from computers? Is it only a matter of time before efficient algorithms enable a computer to perform the most creative tasks, even to the extent of proving mathematical theorems, an accomplishment considered the acme of human contribution to progress in Mathematics? Or is there an inherent limit to what even a supercomputer can do?

A problem in Computer Science holds the key to the above questions. No, it is not solved yet, and is, in fact, listed as one of the seven “millennium problems”, alongside the Riemann Hypothesis and five others. It carries a prize of a million dollars for its solution. The problem is the P versus NP question, which asks how computing problems should be classified according to the time and resources used up in tackling them, and it is the cornerstone of computational complexity theory.

The news today is not that the P versus NP problem has been solved (we are far from it), but that three computer scientists, including one from IIT Bombay, have made headway on a related problem. This is significant enough to draw the attention of the computational complexity community. The work by Nutan Limaye of IIT Bombay, Srikanth Srinivasan, currently at Aarhus University, Denmark, and Sébastien Tavenas of CNRS, France, has been posted online as a preprint and has been subjected to much scrutiny over the last fortnight.

Understanding complexity

The P versus NP problem may be understood by looking at a situation we have encountered recently, which, albeit grim, is very illustrative. Consider the problem of vacant hospital beds to which patients need to be assigned: say there are 100 beds available, to be matched with demands from 250 patients. In assigning beds to patients, several factors need to be considered, such as whether a patient really needs a bed and whether the bed offered is the one closest to them; moreover, no two patients may be allotted the same bed, and no bed must go vacant. Producing such an allotment “efficiently”, that is, in a short time, may be a daunting task. But if some intelligent supervisor writes down a list allotting the hundred beds to a hundred of the patients, a computer can check it quickly and efficiently. Thus, verifying a “yes instance”, as computer scientists call it, is a much easier task than finding a solution.

Further, the time needed for solving or verifying a yes instance of the bed-allotment problem grows with the number of variables, in this case the number of beds and patients. This is an instance of a problem in the NP category, short for “nondeterministic polynomial time”. The time needed to verify a yes instance of the problem grows as a constant power of the number of patients, and the verification is therefore said to run in polynomial time.
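
What does “checking quickly” look like in practice? Here is a minimal sketch in Python (the data structures and names are invented for illustration; the article describes no actual program) of a verifier for the bed-allotment problem. A proposed allotment is checked in a single pass over the patients, so the time grows only polynomially with the number of beds and patients.

```python
# A hypothetical verifier for the bed-allotment problem described above.
# 'assignment' maps a patient to a bed; 'eligible' records which beds are
# acceptable for each patient (genuine need, distance, and so on).

def verify_assignment(assignment, beds, eligible):
    used = set()
    for patient, bed in assignment.items():
        if bed not in eligible[patient]:  # the bed must suit this patient
            return False
        if bed in used:                   # no two patients share a bed
            return False
        used.add(bed)
    return used == set(beds)              # no bed goes vacant

# Example: two beds, three patients, and a valid allotment of both beds.
beds = ["b1", "b2"]
eligible = {"p1": {"b1"}, "p2": {"b1", "b2"}, "p3": {"b2"}}
print(verify_assignment({"p1": "b1", "p3": "b2"}, beds, eligible))  # True
```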

If it were possible to write a programme that “solves” this problem efficiently, that is, one whose running time scales as a polynomial in the number of variables (and for this matching problem it is indeed possible), then one can say that the problem of allotting hospital beds is in P, the class of problems for which polynomial-time algorithms exist. Any problem that is in P is automatically in NP, but the reverse is not necessarily true.

In fact, it is also believed that P is not equal to NP. That is, the problems for which efficient algorithms exist form only a subset of the problems whose yes instances can be verified in polynomial time. However, this has not been proved. Some problems are harder still: if you are asked to list out all the possible ways of allotting beds to patients, the sheer number of possibilities makes the task so hard that it is not in P, and indeed the answer cannot even be verified in polynomial time, so it is not even in NP.
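
A back-of-the-envelope computation, using Python's standard library and the numbers from the example above, shows why listing every possibility is hopeless: the count of possible allotments alone runs to hundreds of digits.

```python
from math import comb, factorial

# Ways to choose which 100 of the 250 patients get a bed, times the
# ways to match those 100 chosen patients to the 100 beds.
total = comb(250, 100) * factorial(100)
print(len(str(total)))  # 230 -- the count is a 230-digit number
```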

This question, whether P equals NP, is the holy grail of computational complexity theory!

Branching off from the Boolean

It must be said that all the above arguments live in the Boolean world: in discussing the P versus NP problem, we had in mind inputs that were bits (binary digits, strings of 0s and 1s) and algorithms that operate on them using AND, OR and NOT gates. The time taken to run the programme thus grows with the number of gates used.
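
As a toy illustration (not from the article), the exclusive-or of two bits, the building block of the “parity” function discussed below, can be wired from exactly these AND, OR and NOT gates:

```python
# XOR of two bits expressed with the AND, OR and NOT gates mentioned above.
def xor_gate(a, b):
    return (a or b) and not (a and b)

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(xor_gate(a, b)))
# Prints the truth table: 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```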

Another example bearing on the P versus NP question concerns very large prime numbers. If a yes instance is given, that is, someone gives you the n-bit representation of a number and asks you to verify that it is prime, this can be done easily, using a number of logic gates that scales as a polynomial in n. That is, the resources needed grow as n raised to a constant power, so the checking problem is a member of the “nondeterministic polynomial” class, NP. In fact, there is now a polynomial-time algorithm which, when given an n-bit number as input, outputs yes if the input is the binary representation of a prime and no if it is not. This algorithm, due to Manindra Agrawal, Nitin Saxena and Neeraj Kayal in the early 2000s, was a big breakthrough, but that is a story in itself.
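
The AKS algorithm itself is too involved to reproduce here, but a minimal sketch of the older, textbook Miller-Rabin test (a probabilistic method, not connected to the paper under discussion) shows what a polynomial-time primality check looks like: the work grows only polynomially in the number of bits of the input.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: runs in time polynomial in the bit-length of n.
    It can err, with probability at most 4**-rounds, only by declaring a
    composite number prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:          # write n - 1 as d * 2**r with d odd
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)       # modular exponentiation: polynomial time
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # 'a' witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime
```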

However, to construct such a prime number is believed to take far more resources. It is thought to be “exponentially hard”: as the number of bits grows to a large n, the resources needed scale as a constant raised to the power n, which places the task outside the polynomial class. But since this is not proved, it is possible that one day someone will devise a smart algorithm that computes the prime in polynomial time. The whole P versus NP question is to show that there are problems that can be verified in polynomial time (that is, they belong to NP) but whose solutions cannot be computed in polynomial time. This would imply that NP contains problems outside of P.

“People would like to show this because there are applications in cryptography and in randomised algorithms. Though it looks theoretical there are practical applications as well,” says K.V. Subrahmanyam, a computer scientist from the Chennai Mathematical Institute.

There is a bifurcation in the field of complexity theory: the Boolean world and the algebraic world. In the Boolean world, the inputs are bits, binary integers made up of strings of 0s and 1s; the circuits are made up of AND, OR and NOT gates; and the outputs are again binary integers. In algebraic complexity studies, the inputs are algebraic variables, the circuit performs addition and multiplication operations on them, and the output is a polynomial in the variables.
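
A tiny example of the algebraic side, using the sympy library for symbolic algebra (the two-gate circuit is invented for this illustration): the output of an algebraic circuit is not a bit but a polynomial.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# A two-gate algebraic circuit: an addition gate feeding a multiplication gate.
g1 = x + y            # addition gate
g2 = g1 * z           # multiplication gate; this is the circuit's output
print(sp.expand(g2))  # x*z + y*z -- a polynomial in the input variables
```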

In 1979, Leslie Valiant, who is now with Harvard University, defined the algebraic analogue of the P versus NP problem. This is dubbed the VP versus VNP problem. “He suggested an easier question, and he showed that you will have to settle this question if you want to settle the P versus NP question,” says Prof. Subrahmanyam. This question, too, is far from reach at present. However, in algebraic complexity theory, there is the possibility of using known mathematical techniques to prove the result. “Proving hardness in algebraic circuits is quite different from proving hardness in Boolean circuits. There are connections, of course, but there are many differences as well,” says Meena Mahajan, a computer scientist at the Institute of Mathematical Sciences, Chennai. These differences could work in favour of the algebraic complexity community, for, as she adds, “For algebraic circuits there is a richer set of techniques from mathematics that could potentially be used to prove hardness.”

Because it is difficult to prove such hardness results for the most general classes of circuits, people look at simpler models obtained by restricting some parameters. One of these parameters is the “depth” of the circuit. Prof. Mahajan explains, “Loosely speaking, depth corresponds to the time required to evaluate the circuit on a parallel computer with as many nodes as gates in the circuits.”

The simplest of the lot are circuits of constant depth, in which the depth does not grow with the size of the input.

“For Boolean circuits of constant-depth, super-polynomial and even exponential size lower bounds have long been known,” she adds. She is referring to the so-called parity problem: given a long string of 0s and 1s, an n-bit binary input, computing whether the number of 1s in the string is even or odd requires a super-polynomial number of gates in any constant-depth Boolean circuit built from NAND gates. This was shown over 35 years ago, yet no analogous result had been proved in the algebraic domain. “Until now, that is. Now, the LST paper tells us that computing the determinant of a matrix, a bread-and-butter operation in linear algebra applications, provably requires super-polynomial size if we are allowed constant-depth,” she explains.
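
For contrast, parity itself is trivial to compute sequentially; the few lines of Python below (a plain illustration, unconnected to the paper) suffice. The hardness lies in what shallow circuits can do in parallel, not in the function being complicated.

```python
def parity(bits):
    """XOR-fold a sequence of 0/1 bits: returns 1 if the number of 1s is odd."""
    result = 0
    for b in bits:
        result ^= b
    return result

print(parity([1, 0, 1, 1]))  # 1 -- three 1s, an odd number
```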

The paper has not yet been peer reviewed, but it has been online long enough for any serious flaw to have drawn strong reactions. “So far, no flaws have been discovered, and while we await the final reviews, there is no reason to suspect flaws. The result is significant enough that a large section of the computational complexity theory community, and probably the entire algebraic-complexity-theory sub-community, is looking at it closely!” says Prof. Mahajan.

The collaboration

Nutan Limaye had been working on complexity theory for some time when she and Srikanth Srinivasan met Sébastien Tavenas at a conference (FSTTCS 2019) that she was co-organising at IIT Bombay.

Prof. Tavenas had proved an interesting result in 2016 with Neeraj Kayal of Microsoft Research, Bangalore, and Chandan Saha of the Indian Institute of Science, Bangalore (‘An Almost Cubic Lower Bound for Depth Three Arithmetic Circuits’). Since Prof. Limaye and Prof. Srinivasan (who was also at IIT Bombay at the time) were interested in the area, they were delighted when Prof. Tavenas decided to visit IIT Bombay for a longer stay. They continued to work on this problem, and in 2019 they had a small result. The year 2020, with the lockdowns and the onslaught of the pandemic, went by “at a crazy pace”. It was in 2021 that things started to fall into place. “In March 2021, I had a chance to visit Srikanth at Aarhus. I had gone with the thought that we must work on this problem. That’s when we had the first breakthrough,” says Prof. Limaye.

Around March 20 to 22, as she was planning to leave, they obtained another improvement on their results. But it was not until she had returned to Mumbai, while they were writing it up, that they hit upon the idea of “escalating” the lower bound they had and making it much better.

“When you are able to prove a lower bound in a restricted setting, there are other theorems sitting around with which you can pad this result and prove the dream result you want,” says Prof. Limaye. The whole thing worked out, and around the end of May 2021, the escalation happened.

The main result is that they have constructed an explicit polynomial and shown that any constant-depth circuit computing it, such as a so-called Sigma-Pi-Sigma circuit, must be of super-polynomial size; that is, more than polynomial resources are needed to compute it.

“The Sigma-Pi-Sigma expressions are the simplest ‘non-trivial’ expressions for polynomials,” says Prof. Srinivasan. “This is the first strong limitation on their power…the previous best bound was cubic.”
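
What does a Sigma-Pi-Sigma expression look like? It is a sum (Sigma) of products (Pi) of linear forms (the innermost Sigma). A folklore toy example, checked here symbolically with sympy and not taken from the paper: the single product of x and y can be rewritten as a sum of two products of linear forms.

```python
import sympy as sp

x, y = sp.symbols("x y")

# x*y as a Sigma-Pi-Sigma expression: a sum of two products of linear forms.
expr = (sp.Rational(1, 4) * (x + y) * (x + y)
        - sp.Rational(1, 4) * (x - y) * (x - y))
print(sp.expand(expr))  # x*y
```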

To construct the polynomial, they take n × n matrices, approximately log n of them, and multiply them out. Any entry of the resultant matrix is a polynomial that suits their purpose. This is called the iterated matrix multiplication polynomial.
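
Here is a sketch of that construction with sympy, at a toy size (two-by-two matrices and three of them, rather than the paper's asymptotic setting): fill the matrices with fresh variables, multiply them out, and read off one entry.

```python
import sympy as sp

n, d = 2, 3  # toy sizes; the paper takes n large and roughly log n matrices

# d matrices of size n x n, each entry a fresh symbolic variable
mats = [sp.Matrix(n, n, lambda i, j, k=k: sp.Symbol(f"x{k}_{i}{j}"))
        for k in range(d)]

product = mats[0]
for m in mats[1:]:
    product = product * m

# Any entry of the product, say the top-left one, is the iterated
# matrix multiplication polynomial: here a sum of four degree-3 monomials.
print(sp.expand(product[0, 0]))
```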

“The result we have turns out to be surprisingly simple in the end. I think most researchers working on these problems will be surprised but happy about this, as it indicates scope for future work. Complicated results are harder to build upon than simple ones,” says Prof. Srinivasan.

While scientists may differ on whether this result will lead up to showing VP not equal to VNP, they will appreciate that this is the first case in which a super-polynomial lower bound has been established for constant-depth circuits in the algebraic domain, about 35 years after the parity problem was shown to demand super-polynomial resources from constant-depth circuits in the Boolean arena.
