*On March 23, the Norwegian Academy of Science and Letters announced its decision to award this year’s Abel Prize to Dennis Parnell Sullivan, an American mathematician now at the State University of New York, Stony Brook, U.S. The Abel Prize is a top honour in mathematics, comparable to the Nobel Prizes for the sciences in that it is awarded for major contributions to a field of math. Named after the Norwegian mathematician Niels Henrik Abel, the prize was instituted by the Norwegian government in 2002. In this interview, Prof. Sullivan talks to The Hindu about his interest in mathematics, early influences and more.*

At what stage in your life did you perceive yourself to be a mathematician?

The second year of college, because I didn't know mathematics existed as a profession until then.

I was in chemical engineering [at Rice University, Texas]. But at that university, all the science students, electrical engineers, and everyone took math, physics and chemistry. In the second year, when we did complex variables, one day, the professor drew a picture of a kidney-shaped swimming pool and a round swimming pool. And he said, you could deform this kidney-shaped swimming pool into the round one. At each point, the distortion is by scaling: a little triangle at this point goes to a similar triangle at the other point. At every point, that's true. We had a formula for the mapping, because we were taking calculus, and we had a notation for discussing it, which we had been studying. But this was like a geometric picture. This mapping was essentially unique. And the nature of this statement was totally different from any math statement I'd ever seen before. It was, like, general, deep, and wow! And true! So then, within a few weeks, I changed my major to math.

I was able to use that theorem in the 1980s. This was serious.

I used this wonderful structure in later research… especially, during a ten-year struggle proving mathematically, by 1990, a numerical universality discovered by physicists in the mid-1970s.

Could you tell me the name of the theorem that you proved?

Well, I don't like names; I like the theorems though! (Laughs) No, no, I'm just kidding.

I proved something that physicists discovered. I used the theory behind it to prove something called the universality of the geometry of a certain dynamical process that involved renormalization, as in the physics use of the term in quantum field theory. It was sort of in that genre of ideas, but it was a truly math statement. It could be formulated mathematically, and yet the physicists computed this.

The conceptual step in that proof involved the idea of the Riemann mapping theorem. So that was the conceptual part of proving that universality.

In fact, the theorem is true by experiment in a whole continuum of situations; it's only the integer ones where it's been proven, because you have to use this idea from the Riemann mapping theorem. And that idea doesn't work in the other cases, as far as we know.

You used to organise lectures by various mathematicians, where the format was to discuss the minute details regardless of the time taken. Do you still do that?

That was called the Einstein chair seminar. And it was, well, it was the regular format – you invited speakers, they would come and tell their stuff. But we didn't have a time limit. During an hour-long talk, you can stop the speaker a few times. You can't stop him all the time, you know, so it would be open-ended. Sometimes it would go, you know, more than three hours. In fact, the record is from 2.00 to 8.30. I think, finally, the guy had to have a beer. He was from Germany. (Laughs) He wanted to beat the record, though, and he did beat it.

Eventually, I would ask many of the questions, but then the students would start asking, too, because it was okay, and there's no such thing as a stupid question. That was the rule. No such thing as a stupid question. But it should be a precise question. That was the Einstein chair seminar. And it's still going on, but now in a more traditional format, although not always… And now we can do Zoom.

After Grigori Perelman's work in 3D geometric topology, have there been any major advancements? What is the field like after his results?

Well, I mean, I'm an outsider in that field. I'm very interested in it, but I'm not really an expert. It has been very active since then, because they now know how to describe, in principle, all three-dimensional manifolds. If you have any kind of a knot or a link, you can think of cutting out small neighborhoods from space around that knot or link; then you're left with a three-dimensional manifold, and you break it up into geometric pieces – the [William] Thurston picture. So it's like when you have a linear transformation matrix and you know its eigenvalues: you know a lot about it. Right. They have something like that now, for knots.

It opened up the possibility of proving many things about the basic Poincare group, or fundamental group, of every three-manifold, using group-theoretic properties. And this is interesting, because already in dimension 4, 5, 6, and so on, the possible groups you can get are everything. One has known since the '50s that it's a logically undecidable question. In dimension three, the groups that appear are not arbitrary. They're very rich and very structured, and Perelman's proof of the Thurston picture gives you an opening to it. Thurston proved a lot of it [but] didn't prove the whole thing. But this step gives you a way to analyse the groups in three-dimensional spaces. It's been very active; one of the Breakthrough Prizes, for Ian Agol, was based on what he did about these groups after Thurston's picture and Perelman's proof. It allowed many breakthroughs, in my opinion, but I'm not really a bona fide expert. Okay? I've been watching it, though. All this time.

I've heard that you have been interested in the Navier-Stokes equation for a long time. Can you tell us about how you got interested in it? (The Navier-Stokes equation is now counted as one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute. Of the seven, only the Poincare Conjecture has been proved, by Grigori Perelman.)

First of all, it's related to being a chemical engineer. If you're a student of chemical engineering in Texas, there's the petrochemical industry, the oil industry, and organic chemistry and plastics, all around Houston. If you are good in science and you work on that and become an engineer, you can get a good job and nice work at a research center. So it's a good thing to do. In fact, during the summers, I had jobs at various such places. Once I had to study the computer methods they were using to do what's called secondary recovery. You know, when they find oil, because of the pressure, if they drill a hole, the pressure makes it shoot up, right? But after they drill for 20 years, the pressure goes down. What they do then is go to another part of the field, and they drill and pump in water to create pressure that will push the oil back to their wells, and for this they have to solve the linearised version of the Navier-Stokes equation. I didn't know that name then, but it's a linearised version of the Navier-Stokes equation. At the summer job where I was studying the possible computer programs, I had a certain question there. That was around 1960. And that was related to how they would place their wells for secondary recovery.

Moira [Moira Chas] and I were visiting Saudi Arabia 35 years later, and I went to this company, Aramco. This is a big, huge company... and they were studying the same problem from 35 years before.

So in a sense, I was aware that there's this huge industry related to fluid flow through porous media. It was astonishing to me to find out, as I did in the 1990s, that these equations in three dimensions, the beautiful equations, are not solved.

And then later, in 2000, that became one of the millennium problems. There are these famous seven problems. The only one that's been solved is by Perelman [Poincare conjecture], the one that you just referred to.

I had an idea that maybe the idea of calculus, and expression in terms of partial differential equations, is a little too presumptuous. Namely, you have this physical process, which we know is atomic – it's made out of particles. But you see these smooth flows everywhere. So you say, okay, let's model this with Newton's calculus, right? Then you find a differential equation, like in every physics course: you make a little box, put dx, dy, put in the force, write something, and then take the limit as the box goes to zero to get the equation.

Well, that has worked like a charm for many problems, right? You get a beautiful equation here. I love the equation, because it has a geometric meaning that I understand, but it hasn't worked!

I thought about it. Maybe you've done things out of order. First, you imagine the fluid, and you take this calculus limit and get a beautiful equation. Then when you want to put it on a computer, what do you do? You go backwards. You say, you can't put an infinite formula on the computer, and you can't put derivatives, right? Instead of derivatives, you put (f(x + h) - f(x))/h. So you put that on the computer, and then you crank away.
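The substitution Sullivan describes can be sketched in a few lines (a minimal illustration, not anything from his actual work): on a computer, the derivative f'(x) is replaced by the difference quotient (f(x + h) - f(x))/h for a small step h.

```python
def forward_difference(f, x, h=1e-6):
    """Approximate f'(x) by the forward-difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Example: f(x) = x**2 has exact derivative f'(x) = 2x, so at x = 3
# the quotient should come out close to 6.
approx = forward_difference(lambda x: x * x, 3.0)
print(approx)
```

Smaller h brings the quotient closer to the true derivative, until floating-point cancellation takes over; that trade-off is part of why discretisation is more than a mechanical step.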

Well, that's what you started with! You went to this ideal continuum, made this PDE. Then you take the PDE back out, and you do a discrete process. You're going in and out. So I thought, why don't I just go directly this way?

And there's one precedent for this: when you study the Laplacian, the heat equation. If you just have a conducting medium and you put some heat down, it spreads out like a Gaussian. But that formula can be deduced by putting down discrete little dots, equally spaced, and thinking of a particle of heat that moves with probability one half this way and probability one half that way. And then you write down that coin-flipping process. It turns out you see things in the discrete approximation that allow you to make sense of this equation in a much more general way; it gives you a great advantage as a theory of Markov processes – Kolmogorov's work, all that stuff comes from this probabilistic viewpoint. So I could hope that if you did a discrete process, there might be some nonlinear version of something new that you would see, that would help you.
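The coin-flipping process can be sketched as a toy simulation (names and parameters are my own, chosen for illustration): many particles start at the origin and at each step move left or right with probability one half. Their positions spread out diffusively, the discrete picture behind the heat equation.

```python
import random

def random_walk_positions(n_particles, n_steps, seed=0):
    """Simulate n_particles independent coin-flipping walks of n_steps each,
    all starting at the origin, and return their final positions."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        x = 0
        for _ in range(n_steps):
            # Probability one half to go this way, one half to go that way.
            x += 1 if rng.random() < 0.5 else -1
        positions.append(x)
    return positions

positions = random_walk_positions(10_000, 100)
mean = sum(positions) / len(positions)
var = sum((p - mean) ** 2 for p in positions) / len(positions)
# The mean stays near 0 and the variance grows like the number of steps,
# matching the Gaussian spreading of heat.
print(mean, var)
```

Plotting a histogram of `positions` shows the bell curve emerging from nothing but coin flips, which is the point of the probabilistic viewpoint.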

I've been trying for 30 years. And I don't have any theorems to show for it. But I'm understanding more and more.

How do you choose a problem to work on?

I usually find I want to think about [a question] from the beginning; I want to understand it. That's all I want – to try to understand it. Sometimes that solves problems; it's not like I choose a problem. I want to understand an area. Yeah.

I've seen instances where a story kind of comes to equilibrium, and it's like the perfect answer. If you actually look at that answer, and then you read backwards through the whole history of ideas, you'll see [that] way back at the beginning, if they had looked at it, not the way it went, but turned slightly and gone this way, they would have gone directly to the answer.

You can sort of forget a lot of the false starts, then make it very simple another way: you can make a very simple picture, assume this, prove this this way. And there's one idea here, this is one picture, and then some easy stuff. That's often the way math stories are, from beginning to end. That's not exactly the history.

Mathematics, when it's done and perfect, is absolutely perfect and simple. If things are not simple, then they're not done.

It turns out that the real great things have the property that the steps get submerged into the definitions, and they get taught to the undergraduates and eventually in high school. Euclid's geometry – it's high school stuff, right? But, you know, deep. So everything should be simple and basic.

Do you see math in everything around you?

Oh, yeah. Well, there's a blessing and a curse to that. Because, you know, the beautiful thing about math, which is really one of its most powerful aspects, is that the concepts can be perfectly clear. There's only one point, related to Gödel's theorem, where there's some ambiguity: we start from some simple notion, which is called a collection of objects. That notion has properties, and one has to assume that. And then, if you assume that, with those properties, mathematics begins. Relative to that assumption, it's perfectly precise.

Math has this potential clarity of concept. You never see mathematicians arguing about a math statement. They can agree very quickly that they're talking about the same thing. Then, if one of them thinks it is true, then, “Oh, do you have a proof?” Either he has a proof or he doesn't. If he doesn't have a proof, we agree on the statement, but it remains unknown. He doesn't say, “Well, I think it's true.” You know, that has no meaning. Or [if he says] “I don't think that's true,” then it is, “Well, do you have a counterexample?” If the answer is “No, but I don't think it's true,” then I don't care. And they agree; they don't get mad at each other, because those are the rules of the game. Because the concepts are precise. This is remarkable. Among all the other sciences, there's nothing like this.

Now, that precision, though, is sort of a curse, because even when you're talking to people, almost everything that's said is not precise, because there are tacit assumptions.

I'm too literal. My wife makes jokes with me every three minutes, and I take them seriously, for example.

But then the other thing, the positive side of the question, is that I like to talk to six-year-olds about math, because they're like little mathematicians. They want to know how big numbers are, how big space is. And I like to see the picture in a proof. If you have a picture that shows the essence of a proof, you could show it to a child.

It’s just natural to see patterns. But then there is this thing about precision, of language, which is sort of an inconvenience in some ways, although it's a blessing in mathematics itself.

Yeah, so I usually approach things in this mathematical way, a little more than I should sometimes. And I think many difficult math things often have a little picture that can be shared with somebody. In fact, even Hilbert said that if what you're doing can't be explained to the common man, you don't understand it. He said that when you meet the person on the street, you don't need formulas. That's why I hate names and notation, because they allow you to pretend you know what you're talking about when you don't necessarily know what you're talking about. You have a lot of jargon.

What are your thoughts on the coexistence of faith and science?

Well, I think I kind of replaced my spirituality and Catholicism with mathematics. You want to know what you can know, right? What can you know to be true? Math is pretty good for that; unfortunately, it only deals with very simple questions. You know, psychology and physics deal with the nature of the universe; mathematics deals with physics. There is something remarkable and unexplained in the universe we live in, and also in mathematics itself. I believe mathematics would be the same if there's life on other planets, though I think they might have discovered different parts and gone in some different direction. Like, if we were just doing computer science, you would emphasise graph theory and combinatorics and algorithms more, but, you know, they might not have done Lie groups yet. Those are all sort of primitive aspects of your question.

But if you want to know what's true, then math is a pretty good place to start establishing what it means to know something. In math, we have reached a certain point: we don't know any absolute fact, in some sense, unless it involves finite systems. Anything that involves something like calculus, with an infinite system, can only be rigorous and known to be true relative to this basic starting point I mentioned about set theory – you have to assume there are sets of points... Then you can build the integers, you can build the real numbers, you can build the continuum, then you can build spaces and Lie groups and the rest of mathematics, but it's all relative to this assumption at the beginning. But that's knowing something, you know! If this is consistent, then all of this is consistent, and this is very simple and very believable. So that's the kind of religion, in a way: mathematicians believe that this basis is okay. They're willing to spend their lives working on that. So, that's almost religion, right?

Can you tell me something about your experience in India? You've been there a couple of times at least.

I think my first visit was to Chennai, which I had trouble finding because I knew it as Madras. I remember I was trying to book a plane ticket to Madras and had trouble getting there. Let's see, if I just think back about it... I remember the cows in the street in Chennai, and the cars, and everyone being together. There's no problem.

I also learned that vegetarian food could be delicious. Well, I've had a lot of Indian graduate students, so I kind of know them. I know Indian people.

Do you have a message for the readers?

I could say something that I say to my graduate students: critical thinking is important. It's good to think critically, examine your beliefs, understand why they're commonly held, and then maybe, in certain circumstances, modify them slightly to make them work better. That's what has helped me understand mathematics better. For example, even what you learned from your masters is sometimes just their perspective. Having a perspective is excellent, [though] it is kind of like a bias. It is good because it makes you more effective and you can put your energy in those directions, right? But sometimes it's not right: in some situations, or from some points of view, there's a different way to look at it, and this may help you make progress in a direction that was blocked with the previous perspective. This is not [being] critical in the sense of [being] negative; it's critical in the sense of examining. I'm borrowing this from a wonderful interview of Bertrand Russell in 1952. He says a lot of very charming and very intelligent things, but he also emphasises the point that when you have a perspective, it sometimes allows you to make irrational decisions. So, you know, it's good to be critical, even of your own beliefs, because it helps. That works in math too.
