Towards a new number-crunching combine

C-DAC's Param Padma can now address a new level of challenges, be it weather modelling, genome sequencing or molecular modelling.  

SUPERCOMPUTERS ARE not dead — they have merely reinvented themselves for the new Cyber-enabled Age. Instead of power-guzzling behemoths occupying the floor space of a basketball court, we have a new generation of super-duper computing combines, whose individual units could be as small as a plain-vanilla desktop PC.

Say hello to Grid Computing — the latest, possibly the most promising new `avatar' of the old-style number crunchers. Stand-alone supercomputers are passé.

The decade-old massively parallel processing systems (MPPS) are increasingly considered old hat. Cluster computing is `in' and grids are `hot'. This more or less sums up the message of Bangalore's two international conferences on High Performance Computing (HPC). The sixth International Conference on HPC in Asia-Pacific (HPC Asia) was held between December 16 and 19, the first time the event came to India, thanks to the initiative of the Pune-based Centre for Development of Advanced Computing (C-DAC).

And between December 18 and 21 came that hardy annual, the International Conference on HPC (HiPC 2002). Co-host C-DAC had splendid news for the assembled HPC fraternity: its latest and most powerful platform in the decade-old Param range, Param Padma, a 248-processor, 1-teraflop (peak) machine, had just gone on stream at its Bangalore-based terascale computing facility. (A teraflop is one trillion, or 10 raised to the power of 12, floating point operations per second.)

Using the new 1-GHz Power4 processor, the Padma has a primary storage of 5 terabytes, scalable to 22 terabytes. C-DAC's Executive Director, R.K. Arora, told me during a special briefing for The Hindu that possibly the most exciting development connected with Padma was the interconnect switch for ParamNet-II, the associated storage area network (SAN).

Based on a specially developed co-processor chip, the SAN Switch made India only the third country in the world capable of delivering such a fast switching system.

Indeed, C-DAC was now in a position to market the interconnect switch as a separate item to other agencies building large HPC systems of their own, he added.

With a platform like this, the nation could now address a whole new level of challenges in weather modelling, genome sequencing and molecular modelling, Mr. Arora added.

A glimpse of some of these application directions could be had from the excellent papers presented at the HPC Asia conference, particularly the contribution from S.M. Deshpande of the Indian Institute of Science on a parallel processor for large-scale fluid-structure problems.

The current thinking, that clusters of distributed computing platforms within the same physical environment could provide cost-effective HPC solutions, was best illustrated by the work of the (US) National Centre for Supercomputing Applications, whose Robert Penington provided a useful overview of terascale clusters and the Centre's TeraGrid initiative. C-DAC, meanwhile, has begun work on the India Grid, or I-Grid, to create 10 supercomputing sites in the country contributing a total of 10 teraflops of computing power.

In the latest June listing of the world's top 500 supercomputers, 56 systems were powered by Intel processors, and of these, 33 were cluster computing combos. Such combines are now growing so large and complex that managing them is moving beyond human administrative capability.

Which is why the HPC Asia corridors were alive with talk of "autonomic" computers, a phrase coined by IBM to describe how all big computers in future would self-configure, self-heal, self-protect and self-optimise their working.

Tata Elxsi, having marketed high-end computing platforms for decades, has built up considerable programming expertise in the field, and it now offers its services to others who want to outsource the time-consuming research involved in many HPC applications like drug discovery and computational fluid dynamics.

Their General Manager and Head, HPC and Graphics Solutions, Rajesh Kumar, explained to me how many of the tools they developed, like the Hypergraph partitioning-based Load Balancing tool (HLoB), have a `stand-alone' life of their own. HLoB is a useful software technique to ensure that the various nodes of a distributed computing system carry equal computing loads as they work in parallel.
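HLoB's inner workings were not spelt out, but the flavour of load balancing is easy to convey. The short Python sketch below uses the classic `longest job first' greedy rule to spread unequal tasks across nodes; the task weights, node count and function names are my own illustration, not HLoB itself, which rests on the more sophisticated hypergraph partitioning approach.

    import heapq

    def balance_loads(task_weights, num_nodes):
        # Min-heap of (current_load, node_id, assigned_tasks);
        # unique node_ids keep heap comparisons unambiguous.
        nodes = [(0, n, []) for n in range(num_nodes)]
        heapq.heapify(nodes)
        # Place the heaviest tasks first, each on the lightest node so far.
        for task, weight in sorted(enumerate(task_weights), key=lambda t: -t[1]):
            load, node_id, tasks = heapq.heappop(nodes)
            tasks.append(task)
            heapq.heappush(nodes, (load + weight, node_id, tasks))
        return sorted(nodes, key=lambda n: n[1])

    # Hypothetical example: eight tasks of unequal cost on three nodes.
    for load, node_id, tasks in balance_loads([7, 3, 9, 2, 5, 4, 8, 6], 3):
        print(f"node {node_id}: load {load}, tasks {tasks}")

Running this gives each node a load of between 13 and 16 units against an ideal of about 14.7, which is the essence of what a load balancer buys: no node sits idle while another struggles.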

Another canny reinventor of proven technology was SGI, formerly Silicon Graphics. They showed what one could do with HPC platforms to create `Reality Centres': stunning audio-visual demo centres using frontline visual and imaging technology to bring alive challenging tasks in oil exploration and military simulation. SGI's Avinash Fotedar explained that the company had already created such Reality Centres for BHEL, ONGC, TELCO and PSG Tech.

And as usual one got a glimpse of fascinating research directions: Scientists working in nanotechnology — the science of the very small — have now taken a peek at atom-sized particles of matter and come up with a number of innovative solutions.

Delivering the opening keynote at HiPC 2002, Priya Vashishta, Professor of Materials Science and Engineering at the University of Southern California, described three exciting simulation studies being carried out by his team in the High Performance Computing Centre which he heads.

They have simulated what happens to the layer of oxide, about 4 nanometres in width, that forms on the surface of aluminium structures and gradually eats into the metal. One nanometre is one billionth of a metre, but this micro-thin layer can cause fatigue failure in structures like aircraft wings, and knowing when this might happen could mean a big advance in flight safety.

Another effort now on is to simulate what happens to the protein cells in the human eye when light strikes them, research which could hopefully end in the development of cells that are resistant to common eye diseases.

How nice it would be if one could develop ceramic materials that will not break! This dream could well become reality if current research, in which hard metal-like fibres are introduced into the microstructure of the ceramic, succeeds. Even if the item suffers a sudden shock, it will not break up.

All these simulations have become possible because today's computers can crunch numbers at teraflop speeds, and this has allowed researchers to peer at structures only tens of nanometres in length, explained Dr. Vashishta.

And within ten years computing technology will grow by a factor of more than 1,000, he predicted, enabling one to simulate systems of 10 billion atoms. And you would not need massive machines to do this: on the principle of `divide and conquer', engineers were breaking up such complex problems to run on clusters of inexpensive personal computers, he added.
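To get a feel for that `divide and conquer' principle, here is a toy Python sketch; all names and numbers in it are invented for illustration. A big summation over atoms is chopped into chunks, each chunk is handed to a separate worker process standing in for a cluster node, and the partial answers are combined at the end.

    from multiprocessing import Pool

    def chunk_energy(positions):
        # Toy stand-in for an expensive per-chunk computation,
        # e.g. summing a potential over one block of atoms.
        return sum(x * x for x in positions)

    def divide_and_conquer(positions, num_workers=4):
        # Split the full problem into roughly equal chunks...
        size = (len(positions) + num_workers - 1) // num_workers
        chunks = [positions[i:i + size] for i in range(0, len(positions), size)]
        # ...solve each chunk in its own worker process (on a real
        # cluster each would be a job on a separate PC)...
        with Pool(num_workers) as pool:
            partials = pool.map(chunk_energy, chunks)
        # ...and combine the partial results.
        return sum(partials)

    if __name__ == "__main__":
        atoms = [0.001 * i for i in range(100000)]
        print(divide_and_conquer(atoms))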