From the Director

DO WE NEED A PETAFLOPS INITIATIVE?

Ken Kennedy, Director of the CRPC and Ann and John Doerr Professor in Computational Engineering, Rice University


The recent announcement from Sandia National Laboratories that an experimental parallel computer system had achieved more than a trillion floating point operations per second on the Linpack benchmark marks the end of an era. For most of this decade, achieving sustained teraflops computing has been one of the signal milestones of the High Performance Computing and Communications (HPCC) program. Although there are many caveats in the Sandia announcement (extremely complex programming was required, the peak performance was achieved only for very large problems, and results for other benchmarks will certainly differ), it is truly a moment to be savored.

There is still much to do if we are to make teraflops computing accessible to the average scientist or engineer. We must continue algorithm, language, and tool development to ensure that there is a technology base sufficient to harness the power of thousands of processors. Still, once the importance of achieving this milestone has sunk in, we must ask: what next? Should we follow the trend of the past few years and de-emphasize the high end in favor of a renewed focus on workstation and PC performance? Should we turn our attention to the problems of network computing, focusing on the communications and software technology that would be required to build a truly global computing system? Is there any further need to increase performance at the high end? These are the questions that must be carefully considered by the HPCC community in the wake of the teraflops milestone.

Over the past two years, the federal HPCC funding agencies have convened a series of workshops to consider the enabling technologies, both hardware and software, that would be needed to build petaflops computing systems, three orders of magnitude beyond teraflops. These workshops have focused primarily on technical issues, such as component technologies, architecture, software, and applications. It is clear that some radical new hardware designs will be needed to reach the petaflops level, and these will pose corresponding challenges for software and algorithms. Many of the software challenges can be characterized as similar to those encountered on the way to teraflops (management of parallelism and memory hierarchy usage), but the increased scale of computation will make the penalty for failure much greater. On the other hand, new hardware technologies such as processor-in-memory will present new challenges to the software effort. Furthermore, achieving petaflops performance may require integrating components of different architectures into a single distributed computing system. A march toward petaflops computing would provide ample challenges for computer and computational science research for the next decade and beyond.

Yet these technical issues are orthogonal to the policy question: Should we undertake such an initiative at all? Certainly there remain applications that need petaflops systems: weather and climate prediction, multidisciplinary design optimization of aircraft and automobiles, analysis and design of materials at the nano- and mesoscales, computational chemistry and biology, and simulation for disaster prediction and recovery. All of these are important for society as a whole, yet none of them will generate a market large enough to produce petaflops systems on an accelerated schedule without government investment.

What would happen if the federal government did not make that investment? Petaflops computing would probably still be achieved, but five to ten years later. Is it worth several hundred million dollars per year to reach the petaflops level a decade earlier? The cost must be weighed against the value of the lives saved by more predictable weather, safer and more efficient planes and cars, stronger and more robust materials, cheaper and more effective medicines, and faster recovery from natural and man-made disasters.

Some critics of the Federal HPCC Program have made the mistake of viewing it solely as a program to maintain the competitiveness of the U.S. computing industry, concluding that the size of the high-end market did not justify the investment. As I pointed out in this column two years ago, this misses the point by a wide margin. (See "Parallel Computing: What We Did Wrong and What We Did Right," January 1995 Parallel Computing Research, page 2.) High-end computing is important for the applications it enables: applications that cannot be run without the most powerful computers. These applications are among the most important to our nation and the world because the results can lead to better, longer lives. If an investment comparable to that required to build two or three military aircraft each year is the cost of making this happen ten years earlier, it will be worth it.
