Q&A with Ken Kennedy

Source: The Schatz Report on High Performance Computing, January 22, 1997

Ken Kennedy is Noah Harding Professor of Computer Science at Rice University and director of the Center for Research on Parallel Computation (CRPC). One of 24 National Science Foundation-funded Science and Technology Centers, the CRPC's goal is to develop algorithms that are portable across several parallel machines and to provide compilation, editing, and debugging tools that make parallel machines easier to program. Below is his Q&A interview with Willie Schatz, which ran in the January 22 issue of The Schatz Report.

SCHATZ: High performance computing is dead. Long live high-performance computing?

KENNEDY: That's a tautology. In its heyday, high performance scientific computing was an end in and of itself. But that view has really changed in the past five years. Now "high performance computing" is applied at any level, and the most pressing issues are about commercial computing and network computing. The definition now includes connecting Pentiums to the network, and that's an appropriate view. My perspective is that the more parallel computing we have on the desktop, the better off we'll be. Parallel computing will REALLY succeed when powerful machines are available for people to put on their desks, order another processor, and get double or triple the performance.

High performance computing has gotten much broader coverage in the information technology community. Supercomputing no longer means just the very highest end of computing. It now means the highest end at every level. But there are still many problems that only the very highest end supercomputers can solve.

That fact seems to be getting ignored lately. There's definitely been an overall dip in attention to the "high end." I know there's a working group in the government of people who are very concerned about that flagging attention. I'm not sure who's in the group, but I wouldn't be surprised if the National Security Agency and other similar agencies were very concerned, because their problems can only be solved by the absolute highest-end machines, for which there may be no other uses. But I think this dip in attention is temporary. People will come back to the "high end" because they will have to recognize that it matters for the good of society.

Part of the lack of attention is because the government has stepped out of the picture. The government always drove the highest end, and it still needs to because no private company is going to spend the money to solve the most difficult problem for the good of the nation. We're seeing a lot of that now in DOE's (Department of Energy) ASCI (Accelerated Strategic Computing Initiative). The government's paying for one kind of machine from three different vendors (Intel, IBM and Cray Research/SGI). But the government's not supporting development costs like it used to.

SCHATZ: That's because DARPA (Defense Advanced Research Projects Agency) in the 1980s got so much grief for supporting particular architectures to the detriment of others. So how will the government's change in investment strategy affect the development of the high end?

KENNEDY: There's still a need for different kinds of computing. The current model, commodity processors connected to a high-speed network and combined with a memory hierarchy, may not be the only model. There's considerable skepticism that this model will work for every situation. But everyone, from SGI to IBM (even if it still uses message passing), is building machines that way. Other models surely will be needed to do certain kinds of computing. That's a logical outgrowth of using workstation technology to build higher performance machines.
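
To picture the model Kennedy describes, here is a minimal, purely illustrative sketch (not from the interview) of the commodity-cluster approach: independent processors, each with its own memory, cooperating only by passing messages over a network. It uses MPI in C; the interview mentions message passing but not MPI, so the library choice and the trivial computation are assumptions made for illustration.

    /* Minimal sketch of the commodity-cluster, message-passing model:
     * each processor owns its data and cooperates only via explicit
     * messages over the network; no shared memory is assumed.
     * Build: mpicc sum.c -o sum   Run: mpirun -np 4 ./sum            */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this processor's id  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processors */

        /* Each processor computes a local piece; the pieces are then
         * combined explicitly across the network.                     */
        double local = (double)rank, total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d processors = %g\n", size, total);

        MPI_Finalize();
        return 0;
    }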

But the memory hierarchy in the larger machines may be too difficult to manage. This is pure speculation, but I wouldn't be surprised if, inside the NSA, they're worried about any application where the memory access pattern is not predictable. When that happens, users have a hard time managing the memory hierarchy. That's true for all commercial applications and many integer-based application problems. And when you're operating on an irregular mesh, your results can be all over the map. That's a common problem in technical computing, but I'm optimistic we can solve it.
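
To make the memory-hierarchy point concrete, here is a small illustrative sketch in C (not from the interview; the function and array names are invented) contrasting a predictable, stride-1 access pattern with the indirectly indexed access typical of irregular-mesh and integer codes. The second loop is the kind whose memory behavior is hard to predict and therefore hard to manage.

    #include <stddef.h>

    /* Regular access: a stride-1 walk that caches and prefetchers
     * handle well, so the memory hierarchy is easy to manage.         */
    double sum_regular(const double *a, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Irregular access: every load goes through an index array, as on
     * an unstructured mesh, so the memory system cannot predict where
     * the next reference will land.                                   */
    double sum_gather(const double *a, const size_t *idx, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[idx[i]];
        return s;
    }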

It's therefore necessary for industry to take a foray into ensuring that the Branscomb Pyramid (investment at the tip, but not the broader parts) works out. Because when the smoke clears, everyone will realize that the nation still needs very high-end servers to solve the Grand Challenges and other problems that absolutely require high-end computing. But there will never be demand for enough of those machines for the market alone to drive and develop future architectures and systems. That's why ASCI is pushing companies to build machines they wouldn't otherwise make.

SCHATZ: Then the government is investing as a user, not a developer?

KENNEDY: If the government only invests as a consumer, will the architectural innovations necessary to advance the development of HPC systems happen? No one knows yet. That's a very costly area, and it's very hard to determine what to do. Is this industrial policy? Everyone's got their own answer to that. Is it okay for DARPA to invest as long as it doesn't skew the competition? If it does invest, it must be in a pre-competitive way that will accelerate getting the right systems to the government and the public.

There's no doubt in my mind that the government must continue to invest in technology. The small companies that started in the early days (1991) of the HPCCI (High Performance Computing and Communications Initiative) are now gone. And small companies can't possibly do advanced technology and architectural work. It's now very difficult to create a small company that will build a supercomputer, even with government help. So will the big companies step up?

If the government withdraws from the expenditures that drive the development of high-end systems and supports that industry just as a consumer, are we doing something that we'll regret later? I agree with that logic, but I'm not sure the effect will be one that everyone wants to see.

There's going to be a lot of thinking about where the country is going with computing in the next two decades. The goal is for a robust petaflops architecture and usable software to emerge from any such studies. But that means a shrinking market for the high end as more needs are met on the lower end. So the HPC pinnacle is becoming narrower, with fewer applications.

That must be weighed against addressing broader applications. For example, crisis management is never going to create a market, but is it something the government should invest in? Yes. And it's important for the government to invest in other areas-public health, disaster recovery-where there also are no markets. No commercial company is going to build machines for those areas. But there will be considerable fallout for commercial manufacturers from the government's investment. There just won't be that many government installations.

So the conclusion is that even though the traditional "high end" market is getting smaller, society still needs those machines to do the huge jobs, like simulating nuclear testing, that no other machines can do. The end of the Cold War and the shift in the government's spending pattern have brought a major re-adjustment to commercial interests. But I'm still very optimistic about high performance computing. There's definitely been some loss of attention on the very highest end. But that's appropriate, because for a long time that segment of high performance computing occupied a space that was out of proportion to the amount of revenue it generated.

SCHATZ: You wrote in the Fall 1996 Parallel Computing Research that for the Department of Defense's High Performance Computing Modernization Program (DOD Mod) to succeed, "the challenge is to bridge the very different cultures of DOD and academia." How is that collaboration proceeding?

KENNEDY: The academic and DOD markets are used to very different environments and very different cultures. DOD works in specific markets with specific DOD needs in mind. The military wants accurate simulations before the results are outdated. Academic researchers want credit for their ideas, not just to have them used to solve problems. There are always misunderstandings when these two cultures come together. DOD really thinks it needs people to simply develop software, get the applications onto the right platforms, and solve its problems as fast as possible; academics want to test their software on users and then worry about the results. So there's obviously going to be a period of adjustment.

That process requires attention to tasks that academics do not find pleasant. They're obviously concerned about the rewards for the country, but they're not used to helping users who are really focused on problems. There's no real animosity, but there are real issues that come up in getting things worked out. There's confusion about who's going to do this and who will do that. DOD Mod users aren't used to working with academics, and academics are not used to users who are so focused on problems that they don't want to use technology for its own sake. But lots of progress is being made.

SCHATZ: How can PET (Programming Environment and Training) accelerate bridging the Great Cultural Divide? [The CRPC is part of the winning PET teams at three of the DOD Mod's four Major Shared Resource Centers (MSRCs).]

KENNEDY: PET is specifically aimed at providing the support DOD users want. If the program works well, it will drive a lot of progress in HPC architectures and software development at the high end. The goal is for the best academic technology to be actively used by DOD. We want to make sure there is a short cycle for good academic ideas to find their way into the MSRCs. If the process takes twice as long as it takes to get ideas into the NSF supercomputing centers, we won't have succeeded.

There's a natural lag, of course. Academic technology starts as an R&D prototype that isn't usable by anyone. Then it goes into the advanced development process. Then it's picked up by commercial vendors and delivered by them to commercial markets. High Performance Fortran (HPF) did that. But high performance visualization tools and images haven't yet been picked up by the commercial world, so the vendors aren't selling those products.

SCHATZ: Which side is more resistant to absorbing the other's culture?

KENNEDY: DOD holds its applications close. Its people are used to stating needs that they expect to be filled. But in many ways it's the same problem faced by anyone using HPC to solve their biggest problems: what's the fastest, most effective way to get onto the right platform?

DOD's sensitivity is understandable. There's a different level of insularity in that community. That's necessary because the users deal with so much secret stuff. But now they're being much more open, so the academic community must be very sensitive to that. Many of the technical problems are similar.

Overall the cultural adjustment is going slowly but very well. No one expected to solve this overnight. So each side is developing lines of communication. I'm pleased with the progress to date. And don't forget that DOD MOD is pushing a lot of technology a very long way. When money talks, people listen.

SCHATZ: Is vector processing "out" and parallel processing "in"?

KENNEDY: That's a fair statement, but it's really just the large vector instruction sets that have fallen into less use. There are plenty of those machines, and they were enormously successful. They'll come around again.

I think people tend to forget that vector processing is really fine-grained parallelism. And that's not dead by any means. Vector processing was a very good idea for its time. But the technology pendulum is moving away from the vector form of parallel processing. The key is instruction rates. As those have gotten much faster and memory has gotten much larger and much cheaper, 64-processor and 128-processor machines are no longer that big a deal. Nevertheless, vector processing once dominated the parallel processing field. And lots of technologies developed for vector processing are used effectively for parallel processing. So I'm still going to have a chapter on vector processing in my book.
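
As an aside to Kennedy's point that vector processing is fine-grained parallelism, here is a small illustrative sketch in C (not from the interview, and using modern OpenMP notation that postdates it) showing the same loop expressed as fine-grained vector parallelism and as coarse-grained multiprocessor parallelism; the two can be combined on a machine whose processors have vector units.

    /* saxpy: y = a*x + y, the classic vectorizable kernel.            */

    /* Fine-grained parallelism: independent iterations mapped onto
     * vector (SIMD) lanes, the software analogue of a vector
     * instruction.                                                    */
    void saxpy_vector(float *y, const float *x, float a, long n) {
        #pragma omp simd
        for (long i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    /* Coarse-grained parallelism: the same iterations split across
     * processors.                                                     */
    void saxpy_parallel(float *y, const float *x, float a, long n) {
        #pragma omp parallel for
        for (long i = 0; i < n; i++)
            y[i] += a * x[i];
    }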

SCHATZ: The conventional wisdom in some parts of the HPC community is that Pentium Pros with a lot of memory can blow away old supercomputers. Can they? If so, what are the implications for the "traditional" HPC industry?

KENNEDY: I think it was the Brooks-Sutherland report that developed the model of supercomputers as a time machine: high-end supercomputers do a number of years earlier what desktops do a number of years later. So what's the value of traveling forward in time to solve problems now rather than waiting 10 years to solve them on a workstation? That's still an accurate comparison. But now we may have to ask: How much closer is the high end getting to the low end? How economical is it to construct a high-end machine? We're still 8-10 years away from a high-end box on every desktop.

So do we build the entire airplane now or wait about a decade? Traditionally we've been willing to pay a premium for a world-wide lead in crucial information technology. We still need to do that. For example, do you really think the design for the current Pentium Pro was laid out on another Pentium Pro?

SCHATZ: How will the HPC concept play in the 105th Congress? Should the HPC community be more visible and more vocal? Or should it maintain its previous profile?

KENNEDY: It's premature to say how our community will be perceived in the new Congress. It's still hard to determine the effect of the setbacks from the last Congress. The HPC community must make a very strong and persuasive case that it is an important technology that must be pre-competitively supported if the country wants to keep its international information technology lead.

But that doesn't mean we have to lobby in the negative sense of that word. We just have to provide Congress with the information it must have to know what's happening. Lots of times the members and the staff don't have the right information. So we've got to make a tremendous effort to make sure the correct information gets into the right hands.

SCHATZ: More so than in the past? Does that mean getting more vocal, more visible and more credible?

KENNEDY: We've definitely got to do more than we've done in the past. It doesn't matter how good your work is if people don't see it. And if they don't understand it, they don't appreciate it. We need to interpret the results of the work we're doing so that members of Congress really understand why it's so valuable to the country.

We're still interpreting the changes of the last Congress. It cut overall science spending, but HPC was neither positively nor negatively affected. And NSF and NIH (National Institutes of Health) did pretty well. But HPC will definitely be hurt this time if we don't make the strongest possible case for its continuing importance.

We know we've got a good story to tell. We just have to tell it well.

