PGI'S DOUG MILES COMMENTS ON THE STATE OF HPF

Source: HPCWire, January 24, 1997
By Alan Beck

Wilsonville, Ore. -- High Performance Fortran (HPF) is experiencing burgeoning popularity among certain groups of HPC users. To obtain more insight into the potential of this new language geared for parallel platforms, HPCwire interviewed Doug Miles, director of marketing for the Portland Group, Inc. (PGI). Following are selected excerpts from that discussion.

HPCwire: How do you view the current state of High Performance Fortran, and where do you anticipate taking it in 1997?

MILES: "What you've seen in the past year are vendors moving closer to full HPF, so there are implementations from companies like PGI that support almost all of the HPF standard. Along with that progression in functionality, there has been significant progression in performance to where you see fairly impressive HPF NAS parallel benchmark results like those NASA reported at SuperComputing 96. That probably is the biggest issue from a user perspective -- seeing that progression of performance. For a long time there was this impression that message-passing was more efficient than HPF, and that perception has changed.

"In terms of where we go over the next year, there will be a lot of under-the-hood changes that go on in our implementation. Especially in regards to implementations on logically shared memory machines such as the Cray T3E, the Hewlett Packard Exemplar and the SGI Origin.

"On those machines there is really no need to do message passing, because the operating systems support logically shared memory. We're bypassing the message-passing layer and taking advantage of those capabilities directly, and it makes for a more efficient implementation on those machines. In some sense, those machines are better suited for HPF than they are for message passing, which is designed for machines that have logically distributed memory."

HPCwire: Can you characterize what was shown in those NAS parallel benchmark numbers?

MILES: "The two most significant were the NAS SP and the NAS BT benchmarks, which are two of the pseudo applications. The HPF results on the IBM SP2 and the Cray T3D are now within 30 percent, in some cases, of the corresponding MPI numbers. They are getting awfully close to parity at this point, and we're continuing a steady progression in terms of performance improvement. Further improvement will occur in the upcoming releases of the PGI compiler."

HPCwire: Do you see parity emerging in the upcoming year or would you say it is more than a year distant?

MILES: "I would say we will see parity in the upcoming year on the logically shared memory machines."

HPCwire: Can anything be done to address HPF's difficulties with irregular and unstructured problems?

MILES: "There are applications that we have seen implemented to do particle simulations, n-body simulations and some of the oil reservoir simulations that do a significant number of irregular data accesses. Those codes, in a lot of cases, run very efficiently. HPF currently does not have the capability to perform irregular array distribution, but that has been added in the next definition of the HPF standard, HPF2. However, in some sense I think that irregular problems are hard to do in parallel, no matter how you express them."

HPCwire: With HPF, responsibility for exploiting parallelism is shared between the user and the compiler, and the user has to point out where parallelism exists or where (s)he wants to employ it. Can anything be done to make this more automagical?

MILES: "HPF is not intended to be automagical. The intent of the HPF standard was to present a well-defined parallel programming language so users can create parallel applications. It is compatible with Fortran 90 so that it is relatively easy to migrate Fortran 90 applications to HPF. HPF was not intended as a magic solution; we've never portrayed it that way. There are some things that can be done to automate data distribution. We are certainly capable of parallelizing DO-loops which operate on data that is distributed. HPF is designed to be a language that allows you to write parallel programs, it's not designed as a tool that will parallelize an existing legacy application."

HPCwire: You mentioned there will be the first annual HPF users group meeting in Albuquerque this February. What events are to take place there?

MILES: "I don't have the agenda with me, but Ken Kennedy and Bob Boland at Los Alamos are two of the organizers for that meeting. It's being organized by the Center for Research on Parallel Computation, and the intent is to have it be a very interactive session where current HPF users and potential HPF users can present results, can interact with the compiler vendors and systems vendors, and can let them know what they can do to improve their products."

HPCwire: Do you have any final words about what the Portland Group is doing; things it sees on the horizon, or things it anticipates?

MILES: "The most significant is the support for a native shared memory implementation of HPF, which bypasses message passing. Next year we will have a native HPF and Fortran 90 for Pentium Pro systems, and we see a significant amount of momentum in terms of users adopting Pentium Pro systems for scientific and technical workstations. Our goal is to provide a uniform programming environment from those systems all the way up the pyramid to the largest supercomputing systems available,such as those being developed by Intel, SGI/Cray, and IBM as part of the ASCI program."

For more information, see http://www.pgroup.com

--------------------
Alan Beck is editor in chief of HPCwire. Pamela Richards, associate editor of HPCwire, assisted in the preparation of this feature. Comments are always welcome and should be directed to editor@hpcwire.tgc.com