Parkbench Committee Releases Parallel Benchmarks

Within the high-performance computing community, there has been a growing need for a standardized, rigorous, and scientifically tenable methodology for studying the performance of high-performance computer systems. Such a methodology would help to:

  • increase the understanding of these systems, both at a low-level hardware or software perspective and at a high-level, total system performance perspective
  • assist the purchasers of high-performance computing equipment in selecting systems best suited to their needs
  • reduce the amount of time and resources vendors must expend in implementing multiple, redundant benchmarks
  • provide valuable feedback to vendors on bottlenecks that can be alleviated in future products
  • improve the status of supercomputer performance analysis as a serious scientific discipline
  • reduce confusion in the high-performance computing literature
  • establish and maintain high standards of honesty and integrity in the high-performance computing profession

The Parkbench committee was established with these goals in mind. Originally called the Parallel Benchmark Working Group, PBWG, the committee was founded at SUPERCOMPUTING '92 in Minneapolis, when a group of about 50 people interested in computer benchmarking met under the joint initiative of Tony Hey of the University of Southampton (UK) and CRPC researcher Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory. Representatives came from universities, laboratories, and industries and from computer manufacturers and computer users on both sides of the Atlantic. Roger Hockney of the University of Southampton chaired the meeting.

The group agreed upon several objectives to meet their goals. A comprehensive set of parallel benchmarks was needed that was generally accepted by both users and vendors of parallel systems; furthermore, a focus for parallel benchmark activities was needed to avoid unnecessary duplication of effort and proliferation of benchmarks. The group also wanted to set standards for benchmarking methodology and result reporting and to establish a control database/repository for both the benchmarks and the results. Furthermore, the benchmarks and results needed to be freely available in the public domain.

The first year's work has produced a report and an initial set of benchmarks. The committee met at the University of Tennessee in Knoxville on March 1-2, 1993, May 24, 1993, and August 23, 1993 to discuss the evolving draft of the report. The report is the final result of these meetings, and is the first official publication of the Parkbench committee. It will be distributed at a public 'Birds of a Feather' meeting at SUPERCOMPUTING '93, Portland, on November 17, 1993, together with the first release of the Parkbench parallel benchmarks.

The initial focus of the parallel benchmarks is on the new generation of scalable distributed-memory message-passing architectures, for which there is a notable lack of existing benchmarks. For this reason, the initial benchmark release concentrates on Fortran 77 message-passing codes using the widely available PVM message-passing interface for portability. Future versions will undoubtedly adopt the proposed MPI interface when it is fully defined and becomes generally accepted. The committee's aim, however, is to cover all parallel architectures, and this is expected to be achieved by producing versions of the benchmark codes using Fortran 90 and High Performance Fortran (HPF). Many shared-memory architectures provide efficient native implementations of PVM message passing and will also support HPF compilers, so they will be covered by these routes.

The Parkbench committee agreed to divide its work among five subcommittees, corresponding to the five substantive chapters in the report, each with a leader responsible for assembling the contents of that chapter and its benchmarks for the committee's approval.

The subcommittees and their leaders are:

  • Methodology (David Bailey, NASA Ames)
  • Low-level Benchmarks (Roger Hockney, University of Southampton)
  • Kernel Benchmarks (Tony Hey, University of Southampton)
  • Compact Applications (David Walker, Oak Ridge National Laboratory)
  • Compiler Benchmarks (Tom Haupt, Syracuse University)

In order to facilitate discussion and exchange of information, several email addresses were originally set up. Recent practice, however, has been to send all mail to pbwg-comm@cs.utk.edu so that all members may see it.

All mail is being collected and can be retrieved by sending email to netlib@ornl.gov with one or more of the following lines in the message body:

     send comm.archive from pbwg
     send lowlevel.archive from pbwg
     send compactapp.archive from pbwg
     send method.archive from pbwg
     send kernel.archive from pbwg
     send index from pbwg

A mail reflector, pbwg-comm@cs.utk.edu, was set up for this correspondence. Mail sent to that address goes to the full mailing list and is also archived at netlib@ornl.gov, where it can be retrieved with the commands above.
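As a minimal sketch of how such a request might be composed from a Unix shell, the snippet below writes two of the request lines listed above into a message body. The filename request.txt and the use of the mail(1) command are illustrative assumptions; any mail client that delivers a plain-text body to netlib@ornl.gov would work the same way.

```shell
# Compose a netlib request body; each line asks the server for one item.
cat > request.txt <<'EOF'
send index from pbwg
send comm.archive from pbwg
EOF

# To actually send it (requires a configured mail transfer agent):
# mail netlib@ornl.gov < request.txt

# Show the composed request
cat request.txt
```

The netlib server parses each line of the body as a separate command, so several archives can be requested in a single message.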

The Parkbench committee is open without charge to anyone interested in computer benchmarking and operates similarly to the HPFF (High Performance Fortran Forum). Anyone interested in joining in the discussion or preparing benchmarks should send email to that effect to: dongarra@cs.utk.edu .

It is important to note that researchers in many scientific disciplines have found it necessary to establish and refine standards for performing experiments and reporting the results. Many scientists have learned the importance of standard terminology and notation. Chemists, physicists, and biologists long ago discovered the importance of "controls" in their experiments. Medical researchers have found it necessary to perform "double-blind" experiments in their field. Political scientists have found that subtle differences in the phrasing of a question can affect the results of a poll. In many fields, environmental factors in experiments can significantly influence the measured results. Thus, researchers must carefully report all such factors in their papers.

If supercomputer performance analysis and benchmarking is to be taken seriously as a scientific discipline, its practitioners should be expected to adhere to the kinds of standards that prevail in other disciplines. This effort is dedicated to promoting these standards in the field of high-performance computing.
