Parkbench Committee Releases Parallel Benchmarks Jan 21


Within the high-performance computing community, there has been a growing need for a standardized, rigorous, and scientifically tenable methodology for studying the performance of high-performance computer systems. Such a methodology would help to:

  • Increase understanding of these systems, both from a low-level hardware and software perspective and from a high-level, total-system performance perspective
  • Assist the purchasers of high-performance computing equipment in selecting systems best suited to their needs
  • Reduce the amount of time and resources vendors must expend in implementing multiple, redundant benchmarks
  • Provide valuable feedback to vendors on bottlenecks that can be alleviated in future products
  • Improve the status of supercomputer performance analysis as a serious scientific discipline
  • Reduce confusion in the high-performance computing literature
  • Establish and maintain high standards of honesty and integrity in the high-performance computing profession
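
By way of illustration only (this is not a Parkbench code, and all names are invented for the example), a standardized measurement of the kind these goals call for would time a kernel repeatedly and report the spread of results together with the environment factors that could influence them:

```python
import platform
import statistics
import time


def benchmark(kernel, repeats=5):
    """Time a kernel several times and report summary statistics.

    Reporting the full distribution of timings, not just the best run,
    alongside details of the measurement environment, is the kind of
    discipline a standardized methodology asks for.
    """
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        kernel()
        times.append(time.perf_counter() - start)
    return {
        "repeats": repeats,
        "min_s": min(times),
        "mean_s": statistics.mean(times),
        "max_s": max(times),
        # Environment factors that can influence the measured results.
        "machine": platform.machine(),
        "python": platform.python_version(),
    }


if __name__ == "__main__":
    # A trivial kernel: sum the first 100,000 integers.
    print(benchmark(lambda: sum(range(100_000))))
```

A real benchmark report would of course record far more (compiler, flags, processor count, problem size), but the principle is the same: the result is the whole distribution plus its context, not a single best number.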

The Parkbench committee was established with these goals in mind. Originally called the Parallel Benchmark Working Group (PBWG), the committee was founded at SUPERCOMPUTING '92 in Minneapolis, where a group of about 50 people interested in computer benchmarking met under the joint initiative of Tony Hey of the University of Southampton (UK) and CRPC member Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory. Representatives came from universities, laboratories, and industry, including both computer manufacturers and computer users, on both sides of the Atlantic. Roger Hockney of the University of Southampton chaired the meeting.

The group agreed upon several objectives to meet these goals:

  • Produce a comprehensive set of parallel benchmarks that is generally accepted by both users and vendors of parallel systems
  • Provide a focus for parallel benchmark activities, avoiding unnecessary duplication of effort and proliferation of benchmarks
  • Set standards for benchmarking methodology and result-reporting
  • Establish a central database/repository for both the benchmarks and the results
  • Make both the benchmarks and the results freely available in the public domain

The first year's work has produced a report and an initial set of benchmarks. The committee met at the University of Tennessee in Knoxville on March 1-2, 1993, May 24, 1993, and August 23, 1993 to discuss the evolving draft of the report. The report is the final result of these meetings, and is the first official publication of the Parkbench committee. It was distributed at a public 'Birds of a Feather' meeting during SUPERCOMPUTING '93, Portland, on November 17, 1993, together with the first release of the Parkbench parallel benchmarks.

The initial focus of the parallel benchmarks is on the new generation of scalable distributed-memory message-passing architectures, for which there is a notable lack of existing benchmarks. For this reason, the initial benchmark release concentrates on Fortran 77 message-passing codes using the widely available PVM message-passing interface for portability. Future versions will undoubtedly adopt the proposed MPI interface once it is fully defined and generally accepted. The committee's aim, however, is to cover all parallel architectures, which it expects to achieve by producing versions of the benchmark codes in Fortran 90 and High Performance Fortran (HPF). Shared-memory architectures, many of which provide efficient native implementations of PVM message passing and will also have HPF compilers, will be covered by these routes.
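
The release itself consists of Fortran 77 PVM codes, but the measurement pattern at the heart of many low-level message-passing benchmarks is a simple "ping-pong" exchange: one process sends a message of a given size, a second process echoes it back, and the round-trip time is averaged over many repetitions. A rough sketch of that pattern, substituting Python multiprocessing pipes for PVM (all names here are illustrative, not Parkbench code):

```python
import time
from multiprocessing import Pipe, Process


def echo(conn, rounds):
    """Child process: bounce every message straight back to the sender."""
    for _ in range(rounds):
        conn.send_bytes(conn.recv_bytes())
    conn.close()


def pingpong(nbytes, rounds=100):
    """Return the mean round-trip time for messages of nbytes bytes."""
    parent, child = Pipe()
    worker = Process(target=echo, args=(child, rounds))
    worker.start()
    payload = b"x" * nbytes
    start = time.perf_counter()
    for _ in range(rounds):
        parent.send_bytes(payload)
        assert parent.recv_bytes() == payload
    elapsed = time.perf_counter() - start
    worker.join()
    return elapsed / rounds


if __name__ == "__main__":
    # Small messages estimate latency; the growth of round-trip time
    # with message size estimates bandwidth.
    for size in (1, 1024, 65536):
        print(f"{size:6d} bytes: {pingpong(size) * 1e6:.1f} us round trip")
```

In the real benchmarks the two ends would be separate PVM tasks exchanging messages over the machine's interconnect; the pipe version only illustrates the timing structure.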

The Parkbench committee agreed to divide its work among five subcommittees, corresponding to the five substantive chapters in the report, each with a leader (shown in parentheses) who is responsible for assembling the contents of his chapter and its benchmarks for the committee's approval. The subcommittees are:

1. Methodology (David Bailey, NASA Ames)
2. Low-level Benchmarks (Roger Hockney, University of Southampton)
3. Kernel Benchmarks (Tony Hey, University of Southampton)
4. Compact Applications (David Walker, Oak Ridge National Lab)
5. Compiler Benchmarks (Tom Haupt, Syracuse University)

In order to facilitate discussion and exchange of information, separate email addresses were set up for the whole committee and for the Methodology, Low-level Benchmarks, Kernel, and Compact Applications subcommittees.

Recent practice, however, has been to send all mail to pbwg-comm so that all members may see it.

All mail is being collected and can be retrieved by email; the mail message should contain one or more of the following requests:

send comm.archive from pbwg
send lowlevel.archive from pbwg
send compactapp.archive from pbwg
send method.archive from pbwg
send kernel.archive from pbwg
send index from pbwg

A mail reflector was also set up for committee correspondence. Mail to that address is forwarded to the mailing list and is archived; the collected mail can be retrieved with the request: send comm.archive from pbwg.

The Parkbench committee is open without charge to anyone interested in computer benchmarking and operates similarly to the High Performance Fortran Forum (HPFF). Anyone interested in joining the discussion or preparing benchmarks should send email to that effect to the committee.

It is important to note that researchers in many scientific disciplines have found it necessary to establish and refine standards for performing experiments and reporting the results. Many scientists have learned the importance of standard terminology and notation. Chemists, physicists and biologists long ago discovered the importance of "controls" in their experiments. Medical researchers have found it necessary to perform "double-blind" experiments in their field. Political scientists have found that subtle differences in the phrasing of a question can affect the results of a poll. In many fields, environmental factors in experiments can significantly influence the measured results. Thus, researchers must carefully report all such factors in their papers.

If supercomputer performance analysis and benchmarking is ever to be taken seriously as a scientific discipline, its practitioners should be expected to adhere to the kinds of standards that prevail in other disciplines. This effort is dedicated to promoting these standards in the field of high-performance computing.

This article appears courtesy of CRPC, Rice University

Copyright 1993 HPCwire.
