The High Performance C++ Consortium

Mani Chandy, Carl Kesselman, California Institute of Technology; Dennis Gannon, Indiana University; Allen Malony, University of Oregon


The California Institute of Technology and Indiana University are collaborating to design and implement a parallel C++ language in which programs combining both task and data parallelism can be written. The motivation for this collaboration is the belief that programs containing both task and data parallelism will become increasingly important. The collaboration is led by Mani Chandy and Carl Kesselman at Caltech and the CRPC, and by Dennis Gannon at Indiana University. The result of this work will be High Performance C++ (HPC++), an object-oriented parallel programming language designed to meet the needs of the next generation of parallel programs.

Task parallelism in HPC++ is based on the CC++ programming language developed by Chandy and Kesselman. CC++ augments C++ with a small number of extensions. A focus of the CC++ project is to provide a language infrastructure to facilitate writing libraries of parallel programming constructs. CC++ is a general-purpose parallel programming language that can accommodate a wide range of parallel programming styles. While data parallel programs can be written in CC++, the compiler has no special knowledge about data parallelism and consequently is limited in the number of optimizations that it can apply.

By adding data parallel programming constructs, the research group can improve the performance of programs that combine task and data parallelism over what is achievable in CC++ alone. The data parallel part of HPC++ is derived from the pC++ language developed by Gannon's group at Indiana and Allen Malony at the University of Oregon. The pC++ compiler can automatically distribute the elements of an aggregate data structure across the processors of a parallel computer and cause operations on the elements of that data structure to execute in parallel. pC++ is much like High Performance Fortran (HPF) in this regard. However, unlike HPF, the distributed data structures in pC++ are not limited to arrays. Rather, any collection of data elements can be distributed and have data parallel operations performed on it.

The HPC++ project is in an initial experimentation phase. The pC++ parser has been extended to accept CC++ constructs, and researchers have begun producing a version of the pC++ runtime system based on Nexus, the portable run-time system used by CC++. Once these efforts are complete, researchers will be able to write, compile, and execute programs that combine both CC++ and pC++ constructs.

