Parallel Computation: Practice, Perspectives and Potential

California Institute of Technology
Center for Research on Parallel Computation (CRPC)

Presents a short course on Parallel Computation: Practice, Perspectives and Potential

Monday, January 24, 1994

Ramo Auditorium
California Institute of Technology
Pasadena, California

Sponsored by the Center for Research on Parallel Computation (CRPC), an NSF Science and Technology Center

****************** Page 1 ***********************

Objectives

Scalable parallel computers offer the potential of performance far beyond that available from conventional supercomputers, at relatively low cost. The wide availability of networks of workstations is an added impetus for parallel computing. This one-day short course is an introduction to tools, languages, program-development methods, architectures, applications, and technology transfer in the area of parallel computing. The course will provide practical information and perspectives from experts in the field to help you evaluate the potential of parallelism, develop parallel applications, and introduce parallel computing technology into your organization.

CRPC: The Center for Research on Parallel Computation

The faculty for this course are members of CRPC, a National Science Foundation Science and Technology Center whose goal is to make parallel computers as easy to use as vector supercomputers are today. The Center consists of six member institutions: Rice University, the California Institute of Technology, Syracuse University, and Argonne, Los Alamos, and Oak Ridge National Laboratories. In addition to basic research, the Center conducts several outreach activities, including the publication of newsletters and the organization of conferences, workshops, and courses.

Who Should Attend

This course is designed for people who want to:

  • evaluate the potential of parallel computing
  • introduce parallel computing technology into their organizations
  • develop parallel applications
  • carry out research in parallel computing

******************** Page 2 ***********************

Parallel Computation: Practice, Perspectives and Potential

Schedule

REGISTRATION: 8:00-8:45am Continental Breakfast

PRESENTATIONS:

8:45-9:00     CRPC: An Introduction
              Ken Kennedy, Director

9:00-9:45     Parallel Architectures
              Paul Messina, California Institute of Technology

9:50-10:35    PVM and MPI: Tools for Concurrent Computing
              Jack Dongarra, Univ. of Tennessee/Oak Ridge National Lab.

10:40-10:55   Break

11:00-11:45   Methods for Developing Parallel Programs
              K. Mani Chandy, California Institute of Technology

12:00-1:00pm  Lunch Break

1:10-1:55     Templates and Their Role in Parallel Scientific Computation
              Dan Meiron, California Institute of Technology

2:00-2:45     Architecture-Independent Parallel Programming Support in Fortran D and HPF
              Ken Kennedy, Rice University

2:50-3:05     Break

3:10-3:55     Multidisciplinary Optimization
              John Dennis, Rice University

4:00-4:45     InfoMall -- The Virtual Corporation for HPCC Software and Systems Development
              Geoffrey C. Fox, Syracuse University

5:00-6:30     Reception in Winnett Clubroom 1

6:00-7:30     Demos and Poster Session; Tour of Computing Facilities, including the Intel Paragon and Delta machines

*********************** Page 3 ****************************

-- SPEAKERS AND ABSTRACTS --

Paul Messina California Institute of Technology

State of the Art of Commercial Parallel Computers

Abstract

Computers with massively parallel architectures (often referred to as MPPs) provide an avenue for computing facilities that are both faster and more cost-effective than conventional ones. A number of MPPs are available as commercial products, and some have matured to the point that they are able to deliver the promised advantages of parallelism. The task of parallelizing applications has generally turned out to be less difficult than had been feared. However, most computer users view MPPs as difficult to use, unreliable, and of benefit to only a few applications. CRPC researchers have been using large-scale parallel computers for scientific and engineering applications for five years, on systems including the Intel iPSC/860, Delta, and Paragon; the TMC CM-2 and CM-5; the BBN TC2000; the IBM SP-1; the nCUBE; the Kendall Square KSR-1; and the MasPar MP-2. This talk surveys these computers, with special emphasis on actual field experience in using and operating advanced-architecture computers. Examples will be given of scientific and engineering applications that run successfully on MPPs, trends in hardware and software environments will be examined, and shortcomings of current MPPs will be identified.

*********************** Page 4 ****************************

Jack Dongarra University of Tennessee/Oak Ridge National Laboratory

PVM and MPI: Tools for Concurrent Computing

Abstract

Wide-area computer networks have become a basic part of today's computing infrastructure. These networks connect a variety of machines, presenting an enormous computing resource. In this talk we focus on developing methods and tools that allow a programmer to tap into this resource, and we describe PVM and HeNCE, tools and a methodology under development that assist a programmer in developing programs to execute on a parallel computer. We will also describe MPI, a proposed standard message-passing interface for MIMD distributed-memory concurrent computers. The design of MPI has been a collective effort involving researchers from many organizations and institutions in the United States and Europe. MPI includes point-to-point and collective communication routines, as well as support for process groups, communication contexts, and application topologies. While making use of new ideas where appropriate, the MPI standard is based largely on current practice.
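
For readers who have not seen MPI, the following minimal C sketch (illustrative only, not part of the course materials) shows the flavor of the interface: point-to-point sends and receives plus a collective reduction over the default communicator.

/* Illustrative sketch only, not from the course: a minimal MPI program
 * in C showing point-to-point and collective communication over the
 * default communicator MPI_COMM_WORLD. Compile with an MPI compiler
 * wrapper (e.g. mpicc) and launch with mpirun/mpiexec. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i, value, sum;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

    /* Point-to-point: every other process sends its rank to process 0. */
    if (rank != 0) {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        for (i = 1; i < size; i++) {
            MPI_Recv(&value, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
            printf("process 0 received %d from process %d\n", value, i);
        }
    }

    /* Collective: sum the ranks and deliver the result to every process. */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}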

*********************** Page 5 ****************************

K. Mani Chandy California Institute of Technology

Methods for Developing Parallel Programs

Abstract

This talk describes systematic methods for developing parallel programs. The goals of these methods are to (1) develop parallel programs using familiar languages (such as Fortran, C, and C++), familiar platforms (workstations or PCs), and familiar environments (editors, compilers, debuggers), and then port the programs to parallel machines in well-defined steps; (2) help ensure the reliability of parallel programs; (3) reduce the effort required to get the first parallel program up and running; and (4) use stepwise refinement to obtain satisfactory performance. Archetypes, or "templates," for parallelization will be discussed; if your problem fits an archetype, then the specific methodology associated with that archetype guides the parallelization of your program. Examples will be given of using the methodology to produce parallel code with standard sequential compilers and libraries, as well as with language extensions of Fortran, C, and C++.
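
As a hypothetical illustration of the kind of well-defined port this methodology describes (not an example taken from the talk), the quadrature loop below can be written and debugged sequentially with familiar tools, then parallelized in one step by dividing the iterations among processes and combining the partial sums with a reduction; further refinement can follow once the first parallel version runs.

/* Hypothetical example, not from the talk: a sequential quadrature loop
 * for approximating pi, ported to a parallel machine in one well-defined
 * step. With size = 1 and the MPI calls removed, this is the ordinary
 * sequential program one would develop first on a workstation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int n = 1000000;          /* number of quadrature intervals */
    const double h = 1.0 / n;
    double partial = 0.0, pi, x;
    int rank, size, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process takes every size-th interval of the original loop. */
    for (i = rank; i < n; i += size) {
        x = (i + 0.5) * h;
        partial += 4.0 / (1.0 + x * x);
    }
    partial *= h;

    /* Combine the partial sums; process 0 gets the full approximation. */
    MPI_Reduce(&partial, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.12f\n", pi);

    MPI_Finalize();
    return 0;
}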

*********************** Page 6 ****************************

Dan Meiron California Institute of Technology

Templates and Their Role in Parallel Scientific Computation

Abstract

In this talk we will describe the use of software schemas, or "templates," in the design of parallel algorithms for scientific computation. It is well known that many present-day scientific calculations from diverse areas use data distributions which are, in some sense, "generic". One example is the use of logically rectangular grids in finite-difference fluid dynamics calculations; another is the manipulation of large matrices in molecular-orbital calculations. In converting the sequential versions of these algorithms into parallel programs, we have found it useful to abstract the particular data distribution and associated communication patterns inherent in each type of application. The collection of data structures and associated communication utilities for each type of application constitutes a template. We will describe the implementation of these ideas using Fortran and a channel library, or using a small extension of Fortran called Fortran-M, which provides facilities for the development of both data-parallel and task-parallel programs.
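
As a rough sketch of the communication pattern such a grid template packages (shown here in C with MPI rather than the Fortran channel library or Fortran-M described in the talk), each process owns a block of a one-dimensional logically rectangular grid and exchanges ghost-cell values with its neighbors before each finite-difference update.

/* Illustrative sketch only (C with MPI, not the Fortran channel library
 * or Fortran-M described in the talk): the ghost-cell exchange that a
 * logically rectangular grid template typically packages. Each process
 * owns a contiguous block of a 1-D grid plus one ghost cell on each side
 * and trades boundary values with its left and right neighbors. */
#include <mpi.h>

#define NLOCAL 100                 /* interior points owned per process */

int main(int argc, char **argv)
{
    double u[NLOCAL + 2];          /* u[0] and u[NLOCAL+1] are ghost cells */
    int rank, size, left, right, i;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    for (i = 0; i <= NLOCAL + 1; i++)   /* arbitrary initial data */
        u[i] = rank * NLOCAL + i;

    /* Exchange boundary values with both neighbors; MPI_PROC_NULL at the
     * physical boundaries turns those calls into no-ops, so no
     * special-case code is needed at the ends of the grid. */
    MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 0,
                 &u[0],      1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, &status);
    MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  1,
                 &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, &status);

    /* A finite-difference update could now use u[0..NLOCAL+1] locally. */
    MPI_Finalize();
    return 0;
}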

*********************** Page 7 ****************************

Ken Kennedy Rice University

Architecture-Independent Parallel Programming Support in Fortran D and HPF

Abstract

A major problem with current parallel computing systems is that each system provides a machine-dependent programming interface at the Fortran level. As a result, a program written for one parallel machine must be rewritten for any new parallel architecture. This talk will introduce the ideas behind Fortran D, an extended version of Fortran 77 designed to address this problem for "data-parallel" problems. Fortran D extends Fortran 77 or Fortran 90 with a set of statements that specify the distribution of data structures across the processor array. Parallelism is derived by the compiler via the "owner computes" rule, which specifies that the processor owning a datum computes its value. The talk will give an overview of the compiler techniques and supporting programming environment needed to make this language an effective tool for scientific programming. Most of the features of Fortran D have been incorporated into the new informal standard for High Performance Fortran (HPF), so these methods apply to that language as well.
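
The following C sketch is a hedged illustration (not Fortran D or HPF syntax, and not drawn from the talk) of roughly what the owner-computes rule means for a BLOCK-distributed array: the programmer writes one global loop, and the code run on each processor executes only the iterations whose left-hand-side elements that processor owns.

/* Hedged illustration in C with MPI, not Fortran D or HPF: the effect of
 * the owner-computes rule for a BLOCK-distributed array. The programmer's
 * global loop is  a(i) = b(i) + c(i), i = 1..N; each processor executes
 * only the iterations for elements of a that it owns. */
#include <mpi.h>
#include <stdio.h>

#define N 1000                     /* assumed global problem size */

int main(int argc, char **argv)
{
    int rank, size, lo, hi, i;
    double a[N], b[N], c[N];       /* kept global-sized here for clarity;
                                      a real compiler would allocate only
                                      each processor's block */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* BLOCK distribution: processor 'rank' owns indices [lo, hi). */
    lo = rank * N / size;
    hi = (rank + 1) * N / size;

    for (i = lo; i < hi; i++) {    /* initialize owned elements only */
        b[i] = i;
        c[i] = 2.0 * i;
    }

    /* Owner computes: each processor performs only its own iterations
     * of the global loop. */
    for (i = lo; i < hi; i++)
        a[i] = b[i] + c[i];

    printf("processor %d computed elements %d..%d\n", rank, lo, hi - 1);
    MPI_Finalize();
    return 0;
}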

*********************** Page 8 ****************************

John Dennis Rice University

Multidisciplinary Optimization

Abstract

This talk will introduce the multidisciplinary optimization problem and propose an object-oriented computing environment for its solution. The problem arises especially in engineering design, where it is considered of paramount importance in today's competitive global business climate. It is interesting to an optimizer because the constraints involve coupled, dissimilar systems of parameterized partial differential equations, each arising from a different discipline such as structural analysis or computational fluid dynamics. Usually these constraints are accessible only through PDE solvers, rather than through the algebraic residual calculations we are accustomed to, so just finding a multidisciplinary feasible point is a daunting task. Many such problems have discrete-variable disciplines, multiple objectives, and other challenging features. After discussing some interesting practical features of the design problem, we will give some standard ways to formulate the problem, as well as some novel ways that lend themselves to divide-and-conquer parallelism.
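
For readers unfamiliar with this problem class, one standard way to write a two-discipline instance (generic notation, not taken from the talk) is

\begin{aligned}
\min_{x,\;u_1,\;u_2}\quad & f(x, u_1, u_2) \\
\text{subject to}\quad & A_1(x, u_1, u_2) = 0 \quad \text{(discipline 1 analysis, e.g. structures)} \\
                       & A_2(x, u_1, u_2) = 0 \quad \text{(discipline 2 analysis, e.g. fluids)} \\
                       & g(x, u_1, u_2) \le 0 \quad \text{(design constraints)}
\end{aligned}

where x collects the shared design variables, u_i is the state of discipline i, and each analysis system A_i = 0 is a discretized PDE available only through that discipline's solver. A multidisciplinary feasible point is any (x, u_1, u_2) satisfying both analysis systems simultaneously, which is why even feasibility is hard when the disciplines are coupled.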

*********************** Page 9 ****************************

Geoffrey C. Fox Syracuse University

InfoMall -- The Virtual Corporation for HPCC Software and Systems Development

Abstract

InfoMall is a partnership of some twenty-five disparate organizations aimed at jump-starting the High Performance Computing and Communications (HPCC) software and systems industry. The initial project is funded by New York State and participating companies. InfoMall is divided into four main wings: InfoTech, InfoTeam, InfoSchool, and the Commerce Wing. InfoTech receives the best technologies from an international base, evaluates and classifies them, and places them in InfoWare. InfoTeam consists of small companies (there are 4,400 companies in New York State, with an average of 11 employees) and software teams within large companies. InfoTeam gets full economic-development (incubator and business) support from InfoMall partners, along with special HPCC support consisting of InfoTech and excellent HPCC facilities and associated consulting in the Commerce Wing. The Commerce Wing contains large vendors (IBM, Digital, NYNEX, Oracle) as well as systems integration companies (ISSC, Booz Allen Hamilton). InfoSchool offers a full range of courses and seminars for CEOs, entrepreneurs, and software engineers, including a program aimed at retraining those recently laid off.

*********************** Page 10 ****************************

Location

Ramo Auditorium
California Institute of Technology
Pasadena, California 91125

Contact: JoAnn Boyd
Phone: 818-395-4562
Fax: 818-683-3549
email: joann@sunshine.caltech.edu

Accommodations

A block of rooms has been reserved for registrants at the Ritz-Carlton Hotel. The rate is $101, plus tax, for a single or double room. When calling for reservations, please state that you are attending the *CRPC Annual Meeting*. Room availability cannot be guaranteed after December 23, 1993.

Ritz-Carlton
1401 South Oak Knoll Avenue
Pasadena, CA 91106
Phone: 818-568-3900
Toll Free: 800-241-3333

Other hotels in the area:

Pasadena Hilton
150 South Los Robles Avenue
Pasadena, CA 91105
Phone: 818-568-3900
Toll Free: 800-445-8667

Holiday Inn
303 East Cordova Street
Pasadena, CA 91101
Phone: 818-449-4000
Toll Free: 800-465-4329

Doubletree Hotel
191 North Los Robles Avenue
Pasadena, CA 91101
Phone: 818-792-2727
Toll Free: 800-222-8733

Please phone hotels directly for rates.

*********************** Page 11 ****************************

Transportation from Airports

LAX and Burbank are the two closest, most convenient airports. If you do not intend to rent a car, you can phone any of several shuttle services upon your arrival. A few are listed below.

Shuttle Services

SuperShuttle: 818-556-6600 LAX: $23

PrimeTime: 818-504-3600 LAX: $23 Burbank: $17

Golden Star: 818-793-1689 LAX: $17 Burbank: $17

Airport Flyer: 800-244-5755 LAX: $20 Burbank: $15

Driving directions are on the next page.

CRPC Annual Meeting and Research Symposium

The CRPC Annual Meeting and Research Symposium is taking place at Caltech on the two days following this course. All registrants are welcome to stay and attend the research presentations.

Videotape Information

A multi-tape video presentation of this short course will be available in March. An order form will be available at registration. If you do not plan to attend but would like to order the videotapes (VHS only), please contact the short-course coordinator. The price of the tape set is $100.

*********************** Page 12 ****************************

Driving Directions

Burbank to Caltech

From Burbank Airport, take the Interstate 5 Fwy. eastbound and stay on it for about 4 miles. Next take the 134 (Ventura Fwy.) for about 7 miles until it becomes the Interstate 210 (Foothill Fwy.) eastbound. Go about 1.5 miles on I-210 and exit at Hill Ave. Turn right onto Hill (southbound) and go a little over half a mile to Del Mar. Turn right onto Del Mar. Go two short blocks to Chester and make a left. Chester ends at the Caltech parking lot. Park in any unmarked space. If the lot is full, try street parking on Del Mar, Chester, or Holliston (1 block east of Chester). The drive from Burbank should take about 30 minutes or less.

LAX to Caltech

From LAX, take Century Blvd. east to the 405 Fwy. north (posted San Fernando). Take the Interstate 10 east (posted Los Angeles) for 10 miles to the 110 (Pasadena) Fwy. You will exit from the left lane onto the Pasadena Fwy. Take the 110 north for about 10 miles to where it ends and becomes Arroyo Parkway. Take Arroyo Pkwy. for about half a mile to Del Mar and turn right. Take Del Mar for 6 lights to Wilson Avenue. Two blocks past Wilson, turn right on Chester Ave. Chester will end at the Caltech parking lot. Park in any available spot that doesn't have a name painted on it. If the lot is full, try street parking on Del Mar, Chester, or Holliston (1 block east of Chester). The drive from LAX should take about 50 minutes.

*********************** Page 13 ****************************

Short-Course Coordinator

JoAnn Boyd
Center for Research on Parallel Computation
California Institute of Technology 217-50
Pasadena, California 91125

Phone: 818-395-4562
Fax: 818-683-3549
email: joann@sunshine.caltech.edu

Registration Information

All attendees must register.

CRPC/CIT Affiliates         No charge
Non-CRPC/CIT Affiliates     $100

Registration includes:

o Lecture notes
o Continental breakfast
o Lunch
o Refreshments at all breaks
o Reception
o Demos and tour of CCSF Machine Room

Deadline: December 15, 1993

Please enclose check or money order (payable to Caltech) with registration form. We will be unable to process your registration unless your fee is enclosed.

Full registration fee must accompany the registration form. A $25 administrative fee will be deducted from refunds for cancellations.

*********************** Page 14 ****************************

[tear here and return] --------------------------------------------------------------------------

REGISTRATION FORM

Parallel Computation: Practice, Perspectives and Potential

Monday, January 24, 1994

Name:_________________________________________________________________

Affiliation:__________________________________________________________

Mailing Address:______________________________________________________

______________________________________________________________________

City:_________________________________________________________________

State:________________________________________________________________

Zip:___________________Country:_______________________________________

Registration Fee:

No Charge CRPC/CIT Affiliates (Includes students, faculty, and Industrial Associates)

$100 non-CRPC/CIT Affiliates

Deadline: December 15, 1993

Please return form and check to:

JoAnn Boyd
CRPC
Caltech 217-50
Pasadena, California 91125

*********************** Page 15 ****************************

[tear here and return]
-------------------------------------------------------------------------

Parallel Computation: Practice, Perspectives and Potential

Monday, January 24, 1994

Videotape Request Form

Please send me _______ multi-tape videos at $100/set.

My check for _______ is enclosed.

Name:_________________________________________________________

Mailing Address:______________________________________________

______________________________________________________________

______________________________________________________________

______________________________________________________________

______________________________________________________________

The deadline for ordering videotape sets is February 25. Tapes will be mailed to you in March.

Please send check or money order, payable to "Caltech", to:

JoAnn Boyd
CRPC
Caltech 217-50
Pasadena, CA 91125

Copyright 1993 HPCwire. To receive the weekly HPC Select News table of contents at no charge, send e-mail to "trial@hpcwire.ans.net".

