PPI Project Strives to Advance Metacomputing

Source: HPCwire, January 31, 1997
by Alan Beck, editor in chief


Pasadena, Calif. -- Metacomputing via highly distributed heterogeneous environments stands to become one of the most important features of HPC in the next century. To learn more about the challenges facing researchers in this arena, HPCwire interviewed Mani Chandy, Director of the NSF-funded Center for Research on Parallel Computation's Parallel Paradigm Integration (PPI) project. Chandy is a professor at Caltech. Following are selected excerpts from that discussion.

HPCwire: Please give us an overview of the Parallel Paradigm Integration project's key focus areas.

CHANDY: "We have five focus areas: distributed computing, concurrent objects, exploitation of multithreaded computers, integration of task and data parallelism, and problem-solving environments. Rather than describing each area let me discuss three fundamental trends in concurrent technology and what our group in the center is doing about them. They are: One -- Parallel computing is entering the mainstream; it's not just for techno-geeks any more. Two -- Multithreaded single-address space machines are becoming vitally important at all levels, from inexpensive desktops to supercomputers. Third -- Distributed computing is playing an increasingly important role in our personal lives and in multi-institutional collaboration. Let me start with the last trend: distributed computing.

"We have two projects that approach distributed computing from different points of view. At one level, we're looking at distributed computing for metacomputing which involves harnessing powerful resources at different locations to carry out high performance applications. Our second project deals with distributed computing at what we call a 'personal level.' These projects are closely related though they have different customers.

"Metacomputing deals with high-performance infrastructures to harness powerful computing and IO devices at different locations. In our second project we are transitioning command and control ideas from the military to ordinary people who are not (and don't want to be) programmers. An example customer is a mother in say Portland, Oregon, who needs to connect security systems in her home in Portland, medical monitoring devices in her parents' retirement village in Florida, global position sensing devices in her son's car in LA, and work-related processes that manage her appointments calendar, travel, and other files. We call our system `personal infesters' (or infospheres) to borrow another term from the military. Performance may be less critical for your infosphere than for metacomputing applications. Infospheres and metacomputing are, however, related in three important ways.

"One -- they both deal with distributed computing aspects, and the problem is efficient coupling of resources at different sites. Two -- they both deal with the problem of setting up and managing secure transactions or sessions: reserving, acquiring, holding, using and then releasing resources. Security is critical in both cases: in metacomputing a top-secret simulation conducted by multiple laboratories must not be accessible to outsiders, and hackers shouldn't be able to penetrate your infosphere and turn off your home security system. Three --- a session of your infosphere can be implemented on top of the metacomputing infrastructure if performance is a concern; the infrastructures are designed so that an infospheres session can be layered on top of our metacomputing infrastructure.

"In some cases, you want to reserve resources for exclusive access. For instance you want a visualization engine to be used for this session, and only this session; multiple tasks using the same visualization engine concurrently can lead to chaos. In other cases, you might be able to have a resource that is held without exclusive access. The resource could be a device such as a supercomputer or a software agent like a calendar manager. Your calendar manager could participate in concurrent sessions -- for example, sessions could consist of networks of calendar agents of members of (i) your scout troop selecting a weekend for the next hike, and (ii) a task force in your company planning a meeting this week. Many fascinating problems in distributed databases dealing with atomic transactions, security, and fault tolerance appear in both the metacomputing and personal infosphere areas but in very different guises.

"This area has several attributes that make it a compelling research area and a fun place to be. Firstly many of the old concepts from operating and distributed systems have to be thought through anew and reimplemented in very different contexts: for metacomputing the issue is performance; for personal infospheres the issues are being safely usable by people uncomfortable with programming VCRs, and scaling to billions of objects in the global information infrastructure. Second -- emerging technologies such as Java and CORBA offer powerful new ways for design and implementation. Third -- the infrastructure has a plethora of devices and protocols from OC48 channels to 10Megabit ethernets, and negotiating the most effective reservation of resources to obtain desired quality of service for a session is a major challenge.

"The metacomputing effort called GLOBUS/NEXUS is led by Ian Foster at Argonne National Lab, and Carl Kesselman at ISI, the Information Science Institute at USC. The infospheres work is being done at Caltech by a team of superb undergrads and graduate students. One of the nice things about our center is that it is truly distributed, and so it has synergy from putting together work by scientists such as Foster at Argonne, Kesselman at ISI, with work at Caltech and other institutions.

"Now let me go on to other aspects of our thrust: the penetration of parallel computing to the computing industry as a whole. This is due to two related trends: the growing importance of multithreaded single-address space machines from inexpensive desktops to supercomputers, and parallel applications becoming much more general purpose. Much of the work in research for the last ten years for parallel applications focused on the problem of array decomposition across multiple-distributed memories and distribution of iterations across multiple processors. The focus on structured processing of arrays, loop distribution, prior reservation of resources, gang-scheduling of heavy-weight processes with one process per processor, and message-passing interfaces, was critically important. Our group, however, has also been concerned with different issues driven by a different vision: We are concerned with transitioning the benefits of parallelism to ordinary people, in addition to computational scientists. How can parallelism benefit ordinary people?

"We see parallel computing becoming a huge mass market with Windows NT running on Intel Pentium Pro Processors with quad-processors that you can buy now for $22,000 and dual processors for as little as $4,000. The advent of the Intel multiprocessor motherboard and commodity OSs such as NT and Solaris for multiprocessors are critically important. Prices of multiprocessor systems are going to drop and commodity motherboards with 8 and 16 processors will become available. In a very short time many people outside the traditional scientific domain will use parallel computing, and then the problem is dealing with many different kinds of applications: Applications that are very dynamic with complex data structures, complex dependencies, where earlier research efforts on array and loop decomposition, while important, are insufficient. That's one trend that deals with really exciting growth in the application domain, and the explosion in the size and variety of the parallel user community.

"The other trend is architecture. Machines are changing in a very exciting way. In particular multithreaded single-address-space machines are becoming widespread. Examples of these are the multiprocessor Intel desktops, and powerful machines from SGI, HP, IBM and Sun, just to name a few. An interesting true multithreaded machine on the horizon is the Tera. So, multithreaded single-address space machines are becoming increasingly common from the desktop to the supercomputer. The penetration of this architecture at the very low end is as important as its impact at the high end.

"This is a fundamental shift away from traditional vector processors and message-passing multicomputers. But, many people think that the way to exploit these new architectures is just to put a message-passing interface such as MPI on top of them. We think that's fine for the short term but the wrong thing to do for the longer term. These multithreaded machines give you a very powerful API, much more powerful than massage passing gives you, much more fine grained. To use message-passing as the primary mechanism to exploit these machines is to hide a powerful interface by an inferior one.

"So questions for my CRPC group are: What sort of research allows you to exploit these thread-based machines to the fullest, from machines for a few thousand dollars to supercomputers that cost millions? What sorts of nontraditional applications in graphics, entertainment, home computing, small-business computing and crisis management can benefit from parallelism? How can we simplify parallel-program development to the point that thousands of ISVs start using these ideas to develop thousands of applications? How can we use developments in object-oriented technology, specifically in C++, CORBA and Java to further our mission of truly widespread benefits from concurrency?"

HPCwire: How do you intend to approach these novel applications and platforms?

CHANDY: "If you look at these multi-threaded machines, the API that you get today is either a thread library -- Solaris threads, NT threads, Pthreads. -- or a parallelizing compiler, which takes a sequential program and attempts to create threads that can run independently. The problem with the threads packages available today is that they were designed for a very different purpose: to support operating systems and to control devices and user interfaces. So the functionality that they provide is for a totally different kind of application, a very different kind of problem.

"And the problem with parallelizing compilers is that they appear inadequate for dealing with complex data structures and complex control structures. Let me give you an example of an application where the structures are complex: take command and control, which comprise very, very important applications because they are used in crisis situations and they have to be high performance; timely completion of tasks is a matter of life and death. The data structures for command and control are really quite complex. For example route optimization problems for aircraft or terrain-masking problems have complex data structures. Also, crisis situations are fluid and so if a high priority task comes in, then the high priority task has to get resources immediately. The resource-base that a parallel application can use in crisis management can change dramatically and rapidly; so, compiler optimizations for a static resource base are inadequate.

"The approach we're taking is three-fold. One is to extend the thread libraries by providing structured operations on collections of threads, operations that deal not so much with forking-off an individual thread, but collective thread operations for threads working together in a structured way such as structured thread-creation corresponding to parallel for-loops.

"The constructs -- such as monitors and semaphores -- that you get with standard thread libraries are useful for operating systems applications but don't always fit the need for many kinds of applications that need collective thread operations. We've borrowed ideas from declarative programming and we have synchronization constructs similar to single-assignment variables. Our library has barriers, synchronization flags and monotonic counters. So, that's one aspect of what we're building -- a structured thread library called S-threads. Note that S-threads is not a replacement for other threads libraries; S-threads is used in conjunction with underlying threads libraries such as NT, P or Solaris threads. It's also important that S-threads has few constructs and is a very small package; so it's easy to learn and use.

"The second part is pragmas in C for collective thread operations. We've begun thinking about C++ and Ada (and Ada makes sense for command and control). But, we really haven't started a Fortran project as yet. We have designed pragmas that can be inserted into C that give programmers explicit control in collective thread creation and synchronization. If you remove the pragmas you get a sequential program in C. So, you can reason about, or debug, your program using familiar tools. But the pragmas can also be used to tell a source-to-source translator how to map a program on to a threads interface. It's very important to us that the source-to-source translation be extremely simple so that there's an obvious direct relationship between the original and translated code; this allows you to use standard tools and standard libraries directly at the C and threads-library level.

"We also use explicit parallel block constructs, in addition to pragmas, for problems that are nondeterministic. For instance, parallel route optimization is nondeterministic; if your program was restricted to sequential semantics you wouldn't get adequate performance on the route-optimization problem on parallel machines. In such cases you use explicit parallel blocks (which is a very old idea from Algol). The parallel block is the only new language construct, and we use it only if we have to, and only for those solutions that are fundamentally nondeterministic.

"The third part is implementing applications that are dynamic and have interesting data and control structures. The implementations use S-threads and our source-to-source C translators on top of thread libraries such as those obtained with NT, Posix, and Solaris. Implementation of applications helps in evaluating our tools. Applications that we are working on include command and control, graphics, as well as computational science and combinatorics. This research is being carried out in CRPC by a group led by John Thornley, who is a postdoc here at Caltech. They get help from superb graphics people such as Peter Schroder. Computational-science multithreaded applications are being developed by Rajesh Bordawekar.

"So those are the approaches we're taking. Two side issues: Smallness, and commodities. Our goal is to have tools that are very small, simple, and robust. We are not trying to replace existing thread libraries or programming languages or compilers. We believe that the multithreaded single-address space programming model is terrific, and just a little bit more can make it so much more useful for a variety of applications. By restricting attention to that `little bit more,' our small group can put our arms around the problem to produce truly useful tools. The second side issue is that we are very concerned with commodity parallelism, with Windows NT and Solaris on commodity multiprocessor motherboards. We expect to run parallel applications all the way from home computers to supercomputers, and home computers are as important to us as supercomputers."

HPCwire: What software is PPI making available now, and what will be available in the near future?

CHANDY: "The software that's available now, especially software produced under the auspices of CRPC, is generally available, but it's not sold. An example of one of the compilers for parallel languages that's available is a compiler for CC++, that's 'Compositional C++,' which focuses on the ideas of multithreading as well as processes within different address spaces. CC++ has a truly beautiful way of integrating multithreading in a single address space and multiple processes each with its own address space. CC++ is a straight extention of C++; so, any C++ program will run on CC++. The compiler for CC++ is available free of charge and can be downloaded from Caltech or Argonne. Carl Kesselman's group developed it and they maintain it. CC++ is used by several groups, and it forms a foundation for HPC++. That's one example of software we produce.

"Another example is the layer that CC++ runs on, which exploits distributed heterogeneous meta-computing applications. That's called Nexus and is also available from the same sources.

"Two packages that don't receive the same degree of support as CC++ and Nexus are compilers for Fortran M and PCN (Program Composition Notation), both of which integrate task and data parallelism. Ian Foster's group at Argonne has built a library, used with HPF, for integrating task and data parallelism.

"Another package is Infospheres. That's written primarily in Java but with some C. The beta release has been out since November with version 1.0 planned for the end of February 97.

"A different kind of package, developed by a team led by Prof. Donald Dabdub at UCI is interesting for HPCC dissemination. The idea is to focus on a group of end-users rather than on developers. In this case, the focus is on people who want to use parallel computing to study air-pollution and eventually to understand the consequences of public policy on pollution. The software in this case is a shrink-wrapped problem-solving environment tailored explicitly to airshed modeling, with an immediate focus on the Southern California region. You can use this package and run it on a variety of parallel computers without having to be concerned about parallel programming. The extensive user interface is written in Tcl/TK but the system has been used with many different supercomputers as backends. The goal of our center is to make parallel computers truly usable, and surely, the ultimate in usability is your ability to deal with your problem -- air pollution -- without being concerned about parallelism at all.

"As far as industrial collaborations go, we have collaborations with various companies. We've received support from IBM, and now also HP and Novell. Our collaboration with Novell is in the personal infosphere area. An important part of our collaborations are the flow of ideas between research groups in the companies and the center. Our students work with companies during summer and sometimes during the year. The flow of ideas is as important as support in the form of funds. So we have these collaborations and support, but the software we provide is free."

HPCwire: Where do you expect PPI will be by the next century?

CHANDY: "A key point that is almost religion for us is: the application space is changing, parallel architectures are changing, the user-community is changing, the world is changing and becoming more interconnected, and there isn't going to be a single programming paradigm that's appropriate for the entire space.

"The splendid research done on array and loop distribution, parallelizing compilers for Fortran, HPF constructs, gang scheduling, MPI,... have contributed immensely to HPCC. That's an important way of looking at the world, but there are other ways too. Look at the space of applications: some fraction of that space is going to be covered by data-parallelism and message passing, but there's a huge space that isn't covered by that, and that space is just coming into the parallel domain. Look at the space of users: Of course computational chemists, physicists, and cosmologists are critically important, but so are small-business people, ISVs developing graphics packages, and people doing crisis management. Look at the space of machines: Supercomputers that cost millions of dollars must remain the flagships of HPCC, but commodity machines that cost a few thousand are going to spearhead the dissemination of parallel concepts. Distributed computing with collaboration between people at different sites using resources at remote locations will become much more common. And that's what makes this whole thing so exciting -- when parallelism gets to the small businesses, homes, personal digital assistants and collaboration in addition to computational science applications running on supercomputers. This transition requires different ways of thinking about parallelism, and new methods and tools; that's where our group is headed in the next century.

"As regards technological focus, we expect to remain focused on our core competencies into the next century. These are: program composition and object technologies, distributed heterogeneous computing, multithreaded computing, integration of parallel programming paradigms, and problem-solving environments. These are large areas with large problems that will keep us busy well into the next century. Also, since our goal is to produce software that is truly usable, we have to put a great deal of time into managing software releases, documentation, maintenance and supporting application development; this takes an immense amount of time. Exciting technologies -- such as Java, object-request brokers, ATM -- are emerging; understanding novel technologies and using them to further our goals takes a lot of time too. Our group is small, and our plate is not just full, it's overflowing. So, I don't anticipate major shifts in our group's direction till well after the turn of the century.

"I do want to emphasize that I help coordinate a group of very independent thinkers with very strong opinions. My role is to facilitate a common vision where there is one, but not to impose any kind of group-think. It's truly amazing that we've had such brilliant, but sometimes opinionated, people work together for so well and for so long.

"Let me leave you with one of the themes of our group's direction as we build a bridge to the next century: `Concurrency is for everybody.' Parallelism isn't just for geeks any more."

For more information, see http://www.infospheres.caltech.edu and http://www.compbio.caltech.edu

--------------------

Alan Beck is editor in chief of HPCwire. Pamela Richards, associate editor of HPCwire, assisted in the preparation of this feature. Comments are always welcome and should be directed to editor@hpcwire.tgc.com

Copyright 1997 HPCwire. Redistribution of this article is forbidden by law without the expressed written consent of the publisher. For a free trial subscription to HPCwire, send e-mail to trial@hpcwire.tgc.com.


Sites & Affiliations | Leadership | Research & Applications | Major Accomplishments | FAQ | Search | Knowledge & Technology Transfer | Calendar of Events | Education & Outreach | Media Resources | Technical Reports & Publications | Parallel Computing Research Quarterly Newsletter | News Archives | Contact Information


Hipersoft | CRPC