Parallel Profile - Padma Raghavan

Assistant Professor, Department of Computer Science, University of Tennessee

Padma Raghavan began her college education at the Indian Institute of Technology (IIT) at Kharagpur, a small college town near Calcutta. "Solving problems using computers is an intriguing process—I was introduced to it in an undergraduate engineering class," she says. "The computing environment was truly primitive, using Fortran 70 on punch-cards on a Soviet-made IBM-360 EC1030 with manuals in Russian. But it only added to the challenge and the excitement of getting a program to run and give answers."

Raghavan received her Bachelor of Technology degree in computer science and engineering in 1985 from IIT. She moved to the United States soon after and joined the graduate program in computer science at the Pennsylvania State University. She received her M.S. in August 1987 and her Ph.D. in December 1991. She continued post-doctoral work on sparse-matrix computations as part of the Defense Advanced Research Projects Agency (DARPA)-funded ScaLAPACK project, working with Michael T. Heath at the National Center for Supercomputing Applications (NCSA) at the University of Illinois. During this period, she was exposed to the work of CRPC researchers Jack Dongarra (University of Tennessee), Ken Kennedy (Rice University), Joel Saltz (University of Maryland), Danny Sorensen (Rice University), and Mary Wheeler (Rice University [now University of Texas]). "I was really impressed by the spectrum of work by CRPC researchers, from parallel programming languages, run-time environments, scientific computing libraries, and large-scale applications to education and outreach," says Raghavan. "The best part of it is, they did all of it well."

In the fall of 1994, Raghavan joined the Computer Science Department at the University of Tennessee as an assistant professor. Raghavan's current work concerns techniques for solving very large sets of linear equations. Linear systems of equations can be represented in matrix notation as Ax=b. A wide variety of applications, from structural mechanics to materials modeling, gives rise to sparse linear systems—ones in which most of the elements of the matrix A are zero. Schemes that use the sparsity of the matrix A can reduce execution time and storage requirements by orders of magnitude. The challenge is in devising schemes that further reduce the execution time by using parallelism.
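
The savings are easy to see in a small experiment. The sketch below is purely illustrative and not drawn from Raghavan's own codes; it uses SciPy to build the tridiagonal system produced by a one-dimensional model problem and solves Ax=b while storing only the nonzero entries:

    # Illustrative sketch: a sparse tridiagonal system of the kind produced by
    # a 1-D discretization, solved while storing only the nonzero entries.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 100_000                           # number of unknowns
    main = 2.0 * np.ones(n)               # diagonal of A
    off = -1.0 * np.ones(n - 1)           # off-diagonals of A
    A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
    b = np.ones(n)

    x = spla.spsolve(A, b)                # solve Ax = b exploiting sparsity

    # The sparse format keeps about 3n entries; a dense matrix would need n*n.
    print(f"nonzeros stored: {A.nnz}, dense entries would be: {n * n}")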

Raghavan is best known for her work in parallelizing sparse direct solvers. Sparse direct solvers rely on factorization; that is, representing A as the product of two triangular matrices. These triangular matrices are then used to solve the original system by substitution. Such matrix factorization results in fill-in; zeroes in the matrix can become nonzero in the factors. Consequently, sparse direct solvers have two stages: a symbolic stage to control and predict fill-in, followed by a numeric stage to do the factorization and triangular solution. In collaboration with Mike Heath, she developed the first fully parallel sparse direct solver suitable for matrices associated with a geometry. When the matrix comes from a discretization involving a mesh, the geometry of the mesh is used in a parallel "Cartesian" scheme to control fill-in.
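
The two-stage structure, and the fill-in it must manage, can be seen even with a serial library solver. The sketch below is only an illustration using SciPy's SuperLU interface on a small two-dimensional mesh problem; it is not the parallel Cartesian scheme described above, but comparing nonzero counts before and after factorization shows the fill-in that the symbolic stage exists to predict and control:

    # Illustrative sketch: serial sparse LU on a 5-point-stencil mesh matrix,
    # showing fill-in by comparing nonzero counts of A and of its factors.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    k = 100
    T = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(k, k))
    I = sp.identity(k)
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()    # 2-D Laplacian, n = k*k

    lu = spla.splu(A)        # fill-reducing ordering + numeric factorization
    b = np.ones(A.shape[0])
    x = lu.solve(b)          # triangular solves with the factors

    fill = lu.L.nnz + lu.U.nnz
    print(f"nnz(A) = {A.nnz}, nnz(L) + nnz(U) = {fill} (the difference is fill-in)")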

One of Raghavan's major contributions has been to make parallel sparse direct solvers truly useful in applications. Most applications factor the matrix once and then use the triangular factors to solve for a sequence of right-hand-side vectors. Such repeated triangular solves easily become a bottleneck on parallel machines because the latency of communication makes substitution inefficient. Raghavan developed a "selective inversion" scheme that provides an efficient, latency-tolerant alternative: slow parallel substitution is replaced by fast parallel matrix-vector multiplication at the expense of a one-time overhead limited to approximately 5% of the cost of factorization.
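
The trade can be illustrated with a dense, serial toy. The sketch below is not the distributed algorithm, which inverts only selected submatrices of the sparse parallel factors, but it captures the same idea of paying a one-time cost so that each later right-hand side needs only matrix-vector products:

    # Illustrative toy: invert a triangular factor once, then answer each new
    # right-hand side with matrix-vector products instead of substitution.
    import numpy as np
    from scipy.linalg import cholesky, cho_solve, solve_triangular

    rng = np.random.default_rng(0)
    n = 500
    G = rng.standard_normal((n, n))
    A = G @ G.T + n * np.eye(n)            # symmetric positive definite test matrix

    L = cholesky(A, lower=True)            # factor once: A = L L^T

    # One-time overhead: invert the triangular factor.
    L_inv = solve_triangular(L, np.eye(n), lower=True)

    for _ in range(10):                    # many right-hand sides
        b = rng.standard_normal(n)
        x_subst = cho_solve((L, True), b)  # forward/back substitution
        x_matvec = L_inv.T @ (L_inv @ b)   # two matrix-vector products instead
        assert np.allclose(x_subst, x_matvec)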

A more recent focus has been on the development of parallel hybrid sparse solvers. The goal is to develop hybrids that combine the robustness of a direct solver with the low memory requirements of an iterative scheme. Another aspect of Raghavan's work concerns applying ideas in sparse matrix methods to other areas. An example is using sparse graph traversals to speed up query processing in information retrieval using low-rank models of the term-document space. She is working with Michael Berry of the University of Tennessee on this project.
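
One generic form such a hybrid can take, shown here only as an illustration and not as Raghavan's specific method, is an incomplete factorization: it keeps limited fill-in, and therefore limited memory, and supplies an approximate direct solve that preconditions an iterative Krylov method:

    # Illustrative sketch: incomplete LU as a memory-limited approximate direct
    # solve that preconditions an iterative method (GMRES) on a mesh matrix.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    k = 100
    T = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(k, k))
    I = sp.identity(k)
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
    b = np.ones(A.shape[0])

    ilu = spla.spilu(A, drop_tol=1e-4)                  # limited fill-in
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)  # preconditioner M ~ A^-1

    x, info = spla.gmres(A, b, M=M)
    print("converged" if info == 0 else f"gmres stopped with info = {info}")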

Raghavan has authored or co-authored more than 30 technical papers and has served on numerous Ph.D. and M.S. committees. She received a CAREER award from the National Science Foundation for her research in sparse matrix computations. She is currently a member of the CRPC Technical Steering Committee.

Of her involvement with the CRPC, Raghavan says: "I have been greatly influenced by CRPC and its goal of making high-performance parallel computing truly usable. Developing a new computing technique is just the beginning. The payoff is in making it usable for a wide range of applications."
