The CHAOS research group at the University of Maryland College Park has developed methods that make it possible to produce portable compilers and runtime libraries that map a broad range of challenging applications onto high performance computer architectures. A major focus of this work has been to develop techniques for irregular scientific problems, i.e., problems that are unstructured, sparse, adaptive, or block structured. The group works extensively with application developers in many disciplines and with parallel compiler vendors. Many of the concepts first described and prototyped by this project are making their way into High Performance Fortran during the ongoing second round of language definition.

This work is also leading to the development of runtime support (Meta-Chaos) to couple the runtime libraries used in data and task parallel compilers. Meta-Chaos is a central component of the common compiler runtime support being developed by the Parallel Compiler Runtime Consortium.
We are currently developing techniques that will allow parallel compute and data objects to offer their services to remotely connected clients. The goal is to make it possible to compose programs running on any combination of distributed memory machines, shared memory machines, or networked microcomputers and workstations. This research is motivated by software interoperability scenarios from two classes of applications: the first is sensor data processing and integration, and the second is complex physical simulations.

We have developed early prototypes of our data parallel program coupling software and have used them to demonstrate the ability to couple separately executing High Performance Fortran programs, and to couple High Performance Fortran programs with applications developed using the Maryland CHAOS and Multiblock Parti libraries.
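The core of such coupling is an exchange schedule: for each element of a shared array region, determine its owner under the source program's data distribution and under the destination program's, then move the data accordingly. The single-process C sketch below makes that idea visible for a block and a cyclic distribution; the function and array names, and the distributions themselves, are illustrative assumptions, not the Meta-Chaos interface, and a real coupling would aggregate the per-element moves into messages between separate address spaces.

```c
#include <stdio.h>

#define N       12
#define NPROC_A  3   /* processes in program A (block distribution)  */
#define NPROC_B  4   /* processes in program B (cyclic distribution) */

/* Ownership functions for the two programs' data distributions. */
int owner_block(int i)  { return i / ((N + NPROC_A - 1) / NPROC_A); }
int owner_cyclic(int i) { return i % NPROC_B; }

int main(void) {
    double a[N];   /* array region as distributed by program A */
    double b[N];   /* the same region as seen by program B     */
    for (int i = 0; i < N; i++) a[i] = 100.0 + i;

    /* Build and apply the schedule: one (source proc, destination
     * proc) pair per element. A coupling runtime would batch these
     * into per-processor-pair messages instead of copying directly. */
    for (int i = 0; i < N; i++) {
        int src = owner_block(i);
        int dst = owner_cyclic(i);
        printf("elem %2d: A proc %d -> B proc %d\n", i, src, dst);
        b[i] = a[i];   /* the actual data motion */
    }
    printf("check: b[5] = %.1f\n", b[5]);
    return 0;
}
```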
Based on our experience in developing runtime libraries and parallelizing applications, we have developed several compilation techniques. Our goal is to automate our hand parallelization and optimization techniques through the use of compilers. We have employed the Fortran D compilation system (developed primarily at Rice University) as the infrastructure for implementing our techniques.
To generate efficient code for applications with multiple levels of indirection, we have developed an index flattening technique based on the notion of program slicing. This technique transforms a loop with multiple levels of indirection into a series of loops with at most a single level of indirection, as sketched below.

We have also observed that aggressive interprocedural optimizations are required to deal with large applications that have irregular accesses to data or large I/O requirements. We have developed an Interprocedural Partial Redundancy Elimination (IPRE) technique for the interprocedural placement of communication preprocessing and collective communication statements. We are currently working on Interprocedural Balanced Code Placement (IBCP), which will allow us to overlap computation and communication across procedure boundaries. We are also working on generating distributed memory code from Fortran 90 codes that use pointers and recursive data structures.

by Joel Saltz
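As a concrete illustration of index flattening, the C sketch below rewrites a loop with two levels of indirection as a sequence of loops with at most one level of indirection each. The array and variable names are our own, chosen for illustration; they are not taken from the CHAOS sources.

```c
#include <stdio.h>

#define N 6

int main(void) {
    /* Original loop, two levels of indirection in one statement:
     *     for (i = 0; i < N; i++) y[i] += x[ia[ib[i]]];
     */
    double x[N]  = {10, 11, 12, 13, 14, 15};
    double y[N]  = {0};
    int    ib[N] = {5, 4, 3, 2, 1, 0};
    int    ia[N] = {0, 2, 4, 1, 3, 5};
    int    t1[N], t2[N];

    /* Slice 1: evaluate the innermost subscript (direct access only). */
    for (int i = 0; i < N; i++)
        t1[i] = ib[i];

    /* Slice 2: one level of indirection resolves the next subscript. */
    for (int i = 0; i < N; i++)
        t2[i] = ia[t1[i]];

    /* Slice 3: the original computation, again only one level deep. */
    for (int i = 0; i < N; i++)
        y[i] += x[t2[i]];

    for (int i = 0; i < N; i++)
        printf("y[%d] = %.0f\n", i, y[i]);
    return 0;
}
```

Each slice is now a simple gather that an inspector/executor runtime can analyze and prefetch independently.

The effect of IPRE can be sketched in the same spirit: a communication preprocessing step that a callee would otherwise repeat on every invocation is hoisted to the caller when its inputs are invariant there. The build_schedule function below is a stand-in for a CHAOS-style inspector, not the actual library interface; the counter just makes the redundancy visible.

```c
#include <stdio.h>

#define N     4
#define STEPS 3

static int schedule_builds = 0;

/* Stand-in inspector: analyzes an indirection array and returns a
 * token representing a communication schedule. */
static int build_schedule(const int *idx, int n) {
    int key = 0;
    schedule_builds++;
    for (int i = 0; i < n; i++) key += idx[i];
    return key;
}

/* Before IPRE: the callee rebuilds its schedule on every call,
 * even though idx never changes across the time loop. */
static void sweep_naive(const int *idx, double *x) {
    int sched = build_schedule(idx, N);
    (void)sched;                          /* would drive the gather */
    for (int i = 0; i < N; i++) x[idx[i]] += 1.0;
}

/* After IPRE: the preprocessing has been hoisted to the caller,
 * and the already built schedule is passed in. */
static void sweep_ipre(int sched, const int *idx, double *x) {
    (void)sched;                          /* would drive the gather */
    for (int i = 0; i < N; i++) x[idx[i]] += 1.0;
}

int main(void) {
    int idx[N] = {3, 1, 0, 2};
    double x[N] = {0};

    for (int s = 0; s < STEPS; s++) sweep_naive(idx, x);
    printf("naive: %d schedule builds\n", schedule_builds);

    schedule_builds = 0;
    int sched = build_schedule(idx, N);   /* built once, outside the loop */
    for (int s = 0; s < STEPS; s++) sweep_ipre(sched, idx, x);
    printf("IPRE : %d schedule builds\n", schedule_builds);
    return 0;
}
```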