


Powerful message-passing system interfaces, such as the PVM and MPI APIs, 
simplify the task of manually creating parallel programs by layering 
easy-to-understand semantics over the communications hardware.  However, 
automatic parallelizing compilers can rarely make use of the full 
functionality of such interfaces.  Parallel compilers can instead manage 
the complexity of low-level interfaces that expose the communications 
hardware directly, producing more efficient code.



Although this dichotomy seems to argue for separate message-passing systems 
for compilers and programmers, it is interesting to note that the overall 
steps involved in accomplishing communication are the same.  The difference 
arises because parallel compilers attempt to perform those steps at compile 
time.  However, the difference has its limits: parallel compilers rely on 
human-written run-time systems to perform the tasks for which insufficient 
information was available at compile time.  Consider an HPF array 
statement.  If sufficient information about distribution and access 
patterns is known at compile time, precisely which indices to communicate 
can be determined at compile time.  If, on the other hand, this information 
becomes available only at run time, the compiler is better served by 
calling the run-time system.


