
Single Node Parallelism

For computation optimization, the scalar node compiler is expected to perform a number of classic scalar optimizations within basic blocks, including common subexpression elimination, copy propagation (of constants, variables, and expressions), constant folding, useless-assignment elimination, and various algebraic identities and strength-reduction transformations. To exploit parallelism within a single node (e.g., attached vector units), however, our compiler passes the relevant information to the node compiler through node directives. Because the original data-parallel constructs, such as the forall statement, have no data dependences between loop iterations, the node compiler can vectorize the generated loops easily.
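The following sketch illustrates this idea; the directive spelling is an assumption (a Cray-style IVDEP directive is shown, but the actual node directive depends on the target node compiler). A forall statement is translated into a loop over the locally owned index range, annotated so that the node compiler knows there are no loop-carried dependences and can vectorize it:

      ! Original data-parallel construct: iterations are independent by definition.
      FORALL (I = 1:N)  A(I) = B(I) + S * C(I)

      ! Node code generated for the locally owned portion of the arrays.
      ! The directive below asserts the absence of loop-carried dependences,
      ! allowing the node compiler to vectorize the loop.
CDIR$ IVDEP
      DO I = LOCAL_LB, LOCAL_UB
         A(I) = B(I) + S * C(I)
      END DO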

