Local image version of foils prepared Feb 22 1996

Foil 49: Work of SU graduate student Kevin Roe at ICASE, Summer 95 -- HPF for TLNS3D

From the collection of GIF images for General NPAC Projects 1995-March 96, General NPAC Foilsets 1995-1996, by Geoffrey C. Fox


See Two PostScript Foils
First Foil -- TLNS3D is a Production-Level Code for Solving 3D Inviscid and Viscous Flows
  • TLNS3D has two versions: Single Block and MultiBlock
  • The Single Block version is limited in the types of configurations it can handle. The code itself is not the limitation; rather, grid generation for a single block is impossible for complex configurations such as those listed under what the MultiBlock code can handle. The MultiBlock version is not limited in this way because grid generation for complex configurations is no longer a problem.
  • The actual work done so far is only on a 'model' code created for ICASE. It includes two subroutines from the full code: one for the computation of the convective fluxes and the other for the transfer of information between blocks.
Second Foil -- work done on HPF for TLNS3D
  • There is multigrid in the original code, and the code is structured so that at the top level of the program the arrays are one-dimensional while at the lower levels the 1D arrays are recast as 3D using pointers. Although the HPF language allows this (via REDISTRIBUTE, derived types, etc.), the current compilers do not support it. Thus the arrays had to be made multi-dimensional at all levels and the sizes hard-wired (a declaration sketch follows this list).
  • The data decomposition I am testing gives each processor all the data of one block: a 2-block case runs on 2 processors, a 4-block case on 4 processors, and so on (a distribution sketch follows this list).
  • The convective flux subroutine had to be restructured so that the compiler could analyze the code correctly and efficiently (an illustrative restructured loop nest follows this list).
  • Scalar expansion was done with all compilers except APR's; it was tedious and required more memory. Note that with this data distribution this subroutine was essentially embarrassingly parallel (a before/after scalar-expansion sketch follows this list).
  • The block boundary transfer subroutine also required restructuring of the do loops and conditionals to simplify the compiler's runtime analysis (before the restructuring I was getting broadcasts where there shouldn't be any).
  • Other data decomposition: each block on all processors (the arrays have dimensions (i,j,k) and the first dimension, i, is distributed among the processors) was done prior to my arrival; the last sketch after this list shows this distribution. Although the flux subroutine returned respectable speedups, the boundary transfer subroutine did not, because the exchange of information can range from none to very complex.
  • Future plans: each block will be distributed among a subset of processors. We are currently testing this idea but have no results to show at this time.
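A minimal declaration sketch of the hard-wiring workaround described in the multigrid bullet above. The array names, sizes, and the count of five flow variables are assumptions for illustration, not taken from the actual model code: the arrays get a fixed multi-dimensional shape at every level, in place of the original 1D work array that lower-level routines recast as 3D through pointers.

      MODULE FLOW_ARRAYS
!       Assumed grid sizes; in the workaround they are hard-wired
!       constants so the HPF compiler sees fixed 3D shapes everywhere.
        INTEGER, PARAMETER :: NI = 65, NJ = 33, NK = 33
        REAL :: W(NI, NJ, NK, 5)      ! conserved flow variables
        REAL :: P(NI, NJ, NK)         ! pressure
      END MODULE FLOW_ARRAYS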
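A sketch of the block-per-processor decomposition, again with assumed names and sizes: the block index is carried as the last array dimension and distributed so that each abstract processor owns exactly one block.

      INTEGER, PARAMETER :: NI = 65, NJ = 33, NK = 33, NBLK = 4
      REAL :: W(NI, NJ, NK, 5, NBLK)            ! block index last
!HPF$ PROCESSORS PROCS(NBLK)
!HPF$ DISTRIBUTE W(*, *, *, *, BLOCK) ONTO PROCS
!     With NBLK blocks on NBLK processors, each processor holds the
!     whole (i,j,k) range of one block, so the flux computation runs
!     with essentially no communication.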
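The loop nest below is purely illustrative (the real convective flux kernel is more involved); it only shows the shape the restructuring aims for: tightly nested loops with no values carried between iterations, plus an HPF INDEPENDENT assertion so the compiler knows the iterations are parallel.

!HPF$ INDEPENDENT
      DO K = 2, NK - 1
        DO J = 2, NJ - 1
          DO I = 2, NI - 1
            FLUX(I, J, K) = 0.5 * (W(I+1, J, K, 1) - W(I-1, J, K, 1))
          END DO
        END DO
      END DO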
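A before/after sketch of scalar expansion on an invented loop: the scalar temporary T, written and read in every iteration, hides the parallelism; promoting it to an array T(NI) makes each iteration self-contained, and that extra array is the additional memory mentioned above.

!     Before: the scalar T is reused every iteration.
      DO I = 1, NI - 1
        T = 0.5 * (W(I, J, K, 1) + W(I+1, J, K, 1))
        F(I) = T * AREA(I)
      END DO

!     After scalar expansion: T is promoted to an array (REAL :: T(NI)),
!     so the iterations are independent and can run in parallel.
      DO I = 1, NI - 1
        T(I) = 0.5 * (W(I, J, K, 1) + W(I+1, J, K, 1))
        F(I) = T(I) * AREA(I)
      END DO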
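A sketch of the earlier decomposition done prior to my arrival, with assumed names and sizes: a single block is spread over all processors along its first (i) dimension.

      REAL :: W(NI, NJ, NK, 5)        ! one block, on all processors
!HPF$ PROCESSORS PROCS(NUMBER_OF_PROCESSORS())
!HPF$ DISTRIBUTE W(BLOCK, *, *, *) ONTO PROCS
!     The flux sweep still parallelizes over i, but the block-to-block
!     boundary exchange may need sections owned by any processor, which
!     is why that subroutine showed poor speedup.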



Northeast Parallel Architectures Center, Syracuse University, npac@npac.syr.edu

