
Foil 14: Parallel Computer Architecture -- Memory Structure

From CPS615 Lecture on Performance (end) and Computer Technologies (start), Delivered Lectures of CPS615 Basic Simulation Track for Computational Science -- 10 September 1996, by Geoffrey C. Fox
Memory Structure of Parallel Machines
  • Distributed
  • Shared
  • Cached
  • Heterogeneous mixtures of the above
Shared (Global): There is a global memory space, accessible by all processors.
  • Processors may also have some local memory.
  • Algorithms may use global data structures efficiently (a sketch follows this list).
  • However, "distributed memory" algorithms may still be important, because memory access times are nonuniform (NUMA).
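The foil itself contains no code; the following is a minimal illustrative sketch in C with OpenMP (a shared-memory idiom not mentioned in the lecture) of what using a global data structure looks like: every thread reads and writes one shared array directly, with no explicit messages, although on a NUMA machine the remote portions of that array are slower to reach.

/* Illustrative sketch only (not from the foil): shared-memory style in C/OpenMP.
 * Compile with: cc -fopenmp shared.c */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];   /* one global array, visible to all threads */
    double sum = 0.0;

    /* Each thread works directly on its slice of the shared array. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    /* Reduction over the same shared data structure, no messages needed. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    return 0;
}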
Distributed (Local, Message-Passing): All memory is associated with processors.
  • To retrieve information from another processor's memory, a message must be sent there (a message-passing sketch follows this list).
  • Algorithms should use distributed data structures.
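As an illustration only, here is a minimal C sketch of the distributed-memory model using MPI, a standard message-passing library: each process owns its own local array, and data held by another process can only be obtained by an explicit send and a matching receive.

/* Illustrative sketch only (not from the foil): message passing in C with MPI.
 * Run with, e.g.: mpicc distributed.c && mpirun -np 2 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local[4];      /* each process owns only its local data */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process fills only the part of the distributed data it owns. */
    for (int i = 0; i < 4; i++)
        local[i] = rank * 4 + i;

    /* Process 1 cannot read process 0's memory directly: process 0 must
     * send the values, and process 1 must receive them. */
    if (rank == 0 && size > 1) {
        MPI_Send(local, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double remote[4];
        MPI_Recv(remote, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %f ... from rank 0\n", remote[0]);
    }

    MPI_Finalize();
    return 0;
}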


