From: Processing-In-Memory (PIM) Architectures for Very High Performance MPP Computing, PAWS '96, Mandalay Beach, April 21-26, 1996, by Peter Kogge, Notre Dame.
- Huge bandwidths available to processing logic
- "Free" 4-line cache at the sense amps
- Minimal addressing delays => minimal latency
- Tremendous internode bandwidths
- "Built in" local shared memory
- Huge bandwidths available at chip periphery
- 2D tiling prevents wires "over memory"
- Opportunity for "mix and match":
  - Memory macros
  - Processing logic
  - External I/O protocols