Course Materials for CPS 615 Fall Semester 1996, Introduction to Computational
Science
Computational Techniques for Scientific and Engineering Problems
The overall introductory material is contained in
Collection of Various Basic Subject Matter
Then we took a break to discuss
- parallelism in the 2D Laplace equation, and derive
- speedup and performance formulae, with reference to analogies with society
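The speedup and performance formulae referred to above take a simple form; a minimal sketch (the timing values are illustrative):

```python
# Standard definitions of speedup and efficiency used for the
# 2D Laplace performance discussion (numbers are illustrative).

def speedup(t_sequential, t_parallel):
    """S(P) = T(1) / T(P)."""
    return t_sequential / t_parallel

def efficiency(t_sequential, t_parallel, num_procs):
    """epsilon = S(P) / P; 1.0 means perfect scaling."""
    return speedup(t_sequential, t_parallel) / num_procs

# Example: a job taking 100 s sequentially and 8 s on 16 processors
# has speedup 12.5 and efficiency about 0.78.
s = speedup(100.0, 8.0)
e = efficiency(100.0, 8.0, 16)
```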
Here we set the Second Homework
as discussed at the end of the foilset Programming Models and Performance for Laplace's Equation
- This (September 3) lecture included the first foils 1-6, up to: Basic Structure of Domain to be Updated in Parallel Version
- And then from foil 25: General Features of Laplace Example up to the end of the foilset.
We returned to Collection of Various Basic Subject Matter to discuss
- Basic Computer Architectures with more detail on technologies
- On September 5 with audio lecture, we got to foil: Superconducting Technology -- Problems with a discussion of:
- This starts by considering the analytic form for communication overhead
and illustrates its stencil dependence in simple local cases -- stressing
the relevance of grain size. This was covered in Homework 3.
- The implication for scaling and generalizing from Laplace example is covered
- We covered scaled speedup (fixed grain size) as well as fixed problem size
- We noted that some useful material was missing; this was continued in the next
lecture (Sept 10, 96)
- The lecture starts its coverage of computer architecture with base technologies,
contrasting CMOS (covered in an earlier lecture) with Quantum
- and Superconducting technology
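The analytic overhead form discussed in this lecture can be sketched as follows, assuming the standard stencil result that the fractional communication overhead falls as the 1/d-th root of the grain size n (constants and function names are illustrative):

```python
# Sketch of the analytic communication-overhead form for a simple
# local (stencil) problem in d dimensions:
#   f_C ~ (t_comm / t_calc) * constant / n**(1/d)
# where n is the grain size (points stored per processor).

def comm_overhead(n, d, t_comm, t_calc, const=1.0):
    """Fractional communication overhead for grain size n in d dimensions."""
    return const * (t_comm / t_calc) / n ** (1.0 / d)

def efficiency(n, d, t_comm, t_calc, const=1.0):
    """Parallel efficiency 1 / (1 + f_C)."""
    return 1.0 / (1.0 + comm_overhead(n, d, t_comm, t_calc, const))

# Overhead falls as grain size grows: for d = 2, quadrupling n halves f_C.
assert abs(comm_overhead(400, 2, 1.0, 1.0)
           - 0.5 * comm_overhead(100, 2, 1.0, 1.0)) < 1e-12
```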
- On September 10 with audio lecture, we got to foil: MIMD Distributed Memory Architecture with a discussion of:
- Details of communication overhead in parallel processing for case where
"range" of interaction is large
- We show two old examples from Caltech that illustrate the correctness of the
analytic form
- We return to discussion of computer architectures describing
- Vector Supercomputers
- General Relevance of data locality and pipelining
- Flynn's classification (MIMD, SIMD, etc.)
- Memory Structures
- Initial issues in MIMD and SIMD discussion
- On September 12 with audio lecture, we continued up to foil: Latency/Bandwidth Space for 0-byte message (Latency) and 1 MB message (bandwidth) with discussion of:
- MIMD and SIMD with distributed shared memory
- MetaComputers
- Special Purpose Architectures
- Granularity with technological changes forcing larger process sizes
- Overview of Communication Networks with
- Switches versus topologies versus buses
- Typical values in today's machines
We then started a more detailed discussion of software technologies in HPCC,
where we will cover Fortran 90, HPF and MPI, with the basic overview and Fortran90/HPF
covered in the foilset HPCC Software Technologies Fall 96 -- Overview and HPF
- On September 17, we returned to the Laplace equation discussion (which we had
used for performance analysis), starting with foil: Discretized Form of Laplace's Equation on a Parallel Processor in Programming Models and Performance for Laplace's Equation, and discussed the sequential and HPF implementations. We briefly mentioned the MPI
version and introduced Fortran90, ending with foil: Important Features of Fortran90 in HPCC Software Technologies Fall 96 -- Overview and HPF.
Homework 4, introduced here, covered basic HPCC software resources
- This returns to a discussion of parallel software in the context of example
of Jacobi Iteration for Laplace equation in a square box of 256 grid points
on 16 processors
- We already used this example to discuss performance earlier in the semester
- In this lecture we studied the High Performance Fortran implementation of this
problem, with a briefer discussion of the MPI implementation
- We will return to MPI later on!
- We finished lecture with initial remarks on Fortran90
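The Jacobi iteration used in this example can be sketched sequentially in plain Python (the lecture's versions are in HPF and MPI; this sketch shows only the numerics):

```python
# Sequential sketch of Jacobi iteration for Laplace's equation on a
# square grid (the 16x16 = 256-point example used in the lecture).
# Boundary values are held fixed; each interior point is repeatedly
# replaced by the average of its four neighbours.

def jacobi_step(phi):
    """One Jacobi sweep; returns a new grid with interior points updated."""
    n = len(phi)
    new = [row[:] for row in phi]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j] +
                                phi[i][j - 1] + phi[i][j + 1])
    return new

# 16x16 grid, top edge held at 1.0, other boundaries at 0.0.
n = 16
phi = [[0.0] * n for _ in range(n)]
phi[0] = [1.0] * n
for _ in range(200):
    phi = jacobi_step(phi)
```

In the parallel version each of the 16 processors owns a 4x4 sub-grid and exchanges only its edge points with neighbours, which is the source of the communication overhead analysed earlier.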
- On September 24 with audio lecture, we started at foil with title: Introduction to Fortran90 Arrays - I in HPF discussion with a discussion of Fortran90 features at a high level, compared programming
models to put HPF in context, ending with foil Data Parallel Programming Model, and then started a diversion in the foilset Overview of Programming Paradigms and Relation to Applications to discuss our theory of problem architecture, ending at the foil with title: The Mapping of Heterogeneous Metaproblems onto Heterogeneous Metacomputer Systems
- This continues the discussion of Fortran 90 with a set of overview remarks
on each of the key new capabilities of this language
- We also comment on value of Fortran90/HPF in a world that will switch to
Java
- We digress to discuss a general theory of problem architectures, as this
explains such things as tradeoffs
- HPCC versus Software Engineering
- HPF versus MPI
- And the types of applications each software model is designed to address
- On September 26 with audio lecture, we started with a discussion by Kivanc Dincer of our Web based Virtual
Programming Laboratory and then returned to Problem Architecture discussion
from foil The map of Problem ---> Computer is performed in two or more stages to What determines when Parallelism is Clear?
- Then we skipped more descriptive material on HPF to begin the language discussion from foils Parallelism in HPF to Examples of Align Directive. Homework 5 and Homework 6 included basic exploration of Kivanc's technology and the first Fortran90
examples
- On October 1 with audio lecture, we continued the HPF discussion covering not only the basic syntax but also examples of the different distribution
strategies BLOCK, CYCLIC and CYCLIC(number) and we got up to foil: WHERE (masked array assignment) in HPF
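The ownership rules behind these three distribution strategies can be sketched as follows (0-based indices; the mapping rules are the standard HPF ones, but the function names are illustrative):

```python
# Sketch of how HPF's BLOCK, CYCLIC and CYCLIC(k) distributions map a
# 1D array index onto processors.

def block_owner(i, n, p):
    """BLOCK: contiguous chunks of ceil(n/p) elements per processor."""
    chunk = -(-n // p)  # ceiling division
    return i // chunk

def cyclic_owner(i, p):
    """CYCLIC: element i goes to processor i mod p."""
    return i % p

def block_cyclic_owner(i, k, p):
    """CYCLIC(k): blocks of k elements dealt out round-robin."""
    return (i // k) % p

# 16 elements on 4 processors:
#   BLOCK     -> 0000 1111 2222 3333
#   CYCLIC    -> 0123 0123 0123 0123
#   CYCLIC(2) -> 0011 2233 0011 2233
```

BLOCK suits stencil problems like Laplace (neighbours stay local), while CYCLIC variants help load balance when work varies across the index range.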
- The discussion of HPF was continued by Tom Haupt on October 3 and 8 but no audio is available. This HPF discussion was completed on October
24 with a few advanced topics.
- On October 10 with audio lecture, we started our discussion of Ordinary Differential Equations and the N-body problem
- We completely covered numerical methods for ODE's and started a discussion
of parallel N-body algorithms, getting up to foil Simple Data Parallel Version of N Body Force Computation -- Grav -- I
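The force computation behind this discussion is, at heart, an O(N^2) pairwise sum; a small sequential sketch (unit masses, G = 1, and the softening parameter are all illustrative):

```python
# Sequential sketch of the O(N^2) gravitational force computation in the
# N-body problem (2D, unit masses, G = 1; a small softening term avoids
# division by zero for coincident bodies).

def forces(positions, eps=1e-3):
    """Pairwise inverse-square attractions; returns a force vector per body."""
    n = len(positions)
    f = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            r2 = dx * dx + dy * dy + eps * eps
            inv_r3 = r2 ** -1.5
            f[i][0] += dx * inv_r3
            f[i][1] += dy * inv_r3
    return f
```

The data-parallel version expresses this double loop with array shifts or broadcasts, which is why the problem parallelizes so cleanly.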
- On October 15 with audio lecture, we first finished the discussion of ODE's and particle dynamics in CPS615 Foils -- set E: ODE's and Particle Dynamics, stopping at foil N-body Problem is a one dimensional Algorithm, and then started a discussion of Numerical Integration in Slitex Foilset CPS615 Numerical Integration Module, getting up to foil 9: Use of High Order Newton Cotes
- We set Homework 7, which involved some pretty solid programming in HPF using VPL
- On October 17, where there is NO audio lecture, we continued with numerical integration from foils 10: Strategies for Manipulating Integrals before using Standard Numerical
Integration to 33: Why Monte Carlo Methods Are Best in Multidimensional Integrals
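The low-order Newton-Cotes rules covered in this module can be sketched as the composite trapezoidal and Simpson's rules:

```python
# Composite Newton-Cotes rules from the numerical integration module.

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (error O(h^2))."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even (error O(h^4))."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

# Integrating x**2 on [0, 1] gives the exact answer 1/3 with Simpson,
# since Simpson's rule is exact for polynomials up to degree three.
```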
- On October 22 with an audio lecture, we first finished CPS615 Foils -- set D: Statistics and Random Numbers (in preparation for
Monte Carlo), covering:
- Completion of the simple overview of statistics
- Covering Gaussian Random Numbers, Numerical Generation of Random Numbers
both sequentially and in parallel
- Then we describe the central limit theorem which underlies the Monte Carlo method
- October 22 is completed by a return to Numerical Integration with the first part of the discussion of Monte Carlo Integration, finishing
at foil 48: Stock Market Example of Multiple Monte Carlos --- I
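The central-limit-theorem basis of Monte Carlo integration can be sketched as follows: the sample mean of f at random points estimates the integral, with a statistical error estimate falling as 1/sqrt(N) (the function and names are illustrative):

```python
# Monte Carlo estimate of an integral with its central-limit-theorem
# one-sigma error bar; the error falls as 1/sqrt(N).

import math
import random

def monte_carlo(f, a, b, n, seed=0):
    """Estimate the integral of f on [a, b] and its one-sigma error."""
    rng = random.Random(seed)
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    volume = b - a
    return volume * mean, volume * math.sqrt(var / n)

# For f(x) = x**2 on [0, 1] the exact value is 1/3; with 10000 samples
# the estimate should agree to within a few error bars.
est, err = monte_carlo(lambda x: x * x, 0.0, 1.0, 10000)
```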
- Homework 8 combines random number generation with the N-body problem.
- October 24 with audio lecture covers two rather different topics starting with last remarks on Monte
Carlo methods from Numerical Integration which includes
- Monte Carlo Integration for large scale Problems using Experimental and
Theoretical high energy physics as an example
- This includes accept-reject methods, uniform weighting and parallel algorithms
- Then October 24 finally finishes the HPF discussion with foils: !HPF$ INDEPENDENT; NEW Variable; embarrassingly parallel DO INDEPENDENT discussed in the Monte Carlo case; and
HPF2 Changes
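The accept-reject method mentioned above can be sketched as: propose points uniformly and keep each one with probability proportional to the target density (the names and the example density are illustrative):

```python
# Accept-reject sampling: draw from an unnormalised density p(x) that is
# bounded above by p_max on [a, b].

import random

def accept_reject(p, a, b, p_max, n, seed=1):
    """Draw n samples distributed according to p on [a, b]."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = a + (b - a) * rng.random()
        # Keep x with probability p(x) / p_max.
        if rng.random() * p_max <= p(x):
            out.append(x)
    return out

# Example: triangular density p(x) = x on [0, 1] (p_max = 1);
# the sample mean should approach the exact mean 2/3.
xs = accept_reject(lambda x: x, 0.0, 1.0, 1.0, 5000)
```

Each trial is independent, which is why accept-reject generation is embarrassingly parallel, as discussed for the physics examples.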
- On Halloween, Oct 31, we started MPI, covering the foilset CPS615 Foils -- Message Passing Interface MPI for users, with audio lecture up to the foil with title Blocking Receive MPI_Recv(C) MPI_RECV(Fortran).
- On the following November 7 lecture with audio lecture, we covered the rest of CPS615 Foils -- Message Passing Interface MPI for users and then returned to Programming Models and Performance for Laplace's Equation to discuss the MPI implementation of 2D Jacobi Iteration for Laplace's Equation
- The next day, November 8, with audio lecture, we started our study of Partial Differential Equations, covering a basic
overview of the types of equation and a start on iteration methods in CPS615 Module on Iterative PDE Solvers, up to foil Matrix Notation for Iterative Methods
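The matrix notation reached here writes A = D + L + U (diagonal plus strictly lower and upper triangular parts) and iterates x_{k+1} = D^{-1}(b - (L + U)x_k); a small plain-Python sketch:

```python
# Jacobi iteration in matrix-splitting form for A x = b:
# with A = D + L + U, iterate x_{k+1} = D^{-1} (b - (L + U) x_k).

def jacobi_solve(A, b, iters=100):
    """Plain-Python Jacobi iteration for a small dense system."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Each component uses only the *previous* iterate x.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
             / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant example where Jacobi converges;
# the exact solution is [20/11, 19/11].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 7.0]
x = jacobi_solve(A, b)
```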
- On November 14 with audio lecture, we described physical simulation methods and in particular CFD from foilset Background in Partial Differential Equations with attention to CFD
- We stressed computational complexity differences between Laplace's Equation
and those of CFD (Navier Stokes Equation).
- Well after this, the class fell into confusion, with preparation for and attendance
at Supercomputing 96 being totally distracting.
- As computer equipment was used in Supercomputing 96, we could not record the lectures of November 21 and November 25.
- The lecture of November 26, with an audio lecture, has the heart of the discussion of the Finite Element method and its solution
with the conjugate gradient approach, using the foilset CPS615 Finite Element and Conjugate Gradient Presentation. A new reference which seems very good on the conjugate gradient is An Introduction to the Conjugate Gradient Method Without the Agonizing Pain by Jonathan R. Shewchuk from CMU. (Look in the section called reports at this URL.)
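The conjugate gradient algorithm discussed here can be sketched for a small symmetric positive definite system (this follows the standard algorithm, e.g. as presented in Shewchuk's tutorial; the names are illustrative):

```python
# Conjugate gradient for A x = b, A symmetric positive definite.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, iters=50, tol=1e-10):
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - A x (x starts at zero)
    p = r[:]          # first search direction is the residual
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)               # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        # New direction is A-conjugate to all previous ones.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# In exact arithmetic CG converges in at most n steps, so this 2x2
# SPD example is solved in two iterations.
```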
- Here we set the final problem set, a mini-project which will count
for 25% of the grade.
- The final lecture of December 5 was sandwiched between a trip to Houston and one to England. Maybe you can
hear the stress in the voice of the available audio lecture! This lecture covers linear programming extracted from CPS615/713 Some Practical Optimization Methods and full matrices from foilset Parallel Full Matrix Algorithms
- It starts with a survey of applications which use full matrix algorithms
- As usual we do matrix multiplication in detail, including our method and that
due to Cannon
- We describe MPI implementation using nifty group communicators
- A performance analysis shows that full matrix algorithms behave like two-dimensional
problems
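Cannon's algorithm mentioned above can be simulated sequentially: after an initial skew, each of n steps performs a local multiply-accumulate and then circularly shifts A left and B up (this sketch puts one matrix element per virtual processor; a real implementation would hold sub-blocks and use MPI shifts on the processor grid):

```python
# Sequential simulation of Cannon's algorithm on an n x n virtual
# processor grid (one element per "processor" here).

def cannon_multiply(A, B):
    n = len(A)
    # Initial skew: shift row i of A left by i, column j of B up by j.
    a = [[A[i][(i + j) % n] for j in range(n)] for i in range(n)]
    b = [[B[(i + j) % n][j] for j in range(n)] for i in range(n)]
    C = [[0.0] * n for _ in range(n)]
    for _ in range(n):
        # Local multiply-accumulate on every virtual processor.
        for i in range(n):
            for j in range(n):
                C[i][j] += a[i][j] * b[i][j]
        # The "communication" step: shift a left by one, b up by one.
        a = [[a[i][(j + 1) % n] for j in range(n)] for i in range(n)]
        b = [[b[(i + 1) % n][j] for j in range(n)] for i in range(n)]
    return C
```

The skew guarantees that after t shifts processor (i, j) holds A[i][k] and B[k][j] for k = (i + j + t) mod n, so the n accumulation steps sum over exactly the right index, and memory per processor stays constant.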
Here we record last year's course
Course Materials for CPS 615 Fall Semester 1995, Introduction to Computational
Science
This course is the graduate level introductory course in the discipline
of Computational Science (the computer simulation of natural systems). This
course is designed to teach the basic tools from mathematics and computer
science that are needed to give computational solutions to scientific and
engineering problems.
This is a new version of the CPS615 course materials, replacing those from
Spring 1994. The course is being taught by Geoffrey Fox in Fall 1995, and
the materials will be updated during the semester.
Introduction to High Performance Computing and Computational Science
- Lecture slides for the first section of the course. Covering Course Structure, Technology Driving Forces, Computational Science,
NPAC, National HPCC Program with Grand Challenges and why Parallel Computing
is natural
- Also see Recent Review of status of parallel computing technology.
- It is interesting to compare this 1995 paper with its 1992 Predecessor
- This is complemented by an Application discussion focussing on industry with three application areas : Chemistry, CFD and
Monte Carlo Simulations covered as case studies
Parallel Computing and Architectures
Algorithms and Applications
Technology Modules
World Wide Web
Fortran90 and HPF
Message Passing Interface (MPI)
General Resources for the Course
Northeast Parallel Architectures Center (NPAC) at Syracuse University. The
content of these pages may be used freely for educational purposes. Some
of the material is under individual copyright. This page maintained by Nancy
McCracken, njm@npac.syr.edu. Last updated 11/10/95.