001 Designing and Building Parallel Programs
002 Outline
003 Message Passing Interface (MPI)
004 What is MPI?
005 Compiling and Linking (in MPICH)
006 Running MPI Programs (in MPICH)
007 Sending/Receiving Messages: Issues
008 What Gets Sent: The Buffer
009 Generalizing the Buffer in MPI
010 Advantages of Datatypes
011 To Whom It Gets Sent: Process Identifiers
012 Generalizing the Process Identifier in MPI
013 How It Is Identified: Message Tags
014 Sample Program using Library
015 Correct Execution
016 Incorrect Execution
017 What Happened?
018 Solution to the Tag Problem
019 MPI Basic Send/Receive
020 Six-Function MPI
021 Simple Fortran Example
022 Simple Fortran Example (2)
023 Simple Fortran Example (3)
024 Advanced Features in MPI
025 Collective Communication
026 Synchronization
027 Data Movement (1)
028 Data Movement (2)
029 Collective Computation Patterns
030 List of Collective Routines
031 Example: Performing a Sum
032 Buffering Issues
033 Avoiding Buffering Costs
034 Combining Blocking and Send Modes
035 Connecting Programs Together
036 Connecting Programs via Intercommunicators
037 Regular (Cartesian) Grids
038 Regular Grid Example: Getting the Decomposition
039 Regular Grid Example: Conclusion
040 Designing MPI Programs
041 Jacobi Iteration: The Problem
042 Jacobi Iteration: MPI Program Design
043 Jacobi Iteration: MPI Program Design (2)
044 Jacobi Iteration: MPI Program
045 Jacobi Iteration: MPI Program II
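Since the outline itself carries no code, a minimal sketch of the six-function subset covered in slides 019-020 may help orient the reader before diving into the slides: two processes exchange a single double-precision value using only MPI_INIT, MPI_COMM_SIZE, MPI_COMM_RANK, MPI_SEND, MPI_RECV, and MPI_FINALIZE. This is an illustration written in the Fortran 77 style of the deck's examples, not one of the tutorial's own programs.

      program sixfun
      include 'mpif.h'
      integer ierr, rank, size, status(MPI_STATUS_SIZE)
      double precision value
c     Every MPI program brackets its work with MPI_INIT / MPI_FINALIZE.
      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank .eq. 0) then
c        Process 0 sends one double to process 1, using message tag 99.
         value = 3.14d0
         call MPI_SEND(value, 1, MPI_DOUBLE_PRECISION, 1, 99,
     $                 MPI_COMM_WORLD, ierr)
      else if (rank .eq. 1) then
c        Process 1 receives; status records the actual source and tag.
         call MPI_RECV(value, 1, MPI_DOUBLE_PRECISION, 0, 99,
     $                 MPI_COMM_WORLD, status, ierr)
         print *, 'process 1 received', value
      endif
      call MPI_FINALIZE(ierr)
      end

Under MPICH this would be compiled and launched along the lines of slides 005-006, for example with mpif77 -o sixfun sixfun.f followed by mpirun -np 2 sixfun, though the exact commands vary by installation.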