Cornell Theory Center

MPI Example Programs: Laplace


Quick summary

This is an example of a two-dimensional block-decomposition program. It follows the SPMD (single program, multiple data) paradigm. A complete description of the algorithm can be found in Fox et al., "Solving Problems on Concurrent Processors, Volume 1: General Techniques and Regular Problems," Prentice-Hall, Englewood Cliffs, New Jersey. Within certain limits (outlined below) it is scalable. The program includes examples of nodes exchanging edge values, convergence checking, and the use of some of the MPI collective communication routines.


Discussion of Problem

This program uses a finite-difference scheme to solve Laplace's equation on a square grid, whose dimension must be of the form (4m+2) x (4m+2).


Parallel Implementation

The program is currently configured for a 48x48 matrix, divided among four processors.

Each worker decides for itself whether it is an edge, corner, or interior node, as well as which other workers it must communicate with. The edge nodes get their "local" boundary values from the "global" boundary values as well as communicating with their neighboring interior nodes. The initial value of all points is set to the average of the global boundary values. The sequence for an iteration is as follows:

Each worker exchanges edge values with its four neighbors. New values are then calculated for the upper-left and lower-right corners (the "red" corners) of each node's matrix. The workers exchange edge values again, and the upper-right and lower-left corners (the "black" corners) are then calculated.

Every 20 iterations, the nodes calculate the average difference between each point's current value and its value 20 iterations earlier. These local average differences are collected by task 0, which computes the global average difference. If it is below some acceptable value, task 0 collects the pieces of the matrix; otherwise, 20 more iterations are run.


Instructions for Compiling and Running

  1. First, copy the source files to your current working directory. You may copy lab and solution files from within your browser, or use the cp command. Beware that some browsers add or replace characters when files are copied; using cp avoids this.

    Lab directory: /usr/local/doc/www/Edu/Tutor/MPI/Templates/laplace/

    C files (there is no Fortran version of this program):

      parallel.laplace.c
      parallel.laplace.makefile

  2. Compile using the make file provided.

    make -f parallel.laplace.makefile

  3. Specify how many nodes to run on (the program expects four):

    setenv MP_PROCS 4

  4. Execute the program. The code takes about 120 seconds to run. Results are automatically stored in parallel.laplace.out.

    parallel.laplace


Cleanup

Some or all of the following files will be left in your directory, and should be removed to save space:

parallel.laplace.out
parallel.laplace
parallel.laplace.makefile
parallel.laplace.c



URL http://www.tc.cornell.edu/Edu/Tutor/MPI/Templates/laplace/
updated June 5, 1996