Assignment 4 -- The first MPI assignment
Solving the Wave Equation in MPI
For this assignment, you are to write a program to solve a wave equation
using either C/MPI or Java/MPI. Whichever language you pick this week,
next week's problem will be to translate your solution to the other
language.
Let us consider the problem of a vibrating string with fixed end
points. The time-dependent motion of the string is represented by the
partial differential equation:
(1/c^2) * (partial^2 psi / partial t^2) -
(partial^2 psi / partial x^2) = 0
This problem is described in Chapter 5 of the book Solving Problems on
Concurrent Processors, Volume 1 by Fox et al. (and it is much better
typeset there than in HTML!)
The solution of this equation psi(x,t) is the vibration amplitude
of the string expressed as a function of the x position of the string
and time t. If we give an initial position and velocity of the string
as a function of x, then we can solve the equation to find the position
of the string at any time t.
Let us assume that the string stretches over the interval from 0
to 1 in the x direction, and that the ends are anchored at
amplitude 0. To solve this problem numerically, we choose uniform
intervals delta x and delta t, of space and time.
Then for each point i along the x direction,
we can derive a scheme for computing the value of the amplitude at each
time step. This assumes that we are computing the value at a new time
step (t + delta t)(let's call this t+1) where we already know the
values at the current time step t and the previous time step
(t - delta t)(let's call it t-1):
psi_i(t+1) = 2 * psi_i(t) - psi_i(t-1) +
tau^2 * [psi_(i-1)(t) - 2 * psi_i(t) + psi_(i+1)(t)]
where tau = c * (delta t / delta x)
To set up this problem for MPI, you should choose N, the number of points
in the discretization of the x direction. If the total x
interval is from 0 to 1, this determines delta x as
1/(N-1), and position i as i/(N-1).
You can choose time units so that delta t is 1.
The units for the string equation can also be chosen so that the constant
c is 1.
This is set up to be a one-dimensional problem, so you can divide the
points among the processors, with each processor computing psi for
N/NumProcs points. You will need arrays to record the values of the
points at the various time steps. Then note the equation for point i
also requires you to know the value of point i-1 and point i+1.
Except for the global left and right endpoints, which remain fixed at
psi = 0, this leads to the following algorithm for each processor:
- Initialize starting values for time steps 1 and 2.
- Identify left and right neighbor processors.
- For each time step
{ exchange end values with neighbors;
perform the update psi_i(t) -> psi_i(t+1) }
- Send resulting values at final time step to one "control" processor,
say processor 0, which will print out all results showing the
position of the string at the final time step.
So, in general, here are some of the key things that need to be
handled in the program: number of processors, number of time steps,
total points along the string, number of points handled by a
processor, values at time t, values at time (t - delta t),
and values at time (t + delta t).
Due Date
Friday, October 17, 1997. Please try to work on it as soon as
possible.
References
- For more details, we have copied for you Chapter 5 of the Fox et al.
book cited above. Copies are available from Nora, room: 3-206, phone:
443-1722, email: nora@npac.syr.edu.
- At the top level of the class homepage, click on "Fall 96 Page" option.
There, under the "Student Activities and Assignments" section, you
will find lots of MPI/C and MPI/Fortran examples.
- Also in the homepage, click on "Resources" - "Software" -
"Languages". There, you will find lots of MPI links to various tutorials,
documents, examples, etc.
- In the "mpi-examples" directory in your VPL accounts, you will find
examples, ready to compile and run, that you can experiment with.
- For any question you may have about the assignment or VPL,
email saleh@npac.
MPI functions you need to know
-
MPI_Send
prototype: int MPI_Send(void* buf, int count, MPI_Datatype datatype,
int dest, int tag, MPI_Comm comm)
void* buf: the block of memory where the contents of
the message are stored.
int count: the number of values to be sent.
MPI_Datatype datatype: the type of the values to be sent.
It must be an MPI type; the most often used are MPI_INT, MPI_DOUBLE,
MPI_CHAR, etc.
int dest: the rank(procid) of the receiving process.
int tag: an integer associated with the message sent, which
helps identify the message.
MPI_Comm comm: the communicator to which the sending and receiving
processes belong. For this assignment, it will always be MPI_COMM_WORLD
which is predefined in the system and contains all the processes
running when the execution of the main program begins.
-
MPI_Recv
prototype: int MPI_Recv(void* buf, int count, MPI_Datatype datatype,
int source, int tag, MPI_Comm comm, MPI_Status *status)
Most arguments have the same meaning as in MPI_Send.
int tag: similar to the tag in MPI_Send, with the minor
difference that MPI_Recv may also use the predefined wildcard
constant MPI_ANY_TAG.
int source: the rank(procid) of the sending process. MPI allows
source to be a "wildcard." There is a predefined constant MPI_ANY_SOURCE
that can be used if a process is ready to receive a message from any
sending process rather than a particular one. Note that there is *no*
such constant MPI_ANY_DEST.
MPI_Status *status: returns information on the data that was
actually received. It contains fields recording the message's source and tag.
If the source of the receive was MPI_ANY_SOURCE, status will contain
the rank of the process that sent the message.
-
MPI_Sendrecv
prototype: int MPI_Sendrecv(void *sendbuf, int sendcount,
MPI_Datatype sendtype, int dest, int sendtag, void *recvbuf,
int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm,
MPI_Status *status)
All the arguments have the same meaning as in MPI_Send and MPI_Recv;
MPI_Sendrecv can be thought of as a combination of the two, performing
the send and the receive as one operation.
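As a hedged illustration (a fragment, not a complete program), the per-step neighbor exchange in the algorithm above might look like this with MPI_Sendrecv. The variable names are our own; using MPI_PROC_NULL as the neighbor rank at the ends of the string makes the sends and receives there no-ops, so the same two calls work on every processor:

```c
/* Fragment only -- assumes MPI_Init has been called, `rank` and `nprocs`
 * are known, and psi[1..local_n] holds this processor's points with
 * ghost cells at psi[0] and psi[local_n + 1]. */
MPI_Status status;
int left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
int right = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

/* Send my leftmost point to the left; receive my right neighbor's
 * leftmost point into my right ghost cell. */
MPI_Sendrecv(&psi[1], 1, MPI_DOUBLE, left, 0,
             &psi[local_n + 1], 1, MPI_DOUBLE, right, 0,
             MPI_COMM_WORLD, &status);
/* Send my rightmost point to the right; receive my left neighbor's
 * rightmost point into my left ghost cell. */
MPI_Sendrecv(&psi[local_n], 1, MPI_DOUBLE, right, 1,
             &psi[0], 1, MPI_DOUBLE, left, 1,
             MPI_COMM_WORLD, &status);
```

Because each MPI_Sendrecv pairs its send with its receive, neighboring processors cannot deadlock the way they can if every processor issues a blocking MPI_Send before any MPI_Recv.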
- When we pass a message using the above functions, we can imagine that we
are sending a letter, with the message's envelope containing the
following information:
- The rank(procid) of the receiver; this is like the address.
- The rank(procid) of the sender; this is like the return address.
- A tag; this helps the receiver to identify the contents of the message.
- A communicator; for this, there is no good postal analogy.
In MPI, messages must be received using the communicator with which they
are sent.
The four pieces of information on the envelope are what MPI uses to
identify a message.
Please send any questions to Nancy McCracken at njm@npac.syr.edu or to
Saleh Elmohamed at saleh@npac.syr.edu.
Last modified: Sat Nov 22 23:42:09 EST 1997