Assignment 6 -- The third MPI assignment
Calculation of Pi via Monte Carlo
In Fox et al., 1988, Solving Problems on Concurrent Processors,
volume 1, chapter 12, page 207, an example was given to compute
Pi using the "dartboard" algorithm. Copies of this chapter will
be made available in class.
In the serial implementation of this algorithm,
calculation of Pi involves throwing n darts for each of
k iterations, with the cumulative average reported at each
iteration. For the parallel implementation, each processor performs this
work independently, reporting its calculated Pi value to the
master processor (ID 0), which prints the cumulative average.
Following the idea of this algorithm and using Monte Carlo techniques,
Pi was computed as an area of a circle of radius 1 (in two
dimensions). The parallel implementation of this was made available
in class early in the semester and can be retrieved from here. Also, a short description
including the code and output is available here.
Furthermore, this implementation calculates the volume of a
d-dimensional hypersphere of radius 1. The value of d is
between 1 and 10. It is clear that when d = 2
the volume of the sphere and the value of Pi are the same.
What needs to be done
This parallel implementation is quite
simple and uses only
point-to-point communication. For this assignment you need
to do the following:
- Re-write or modify this program to use collective
communication. Examples are Barrier, Broadcast (Bcast), Scatter,
Gather, Reduce, etc. Of course, there is no need to use all of
them; use only those suitable for the application described above.
- You can also keep the point-to-point communication if your
modification requires both. As you know, the two forms of
communication do not interfere with one another.
- Time your MPI code in both the original version and the version
you produce: use the MPI_Wtime routine and compare the timings of
the two versions.
- You can write your own version using any data structure or
program structure you like. That is, you don't have to follow the
structure or the data structure of the original code.
- Test the two codes on your VPL account and provide neat and
informative output. Always document your code and make sure the
comments and the code agree.
Due Date
Monday, Dec 1, 1997.
References
For more details, copies of Chapter 12 of Fox et al. book cited
above will be made available. Check with Nora, room: 3-206, phone:
443-1722, email: nora@npac.
At the top level of the class homepage, click on "Fall 96 Page" option.
There, under the "Student Activities and Assignments" section, you
will find lots of MPI/C and MPI/Fortran examples.
In the "mpi-examples" directory in your VPL accounts, you will find
ready-to-compile-and-run examples that you can experiment with.
Here are a couple of summaries on MPI point-to-point and collective communications.
Copies of useful examples and MPI tutorials will be made
available in class. A good starting point is this list of
MPI
tutorials and talks. More presentations, tutorials, and books
can be found here.
Also here
is a complete list of MPI routines.
Here is a useful page on Monte Carlo Methods
in Parallel Computing. Also, take a look at this page
for other Monte Carlo links.
For any question you may have about the assignment or VPL,
contact saleh@npac. Phone: 443-1073.
Saleh Elmohamed
Last modified: Mon Dec 8 19:26:02 EST 1997