LAM 6.1 Release Notes
The major enhancements in LAM 6.1 are:
- client-to-client MPI communication that utilizes shared memory
in conjunction with TCP/IP
- an implementation of MPI-2 dynamic processes
- improved UNIX standard I/O capability from all nodes
Multi-protocol Communication
MPI processes that reside on the same machine communicate via
shared memory. MPI processes that reside on different machines
communicate via TCP/IP. This is now the default behaviour of
LAM/MPI "client-to-client" communication, enabled by the -c2c
option of mpirun(1).
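For illustration, a client-to-client job might be launched as shown
below; the node specification and program name are placeholders, and
mpirun(1) documents the exact syntax.

    mpirun -c2c n0-3 my_mpi_app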
A basic shared memory communication path, based on System V IPC,
is available for all supported machines. In addition, some machines
offer faster alternatives for either the shared memory or the
shared memory locks, or both.
- Sun Solaris: Solaris semaphores
- SGI IRIX: SGI shared arenas, SGI locks
- HP HPUX: HPUX System V shared memory, HPUX user level locks
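As an illustration of the primitives the basic path builds on, the
sketch below creates a System V shared memory segment and guards it
with a System V semaphore used as a lock. The segment size, message
layout, and cleanup are illustrative assumptions, not LAM's internal
transport code.

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/sem.h>

    /* POSIX leaves this union for the application to define. */
    union semun { int val; struct semid_ds *buf; unsigned short *array; };

    int main(void)
    {
        /* One private shared memory segment and one semaphore lock. */
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
        if (shmid < 0 || semid < 0) { perror("shmget/semget"); return 1; }

        char *buf = (char *) shmat(shmid, NULL, 0);   /* map the segment */

        union semun arg;
        arg.val = 1;                                  /* lock starts free */
        semctl(semid, 0, SETVAL, arg);

        struct sembuf lock   = { 0, -1, 0 };          /* P: acquire */
        struct sembuf unlock = { 0, +1, 0 };          /* V: release */

        /* "Send": copy a message into the segment while holding the
           lock; a peer attached to the same segment would read it the
           same way on the receive side. */
        semop(semid, &lock, 1);
        strcpy(buf, "hello through System V shared memory");
        semop(semid, &unlock, 1);
        printf("%s\n", buf);

        /* Detach and remove the IPC objects. */
        shmdt(buf);
        shmctl(shmid, IPC_RMID, NULL);
        semctl(semid, 0, IPC_RMID, arg);
        return 0;
    }

The machine-specific alternatives listed above substitute faster
mechanisms for the segment, the lock, or both.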
MPI-2 Dynamic Processes
LAM 6.1 includes an implementation of the dynamic processes chapter
of the SC96 draft of the MPI-2 standard.
Both parent/child spawning and client/server rendezvous are supported.
The capability is described in the MPI-2 document, the LAM 6.1
documentation, and the LAM 6.1 manual pages.
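As a hedged illustration of the client/server rendezvous, the sketch
below uses the names that later appeared in the published MPI-2
standard (MPI_Open_port, MPI_Comm_accept, MPI_Comm_connect); LAM 6.1
follows the SC96 draft, so its bindings may differ, and the LAM 6.1
manual pages are the authoritative reference. The executable name and
command-line handling are illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    /* Run one copy with the argument "server" and a second copy with
       the port name printed by the server. */
    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm inter;

        MPI_Init(&argc, &argv);

        if (argc > 1 && strcmp(argv[1], "server") == 0) {
            MPI_Open_port(MPI_INFO_NULL, port);     /* obtain a port name */
            printf("port: %s\n", port);
            MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
            MPI_Close_port(port);
        } else if (argc > 1) {
            MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        } else {
            fprintf(stderr, "usage: rendezvous server | rendezvous <port>\n");
            MPI_Finalize();
            return 1;
        }

        /* 'inter' is now an intercommunicator joining client and server. */
        MPI_Comm_disconnect(&inter);
        MPI_Finalize();
        return 0;
    }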
It is possible to start singleton MPI applications from the shell
(without mpirun(1)) and then start other processes via MPI_Spawn(2).
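A minimal sketch of singleton start plus spawning appears below. It
uses the function names that were later standardized in MPI-2
(MPI_Comm_get_parent, MPI_Comm_spawn); since LAM 6.1 implements the
SC96 draft, the exact calling sequence is documented in the LAM 6.1
manual pages such as MPI_Spawn(2). The process count and the use of
argv(0) are illustrative.

    #include <stdio.h>
    #include <mpi.h>

    /* Started as a singleton from the shell, this program spawns two
       more copies of itself and becomes their parent. */
    int main(int argc, char **argv)
    {
        MPI_Comm parent, children;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);

        if (parent == MPI_COMM_NULL) {
            /* No parent: this is the original singleton process. */
            MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                           MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);
            printf("parent: spawned 2 children\n");
        } else {
            /* A spawned copy: the parent is reachable through 'parent'. */
            printf("child: hello from a spawned process\n");
        }

        MPI_Finalize();
        return 0;
    }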
PVM programs with a similar design should port easily to MPI.
LAM's dynamic node and fault tolerance features, in conjunction
with MPI dynamic processes, make a good environment for developing
load balancing and scheduling systems.
UNIX Standard I/O
LAM 6.1 enables all processes, local and remote, to write to
standard output and error, with the data appearing on the terminal
and node where mpirun(1) was invoked. This is accomplished via
the scalable LAM daemon. Standard input is available to local
processes, which also have their current working directory set
to match the mpirun(1) invocation. An application's standard I/O
can be redirected by using shell redirection with mpirun(1).
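For example, the short program below (the names are illustrative)
prints one line per process; with LAM 6.1 every line, including those
from remote nodes, appears on the terminal where mpirun(1) was
invoked and can be captured with an ordinary shell redirect.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Every process, local or remote, writes to standard output;
           the LAM daemon routes the data back to mpirun(1). */
        printf("hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

A command line such as

    mpirun n0-3 hello > hello.log

(the node specification and file names are placeholders) collects the
output of all processes in hello.log.
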
Ohio Supercomputer Center, lam@tbag.osc.edu, http://www.osc.edu/lam.html