MPI
Documentation
Implementations
Manual installation
Install in a separate subfolder $HOME/SUBDIR, because you may need several MPI implementations (see Libraries).
Tips & Tricks
- For safe consecutive communications, create a new communication context by duplicating the communicator, for example:
int communication_operation(MPI_Comm comm) {
    MPI_Comm newcomm;
    MPI_Comm_dup(comm, &newcomm);   /* new context, same group of processes */
    ... // work with newcomm
    MPI_Comm_free(&newcomm);        /* release the duplicate */
    return MPI_SUCCESS;
}
Mind the overhead of MPI_Comm_dup and MPI_Comm_free. A complete sketch of this pattern is given after this list.
- If you are having trouble with the multi-homed nature of the HCL Cluster, check here
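A minimal, self-contained sketch of the communicator-duplication pattern from the first tip. The routine name exchange_with_neighbour and the ring exchange it performs are illustrative assumptions, not part of the original tip; the point is that the routine's traffic happens on its own context and cannot be matched by messages pending on the caller's communicator.

#include <mpi.h>
#include <stdio.h>

/* Illustrative library-style routine: communicates on a private context. */
static int exchange_with_neighbour(MPI_Comm comm, int *value)
{
    MPI_Comm newcomm;
    int rank, size, right, left;

    MPI_Comm_dup(comm, &newcomm);      /* new context, same group */
    MPI_Comm_rank(newcomm, &rank);
    MPI_Comm_size(newcomm, &size);
    right = (rank + 1) % size;         /* simple ring exchange */
    left  = (rank - 1 + size) % size;

    MPI_Sendrecv_replace(value, 1, MPI_INT, right, 0,
                         left, 0, newcomm, MPI_STATUS_IGNORE);

    MPI_Comm_free(&newcomm);           /* release the duplicate */
    return MPI_SUCCESS;
}

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    value = rank;
    exchange_with_neighbour(MPI_COMM_WORLD, &value);
    printf("rank %d received %d\n", rank, value);
    MPI_Finalize();
    return 0;
}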
Debugging
- Add the following code:
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (!rank)
    getc(stdin);                /* rank 0 waits for Enter on stdin */
MPI_Barrier(MPI_COMM_WORLD);    /* all other processes wait here */
- Compile your code with the -g option
- Run the parallel application
- Attach to the process(es) from GDB
- MPICH-1 runs a background process for each application process (0, 0b, 1, 1b, ...), so attach to the first of each pair (0, 1, ...).
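A minimal sketch of the whole workflow, assuming an illustrative source file debug_wait.c launched with mpirun (file name, process count, and PID are placeholders, not taken from the original):

/* debug_wait.c -- sketch of the attach-and-wait debugging pattern.
 *
 *   mpicc -g -o debug_wait debug_wait.c    (compile with debug symbols)
 *   mpirun -np 4 ./debug_wait              (run the parallel application)
 *   gdb -p <PID>                           (attach GDB to the chosen process;
 *                                           find the PID with ps or top)
 *
 * After attaching, press Enter in the terminal from which mpirun was
 * started (stdin is normally forwarded to rank 0), so every process
 * leaves the barrier and continues under the debugger.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (!rank)                      /* rank 0 blocks on stdin ...        */
        getc(stdin);
    MPI_Barrier(MPI_COMM_WORLD);    /* ... everyone else waits here      */

    /* ... rest of the application, now debuggable ... */

    MPI_Finalize();
    return 0;
}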
Profiling
Paraver, by the Barcelona Supercomputing Center, is "a flexible performance visualization and analysis tool".