MPI

Documentation

Implementations

Manual installation

Install each MPI implementation in its own subfolder, $HOME/SUBDIR, because you may need several MPI implementations at once (see Libraries).
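
A typical source build along these lines (archive name, version, and prefixes are placeholders, not the cluster's actual layout):

 tar xzf mpich-X.Y.tar.gz          # hypothetical MPICH source archive
 cd mpich-X.Y
 ./configure --prefix=$HOME/mpich  # install under its own subfolder
 make
 make install
 # repeat with a different --prefix (e.g. $HOME/openmpi) for each implementation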

Tips & Tricks

  • For safe consecutive communications, create a new context, for example:
int communication_operation(MPI_Comm comm) {
    MPI_Comm newcomm;
    /* Duplicate the communicator: newcomm carries a private context,
       so messages here cannot collide with traffic on comm */
    MPI_Comm_dup(comm, &newcomm);
    /* ... work with newcomm ... */
    MPI_Comm_free(&newcomm);
    return MPI_SUCCESS;
}

Mind the overhead of MPI_Comm_dup and MPI_Comm_free (see the sketch after this list).

  • If you are having trouble with the multi-homed nature of the HCL Cluster, check here
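
One way to keep the duplication overhead off the critical path is to duplicate the communicator once, reuse its private context for many operations, and free it at shutdown. A minimal sketch (the lib_* names are illustrative, not an existing API):

/* Minimal sketch: duplicate once, reuse many times, free once */
#include <mpi.h>
#include <stdio.h>

static MPI_Comm lib_comm = MPI_COMM_NULL;   /* the "library's" private context */

void lib_init(MPI_Comm comm) { MPI_Comm_dup(comm, &lib_comm); }
void lib_finalize(void)      { MPI_Comm_free(&lib_comm); }

/* Communication on lib_comm cannot collide with user messages on comm */
void lib_broadcast(int *value) { MPI_Bcast(value, 1, MPI_INT, 0, lib_comm); }

int main(int argc, char *argv[]) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    lib_init(MPI_COMM_WORLD);           /* pay for MPI_Comm_dup once */

    if (!rank) value = 42;
    lib_broadcast(&value);              /* ...then reuse lib_comm freely */
    printf("rank %d: value = %d\n", rank, value);

    lib_finalize();                     /* pay for MPI_Comm_free once */
    MPI_Finalize();
    return 0;
}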

Debugging

  • Add the following code near the start of your program (after MPI_Init):
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (!rank)
    getc(stdin);                  /* rank 0 blocks until you press Enter */
MPI_Barrier(MPI_COMM_WORLD);      /* all other ranks wait here meanwhile */
  • Compile your code with the -g option
  • Run the parallel application
  • Attach to the process(es) from GDB (see the sketch after this list)
    • MPICH-1 starts a background process for each application process (0, 0b, 1, 1b, ...); attach to the former (0, 1, ...), not to the b-processes.
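
A typical attach sequence, once the application is blocked at getc(stdin) (the PID is a placeholder):

 ps aux | grep ./executable   # find the PID of the application process
 gdb -p <PID>                 # attach, set breakpoints, then 'continue'

Then press Enter in the application's terminal so rank 0 leaves getc(stdin) and all ranks pass the barrier.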

Profiling

Paraver, by the Barcelona Supercomputing Center, is "a flexible performance visualization and analysis tool".

Download it from here. Use Extrae to create the trace files, for example:

 mpirun -np 3 ~/bin/trace.sh ./executable

Where trace.sh is a script along the following lines (the @sub_...@ tokens are unexpanded build-time placeholders; substitute your actual MPI, PAPI, and libunwind installation paths):

#!/bin/bash
## Point Extrae at its installation and configuration file
export EXTRAE_HOME=$HOME
export EXTRAE_CONFIG_FILE=$HOME/bin/extrae.xml
## Make the Extrae, MPI, PAPI and libunwind libraries visible
export LD_LIBRARY_PATH=${EXTRAE_HOME}/lib:@sub_MPI_HOME@/lib:@sub_PAPI_HOME@/lib:@sub_UNWIND_HOME@/lib:$LD_LIBRARY_PATH
## Preload Extrae's MPI tracing library so MPI calls are intercepted
export LD_PRELOAD=${EXTRAE_HOME}/lib/libmpitrace.so

## Run the desired program with its original arguments
"$@"
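
After the run, Extrae leaves intermediate trace files that must be merged into a Paraver trace with mpi2prv. A typical merge step (file names follow Extrae's defaults; adjust to your setup):

 ${EXTRAE_HOME}/bin/mpi2prv -f TRACE.mpits -o trace.prv

The resulting trace.prv can then be opened in Paraver.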