Using MPI

Parallelism on the Condo Cluster is obtained by using MPI. All accounts are set up so that MPI uses the high-performance InfiniBand communication network. One can use either the Intel MPI library (module load intel) or OpenMPI (module load openmpi).

To compile Fortran MPI programs, use mpiifort with Intel MPI or mpif90 with OpenMPI. To compile C and C++ MPI programs, use mpiicc and mpiicpc with Intel MPI, or mpicc and mpicxx with OpenMPI.
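
For example, assuming a source file named hello.f90 or hello.c (the file and executable names here are only placeholders), the compile lines might look like:

module load intel
mpiifort hello.f90 -o hello_f    # Fortran with Intel MPI
mpiicc hello.c -o hello_c        # C with Intel MPI

or, with the OpenMPI module loaded instead:

module load openmpi
mpif90 hello.f90 -o hello_f      # Fortran with OpenMPI
mpicc hello.c -o hello_c         # C with OpenMPI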

Use the Slurm Job Script Generator to create a script to submit to the Slurm Workload Manager. Remember that there are 16 processors per node, so a 32-processor job needs only 2 nodes.
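
For example, the resource request for a 32-process job might contain the following directives (a sketch; the exact lines produced by the generator may differ):

#SBATCH --nodes=2               # 2 nodes ...
#SBATCH --ntasks-per-node=16    # ... times 16 processors per node = 32 MPI processes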

In the script, add the module commands, e.g.:

module purge
module load intel

and the mpirun command, e.g.:

mpirun -np 32 ./a.out

to start 32 MPI processes. Some OpenMPI modules may not have the mpirun command available; in that case, use the orterun command instead:

orterun -np 32 ./a.out

to start 32 MPI processes.
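
Putting these pieces together, a complete job script for a 32-process Intel MPI run might look roughly like the sketch below (the job name, time limit, and output file name are placeholders to adjust):

#!/bin/bash
#SBATCH --job-name=mpi_job          # placeholder job name
#SBATCH --nodes=2                   # 16 processors per node
#SBATCH --ntasks-per-node=16        # 2 x 16 = 32 MPI processes
#SBATCH --time=01:00:00             # placeholder time limit
#SBATCH --output=mpi_job.%j.out     # placeholder output file name

module purge
module load intel

mpirun -np 32 ./a.out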

Make sure that the executable (a.out in the example above) resides in one of the following locations:
       /home/user     (where 'user' is your user name)
       /work/group    (where 'group' is your group name, issue 'groups' to find it out)
       /ptmp
All these locations are mounted on each of the compute nodes. Don't place the executable in the local filesystem (/tmp), as each node has its own /tmp. Files placed into /tmp on the front-end node won't be available on the compute nodes, so mpirun won't be able to start processes there.

One can use the storage on the local disk drive of each compute node by reading and writing to $TMPDIR. This is temporary storage that can be used only during the execution of your program. Only processes running on a node have access to that node's disk drive. Since the 16 processors on a node share this storage, you must include the rank of the executing MPI process in the names of any files it reads and writes in $TMPDIR. The size of $TMPDIR is about 2.5 TB on the regular compute nodes.
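
Inside the program, per-rank file names are usually built from the rank returned by MPI_Comm_rank. As a rough shell-level sketch of the same idea, a wrapper script can be launched by mpirun in place of the bare executable; the rank environment variables (OMPI_COMM_WORLD_RANK for OpenMPI, PMI_RANK for Intel MPI), the MY_SCRATCH variable, and the file and destination names below are assumptions to verify against your MPI module:

#!/bin/bash
# run_rank.sh -- hypothetical per-rank wrapper, started as:  mpirun -np 32 ./run_rank.sh
# The MPI launcher usually exports the process rank to the environment
# (OpenMPI: OMPI_COMM_WORLD_RANK, Intel MPI's Hydra: PMI_RANK); fall back to 0.
RANK=${OMPI_COMM_WORLD_RANK:-${PMI_RANK:-0}}

# Give each rank its own scratch directory under the node-local $TMPDIR,
# so the 16 ranks sharing a node never write to the same file.
SCRATCHDIR=$TMPDIR/rank$RANK
mkdir -p "$SCRATCHDIR"

# a.out is assumed to read the MY_SCRATCH variable (or a command-line
# argument) to decide where to put its temporary files.
MY_SCRATCH="$SCRATCHDIR" ./a.out

# Copy anything worth keeping back to shared storage before the job ends;
# the file name and destination below are placeholders.
cp "$SCRATCHDIR"/output.dat /work/group/output.rank$RANK.dat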