Standard way(s) of launching an MPI executable program?

I want to poll the community to gather knowledge on one very basic thing: what is the standard way of launching an MPI executable program? Is there one that is universally accepted across different clusters, different job schedulers, different hardware platforms, etc.? The MPI standard has specified mpiexec as the standardized name for launching MPI codes since version 2.0 (see, e.g., standard version 4.1, sec. 12.5, “Portable MPI Process Startup”).

Background: at present there are many different ways to launch MPI programs, and although mpiexec is supposed to be the standard, implementors and site operators do not always support this standard command name.

When I first learned MPI, the command mpirun was the prevailing way on many clusters (read: the so-called “Linux Beowulf” clusters). I learned afterwards that there are many variations, e.g. mpiexec, srun, and others.

Does anyone have insight into these varying ways of launching MPI jobs? Do we even have a common way? I want to use this information to build a novice-level MPI lesson that applies to as many HPC clusters as possible, so that best practice becomes the first thing learners remember when they have to invoke their own MPI-parallel codes.
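For concreteness, here is a minimal sketch of the two most common launch forms; the program name `./my_app` and the process count are placeholders, and exact flag spellings vary by implementation:

```shell
# Standardized launcher name per the MPI standard
# (sec. "Portable MPI Process Startup"); -n sets the number of processes:
mpiexec -n 4 ./my_app

# Widely used alternative provided by many implementations;
# on Open MPI it is equivalent to mpiexec:
mpirun -np 4 ./my_app
```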


This is indeed an interesting question.
mpiexec is supposed to be a standardized way of running an MPI application, and most if not all MPI implementations provide this command.
The OpenMPI manual pages state that mpiexec and mpirun are synonyms (soft links to exactly the same executable) and produce exactly the same behavior. Since many clusters use OpenMPI, the mpirun command is widely used there. However, for other MPI implementations this might not be true.
One would need to consult the documentation for each implementation. For Intel MPI, you can see this discussion.

srun is a SLURM-specific command designed to work with various MPI implementations under the SLURM scheduler: not only OpenMPI but also Intel MPI, MPICH, etc. The idea is to make any MPI application run optimally on SLURM-managed infrastructure. Details can be found in SLURM’s MPI documentation.
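As an illustration, a minimal SLURM batch script might look like the sketch below; the job name, resource counts, and program name `./my_app` are placeholder assumptions, not site-specific recommendations:

```shell
#!/bin/bash
#SBATCH --job-name=mpi-demo
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4

# srun reads the task count and placement from the scheduler,
# so no -n/-np flag or host file is needed here.
srun ./my_app
```

The script would be submitted with `sbatch`, after which SLURM allocates the nodes and srun launches one task per allocated slot.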

If you are introducing users to MPI programming, I would stick with the mpirun and mpiexec commands. If, however, you are teaching them how to run MPI applications on a SLURM-based cluster, then you may want to consider srun.

I’ll “second” ktrn’s response. mpirun and mpiexec are about as standard as they come, and SLURM installations use srun for better integration with the job scheduling system. These are the launch commands I used in the MPI training I used to give.

For mpirun and mpiexec I’d also include all the necessary options for setting the number of processes, the host file, etc. One of the advantages of srun is that SLURM takes care of all these options.
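As a sketch of what those explicit options look like, here is an Open MPI-style invocation; the node names and host file are hypothetical, and other implementations spell the option differently (e.g. MPICH’s Hydra mpiexec uses -f, Intel MPI uses -machinefile):

```shell
# Hypothetical host file: one node name per line
printf 'node01\nnode02\n' > hosts.txt

# Open MPI style: explicit process count and host file
mpirun -np 8 --hostfile hosts.txt ./my_app
```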
