Ask.Cyberinfrastructure

Cgroups with MPI process affinity

parallelization

#1

Hi all,

We have recently upgraded to using Torque with cgroups. We have been happy with cgroups in general; however, I have recently found that MPI process affinity usually does nothing to improve run times and sometimes slows down certain MPI applications. I assume this is due to process affinity interfering with cgroups(?).

My question is: since the MPI implementations we support here at W&M (IntelMPI, Mvapich2 and OpenMPI) all enable some sort of process binding by default, do you somehow disable MPI process affinity system-wide, do you simply warn your users to investigate the effect of process affinity, or do you not see any effect of process affinity + cgroups?
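
For reference, here is a minimal sketch of the kind of check that shows what the launcher actually did. It assumes a Linux system where sched_getaffinity is available and an MPI C compiler wrapper such as mpicc; the file name is only illustrative. Each rank prints the CPUs it is allowed to run on, which reflects both the cgroup cpuset and any binding applied by the MPI launcher.

    /* affinity_check.c (illustrative name): print the CPUs each MPI rank
     * may run on.  Build with e.g. "mpicc affinity_check.c -o affinity_check". */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char host[256] = "";
        gethostname(host, sizeof(host) - 1);

        /* The kernel's affinity mask reflects both the cgroup cpuset set up
         * by the batch system and any binding the MPI launcher applied. */
        cpu_set_t mask;
        CPU_ZERO(&mask);
        sched_getaffinity(0, sizeof(mask), &mask);

        /* Collect the allowed CPU numbers into a printable list. */
        char cpus[4096] = "";
        for (int c = 0; c < CPU_SETSIZE; c++) {
            if (CPU_ISSET(c, &mask)) {
                char buf[16];
                snprintf(buf, sizeof(buf), "%d ", c);
                strncat(cpus, buf, sizeof(cpus) - strlen(cpus) - 1);
            }
        }

        printf("rank %d on %s: allowed CPUs = %s\n", rank, host, cpus);

        MPI_Finalize();
        return 0;
    }

Running it once with the default launch line and once with binding disabled makes it easy to see whether ranks are pinned to individual cores or left free within the job's cpuset.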

Just wondering what the common wisdom is, since Google searches and the MPI websites don't seem to address the effect of cgroups on process affinity.

Thanks for any information that can be shared.

Regards,

Eric


#2

Actually, upon further research:

  1. cgroups are not really involved; the effect of affinity seems to happen outside of Torque.

  2. The effect of process affinity seems to be minor for the other codes I have tested. The one benchmark that is quite sensitive to affinity and process placement is lu-mz from the NAS Parallel Benchmarks suite (https://www.nas.nasa.gov/publications/npb.html).

  3. Also, I have found one sub-cluster that yields much better run times without process affinity enforced; however, this doesn't happen on any other machines in our cluster. Probably something sub-optimal about that configuration.

So, the bottom line seems to be that each code needs to be tested separately to determine whether it benefits from process affinity. Also, my assumption that cgroups were involved was just confusion on my part.
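
For anyone wanting to run the same comparison, the usual per-run switches for turning binding off are "mpirun --bind-to none" for OpenMPI, "I_MPI_PIN=off" for IntelMPI, and "MV2_ENABLE_AFFINITY=0" for Mvapich2; launching a code both ways is the quickest way to see whether affinity helps, hurts, or makes no difference for that particular application.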

Eric