Intel MPI and OpenMPI Guide

MPI versions

There are many different versions of MPI available on HPC.

Which version of MPI to use depends heavily on which compilers the code in question supports and was built against.

Intel MPI

Intel provides a toolkit that encompasses their built-in compilers and MPI functionality.

The toolkit is called oneAPI.

There is an HPC extension to the Intel oneAPI suite that is installed along with the base oneAPI suite on HPC.

To view the versions of the Intel oneAPI available on HPC, the following command can be entered:

module avail intel

---------------------------------------------------------------- /cm/shared/modulefiles ----------------------------------------------------------------
intel/oneapi/2021.1.0    intel/oneapi/2022.3      intel/oneapi/2024.0      intel/oneapi/2024.1.0
intel/oneapi/2024.2.1    intelics/2016.3-full     intelics/2017

HPC updates the Intel oneAPI suite regularly when new releases come out, so the above list can change.

Any of the modules in the list can be loaded to compile code with Intel's built-in MPI suite.

To load one of the versions, enable the Intel oneAPI environment, and build code using Intel's built-in MPI functionality, the following examples can be used:

MPI C:

module load intel/oneapi/2024.2.1
source /gpfs/sharedfs1/admin/hpc2.0/apps/intel/oneapi/2024.2.1/setvars.sh
mpiicx CCodehere -o nameOfExecutableHere

MPI CXX:

module load intel/oneapi/2024.2.1
source /gpfs/sharedfs1/admin/hpc2.0/apps/intel/oneapi/2024.2.1/setvars.sh
mpiicpx CXXCodehere -o nameOfExecutableHere

MPI Fortran:
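Following the same pattern as the C and C++ examples above, a Fortran build would presumably use the mpiifx wrapper (the ifx-based counterpart to mpiicx and mpiicpx); the source-file and executable names below are placeholders:

module load intel/oneapi/2024.2.1
source /gpfs/sharedfs1/admin/hpc2.0/apps/intel/oneapi/2024.2.1/setvars.sh
mpiifx FortranCodehere -o nameOfExecutableHere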

Note, however, that Intel's built-in MPI suite tends to run slower on HPC's AMD hardware than OpenMPI does.

If a piece of software uses a custom Makefile that needs the Intel Fortran option passed through the F90 flag, the following line would need to be entered:
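A reasonable sketch, assuming the Makefile selects its Fortran compiler through an F90 variable and that Intel's mpiifx wrapper is the one wanted:

F90 = mpiifx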

Job Submission Script Example:
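A minimal sketch of what such a submission script might look like, assuming the cluster uses Slurm; the partition, resource requests, and executable name are placeholders, and the export and source lines are the ones discussed below:

#!/bin/bash
#SBATCH --ntasks=8                # placeholder: number of MPI tasks
#SBATCH --nodes=1                 # placeholder: node count
#SBATCH --partition=general       # placeholder: partition name

module load intel/oneapi/2024.2.1
source /gpfs/sharedfs1/admin/hpc2.0/apps/intel/oneapi/2024.2.1/setvars.sh

# Tell UCX which transport layers are available (see the notes below)
export UCX_TLS=tcp,self,sysv,posix

mpirun -np 8 ./nameOfExecutableHere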

The -np 8 option in the mpirun command example above can change depending on how many MPI tasks are requested for the job.

The export UCX_TLS=tcp,self,sysv,posix line tells Intel MPI which UCX transport layers are available so that it can use the first one that works.

Without the export command, Intel's built-in MPI can potentially crash because it does not know which transport layers are available and cannot determine which back-end fabric to use.

The export line above can be commented out or added back depending on whether the specific code in question has problems running on HPC.

The source /gpfs/sharedfs1/admin/hpc2.0/apps/intel/oneapi/2024.2.1/setvars.sh line sets up the Intel oneAPI environment so that MPI can be called successfully.

Without the source line above, Intel's built-in MPI will not know which components are available or which ones to use for the PML framework.

OpenMPI

OpenMPI is an open source High Performance Message Passing Library.

OpenMPI is a great message passing interface that allows compiled code or programs to run across multiple nodes on HPC.

OpenMPI also offers some functionality benefits over other MPI software.

To view the versions of OpenMPI available on HPC, the following command can be entered:
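By analogy with the Intel section above, the listing command would be:

module avail openmpi

(The exact list returned will vary as module versions are added and removed.)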

There are many versions of OpenMPI available on HPC depending on the needs of the user.

The OpenMPI versions with the -ics extension refer to OpenMPI versions that were built with the Intel Compiler Suite.

The OpenMPI versions without the -ics extension were built with the GNU compilers (gcc, g++, gfortran) in mind.

It is recommended to use the latest openmpi/5.0.5 versions to avoid potential library and compatibility issues associated with the older versions.

The procedure to build code with MPI functionality is almost the same as in the Intel built-in MPI section above, but the MPI compiler wrapper commands change.
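The examples below are sketches that assume the openmpi/5.0.5 module and OpenMPI's standard compiler wrappers (mpicc, mpicxx, mpifort); the source-file and executable names are placeholders.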

MPI C:
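module load openmpi/5.0.5
mpicc CCodehere -o nameOfExecutableHere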

MPI CXX:
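module load openmpi/5.0.5
mpicxx CXXCodehere -o nameOfExecutableHere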

MPI Fortran:
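module load openmpi/5.0.5
mpifort FortranCodehere -o nameOfExecutableHere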

Job Submission Script Example:
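A minimal sketch of an OpenMPI submission script, again assuming the cluster uses Slurm; the partition, resource requests, and executable name are placeholders:

#!/bin/bash
#SBATCH --ntasks=8                # placeholder: number of MPI tasks
#SBATCH --nodes=1                 # placeholder: node count
#SBATCH --partition=general       # placeholder: partition name

module load openmpi/5.0.5

mpirun -np 8 ./nameOfExecutableHere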

Basic Hello World MPI code example
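A minimal sketch of a Hello World MPI program in C; it uses only the standard MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize calls and can be compiled with any of the MPI C wrappers shown above (the file name is a placeholder):

/* hello_mpi.c - each MPI task prints its rank and the total task count */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    /* Start up the MPI runtime */
    MPI_Init(&argc, &argv);

    /* Find this task's rank and the total number of tasks */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello world from rank %d of %d\n", rank, size);

    /* Shut down the MPI runtime */
    MPI_Finalize();
    return 0;
}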