Introduction
The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from across the High Performance Computing community to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers, and computer science researchers.
Use With Intel Compilers
To use Open MPI with the Intel compilers, you need to tell the Open MPI compiler wrappers which underlying Intel compilers to use via environment variables, after loading the appropriate Intel compiler module.
| Language | Compiler setting |
|---|---|
| C | OMPI_CC=icc |
| C++ | OMPI_CXX=icpc |
| Fortran | OMPI_FC=ifort |
Note that Intel MPI is recommended over Open MPI where possible.
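For example, a minimal sketch of building an MPI program with the 2017 Intel compilers and the icc17 flavour listed below (the Intel compiler module name and the source file names are illustrative):

```bash
# Load the group, the Intel compilers, and the matching Open MPI build.
# "intel/2017" is a placeholder; use the Intel module name on your system.
module load cv-standard
module load intel/2017
module load openmpi/ofed/icc17/2.0.1

# Point the Open MPI compiler wrappers at the Intel compilers.
export OMPI_CC=icc
export OMPI_CXX=icpc
export OMPI_FC=ifort

# The wrappers now invoke the Intel compilers underneath.
mpicc   -O2 -o hello_c hello.c
mpifort -O2 -o hello_f hello.f90
```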
Web Page
https://www.open-mpi.org/
Versions
2.0.2
Flavours
gcc54
Architecture
ppc64le
Group
module load cv-ppc64le
2.0.1
Flavours
gcc54
Architecture
ppc64le
Group
module load cv-ppc64le
ofed/gcc49
Architecture
x86_64
Group
module load cv-standard
Version for use with GCC compilers.
Build Details
Installed by Cluster Vision.
Usage
module load openmpi/ofed/gcc49/2.0.1
Example Job Scripts
None required.
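For reference, a minimal sketch of a Slurm batch script using this module (the job name, resource requests, and executable name are placeholders; adjust them for your cluster):

```bash
#!/bin/bash
#SBATCH --job-name=ompi-test      # placeholder job name
#SBATCH --nodes=2                 # resource requests are examples only
#SBATCH --ntasks-per-node=16

module load cv-standard
module load openmpi/ofed/gcc49/2.0.1

# mpirun picks up the task layout from the Slurm allocation.
mpirun ./my_mpi_program           # executable name is hypothetical
```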
Status
Tested.
ofed/icc16
Architecture
x86_64
Group
module load cv-standard
Version for use with the 2016 Intel compilers.
Build Details
Installed by Cluster Vision.
Usage
module load openmpi/ofed/icc16/2.0.1
Example Job Scripts
None required.
Status
Tested.
ofed/icc17
Architecture
x86_64
Group
module load cv-standard
Version for use with the 2017 Intel compilers.
Build Details
Installed by Cluster Vision.
Usage
module load openmpi/ofed/icc17/2.0.1
Example Job Scripts
None required.
Status
Tested.
3.0.1
This is available in a flavour for the base system compiler (GCC 4.8.5, no module load required) and in flavours for various other GCC compiler versions, for Fortran compatibility.
Not all Intel IMB tests run (see the example invocation after this list):
- One of the EXT tests has a bug and fails
- All flavours hang in the RMA tests at One_put_all
- All flavours hang in the IO tests at C_IWrite_Indv
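As a sketch, the affected Intel MPI Benchmarks suites can be run along the following lines from inside a multi-node job allocation (the flavour, process count, and benchmark binary paths are examples; see the flavours listed below):

```bash
module load site-local
module load openmpi/3.0.1/01          # one of the flavours listed below

# Process count and IMB binary locations are placeholders.
mpirun -np 32 ./IMB-EXT               # one EXT test is known to fail
mpirun -np 32 ./IMB-RMA One_put_all   # known to hang at One_put_all
mpirun -np 32 ./IMB-IO C_IWrite_Indv  # known to hang at C_IWrite_Indv
```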
Flavours
01
Architecture
x86_64
Group
module load site-local
Build Details
This is a binary distribution from Mellanox, and loads the version of Open MPI in /opt/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64/ompi-v3.0.x/
This currently only supports GCC 4.8.5 for linking Fortran code.
Usage
module load openmpi/3.0.1/01
Example Job Scripts
None required.
Status
Runs the Intel IMB tests over multiple nodes, apart from the known IMB issues listed above.
02
Architecture
x86_64
Group
module load site-local
Build Details
This is a build of vanilla Open MPI 3.0.1 with Mellanox platform enhancements, using other elements from the binary distribution in /opt/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64/, and /opt/mellanox/hcoll for compatibility.
This currently only supports GCC 4.9.3 for linking Fortran code.
Usage
module load openmpi/3.0.1/02
Example Job Scripts
None required.
Status
Runs the Intel IMB tests over multiple nodes, apart from the known IMB issues listed above.
However, MPI I/O performance is subpar (sometimes by a large margin) in some areas, likely due to HCOLL issues, so this version is not recommended.
03
Architecture
x86_64
Group
module load site-local
Build Details
This is a build of vanilla Open MPI 3.0.1 with Mellanox platform enhancements, using other elements from the binary distribution in /opt/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64/, and /opt/mellanox/hcoll for compatibility.
This currently only supports GCC 5.4.0 for linking Fortran code.
Usage
module load openmpi/3.0.1/03
Example Job Scripts
None required.
Status
Runs the Intel IMB tests over multiple nodes, apart from the known IMB issues listed above.
04
Architecture
x86_64
Group
module load site-local
Build Details
This is a build of vanilla Open MPI 3.0.1 with Mellanox platform enhancements, using other elements from the binary distribution in /opt/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64/, and /opt/mellanox/hcoll for compatibility.
This currently only supports GCC 6.3.0 for linking Fortran code.
Usage
module load openmpi/3.0.1/04
Example Job Scripts
None required.
Status
Runs the Intel IMB tests over multiple nodes, apart from the known IMB issues listed above.
1.10.4
Flavours
03
Architecture
x86_64
Group
module load site-local-unsupported
Build Details
- gcc/4.9.3
- hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64
- numactl/2.0.11/1
- binutils/2.29/1
- libxml2/2.9.4/1
- --with-knem=/opt/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64/knem
- --with-mxm=/opt/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64/mxm
- --with-hcoll=/opt/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64/hcoll
- --with-slurm
- --with-hwloc=internal
- --with-verbs=yes
- --enable-shared
- --enable-static
- --with-devel-headers
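As a sketch, these options correspond roughly to a configure invocation like the one below (the install prefix is a placeholder; only the options listed above are taken from the actual build):

```bash
# Hypothetical reconstruction of the configure step for this flavour.
HPCX=/opt/hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64
./configure --prefix=/path/to/openmpi/1.10.4/03 \
    --with-knem=${HPCX}/knem \
    --with-mxm=${HPCX}/mxm \
    --with-hcoll=${HPCX}/hcoll \
    --with-slurm \
    --with-hwloc=internal \
    --with-verbs=yes \
    --enable-shared \
    --enable-static \
    --with-devel-headers
make -j && make install
```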
Usage
module load openmpi/1.10.4/03
Example Job Scripts
None required.
Status
Untested.
1.10.2
Flavours
pgi2016
Architecture
ppc64le
Group
module load cv-ppc64le
3.0.0
Flavours
06
Architecture
x86_64
Group
module load site-local-unsupported
Build Details
- gcc/4.9.3
- hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64
- numactl/2.0.11/1
- binutils/2.29/1
- --with-knem=${HPCX_KNEM_DIR}
- --with-mxm=${HPCX_MXM_DIR}
- --with-hcoll=${HPCX_HCOLL_DIR}
- --with-ucx=${HPCX_UCX_DIR}
- --with-platform=contrib/platform/mellanox/optimized
- --with-slurm
- --with-hwloc=internal
- --with-verbs=yes
- --enable-shared
- --with-devel-headers
- --with-pmix
- --with-fca=${HPCX_FCA_DIR}
Usage
module load openmpi/3.0.0/06
Example Job Scripts
None required.
Status
Untested.
2.1.2
Flavours
09
Architecture
x86_64
Group
module load site-local-unsupported
Build Details
- gcc/4.9.3
- hpcx-v2.0.0-gcc-MLNX_OFED_LINUX-4.2-1.2.0.0-redhat7.3-x86_64
- numactl/2.0.11/1
- binutils/2.29/1
- --with-knem=${HPCX_HOME}/knem
- --with-mxm=${HPCX_HOME}/mxm
- --with-hcoll=/opt/mellanox/hcoll/
- --with-slurm
Usage
module load openmpi/2.1.2/09
Example Job Scripts
None required.
Status
Untested.