Introduction
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics simulator.
Web Page
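https://www.lammps.org/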
Job Scripts
The value of $SLURM_NTASKS is worked out by SLURM from the resources requested at submission, so changing the --ntasks (or --nodes) request is reflected in the script automatically. This script submits to the compute partition and assumes a 10-hour run time; adapt as required.
#!/bin/bash -l
#SBATCH --time=10:00:00
#SBATCH --job-name={some_lammps_job}
#SBATCH --account={replace_this_with_your_project_account}
#SBATCH --partition=compute
module purge
module load lammps/version/flavour
# one MPI rank per core requested at submission
mpirun -np $SLURM_NTASKS lmp_mpi < {some_input_file}
Here version is replaced by a version name (e.g. 17Nov16) and flavour by a flavour number (e.g. 2).
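For example, to use flavour 2 of the 17Nov16 build described below, the line becomes
module load lammps/17Nov16/2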
Submit the job with
sbatch --partition=compute --ntasks={some_number} myjob.sh
where {some_number} is the required level of parallelism. SLURM will select the number of nodes required for the job based on the number of cores requested (one MPI rank per core) and the number of cores on the nodes in that partition.
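For example, a hypothetical 96-core run would be submitted with
sbatch --partition=compute --ntasks=96 myjob.sh
and SLURM would then place the 96 ranks on however many compute nodes are needed (e.g. four nodes if the partition's nodes have 24 cores each; check the actual node specification for the partition).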
Versions
17Nov16
Flavours
1
Architecture
x86_64
Group
module load site-local
Build Details
MPI build, lmp_mpi.
Compiled with:
- intel/compilers/64/2017.1.132
- intel/mkl/64/2017.1.132
- intel/mpi/64/2017.1.132
- libjpeg/9a/1
- gcc/4.9.3
- zlib/1.2.11/1
- python/2.7.12
- -DLAMMPS_GZIP
- -DLAMMPS_MEMALIGN=64
- No additional libraries.
- Internal FFTW.
- No PNG or JPEG support.
- No Python wrappers.
- No AVX2.
Usage
module load lammps/17Nov16/1
NB: set
export OMP_NUM_THREADS=1
to avoid mixed-mode parallelism.
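For reference, a minimal sketch of the relevant lines in a job script for this flavour (the input file name is a placeholder):
module purge
module load lammps/17Nov16/1
export OMP_NUM_THREADS=1
mpirun -np $SLURM_NTASKS lmp_mpi < {some_input_file}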
Example Job Scripts
None yet.
Status
Untested.
2
Architecture
x86_64
Group
module load site-local
Build Details
This uses an outdated MLNX_OFED and should not be used.
MPI build, lmp_mpi.
Compiled with:
- intel/compilers/2017.1.132
- intel/mkl/2017.1.132
- openmpi/ofed/icc17/2.0.1
- libjpeg/9a/1
- gcc/4.9.3
- zlib/1.2.11/1
- python/2.7.12
- Internal FFTW used (the MKL FFTW does not appear to confer an advantage).
- No PNG or JPEG support.
- CFLAGS=-DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -xCORE-AVX2 -DMKL_ILP64 -mkl=parallel -qopenmp
Compiled with the following additional libraries:
- asphere
- body
- class2
- colloid
- coreshell
- dipole
- granular
- kokkos
- kspace
- manybody
- mc
- misc
- molecule
- mpiio
- peri
- qeq
- replica
- rigid
- shock
- snap
- srd
- user-cg-cmm
- user-diffraction
- user-dpd
- user-drude
- user-eff
- user-fep
- user-intel
- user-lb
- user-manifold
- user-mgpt
- user-molfile
- user-nc-dump
- user-phonon
- user-qtb
- user-reaxc
- user-smtbq
- user-sph
- user-tally
Usage
module load lammps/17Nov16/2
NB: set
export OMP_NUM_THREADS=1
to avoid mixed-mode parallelism.
Example Job Scripts
None yet.
Status
Passes most of the test cases that are relevant to the above packages and for which the data is available.
5
Architecture
x86_64
Group
module load site-local
This is strictly experimental.
Build Details
MPI build, lmp_mpi.
Compiled with:
- intel/compilers/2016.3.210
- intel/mkl/2016.3.210
- intel/mpi/2016.3.210
- libjpeg/9a/1
- gcc/4.9.3
- zlib/1.2.11/1
- python/2.7.12
- Internal FFTW used (the MKL FFTW does not appear to confer an advantage).
- No PNG or JPEG support.
- -DLAMMPS_GZIP
- -DLAMMPS_MEMALIGN=64
- No Python wrappers.
- -xCORE-AVX2 -DMKL_ILP64 -mkl=parallel -qopenmp
- No -ip or -ipo.
Compiled with the following additional libraries:
- asphere
- body
- class2
- colloid
- coreshell
- dipole
- granular
- kokkos
- kspace
- manybody
- mc
- misc
- molecule
- mpiio
- peri
- qeq
- replica
- rigid
- shock
- snap
- srd
- user-cg-cmm
- user-diffraction
- user-dpd
- user-drude
- user-eff
- user-fep
- user-intel
- user-lb
- user-manifold
- user-mgpt
- user-molfile
- user-nc-dump
- user-phonon
- user-qtb
- user-reaxc
- user-smtbq
- user-sph
- user-tally
Usage
module load lammps/17Nov16/5
NB: to avoid mixed-mode parallelism, set
export OMP_NUM_THREADS=1
Example Job Scripts
None yet.
Status
Untested.