The new 7Aug19 staff-supported version of LAMMPS runs on CPUs and the newer RTX 2080 GPUs (NVIDIA).
The 22Aug2018 staff-supported version of LAMMPS runs on CPUs and most of the 60 GPUs (NVIDIA):
- The 4 newest GPUs: RTX 2080Ti
- The 48 older GPUs: M2070Q and M2090
- The 3 oldest GPUs: M2070 (1 GPU per node, instead of 2)
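The GPU class a job lands on is chosen with an LSF resource string; the names below are the ones used in the job examples that follow on this page:
-R "select[rtx2080]"            # the newer RTX 2080 GPUs (7Aug19 version)
-R "select[m2070q || m2090]"    # the older M2070Q/M2090 GPUs (22Aug2018 version)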
To test LAMMPS version 22Aug2018 on the GPUs interactively, do
bsub -Is -n 2 -W 10 -q gpu -R "select[m2070q || m2090]" -gpu "num=1:mode=shared:mps=no" tcsh
module load PrgEnv-intel/2016.0.109
source /usr/local/apps/cuda/cuda-8.0/cuda.csh
mpirun /usr/local/apps/lammps/lammps-22Aug2018/src/lmp_gpu -sf gpu < /usr/local/apps/lammps/lammps-22Aug2018/examples/accelerate/in.lj

To test lammps-22Aug2018 in batch mode, create "testGPU.bsub" with the following lines:
#! /bin/tcsh
#BSUB -n 2
#BSUB -W 20
#BSUB -q gpu
#BSUB -R "select[m2070q || m2090]"
#BSUB -gpu "num=1:mode=shared:mps=no"
#BSUB -o out.%J
#BSUB -e err.%J
module load PrgEnv-intel/2016.0.109
source /usr/local/apps/cuda/cuda-8.0/cuda.csh
mpirun /usr/local/apps/lammps/lammps-22Aug2018/src/lmp_gpu -sf gpu < /usr/local/apps/lammps/lammps-22Aug2018/examples/accelerate/in.lj
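Run this batch job with:
bsub < testGPU.bsub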
To test LAMMPS version 7Aug19 for GPUs in batch mode, create "testGPU.bsub" with the following lines (mps=yes enables the NVIDIA Multi-Process Service, which helps multi-process jobs run concurrently and faster on the newer GPUs: rtx2080, gtx1080, k20m):
#! /bin/tcsh
#BSUB -n 2
#BSUB -W 20
#BSUB -q gpu
#BSUB -R "select[rtx2080]"
#BSUB -gpu "num=1:mode=shared:mps=yes"
#BSUB -o out.%J
#BSUB -e err.%J
module load PrgEnv-intel/2018.2.199
module load cuda/10.1
mpirun /usr/local/apps/lammps/lammps-7Aug19/src/lmp_gpu-rtx2080 -sf gpu < /usr/local/apps/lammps/lammps-7Aug19/examples/accelerate/in.lj
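Run this batch job with:
bsub < testGPU.bsub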
To test the 7Aug19 CPU version interactively, do
bsub -Is -n 2 -W 10 tcsh
module load PrgEnv-intel/2018.2.199
mpirun /usr/local/apps/lammps/lammps-7Aug19/src/lmp_cpu < /usr/local/apps/lammps/lammps-7Aug19/examples/accelerate/in.lj

To do the same thing in batch mode, create "testCPU.bsub" with the following lines:
#! /bin/tcsh
#BSUB -n 2
#BSUB -W 10
#BSUB -o out.%J
#BSUB -e err.%J
module load PrgEnv-intel/2018.2.199
mpirun /usr/local/apps/lammps/lammps-7Aug19/src/lmp_cpu < /usr/local/apps/lammps/lammps-7Aug19/examples/accelerate/in.lj

Run this batch job with:
bsub < testCPU.bsub
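You can monitor the job with standard LSF commands, for example:
bjobs                # list your jobs and their job IDs
bpeek <jobid>        # peek at the stdout of a running job
When the job finishes, its standard output is in out.<jobid> and errors in err.<jobid> (from the -o/-e lines above).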
Users are encouraged to install and maintain their own version of LAMMPS in:
/usr/local/usrapps/projectid
where "projectid" is a directory name given to the project by the HPC administrator; a request must be made. (An alternative is to install it in /home/username, which takes up about 310 MB out of the 1000 MB of /home/username space.)
To install LAMMPS through conda, first set up conda:
module load conda
conda init tcsh
[log out then back in again]
Then do
conda config --add channels conda-forge
conda create --prefix /usr/local/usrapps/[your_path]/my_lammps_env
conda activate /usr/local/usrapps/[your_path]/my_lammps_env
conda install lammps

When installation is complete, you can test it with a batch job "test.bsub" that looks like:
#! /bin/tcsh
#BSUB -n 2
#BSUB -W 10
#BSUB -o out.%J
#BSUB -e err.%J
conda activate /usr/local/usrapps/[your_path]/my_lammps_env
mpirun lmp_mpi < /usr/local/apps/lammps/lammps-7Aug19/examples/accelerate/in.lj

Run this batch job with:
bsub < test.bsub
The output should be similar to: out.37038
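A conda-managed install can later be updated in place with standard conda commands, e.g.:
conda activate /usr/local/usrapps/[your_path]/my_lammps_env
conda update lammps    # pulls the newest lammps build from conda-forge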
Users are encouraged to install and maintain their own version of LAMMPS in:
/usr/local/usrapps/projectid
where "projectid" is a directory name given to the project by the HPC administrator (a request must be made). You can download LAMMPS in several ways, but the git or svn route makes it easy to update later. The downside of these methods is that the documentation is not downloaded. Go to /usr/local/usrapps/projectid, then
git clone -b stable https://github.com/lammps/lammps.git /usr/local/usrapps/projectid

Weeks later, you can check for updates with
git checkout stable
git pull

If you downloaded a tarball instead of cloning, gunzip and untar the file in /usr/local/usrapps/projectid. Copy
/usr/local/apps/lammps/lammps-5Jun19/src/MAKE/MINE/Makefile.cpu
to
/usr/local/usrapps/projectid/lammps-5Jun19/src/MAKE/MINE/Makefile.cpu
Note that in Makefile.cpu, the line with CCFLAGS has
CCFLAGS = -g -O3 -restrict -xSSSE3 -axSSE4.2,AVX,CORE-AVX-I,CORE-AVX2
Using this, a "fat" binary is created that will allow the executable to run on the whole range of CPUs available on the HPC.
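For reference, the copy step above as a single command:
cp /usr/local/apps/lammps/lammps-5Jun19/src/MAKE/MINE/Makefile.cpu /usr/local/usrapps/projectid/lammps-5Jun19/src/MAKE/MINE/Makefile.cpu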
Now compile. You have a choice of compilers:
- The lammps-5Jun19 distribution was tested to compile with intel2018 (up to -O3 optimization).
- The lammps-16Mar18 distribution was tested to compile with intel2016 (up to -O3 optimization), intel2018 (up to -O3 optimization), and PGI-Portland (up to -O1 optimization).
Invoke the compiler environments with either
module load PrgEnv-intel/2018.2.199
or
module load PrgEnv-intel/2016.0.109
or
module load PrgEnv-pgi/18.4
Then go to the directory /usr/local/usrapps/projectid/lammpsxxx/src and compile with "make -j 2 cpu".
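Putting it together, a minimal build session (assuming the Intel 2018 environment and the lammps-5Jun19 directory used above) might look like:
module load PrgEnv-intel/2018.2.199
cd /usr/local/usrapps/projectid/lammps-5Jun19/src
make -j 2 cpu        # builds the executable lmp_cpu using Makefile.cpu
After a later "git pull", rebuild the same way; running "make clean-all" first forces a full recompile.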
The list of installed packages (as of 10/22/2019) in the 7Aug19 version, as reported by running "make pi" in the src directory (a sketch for changing the package selection follows the list):
make pi
Installed YES: package ASPHERE
Installed YES: package BODY
Installed YES: package CLASS2
Installed YES: package COLLOID
Installed YES: package COMPRESS
Installed YES: package CORESHELL
Installed YES: package DIPOLE
Installed YES: package GPU
Installed YES: package GRANULAR
Installed YES: package KSPACE
Installed YES: package MANYBODY
Installed YES: package MC
Installed YES: package MISC
Installed YES: package MOLECULE
Installed YES: package MPIIO
Installed YES: package OPT
Installed YES: package PERI
Installed YES: package PYTHON
Installed YES: package QEQ
Installed YES: package REPLICA
Installed YES: package RIGID
Installed YES: package SHOCK
Installed YES: package SNAP
Installed YES: package SPIN
Installed YES: package SRD
Installed YES: package USER-REAXC
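To change which packages are compiled into your own build, use the "make yes-NAME" and "make no-NAME" targets in the src directory before recompiling; a minimal sketch (USER-MISC is just an example package):
make yes-USER-MISC   # enable a package
make no-GPU          # disable a package
make pi              # verify the new package status
make -j 2 cpu        # recompile with the new selection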