Typically, it's not a good idea to run LAMMPS from your /home directory, as its output can easily overflow your quota.
The latest staff-supported version of LAMMPS, 2024Aug29, runs on CPUs. The previous 2022 versions run on the newer NVIDIA GPUs.
The modulefiles for the older 2022 installations can be seen with
module avail lammps/2022Jun23
They are:
lammps/2022Jun23/intel/gpu
lammps/2022Jun23/intel/cpu
lammps/2022Jun23/gnu/gpu
lammps/2022Jun23/gnu/cpu
The new default modulefile is:
lammps/2022Jun23/intel/cpu
It was previously:
lammps/29Sep21u3
Older versions:
The 29Sep21u3 staff-supported version of LAMMPS runs on CPUs and the rtx2080 GPUs (NVIDIA):
The 7Aug19 staff-supported version of LAMMPS runs on CPUs and the rtx2080 GPUs (NVIDIA):
The 22Aug2018 staff-supported version of LAMMPS runs on CPUs and most of the older NVIDIA GPUs, including the four (4) NVIDIA RTX 2080 Ti GPUs.
To test the latest GPU version of LAMMPS (2022Jun23) interactively, do
bsub -Is -n 16 -R "span[hosts=1]" -W 5 -q gpu -R "select[a30]" -gpu "num=2:mode=shared:mps=yes" bash
. /usr/share/Modules/init/bash
module load lammps/2022Jun23/intel/gpu
mpirun lmp_hazel -sf gpu -in /usr/local/apps/lammps/lammps2022Jun23intelgpu/bench/in.lj.512k
To test LAMMPS version 2022Jun23 for GPUs in batch mode, create "testGPU.bsub" with the following lines (mps=yes helps multi-processor jobs run faster concurrently on newer GPUs - a30, rtx2080, gtx1080, k20m):
#!/bin/bash
#BSUB -n 16
#BSUB -R "span[hosts=1]"
#BSUB -W 5
#BSUB -q gpu
#BSUB -R "select[a30]"
#BSUB -gpu "num=2:mode=shared:mps=yes"
#BSUB -o out.%J
#BSUB -e err.%J
export UCX_TLS=ud,sm,self
. /usr/share/Modules/init/bash
module load lammps/2022Jun23/intel/gpu
mpirun lmp_hazel -sf gpu -in /usr/local/apps/lammps/lammps2022Jun23intelgpu/bench/in.lj.512k

and run it with:
bsub < testGPU.bsub
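Once a job is submitted, you can watch it with the standard LSF commands; a quick sketch (the job ID 123456 below is just a placeholder for the ID that bsub prints):

bjobs             # list your pending and running jobs
bpeek 123456      # peek at the stdout of the running job
cat out.123456    # read the full output file once the job has finished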
To test the CPU version (2024Aug29) interactively, do

bsub -Is -n 16 -R "span[hosts=1]" -W 10 bash
module load PrgEnv-intel/2024.1.0
mpirun /usr/local/apps/lammps/lammps29Aug2024/src/lmp_cpu < /usr/local/apps/lammps/lammps2022Jun23intelcpu/bench/in.lj.512k

To do the same thing in batch mode, create "testCPU.bsub" with the following lines:
#!/bin/bash
#BSUB -n 16
#BSUB -R "span[hosts=1]"
#BSUB -W 5
#BSUB -o out.%J
#BSUB -e err.%J
module load PrgEnv-intel/2024.1.0
mpirun /usr/local/apps/lammps/lammps29Aug2024/src/lmp_cpu < /usr/local/apps/lammps/lammps2022Jun23intelcpu/bench/in.lj.512k

Run this batch job with:
bsub < testCPU.bsub
Users are encouraged to install and maintain their own version of LAMMPS in:
/usr/local/usrapps/projectid

where "projectid" is a directory name given to the project by the HPC administrator; a request must be made. (An alternative is to install it in /home/username, which takes up about 310 MB of the 1000 MB /home/username quota.)
To install LAMMPS with conda, first set up conda:

module load conda
conda init bash

[log out, then log back in again] Then do
conda config --add channels conda-forge
conda create --prefix /usr/local/usrapps/[your_path]/my_lammps_env
conda activate /usr/local/usrapps/[your_path]/my_lammps_env
conda install lammps

When installation is complete, you can test it with a batch job "test.bsub" that looks like:
#!/bin/bash
#BSUB -n 2
#BSUB -W 10
#BSUB -o out.%J
#BSUB -e err.%J
conda activate /usr/local/usrapps/[your_path]/my_lammps_env
mpirun lmp_mpi < /usr/local/apps/lammps/lammps-7Aug19/examples/accelerate/in.lj

Run this batch job with:
bsub < test.bsub

The output should be similar to: out.37038
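Before (or instead of) the batch test, you can also sanity-check the conda install interactively; this is a minimal sketch, assuming the lammps conda package installed an executable named lmp_mpi, as used in the batch script above:

conda activate /usr/local/usrapps/[your_path]/my_lammps_env
which lmp_mpi      # confirm the binary resolves to your environment
lmp_mpi -h | head  # print the first lines of LAMMPS's help/feature summary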
Users are encouraged to install and maintain their own version of LAMMPS in:
/usr/local/usrapps/projectid

where "projectid" is a directory name given to the project by the HPC administrator (a request must be made). You can download LAMMPS in several ways, but the git or svn way allows you to easily update. The downside of these ways is that the documentation is not downloaded. Go to /usr/local/usrapps/projectid, then
git clone -b stable https://github.com/lammps/lammps.git /usr/local/usrapps/projectid

Weeks later, you can check for updates with
git checkout stable
git pull

If you downloaded a tarball instead, gunzip and untar it in /usr/local/usrapps/projectid. Copy /usr/local/apps/lammps/lammps-5Jun19/src/MAKE/MINE/Makefile.cpu to /usr/local/usrapps/projectid/lammps-5Jun19/src/MAKE/MINE/Makefile.cpu. Note that in Makefile.cpu, the line with CCFLAGS has
CCFLAGS = -g -O3 -restrict -xSSSE3 -axSSE4.2,AVX,CORE-AVX-I,CORE-AVX2

Using this creates a "fat" binary, which allows the executable to run on the whole range of CPUs available on the HPC.
Now compile. You have a choice of compilers: Intel, GNU, or PGI. The latest version was compiled with
module load PrgEnv-intel/2022.1.0

or
module load openmpi-gcc/openmpi4.1.0-gcc10.2.0

and
module load cuda/12.0

After invoking the compiler, go to the directory /usr/local/usrapps/projectid/lammpsxxx/src and compile with "make -j 2 cpu".
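Putting the pieces together, a CPU-only build session might look like the sketch below, assuming the Intel toolchain and using /usr/local/usrapps/projectid/lammpsxxx as a stand-in for your actual LAMMPS source directory:

module load PrgEnv-intel/2022.1.0
cd /usr/local/usrapps/projectid/lammpsxxx/src   # your LAMMPS source tree
make yes-most                                   # enable the commonly used packages
make -j 2 cpu                                   # build lmp_cpu using the Makefile.cpu shown above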
The packages installed (as of 2023/03/13) in the 2022Jun23 version were selected and reported with

make yes-most
make yes-gpu
make pi

Installed YES: package ASPHERE
Installed YES: package BOCS
Installed YES: package BODY
Installed YES: package BPM
Installed YES: package BROWNIAN
Installed YES: package CG-DNA
Installed YES: package CG-SDK
Installed YES: package CLASS2
Installed YES: package COLLOID
Installed YES: package CORESHELL
Installed YES: package DIELECTRIC
Installed YES: package DIFFRACTION
Installed YES: package DIPOLE
Installed YES: package DPD-BASIC
Installed YES: package DPD-MESO
Installed YES: package DPD-REACT
Installed YES: package DPD-SMOOTH
Installed YES: package DRUDE
Installed YES: package EFF
Installed YES: package EXTRA-COMPUTE
Installed YES: package EXTRA-DUMP
Installed YES: package EXTRA-FIX
Installed YES: package EXTRA-MOLECULE
Installed YES: package EXTRA-PAIR
Installed YES: package FEP
Installed YES: package GPU
Installed YES: package GRANULAR
Installed YES: package INTERLAYER
Installed YES: package KSPACE
Installed YES: package MANYBODY
Installed YES: package MC
Installed YES: package MEAM
Installed YES: package MISC
Installed YES: package ML-IAP
Installed YES: package ML-SNAP
Installed YES: package MOFFF
Installed YES: package MOLECULE
Installed YES: package OPENMP
Installed YES: package OPT
Installed YES: package ORIENT
Installed YES: package PERI
Installed YES: package PHONON
Installed YES: package PLUGIN
Installed YES: package QEQ
Installed YES: package REACTION
Installed YES: package REAXFF
Installed YES: package REPLICA
Installed YES: package RIGID
Installed YES: package SHOCK
Installed YES: package SPH
Installed YES: package SPIN
Installed YES: package SRD
Installed YES: package UEF
Installed YES: package YAFF
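Once your own build finishes, submit it the same way as the staff-installed versions. Below is a minimal batch-script sketch; the executable path (lmp_cpu under your source tree) and the input file name in.myrun are placeholders to adjust:

#!/bin/bash
#BSUB -n 16
#BSUB -R "span[hosts=1]"
#BSUB -W 60
#BSUB -o out.%J
#BSUB -e err.%J
module load PrgEnv-intel/2022.1.0
mpirun /usr/local/usrapps/projectid/lammpsxxx/src/lmp_cpu -in in.myrun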
To use the NequIP machine-learning interatomic potential with LAMMPS, first create a conda environment that provides PyTorch and nequip:

conda env create --prefix /path_somewhere/test0_env -f test0.yml

where test0.yml contains:
name: test0
channels:
  - pytorch
  - nvidia
dependencies:
  - pytorch
  - torchvision
  - pytorch-cuda=12.4
  - nequip
On a GPU node, check that PyTorch can see the GPU:

conda activate ./test0_env
python
>>> import torch
>>> torch.cuda.get_device_name(0)
'GeForce GTX 1080'
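The same check can be run non-interactively from the shell; a one-line sketch, assuming the environment is already activated on a GPU node:

python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"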
conda activate ./test0_env

Notice above that nequip is already in the test0_env conda environment. nequip does not need wandb, so this part of the nequip installation is left out of these instructions. Please test the examples given in the nequip README documentation. To get the example files, follow a small segment of their instructions, with
git clone https://github.com/mir-group/nequip.git

Note that you are not trying to install nequip from this source directory (don't follow the pip install instruction); you are only downloading this git repository to get access to the examples, which will be in the
../nequip/configs directory
Run the training examples with

nequip-train configs/minimal.yaml

or

nequip-train configs/example.yaml

In the configs/example.yaml file, you need to make a slight modification so that the data set path points to ./toluene_ccsd_t-train.npz.
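As an illustration, the minimal training example could be run on a GPU node roughly as follows; the bsub resource values are arbitrary examples, and /path_somewhere/test0_env is the placeholder path used when the environment was created:

bsub -Is -n 4 -W 30 -q gpu -R "select[a30]" -gpu "num=1:mode=shared:mps=yes" bash
conda activate /path_somewhere/test0_env
cd nequip                          # the directory created by the git clone above
nequip-train configs/minimal.yaml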
To build LAMMPS against this environment, set up the following modules:

module load cuda/12.3
conda activate ../../test0_env/
module load PrgEnv-intel/2022.1.0
module load cmake/3.24.1

Note that the older PrgEnv-intel/2022.1.0 module is used here instead of the newer PrgEnv-intel/2024.1.0; the latter causes an error.
$ cmake ../cmake \
    -DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'` \
    -DCMAKE_CXX_COMPILER=/usr/local/apps/intelbin/2024.1.0/icpc \
    -DCMAKE_C_COMPILER=/usr/local/apps/intelbin/2024.1.0/icc \
    -DCMAKE_CXX_STANDARD=17

Then follow the "Build LAMMPS" instructions.
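After the cmake configuration succeeds, the remaining step is the usual CMake build; a short sketch (adjust the -j value to the cores you have):

make -j 4          # build LAMMPS in the build directory
./lmp -h | head    # quick check that the new executable runs and prints its help summary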
Last modified: April 12 2025 12:18:38.