LAMMPS
LAMMPS stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is copyrighted by Sandia National Laboratories (DOE). See the on-line manual for LAMMPS documentation, linked below. Here we give instructions for running it on the CPUs and GPUs with the staff-supported version on the Hazel HPC. Also given are instructions for users to install their own version.
External Links:
LAMMPS website
LAMMPS Tutorials
LAMMPS benchmarks
Typically, it is not a good idea to run LAMMPS from your /home directory, as its output can easily overflow your quota.
2024Aug29, the latest staff-supported version of LAMMPS, runs on CPUs. The previous 2022 versions run on the newer GPUs (NVIDIA):
The modulefiles for the older 2022 installations can be seen with
module avail lammps/2022Jun23
They are:
lammps/2022Jun23/intel/gpu
lammps/2022Jun23/intel/cpu
lammps/2022Jun23/gnu/gpu
lammps/2022Jun23/gnu/cpu
The new default modulefile is:
lammps/2022Jun23/intel/cpu
The previous default was:
lammps/29Sep21u3
Older versions:
The 29Sep21u3 staff-supported version of LAMMPS runs on CPUs and the rtx2080 GPUs (NVIDIA):
The 7Aug19 staff-supported version of LAMMPS runs on CPUs and the rtx2080 GPUs (NVIDIA):
The 22Aug2018 staff-supported version of LAMMPS runs on CPUs and most of the older GPUs (NVIDIA), including the four (4) NVIDIA RTX 2080 Ti GPUs.
To test LAMMPS on the latest 2022Jun23 version for GPUs (interactively) do
bsub -Is -n 16 -R "span[hosts=1]" -W 5 -q gpu -R "select[a30]" -gpu "num=2:mode=shared:mps=yes" bash
. /usr/share/Modules/init/bash
module load lammps/2022Jun23/intel/gpu
mpirun lmp_hazel -sf gpu -in /usr/local/apps/lammps/lammps2022Jun23intelgpu/bench/in.lj.512k
To test LAMMPS version 2022Jun23 for GPUs in batch mode, create "testGPU.bsub" with the following lines (mps=yes helps multi-processor jobs run faster concurrently on newer GPUs - a30, rtx2080, gtx1080, k20m):
#!/bin/bash
#BSUB -n 16
#BSUB -R "span[hosts=1]"
#BSUB -W 5
#BSUB -q gpu
#BSUB -R "select[a30]"
#BSUB -gpu "num=2:mode=shared:mps=yes"
#BSUB -o out.%J
#BSUB -e err.%J
export UCX_TLS=ud,sm,self
. /usr/share/Modules/init/bash
module load lammps/2022Jun23/intel/gpu
mpirun lmp_hazel -sf gpu -in /usr/local/apps/lammps/lammps2022Jun23intelgpu/bench/in.lj.512k
and run it with:
bsub < testGPU.bsub
How to select a specific GPU:
- lshosts | grep gpu | more
This shows a list of the hosts with GPUs. The far right column shows the GPU models as names that can be used in the "-R select" option.
- To select a specific hostname, use -m (not recommended), for example:
#BSUB -m "c041n01"
- To avoid a specific hostname, use hname!= (not recommended), for example:
#BSUB -R "(hname != c014n01)"
For example, to select the newest A30 GPU, use:
#BSUB -R "select[a30]"
To use any of the 6 older GPUs (the RTX2080 or the GTX1080), use:
#BSUB -R "select[rtx2080 || gtx1080]"
To use the P100 GPU for applications that require double precision (e.g. ab-initio applications), use:
#BSUB -R "select[p100]"
Most of the GPU nodes have 2 GPUs per node, but the rtx2080 nodes have 4.
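As a convenience, the "select[...]" strings above can be assembled with a small shell helper. This is only a sketch; gpu_select is a hypothetical function, not a Hazel command, and the GPU model names are the ones listed on this page:

```shell
# Hypothetical helper: build an LSF resource string from GPU model names.
gpu_select() {
    local joined
    joined=$(printf '%s || ' "$@")          # join arguments with " || "
    printf 'select[%s]\n' "${joined% || }"  # strip the trailing separator
}

gpu_select rtx2080 gtx1080   # prints: select[rtx2080 || gtx1080]
```

Because #BSUB directives are read before the shell runs, such a helper is only useful on the command line, e.g. bsub -R "$(gpu_select rtx2080 gtx1080)" ... rather than inside a batch script.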
bsub -Is -n 16 -R "span[hosts=1]" -W 10 bash
module load PrgEnv-intel/2024.1.0
mpirun /usr/local/apps/lammps/lammps29Aug2024/src/lmp_cpu < /usr/local/apps/lammps/lammps2022Jun23intelcpu/bench/in.lj.512k
To do the same thing in batch mode, create "testCPU.bsub" with the following lines:
#!/bin/bash
#BSUB -n 16
#BSUB -R "span[hosts=1]"
#BSUB -W 5
#BSUB -o out.%J
#BSUB -e err.%J
module load PrgEnv-intel/2024.1.0
mpirun /usr/local/apps/lammps/lammps29Aug2024/src/lmp_cpu < /usr/local/apps/lammps/lammps2022Jun23intelcpu/bench/in.lj.512k
Run this batch job with:
bsub < testCPU.bsub
Here is the link to the Hazel benchmarks.
lj model, Gold6226 node,
Loop time of 2.13852 on 16 procs for 100 steps with 512000 atoms
lj model, a30 2x GPU node,
Loop time of 0.283579 on 16 procs for 100 steps with 512000 atoms
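For a quick sense of scale, the two loop times above imply roughly a 7.5x speedup for the a30 GPU node over the Gold6226 CPU node on this benchmark. A one-line check, using the loop times quoted above:

```shell
# Speedup of the a30 GPU run over the Gold6226 CPU run (loop times from above).
awk 'BEGIN { printf "speedup: %.2fx\n", 2.13852 / 0.283579 }'   # prints: speedup: 7.54x
```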
Users are encouraged to install and maintain their own version of LAMMPS in:
/usr/local/usrapps/projectid
Where "projectid" is a directory name given to the project by the HPC administrator; a request must be made. (An alternative is to install in /home/username, which takes up about 310 MB of the 1000 MB /home/username quota.)
These two links give an overview of the installation process via conda:
Initialize conda environment on Hazel
Install via conda
Go to /usr/local/usrapps/projectid. Then initialize conda with:
module load conda
conda init bash
[log out, then log back in]
Then do:
conda config --add channels conda-forge
conda create --prefix /usr/local/usrapps/[your_path]/my_lammps_env
conda activate /usr/local/usrapps/[your_path]/my_lammps_env
conda install lammps
When installation is complete, you can test it with a batch job "test.bsub" that looks like:
#!/bin/bash
#BSUB -n 2
#BSUB -W 10
#BSUB -o out.%J
#BSUB -e err.%J
conda activate /usr/local/usrapps/[your_path]/my_lammps_env
mpirun lmp_mpi < /usr/local/apps/lammps/lammps-7Aug19/examples/accelerate/in.lj
Run this batch job with:
bsub < test.bsub
The output should be similar to: out.37038
Users are encouraged to install and maintain their own version of LAMMPS in:
/usr/local/usrapps/projectid
Where "projectid" is a directory name given to the project by the HPC administrator (a request must be made). You can download LAMMPS in several ways, but the git or svn method allows you to update easily. The downside of these methods is that the documentation is not downloaded. Go to /usr/local/usrapps/projectid. Then:
git clone -b stable https://github.com/lammps/lammps.git /usr/local/usrapps/projectid
Weeks later, you can check for updates with:
git checkout stable
git pull
In /usr/local/usrapps/projectid, gunzip and untar the file. Copy /usr/local/apps/lammps/lammps-5Jun19/src/MAKE/MINE/Makefile.cpu to /usr/local/usrapps/projectid/lammps-5Jun19/src/MAKE/MINE/Makefile.cpu. Note that in Makefile.cpu, the line with CCFLAGS has:
CCFLAGS = -g -O3 -restrict -xSSSE3 -axSSE4.2,AVX,CORE-AVX-I,CORE-AVX2
Using these flags creates a "fat" binary, which allows the executable to run on the whole range of CPUs available on the HPC.
Now compile. You have a choice of compilers: Intel, GNU, or PGI. The latest version was compiled with
module load PrgEnv-intel/2022.1.0
or
module load openmpi-gcc/openmpi4.1.0-gcc10.2.0
and
module load cuda/12.0
After loading the compiler modules, go to the directory /usr/local/usrapps/projectid/lammpsxxx/src, then compile with "make -j 2 cpu".
Then test it in a way similar to "LAMMPS for CPUs" above.
For the GPU package, ../lib/gpu/libgpu.a must be created - follow the instructions at the end of the ../lib/gpu/README file. In order for NVIDIA's MPS service to work on the newer GPUs (a30, rtx2080, gtx1080, k20m), add -DCUDA_PROXY to the CUDR_CPP variable in the ../lib/gpu/Makefile.linux file.
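As a sketch of that edit, the modified line in ../lib/gpu/Makefile.linux would look something like the following; the flags other than -DCUDA_PROXY are taken from the stock Makefile.linux and may differ in your LAMMPS version:

```make
# ../lib/gpu/Makefile.linux (sketch): append -DCUDA_PROXY so NVIDIA's MPS works
CUDR_CPP  = mpic++ -DMPI_GERYON -DUCL_NO_EXIT -DMPICH_IGNORE_CXX_SEEK -DCUDA_PROXY
```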
The list of installed packages (as of 2023/03/13) in 2022Jun23's version:
make yes-most
make yes-gpu
make pi
Installed YES: package ASPHERE
Installed YES: package BOCS
Installed YES: package BODY
Installed YES: package BPM
Installed YES: package BROWNIAN
Installed YES: package CG-DNA
Installed YES: package CG-SDK
Installed YES: package CLASS2
Installed YES: package COLLOID
Installed YES: package CORESHELL
Installed YES: package DIELECTRIC
Installed YES: package DIFFRACTION
Installed YES: package DIPOLE
Installed YES: package DPD-BASIC
Installed YES: package DPD-MESO
Installed YES: package DPD-REACT
Installed YES: package DPD-SMOOTH
Installed YES: package DRUDE
Installed YES: package EFF
Installed YES: package EXTRA-COMPUTE
Installed YES: package EXTRA-DUMP
Installed YES: package EXTRA-FIX
Installed YES: package EXTRA-MOLECULE
Installed YES: package EXTRA-PAIR
Installed YES: package FEP
Installed YES: package GPU
Installed YES: package GRANULAR
Installed YES: package INTERLAYER
Installed YES: package KSPACE
Installed YES: package MANYBODY
Installed YES: package MC
Installed YES: package MEAM
Installed YES: package MISC
Installed YES: package ML-IAP
Installed YES: package ML-SNAP
Installed YES: package MOFFF
Installed YES: package MOLECULE
Installed YES: package OPENMP
Installed YES: package OPT
Installed YES: package ORIENT
Installed YES: package PERI
Installed YES: package PHONON
Installed YES: package PLUGIN
Installed YES: package QEQ
Installed YES: package REACTION
Installed YES: package REAXFF
Installed YES: package REPLICA
Installed YES: package RIGID
Installed YES: package SHOCK
Installed YES: package SPH
Installed YES: package SPIN
Installed YES: package SRD
Installed YES: package UEF
Installed YES: package YAFF
lammps + nequip + PyTorch instructions: nequip is a neural network model for force fields that can be integrated into LAMMPS. It uses PyTorch and GPUs, so PyTorch is a prerequisite. What follows are instructions to install PyTorch, nequip, and LAMMPS so that they work together. The installation follows the general guidelines in the PyTorch documentation (https://pytorch.org/get-started/locally/), the nequip documentation (https://github.com/mir-group/nequip > README), and the lammps-nequip pair-style documentation (https://github.com/mir-group/pair_nequip).
- First step is the installation of PyTorch, and verifying that it works on the GPUs
- Second step is installing nequip, and verifying that it works on the GPUs
- Third step is installing lammps, and verifying that an nequip example works on the GPUs
PyTorch installation
- Assuming your account is initialized for conda:
conda env create --prefix /path_somewhere/test0_env -f test0.yml
where test0.yml contains:
name: test0
channels:
  - pytorch
  - nvidia
dependencies:
  - pytorch
  - torchvision
  - pytorch-cuda=12.4
  - nequip
Verify that PyTorch works by running a simple example on a GPU compute node. A minimal check could be:
conda activate ./test0_env
python
>>> import torch
>>> torch.cuda.get_device_name(0)
'GeForce GTX 1080'
nequip installation
conda activate ./test0_env
Notice above that nequip is already in the test0_env conda environment. nequip does not need wandb, so that part of the nequip installation is left out of these instructions. Please test the examples given in the nequip README documentation. To get the example files, follow a small segment of their instructions, with:
git clone https://github.com/mir-group/nequip.git
Note that you are not trying to install nequip from this source directory (don't follow the "pip install nequip" instruction); you are only downloading this git folder to get access to the examples, which will be in the
../nequip/configs directory. To verify that nequip works, log into a GPU node and run an example as shown in the nequip README, such as:
nequip-train configs/minimal.yaml
or
nequip-train configs/example.yaml
Notice that in the
configs/example.yaml file, you need to make a slight modification:
- Make sure you pre-download the required zip file and unzip it (http://quantum-machine.org/gdml/data/npz/toluene_ccsd_t.zip), so that the dataset is available locally as ./toluene_ccsd_t-train.npz
You may have to make similar modifications for the other examples.
LAMMPS installation
module load cuda/12.3
conda activate ../../test0_env/
module load PrgEnv-intel/2022.1.0
module load cmake/3.24.1
Notice that the older PrgEnv-intel/2022.1.0 module is used here instead of the newer one (PrgEnv-intel/2024.1.0); the latter causes an error. Follow (generally) the lammps-build instructions shown in https://github.com/mir-group/pair_nequip for "Download LAMMPS", "Download this repository", and "Patch LAMMPS". For the "Configure LAMMPS" section, modify those instructions with the following cmake arguments:
$ cmake ../cmake \
    -DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'` \
    -DCMAKE_CXX_COMPILER=/usr/local/apps/intelbin/2024.1.0/icpc \
    -DCMAKE_C_COMPILER=/usr/local/apps/intelbin/2024.1.0/icc \
    -DCMAKE_CXX_STANDARD=17
Then follow the “Build LAMMPS” instructions. Then test your LAMMPS installation with a pair_nequip example.
Last modified: October 27 2025 13:19:34.