Amber - Assisted Model Building with Energy Refinement - is a suite of biomolecular simulation programs.
External Links:
Amber website
Amber Tutorials
Here we provide some sample files (from AMBER) and give instructions on how to run the most popular AMBER executables on the Henry2 cluster. However, many more executables are available for use; they can be listed with:
ls /usr/local/apps/amber20/bin
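To see which Amber versions are installed as modules, and to confirm that a particular executable is on your path after loading one, the standard module commands can be used, for example:
# list the Amber modules installed on the cluster
module avail amber
# load a version and locate one of its executables
module load amber/20
which sander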
Typically, it's not a good idea to run Amber from your /home directory, as its output can easily overflow your quota. Instead make a directory in your group's /share directory and run Amber from there. However, be aware that files in /share are NOT backed up and files which have not been accessed in some time are automatically deleted.
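For example, a working directory for a run might be set up like this (the group and user names below are placeholders):
# create a job directory under your group's /share space and work from there
mkdir -p /share/<your_group>/<your_unity_id>/amber_run1
cd /share/<your_group>/<your_unity_id>/amber_run1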
Not all of the Amber executables are parallel or run on GPUs; some come in serial, parallel, and GPU versions. For example, sander is serial, sander.MPI is parallel, and pmemd.cuda_SPFP.MPI is parallelized and runs on GPUs.
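As a quick illustration using the test files from the sander example below, the serial and parallel versions are invoked differently:
# serial: run the executable directly on one core
sander -O -i gbin -c trpcge.crds -p TC5b.top -o sander.out
# parallel: launch the MPI executable across the allocated cores
mpirun sander.MPI -O -i gbin -c trpcge.crds -p TC5b.top -o sander.out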
GPU support, AMBER 20: pmemd.cuda and pmemd.cuda.MPI can run on the newer GPU nodes (rtx2080, gtx1080, p100). However, the p100 node should be reserved for jobs that run quantum-mechanics applications or that need double-precision MD.
GPU support, AMBER 18: pmemd.cuda and pmemd.cuda.MPI can run on the newer GPU nodes (rtx2080, gtx1080, k20m).
GPU support, AMBER 14: pmemd.cuda and pmemd.cuda.MPI can run on the older GPU nodes (m2070, m2070q, m2090).
#!/bin/tcsh
#BSUB -n 8
#BSUB -o out.%J
#BSUB -e err.%J
#BSUB -W 40
module load amber/20
cp /usr/local/apps/amber20/test/sander_OIN_MPI/trpcge.crds .
cp /usr/local/apps/amber20/test/sander_OIN_MPI/TC5b.top .
cp /usr/local/apps/amber20/test/sander_OIN_MPI/gbin .
mpirun sander.MPI -O -i gbin -c trpcge.crds -p TC5b.top -o sander.out
If the above script is in a file named "bamber", you would submit it by
bsub < bamber
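Once submitted, the job can be monitored with standard LSF commands, for example:
# list your pending and running jobs
bjobs
# peek at the output of a running job (replace 12345 with the job ID reported by bsub)
bpeek 12345
When the job completes, the MD output is in sander.out and the scheduler logs are in out.<jobid> and err.<jobid>.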
To run sander.MPI with AMBER 18, load the following modules:
module load PrgEnv-intel/2017.1.132
module load amber/18
Here's an example job submission file:
#!/bin/csh
#BSUB -n 8
#BSUB -R "avx span[ptile=8]"
#BSUB -o out.%J
#BSUB -e err.%J
#BSUB -W 44
module load PrgEnv-intel/2017.1.132
module load amber/18
mpiexec sander.MPI -O -i gbin -c eq1.x -o mdout.vrand_long
This job script requests 44 minutes of run time and 8 processor cores. A real production job would likely need more processors and time. The resource request string -R requests processors that have the AVX instruction set and that all 8 processor cores be on the same node. Amber 18 was compiled to use the AVX instruction set, so that resource is required.
If the above script is in a file named "bamber", you would submit it by
bsub < bamber
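For a larger production run, the same resource-request scheme scales up. As a rough sketch (the core count and wall-clock limit below are arbitrary examples), the following directives would request 32 cores placed 8 per node (4 nodes) for 12 hours:
#BSUB -n 32
#BSUB -R "avx span[ptile=8]"
#BSUB -W 720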
pmemd (pmemd.cuda and pmemd.cuda.MPI) with AMBER 20 can run on the newer GPU nodes (rtx2080, gtx1080, p100). The p100 node should be reserved for running quantum-mechanics jobs or for double-precision MD.
Use this input file (mdin) for the following test, together with the submission script below.
#!/bin/tcsh
#BSUB -n 2
#BSUB -W 10
#BSUB -R "select[rtx2080 || gtx1080 || p100]"
#BSUB -gpu "num=1:mode=shared:mps=yes"
#BSUB -q gpu
#BSUB -o out.%J
#BSUB -e err.%J
module load amber/20
cp /usr/local/apps/amber20/test/cuda/4096wat/prmtop .
cp /usr/local/apps/amber20/test/cuda/4096wat/eq1.x .
mpirun pmemd.cuda.MPI -O -i mdin -c eq1.x -p prmtop -o mdout
If the above script is in a file named "test.bsub", you would submit it by
bsub < test.bsub
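If you want to keep a job off the p100 node (which, as noted above, should be reserved for quantum-mechanics or double-precision work), the resource string can simply list the other GPU models, for example:
#BSUB -R "select[rtx2080 || gtx1080]"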
The same test can be run with AMBER 18 using the following submission script.
#!/bin/csh
#BSUB -n 2
#BSUB -W 10
#BSUB -R "select[rtx2080 || gtx1080 || p100 || k20m]"
#BSUB -gpu "num=1:mode=shared:mps=yes"
#BSUB -q gpu
#BSUB -o out.%J
#BSUB -e err.%J
module load PrgEnv-intel/2017.1.132
module load amber/18
module load cuda/9.1
cp /usr/local/apps/amber18/test/cuda/4096wat/prmtop .
cp /usr/local/apps/amber18/test/cuda/4096wat/eq1.x .
mpirun pmemd.cuda.MPI -O -i mdin -c eq1.x -p prmtop -o mdout
If the above script is in a file named "test.bsub", you would submit it by
bsub < test.bsub
pmemd (pmemd.cuda & pmemd.cuda.MPI) with AMBER 14 can run on the older GPU nodes (m2070, m2070q, m2090).
Use this input file (mdin) for the following test, together with the submission script below.
#!/bin/csh
#BSUB -n 2
#BSUB -W 10
#BSUB -R "select[m2070 || m2070q || m2090]"
#BSUB -gpu "num=1:mode=shared:mps=no"
#BSUB -q gpu
#BSUB -o out.%J
#BSUB -e err.%J
module load PrgEnv-intel/2016.0.109
module load amber/14
cp /usr/local/apps/amber18/test/cuda/4096wat/prmtop .
cp /usr/local/apps/amber18/test/cuda/4096wat/eq1.x .
mpirun pmemd.cuda.MPI -O -i mdin -c eq1.x -p prmtop -o mdout
If the above script is in a file named "test.bsub", you would submit it by
bsub < test.bsub
For users holding an AMBER license who are interested in building their own AMBER20 executables from source, follow the general installation guidelines, with the following modifications for Henry2:
Use cmake 3.18.2, gcc 6.3.0, and cuda 10.1, and for MPI install the recommended MPICH (mpich-3.3.2) from mpich.org/downloads/:
module load cmake/3.18.2
module load gcc/6.3.0
module load cuda/10.1
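A rough sketch of how the pieces might fit together after loading the modules above, assuming MPICH is installed under a prefix in your own space and the AMBER 20 sources are unpacked in the amber20_src layout described in the AMBER installation instructions (all paths below are placeholders; consult the AMBER 20 manual for the authoritative steps):
# build MPICH 3.3.2 into your own space (placeholder prefix)
tar xzf mpich-3.3.2.tar.gz
cd mpich-3.3.2
./configure --prefix=/share/<your_group>/<your_unity_id>/mpich-3.3.2
make
make install
# put the new MPI wrappers on your path (csh/tcsh syntax)
setenv PATH /share/<your_group>/<your_unity_id>/mpich-3.3.2/bin:${PATH}
# configure and build AMBER 20 with its CMake-based build system
cd amber20_src/build
# edit run_cmake to enable the options you need (e.g. MPI and CUDA), then:
./run_cmake
make install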
For users holding an AMBER license who are interested in building their own pmemd executables from source for running on the older GPUs, follow the general guidelines shown here, http://ambermd.org/gpus14/index.htm, with the following modifications for Henry2:
Use the intel 2016 MPI environment, and the cuda 6 environment:
module load PrgEnv-intel/2016.0.109
source /usr/local/apps/cuda/cuda6.csh
In config.h, replace -xHost (which compiles against the highest instruction set on the host, which is too high for the compute nodes) with -xSSE4.2.
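A minimal sketch of this configure-and-edit step, assuming the AMBER 14 source tree is already unpacked in $AMBERHOME (the configure options follow the general AMBER GPU build instructions linked above; adjust to your own source layout):
cd $AMBERHOME
./configure -cuda -mpi intel
# swap -xHost for -xSSE4.2 in the generated config.h, as described above
sed -i 's/-xHost/-xSSE4.2/g' config.h
make install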