
    Amber

    Amber - Assisted Model Building with Energy Refinement - is a suite of biomolecular simulation programs.

    External Links:
    Amber website
    Amber Tutorials

    Here we provide some sample files (from AMBER) and give instructions on how to run the most popular AMBER executables on the Hazel cluster. Many more executables are available, however, and can be listed with:

    ls /usr/local/apps/ambertools25/bin
    
    and
    ls /usr/local/apps/pmemd24/bin
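
    If you'd like to confirm that a particular executable is available after loading the Amber module, a quick check along these lines should work (a minimal sketch; it assumes the amber/25 module used later on this page adds both bin directories to your PATH):

    module load amber/25
    which sander sander.MPI pmemd.cuda.MPI   # prints the full path of each executable that is found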
    

    Typically, it's not a good idea to run Amber from your /home directory, as its output can easily overflow your quota. Instead, make a directory in your group's /share directory and run Amber from there. Be aware, however, that files in /share are NOT backed up, and files that have not been accessed for some time are automatically deleted.
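
    For example, a run directory could be set up like this (the amber_runs name is just an illustration; $GROUP and $USER are the same variables used in the job scripts below):

    mkdir -p /share/$GROUP/$USER/amber_runs   # scratch space for Amber runs; not backed up
    cd /share/$GROUP/$USER/amber_runs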

    Not all of the Amber executables are parallel or GPU-enabled; some come in serial, parallel, and GPU versions. For example, sander is serial, sander.MPI is parallel, and pmemd.cuda.MPI is parallelized and runs on GPUs.
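
    To see which variants of a given program are installed, you can filter the bin directories listed above; for instance (the grep patterns are only examples):

    ls /usr/local/apps/ambertools25/bin | grep -i '^sander'   # serial, MPI, and other sander variants
    ls /usr/local/apps/pmemd24/bin | grep -i '^pmemd'         # pmemd CPU and CUDA variants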

    The A100 and H100 nodes should be reserved for jobs that run quantum-mechanics applications or that need double-precision MD.

    Table of Contents - Select for a Particular Topic

  • running sander on CPUs
  • running sander and pmemd on GPUs
  • Installation tips for building your own AMBER25 version.
  • running sander on CPUs
  • Here's an example job submission file:

    #!/bin/bash
    #BSUB -n 8            # request 8 MPI ranks
    #BSUB -o out.%J       # standard output file (%J = job ID)
    #BSUB -e err.%J       # standard error file
    #BSUB -W 5            # wall-clock limit of 5 minutes
    source ~/.bashrc
    cd /share/$GROUP/$USER
    module load amber/25
    # copy the sample inputs from the ambertools25 test suite
    cp /usr/local/apps/ambertools25/test/sander_OIN_MPI/trpcge.crds .
    cp /usr/local/apps/ambertools25/test/sander_OIN_MPI/TC5b.top .
    cp /usr/local/apps/ambertools25/test/sander_OIN_MPI/gbin .
    mpirun sander.MPI -O -i gbin -c trpcge.crds -p TC5b.top -o sander.out
    

    If the above script is in a file named "test.bsub", you would submit it with:

    bsub < test.bsub
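
    After submitting, the usual LSF commands can be used to follow the job (the job ID 12345 below is just a placeholder):

    bjobs              # list your pending and running jobs
    bpeek 12345        # peek at the stdout of a running job (replace 12345 with your job ID)
    cat out.12345      # after completion, inspect the output file named by #BSUB -o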
    

  • running sander and pmemd on GPUs
  • sander and pmemd have GPU-enabled builds (pmemd.cuda & pmemd.cuda.MPI) that can run on the GPU nodes. As noted above, the A100 and H100 nodes should be reserved for quantum-mechanics jobs or for double-precision MD. For the following test, use an MD input file named mdin (an illustrative example is sketched below) together with the following submission script.
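
    The contents of the original sample mdin are not reproduced on this page; the block below is only an illustrative short-MD input with placeholder settings, not the official test file:

    short MD test (illustrative input, not the official test file)
     &cntrl
      imin=0, irest=0, ntx=1,
      nstlim=100, dt=0.002,
      ntc=2, ntf=2, cut=8.0,
      ntt=1, temp0=300.0, tempi=300.0,
      ntpr=10, ntwx=0,
     /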

    #!/bin/bash
    #BSUB -n 2                 # two MPI ranks
    #BSUB -W 5                 # wall-clock limit of 5 minutes
    #BSUB -gpu "num=2"         # request 2 GPUs
    #BSUB -q short_gpu         # GPU queue
    #BSUB -o out.%J            # standard output file (%J = job ID)
    #BSUB -e err.%J            # standard error file
    source ~/.bashrc
    cd /share/$GROUP/$USER
    module load amber/25
    # copy the sample inputs from the pmemd24 CUDA test suite
    cp /usr/local/apps/pmemd24/test/cuda/4096wat/prmtop .
    cp /usr/local/apps/pmemd24/test/cuda/4096wat/eq1.x .
    mpirun pmemd.cuda.MPI -O -i mdin -c eq1.x -p prmtop -o mdout
    

    If the above script is in a file named "test.bsub", you would submit it with:

    bsub < test.bsub
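
    Once the job completes, the mdout file normally records which GPU(s) were used; a quick, hedged check (the exact header text can vary between Amber versions):

    grep -i -A 4 "GPU DEVICE" mdout   # show the GPU device info section, if present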
    

  • Installation tips for building your own AMBER25 version.
  • For users holding an AMBER license who want to build their own AMBER25 executables from source, follow the general installation guidelines, with the following modifications for Hazel: use the cluster-provided modules listed below (cmake 3.24.1, OpenMPI 4.1.4 built with GCC 11.4.1, and CUDA 12.6). If you prefer to build your own MPI instead, install the recommended MPICH (mpich-3.3.2) from mpich.org/downloads/

    module load cmake/3.24.1 
    module load openmpi-gcc/openmpi4.1.4-gcc11.4.1
    module load cuda/12.6
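
    With those modules loaded, a build could then follow the standard AMBER cmake workflow; the sketch below is illustrative only (the source directory name and the cmake options depend on your download and on which features you enable):

    # assumes the AMBER25 source tarballs have been unpacked in the current directory
    cd amber25_src/build                 # hypothetical source directory name
    # edit run_cmake to set options such as -DMPI=TRUE and -DCUDA=TRUE before running it
    ./run_cmake
    make install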
      
    
