To access Gaussian you must sign a license
acknowledgement form. Select the "Request Access"
button under Gaussian on the HPC Software page to
obtain the necessary form.
External Links:
Gaussian website
Gaussian Tutorial Videos
Gaussian is a 64-bit application. There are currently two versions of Gaussian available on Henry2: g09 (revision D.01) and g16 (revision A.03). To set up the environment for g16, use the command

module load gaussian

To set up the environment for g09, use the command

module load gaussian/09
NOTE: g16 requires a processor with AVX instruction support. Your job script therefore needs the line

#BSUB -R avx

to request compute nodes whose cores support AVX. (g09 does not need this.)
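If you are unsure whether a given node supports AVX, a quick check is possible from the shell. This is a hedged sketch, not from the Henry2 documentation: on Linux, /proc/cpuinfo lists one "flags" line per core, and a standalone "avx" entry indicates support.

```shell
# Hedged sketch: check whether this Linux node's CPUs advertise AVX,
# e.g. before choosing between g16 and g09.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || true)
case " $flags " in
  *" avx "*) avx_status="AVX available: g16 can run here" ;;
  *)         avx_status="AVX not detected: use g09 or request AVX nodes" ;;
esac
echo "$avx_status"
```

The space-padded pattern match treats "avx" as a whole word, so related flags such as "avx2" or "avx512f" alone do not produce a false positive.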
Gaussian can run on CPUs and GPUs. Gaussian also comes with an auxiliary GUI program, GaussView, for visualization.
The job script for a g16 run would look similar to the following file named run_g16:
#!/bin/bash
#BSUB -n 12
#BSUB -R "span[hosts=1]"
#BSUB -R "select[avx]"
#BSUB -W 120
#BSUB -q single_chassis
#BSUB -x
#BSUB -o out.%J
#BSUB -e err.%J
module load gaussian
g16 < h2o.in > h2o.out
By default GAUSS_SCRDIR is set to /scratch, a fast file system local to each compute node. On many nodes its capacity is between 180 GB and 320 GB; for very large systems this may not be enough. In such cases, a setting of

export GAUSS_SCRDIR=/share/$group/$user/scratch/$LSB_JOBID

is more appropriate.
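When redirecting GAUSS_SCRDIR to shared storage, the directory must exist before Gaussian runs, and the large .rwf/.int scratch files should be removed afterward. The following is a hedged sketch of how a job script might stage and clean up such a directory; SCRATCH_BASE is an illustrative assumption (on Henry2 it would be your /share group path), and LSF sets LSB_JOBID for you inside a real job.

```shell
# Hedged sketch: stage a per-job Gaussian scratch directory and clean it up.
# SCRATCH_BASE is an assumption for illustration; inside an LSF job,
# LSB_JOBID is set automatically.
SCRATCH_BASE="${SCRATCH_BASE:-/tmp}"
JOBID="${LSB_JOBID:-$$}"                 # fall back to the shell PID outside LSF
export GAUSS_SCRDIR="$SCRATCH_BASE/gauss-scratch/$JOBID"
mkdir -p "$GAUSS_SCRDIR"                 # create it before g16 starts
trap 'rm -rf "$GAUSS_SCRDIR"' EXIT       # remove scratch files when the job ends
echo "GAUSS_SCRDIR=$GAUSS_SCRDIR"
```

The trap ensures cleanup even if the Gaussian run fails partway through, so failed jobs do not leave large scratch files behind on shared storage.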
The latest Gaussian version (revision C.02) is the default. This version can run on the latest A100 and H100 NVIDIA GPUs. Use the command

module avail gaussian

to see the older versions that are available.
Gaussian can do parallel computations using multiple cores on a single node. (Gaussian is a shared-memory code and cannot do distributed-memory parallel computations across multiple nodes.) The number of processors specified in the g16 input file (the file h2o.in shown below) should match the number of cores requested with the "-n" option in the job script.
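Because the core count lives in two separate files, a quick pre-submission check can catch mismatches. This is a hedged sketch; the two demo files are written inline (under illustrative names) only to make the snippet self-contained.

```shell
# Hedged sketch: verify %nprocshared in the Gaussian input matches the
# core count requested with "#BSUB -n" in the job script. The demo files
# are created here only so the check is self-contained.
printf '%%nprocshared=12\n%%Chk=water\n' > demo_h2o.in
printf '#!/bin/bash\n#BSUB -n 12\n'      > demo_run_g16

nproc=$(grep -i '^%nprocshared=' demo_h2o.in | cut -d= -f2)
ncores=$(awk '/^#BSUB -n /{print $3}' demo_run_g16)
if [ "$nproc" = "$ncores" ]; then
    check_msg="OK: input and job script both use $nproc cores"
else
    check_msg="Mismatch: input wants $nproc cores, job script requests $ncores"
fi
echo "$check_msg"
```

Running the same check against your real h2o.in and run_g16 before bsub avoids jobs that oversubscribe or waste the cores they were allocated.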
The input file h2o.in should be in the same directory
as the job script and that should be your current working
directory when you issue the bsub command. For
this example the input file contains the following content:
%nprocshared=12
%Chk=water
#RHF/6-31G(d)

water energy

0 1
O
H 1 1.0
H 1 1.0 2 120.0

END
ECHO "Job done. "

This Gaussian job can be submitted by
bsub < run_g16
To run g16 on GPUs, use an input file h2o.in like:
%Mem=24GB
%Chk=water
%CPU=0-1
%GPUCPU=0,1=0,1
#RHF/6-31G(d)

water energy

0 1
O
H 1 1.0
H 1 1.0 2 120.0

END
ECHO "Job done. "
Use a job submission script, run_g16, like:
#!/bin/bash
#BSUB -n 2
#BSUB -R "span[hosts=1]"
#BSUB -W 100
#BSUB -q gpu
#BSUB -R "select[a30]"
#BSUB -gpu "num=2"
#BSUB -o out.%J
#BSUB -e err.%J
module load gaussian
g16 < h2o.in > h2o.out
Submit the job with:
bsub < run_g16
Generally, Gaussian jobs with more atoms can effectively use more cores.
GaussView is a GUI utility, and GUIs should not be run on HPC login nodes. Instead, use the HPC environment on VCL nodes for GUI applications such as GaussView. To use the HPC environment on VCL nodes you must be in the HPC "vcl" group; send an e-mail to

oit_hpc@help.ncsu.edu

to request to be added to the "vcl" group. For how to use the HPC environment on VCL nodes, please visit HPC-VCL.
Once on a VCL node, you can do
module load gaussian/16
gview.exe
to launch the GaussView GUI that comes with Gaussian 16. If you want to use the older version of GaussView that comes with Gaussian 09, you can instead do
source /usr/local/apps/gv9/g09.sh
gview.exe
to launch the GaussView GUI.
Last modified: April 02 2024 13:40:33.