HPC maintained software

The following tables show the status of software that has been requested and/or installed on HPC systems for general use. Many of the applications require group membership for access. If a permission denied error is encountered for one of the packages, please contact HPC Support to be added, license permitting, to the list of users. For ANSYS, please request access and agree to the licensing terms using the Request Access links provided below. For Gaussian, a license acknowledgement needs to be signed and returned to obtain access.

Officially Supported Applications

These applications have been approved as HPC maintained applications. The overall review process is not yet complete; all other applications are currently under review.

Application Name Default Version Load Environment Description How to Use... Benchmarks
Abaqus 2024.HF4 module load abaqus Commercial finite element package Abaqus | Slurm Abaqus benchmarks
Amber (Request access) Amber25 module load amber Parallel molecular dynamics Amber | Slurm
ANSYS (Request access) 25.2 module load ansys Commercial finite element package ANSYS | Slurm ANSYS benchmarks
Apptainer (Singularity) 1.4.2-1 module load apptainer Apptainer (formerly Singularity) is a secure, portable, and easy-to-use container system Apptainer | Slurm
BLAST+ 2.17.0 module load blast Basic Local Alignment Search Tool BLAST+ | Slurm
CMake 3.20.4 module load cmake Build, test, and package software CMake | Slurm
Conda 24.5.0 module load conda Package manager for installing software Conda | Slurm
Gaussian (Request access) 16 (A.03) module load gaussian Commercial quantum chemistry code Gaussian | Slurm Gaussian benchmarks
Gurobi 8.1 module load gurobi Commercial optimization solver Gurobi | Slurm
Julia 1.7.0 module load julia High performance scripting language Julia | Slurm
Jupyter 24.5.0 conda activate (your_conda_notebook_env) Interactive computing IDE Jupyter | Slurm
LAMMPS 2022Jun23 module load lammps/2022Jun23/intel/cpu Molecular dynamics simulator LAMMPS | Slurm LAMMPS benchmarks
MAKER 2.31.10 module load maker Genome annotation pipeline MAKER | Slurm
Maple 2021 module load maple Symbolic and numeric computing Maple | Slurm
Mathematica 12.0.0 module load mathematica Technical computing Mathematica | Slurm
MATLAB R2023a module load matlab Commercial language for matrix computations MATLAB | Slurm MATLAB benchmarks
ParaView 5.6.0 module load paraview Open source parallel application for visualization and analysis ParaView | Slurm
Perl 5.16.3 [none required]
5.28.0 module load perl
General purpose scripting language Perl | Slurm
Python 2.7.5 [none required]
3.9.21, 3.12.2 module load conda
Interpreted programming language Python | Slurm
R 4.4.0 module load R Open source statistics package R | Slurm R benchmarks
RStudio 4.4.0 Access via Open OnDemand IDE for R, an open source statistics package RStudio | Slurm
SAS 9.4 module load sas Commercial statistical analysis package SAS | Slurm
Stata (UNCG Users only) 17.0 module load stata Commercial statistical analysis package Stata | Slurm
VASP (Request access) 5.4.1 module load vasp Vienna Ab initio Simulation Package VASP | Slurm
VMD 1.9.3 module load vmd Software package for visualizing molecules VMD | Slurm
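The modules listed above are typically loaded inside a Slurm batch script before the application is invoked. A minimal sketch, using the abaqus module from the table; the job name, task count, and input file (model.inp) are placeholders, not a prescribed recipe:

```shell
#!/bin/bash
# Hypothetical Slurm batch script sketch -- adapt resources and
# input file names to your own job.
#SBATCH --job-name=abaqus_test
#SBATCH --ntasks=4
#SBATCH --time=02:00:00
#SBATCH --output=abaqus_%j.out

module load abaqus          # load the environment listed in the table
abaqus job=model input=model.inp cpus=4 interactive
```

Submit with `sbatch`; see the per-application "How to Use" and Slurm links above for the supported invocation for each package.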

Officially Supported Libraries

These libraries have been approved as HPC maintained applications. The overall review process is not yet complete; all other libraries are currently under review.

Library Name Default Version Load Environment Description How to Use...
CUDA 12.0 module load cuda NVIDIA library and compiler for using GPUs CUDA
HDF5 1.10.5 (GNU) module load hdf5/1.10.5-gcc4.8.5
1.10.2 (Intel) module load hdf5/1.10.2-intel2017
Hierarchical Data Format HDF5
Intel MKL 2017 module load mkl (GNU)
module load PrgEnv-intel (Intel)
Intel math kernel libraries, including BLAS, LAPACK, and ScaLAPACK Intel MKL
NetCDF 4.6.3 (GNU) module load netcdf/4.6.3-gcc4.8.5
4.6.1 (Intel) module load netcdf/4.6.1-intel2017
Self-describing, machine-independent data formats NetCDF
OpenMPI 4.0.0 module load openmpi-gcc Open source Message Passing Interface implementation
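Library modules are typically loaded before compiling against them. A minimal sketch, assuming the OpenMPI module from the table; hello.c is a placeholder source file:

```shell
# Hedged sketch: build and run an MPI program against the OpenMPI
# module listed above. hello.c is a placeholder for your source.
module load openmpi-gcc
mpicc -O2 -o hello hello.c
srun -n 4 ./hello           # run inside a Slurm allocation
```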

Sponsored Applications

These applications are not officially maintained; that is, HPC does not update this software or provide official support or documentation. Sponsored applications are sponsored by users who have volunteered to share their software installations with users outside of their project group and to field basic questions about usage.

Bioinformatics tools

  • How to request access and instructions for use | Slurm
  • Application Type Description Includes:
    Bioinformatics tools Various packages available, either as binary installations or Conda environments: Assemblers, Metagenomics, Quality Assessment and Trimming... SPAdes, Canu, Trimmomatic, QIIME2, DADA2, hmmer, metabat, prodigal, sratoolkit, and many others.

Geospatial tools

Machine learning, neural network frameworks

  • Self-installation guidelines | Slurm
  • Application Type Description Includes:
    Machine learning, neural network frameworks Conda environments for TensorFlow and PyTorch TensorFlow, PyTorch
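Following the self-installation guidelines linked above, a user-level environment can be sketched as below. The environment name and the choice of `pip` for the framework install are assumptions, not a prescribed recipe:

```shell
module load conda
conda create -n my_ml_env python=3.12 -y   # env name is arbitrary
conda activate my_ml_env                   # may require 'conda init' first
pip install torch    # or tensorflow; see each project's install docs
```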

Marine, earth, and atmospheric modeling tools

  • How to request access and instructions for use | Slurm
  • Application Type Description Includes:
    NetCDF utilities Conda environment with various applications for analyzing NetCDF output CDO, GEOS, NCO, NCL, Ncview, xarray, PseudoNetCDF, PyNGL, PyNIO, nccmp, Cartopy, MetPy, MONET
    Geophysical numerical modeling and analysis Applications for numerical modeling and analysis MET
    CMAQ dependencies Intel 2018 based software stack for compiling CMAQ. Does not include CMAQ itself, includes a sample configure script and LSF script tested with CMAQ 5.3.2 Benchmark. NetCDF, I/O API
    UFS dependencies Intel 2018 based software stack for compiling Unified Forecasting System. Does not include UFS itself, tested with simple-test-case. NCEP and NCEP-external libs, including NetCDF, JasPer, JPEG, PNG, Wgrib2
    WRF dependencies Intel 2017 based software stack for compiling WRF. Does not include WRF itself, tested with WRF 4.2.2 using compile option 15. HDF5, Perl5, NetCDF, JasPer, GRIB2

Other Applications

These applications are under review for addition to either the Officially Supported or Obsolete lists.

Name Latest Installed Version Location Description How to Use...
FALCON 0.5 /usr/local/apps/falcon Hierarchical Sequencing FALCON | Slurm
GenomeTools 1.5.5 /usr/local/apps/genometools Genome informatics tools GenomeTools | Slurm
GROMACS 2016 /usr/local/apps/gromacs Molecular dynamics of biochemicals and polymers GROMACS | Slurm
Java 1.8.0 module load java Programming language Java | Slurm
MAPS 4.1.0 /usr/local/apps/scienomics Software package for material modeling, simulation and analysis MAPS | Slurm
NAMD 2.10 /usr/local/apps/NAMD Parallel molecular dynamics package NAMD | Slurm
SIESTA 4.0.1 /usr/local/apps/siesta Open Source Pseudo Potential Package SIESTA | Slurm

User maintained software

If a /usr/local/usrapps/group_name directory does not exist, see the following requirements for requesting the space:

Space for user maintained executables

Acceptable use
    The directory /usr/local/usrapps provides space for user installed and maintained applications.
    A project may request a directory in which all group members may install software.
    All new /usrapps directories will be named for the project group unless otherwise requested.
    Directories in /usr/local/usrapps may not be used for data or as a working space from which to execute jobs. A compute node cannot write to /usr/local/usrapps. Globus and HPC-VCL cannot write to this space either.
    Applications must be maintained/patched to minimize potential security vulnerabilities.
    Access should be managed via Linux group permissions to comply with license restrictions.
    Applications that require root access to install are not permitted.
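Managing access via Linux group permissions might look like the following sketch; the group name and install path are placeholders, not real names on the cluster:

```shell
# Hypothetical example: restrict a licensed application's install
# directory to members of one Linux group. Both names are placeholders.
APPDIR=/usr/local/usrapps/mygroup/licensed_app
chgrp -R mygroup_licensed "$APPDIR"   # give the licensed group ownership
chmod -R o-rwx "$APPDIR"              # no access for users outside the group
chmod g+s "$APPDIR"                   # new files inherit the group
```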
Procedure to request space under /usr/local/usrapps
  • The Project PI, or a group member with the Project PI cc'ed, should submit a request via email to HPC Support including the following information:
    • Name of HPC project that will be responsible for the application
    • The following statement: "The HPC group [mygroup] certifies that we will only install appropriately licensed applications on the HPC Linux cluster — e.g., applications where the license is fully open source with no applicable restrictions, applications for which NCSU has approved a clickwrap, or applications with licenses purchased by our group. For licenses purchased by our group, we will maintain proper file and directory permissions to comply with the license. The group will maintain software by installing security patches or upgrades when necessary."
  • A directory will be created with group read/write access for the requesting project.
  • The project group will be responsible for installing and maintaining the application.
  • Please contact HPC staff with any questions about licensing when installing software.
Quota

There is a quota for each HPC Project at /usr/local/usrapps/$GROUP.
To check the quota, issue the command:

 quota_display 

Where to install?

A space in /usr/local/usrapps is by default writable by everyone in the project, so all members can write directly to /usr/local/usrapps/$GROUP. This leads to confusion and potential permissions problems. Instead, each user should create their own directory within the group space for installing software and adjust permissions as necessary:

mkdir /usr/local/usrapps/$GROUP/$USER
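To keep that personal directory private (for example, for individually licensed software), permissions can be tightened after creating it; this is a sketch, not a requirement:

```shell
mkdir -p /usr/local/usrapps/$GROUP/$USER
chmod 700 /usr/local/usrapps/$GROUP/$USER   # owner-only; relax if sharing with the group
```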

How to install?

Please see the general guidance for installing software.