
Introduction to High Performance Computing at NC State

    High Performance Computing at NC State

    The Hazel High Performance Computing (HPC) cluster is a shared system that helps researchers and students run computational work that is too large, too time-consuming, or too complex for a typical desktop or lab server. HPC enables faster turnaround, larger models, and the ability to run many analyses at once, supporting research and instruction across a broad range of disciplines.

    How the HPC cluster is used

    Work on the HPC cluster is submitted as a job to a scheduler. Users request the resources their work needs, such as CPU cores, memory, GPUs, and run time, and the scheduler runs the job when those resources become available. This approach supports efficient, fair sharing while enabling jobs that scale from quick tests to large production campaigns.

    Common HPC usage patterns include:

    • Large simulations that require many cores or high memory
    • Parameter sweeps and ensembles (many related runs exploring different inputs or scenarios)
    • High-throughput workloads (large numbers of independent tasks)
    • GPU-accelerated computing for workloads that benefit from GPUs (e.g., AI/ML, imaging, molecular simulation)
    • Post-processing and analysis of large outputs and datasets
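
    The parameter-sweep pattern above can be sketched in Python. This is a minimal illustration with a hypothetical run_model function standing in for a real simulation; on the cluster, each case would typically run as its own scheduled job or as part of a job array rather than in a single process pool:

```python
from multiprocessing import Pool

def run_model(params):
    """Hypothetical stand-in for one simulation run."""
    temperature, pressure = params
    # A real model would do substantial computation here.
    return temperature * pressure

# Grid of inputs to explore (the "parameter sweep").
grid = [(t, p) for t in (280, 290, 300) for p in (1.0, 1.5)]

if __name__ == "__main__":
    # Run the independent cases in parallel across local CPU cores.
    with Pool() as pool:
        results = pool.map(run_model, grid)
    print(results)
```

    Because every case is independent, this kind of workload scales naturally: the same sweep can grow from a handful of local processes to thousands of scheduled tasks without changing the model code.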

    What software runs on HPC

    The HPC cluster supports a broad mix of software: licensed commercial applications, community-supported research codes, and custom tools developed by research groups.

      Commercial applications (licensed)

      Many researchers use HPC to scale widely adopted commercial applications (such as Ansys and Gaussian), running larger models, higher-fidelity simulations, or more design iterations.

      Community-supported research applications (open / shared)

      HPC is also commonly used for open research codes maintained by scientific communities and optimized for parallel computing, such as WRF (the Weather Research and Forecasting Model).

      User-developed applications (custom code and workflows)

      A substantial portion of HPC use is research-group models, pipelines, and analysis tools written in Python, C/C++, Fortran, R, CUDA, and other languages. These applications often combine multiple steps (simulation, data processing, visualization, AI) and scale across many cores, nodes, or GPUs.
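
      A multi-step workflow of this kind can be sketched in Python. The simulate, process, and summarize functions below are hypothetical stand-ins for real pipeline stages, which would typically call simulation codes, I/O libraries, or GPU kernels:

```python
def simulate(n):
    """Hypothetical simulation step: produce raw samples."""
    return [i * i for i in range(n)]

def process(samples):
    """Hypothetical processing step: keep only even values."""
    return [s for s in samples if s % 2 == 0]

def summarize(samples):
    """Hypothetical analysis step: reduce samples to one statistic."""
    return sum(samples) / len(samples)

if __name__ == "__main__":
    raw = simulate(10)
    clean = process(raw)
    # On the cluster, each stage could run as a separate job,
    # with later jobs depending on the outputs of earlier ones.
    print(summarize(clean))
```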

      Containerized Applications

      Increasingly, applications are distributed as containers, which enhance both the reproducibility of results and ease of use. Apptainer is provided on the HPC cluster for running containers.

    When HPC may not be the best choice

    HPC is optimized for batch, compute-intensive workloads. In some situations, other computing options are a better fit:

    • Interactive, latency-sensitive work: If you need instant feedback (e.g., frequent GUI-driven interactions, real-time control, or rapid exploratory editing), a workstation or dedicated server may be more productive.
    • Small jobs with heavy overhead: If the work finishes in seconds or a few minutes and needs constant re-runs, scheduler queue time and job setup may outweigh the benefits of the cluster.
    • Workloads that don't parallelize well: Software that cannot run in parallel gains no speed advantage on a cluster.
    • Highly specialized hardware or always-on services: Long-running services (web apps, databases, dashboards) or workloads requiring unique peripherals are usually better hosted on a managed server platform rather than a shared batch cluster.
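
    The limit on parallel speedup can be made concrete with Amdahl's law, which estimates the best-case speedup on N cores when a fraction s of the work is inherently serial: speedup = 1 / (s + (1 - s)/N). A small sketch:

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Amdahl's law: ideal speedup given a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Even with 90% of the work parallelizable, 128 cores give
# well under 10x speedup, because the serial 10% dominates.
print(round(amdahl_speedup(0.10, 128), 1))  # prints 9.3
```

    This is why software that is mostly serial sees little benefit from a cluster, no matter how many cores are requested.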

    Value of HPC

    HPC helps NC State users:

    • Reduce time-to-result for complex analyses
    • Run larger and more realistic simulations
    • Explore more scenarios with ensembles and high-throughput workflows
    • Work reproducibly with consistent software environments and job-based execution

    To get started:

    Go to the Request Access page for instructions on how to start a project.

    So take a look at what OIT-HPC has to offer and schedule a consultation any time. We look forward to working with you.

Copyright © 2026 · Office of Information Technology · NC State University · Raleigh, NC 27695 · Accessibility · Privacy · University Policies