Compute Resources
There is no charge for using these compute resources.
- Boilerplate language for contracts and grant proposals
- Citing Hazel compute resources or OIT-HPC consulting services
- HPC Partner Program — purchase additional compute nodes or storage
Hazel Linux Cluster
Hazel is a heterogeneous cluster that combines state-of-the-art equipment, such as the newest CPUs, GPUs, and networking architecture, with older resources maintained as long as feasible. Compute nodes include a mix of x86-64 processors (Intel Xeon and AMD EPYC) in dual-socket servers, some with attached GPUs.
Compute resources
Because of the dynamic nature of the cluster, the types and amounts of compute resources are always in flux. Currently:
- On the order of 400 compute nodes with over 14,000 cores.
- Majority of the nodes are connected with InfiniBand.
- Several nodes have one or more attached GPUs of various models.
- Most nodes have more than 128 GB of memory. The standard compute node configuration is 512 GB, with a few 1024 GB nodes available.
See the cluster status pages for up-to-date specifics on nodes by CPU model, memory, GPU, or interconnect.
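For a quick look at the current mix, the standard Slurm query tools report the node, core, memory, and GPU inventory per partition. A minimal sketch, assuming the Slurm client tools are on your path; the node name is hypothetical:

```bash
# List each partition with its node count, CPUs per node,
# memory per node (MB), and attached GPUs (GRES)
sinfo -o "%P %D %c %m %G"

# Inspect the full hardware details of a single node (hostname is hypothetical)
scontrol show node node001
```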
Scheduling and partitions
Work is scheduled on the cluster using the Slurm job scheduler. Various partitions are available to all HPC accounts, with different priorities, time limits, and core limits; a sample batch script follows the list below.
- Higher priority is given to jobs with a greater degree of parallelism (MPI jobs).
- Higher priority is given to shorter jobs. Time limits range from 10 minutes to 2 weeks.
- By default, all jobs are scheduled on nodes of the same type (homogeneous).
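To illustrate how these limits translate into a job request, here is a minimal Slurm batch script. The partition name, module name, and executable are placeholders, not Hazel-specific values:

```bash
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=standard      # hypothetical partition name
#SBATCH --nodes=2                 # MPI jobs spanning nodes receive higher priority
#SBATCH --ntasks-per-node=16      # MPI ranks per node
#SBATCH --time=02:00:00           # shorter jobs receive higher priority (limits: 10 minutes to 2 weeks)
#SBATCH --mem=64G                 # memory per node

module load mpi                   # placeholder module name
srun ./my_mpi_app                 # placeholder executable
```

Submitting with `sbatch job.sh` and monitoring with `squeue -u $USER` follow standard Slurm usage.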
Storage resources
- 1 GB per account home directory space /home
  For source code, scripts, and small executables.
- 20 TB per project scratch space /share
  Temporary space for running applications and working with large data.
- 100 GB per project application storage space /usr/local/usrapps (by request)
  Backed-up space for installing larger applications and storing individual conda environments.
See Storage for full details on directory locations, size limits, and backup policies.
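A common pattern is to keep code in /home, run jobs and hold large data in /share, and install shared software in /usr/local/usrapps. A minimal sketch; the project name and file names are hypothetical:

```bash
# Small scripts and source live in the backed-up home directory
cd /home/$USER/myproject

# Large inputs and job output go in per-project scratch (temporary space)
mkdir -p /share/myproject/$USER/run1
cp big_input.dat /share/myproject/$USER/run1/

# Larger applications and conda environments belong in the requested usrapps space
conda create --prefix /usr/local/usrapps/myproject/env1 python=3.11
```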
Cluster monitoring
- News, status updates, outages, and maintenance schedules
- Usage by CPU model
- Usage by memory size
- Usage by GPU model
- Usage by interconnect
- Software packages and installed applications
Citing Hazel compute resources or OIT-HPC consulting services
Please acknowledge compute resources as:
We acknowledge the computing resources provided by North Carolina State University High Performance Computing Services Core Facility (RRID:SCR_022168).
And acknowledge any significant user support resources as:
We also thank [consultant name(s)] for their assistance with [tasks such as porting, optimization, visualization, etc.].
Partner compute nodes
If existing compute resources are inadequate for a project's needs, additional compute nodes or storage can be purchased through the HPC Partner Program.