Hazel Linux cluster

The Hazel cluster is a heterogeneous Linux cluster that incorporates state-of-the-art equipment, including the newest CPUs, GPUs, and networking hardware, while keeping older resources in service as long as feasible. It is an Intel Xeon based cluster whose compute nodes span several generations of Intel Xeon processors, primarily in dual-socket blade servers, some with attached GPUs.

Types of resources

Work is scheduled on the cluster using a queuing system (LSF). Various queues are available to all HPC accounts. Submitted jobs are placed in queues based on how long they will run and how many processor cores they will use.
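For illustration, a minimal batch script might look like the following sketch; the job name, core count, wall-clock limit, and program name are placeholders rather than Hazel-specific values.

    #!/bin/bash
    ## Minimal LSF batch script (illustrative sketch; all values are placeholders)
    #BSUB -J my_job          # job name
    #BSUB -n 8               # number of processor cores requested
    #BSUB -W 2:00            # wall-clock limit in HH:MM; shorter limits help scheduling
    #BSUB -o stdout.%J       # standard output file (%J expands to the job ID)
    #BSUB -e stderr.%J       # standard error file

    ./my_program             # replace with the actual application

The script is submitted by piping it to bsub (bsub < submit.sh), and bjobs shows the state of submitted jobs.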

Resources may be specified according to:

Availability of resources

Availability of resources can be monitored using the Cluster Status pages:

  • Cluster monitoring: News, updates, outages, and maintenance schedules
  • Available software: Software packages and installed applications

Computing resources available to all accounts

Because of the dynamic nature of the cluster, the types and amounts of compute resources available are always in flux. There are currently:

  • On the order of 400 compute nodes with over 14,000 cores.
  • The majority of the nodes are connected with InfiniBand.
  • Several nodes have one or more attached GPUs of various models.
  • Most nodes have more than 128 GB of memory. The standard compute node configuration now has 512 GB, and there are also a few 1024 GB nodes (a sketch of a memory request follows this list).
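As a hedged example of targeting the larger-memory nodes, a memory reservation can be added to a submission; the value is in MB by default, how it is interpreted (per core or per host, and its units) depends on the site's LSF configuration, and the application name is a placeholder.

    # Hypothetical request reserving roughly 256 GB of memory (value in MB by
    # default; interpretation depends on the cluster's LSF configuration)
    bsub -n 1 -W 4:00 -R "rusage[mem=256000]" ./my_bigmem_app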

Various queues are available with varying priority, time limit, and core limit.

  • Higher priority is given to jobs with a greater degree of parallelism, i.e., MPI jobs.
  • Higher priority is given to shorter jobs. Job time limits for the various queues range from 10 minutes to 2 weeks.
  • The number of cores or nodes available to a single job is always changing. In general, the largest MPI jobs currently running on the cluster range between 128 and 256 cores (an example submission of this size is sketched after this list).
  • By default, all jobs are scheduled on nodes of the same type (homogeneous). For users whose applications involve minimal communication and do not depend on a particular architecture, a specialty heterogeneous queue allows the use of over 1000 cores.
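As a sketch of a larger MPI submission of the size mentioned above, the directives below request 128 cores spread over homogeneous nodes; the ranks-per-node value, module name, and binary name are assumptions, not Hazel-specific settings.

    #!/bin/bash
    ## Hypothetical 128-core MPI job (all names and values are placeholders)
    #BSUB -J mpi_job
    #BSUB -n 128                  # total MPI ranks
    #BSUB -W 24:00                # wall-clock limit (HH:MM)
    #BSUB -R "span[ptile=32]"     # ranks per node; assumes 32-core nodes
    #BSUB -o mpi.out.%J
    #BSUB -e mpi.err.%J

    module load openmpi           # placeholder; load the MPI module provided on the cluster
    mpirun ./my_mpi_app           # replace with the actual MPI executable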

See the cluster status pages for up-to-date specifics on the current number of nodes based on model, memory, or interconnect.

The types of GPU models are listed on the LSF resource page. For guidance on querying LSF to display the current resource limits available on each queue, see the LSF FAQ.
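As a quick command-line sketch of such a query, the queue name below is a placeholder; use a queue reported by the first command.

    # List all queues with their status, priority, and job slot limits
    bqueues

    # Show detailed limits (run limit, per-user and per-job slot limits, etc.)
    # for a single queue; "short" is a placeholder queue name
    bqueues -l short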

Storage resources available to all accounts

  • 1 GB per account home directory space
    The home directory is for source code, scripts, and small executables.
  • 20 TB per project scratch space
    Scratch space is temporary space used for running applications and working with large data (a data-staging sketch follows this list).
  • 1 TB per project archive storage space
    Archive storage is long-term storage space for files not being actively used. New archive space is no longer being provided, since projects can easily obtain Research Storage, which is accessible from all cluster nodes.
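A typical data-staging pattern is sketched below; the scratch path and file names are hypothetical, and the scratch directory assigned to your project should be used instead.

    #!/bin/bash
    ## Illustrative staging pattern (paths are placeholders, not Hazel-specific)
    SCRATCH=/path/to/project/scratch/$USER   # placeholder for the project scratch directory

    cp ~/inputs/config.dat "$SCRATCH"/       # small inputs and scripts live in home
    cd "$SCRATCH"
    ./my_app config.dat > results.out        # run against scratch, not home

    cp results.out ~/results/                # copy back only the results worth keeping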

Citing Hazel compute resources or OIT-HPC consulting services

Please acknowledge compute resources as:
We acknowledge the computing resources provided by North Carolina State University High Performance Computing Services Core Facility (RRID:SCR_022168).

and acknowledge any significant user support resources as:
We also thank consultant name(s) for their assistance with (describe tasks such as porting, optimization, visualization, etc.).

Partner compute nodes

If existing compute resources are inadequate for the needs of a project, there is an opportunity to purchase additional compute nodes or storage under the HPC Partner Program.

