Compute nodes

Partners purchase compatible HPC compute nodes, and OIT houses the nodes in a secure campus data center with appropriate power, cooling, and networking. Partner nodes share software licenses and storage infrastructure with the other HPC cluster nodes.

A dedicated queue is created for the partner, providing access to the quantity of compute resources the partner added to the cluster. The partner's Unity ID, along with any other Unity IDs the partner specifies, can submit jobs to the dedicated queue. Queue parameters are set according to the partner's needs.
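
As an illustration, a member of the partner project might submit work to the dedicated queue with a batch script along the lines of the sketch below. This assumes an LSF-style scheduler; the queue name, core count, and wall-clock limit are hypothetical and would reflect whatever parameters the partner and OIT agree on.

    #!/bin/bash
    # Minimal sketch of a job submitted to a partner's dedicated queue,
    # assuming an LSF-style scheduler. The queue name, core count, and
    # wall-clock limit below are illustrative, not actual cluster settings.
    #BSUB -q partner_lab        # dedicated partner queue (illustrative name)
    #BSUB -n 64                 # cores, up to the quantity the partner purchased
    #BSUB -W 72:00              # wall-clock limit (hh:mm) per the queue parameters
    #BSUB -o job_output.%J      # capture standard output; %J is the job ID
    ./my_simulation

    # Submit with:  bsub < partner_job.sh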

In addition to the dedicated queue, the partner's project receives increased priority in all queues through a fair-share scheduling methodology.

Compute resources not actively in use by the partner are made available to other NC State HPC projects for short-duration jobs.
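
For instance, a researcher on another NC State HPC project might submit a short job from the command line as sketched below, again assuming an LSF-style scheduler; the shared queue name and the two-hour limit are illustrative. Such a job may be dispatched to partner nodes that would otherwise sit idle.

    # Sketch: submit a short-duration job to a shared queue; it may run on
    # idle partner nodes. The queue name "short" and the 2-hour limit are
    # illustrative, not actual cluster settings.
    bsub -q short -n 16 -W 2:00 -o out.%J ./quick_analysis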

Partner Advantages

The HPC compute node Partner Program offers compelling advantages for both the faculty partner and the university.

Partner Advantages (services provided by the university)

  • secure space
  • power (including UPS and diesel generator)
  • cooling
  • rack (including rack power distribution)
  • network infrastructure (including message passing network for distributed memory nodes)
  • system administration and maintenance
  • priority access to additional compute resources
  • access to shared storage and file systems
  • access to university-licensed software (compilers, debuggers, optimized math libraries, performance analyzers, ...)
  • system and computational science support from HPC staff

University Advantages

  • multiplies the resources provided by the university's HPC investment
  • increased HPC resource utilization yields more efficient use of university-wide research computing dollars
  • scaling benefits reduce university-wide cost of HPC facilities (a few large power and cooling units vs. many small power and cooling units)
  • scaling benefits reduce university-wide cost of HPC system support (the incremental system administration and maintenance load for compatible hardware is very small; that is, it takes nearly the same work to operate a 100-processor cluster as it does to operate an 8-processor cluster)