Request Access
HPC projects are available to all NC State faculty members on request at no charge.
- See this page for a very general introduction to the HPC Hazel cluster.
- See the information below for instructions on getting HPC access as a faculty member. Students gain access by being added to a faculty member's HPC project.
Before using the HPC
- All users must learn the basics of Linux and the basics of using the cluster.
- After getting access but before running jobs, we strongly recommend completing one of the following:
  - the HPC Beginner Workshop,
  - the text-based tutorial, or
  - the series of video tutorials.
How to request access to HPC
Note: we are transitioning to near-real-time implementation of changes made via the Research Computing Web App. Previously, changes were applied once a day at 6:30 p.m. Now, on most cluster nodes, they take effect within about 10 minutes of the action in the Web App. VCL HPC images and the Open OnDemand server are still (as of Jan 2026) updated once daily at 6:30 p.m.
For faculty
Research projects
NC State faculty members can request an HPC project by clicking the Request Access button at the bottom of the page and then clicking the Create Project button on the page that follows. Each faculty member may request only one HPC research project.
HPC project owners may add additional Unity IDs to an existing HPC project using the research services web application.
Here are additional instructions for adding a user. Note: the above process of adding a user to your HPC project group is different from simply adding the user as a member of your Linux group so they can access files and folders throughout your HPC project space (in that case the user remains in their current HPC project).
Instructional use
Instructors may request an HPC Project to use for teaching a class. Please see the following for details.
For non-faculty
Graduate students, postdocs, and other collaborators must be added to a project by the faculty PI.
IT staff may request to be added to the HPC Support Project.
Directors of Core Facilities or other university group leaders may request an HPC project. Please contact HPC to discuss the details of an HPC project for the group.
For Researchers at Partner Universities
East Carolina University
ECU researchers should review the information and request access from the High Performance Computing Program page.
UNC Greensboro
UNC Greensboro faculty, staff, and students may request an account on Hazel at no charge. Visit 6-TECH Online to open a request. Please include the following information in the request:
- First name:
- Last name:
- Department:
- UNCG username (as in your UNCG email):
- UNCG email address:
- Phone number:
Students requesting an account should submit written proof of faculty sponsorship (an email is fine). The student's advisor must already have an account and be a PI on Hazel.
UNC Wilmington
UNCW researchers may request access from the Academic Research Computing page.
Reading and accepting the HPC AUP is a requirement for gaining access to Hazel.
This video on the HPC Acceptable Use Policy explains some of its technical details, including the difference between login nodes and compute nodes, and discusses some of the actions that would violate the AUP.
HPC Acceptable Use Policy
OIT's HPC services are provided to support the core university mission of instruction, research, and engagement. All use of HPC services must be under or in support of a project directed by a faculty member. The faculty member directing each project is ultimately responsible for ensuring that HPC services are used appropriately by all accounts associated with their project. Students wanting to use HPC resources for academic purposes who are not working with a faculty member or enrolled in a course may request access through an arrangement with the NC State Libraries.
OIT's HPC services are provided on shared resources. To ensure equitable access for all HPC projects, certain restrictions on acceptable use are necessary. All use of HPC services must adhere to the following guidelines. Accounts repeatedly violating these guidelines will have their HPC access disabled.
- Access to the HPC Linux cluster (Hazel) must be via the Secure Shell protocol (SSH) to the respective login nodes or via other interfaces intended for direct access (e.g., web portals). All access to compute nodes must be via the LSF job scheduler; direct access to compute nodes is not permitted.
- A maximum number of concurrent login sessions will be enforced on login nodes.
- SSH sessions that have been idle for a period of time will be automatically disconnected.
- These limits will be adjusted as necessary to manage login node resources and to comply with applicable security standards.
- The purpose of a login node is to provide access to the cluster via SSH and to prepare for running a program (e.g., editing files, compiling, and submitting batch jobs). Processes that use a non-trivial amount of compute or memory resources must not be run on any of the shared login nodes. These processes may be run via LSF either as a batch job or as an interactive session on a compute node. The HPC-VCL nodes should be used only to run GUI applications. Processes running on login nodes that have used significant CPU time or that are using significant memory resources will be terminated without notice.
- Scratch file systems (/share*, /gpfs_share) are intended as data storage for running jobs or files under active analysis. Using techniques to protect files from the automated purge of scratch file systems is not permitted. Files on these file systems may be deleted at any time and are not backed up; do not keep the only copy of important files on them.
- To the extent feasible, jobs run on HPC resources are expected to make efficient use of the resources.
- Resource requests for jobs should be as accurate as possible, even if such requests result in longer queue waiting times (e.g., a job that requires the majority of a node's memory should use the exclusive option, even though this will likely increase its wait time in the queue).
- Compute nodes which have lost contact with LSF for more than a few hours or which are unreachable from the console will be rebooted without notice.
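The scheduler workflow implied by the guidelines above (prepare on a login node, run on compute nodes via LSF) can be sketched as follows. This is a minimal illustration under assumptions, not official Hazel documentation: the resource values, file names, and the `./my_program` executable are hypothetical placeholders, and you should consult the cluster's own documentation for appropriate queues and limits.

```shell
# Create a minimal LSF batch job script. All #BSUB values here are
# illustrative placeholders, not recommended settings for Hazel.
cat > myjob.sh <<'EOF'
#!/bin/bash
#BSUB -n 4              # number of cores requested
#BSUB -W 2:00           # wall-clock limit (hh:mm) -- request only what the job needs
#BSUB -x                # exclusive use of the node (only when the job needs
                        # most of a node's memory, per the AUP guideline)
#BSUB -o stdout.%J      # %J expands to the LSF job ID
#BSUB -e stderr.%J
./my_program            # hypothetical executable prepared on a login node
EOF

# Submit the script to the scheduler from a login node:
#   bsub < myjob.sh
# Or, instead of running work on a shared login node, request an
# interactive session on a compute node:
#   bsub -Is -n 1 -W 1:00 bash
echo "wrote myjob.sh"
```

Keeping the `#BSUB` directives in the script (rather than on the `bsub` command line) makes each job's resource request explicit and repeatable, which helps with the accuracy expectation above.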