Request Access

HPC projects are available to all NC State faculty members on request at no charge.

  • See this page for a general introduction to HPC Computing Services.
  • See the information below for instructions on getting HPC access, or watch this video for help.
    Before using the HPC

    All users must learn the basics of Linux and the basics of using the cluster. After getting access but before running jobs, we recommend completing the HPC Beginner Workshop. Alternatively, either complete the text-based tutorial or watch this series of video tutorials.

    How to request access to HPC

    Note: New projects and access for new Unity IDs are created once daily at 6:30 p.m.

    For faculty

    Research projects

    NC State faculty members can request an HPC project by clicking the Request Access button at the bottom of the page and then clicking the Create Project button on the page that follows. Faculty may request only a single HPC research project.

    HPC project owners may add additional Unity IDs to an existing HPC project using the research computing web application. Here are additional instructions for adding a user.

    Instructional use

    Instructors may request an HPC Project to use for teaching a class. Please see the following for details.

  • Hazel for Instructional use: access, storage, software, and training.
    For non-faculty

    Graduate students, postdocs, and other collaborators must be added to a project by the faculty PI.

    Undergraduates or non-thesis master's students who are not working in a research group with a faculty advisor and are not registered for a course using HPC may request trial access through an arrangement with the NC State University Libraries. Here is the application form to request access.

    IT staff may request to be added to the HPC Support group.

    Directors of Core Facilities or other university group leaders may request an HPC project. Please contact HPC to discuss the details of an HPC project for the group.

    For Researchers at Partner Universities

    For East Carolina University

    ECU researchers should review the information and request access from the High Performance Computing Program page.

    For UNC Greensboro

    UNC Greensboro faculty, staff, and students may request an account on Hazel at no charge. Visit 6-TECH Online to open a request. Please include the following information in the request:

    First name: 
    Last name: 
    Department: 
    UNCG username (as in your UNCG email address): 
    UNCG email address:  
    Phone number: 
    

    Students requesting an account should submit written proof of faculty sponsorship; an email is fine. The student's advisor must already have an account and be a PI on Hazel.

    For UNC Wilmington

    UNCW researchers may request access from the Academic Research Computing page.

    Reading and accepting the HPC Acceptable Use Policy (AUP) is a requirement for gaining access to Hazel.

    This video on the HPC Acceptable Use Policy explains some of the technical details of the AUP, including the difference between login nodes and compute nodes, and discusses some of the actions that would violate the AUP.

    HPC Acceptable Use Policy

    OIT's HPC services are provided to support the core university mission of instruction, research, and engagement. All use of HPC services must be under or in support of a project directed by a faculty member. The faculty member directing each project is ultimately responsible for ensuring that HPC services are used appropriately by all accounts associated with their project. Students wanting to use HPC resources for academic purposes who are not working with a faculty member or enrolled in a course may request access through an arrangement with the NC State Libraries.

    OIT's HPC services are provided on shared resources. To ensure equitable access for all HPC projects, certain restrictions on acceptable use are necessary. All use of HPC services must adhere to the following guidelines. Accounts repeatedly violating these guidelines will have their HPC access disabled.

    1. Access to the HPC Linux cluster (Hazel) must be via the Secure Shell protocol (SSH) to the login nodes or via other interfaces intended for direct access (e.g., web portals), and all access to compute nodes must be via the LSF job scheduler. Direct access to compute nodes is not permitted.
      • A maximum number of concurrent login sessions will be enforced on login nodes.
      • SSH sessions that have been idle for a set period of time will be automatically disconnected.
      • These limits will be adjusted as necessary to manage login node resources and to comply with applicable security standards.
    2. The purpose of a login node is to provide access to the cluster via SSH and to prepare for running a program (e.g., editing files, compiling, and submitting batch jobs). Processes that use a non-trivial amount of compute or memory resources must not be run on any of the shared login nodes; run such processes via LSF, either as a batch job or as an interactive session on a compute node (see the first sketch after this list). The HPC-VCL nodes should be used only to run GUI applications. Processes running on login nodes that have used significant CPU time or that are using significant memory resources will be terminated without notice.
    3. Scratch file systems (/share*, /gpfs_share) are intended to be used as data storage for running jobs or files under active analysis. Using techniques to protect files from the automated purge of scratch file systems is not permitted. Files on these file systems may be deleted at any time and are not backed up; do not keep the only copy of important files on these file systems (see the second sketch after this list).
    4. To the extent feasible, jobs run on HPC resources are expected to make efficient use of the resources.
    5. Resource requests for jobs should be as accurate as possible, even if such requests result in longer queue waiting times (e.g., if a job requires the majority of a node's memory, it should use the exclusive option even though this will likely increase the time the job waits in the queue; see the third sketch after this list).
    6. Compute nodes that have lost contact with LSF for more than a few hours or that are unreachable from the console will be rebooted without notice.
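
    To make items 1 and 2 concrete, here is a minimal sketch of logging in over SSH and sending work to compute nodes through LSF. The hostname, resource values, and program name are illustrative assumptions, not the actual cluster configuration; use the details provided with your HPC account.

      # Log in to a Hazel login node over SSH (hostname is an assumption)
      ssh unityid@login.hpc.ncsu.edu

      # Run heavy work on a compute node by submitting a batch job via LSF:
      # 4 cores, 60-minute limit, stdout/stderr logged per job ID (%J)
      bsub -n 4 -W 60 -o out.%J -e err.%J ./my_program

      # Or request an interactive session on a compute node
      bsub -Is -W 30 bash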
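
    For item 3, a sketch of moving results off scratch once a job finishes, since scratch is purged automatically and is not backed up. The paths are hypothetical.

      # Copy a result file off purge-eligible scratch to backed-up storage
      cp /share/mygroup/unityid/results.tar.gz /home/unityid/

      # For larger directory trees, rsync preserves structure and timestamps
      rsync -av /share/mygroup/unityid/results/ /home/unityid/results/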
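
    And for item 5, a sketch of a batch script that requests resources accurately and uses LSF's exclusive option (-x) because the job is assumed to need most of a node's memory. The resource values and program name are assumptions for illustration; submit the script with bsub < jobscript.sh.

      #!/bin/bash
      #BSUB -n 8       # request only the cores the job will actually use
      #BSUB -W 2:00    # realistic wall-clock limit (hh:mm)
      #BSUB -x         # exclusive node: job needs most of the node's memory
      #BSUB -o out.%J  # stdout log; %J expands to the job ID
      #BSUB -e err.%J  # stderr log
      ./my_memory_heavy_job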