HPC Acceptable Use Policy
OIT's HPC services are provided to support the core university mission of
instruction, research, and engagement. All use of HPC services must be
under or in support of a project directed by a faculty member. The
faculty member directing each project is ultimately responsible for
ensuring that HPC services are used appropriately by all accounts
associated with their project. Students wanting to use HPC resources for academic purposes who are not working with a faculty member or enrolled in a course may request access through an arrangement with the NC State Libraries.
OIT's HPC services are provided on shared resources. To ensure
equitable access for all HPC projects, certain restrictions on
acceptable use are necessary. All use of HPC services must
adhere to the following guidelines. Accounts that repeatedly
violate these guidelines will have their HPC access disabled.
- Access to the HPC Linux cluster (Hazel) must be via the Secure Shell
protocol (SSH) to the login nodes, or via other interfaces intended for
direct access (e.g., web portals). All access to compute nodes must be via
the LSF job scheduler; direct access to compute nodes is not permitted. A
minimal connection and job-submission sketch appears after this list.
- A maximum number of concurrent login sessions will be enforced on login nodes.
- SSH sessions that remain idle beyond a set period will be automatically disconnected.
- These limits will be adjusted as necessary to manage login node resources and to comply with applicable security standards.
- The purpose of a login node is to provide access to the cluster via SSH
and to prepare for running a program (e.g., editing files, compiling, and
submitting batch jobs). Processes that use a non-trivial amount of compute
or memory resources must not be run on any of the shared login nodes. Such
processes may instead be run via LSF, either as a batch job or as an
interactive session on a compute node (see the interactive-session sketch
after this list). The HPC-VCL nodes should be used only to run GUI
applications. Processes on login nodes that have consumed significant CPU
time or that are using significant memory will be terminated without
notice.
- Scratch file systems (/share*, /gpfs_share) are intended as data
storage for running jobs and for files under active analysis. Using
techniques to shield files from the automated purge of scratch file
systems is not permitted. Files on these file systems may be deleted at
any time and are not backed up; do not keep the only copy of important
files on these file systems.
- To the extent feasible, jobs run on HPC resources are expected to use
those resources efficiently.
- Resource requests for jobs should be as accurate as possible, even if
accurate requests result in longer queue waits (e.g., a job that requires
the majority of a node's memory should request exclusive use of the node,
even though this will likely increase its time in the queue; see the
exclusive-use sketch after this list).
- Compute nodes that have lost contact with LSF for more than a few hours, or that are unreachable from the console, will be rebooted without notice.
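As a rough illustration of the access model above, the sketch below
connects to a login node over SSH and submits a batch job through LSF.
The host name, unity ID, script name, and resource values are
placeholders rather than actual Hazel settings; consult the OIT HPC
documentation for the real login host and current queue policies.

```bash
# Hypothetical login host shown for illustration only.
ssh unityid@login.hpc.example.edu

# Submit a batch job to the compute nodes through the LSF scheduler:
# -n requests job slots, -W sets a runtime limit (hh:mm), and -o names
# the output file (%J expands to the job ID).
bsub -n 4 -W 2:00 -o job_output.%J ./run_analysis.sh
```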
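For work too heavy for a shared login node, LSF can provide an
interactive session on a compute node instead. The sketch below uses
standard LSF options; the slot count and time limit are placeholder
values.

```bash
# Request an interactive shell on a compute node (-Is allocates a
# pseudo-terminal) instead of running heavy processes on a login node.
bsub -Is -n 1 -W 1:00 bash
```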
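The exclusive-use request mentioned above corresponds to LSF's -x
option. The sketch below is illustrative only; the program name and
resource values are placeholders.

```bash
# A job that needs most of a node's memory should take the whole node:
# -x requests exclusive execution, accepting a likely longer queue wait.
bsub -x -n 1 -W 4:00 -o job_output.%J ./memory_heavy_step.sh
```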