Partitions

Partitions group nodes by hardware type and access level. Specify a partition with `#SBATCH --partition=NAME`.

| Partition | Description | Access | Default QOS |
|-----------|-------------|--------|-------------|
| `compute` | Standard CPU compute nodes | All users | `normal` |
| `compute_partners` | Extended CPU pool including partner nodes | Partner projects | `partner` |
| `gpu` | Standard GPU nodes | All users | `gpu` |
| `gpu_partners` | Extended GPU pool including partner nodes | Partner projects | `partner_gpu` |

If no partition is specified, the default partition (compute) is used. Partner projects should use compute_partners or gpu_partners to access their dedicated resources.
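A minimal batch script selecting a partition explicitly might look like the following sketch (the job name, resource amounts, and program name are illustrative, not cluster-specific values):

```shell
#!/bin/bash
#SBATCH --job-name=example          # illustrative job name
#SBATCH --partition=compute         # omit this line to use the default partition
#SBATCH --ntasks=4
#SBATCH --time=01:00:00

srun ./my_program                   # replace with your actual command
```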

Partition Relationships

The partner partitions include many nodes from the standard partitions plus additional partner-contributed nodes:

Venn diagram: the compute partition (available to all users) is mostly contained within the larger compute_partners partition (available to partner projects).
Venn diagram: the gpu partition (available to all users) is mostly contained within the larger gpu_partners partition (available to partner projects).

Quality of Service (QOS)

QOS controls job priority and resource limits. Specify it with `#SBATCH --qos=NAME`. Each partition has a default QOS.

| QOS | Priority | Max Wall Time | Description |
|-----|----------|---------------|-------------|
| `normal` | Standard | 4 days | Standard CPU jobs on the `compute` partition |
| `gpu` | Standard | 4 days | Standard GPU jobs on the `gpu` partition |
| `short` | Higher | 2 hours | Short jobs on `compute_partners`; access to idle partner nodes |
| `short_gpu` | Higher | 2 hours | Short jobs on `gpu_partners`; access to idle partner GPUs |
| `partner` | Highest | Unlimited | Partner CPU jobs on the `compute_partners` partition |
| `partner_gpu` | Highest | Unlimited | Partner GPU jobs on the `gpu_partners` partition |
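As a sketch, a partner project pairing partition and QOS explicitly could submit something like this (the resource values and program name are illustrative):

```shell
#!/bin/bash
#SBATCH --partition=compute_partners
#SBATCH --qos=partner               # default QOS on compute_partners for partner projects
#SBATCH --nodes=2
#SBATCH --time=7-00:00:00           # no QOS wall-time cap, but a finite request is good practice

srun ./partner_workload             # replace with your actual command
```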

Partition and QOS Availability

Each partition allows specific QOS values. Jobs must use a QOS that is allowed in the requested partition:

| Partition | `normal` | `gpu` | `short` | `short_gpu` | `partner` | `partner_gpu` |
|-----------|----------|-------|---------|-------------|-----------|---------------|
| `compute` | Yes | - | - | - | - | - |
| `compute_partners` | - | - | Yes | - | Yes | - |
| `gpu` | - | Yes | - | - | - | - |
| `gpu_partners` | - | - | - | Yes | - | Yes |

Note: The `short` and `short_gpu` QOS let all users run jobs of up to 2 hours on idle partner resources.
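For example, a non-partner user could sketch a short test run on idle partner CPU nodes like this (the task count and program name are illustrative):

```shell
#!/bin/bash
#SBATCH --partition=compute_partners
#SBATCH --qos=short                 # higher priority, 2-hour wall-time cap
#SBATCH --ntasks=8
#SBATCH --time=02:00:00             # must not exceed the short QOS limit

srun ./quick_test                   # replace with your actual command
```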

See Job Priority and Fairshare for details on how QOS affects scheduling priority.

Partner Resources

Research groups that have purchased nodes for the cluster have access to partner partitions and QOS with higher priority on their resources.

Compute Node Types

The cluster contains several generations of compute hardware. You can request specific node types using `--constraint`.

CPU Nodes

| Constraint | CPU Model | Cores/Node | Memory |
|------------|-----------|------------|--------|
| `genoa` | AMD EPYC 4th Gen | 192 | 768 GB |
| `sapphirerapids` | Intel Xeon 4th Gen | 64 | 256-512 GB |
| `icelake` | Intel Xeon 3rd Gen | 64 | 256 GB |
| `cascadelake` | Intel Xeon 2nd Gen | 32 | 192 GB |
| `skylake` | Intel Xeon Scalable | 32 | 192 GB |
| `broadwell` | Intel Xeon E5 v4 | 24 | 128 GB |
| `haswell` | Intel Xeon E5 v3 | 20 | 128 GB |

Example: Request Sapphire Rapids nodes:

```shell
#SBATCH --constraint=sapphirerapids
```

GPU Nodes

| GPU Type | GPU Memory | GPUs/Node | Request |
|----------|------------|-----------|---------|
| NVIDIA H200 | 141 GB | 4 | `--gres=gpu:h200:N` |
| NVIDIA H100 | 80 GB | 4 | `--gres=gpu:h100:N` |
| NVIDIA A100 | 40/80 GB | 4 | `--gres=gpu:a100:N` |
| NVIDIA L40S | 48 GB | 4 | `--gres=gpu:l40s:N` |
| NVIDIA L40 | 48 GB | 4 | `--gres=gpu:l40:N` |
| NVIDIA A30 | 24 GB | 2 | `--gres=gpu:a30:N` |
| NVIDIA A10 | 24 GB | 2 | `--gres=gpu:a10:N` |
| NVIDIA P100 | 16 GB | 2 | `--gres=gpu:p100:N` |
| NVIDIA RTX 2080 | 8 GB | 4 | `--gres=gpu:rtx_2080:N` |
| NVIDIA GTX 1080 | 8 GB | 4 | `--gres=gpu:gtx1080:N` |

Example: Request 2 A100 GPUs:

```shell
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:2
```

Additional Constraints

Beyond CPU architecture, you can constrain jobs by other node features using `--constraint`:

| Category | Constraint | Description |
|----------|------------|-------------|
| Vendor | `intel`, `amd` | CPU vendor |
| GPU Vendor | `nvidia` | Nodes with NVIDIA GPUs |
| Instruction Set | `avx`, `avx2` | AVX vector instructions |
| Instruction Set | `avx512` | AVX-512 (Skylake and newer) |
| Instruction Set | `sse4_1`, `sse4_2` | SSE 4.x instructions |
| Network | `ib` | InfiniBand interconnect |

Combining Constraints

```shell
# AND - require both features
#SBATCH --constraint="intel&avx512"

# OR - accept either feature
#SBATCH --constraint="icelake|sapphirerapids"
```
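Constraints also combine with partition and GPU requests in a single script; a sketch (the resource values and program name are illustrative):

```shell
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1
#SBATCH --constraint="avx512&ib"    # require AVX-512 CPUs and InfiniBand
#SBATCH --time=04:00:00

srun ./gpu_job                      # replace with your actual command
```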

See Submission FAQ for more constraint examples.

Checking Resource Availability

View partition status:

```shell
sinfo
```

View detailed node information:

```shell
sinfo -N -l
```

View available GPUs:

```shell
sinfo -p gpu -o "%N %G %t"
```

View your account limits:

```shell
sacctmgr show assoc user=$USER format=account,qos,maxcpus,maxnodes
```

Check your fairshare:

```shell
sshare -u $USER
```
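To narrow `sinfo` output to nodes that are currently free, its state filter can be combined with the format options above (the partition name here is an example):

```shell
# List the count and names of idle nodes in the compute partition
sinfo -p compute -t idle -o "%D %N"
```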

Cluster Status

For a graphical view of node availability, see the cluster status page.