Copy and modify this template for your jobs. Lines beginning with #SBATCH are directives that Slurm interprets. Remove or comment out options you don't need.

Generic Template

#!/bin/bash
#======================================================
# Job name and output files
#======================================================
#SBATCH --job-name=myjob           # Job name
#SBATCH --output=stdout.%j         # Standard output (%j = job ID)
#SBATCH --error=stderr.%j          # Standard error

#======================================================
# Resource requests
#======================================================
#SBATCH --ntasks=1                 # Number of tasks (MPI ranks)
#SBATCH --cpus-per-task=1          # CPUs per task (OpenMP threads)
#SBATCH --nodes=1                  # Number of nodes
#SBATCH --time=02:00:00            # Time limit (HH:MM:SS)
#SBATCH --mem=4G                   # Memory per node

#======================================================
# Partition and QOS (optional - uses defaults if omitted)
#======================================================
#SBATCH --partition=compute        # Partition name
#SBATCH --qos=normal               # Quality of Service

#======================================================
# Environment setup
#======================================================
module purge                       # Clear modules
module load PrgEnv-intel           # Load compiler environment

#======================================================
# Change to submission directory (optional)
#======================================================
cd "$SLURM_SUBMIT_DIR"

#======================================================
# Run application
#======================================================
./myprogram.exe

# For MPI programs, use srun:
# srun ./my_mpi_program.exe
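Before submitting, the template can be sanity-checked locally: `bash -n` parses the script without running it, and on a login node `sbatch --test-only` validates the resource request against Slurm without queuing anything. A minimal sketch (the scratch-file path and the trimmed-down script are illustrative only):

```shell
# Write a trimmed copy of the template to a scratch file (illustrative only)
job="${TMPDIR:-/tmp}/myjob.sh"
cat > "$job" <<'EOF'
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --time=02:00:00
./myprogram.exe
EOF

# bash -n parses without executing, so shell syntax errors surface
# before the job ever reaches the queue
bash -n "$job" && echo "syntax OK"

# On a login node, this would validate the request against Slurm
# without actually submitting:
#   sbatch --test-only "$job"
```

`sbatch` itself is only available on the cluster; the syntax check runs anywhere bash does.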

Directive Reference

Basic Options

Directive           Description
--job-name=NAME     Job name (default: script name)
--output=FILE       Stdout file (%j = job ID, %x = job name)
--error=FILE        Stderr file (default: combined with stdout)
--time=D-HH:MM:SS   Time limit, e.g. 02:00:00 or 1-00:00:00
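--time accepts either HH:MM:SS or D-HH:MM:SS. The equivalence between the two forms can be sketched with a small shell helper (to_seconds is a hypothetical name for illustration, not a Slurm tool):

```shell
# Convert a Slurm walltime string (HH:MM:SS or D-HH:MM:SS) to seconds.
# to_seconds is an illustrative helper, not part of Slurm.
to_seconds() {
    t=$1; d=0
    case $t in
        *-*) d=${t%%-*}; t=${t#*-} ;;
    esac
    h=${t%%:*}; t=${t#*:}
    m=${t%%:*}; s=${t#*:}
    # awk coerces "02" etc. in base 10, avoiding shell octal pitfalls
    awk -v d="$d" -v h="$h" -v m="$m" -v s="$s" \
        'BEGIN { print d*86400 + h*3600 + m*60 + s }'
}

to_seconds 02:00:00     # prints 7200 (two hours)
to_seconds 1-00:00:00   # prints 86400 (one day)
```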

CPU Resources

Directive             Description
--ntasks=N            Number of tasks (MPI ranks)
--cpus-per-task=N     CPUs per task (for OpenMP)
--nodes=N             Number of nodes
--ntasks-per-node=N   Tasks per node
--exclusive           Exclusive node access

Memory

Directive            Description
--mem=SIZE           Memory per node (e.g. 16G, 128000M)
--mem-per-cpu=SIZE   Memory per CPU
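The two directives are alternative ways to express the same budget: with --mem-per-cpu, the effective per-node request is the per-CPU value times the CPUs allocated on that node. A sketch of the arithmetic (the request values are made up for illustration):

```shell
# Hypothetical single-node request:
#   --ntasks=4 --cpus-per-task=2 --mem-per-cpu=2G
ntasks=4
cpus_per_task=2
mem_per_cpu_gb=2

# 4 tasks x 2 CPUs x 2G per CPU = 16G on the node
total_gb=$(( ntasks * cpus_per_task * mem_per_cpu_gb ))
echo "effective node memory request: ${total_gb}G"
```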

Partitions and QOS

Directive          Description
--partition=NAME   Partition (queue) name
--qos=NAME         Quality of Service

See available partitions and QOS.

Node Features

Directive              Description
--constraint=FEATURE   Request nodes with the given feature

Examples: --constraint=avx2, --constraint=sapphirerapids

GPU Job Example

#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --output=gpu.out.%j
#SBATCH --error=gpu.err.%j
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1         # Request 1 A100 GPU

module load cuda

./my_cuda_program

GPU Options

Directive           Description
--gres=gpu:TYPE:N   Request N GPUs of the given type (the type is required)

Available GPU Types

Type   Memory     Example
a100   40/80 GB   --gres=gpu:a100:1
h100   80 GB      --gres=gpu:h100:1
l40s   48 GB      --gres=gpu:l40s:1
l40    48 GB      --gres=gpu:l40:1
a10    24 GB      --gres=gpu:a10:1
a30    24 GB      --gres=gpu:a30:1

Hybrid MPI/OpenMP Example

#!/bin/bash
#SBATCH --job-name=hybrid
#SBATCH --output=hybrid.out.%j
#SBATCH --error=hybrid.err.%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4        # 4 MPI tasks per node
#SBATCH --cpus-per-task=8          # 8 threads per task
#SBATCH --time=04:00:00
#SBATCH --exclusive                # Full node access

module load openmpi-gcc

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

srun ./my_hybrid_program

Notes

  • This example uses 2 nodes × 4 tasks × 8 threads = 64 total threads
  • --exclusive ensures no other jobs share the nodes
  • Setting OMP_NUM_THREADS from $SLURM_CPUS_PER_TASK keeps the thread count matched to the CPUs Slurm allocated per task
  • See hybrid jobs guide for more details
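The first bullet's arithmetic, spelled out with the values from the #SBATCH directives above:

```shell
# Geometry from the hybrid example's #SBATCH directives
nodes=2
ntasks_per_node=4
cpus_per_task=8

mpi_ranks=$(( nodes * ntasks_per_node ))        # 8 MPI ranks in total
total_threads=$(( mpi_ranks * cpus_per_task ))  # 64 OpenMP threads in total
echo "MPI ranks: $mpi_ranks"
echo "total threads: $total_threads"
```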

Environment Variables

Slurm sets these environment variables in your job:

Variable               Description
$SLURM_JOB_ID          Job ID
$SLURM_JOB_NAME        Job name
$SLURM_SUBMIT_DIR      Directory where sbatch was run
$SLURM_NTASKS          Number of tasks
$SLURM_CPUS_PER_TASK   CPUs per task
$SLURM_NNODES          Number of nodes
$SLURM_NODELIST        List of allocated nodes
$SLURM_ARRAY_TASK_ID   Array job task index
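These variables exist only when the script runs under Slurm. A common pattern is to give each one a fallback with ${VAR:-default}, so the same script also works when run interactively for debugging (a sketch; the fallback values are arbitrary):

```shell
# ${VAR:-default} substitutes the default only when VAR is unset or empty,
# so under Slurm the real values win automatically.
job_id=${SLURM_JOB_ID:-interactive}
ntasks=${SLURM_NTASKS:-1}
workdir=${SLURM_SUBMIT_DIR:-$PWD}

echo "job=$job_id tasks=$ntasks dir=$workdir"
```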