## Generic Template

```bash
#!/bin/bash
#======================================================
# Job name and output files
#======================================================
#SBATCH --job-name=myjob           # Job name
#SBATCH --output=stdout.%j         # Standard output (%j = job ID)
#SBATCH --error=stderr.%j          # Standard error
#======================================================
# Resource requests
#======================================================
#SBATCH --ntasks=1                 # Number of tasks (MPI ranks)
#SBATCH --cpus-per-task=1          # CPUs per task (OpenMP threads)
#SBATCH --nodes=1                  # Number of nodes
#SBATCH --time=02:00:00            # Time limit (HH:MM:SS)
#SBATCH --mem=4G                   # Memory per node
#======================================================
# Partition and QOS (optional - defaults are used if omitted)
#======================================================
#SBATCH --partition=compute        # Partition name
#SBATCH --qos=normal               # Quality of Service
#======================================================
# Environment setup
#======================================================
module purge                       # Clear inherited modules
module load PrgEnv-intel           # Load the compiler environment
#======================================================
# Change to the submission directory (optional; sbatch
# starts the job there by default)
#======================================================
cd "$SLURM_SUBMIT_DIR"
#======================================================
# Run the application
#======================================================
./myprogram.exe
# For MPI programs, launch through srun:
# srun ./my_mpi_program.exe
```
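The template above runs a serial program. For an OpenMP build, a common pattern is to pass Slurm's per-task CPU allocation to the OpenMP runtime rather than hard-coding a thread count. A minimal sketch; the fallback of 1 is an assumption for running the script outside a Slurm job, where `SLURM_CPUS_PER_TASK` is not set:

```shell
#!/bin/bash
# Slurm exports SLURM_CPUS_PER_TASK when --cpus-per-task is given;
# default to 1 thread if it is absent (e.g. during local testing).
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "Running with ${OMP_NUM_THREADS} OpenMP thread(s)"
```

Placing this before the application launch keeps the thread count consistent with whatever `--cpus-per-task` value the job requested.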
## Directive Reference

### Basic Options

| Directive | Description |
|-----------|-------------|
| `--job-name=NAME` | Job name (default: the script name) |
| `--output=FILE` | Stdout file; `%j` = job ID, `%x` = job name |
| `--error=FILE` | Stderr file (default: combined with stdout) |
| `--time=D-HH:MM:SS` | Time limit, e.g. `02:00:00` (2 hours), `1-00:00:00` (1 day) |
### CPU Resources

| Directive | Description |
|-----------|-------------|
| `--ntasks=N` | Number of tasks (MPI ranks) |
| `--cpus-per-task=N` | CPUs per task (for OpenMP threads) |
| `--nodes=N` | Number of nodes |
| `--ntasks-per-node=N` | Tasks per node |
| `--exclusive` | Exclusive access to the allocated nodes |
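These options combine, so for a hybrid MPI/OpenMP job the totals should multiply out to the number of cores you intend to use. A sketch of a directive fragment requesting 2 nodes × 8 ranks × 4 threads = 64 cores; the numbers are illustrative, not site defaults:

```bash
#SBATCH --nodes=2              # 2 nodes
#SBATCH --ntasks-per-node=8    # 8 MPI ranks per node (16 ranks total)
#SBATCH --cpus-per-task=4      # 4 OpenMP threads per rank
# Total: 2 nodes x 8 ranks x 4 threads = 64 cores
```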
### Memory

| Directive | Description |
|-----------|-------------|
| `--mem=SIZE` | Memory per node (e.g., `16G`, `128000M`) |
| `--mem-per-cpu=SIZE` | Memory per allocated CPU |
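The two forms express the same total differently, and in Slurm they are mutually exclusive, so a script should use only one. As an illustrative example, with 16 CPUs allocated on a single node the following two fragments request the same 32 GB:

```bash
# Per-node form:
#SBATCH --mem=32G
# Per-CPU form (equivalent when 16 CPUs land on the node):
#SBATCH --mem-per-cpu=2G
```

The per-CPU form scales automatically if you later change the task or thread count; the per-node form does not.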
### Partitions and QOS

| Directive | Description |
|-----------|-------------|
| `--partition=NAME` | Partition (queue) name |
| `--qos=NAME` | Quality of Service |

See available partitions and QOS.
### Node Features

| Directive | Description |
|-----------|-------------|
| `--constraint=FEATURE` | Request nodes advertising a feature |

Examples: `--constraint=avx2`, `--constraint=sapphirerapids`
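Feature names are site-specific, so check what your cluster advertises (e.g. with `sinfo`) before relying on them. Slurm also accepts simple boolean expressions in constraints; a sketch, reusing the feature names above and quoting the expression so the shell does not interpret the operators:

```bash
# Either feature is acceptable (OR):
#SBATCH --constraint="avx2|sapphirerapids"
# Both features required on the same node (AND):
#SBATCH --constraint="avx2&sapphirerapids"
```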