Quick Start Guide
Get from zero to running your first job in 5 minutes.
Step 1: Connect to the Cluster
Option A: Web browser — Go to https://servood.hpc.ncsu.edu (Open OnDemand) and log in with your Unity credentials.
Option B: Terminal — Open a terminal and run:
ssh YOUR_UNITY_ID@login.hpc.ncsu.edu
Enter your password and complete Duo authentication. See detailed login instructions.
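If you connect often, an entry in your ~/.ssh/config file shortens the command. A minimal sketch; the hazel alias is an arbitrary choice, and YOUR_UNITY_ID is a placeholder as above:

```
Host hazel
    HostName login.hpc.ncsu.edu
    User YOUR_UNITY_ID
```

After that, ssh hazel is enough; you will still be prompted for your password and Duo.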
Step 2: Navigate to Your Scratch Directory
cd /share/$(groups | awk '{print $1}')
mkdir -p $USER
cd $USER
Your scratch directory (/share/group_name/user_name) is where you run jobs and store working data. See storage documentation for details.
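The path above can also be built in one step: id -gn prints your primary group name, which matches the first entry from groups in typical setups (worth verifying with groups if your account belongs to several). A sketch, with the SCRATCH variable name being an arbitrary choice:

```shell
# Build the scratch path from your primary group and username.
SCRATCH="/share/$(id -gn)/${USER:-$(id -un)}"
echo "$SCRATCH"
# On the cluster, then run: mkdir -p "$SCRATCH" && cd "$SCRATCH"
```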
Step 3: Create a Batch Script
Create a file called test_job.sh:
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --output=test.out.%j
#SBATCH --error=test.err.%j
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

echo "Hello from $(hostname)"
echo "Job ID: $SLURM_JOB_ID"
echo "Working directory: $(pwd)"
date
Copy and paste the script above into the file, or open it in an editor such as nano: nano test_job.sh
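Alternatively, the whole file can be written with a single heredoc. The quoted 'EOF' delimiter keeps $(hostname), $(pwd), and $SLURM_JOB_ID from being expanded at creation time:

```shell
cat > test_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --output=test.out.%j
#SBATCH --error=test.err.%j
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

echo "Hello from $(hostname)"
echo "Job ID: $SLURM_JOB_ID"
echo "Working directory: $(pwd)"
date
EOF
```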
Step 4: Submit the Job
sbatch test_job.sh
Slurm will respond with a line like: Submitted batch job 123456 (the number is your job ID).
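If you want to reuse the job ID in later commands, sbatch --parsable prints only the numeric ID. A sketch with a guard so it degrades gracefully when run off the cluster:

```shell
# --parsable makes sbatch print just the job ID, with no surrounding text.
if command -v sbatch >/dev/null 2>&1; then
    JOBID=$(sbatch --parsable test_job.sh)
else
    JOBID=""   # sbatch not available (not on a cluster node)
fi
echo "Job ID: ${JOBID:-none}"
```

You can then use $JOBID with commands such as squeue or scancel.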
Step 5: Monitor Your Job
squeue -u $USER
You'll see your job listed with state PD (pending) or R (running).
Step 6: Check the Results
When the job completes (it disappears from the squeue listing), check the output file; the %j in the filename expands to the job ID:
cat test.out.*
What's Next?
| I want to... | Go to... |
|---|---|
| Learn batch script options | Batch Script Template |
| Run a GPU job | GPU Jobs |
| Run many similar jobs | Array Jobs |
| Use installed software | Software Packages |
| Transfer files to/from the cluster | File Transfer |
| Understand partitions and resources | Partitions and Resources |
| Migrate from LSF | LSF to Slurm Migration |
For a more comprehensive overview of the cluster, see Understanding the Cluster or the full Getting Started guide.