Apptainer (Singularity)
Apptainer (formerly Singularity) simplifies the creation and execution of containers, ensuring software components are encapsulated for portability and reproducibility. Apptainer is a secure, portable, and easy-to-use container system that is widely used across industry and academia. Apptainer is 100% OCI (Open Containers Initiative) compatible and aims for maximum compatibility with Docker, allowing you to pull, run, and build from most containers on Docker Hub without changes. This makes it easy to work with Docker containers while benefiting from Apptainer's security and portability.
External Links
You are highly recommended to read the "Apptainer User Guide" linked below to learn how to use Apptainer appropriately.
One of the Important Things in Using Apptainer
When you run a container, Apptainer swaps the host operating system for the one inside your container, and the host file systems become inaccessible. If you need to access the host file systems from within the container, for example to read input data or write result files, you must bind the host directories to directories in your container. For example, suppose the code in the container reads a data file from the directory "/mnt/datafiles", while on HPC your input data files are stored in the directory "/share/groupname/userid/my_inpudata". To use the data files from your HPC directory inside the container, you need to bind the host directory "/share/groupname/userid/my_inpudata" to the container directory "/mnt/datafiles". You can do this by adding the "--bind" flag to your Apptainer command, such as
apptainer exec --bind /share/groupname/userid/my_inpudata:/mnt/datafiles myimage.sif mycommand
You are highly recommended to go through the details of the Bind Paths and Mounts topic in the Apptainer User Guide.
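You can bind several host directories in one command by giving --bind a comma-separated list of host:container pairs. The sketch below only composes and prints the resulting command; the paths (including the results directory) are hypothetical placeholders, not real cluster paths:

```shell
# Hypothetical host paths; replace with your own group/user directories.
HOST_DATA=/share/groupname/userid/my_inpudata
HOST_OUT=/share/groupname/userid/results

# Multiple bind pairs are given as a comma-separated list.
BINDS="${HOST_DATA}:/mnt/datafiles,${HOST_OUT}:/mnt/results"

# Print the command this would run on the cluster.
echo "apptainer exec --bind ${BINDS} myimage.sif mycommand"
```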
Load Apptainer
The currently installed version of Apptainer is 1.2.2-1. To set up the Apptainer environment, load the default module with the command
module load apptainer
Run Apptainer Containers
To run Apptainer on HPC, create an LSF job script, say submit.sh, similar to the following:
#!/bin/bash
#BSUB -W 15
#BSUB -n 1
#BSUB -o out.%J
#BSUB -e err.%J
module load apptainer
apptainer exec myimage.sif mycommand
The job can be submitted to LSF by the command
bsub < submit.sh
The above sample LSF job script runs a serial job in the Apptainer container myimage.sif. If you run a shared-memory parallel job in the container, such as a 12-thread parallel job, then you need to change the #BSUB -n 1 to
#BSUB -n 12
#BSUB -R "span[hosts=1]"
Also, the number of shared-memory parallel threads in your container should equal the number of cores you request with #BSUB -n.
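To keep the thread count and the requested core count in sync automatically, you can set OMP_NUM_THREADS from LSF's LSB_DJOB_NUMPROC environment variable (which LSF sets to the number of allocated cores) rather than hard-coding 12. This is a sketch, assuming your application honors OMP_NUM_THREADS (OpenMP programs do) and using the placeholder names myimage.sif and mycommand from above; the heredoc just writes the script out so you can inspect it:

```shell
# Write a sample shared-memory job script to submit.sh.
cat > submit.sh <<'EOF'
#!/bin/bash
#BSUB -W 15
#BSUB -n 12
#BSUB -R "span[hosts=1]"
#BSUB -o out.%J
#BSUB -e err.%J
module load apptainer
# LSB_DJOB_NUMPROC is set by LSF to the core count from #BSUB -n,
# so the thread count always matches the request.
export OMP_NUM_THREADS=$LSB_DJOB_NUMPROC
apptainer exec myimage.sif mycommand
EOF
```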
Build Apptainer Containers from Docker Images
Build Apptainer Containers from Docker Images in Public Docker Repositories
- Build Apptainer containers from Docker images in the Docker Hub
Docker Hub is the most common place where projects publish public container images. Assuming the image is sylabsio/lolcow, you can use the command
apptainer pull lolcow.sif docker://sylabsio/lolcow
to build an Apptainer container lolcow.sif from that Docker image, and then run the Apptainer container. You may also use the command apptainer build lolcow.sif docker://sylabsio/lolcow to build the Apptainer container. The apptainer pull command actually runs an apptainer build behind the scenes, which translates the Docker OCI format to the Apptainer SIF format. You can also run a Docker container directly with the command apptainer run docker://sylabsio/lolcow. However, that is not recommended, because Docker Hub limits anonymous access to its API. Every time you use a docker:// URI to run (pull, etc.) a container, Apptainer makes requests to Docker Hub to check whether the container has been modified there. On a shared system like the NCSU HPC cluster, where many Apptainer container jobs run in parallel, this can quickly exhaust the Docker Hub API limits. It is therefore recommended that you pull a Docker image from the Docker Hub once, and then run jobs using the local Apptainer container.
- Build Apptainer containers from other Docker image repositories
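A minimal sketch of this pull-once pattern: only contact Docker Hub when no local SIF file exists. The apptainer call itself is left commented out here (it only makes sense on the cluster), so the script just reports what it would do:

```shell
IMAGE=lolcow.sif
URI=docker://sylabsio/lolcow

if [ -f "$IMAGE" ]; then
    # A local copy exists: run jobs against it, with no Docker Hub traffic.
    echo "using cached $IMAGE"
else
    # No local copy yet: this is the one time we would contact Docker Hub.
    echo "pulling: apptainer pull $IMAGE $URI"
    # apptainer pull "$IMAGE" "$URI"   # uncomment on the cluster
fi
```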
There are other public Docker repositories that you may access to build Apptainer containers. Below is a list of some of them. Access to some of them may need authentication. For details of how to pull Docker images with authentication from those repositories, please refer to the Support for Docker and OCI Containers section of the Apptainer User Guide.
- Quay.io
Quay is an OCI container registry used by a large number of projects, and hosted at https://quay.io. To pull public containers from Quay, just include the quay.io hostname in your docker:// URI, such as
apptainer pull python3.7.sif docker://quay.io/bitnami/python:3.7
Pulling images from Quay.io may need authentication.
- NVIDIA NGC
The NVIDIA NGC catalog contains various GPU software packaged in containers. Many of these containers are specifically documented by NVIDIA as supported by Apptainer, with instructions available. You can pull images with an apptainer pull command, such as
apptainer pull pytorch.sif docker://nvcr.io/nvidia/pytorch:21.09-py3
- GitHub Container Registry
GitHub Container Registry is increasingly used to provide Docker containers alongside the source code of hosted projects. You can pull a public container from GitHub Container Registry using a ghcr.io URI, for example,
apptainer pull alpine.sif docker://ghcr.io/containerd/alpine:latest
- AWS ECR
Working with an AWS-hosted Elastic Container Registry (ECR) generally requires authentication.
- Azure ACR
An Azure-hosted Azure Container Registry (ACR) will generally hold private images and require authentication to pull from.
Build Apptainer Containers From Local Docker Images
If you store a Docker image locally (e.g., on your laptop or lab computer), you can create a tar archive file from the image, upload the tar file to HPC, and then build an Apptainer container from the tar file. For example, if you run the command sudo docker images and see that the IMAGE ID of your Docker image mycode is 5a15b484bc65, then you can run
docker save 5a15b484bc65 -o mycode.tar
to create a tar archive file from the image. Then, upload the tar file to HPC and use the command
apptainer build mycode.sif docker-archive:mycode.tar
to build an Apptainer container.
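The whole laptop-to-HPC round trip can be summarized as follows. The block below only prints the command listing as a checklist; run the individual commands on the machines indicated in the comments (5a15b484bc65 is the example IMAGE ID from above):

```shell
steps=$(cat <<'EOF'
# On your laptop (where Docker is installed):
docker save 5a15b484bc65 -o mycode.tar

# Upload mycode.tar to your HPC working directory, then on HPC:
module load apptainer
apptainer build mycode.sif docker-archive:mycode.tar
EOF
)
echo "$steps"
```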
HPC Provided Apptainer Containers
We provide some popular Apptainer containers that HPC users can use directly. Those Apptainer containers are all under the directory
/usr/local/apps/apptainer/containers
For example, we provide the Nvidia PyTorch and TensorFlow containers under /usr/local/apps/apptainer/containers/nvidia:
/usr/local/apps/apptainer/containers/nvidia/pytorch-24.06.sif
/usr/local/apps/apptainer/containers/nvidia/tensorflow-24.08.sif
Below is a sample LSF job script for how to use the Nvidia PyTorch container:
#!/bin/bash
#BSUB -W 90
#BSUB -n 1
#BSUB -o out.%J
#BSUB -e err.%J
#BSUB -q gpu
#BSUB -gpu "num=1:mode=shared:mps=no"
#BSUB -R "select[h100]"
module load apptainer
apptainer exec --nv /usr/local/apps/apptainer/containers/nvidia/pytorch-24.06.sif python example.py
The --nv flag in the Apptainer command is necessary for running jobs on GPUs.
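Before launching a long job, it can be worth confirming that the container actually sees the GPU. The sketch below only prints the commands to run from a GPU-node shell; it assumes the provided PyTorch container path from above (the torch check only applies to containers that ship PyTorch):

```shell
gpu_check=$(cat <<'EOF'
module load apptainer
# nvidia-smi inside the container lists the GPUs injected by --nv:
apptainer exec --nv /usr/local/apps/apptainer/containers/nvidia/pytorch-24.06.sif nvidia-smi
# or, for the PyTorch container, ask torch directly:
apptainer exec --nv /usr/local/apps/apptainer/containers/nvidia/pytorch-24.06.sif \
    python -c "import torch; print(torch.cuda.is_available())"
EOF
)
echo "$gpu_check"
```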
MPI Applications in Apptainer
The most popular way to execute MPI applications installed in an Apptainer container is to rely on the MPI implementation available on the host, namely the HPC Hazel cluster. This is called the Host MPI or Hybrid model, since both the MPI implementation in the container and the MPI implementation on the host are used. The MPI in the container must be compatible with the MPI available on the host. The MPI implementation supported by both Apptainer and the HPC cluster is Open MPI. Thus, the MPI you use to compile your MPI application in your Apptainer container needs to be Open MPI. The HPC cluster has various versions of Open MPI. When you run your containerized MPI app, use a version of Open MPI on HPC that is compatible with the version of Open MPI used in your Apptainer container.
Assume your Open MPI based Apptainer container is mpibcast.sif and the MPI based app in the container is /opt/mpibcast. Then, the command in your LSF job script to run your containerized MPI app is:
mpirun apptainer exec mpibcast.sif /opt/mpibcast
Of course, in your LSF job script you should load the Open MPI module and the Apptainer module before the "mpirun apptainer exec ..." command. Below is a sample LSF job script:
#!/bin/bash
#BSUB -W 15
#BSUB -n 6
#BSUB -R "span[ptile=2]"
#BSUB -o out.%J
#BSUB -e err.%J
module load openmpi-gcc/openmpi4.1.0-gcc9.3.0
module load apptainer
mpirun apptainer exec mpibcast.sif /opt/mpibcast
You can find an example of an MPI application installed in an Apptainer container in the "Some Examples" section below. It contains the source code for the MPI application, the definition file for the Apptainer container, and the LSF job script for running the container, as well as information about how to create the Apptainer container. For further information, please refer to the Apptainer and MPI applications section of the Apptainer User Guide.
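One practical compatibility check, sketched below using the mpibcast.sif image from above: print the Open MPI version on the host and inside the container and confirm they are from the same release series. The block only prints the commands to run on the cluster:

```shell
compat_check=$(cat <<'EOF'
module load openmpi-gcc/openmpi4.1.0-gcc9.3.0
module load apptainer
mpirun --version | head -n 1                              # host Open MPI
apptainer exec mpibcast.sif mpirun --version | head -n 1  # container Open MPI
EOF
)
echo "$compat_check"
```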
Some Tips About Apptainer Containers
- Get into an Apptainer container to check things
You can submit an LSF job in interactive mode to get onto a compute node (see the HPC documentation on Running Jobs on the HPC for how to use LSF interactive mode). On the compute node, run the following commands to open a shell in the container (just like a shell on any Linux server):
- On a pure CPU node, do
module load apptainer
apptainer shell myimage.sif
- On a GPU node, do
module load apptainer
apptainer shell --nv myimage.sif
With the "--nv" flag, Apptainer injects the required Nvidia GPU driver libraries into the container to match the host's kernel.
- Get an Apptainer container's definition file
Do the following commands to inspect the definition file:
module load apptainer
apptainer inspect --deffile myimage.sif
Some Examples
- An example of serial job: Lolcow
Assume you log on to an HPC login node.
- Set up the Apptainer environment by the command
module load apptainer
- Pull the Docker image from the Docker Hub to generate the Apptainer container lolcow.sif by the command
apptainer pull lolcow.sif docker://godlovedc/lolcow
- Create an LSF job script submit.sh with the following content:
#!/bin/bash
#BSUB -W 15
#BSUB -n 1
#BSUB -o out.%J
#BSUB -e err.%J
module load apptainer
apptainer run lolcow.sif
- Submit the job by the following command
bsub < submit.sh
- The result is in the LSF output file. You can see the drawing of a cow.
- An example of an MPI application in an Apptainer container
Below are the steps that you can follow to create and run the container.
- Create the MPI application source code mpibcast.c.
This MPI C code is a simple example of MPI_Bcast. Initially, the int variable aNumber is initialized to 17 on process 0 and to −1 on all other processes. The code prints the initial value on each MPI process. Then, the code calls MPI_Bcast to broadcast the value of aNumber from process 0 to all other processes. Finally, the code prints the final value on each process. aNumber on all processes should have the final value 17.
- Create the Apptainer definition file mpibcast.def.
The definition file selects the OS for the container, sets up the needed build-time and run-time environment variables, installs the system packages that are needed for installing and using Open MPI, installs Open MPI, and finally compiles the MPI source code mpibcast.c using the Open MPI in the container.
- Make a VCL reservation using the environment "RHEL 9 base (with xRDP)" with an adequate length of time.
- On the VCL host, do the following command to install Apptainer:
sudo yum install apptainer - Upload the C source code mpibcast.c and Apptainer file mpibcast.def to the VCL host.
- On the VCL host, do the following command to create the Apptainer container mpibcast.sif:
sudo apptainer build mpibcast.sif mpibcast.def
- Create the LSF job script mpibcast.bsub.
The LSF job script first loads the Open MPI module and the Apptainer module on HPC, and then runs the container with the command:
mpirun apptainer exec mpibcast.sif /opt/mpibcast
- The result is in the LSF output file.
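For concreteness, here is a minimal sketch of what the two files from steps 1 and 2 might contain. Both the C program and the definition file are illustrative reconstructions: the base image, package names, and Open MPI install path assume a Rocky/RHEL 9 container using the distribution's Open MPI packages, and your actual files may differ.

```shell
# mpibcast.c: rank 0 starts with 17, all other ranks with -1; after
# MPI_Bcast every rank should print 17.
cat > mpibcast.c <<'CEOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, aNumber;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    aNumber = (rank == 0) ? 17 : -1;
    printf("before bcast: rank %d has %d\n", rank, aNumber);
    MPI_Bcast(&aNumber, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("after bcast: rank %d has %d\n", rank, aNumber);
    MPI_Finalize();
    return 0;
}
CEOF

# mpibcast.def: build on a RHEL-9-compatible base, install Open MPI from
# the distribution packages, and compile the program into /opt/mpibcast.
cat > mpibcast.def <<'DEOF'
Bootstrap: docker
From: rockylinux:9

%files
    mpibcast.c /opt/mpibcast.c

%post
    yum -y install gcc openmpi openmpi-devel
    export PATH=/usr/lib64/openmpi/bin:$PATH
    mpicc -o /opt/mpibcast /opt/mpibcast.c

%environment
    export PATH=/usr/lib64/openmpi/bin:$PATH
DEOF
```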
Last modified: November 26 2025 11:09:03.