Rivanna and Singularity

TODO: the images are located in /share/resources/containers/singularity


Singularity is a container runtime that implements a unique security model to mitigate privilege escalation risks and provides a platform to capture a complete application environment into a single file (SIF).

Singularity is often used in HPC centers.

The University of Virginia granted us special permission to create Singularity images on Rivanna. Here we discuss how to build and run Singularity images.


In order to access Singularity and build images, you must be in the following groups:


To find out whether you are, ssh into Rivanna and issue the command

$ groups

If any of the groups is missing, please send Gregor an e-mail at laszewski@gmail.com.


To build an image, you will need a build definition file.

Next, we show an example of a simple build.def file that internally uses an NVIDIA NGC PyTorch container.

Bootstrap: docker
From: nvcr.io/nvidia/pytorch:23.02-py3

Next, you can follow the steps detailed in https://docs.sylabs.io/guides/3.7/user-guide/definition_files.html#sections
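As a sketch of what such sections look like, the definition file above could be extended as follows. The section names (%post, %environment, %runscript) come from the Sylabs documentation; the specific package and environment variable choices below are illustrative assumptions:

```singularity
Bootstrap: docker
From: nvcr.io/nvidia/pytorch:23.02-py3

%post
    # commands executed inside the container at build time
    # (the package below is an illustrative assumption)
    pip install --no-cache-dir pandas

%environment
    # environment variables set when the container runs
    export PYTHONNOUSERSITE=1

%runscript
    # default command executed by "singularity run"
    exec python "$@"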

However, for Rivanna we MUST create the image as discussed next.

Creating the Singularity Image

In order to create a Singularity container from the build.def file, please log in to either of the following special nodes on Rivanna:

  • biihead1.bii.virginia.edu
  • biihead2.bii.virginia.edu

For example:

ssh $USER@biihead1.bii.virginia.edu

where $USER is your computing ID on Rivanna.

Now that you are logged in to the special node, you can create the singularity image with the following command:

sudo /opt/singularity/3.7.1/bin/singularity build output_image.sif build.def

Note: It is important that you type in exactly this command. If you modify the names output_image.sif or build.def, the command will not work and you will receive an authorization error.

In case you need to rename the image to a better name, please use the mv command, e.g.:

mv output_image.sif myimage.sif

In case you also need a name other than build.def, the following Makefile is very useful. We assume you use myimage.def and myimage.sif. Include it into a makefile such as:

BUILD=myimage.def
IMAGE=myimage.sif

image:
	cp ${BUILD} build.def
	sudo /opt/singularity/3.7.1/bin/singularity build output_image.sif build.def
	cp output_image.sif ${IMAGE}
	make clean

clean:
	rm -rf build.def output_image.sif

Having such a Makefile will allow you to use the command

make image

and the image myimage.sif will be created. With make clean you delete the temporary files build.def and output_image.sif.

Create a Singularity Image for TensorFlow
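As a starting point, a build.def for TensorFlow can be based on an NVIDIA NGC TensorFlow container, analogous to the PyTorch example above. The exact image tag below is an assumption; please verify the current tags on NGC:

```singularity
Bootstrap: docker
From: nvcr.io/nvidia/tensorflow:23.02-tf2-py3
```

The image is then built with the same sudo singularity build command shown earlier.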


Work with Singularity container

Now that you have an image, you can use it by following the documentation provided at https://www.rc.virginia.edu/userinfo/rivanna/software/containers/

Run GPU images

To use an NVIDIA GPU with Singularity, the --nv flag is needed.

singularity exec --nv output_image.sif python myscript.py


Python is defined as the default command to be executed, and Singularity passes the argument(s) after the image name, i.e. myscript.py, to the Python interpreter. Hence, the above singularity command is equivalent to

singularity run --nv output_image.sif myscript.py
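To verify that the GPU is actually visible inside the container, a minimal myscript.py can be created as follows. This is a sketch; it assumes the image contains PyTorch (as the NGC PyTorch container does):

```shell
# create a minimal test script that reports whether PyTorch can see a GPU
cat > myscript.py <<'EOF'
import torch
print(torch.cuda.is_available())
EOF
```

Running it with singularity exec --nv output_image.sif python myscript.py should print True on a GPU node.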

Run Images Interactively

ijob  -A mygroup -p gpu --gres=gpu -c 1
module purge
module load singularity
singularity shell --nv output_image.sif

Singularity Filesystem on Rivanna

The following paths are exposed to the container by default

  • /tmp
  • /proc
  • /sys
  • /dev
  • /home
  • /scratch
  • /nv
  • /project

Adding Custom Bind Paths

For example, the following command binds the /scratch/$USER directory into the container without exposing the other user directories provided by the host (the -c flag runs the container in contained mode, i.e., without the default bind paths):

singularity run -c -B /scratch/$USER output_image.sif

To add the /home directory on the host as /rivanna/home inside the container:

singularity run -c -B /home:/rivanna/home output_image.sif


Adding singularity to slurm scripts
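A minimal Slurm batch script that runs the container on a GPU node might look as follows. The account name, time limit, and module name are assumptions that you must adapt to your own allocation:

```shell
#!/bin/bash
#SBATCH --account=mygroup        # your allocation (assumption)
#SBATCH --partition=gpu          # GPU partition
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --cpus-per-task=1
#SBATCH --time=01:00:00          # adjust to your job

module purge
module load singularity

singularity exec --nv output_image.sif python myscript.py
```

Submit the script with sbatch as usual.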


Running on v100
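As a hedged fragment (the GPU type string v100 is an assumption; verify it against the Rivanna documentation), a V100 can be requested in the job script with:

```shell
#SBATCH --partition=gpu
#SBATCH --gres=gpu:v100:1
```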


Running on a100-40GB
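Analogously, an A100 can be requested via the gres type string a100 (again an assumption to verify; on some systems the 40GB model is the default A100):

```shell
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1
```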


Running on a100-80GB
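Sites typically distinguish the 80GB A100 from the 40GB model via a Slurm constraint; the constraint name below is an assumption to verify against the Rivanna documentation:

```shell
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1
#SBATCH --constraint=a100_80gb
```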


Running on special fox node a100-80GB


Last modified May 8, 2023: update groups (01d57f0)