How to push a Singularity image to Docker Hub from a cluster?
I don’t believe it is possible to push a SIF image to Docker Hub in general.
However, you can use the remote build service from Sylabs to host your images in your account and keep them “public” so that anyone can easily pull them from Sylabs.
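A minimal sketch of that path, assuming you have a Sylabs Cloud account and an access token (the user, collection, and image names below are placeholders):

```shell
# Log in to the Sylabs Cloud Library (paste an access token when prompted).
singularity remote login

# Push a local SIF file to your account; -U allows pushing an unsigned image.
singularity push -U my-image.sif library://YOUR_USER/default/my-image:latest

# Once the image is public, anyone can pull it from the library.
singularity pull library://YOUR_USER/default/my-image:latest
```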
Thank you, Anthony, for the reply.
In the meantime, I was able to build the Singularity image and make it available to all users on the cluster itself. I will try the procedure you suggested.
Hi Sasmita,
Anthony is correct that DockerHub does not allow you to upload Singularity images. However, there are public registries that allow you to push/pull Singularity images. The ones I’m aware of are:
- Quay.io: I recommend this option, as it is the most straightforward and accessible. Quay handles both OCI (Docker etc.) images and Singularity images. As of today, a free tier account allows for unlimited public images. I have up-to-date instructions for using it below.
- GitHub Container Registry: The Apptainer docs mention that GitHub Container Registry supports the correct protocol for uploading Singularity images, but I have not tried it myself. Might be another accessible option.
- Azure Container Registry: I had this working a couple of years ago, but it’s not easy to set up, not very accessible, and we never revisited it.
Here’s how you would push a Singularity image to Quay.io:
- Create an account at https://quay.io .
- Create a repository: https://quay.io/new/. You can also re-use an existing repo that you have permissions to upload to, since a repo can and usually does have multiple images.
- On the server/computer with your image file, log in to Quay through Apptainer. Use the username and password that you created in Step 1. Make sure you use the oras:// URI, since Singularity images are pushed/pulled as OCI ORAS artifacts, not as Docker images:
apptainer remote login --username YOUR_USERNAME oras://quay.io
- Push your .sif file. YOUR_REPO is the name of the repo you created in step 2. The TAG can be anything; I usually like to use a software version, similar to the way Docker images are tagged. The name of the local .sif file and the image on Quay don’t have to match.
apptainer push YOUR_IMAGE.sif oras://quay.io/YOUR_REPO/YOUR_IMAGE:YOUR_TAG
And to pull the image somewhere else, you must also use the oras:// URI. Again, the name of the local .sif file and the image on Quay don’t have to match. In my experience, you don’t have to use apptainer remote login before pulling a public image, but it’s possible I have some cached credentials somewhere. If you get permission errors, try logging in first.
apptainer pull YOUR_IMAGE.sif oras://quay.io/YOUR_REPO/YOUR_IMAGE:YOUR_TAG
Hope this helps!
Best,
Ron
Ron Rahaman
Research Scientist II, Research Software Engineer
Partnership for an Advanced Computing Environment (PACE)
Open Source Programming Office (OSPO)
Georgia Institute of Technology
One procedure I use is to make copies of the container SIF file available in a common directory. I currently support a couple of different groups and each group has their own shared common data directories. For example, /data/group1/data. I will put the SIF files in a directory such as /data/group1/data/common/sw/sif/. Your directory naming structure would be different. I like to keep this naming convention consistent across all groups. It makes it easier to have common documentation, for example.
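A sketch of that layout convention; the paths and permission choices here are illustrative stand-ins, not the poster’s real directories:

```shell
# Stand-in for a group's shared data root (replace with your real path,
# e.g. /data/group1/data).
GROUP_ROOT=${GROUP_ROOT:-/tmp/group1/data}
SIF_DIR="$GROUP_ROOT/common/sw/sif"   # same sub-path for every group

mkdir -p "$SIF_DIR"
chmod 2775 "$SIF_DIR"                 # group-writable; setgid keeps group ownership

# Publish a built image for all group members, e.g.:
# cp model.sif "$SIF_DIR/"
echo "SIF directory ready: $SIF_DIR"
```

Keeping the `common/sw/sif` sub-path identical across groups is what makes the shared documentation possible: only the group root changes.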
As our clusters use SLURM for job management, I have a collection of SLURM sbatch files that I distribute as well. The SLURM scripts get rather lengthy as I have built-in error checking and other consistency checks.
Below is an example, although for brevity I am leaving out a lot of the error-checking and diagnostic code. Also, note that I build a sandbox image first; I have found this method works best for us.
singularity build --sandbox --force $SIF_IMAGE_DIR $SIF_FILE
local_rc=$?
if [[ $local_rc == 0 ]]
then
echo "Singularity build successful `hostname`"
echo " "
else
echo "Singularity build failed on `hostname` with code $local_rc. Exiting"
exit $local_rc
fi
if [ ! -d $SIF_IMAGE_DIR ]
then
echo "SIF Directory $SIF_IMAGE_DIR does not exist, unable to continue"
exit 2
fi
singularity instance start \
-B $MODEL_STORAGE_ROOT:/model \
$SIF_IMAGE_DIR \
$current_instance
local_rc=$?
if [[ $local_rc == 0 ]]
then
echo "* Singularity instance $current_instance started on `hostname`"
echo " "
else
echo "* Singularity instance $current_instance failed on `hostname` with code $local_rc. Exiting"
exit $local_rc
fi
Once the container starts, users then jump into the container by using a singularity exec or singularity shell call.
Example:
/bin/time singularity exec instance://$current_instance sh -c "perl -Mdiagnostics /model/bin/model.pl -v -some_option_here 12 -out some_other_variable_here"
When all is said and done, I attempt a clean shutdown
singularity instance stop $current_instance
local_rc=$?
if [[ $local_rc == 0 ]]
then
echo "* Instance $current_instance shutdown on host `hostname`"
echo "* "
else
echo "* ERROR: Instance $current_instance shutdown failed with code $local_rc."
echo "* You may need to manually check host `hostname` to delete the instance"
echo "* "
fi
As you can see, this makes for a lengthy SLURM script, but a lot of it is standardized boilerplate you can give the users, informing them of what they can change and where. I usually load the user input variables at the top of the SLURM script, followed by consistency checks, followed by the container-start section.
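That ordering (user inputs at the top, then consistency checks, then the container-start section) might be sketched like this at the top of the sbatch file; all variable names, paths, and SBATCH settings here are hypothetical:

```shell
#!/bin/bash
#SBATCH --job-name=model_run
#SBATCH --time=01:00:00
#SBATCH --ntasks=1

# --- User-editable inputs, collected at the top of the script ---
SIF_FILE=/data/group1/data/common/sw/sif/model.sif
SIF_IMAGE_DIR=${TMPDIR:-/tmp}/model_sandbox
MODEL_STORAGE_ROOT=/data/group1/data/model
current_instance="model_${SLURM_JOB_ID:-$$}"

# --- Consistency checks before any container work ---
if [ ! -f "$SIF_FILE" ]
then
    echo "SIF file $SIF_FILE does not exist, unable to continue"
    exit 1
fi
echo "Checks passed; ready to build and start instance $current_instance"
```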
It is a little daunting at first, but it is a model that has been working well for a couple of years.
Thank you all for your responses.
I have followed these steps to build my own container and then convert it into a Singularity image in my institution’s HPC environment:
1. Singularity is compatible with Docker containers, which are widely used, so it is recommended to build a Singularity image from a Docker image.
2. If the HPC environment doesn’t support Docker, first build the Docker image on a local machine and pack it into a tar file:
docker save -o test-image.tar local/test-image  # dump the Docker image to a tar file on the local machine
3. Then transfer the tar file to the HPC machine using scp or another protocol, and build the Singularity image from the Docker tar file (make sure you have done module load apptainer or module load singularity first):
singularity build test-image.simg docker-archive://test-image.tar
4. The approach described above produces images in Singularity Image File (SIF) format. SIF images are read-only and suitable for production. However, if you want to make changes to the image frequently, you can add the --sandbox option to the build command to create a writable directory (rather than a single file) for interactive development:
singularity build --sandbox test-image-sandbox docker-archive://test-image.tar
5. After the image is created, launch the container with:
singularity run test-image.simg
Similar to Docker, Singularity executes the container’s runscript (derived from the Docker ENTRYPOINT/CMD) when invoked with the run command.
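If you are unsure what the run command will actually execute, you can print the container’s runscript (assuming a standard Singularity/Apptainer installation; the image name matches the example above):

```shell
# Show the runscript that `singularity run` will invoke for this image.
singularity inspect --runscript test-image.simg
```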