Singularity working differently inside Nextflow or Seqera Platform?

Hi,

I’m running into a strange problem: a lot of my container pulls fail when they run inside a pipeline, but succeed when I call singularity manually in the same compute environment.

For example:

In the last pipeline I ran, I encountered:

Error executing process > 'DORADO_UNTAR'

Caused by:
  Failed to pull singularity image
    command: singularity pull  --name liuyangzzu-nanome-v1.4.img.pulling.1769512179155 docker://liuyangzzu/nanome:v1.4 > /dev/null
    status : 255
    hint   : Try and increase singularity.pullTimeout in the config (current is "3h")
    message:
      INFO:    Converting OCI blobs to SIF format
      INFO:    Starting build...
      Getting image source signatures
      Copying blob sha256:2f94e549220aea96f00cae7eb95f401e61b41a16cc5eb0b4ea592d0ce871930a
      Copying blob sha256:8a196a8ba4058173774e13e3ac21a76216041fbfd5361e434bb1b4bf547d7d3a
      Copying blob sha256:076c5cb4a0b59cd635d46976d59eba6b611f1e06233a7528697b0c2299855062
      Copying blob sha256:2c392aca5e2c3ddfc1dba912ae3920abcd8d91b8f6608958a76b2b05af21f4c8
      Copying blob sha256:c6e912b1cab0080e88030b99415f21595c955869e394f4ae209bfccc606c4564
      Copying blob sha256:d84024e05c6665ba81f30fc5cbd93e4ee68fd7d5e77e3a9f2ab005c8bbb5b217
      Copying blob sha256:50f07b304382aadc33d6b5e7da4de13c73a336cdb6026bec934c425ab45e11fb
      Copying blob sha256:1fad95eebeee7b0ff523ef57d553444091731855bf4bc1b519cd014c00c8ad39
      Copying blob sha256:a84fa0205b967fb6078487f3ab7f7d5a22b499a5e8e884a22db2fe2e66fc157f
      Copying blob sha256:8364bc323f89a514a207e2b0c46dc398b3cd8b66e7b46a84130ee47c43375f93
      Copying blob sha256:50275fb50bfd412716607f237f6bf08c1e26904fecb8e1e3266960717d65680a
      Copying blob sha256:df8f293bb665432bac43f5f0b583c7a9c3eaf775807a23861687ff34aa0db872
      Copying config sha256:e5f56268d73f37f0257c4ca1e9ffa2e60c4ec75f47fd013daa52b954887cf8f9
      Writing manifest to image destination
      Storing signatures
      FATAL:   While making image from oci registry: error fetching image to cache: while building SIF from layers: conveyor failed to get: no descriptor found for reference "b3ba94bc337ae0535dde8d73033ceb99680b83ea45b5dff3764d87ddca5185d6"

My compute environment loads Singularity via module load singularity in its preRunScript:

{
  "discriminator" : "slurm-platform",
  "environment" : [ {
    "name" : "NXF_OPTS",
    "value" : "'-Xms3G -Xmx5G'",
    "head" : true,
    "compute" : false
  }, {
    "name" : "NXF_SINGULARITY_CACHEDIR",
    "value" : "/ibex/scratch/projects/c2303/NXF_SINGULARITY_CACHEDIR",
    "head" : true,
    "compute" : false
  }, {
    "name" : "NXF_APPTAINER_CACHEDIR",
    "value" : "/ibex/scratch/projects/c2303/NXF_APPTAINER_CACHEDIR",
    "head" : true,
    "compute" : false
  }, {
    "name" : "NXF_WORK",
    "value" : "/ibex/scratch/projects/c2303/work",
    "head" : true,
    "compute" : false
  }, {
    "name" : "SINGULARITY_BUILDDIR",
    "value" : "/ibex/scratch/projects/c2303/NXF_SINGULARITY_CACHEDIR/build",
    "head" : true,
    "compute" : false
  }, {
    "name" : "SINGULARITY_CACHEDIR",
    "value" : "/ibex/scratch/projects/c2303/NXF_SINGULARITY_CACHEDIR/cache",
    "head" : true,
    "compute" : false
  }, {
    "name" : "SINGULARITY_TMPDIR",
    "value" : "/ibex/scratch/projects/c2303/NXF_SINGULARITY_CACHEDIR/tmp",
    "head" : true,
    "compute" : false
  } ],
  "launchDir" : "/ibex/scratch/projects/c2303/.tower-launches",
  "headJobOptions" : "--time=13-00:00:00 --cpus-per-task=1 --mem=5G",
  "workDir" : "$TW_AGENT_WORK",
  "preRunScript" : "module load singularity nextflow",
  "nextflowConfig" : "process {\n  executor = 'slurm'\n  clusterOptions = \"-p batch --time=1-00:00:00\"\nmemory = { 256.GB * Math.pow(2, task.attempt - 1) }\n    errorStrategy = { task.exitStatus in 137..140 ? 'retry' : 'ignore' }\n    maxRetries = 3\n}\nsingularity {\n      pullTimeout = '3 hours'\n   }\naws.client.anonymous = true // fixes S3 access issues on self-hosted runners\nworkflow {\n    failOnIgnore = true\n}",
  "headQueue" : "batch",
  "computeQueue" : "batch",
  "labels" : [ ]
}

I thought this was a version problem, with my HPC’s Singularity being too old to unpack Docker images from Docker Hub, so I downloaded Apptainer and ran this:

$ apptainer pull liuyangzzu-nanome-v1.4.sif docker://liuyangzzu/nanome:v1.4
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
INFO:    Fetching OCI image...
25.5MiB / 25.5MiB [================] 100 % 4.0 MiB/s 0s
6.9MiB / 6.9MiB [==================] 100 % 4.0 MiB/s 0s
125.5MiB / 125.5MiB [==============] 100 % 4.0 MiB/s 0s
22.5MiB / 22.5MiB [================] 100 % 4.0 MiB/s 0s
2.2MiB / 2.2MiB [==================] 100 % 4.0 MiB/s 0s
1.6GiB / 1.6GiB [==================] 100 % 4.0 MiB/s 0s
893.6MiB / 893.6MiB [==============] 100 % 4.0 MiB/s 0s
1.3GiB / 1.3GiB [==================] 100 % 4.0 MiB/s 0s
INFO:    Extracting OCI image...
INFO:    Inserting Apptainer configuration...
INFO:    Creating SIF file...
[============================================] 100 % 0s

This created the Singularity image with no problem. I then did the following:

$ module load singularity

$ singularity pull liuyangzzu-nanome-v1.4.sif docker://liuyangzzu/nanome:v1.4
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
Getting image source signatures
Copying blob 076c5cb4a0b5 skipped: already exists  
Copying blob 2c392aca5e2c skipped: already exists  
Copying blob d84024e05c66 skipped: already exists  
Copying blob 2f94e549220a skipped: already exists  
Copying blob 8a196a8ba405 skipped: already exists  
Copying blob c6e912b1cab0 skipped: already exists  
Copying blob 50f07b304382 skipped: already exists  
Copying blob 1fad95eebeee skipped: already exists  
Copying blob a84fa0205b96 skipped: already exists  
Copying blob 8364bc323f89 skipped: already exists  
Copying blob 50275fb50bfd skipped: already exists  
Copying blob df8f293bb665 skipped: already exists  
Copying config e5f56268d7 done  
Writing manifest to image destination
Storing signatures
2026/01/27 14:45:56  info unpack layer: sha256:2f94e549220aea96f00cae7eb95f401e61b41a16cc5eb0b4ea592d0ce871930a
2026/01/27 14:45:57  info unpack layer: sha256:8a196a8ba4058173774e13e3ac21a76216041fbfd5361e434bb1b4bf547d7d3a
2026/01/27 14:45:57  info unpack layer: sha256:d84024e05c6665ba81f30fc5cbd93e4ee68fd7d5e77e3a9f2ab005c8bbb5b217
2026/01/27 14:45:57  info unpack layer: sha256:c6e912b1cab0080e88030b99415f21595c955869e394f4ae209bfccc606c4564
2026/01/27 14:45:57  info unpack layer: sha256:2c392aca5e2c3ddfc1dba912ae3920abcd8d91b8f6608958a76b2b05af21f4c8
2026/01/27 14:45:57  info unpack layer: sha256:076c5cb4a0b59cd635d46976d59eba6b611f1e06233a7528697b0c2299855062
2026/01/27 14:46:27  info unpack layer: sha256:50f07b304382aadc33d6b5e7da4de13c73a336cdb6026bec934c425ab45e11fb
2026/01/27 14:46:31  info unpack layer: sha256:1fad95eebeee7b0ff523ef57d553444091731855bf4bc1b519cd014c00c8ad39
2026/01/27 14:46:31  info unpack layer: sha256:a84fa0205b967fb6078487f3ab7f7d5a22b499a5e8e884a22db2fe2e66fc157f
2026/01/27 14:47:41  info unpack layer: sha256:8364bc323f89a514a207e2b0c46dc398b3cd8b66e7b46a84130ee47c43375f93
2026/01/27 14:48:03  info unpack layer: sha256:50275fb50bfd412716607f237f6bf08c1e26904fecb8e1e3266960717d65680a
2026/01/27 14:48:03  info unpack layer: sha256:df8f293bb665432bac43f5f0b583c7a9c3eaf775807a23861687ff34aa0db872
INFO:    Creating SIF file...

This worked too.

So for some reason, module load singularity followed by singularity pull ... failed when orchestrated from Seqera Platform or Nextflow, but worked when I ran what I believe are the same commands manually, with the same software and versions Nextflow should be using.

Hi Mark,

Thanks for your question, and I think we can get this resolved for you. Which version of Apptainer are you using? We’ve seen this problem with older versions of Apptainer (prior to v1.3.6), due to how Apptainer handles multiple concurrent downloads (see the linked GitHub issue). Interestingly, the error doesn’t occur when the images are already in the Apptainer cache.
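For reference, here’s a small sketch for checking whether an installed version predates 1.3.6. It assumes GNU sort (for -V), and the commented-out apptainer --version parsing is an assumption about that command’s output format:

```shell
# version_lt A B -> true if version A is strictly older than version B.
# Relies on GNU `sort -V` for version-aware ordering.
version_lt() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# In practice you would feed it the live version, e.g.:
#   ver=$(apptainer --version | awk '{print $NF}')
ver="1.4.5"   # example value for illustration
if version_lt "$ver" "1.3.6"; then
  echo "$ver predates 1.3.6 - possibly affected by the concurrency issue"
else
  echo "$ver is 1.3.6 or newer - not affected by this particular issue"
fi
```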

Would you be able to upgrade your Apptainer version and test it again? Alternatively, you can pre-download all the images using Apptainer and set the NXF_APPTAINER_CACHEDIR to the location of your cached images (if you haven’t done so already).
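If you go the pre-download route, a loop like the one below can seed the cache. This is only a sketch: the image list is an example to extend with every container your pipeline uses, the fallback cache path is made up, and the filename pattern is inferred from the failing pull command in your error message ('/' and ':' replaced by '-', with an .img suffix):

```shell
# Sketch: pre-pull pipeline images into the cache dir Nextflow will read.
# The echo makes this a dry run; remove it to actually run the pulls.
cachedir="${NXF_APPTAINER_CACHEDIR:-/tmp/apptainer-cache}"
mkdir -p "$cachedir"

for uri in docker://liuyangzzu/nanome:v1.4; do
  # Derive the cached filename (pattern taken from the pull command
  # in the error message above).
  name="$(echo "${uri#docker://}" | tr '/:' '-').img"
  cmd="apptainer pull --name $cachedir/$name $uri"
  echo "$cmd"
done
```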

Let us know how your troubleshooting goes!

Warm regards,

Rob, Product @ Seqera

I’ve been getting the same “Try and increase singularity.pullTimeout in the config” error both locally and on our cluster for as long as I can remember :P. Whenever I hit it, I just run the pull command manually, and then everything continues working. Just make sure your NXF_SINGULARITY_CACHEDIR environment variable (or the singularity.cacheDir Nextflow setting) is set to the location where your images are stored.
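As a sketch of that workaround (the fallback cache path here is just an assumption), you can check up front whether the image Nextflow wants is already cached, and if not, print the manual pull to run, using the exact name from the error message:

```shell
# Check whether the image from the error above is already in the cache;
# if not, show the manual pull command that would put it there.
cache="${NXF_SINGULARITY_CACHEDIR:-$HOME/.singularity}"   # fallback is an assumption
img="$cache/liuyangzzu-nanome-v1.4.img"

if [ -s "$img" ]; then
  echo "cached: $img"
else
  echo "missing: $img"
  echo "run: (cd $cache && singularity pull --name ${img##*/} docker://liuyangzzu/nanome:v1.4)"
fi
```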

Hi Rob,

My Apptainer version was 1.4.5.

My Singularity version was 3.9.7.

This felt like more of a Singularity issue than an Apptainer issue, since the failing pulls went through Singularity 3.9.7 rather than Apptainer.

Yeah, this is what I’ve been doing. It’s a little annoying when an online pipeline has dozens of containers, though. I wish they could just resolve automatically without me downloading them all first.