kanishka.M
February 27, 2026, 5:50am
Hi Team,
We have added a post-run cleanup script in our Nextflow `workflow.onComplete {}` block to automatically delete S3 input and intermediate folders, but only when the pipeline fully succeeds on a Seqera Cloud AWS Batch compute environment.
We would like community validation that this approach is safe and recommended.
Our logic includes:
- Cleanup runs only if `workflow.success == true`
- Cleanup runs only if at least one task executed (checked via the execution trace)
- Cleanup is enabled only when `params.enable_cleanup = true`
- S3 deletion uses `aws s3 rm --recursive s3:///…`
- The local `work/` directory remains on the Batch workers (not deleted)
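For reference, here is a minimal sketch of how we wired those guards together. The parameter names (`params.cleanup_prefix`) and the stats-based task-count check are illustrative assumptions; our real check parses the execution trace file.

```groovy
// Hypothetical sketch of the conditional cleanup, assuming the staging
// prefix is passed in as params.cleanup_prefix (illustrative name).
workflow.onComplete {
    // Guard 1: only run on full pipeline success
    if( !workflow.success ) return
    // Guard 2: opt-in flag, off by default
    if( !params.enable_cleanup ) return
    // Guard 3: at least one task actually ran (approximated here via
    // workflow.stats; in practice we verify this from the trace file)
    if( workflow.stats.succeedCount == 0 ) return

    println "Cleaning up ${params.cleanup_prefix}"
    // Delete staged inputs and intermediates; running with --dryrun
    // first is a sensible way to validate the prefix before enabling this
    [ 'aws', 's3', 'rm', '--recursive', params.cleanup_prefix ]
        .execute().waitFor()
}
```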
Looking for confirmation on:
- Whether `workflow.onComplete {}` is the correct mechanism for conditional cleanup in Seqera Cloud
- Whether a trace-file check plus success guard is a valid best practice
- Any recommended alternative patterns for safe S3 cleanup in managed Batch environments
We want to ensure we follow community-approved methods before enabling this in production.
I believe it’s expected to use lifecycle rules to clean up cloud caches.
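As an illustration of that approach, an S3 lifecycle configuration along these lines expires objects under a given prefix automatically, so no in-pipeline deletion is needed (the prefix and retention period here are placeholders, not values from the original post):

```json
{
  "Rules": [
    {
      "ID": "expire-nextflow-scratch",
      "Filter": { "Prefix": "scratch/" },
      "Status": "Enabled",
      "Expiration": { "Days": 7 }
    }
  ]
}
```

This can be applied with `aws s3api put-bucket-lifecycle-configuration`, and it keeps cleanup decoupled from pipeline success, which avoids the failure modes of deleting data from within the workflow itself.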
GitHub issue, opened 08:52 AM, 30 Aug 2024 UTC (label: stale):
## Bug report
If I use a cloud cache, `nextflow clean` does not work.
### Expected behavior and actual behavior
Nextflow should be able to use the cloud cache and clean up the working directory; however, `nextflow clean` fails when a cloud cache is used because it looks for the cache index locally.
### Steps to reproduce the problem
I used Azure because I had credentials set up, but this occurs with AWS as well. I haven't tested GCP yet, but I expect the same behavior.
```
> NXF_CLOUDCACHE_PATH=az://path/cache/ nextflow run hello
N E X T F L O W ~ version 24.04.4
Launching `https://github.com/nextflow-io/hello` [gigantic_mendel] DSL2 - revision: afff16a9b4 [master]
[75/49599d] Submitted process > sayHello (4)
[13/463f92] Submitted process > sayHello (3)
[fe/f00128] Submitted process > sayHello (1)
[ef/12b7cb] Submitted process > sayHello (2)
Hola world!
Hello world!
Bonjour world!
Ciao world!
> NXF_CLOUDCACHE_PATH=az://path/cache/ nextflow run hello -resume
N E X T F L O W ~ version 24.04.4
Launching `https://github.com/nextflow-io/hello` [extravagant_kay] DSL2 - revision: afff16a9b4 [master]
[75/49599d] Cached process > sayHello (4)
[ef/12b7cb] Cached process > sayHello (2)
[13/463f92] Cached process > sayHello (3)
[fe/f00128] Cached process > sayHello (1)
Ciao world!
Hola world!
Hello world!
Bonjour world!
> NXF_CLOUDCACHE_PATH=az://path/cache/ nextflow clean -n 3646f660-5b97-4a49-a459-db95fc93ce5f
Missing cache index file: /Users/adam.talbot/Documents/GitHub/nf-experiments/.nextflow/cache/3646f660-5b97-4a49-a459-db95fc93ce5f/index.gigantic_mendel
```
### Program output
From output of `NXF_CLOUDCACHE_PATH=az://path/cache/ nextflow -log clean.log clean -n 3646f660-5b97-4a49-a459-db95fc93ce5f`
```
Aug-30 09:50:06.078 [main] DEBUG nextflow.cli.Launcher - $> nextflow -log clean.log clean -n 3646f660-5b97-4a49-a459-db95fc93ce5f
Aug-30 09:50:06.115 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; embedded=false; plugins-dir=/Users/adam.talbot/.nextflow/plugins; core-plugins: nf-amazon@2.5.3,nf-azure@1.6.1,nf-cloudcache@0.4.1,nf-codecommit@0.2.1,nf-console@1.1.3,nf-ga4gh@1.3.0,nf-google@1.13.2-patch1,nf-tower@1.9.1,nf-wave@1.4.2-patch1
Aug-30 09:50:06.120 [main] INFO o.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
Aug-30 09:50:06.120 [main] INFO o.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
Aug-30 09:50:06.121 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.12.0 in 'deployment' mode
Aug-30 09:50:06.127 [main] INFO org.pf4j.AbstractPluginManager - No plugins
Aug-30 09:50:06.147 [main] DEBUG nextflow.cache.CacheFactory - Using Nextflow cache factory: nextflow.cache.DefaultCacheFactory
Aug-30 09:50:06.164 [main] DEBUG nextflow.util.CustomThreadPool - Creating default thread pool > poolSize: 11; maxThreads: 1000
Aug-30 09:50:06.197 [main] DEBUG nextflow.cli.Launcher - Operation aborted
nextflow.exception.AbortOperationException: Missing cache index file: /Users/adam.talbot/Documents/GitHub/nf-experiments/.nextflow/cache/3646f660-5b97-4a49-a459-db95fc93ce5f/index.gigantic_mendel
at nextflow.cache.DefaultCacheStore.openForRead(DefaultCacheStore.groovy:117)
at nextflow.cache.DefaultCacheStore.openForRead(DefaultCacheStore.groovy)
at nextflow.cache.CacheDB.openForRead(CacheDB.groovy:69)
at nextflow.cli.CmdClean.cleanup(CmdClean.groovy:136)
at nextflow.cli.CmdClean.access$0(CmdClean.groovy)
at nextflow.cli.CmdClean$_run_closure1.doCall(CmdClean.groovy:100)
at nextflow.cli.CmdClean$_run_closure1.call(CmdClean.groovy)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2394)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2379)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2420)
at nextflow.cli.CmdClean.run(CmdClean.groovy:100)
at nextflow.cli.Launcher.run(Launcher.groovy:503)
at nextflow.cli.Launcher.main(Launcher.groovy:657)
```
### Environment
* Nextflow version: 24.04.4
* Java version: openjdk version "17.0.3" 2022-04-19
* Operating system: macOS
* Bash version: zsh 5.9 (x86_64-apple-darwin23.0)
Charles_A_Roy has posted their solution in reply #10 of the topic "Looking for a way to trigger a Nextflow cleanup process (deleting input files) only after the entire workflow completes successfully—any ideas or best practices to handle this efficiently?".