Limiting parallelism with a max memory cap?

I have a branching workflow where, if processes are freely allowed to run in parallel, they consume more memory than my system has available. I’m running on my local machine for now, but the workflow will eventually run on SLURM and other HPC environments.

I know I can specify how much memory to request, like this:

process processName {
    errorStrategy 'retry'   // retry failed tasks; maxRetries/task.attempt only take effect with this
    maxRetries 3
    memory { 14.GB + 2.GB * task.attempt }
}

Is there a way to use this to limit the number of processes Nextflow runs simultaneously, so that the total stays under a memory cap?

You can use maxForks (docs here) to control how many tasks Nextflow will instantiate from a process, but I’m having some trouble getting a closure in maxForks to work with the value from memory :frowning_face:
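As a rough workaround, the fork limit can be computed once at pipeline startup by dividing a total memory budget by the per-task request, rather than trying to make maxForks dynamic. A minimal sketch, assuming a hypothetical params.total_mem_gb pipeline parameter and a fixed 14 GB request per task:

process processName {
    memory 14.GB
    // cap concurrent tasks so the combined memory requests stay under the budget;
    // params.total_mem_gb is a hypothetical parameter (e.g. --total_mem_gb 64)
    maxForks( Math.max(1, (params.total_mem_gb as int).intdiv(14)) )

    script:
    """
    echo "running with a 14 GB request"
    """
}

Note that maxForks is evaluated once per process, not per task attempt, so if retries escalate the memory request you would have to divide by the worst-case request to stay safely under the cap.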

Thank you! I submitted a feature request on GitHub. This is something Snakemake already offers, and I think having it in Nextflow would both cut down the complexity of implementing workflows and improve Nextflow’s efficiency.
