SPAdes job in nf-core/denovotranscript pipeline fails with memory errors

I am trying to launch the nf-core/denovotranscript pipeline (de novo transcriptome assembly of paired-end short reads from bulk RNA-seq) through Seqera Cloud with the SLURM executor, but the SPAdes step fails with "Cannot allocate memory".

So I tried manually increasing the resources for SPAdes, and that did seem to let the SPAdes job finish:
notes.txt (170.2 KB)
But when I resumed the pipeline via Seqera Cloud, it created a new SPAdes job, which failed


with the same error :crazy_face:

The changes I made were:

diff /cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8/.command.sh /cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011/.command.sh
4,5c4,5
<     --threads 32 \
<     --memory 1000 \
---
>     --threads 12 \
>     --memory 72 \

diff /cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8/.command.run /cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011/.command.run
3c3
< #SBATCH -o /cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8/.command.log
---
> #SBATCH -o /cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011/.command.log
6,10c6,10
< #SBATCH -c 30
< #SBATCH -t 256:00:00
< #SBATCH --mem 1000G
< #SBATCH --account=nn9036k --time=256:00:00 --partition=bigmem
< NXF_CHDIR=/cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8
---
> #SBATCH -c 12
> #SBATCH -t 16:00:00
> #SBATCH --mem 73728M
> #SBATCH --account=nn9036k --mem=512G --time=256:00:00 --partition=bigmem
> NXF_CHDIR=/cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011
133c133
<     /bin/bash -euo pipefail /cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8/.command.sh
---
>     /bin/bash -euo pipefail /cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011/.command.sh
160c160
<     /bin/bash -euo pipefail /cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8/.command.sh &
---
>     /bin/bash -euo pipefail /cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011/.command.sh &
291c291
<     printf -- $exit_status > /cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8/.exitcode
---
>     printf -- $exit_status > /cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011/.exitcode
302c302
<     set +u; env - PATH="$PATH" ${TMP:+SINGULARITYENV_TMP="$TMP"} ${TMPDIR:+SINGULARITYENV_TMPDIR="$TMPDIR"} ${NXF_TASK_WORKDIR:+SINGULARITYENV_NXF_TASK_WORKDIR="$NXF_TASK_WORKDIR"} SINGULARITYENV_NXF_DEBUG="${NXF_DEBUG:=0}" singularity exec --no-home --pid -B /cluster/work/users/ash022/work /cluster/work/users/ash022/work/singularity/depot.galaxyproject.org-singularity-spades-4.0.0--h5fb382e_1.img /bin/bash -c "cd $PWD; eval $(nxf_container_env); /bin/bash /cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8/.command.run nxf_trace"
---
>     set +u; env - PATH="$PATH" ${TMP:+SINGULARITYENV_TMP="$TMP"} ${TMPDIR:+SINGULARITYENV_TMPDIR="$TMPDIR"} ${NXF_TASK_WORKDIR:+SINGULARITYENV_NXF_TASK_WORKDIR="$NXF_TASK_WORKDIR"} SINGULARITYENV_NXF_DEBUG="${NXF_DEBUG:=0}" singularity exec --no-home --pid -B /cluster/work/users/ash022/work /cluster/work/users/ash022/work/singularity/depot.galaxyproject.org-singularity-spades-4.0.0--h5fb382e_1.img /bin/bash -c "cd $PWD; eval $(nxf_container_env); /bin/bash /cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011/.command.run nxf_trace"
328c328
<     touch /cluster/work/users/ash022/work/fe/d95a0e1229b80f0e9f505293f800f8/.command.begin
---
>     touch /cluster/work/users/ash022/work/cd/0398682dc60dafb772a1e2a2988011/.command.begin
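(For reference, my understanding is that hand edits to the generated .command.sh/.command.run files in the work directory are discarded on -resume, since Nextflow regenerates tasks from the pipeline configuration. The supported way to raise resources would be a custom config with a process selector, something like the sketch below; the process name 'SPADES' and the resource values are my assumptions based on the diff above.)

```groovy
// custom.config -- a sketch; the 'SPADES' selector and the resource
// values are assumptions, adjust them to your pipeline and cluster.
process {
    withName: 'SPADES' {
        cpus   = 32
        memory = 1000.GB
        time   = 256.h
        queue  = 'bigmem'                     // SLURM partition
        clusterOptions = '--account=nn9036k'  // billing account, as in the diff above
    }
}
```

The config could then be passed when resuming, e.g. `nextflow run nf-core/denovotranscript -c custom.config -resume ...` (or attached as pipeline configuration in Seqera Platform), so the resources come from the config rather than from edited work files.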

I am wondering how to get past this problem; any help would be appreciated :pray:

Hi @animesh, were you able to solve this yourself or do you still need help?

not yet :saluting_face: