Env: can't execute 'perl

I am trying to run the rnaseq pipeline via Tower:

nextflow run 'https://github.com/nf-core/rnaseq'
         -name voluminous_joliot
         -params-file 'https://api.tower.nf/ephemeral/POavQ6D1KNKRnyJy4T0zXA.json'
         -with-tower
         -r 3.9
         -profile singularity

but it is failing with this error:

Workflow execution completed unsuccessfully
The exit status of the task that caused the workflow execution to fail was: 127

Error executing process > 'NFCORE_RNASEQ:RNASEQ:PREPARE_GENOME:GTF2BED (Homo_sapiens.GRCh38.110.gtf)'

Caused by:
  Process `NFCORE_RNASEQ:RNASEQ:PREPARE_GENOME:GTF2BED (Homo_sapiens.GRCh38.110.gtf)` terminated with an error exit status (127)

Command executed:

  gtf2bed \
      Homo_sapiens.GRCh38.110.gtf \
      > Homo_sapiens.GRCh38.110.bed
 
  cat <<-END_VERSIONS > versions.yml
  "NFCORE_RNASEQ:RNASEQ:PREPARE_GENOME:GTF2BED":
      perl: $(echo $(perl --version 2>&1) | sed 's/.*v\(.*\)) built.*/\1/')
  END_VERSIONS

Command exit status:
  127

Command output:
  (empty)

Command error:
  INFO:    Environment variable SINGULARITYENV_TMPDIR is set, but APPTAINERENV_TMPDIR is preferred
  INFO:    Environment variable SINGULARITYENV_NXF_TASK_WORKDIR is set, but APPTAINERENV_NXF_TASK_WORKDIR is preferred
  INFO:    Environment variable SINGULARITYENV_NXF_DEBUG is set, but APPTAINERENV_NXF_DEBUG is preferred
  WARNING: Skipping mount /var/apptainer/mnt/session/etc/resolv.conf [files]: /etc/resolv.conf doesn't exist in container
  env: can't execute 'perl
  ': No such file or directory

Work dir:
  /cluster/projects/nn9036k/work/70/b03677c095baf6f1a7da46cd4f43be

Tip: view the complete command output by changing to the process work dir and entering the command `cat .command.out`

Any ideas how to proceed?

Hi @animesh,

This looks like a Singularity error. Have you managed to run any other pipelines, or used Singularity before?

I would start by stepping outside of Seqera Platform and Nextflow and making sure that you can use Singularity successfully on your system.

Phil

Yes, nf-core/differentialabundance (differential abundance analysis for feature/observation matrices from platforms such as RNA-seq) works fine?

OK, I did a quick search on the nf-core Slack and found one other user with the same error. @pontus was trying to help debug there and got as far as identifying Windows line endings within the pipeline source code, but no further.
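
For context, a trailing carriage return after the interpreter name in a shebang line produces exactly this `env: can't execute 'perl'` symptom: the kernel hands `env` the literal name `perl<CR>`, which doesn't exist. A minimal reproduction and check (a sketch; the file name is made up):

```shell
# Write a script whose shebang line ends in CRLF, as a Windows-side checkout would
printf '#!/usr/bin/env perl\r\nprint "hello\\n";\r\n' > crlf_demo.pl
chmod +x crlf_demo.pl

# Running it fails with exit status 127: env looks for a command literally
# named "perl<CR>" rather than "perl"
./crlf_demo.pl 2>/dev/null; echo "exit status: $?"   # exit status: 127

# Count lines carrying a carriage return (2 here; 0 for a healthy LF-only file)
grep -c $'\r' crlf_demo.pl

# Strip the carriage returns in place
sed -i 's/\r$//' crlf_demo.pl
```

The `': No such file or directory` split across two lines in the original error is the giveaway: the quoted command name contains the invisible `\r` followed by the newline.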

It makes even less sense that this error is happening on Seqera Platform, especially when the file in question hasn’t been touched since before version 3.1 of the pipeline.

Can I ask why you’re running v3.9? Do you get the same error with the latest version, v3.14?

Yes, now you mention it, I do recall facing this earlier as well (nf-core/rnaseq#1056), but that was for Python? It magically disappeared after removing $HOME/.gitignore?

The v3.9 was just to reproduce some old work. I also tried v3.14, and that fails for a different reason. See the workflow run:

Workflow execution completed unsuccessfully
The exit status of the task that caused the workflow execution to fail was: 127

Error executing process > 'NFCORE_RNASEQ:RNASEQ:FASTQ_FASTQC_UMITOOLS_TRIMGALORE:TRIMGALORE (/cluster/projects/nn9036k/work/TK/TK1050)'

Caused by:
  Process `NFCORE_RNASEQ:RNASEQ:FASTQ_FASTQC_UMITOOLS_TRIMGALORE:TRIMGALORE (/cluster/projects/nn9036k/work/TK/TK1050)` terminated with an error exit status (127)

Command executed:

  [ ! -f  /cluster/projects/nn9036k/work/TK/TK1050_1.fastq.gz ] && ln -s TK10_50_1.fq.gz /cluster/projects/nn9036k/work/TK/TK1050_1.fastq.gz
  [ ! -f  /cluster/projects/nn9036k/work/TK/TK1050_2.fastq.gz ] && ln -s TK10_50_2.fq.gz /cluster/projects/nn9036k/work/TK/TK1050_2.fastq.gz
  trim_galore \
      --fastqc_args '-t 12' \
      --cores 8 \
      --paired \
      --gzip \
      /cluster/projects/nn9036k/work/TK/TK1050_1.fastq.gz \
      /cluster/projects/nn9036k/work/TK/TK1050_2.fastq.gz
  
  cat <<-END_VERSIONS > versions.yml
  "NFCORE_RNASEQ:RNASEQ:FASTQ_FASTQC_UMITOOLS_TRIMGALORE:TRIMGALORE":
      trimgalore: $(echo $(trim_galore --version 2>&1) | sed 's/^.*version //; s/Last.*$//')
      cutadapt: $(cutadapt --version)
  END_VERSIONS

Command exit status:
  127

Command output:
  (empty)

Command error:
  .command.sh: line 5: trim_galore: command not found

Work dir:
  /cluster/projects/nn9036k/work/b6/dae40810d21d0714bee0473c0f44ba

Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`

The error is the same: exit code 127, which generally means “command not found” (it just happened to trigger in a different process).
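
For what it's worth, the 127/126 distinction can help narrow things like this down: 127 means the shell (or `env`) could not find the command at all, while 126 means it was found but could not be executed. Both failures in this thread are genuine 127s, one because the interpreter name was corrupted to `perl<CR>` and one because `trim_galore` was absent from `PATH` inside the container. A quick illustration:

```shell
# 127: command not found anywhere on PATH
bash -c 'no_such_command_hopefully' 2>/dev/null; echo $?   # 127

# 126: command found but not executable (e.g. missing the execute bit)
printf '#!/bin/sh\necho hi\n' > not_exec.sh   # note: no chmod +x
bash -c './not_exec.sh' 2>/dev/null; echo $?   # 126
```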

What executor are you using here - is this on a local HPC? Something is wrong with the Singularity setup, somehow.

Yes, it is a university HPC cluster. It works fine with the differentialabundance pipeline, so I'm not sure the Singularity setup is the issue? It might be the specific Singularity *.img used for rnaseq, though?

The warning messages from Singularity in the previous error are not present in this one. Is Singularity definitely enabled?


I am probably misunderstanding, @mahesh.binzerpanchal. I tried Singularity via the CLI:

(base) [ash022@login-1.SAGA ~/scripts]$ cp testVersion.py $HOME/.
(base) [ash022@login-1.SAGA ~/scripts]$ cat $HOME/testVersion.py
#cp ./testVersion.py $HOME/testVersion.py
#singularity exec https://depot.galaxyproject.org/singularity/python:3.9--1 "python" "./testVersion.py"
#3.9.5 | packaged by conda-forge | (default, Jun 19 2021, 00:32:32)
#[GCC 9.3.0]
#!/usr/bin/env python3
import sys
print (sys.version)

(base) [ash022@login-1.SAGA ~/scripts]$ singularity exec https://depot.galaxyproject.org/singularity/python:3.9--1 "python" "./testVersion.py"
3.9.5 | packaged by conda-forge | (default, Jun 19 2021, 00:32:32)
[GCC 9.3.0]

That seems to work? Also, the differentialabundance pipeline works fine, AFAIK.

What was the command you ran to get the trim_galore error?

Running it via the Seqera (Tower) web UI, i.e. Seqera Cloud.

I am guessing it somehow pulled the codebase with CRLF endings before, because deleting .nextflow along with those two-letter directories in the work folder and resubmitting seems to have solved the issue somehow? If not, I will trouble you guys again :crazy_face: Thanks a tonne @ewels @mahesh.binzerpanchal nonetheless :pray:
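
For anyone who hits this later, one way to confirm the CRLF theory before wiping anything is to scan the cached pipeline checkout for carriage returns (a sketch; the assets path below is Nextflow's default location and may differ on your setup):

```shell
# List any text files in the cached pipeline source that carry CRLF endings
grep -rlI $'\r' "$HOME/.nextflow/assets/nf-core/rnaseq" || echo "no CRLF files found"

# A stray core.autocrlf=true in the git config is the usual way CRLF
# sneaks into a checkout on Linux
git config --get core.autocrlf
```

`grep -I` skips binary files, so container images and indices in the checkout don't flood the output.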

Aha, so the same problem as was previously reported!

How odd. Ok, well glad it’s working - thanks for letting us know :pray:


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.