Memory allocation issue

Hi guys, I am trying to run the following pipeline (GitHub - PengNi/ccsmethphase: Methylation phasing using PacBio CCS reads) on my school's cluster but am unable to get past a memory allocation issue. I've been editing the config file to run Singularity and request more CPUs accordingly, but the pipeline errors out saying not enough CPUs are available. There are more than enough resources on the cluster side, so I think the issue is with how the resources are being requested. Would appreciate some insight into this:

Error executing process > 'CheckGenome (CheckGenome)'

Caused by:
  Process requirement exceed available CPUs -- req: 8; avail: 1

Since it's a cluster, I wonder whether you're trying to run it from the login node (which usually has resource limits). Does your cluster use a workload manager such as SLURM or PBS? That could explain why so few resources are available on a supposedly powerful infrastructure.
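If the pipeline is indeed being launched on the login node, one common pattern is to submit the Nextflow driver itself as a SLURM job so its local processes run inside an allocation. A minimal sketch (the job name, CPU/memory/time values, and profile are placeholders to adapt to your cluster):

```shell
#!/bin/bash
#SBATCH --job-name=ccsmethphase   # hypothetical job name
#SBATCH --cpus-per-task=8         # enough CPUs for the largest locally-run process
#SBATCH --mem=32G                 # adjust to the pipeline's actual needs
#SBATCH --time=24:00:00

# Launching Nextflow inside the allocation means locally executed
# processes see the CPUs granted by SLURM, not the login node's limits.
nextflow run PengNi/ccsmethphase -profile singularity -resume
```

Submit it with `sbatch run_pipeline.sh`; the driver then runs on a compute node with the requested resources.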

Hi Marcel,

The cluster I'm using runs SLURM. I have not had this issue when running Nextflow pipelines in the past.

Could you please run the command below in your shell and let me know what value is returned?

grep -c processor /proc/cpuinfo
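For comparison, it may also help to look at what the current process is actually allowed to use, since the raw `/proc/cpuinfo` count can differ from the CPUs granted by an allocation or an affinity mask (the SLURM variable below is only set inside a job):

```shell
# CPUs physically visible on the node
grep -c processor /proc/cpuinfo

# CPUs usable by this process (respects the CPU affinity mask)
nproc

# CPUs granted by SLURM, if running inside an allocation; empty otherwise
echo "$SLURM_CPUS_ON_NODE"
```

If `nproc` reports 1 while `/proc/cpuinfo` shows many processors, the shell is constrained to a single CPU, which would match the error above.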

Are you setting slurm as the executor anywhere? For example, in nextflow.config with process.executor = 'slurm', or with an executor directive inside individual processes.
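For reference, a minimal nextflow.config sketch that routes processes through SLURM rather than running them locally; the queue name is a placeholder for your cluster's partition:

```groovy
// Minimal sketch: submit each process as its own SLURM job.
// 'general' is a hypothetical queue; substitute your partition name.
process {
    executor = 'slurm'
    queue    = 'general'
}

singularity {
    enabled    = true
    autoMounts = true
}
```

With the slurm executor, each process requests its own CPUs from the scheduler, so the driver no longer needs 8 CPUs available on the node it was launched from.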