Nextflow and GCP - Resource allocation issue

I have a new error:

WARN: Batch job cannot be run: VM in Managed Instance Group meets error: Batch Error: code - CODE_GCE_QUOTA_EXCEEDED, description - error count is 6, latest message example: Instance 'nf-b9527374-171409-c4d28657-d505-44910-group0-0-gmqm' creation failed: Quota 'SSD_TOTAL_GB' exceeded. Limit: 1000.0 in region europe-west2.
WARN: Batch job cannot be run: VM in Managed Instance Group meets error: Batch Error: code - CODE_GCE_QUOTA_EXCEEDED, description - error count is 5, latest message example: Instance 'nf-2359e5e8-171409-b8ed7aaa-963f-4f4a0-group0-0-01d9' creation failed: Quota 'CPUS_ALL_REGIONS' exceeded. Limit: 32.0 globally.

I am obviously not allocating resources the right way.
Could anyone please point out where I've gone wrong in my config file?
This is my config file:

google {
    process.executor = 'google-batch'
    google.location = 'europe-west2'
    google.region  = 'europe-west2'
    google.project = 'dev-uk'
    batch.spot = true
}
trace {
  enabled = true
  file = "pipeline_execution_trace.txt"
  fields = 'task_id,hash,native_id,process,tag,name,status,exit,module,container,cpus,time,disk,memory,attempt,submit,start,complete,duration,realtime,queue,%cpu,%mem,rss,vmem,peak_rss,peak_vmem,rchar,syscr,syscw,read_bytes,write_bytes'
}
process {
    executor = 'google-batch'
    google.location = 'europe-west2'
    google.region  = 'europe-west2'
    google.project = 'dev-uk'
    withLabel:cpus_8 {
        machineType = 'n1-standard-8'
        cpus = { check_resource(8) }
    }
    disk = '200 GB'
    withName:GATK4_MARKDUPLICATES{
        memory = 120.GB
    }
}
process {
    executor = 'google-batch'
    google.location = 'europe-west2'
    google.region  = 'europe-west2'
    google.project = 'dev-uk'
    withLabel:cpus_8 {
        machineType = 'n1-standard-8'
        cpus = { check_resource(8) }
    }
    disk = '200 GB'
    withName:'NFCORE_SAREK:sarek:BAM_VARIANT_CALLING_GERMLINE_ALL:BAM_VARIANT_CALLING_HAPLOTYPECALLER:GATK4_HAPLOTYPECALLER'{
        ext.args = '-ERC GVCF'
        memory = 120.GB
    }
}
process {
    executor = 'google-batch'
    disk = '400 GB'
    withLabel:cpus_8 {
        machineType = 'n1-standard-8'
        cpus = { check_resource(8) }
    }
    google.location = 'europe-west2'
    google.region  = 'europe-west2'
    google.project = 'dev-uk'
    withName:'NFCORE_SAREK:sarek:BAM_VARIANT_CALLING_GERMLINE_ALL:BAM_VARIANT_CALLING_HAPLOTYPECALLER:MERGE_HAPLOTYPECALLER'{
        memory = 120.GB
    }
}
process {
    executor = 'google-batch'
    disk = '200 GB'
    withLabel:cpus_8 {
        machineType = 'n1-standard-8'
        cpus = { check_resource(8) }
    }
    google.location = 'europe-west2'
    google.region  = 'europe-west2'
    google.project = 'dev-uk'
    withName:GATK4_CNNSCOREVARIANTS{
        ext.args = "--transfer-batch-size 4092 --inference-batch-size 4092"
        cpus = 8
        memory = 120.GB
    }
}
// This should be defined in the base config, but it is needed here anyway
// Return the minimum of the requested value and the configured maximum, so resource requests never exceed the limits
def check_resource(obj) {
    try {
      if (obj.getClass() == nextflow.util.MemoryUnit && obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1)
        return params.max_memory as nextflow.util.MemoryUnit
      else if (obj.getClass() == nextflow.util.Duration && obj.compareTo(params.max_time as nextflow.util.Duration) == 1)
        return params.max_time as nextflow.util.Duration
      else if (obj.getClass() == java.lang.Integer)
        return Math.min(obj, params.max_cpus as int)
      else
        return obj
    } catch (all) {
        println "   ### ERROR ###   Max params max_memory:'${params.max_memory}', max_time:'${params.max_time}' or max_cpus:'${params.max_cpus}' is not valid! Using default value: $obj"
        return obj
    }
}

Thank you

Hi @William_Sproviero.

Why do you have so many conflicting process blocks in your configuration file? You also have conflicting scopes: there is no process.google.location; google is a top-level scope of its own. Please read the configuration section of Nextflow's official docs (https://www.nextflow.io/docs/latest/config.html) and the Google Cloud page (https://www.nextflow.io/docs/latest/google.html).
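To illustrate the layout (just a minimal sketch reusing the project, location and labels from your post, not a drop-in replacement for every override you have), the cloud settings belong in a single top-level google scope, and the executor plus the per-process tweaks in a single process scope:

// Sketch: one google scope for the cloud settings, one process scope
// for the executor and the per-process overrides.
google {
    project  = 'dev-uk'
    location = 'europe-west2'
    batch.spot = true
}

process {
    executor = 'google-batch'
    disk     = '200 GB'

    withLabel: 'cpus_8' {
        machineType = 'n1-standard-8'
        cpus        = 8
    }

    withName: 'GATK4_MARKDUPLICATES' {
        memory = 120.GB
    }
}

The remaining withName blocks from your config would simply be added to that same process scope, rather than opening a new process block each time.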

To keep your resource requests within those limits, you should use the new resourceLimits directive. You can read about it in the process directives section of the Nextflow docs.
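As a sketch (requires Nextflow 24.04 or later; the caps below are illustrative values derived from the quotas in your error messages and from your current requests, so adjust them to your project), resourceLimits sits in the process scope and trims any task request that exceeds it:

process {
    // Cap every request so submitted jobs stay inside the GCP quotas reported
    // in the error (32 CPUs globally, 1000 GB SSD in europe-west2).
    resourceLimits = [
        cpus: 8,          // at most four 8-CPU VMs fit under the 32-CPU quota
        memory: 120.GB,
        time: 48.h,
        disk: 400.GB      // concurrent tasks still have to fit in the 1000 GB SSD quota;
                          // assumes your Nextflow version supports disk in resourceLimits
    ]
}

This effectively replaces the hand-rolled check_resource/check_max helper you have at the bottom of your config.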