GPU selection with an HPC scheduler and Apptainer containers

I’m running Nextflow on an HPC system where each node has multiple GPUs and the job scheduler sets the environment variable $SGE_HGR_gpu, which lists the GPUs assigned to the job.

My Nextflow processes run inside Apptainer containers. I’d like to set:
export APPTAINERENV_CUDA_VISIBLE_DEVICES=$(echo ${SGE_HGR_gpu} | tr ' ' ',')

based on the instructions here: GPU Support (NVIDIA CUDA & AMD ROCm) — Apptainer User Guide 1.3 documentation

Is that possible? In other words, how can I run a small shell snippet to set environment variables on the compute node before Apptainer is invoked?

Edit:
I found the beforeScript directive in the docs; I think that will work.
https://www.nextflow.io/docs/latest/process.html#beforescript
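
For anyone landing here later, a minimal sketch of what this could look like, assuming the scheduler exports SGE_HGR_gpu on the node where the task runs (the process name and script body are placeholders):

```groovy
process gpuTask {
    // beforeScript runs on the host, before Apptainer launches the container,
    // so APPTAINERENV_* variables set here are picked up by Apptainer and
    // appear inside the container as CUDA_VISIBLE_DEVICES.
    // The \$ escapes keep the substitution in bash rather than Groovy.
    beforeScript "export APPTAINERENV_CUDA_VISIBLE_DEVICES=\$(echo \${SGE_HGR_gpu} | tr ' ' ',')"

    script:
    """
    # the container should now only see the GPUs granted by the scheduler
    nvidia-smi
    """
}
```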

Let us know if it works! I often call nvidia-smi in the beforeScript of GPU-enabled processes to check that the GPU is actually usable. This helps catch errors early; a sketch is below.
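
Something like this, as a rough sketch (the process name and script body are placeholders); if the check fails, the task should error out before the main script runs:

```groovy
process trainModel {
    // nvidia-smi runs on the host before the container starts; it exits
    // non-zero if the driver or GPU is unreachable, surfacing the problem
    // before any real work is attempted
    beforeScript 'nvidia-smi'

    script:
    """
    echo "GPU check passed, real workload goes here"
    """
}
```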
