I am trying to run the NVIDIA Parabricks example on Google Cloud Batch. It appears that the Parabricks container is not seeing the GPUs. If anyone has succeeded at this, I'd welcome some help!
The error is:
```
[Parabricks Options Error]: Could not find accessible GPUs. Please make sure the container options enable GPUs
[Parabricks Options Error]: Run with -h to see help
```
I think the issue is a result of the Parabricks container configuration. I have connected with a Nextflow expert at NVIDIA. I’ll ask if they have had any success with the effort and let you know.
Hi, I just ran into the same problem, so I would be interested if you find a solution.
In the meantime, using an older version of the Parabricks container (4.5.0-1) worked for me without changing anything in the Nextflow config.
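If you want to pin that tag explicitly, overriding the container for the GPU process in the config is enough. A minimal sketch; the process selector, GPU type, and registry path are examples from my setup rather than anything prescribed:

```groovy
// nextflow.config — pin the older Parabricks image for the GPU process
// (process name and accelerator type are placeholders)
process {
    executor = 'google-batch'

    withName: 'PBRUN_FQ2BAM' {
        container   = 'nvcr.io/nvidia/clara/clara-parabricks:4.5.0-1'
        accelerator = [request: 1, type: 'nvidia-tesla-t4']
    }
}
```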
More generally, I noticed that there seems to be a problem with containers in Nextflow processes not finding the GPU on Google Batch when they are based on newer CUDA versions (>=12.8), even though the installed driver (580.105.08) should support them.
I found a solution and got the latest version to work by following this documentation:
In my understanding, newer CUDA containers stopped shipping the NVIDIA libraries and utilities inside the container and instead rely on the host system to provide them.
Usually this is managed by the NVIDIA Container Toolkit, but Batch VMs use Container-Optimized OS, which does not use this toolkit, so containers on this OS need to be given access to the host libraries manually.
After telling the container where to find them via

```groovy
runOptions = '--volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin --volume /usr/local/cuda-12.9/lib64:/usr/local/cuda-12.9/lib64 --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/nvidiactl:/dev/nvidiactl'
```

and exporting these paths within the process script

```bash
export PATH="$PATH:/usr/local/nvidia/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/nvidia/lib64"
```

Parabricks works as expected.
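To make it concrete, here is a sketch of how the pieces fit together. I am assuming the runOptions line sits in the docker scope, the device paths match the single-GPU VM I got from Batch, and the project, location, process name, inputs, GPU type, and pbrun command line are placeholders to adapt to your pipeline:

```groovy
// nextflow.config — mount the host driver libs/utils and GPU device nodes into the task containers
docker {
    enabled    = true
    runOptions = '--volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 ' +
                 '--volume /var/lib/nvidia/bin:/usr/local/nvidia/bin ' +
                 '--volume /usr/local/cuda-12.9/lib64:/usr/local/cuda-12.9/lib64 ' +
                 '--device /dev/nvidia0:/dev/nvidia0 ' +
                 '--device /dev/nvidia-uvm:/dev/nvidia-uvm ' +
                 '--device /dev/nvidiactl:/dev/nvidiactl'
}

process.executor = 'google-batch'

google {
    project  = 'my-gcp-project'   // placeholder
    location = 'us-central1'      // placeholder
}
```

```groovy
// Example process (placeholder name, inputs, and command). The exports at the top
// of the script make the mounted host binaries and libraries visible to Parabricks.
process PBRUN_FQ2BAM {
    container   'nvcr.io/nvidia/clara/clara-parabricks:4.5.0-1'  // newer tags also worked for me with the mounts above
    accelerator 1, type: 'nvidia-tesla-t4'                       // GPU type is an example

    input:
    tuple path(fq1), path(fq2)
    path ref

    output:
    path 'out.bam'

    script:
    """
    export PATH="\$PATH:/usr/local/nvidia/bin"
    export LD_LIBRARY_PATH="\$LD_LIBRARY_PATH:/usr/local/nvidia/lib64"

    pbrun fq2bam --ref ${ref} --in-fq ${fq1} ${fq2} --out-bam out.bam
    """
}
```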