libz.so missing for the AWS CLI in AWS Batch tasks

Hi, following on from a previous issue where someone had problems with the aws binary locating the libz shared object.

We have encountered something similar, and have now narrowed it down to the underlying Docker image used for the tasks.

So our minimal task is based either on

FROM quay.io/biocontainers/mulled-v2-4dde50190ae599f2bb2027cb2c8763ea00fb5084:4163e62e1daead7b7ea0228baece715bec295c22-0

or

FROM docker.io/continuumio/miniconda3:23.10.0-1

and consists of (as per an example launch from Seqera):

CMD ["bash","-o","pipefail","-c","trap \"{ ret=$?; /opt2/awscliv2/bin/aws s3 cp --only-show-errors .command.log s3://stub_run/work/.command.log||true; exit $ret; }\" EXIT; /opt2/awscliv2/bin/aws s3 cp --only-show-errors s3://stub_run/work/.command.run - | bash 2>&1 | tee .command.log"]

Where mulled is used, the task fails with a libz.so error. Where miniconda is used, the task succeeds (well, the aws bits run, anyhow).

Now I realise there is possibly a massive difference in libraries between the two images, but I thought the point of mounting the underlying AWS CLI v2 installation in from the base Batch machine was that it provides its own dependencies under /opt2/awscliv2 (in our case).

Are there perhaps some hidden extra dependencies?

We have correctly set cliPath, and made sure the CLI lives in a part of the filesystem that is not otherwise interfered with by Docker mount points. And the /opt2/awscliv2/bin/aws binary does run successfully on the Batch instance itself.
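For reference, the relevant part of our config looks roughly like this (a sketch; aws.batch.cliPath is the standard Nextflow option, and /opt2/awscliv2 is where the CLI is installed on our custom AMI, so adjust to your own install location):

// nextflow.config (sketch)
aws {
    batch {
        // path to the aws binary on the host AMI; kept outside any
        // directories that Docker mounts over inside the task container
        cliPath = '/opt2/awscliv2/bin/aws'
    }
}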

The precise error message, btw:

/opt2/awscliv2/bin/aws: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory

As closure for this: we did some analysis using ldd, and it seems the AWS CLI v2 does not locate the libz.so.1 that is distributed alongside its binary. So on a (slim) Docker image with no libz of its own, the AWS CLI fails.
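To illustrate the check (a sketch, run inside the failing container; the exact layout under the install directory varies between CLI versions):

# the mounted binary fails to resolve libz.so.1 against the system paths
ldd /opt2/awscliv2/bin/aws | grep libz
#   libz.so.1 => not found

# yet a bundled copy ships inside the CLI's own install tree,
# which the dynamic loader never consults on its own
find /opt2/awscliv2 -name 'libz.so*'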

We have fixed this by manually adding an LD_LIBRARY_PATH in the Nextflow config that explicitly points at the directory of the mounted AWS CLI.
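One way to express that in nextflow.config (a sketch, assuming your Nextflow version supports containerOptions for the Batch executor; the dist path is illustrative, so point it at whichever directory under the mounted CLI actually contains the bundled libz.so.1):

// nextflow.config (sketch of the workaround)
process {
    // export LD_LIBRARY_PATH into the task container so the dynamic
    // loader can find the libz.so.1 bundled with the mounted aws binary
    containerOptions = '-e LD_LIBRARY_PATH=/opt2/awscliv2/v2/current/dist'
}

With something like that in place, the aws calls in the task wrapper resolve the bundled library instead of expecting a system libz in the (slim) image.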