Background and Problem
I built our team's own RStudio Docker image, based on the nice community work in rocker/rstudio, to support the additional R and Python packages we need for internal use. It has some limitations, though; for example, it does not support GPUs. To solve that, I borrowed the idea from NVIDIA's Dockerfile, which essentially adds all the basic CUDA libraries for you. That worked until I hit an issue supporting TensorFlow GPU usage.
The TensorFlow GPU documentation gives you the details of what you need to do on Ubuntu 18.04:
# Add NVIDIA package repositories
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update
wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt-get update
# Install NVIDIA driver
sudo apt-get install --no-install-recommends nvidia-driver-450
# Reboot. Check that GPUs are visible using the command: nvidia-smi
wget https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/libnvinfer7_7.1.3-1+cuda11.0_amd64.deb
sudo apt install ./libnvinfer7_7.1.3-1+cuda11.0_amd64.deb
sudo apt-get update
# Install development and runtime libraries (~4GB)
sudo apt-get install --no-install-recommends \
cuda-11-0 \
libcudnn8=8.0.4.30-1+cuda11.0 \
libcudnn8-dev=8.0.4.30-1+cuda11.0
# Install TensorRT. Requires that libcudnn8 is installed above.
sudo apt-get install -y --no-install-recommends libnvinfer7=7.1.3-1+cuda11.0 \
libnvinfer-dev=7.1.3-1+cuda11.0 \
libnvinfer-plugin7=7.1.3-1+cuda11.0
Wow, doing all of that in my current Dockerfile build is crazy. Regardless of whether you use a multi-stage solution, the resulting image is just huge, and it is difficult to install on top of my existing RStudio image. So I decided to take a look at what the TensorFlow GPU Docker image does. A quick look at TensorFlow's gpu.Dockerfile convinced me to reuse its logic to install all the CUDA libraries and runtime, except for the Python part, since we install our own Python version in the RStudio image.
So now the solution becomes:
- Use part of TensorFlow's gpu.Dockerfile to build a good base image.
- On top of that base image, install the R version we want, then use the rocker/rstudio logic to install RStudio.
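A minimal sketch of that structure might look like the following. The base image tag and the stage name are assumptions for illustration, not the exact images we ship, and the CUDA and RStudio installation steps are elided:

```dockerfile
# Base image idea: start from Ubuntu and apply the CUDA setup
# borrowed from TensorFlow's gpu.Dockerfile (details elided).
FROM ubuntu:18.04 AS cuda-base
SHELL ["/bin/bash", "-c"]
# ... install CUDA toolkit, cuDNN, TensorRT as in gpu.Dockerfile ...

# On top of the CUDA base, install R and then RStudio,
# following the rocker/rstudio logic.
FROM cuda-base
# ... install the desired R version, then RStudio Server ...
```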
Everything looked great and the image built successfully, except that the R environment in the newly built Docker image was not right: odd extra characters showed up, caused by these lines of my Dockerfile:
&& echo '\n\
\n# Configure httr to perform out-of-band authentication if HTTR_LOCALHOST \
\n# is not set since a redirect to localhost may not work depending upon \
\n# where this Docker container is running. \
\nif(is.na(Sys.getenv("HTTR_LOCALHOST", unset=NA))) { \
\n options(httr_oob_default = TRUE) \
\n}' >> /usr/lib/R/etc/Rprofile.site \
Why is that? I have used these lines so many times to build our RStudio Docker image.
Why?
It is caused by SHELL ["/bin/bash", "-c"] in the logic I borrowed from TensorFlow's gpu.Dockerfile: the Dockerfile's build shell changed from the default /bin/sh to bash. On Ubuntu, /bin/sh is dash, whose echo builtin interprets backslash escapes such as \n by default, while bash's echo prints them literally unless you pass -e. Those literal \n sequences are the odd characters that ended up in Rprofile.site.
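The difference is easy to reproduce outside Docker. A small demonstration, assuming a Debian/Ubuntu system where /bin/sh is dash:

```shell
# bash's echo builtin does not interpret backslash escapes by default,
# so the \n comes through literally:
bash -c 'echo "line1\nline2"'
# prints: line1\nline2

# dash's echo (the default /bin/sh on Debian/Ubuntu) interprets \n,
# producing a real newline:
sh -c 'echo "line1\nline2"'

# printf interprets \n in its format string under both shells,
# so it is the portable choice:
printf 'line1\nline2\n'
```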
Solution
Change back to sh before continuing with the RStudio part of the Dockerfile:
# change back to sh due to base image was using bash
SHELL ["/bin/sh", "-c"]
Now everything is working!
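An alternative fix, instead of switching the shell back, would be to use printf, whose escape handling is consistent across sh and bash, so the RUN line no longer depends on which SHELL is active. This is a sketch with a shortened snippet and a throwaway target path, not the exact lines from my Dockerfile:

```shell
# printf expands \n in its format string the same way under both
# /bin/sh and /bin/bash, unlike the echo builtin.
printf '\n# Configure httr to use out-of-band auth when HTTR_LOCALHOST is unset\nif (is.na(Sys.getenv("HTTR_LOCALHOST", unset = NA))) {\n  options(httr_oob_default = TRUE)\n}\n' >> /tmp/Rprofile.site
```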