
Naufan Rusyda Faikar


Using OpenCV on Windows Subsystem for Linux

I am not going to walk you through the whole story of using GPU-supported OpenCV in Python on WSL2, because I have never been a very good storyteller. Instead, I am going to tell you the interesting parts of doing it as a Fedora user with a spring in my step. Let us get ready to fly towards the sky!

Installing the Requirements

Please note that it is important to first understand how CUDA on WSL works, as explained on the NVIDIA blog.

First, at the time of writing, there is no GPU support for WSL2 by default. So we have to sign up as a Windows Insider participant, update Windows from the System Settings, and then reboot the machine.


Second, we have to sign in to our NVIDIA Developer account, install the NVIDIA driver for CUDA on WSL, and then reboot one more time.



> nvidia-smi.exe
Fri Nov 20 23:08:43 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.12       Driver Version: 465.12       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 1050   WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   50C    P8    N/A /  N/A |     75MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
...



Third, we have to install the CUDA toolkit along with cuDNN, and then add it to the Fedora system path:



$ curl https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run -o cuda_11.1.1_455.32.00_linux.run
...

$ sudo sh cuda_11.1.1_455.32.00_linux.run
...

$ curl https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.0.5/11.1_20201106/RHEL8_1-x64/libcudnn8-8.0.5.39-1.cuda11.1.x86_64.rpm -o libcudnn8-8.0.5.39-1.cuda11.1.x86_64.rpm
...

$ sudo rpm -ivh libcudnn8-8.0.5.39-1.cuda11.1.x86_64.rpm
...

$ echo 'export PATH="/usr/local/cuda-11.1/bin:$PATH"' >> ~/.zshrc # or .bashrc if you are using BASH
$ echo 'export LD_LIBRARY_PATH="/usr/local/cuda-11.1/lib64"' >> ~/.zshrc
$ source ~/.zshrc



Fourth, we have to make sure that all of the following necessary build tools are installed:



$ sudo dnf install -y cmake python-devel ffmpeg-devel gstreamermm-devel
...



Fifth, it is highly recommended to install OpenCV into a virtual environment rather than directly into the system. So we need to create one, and do not forget to install the numpy module in it:



$ python3.8 -m pip install virtualenv # Fedora 33 has made Python 3.9 the default
...

$ pwd # just to see where we are
/home/naru

$ cd Researches # or wherever you have decided
$ python3.8 -m virtualenv venv # or whatever name you have preferred
$ source venv/bin/activate

$ pip install numpy
...



Sixth, we have to download the source code of OpenCV and its extra modules, and then extract them:



$ cd ..
$ mkdir Downloads
$ cd Downloads

$ curl -L https://github.com/opencv/opencv/archive/4.5.0.tar.gz | tar xz
...

$ curl -L https://github.com/opencv/opencv_contrib/archive/4.5.0.tar.gz | tar xz
...



Once everything is set up, we are ready for the real journey, starting from here!

Installing OpenCV

Seventh, we are going to compile everything we need. But first, we have to adjust the following arguments to suit our system:

  • CUDA_ARCH_BIN specifies the compute capability of the NVIDIA GPU in use; look yours up on the official website. The GeForce GTX 1050 shown above is compute capability 6.1, hence CUDA_ARCH_BIN=6.1 below.
  • OPENCV_EXTRA_MODULES_PATH specifies the path to the source code of the extra modules relative to the build directory. It is also possible to use an absolute path.
  • PYTHON_EXECUTABLE determines which Python interpreter we are going to use.
  • PYTHON3_NUMPY_INCLUDE_DIRS points to the directory containing the numpy C headers inside our virtual environment; see the short snippet after this list for one way to find it.
  • CMAKE_INSTALL_PREFIX defines the installation prefix, that is, where the compiled files will be placed when we run make install.
  • ../opencv-4.5.0 defines the path to the source code of the main modules.
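
If we are not sure where those numpy headers live, here is a minimal sketch of one way to find out, run from the Python interpreter inside the activated virtual environment; the path printed below is simply how it looks on this machine and will differ on yours.


>>> import numpy
>>> numpy.get_include()   # directory containing the numpy C headers
'/home/naru/Researches/venv/lib64/python3.8/site-packages/numpy/core/include'


With the paths confirmed, we can configure and build: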


$ mkdir --parents build
$ cd build

$ cmake -D CUDA_ARCH_BIN=6.1 \
> -D WITH_CUDA=ON \
> -D WITH_CUDNN=ON \
> -D OPENCV_DNN_CUDA=ON \
> -D ENABLE_FAST_MATH=1 \
> -D CUDA_FAST_MATH=1 \
> -D WITH_CUBLAS=1 \
> -D OPENCV_ENABLE_NONFREE=ON \
> -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib-4.5.0/modules \
> -D PYTHON_EXECUTABLE=$HOME/Researches/venv/bin/python \
> -D PYTHON3_NUMPY_INCLUDE_DIRS=$HOME/Researches/venv/lib64/python3.8/site-packages/numpy/core/include \
> -D CMAKE_INSTALL_PREFIX=$HOME/Researches/venv \
> -D CUDNN_LIBRARY=/usr/lib64/libcudnn.so.8 \
> -D CUDNN_INCLUDE_DIR=/usr/include \
> ../opencv-4.5.0
...

$ make
...



Get a cup of coffee or two. But I would prefer tea over coffee. Hahaha... Once it is done, eighth, install it using a simple command:



$ sudo make install
...



Ta-da! We are done! You can run it and see if everything works. Well, I do not have enough experience to give you examples of using any of the CUDA OpenCV modules, but a minimal sketch follows the quick check below.



>>> import cv2
>>> cv2.cuda.getDevice()
0
>>> cv2.cuda.getCudaEnabledDeviceCount()
1


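Just to give a taste, here is a minimal, untested sketch of what pushing an image through one of the CUDA modules could look like. It assumes the cudaimgproc module was built (it should have been, with the configuration above) and that there is some image around to read; the file name image.jpg is only a placeholder.


>>> import cv2
>>> img = cv2.imread('image.jpg')                        # placeholder file; returns None if it does not exist
>>> gpu = cv2.cuda_GpuMat()                              # allocate a matrix on the GPU
>>> gpu.upload(img)                                      # copy the image from host to device
>>> gray = cv2.cuda.cvtColor(gpu, cv2.COLOR_BGR2GRAY)    # colour conversion runs on the GPU
>>> result = gray.download()                             # copy the result back as a numpy array


Also, cv2.getBuildInformation() prints a long report in which the CUDA and cuDNN entries should now read YES.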

Ninth, unfortunately, at the moment there is no audio or video device support in WSL2 by default. But we could try to hack a few things to make it happen; there is a guide on the Ubuntu wiki for that. For now, I do not really need any access to audio/video input devices. Maybe sometime in the near future I will write a post about it. But I seriously doubt that I will.

Tips

First, we cannot rely on nvidia-smi for more than a few things; it is still broken in many respects, both on Windows and in WSL2. Anyway, if you want it but cannot find the nvidia-smi.exe command, just add C:\Program Files\NVIDIA Corporation\NVSMI to the Windows system path and then reopen Windows PowerShell or the Command Prompt. Still, there is not much point in having it on Windows!

Second, we may have noticed an error message after installing the CUDA toolkit, telling us that it failed to install the CUDA driver. There is no need to worry even the slightest bit: we do not need that driver inside WSL2, and never will, because CUDA on WSL uses the driver installed on the Windows side.



===========
= Summary =
===========

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-11.1/
Samples:  Installed in /home/naru/, but missing recommended libraries

Please make sure that
 -   PATH includes /usr/local/cuda-11.1/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-11.1/lib64, or, add /usr/local/cuda-11.1/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-11.1/bin
***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 455.00 is required for CUDA 11.1 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
    sudo <CudaInstaller>.run --silent --driver

Logfile is /var/log/cuda-installer.log



Third, we might be confused when we cannot find any libcudnn*.so.* files in /usr/local/cuda-11.1/lib64 after installing the libcudnn*.rpm package. The reason is that the RPM installs all the compiled library files into a different directory, namely /usr/lib64, which is why the cmake command above points CUDNN_LIBRARY to /usr/lib64/libcudnn.so.8.



$ rpm -qlp libcudnn8-8.0.5.39-1.cuda11.1.x86_64.rpm
...
/usr/lib64/libcudnn.so.8
/usr/lib64/libcudnn.so.8.0.5
/usr/lib64/libcudnn_adv_infer.so.8
/usr/lib64/libcudnn_adv_infer.so.8.0.5
/usr/lib64/libcudnn_adv_train.so.8
/usr/lib64/libcudnn_adv_train.so.8.0.5
/usr/lib64/libcudnn_cnn_infer.so.8
/usr/lib64/libcudnn_cnn_infer.so.8.0.5
/usr/lib64/libcudnn_cnn_train.so.8
/usr/lib64/libcudnn_cnn_train.so.8.0.5
/usr/lib64/libcudnn_ops_infer.so.8
/usr/lib64/libcudnn_ops_infer.so.8.0.5
/usr/lib64/libcudnn_ops_train.so.8
/usr/lib64/libcudnn_ops_train.so.8.0.5



If we do not like this behaviour, we could download the archived tarball instead and then extract it into the CUDA directory manually:



$ curl https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.0.5/11.1_20201106/cudnn-11.1-linux-x64-v8.0.5.39.tgz -o cudnn-11.1-linux-x64-v8.0.5.39.tgz
...

$ tar -xzvf cudnn-11.1-linux-x64-v8.0.5.39.tgz
...

$ sudo cp cuda/include/cudnn*.h /usr/local/cuda-11.1/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda-11.1/lib64
$ sudo chmod a+r /usr/local/cuda-11.1/include/cudnn*.h /usr/local/cuda-11.1/lib64/libcudnn*


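Either way, a quick sanity check that the dynamic loader can actually find cuDNN can be done from Python with ctypes; this is just a small sketch of my own, not something OpenCV requires. With the RPM route the bare soname is enough, since /usr/lib64 is a default search path; with the manual route, the LD_LIBRARY_PATH export from earlier has to be in effect.


>>> import ctypes
>>> lib = ctypes.CDLL('libcudnn.so.8')   # raises OSError if the loader cannot find the library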

Fourth, to list all the available OpenCV build options, in case we want to configure more, we could run cmake -LA from the build directory:



$ cmake -LA
...
BUILD_CUDA_STUBS:BOOL=OFF
BUILD_DOCS:BOOL=OFF
BUILD_EXAMPLES:BOOL=OFF
BUILD_IPP_IW:BOOL=ON
BUILD_ITT:BOOL=ON
BUILD_JASPER:BOOL=OFF
BUILD_JAVA:BOOL=ON
BUILD_JPEG:BOOL=OFF
BUILD_LIST:STRING=
BUILD_OPENEXR:BOOL=OFF
BUILD_OPENJPEG:BOOL=OFF
BUILD_PACKAGE:BOOL=ON
BUILD_PERF_TESTS:BOOL=ON
BUILD_PNG:BOOL=OFF
BUILD_PROTOBUF:BOOL=ON
BUILD_SHARED_LIBS:BOOL=ON
BUILD_TBB:BOOL=OFF
BUILD_TESTS:BOOL=ON
BUILD_TIFF:BOOL=OFF
BUILD_USE_SYMLINKS:BOOL=OFF
BUILD_WEBP:BOOL=OFF
BUILD_WITH_DEBUG_INFO:BOOL=OFF
BUILD_WITH_DYNAMIC_IPP:BOOL=OFF
BUILD_ZLIB:BOOL=OFF
...





