
Mạnh Vũ

Set up the Nx library and EXLA to run Nx/Axon with CUDA

Steps to set up and run machine learning with Axon, or a simple Nx script, using EXLA with CUDA (GPU) on Linux (Ubuntu).

1. Set up CUDA on the local machine.

If you want to run a model on the GPU (Linux/Ubuntu), you need to set up the CUDA environment with the following steps.

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda-repo-ubuntu2204-11-8-local_11.8.0-520.61.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-11-8-local_11.8.0-520.61.05-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-11-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda

Note: remember to check that the installed CUDA toolkit version (nvcc) matches the CUDA version supported by the driver (nvidia-smi) with these commands:

nvcc --version
nvidia-smi

2. Set up Livebook locally for easy access to the local environment.

git clone https://github.com/livebook-dev/livebook.git
cd livebook
mix deps.get --only prod

# Run the Livebook server
MIX_ENV=prod mix phx.server

Access Livebook from the URL printed in the terminal.

(Screenshot: Livebook site)

3. Create & set up a new Livebook.

Add a setup cell in Livebook for running EXLA with CUDA. Set XLA_TARGET to match your installed CUDA version (for example, cuda118 for the 11.8 toolkit installed above, or cuda120 for CUDA 12.x):

Mix.install(
  [
    # ...
    {:nx, "~> 0.7"},
    {:exla, "~> 0.7"}
  ],
  config: [
    nx: [default_backend: EXLA.Backend]
  ],
  system_env: [
    XLA_TARGET: "cuda120"
  ]
)

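After this setup cell runs, a quick sanity check (a minimal sketch, not part of the original post) is to confirm that the EXLA backend is the default and that tensors land on the CUDA device:

# Should return {EXLA.Backend, []} once the config above is applied
Nx.default_backend()

# Inspecting a tensor shows which device it lives on,
# e.g. EXLA.Backend<cuda:0, ...> when CUDA is active
t = Nx.iota({3, 3}, type: :f32)
IO.inspect(t)

# A small computation to confirm the backend works end to end
Nx.sum(Nx.multiply(t, t))

If the inspected tensor reports a host device instead of cuda, double-check that XLA_TARGET matches your CUDA installation and that the GPU is visible via nvidia-smi.
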
4. Pass the EXLA compiler to Axon if needed.

loop
|> Axon.Loop.run(test_pipeline, trained_model_state, compiler: EXLA)

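For context, here is a hedged sketch of where such a call typically sits. The names model, train_pipeline, test_pipeline, and trained_model_state are placeholders (not defined in the original post) for your own Axon model, training/test data streams, and the state returned by the training loop. The same compiler: EXLA option also works on the training loop:

# Training loop: `compiler: EXLA` JIT-compiles the loop so the steps run on the GPU
trained_model_state =
  model
  |> Axon.Loop.trainer(:categorical_cross_entropy, :adam)
  |> Axon.Loop.run(train_pipeline, %{}, epochs: 10, compiler: EXLA)

# Evaluation loop: pass the same option when running metrics over the test data
model
|> Axon.Loop.evaluator()
|> Axon.Loop.metric(:accuracy)
|> Axon.Loop.run(test_pipeline, trained_model_state, compiler: EXLA)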