<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gabriel Lavoura</title>
    <description>The latest articles on DEV Community by Gabriel Lavoura (@gabriellavoura).</description>
    <link>https://dev.to/gabriellavoura</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F94036%2F921a22a3-e3be-4466-93ef-e38e5b7882a9.png</url>
      <title>DEV Community: Gabriel Lavoura</title>
      <link>https://dev.to/gabriellavoura</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gabriellavoura"/>
    <language>en</language>
    <item>
      <title>How to install CUDA on WSL2</title>
      <dc:creator>Gabriel Lavoura</dc:creator>
      <pubDate>Fri, 21 Mar 2025 02:20:16 +0000</pubDate>
      <link>https://dev.to/gabriellavoura/how-to-install-cuda-on-wsl2-45g6</link>
      <guid>https://dev.to/gabriellavoura/how-to-install-cuda-on-wsl2-45g6</guid>
      <description>&lt;p&gt;Hello again!&lt;br&gt;
This time I was trying to get my Ollama setup running on the GPU because I noticed that, by default, it was only using my CPU cores. I suspected that my CUDA drivers weren't set up correctly on WSL2.&lt;/p&gt;

&lt;p&gt;Here are the steps I followed to install it on my machine:&lt;/p&gt;

&lt;p&gt;First, I searched on Google and found Microsoft's documentation, which you can access here: &lt;a href="https://learn.microsoft.com/en-us/windows/ai/directml/gpu-cuda-in-wsl" rel="noopener noreferrer"&gt;Enable NVIDIA CUDA on WSL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The first steps involve ensuring you have a WSL instance running with a glibc-based distribution, such as Ubuntu or Debian.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Just a side note: by default, when you install the CUDA drivers on a Windows machine, they also include a fully supported driver for WSL2. Therefore, running precompiled CUDA applications shouldn't be a problem. However, if you need to compile a CUDA application targeting a Linux or WSL2 backend, you must follow these steps to install the latest CUDA Toolkit for x86 Linux.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, let's get started.&lt;/p&gt;
&lt;h2&gt;
  
  
  Get started with NVIDIA CUDA
&lt;/h2&gt;

&lt;p&gt;Updated instructions can be found in the &lt;a href="https://docs.nvidia.com/cuda/wsl-user-guide/index.html" rel="noopener noreferrer"&gt;NVIDIA CUDA on WSL User Guide&lt;/a&gt;, but I will summarize the steps in this article.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1 - Check for WSL updates
&lt;/h2&gt;

&lt;p&gt;Open a PowerShell instance and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wsl.exe &lt;span class="nt"&gt;--update&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that you have the latest WSL kernel available.&lt;/p&gt;
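&lt;p&gt;If you want to confirm what you ended up with, you can check the running kernel from inside your distro (a quick sanity check; the exact version string will vary per machine):&lt;/p&gt;

```shell
# Print the kernel release; on WSL2 it typically contains "microsoft" in the name.
uname -r
```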

&lt;h2&gt;
  
  
  Step 2 - Remove the old GPG Key
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key del 7fa2af80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3 - Install the Linux x86 CUDA Toolkit Using WSL-Ubuntu Package
&lt;/h2&gt;

&lt;p&gt;Detailed instructions can be found at &lt;a href="https://developer.nvidia.com/cuda-downloads?target_os=Linux&amp;amp;target_arch=x86_64&amp;amp;Distribution=WSL-Ubuntu&amp;amp;target_version=2.0&amp;amp;target_type=deb_local" rel="noopener noreferrer"&gt;CUDA Toolkit 12.8 Update 1&lt;/a&gt;. However, for convenience, I will replicate them here. I still recommend checking the URL above to ensure you're using the latest available instructions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4 - Download and Organize the Files
&lt;/h2&gt;

&lt;p&gt;First, download the repository pin file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Move it to the correct location:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mv &lt;/span&gt;cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, download the &lt;code&gt;.deb&lt;/code&gt; package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://developer.download.nvidia.com/compute/cuda/12.8.1/local_installers/cuda-repo-wsl-ubuntu-12-8-local_12.8.1-1_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5 - Install the &lt;code&gt;.deb&lt;/code&gt; Package and Copy the Key
&lt;/h2&gt;

&lt;p&gt;Use &lt;code&gt;dpkg -i&lt;/code&gt; to install the &lt;code&gt;.deb&lt;/code&gt; package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; cuda-repo-wsl-ubuntu-12-8-local_12.8.1-1_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the generated keyring to apt's keyrings directory. The exact command is printed in your shell after the previous step (pay attention!); the &lt;code&gt;*&lt;/code&gt; glob below matches the generated file name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo cp /var/cuda-repo-wsl-ubuntu-12-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6 - Run &lt;code&gt;apt update&lt;/code&gt; and Install CUDA
&lt;/h2&gt;

&lt;p&gt;Now, let's run &lt;code&gt;apt-get update&lt;/code&gt; to refresh the package index and make sure we have the latest versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, install the CUDA Toolkit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;cuda-toolkit-12-8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additional installation options are detailed &lt;a href="https://docs.nvidia.com/cuda/cuda-installation-guide-linux/#meta-packages" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
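&lt;p&gt;To sanity-check the result, you can look for the &lt;code&gt;nvcc&lt;/code&gt; compiler. A small guarded check (the &lt;code&gt;/usr/local/cuda/bin&lt;/code&gt; hint assumes the default install prefix):&lt;/p&gt;

```shell
# Report the CUDA compiler version if nvcc is on PATH; otherwise hint at the usual location.
if command -v nvcc >/dev/null 2>/dev/null; then
  nvcc --version
else
  echo "nvcc not found on PATH; try adding /usr/local/cuda/bin to your PATH"
fi
```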

&lt;p&gt;That's it! You're all set.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Running Ollama on Docker: A Quick Guide</title>
      <dc:creator>Gabriel Lavoura</dc:creator>
      <pubDate>Thu, 13 Mar 2025 04:04:28 +0000</pubDate>
      <link>https://dev.to/gabriellavoura/running-ollama-on-docker-a-quick-guide-475l</link>
      <guid>https://dev.to/gabriellavoura/running-ollama-on-docker-a-quick-guide-475l</guid>
      <description>&lt;p&gt;Hi it's me again! Over the past few days, I've been testing multiples ways to work with LLMs locally, and so far, Ollama was the best tool (ignoring UI and other QoL aspects) for setting up a fast environment to test code and features.&lt;br&gt;
I've tried &lt;a href="https://github.com/nomic-ai/gpt4all" rel="noopener noreferrer"&gt;GPT4ALL&lt;/a&gt; and other tools before, but they seem overly bloated when the goal is simply to set up a running model to connect with a &lt;a href="https://www.langchain.com/" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt; API (on Windows with WSL).&lt;/p&gt;

&lt;p&gt;Ollama provides an extremely straightforward experience. Because of this, today I decided to install and use it via Docker containers, and it's surprisingly easy and powerful.&lt;/p&gt;

&lt;p&gt;With just five commands, we can set up the environment. Let's take a look.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1 - Pull the latest Ollama Docker image
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ollama/ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you want to download an older version, you can specify the corresponding tag after the container name. By default, the &lt;code&gt;:latest&lt;/code&gt; tag is downloaded. You can check a list of available Ollama tags &lt;a href="https://hub.docker.com/r/ollama/ollama/tags" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2 - Create a Docker network
&lt;/h2&gt;

&lt;p&gt;Since we'll typically use and connect multiple containers, we need to specify a shared communication channel. To achieve this, it's good practice to create a Docker &lt;a href="https://docs.docker.com/network/" rel="noopener noreferrer"&gt;network&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create &amp;lt;network-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check a list of created Docker networks by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3 - Run the Ollama container
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we're going to run Ollama with CPU only. If you need to use a GPU, the &lt;a href="https://hub.docker.com/r/ollama/ollama" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; provides a step-by-step guide.&lt;/p&gt;

&lt;p&gt;The command to run the container is also listed in the documentation, but we need to specify which network it should connect to, so we must add the &lt;code&gt;--network&lt;/code&gt; parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--network&lt;/span&gt; &amp;lt;network-name&amp;gt; &lt;span class="nt"&gt;-v&lt;/span&gt; ollama:/root/.ollama &lt;span class="nt"&gt;-p&lt;/span&gt; 11434:11434 &lt;span class="nt"&gt;--name&lt;/span&gt; ollama ollama/ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4 - Run commands inside the Ollama container
&lt;/h2&gt;

&lt;p&gt;To download Ollama models, we need to run the &lt;code&gt;ollama pull&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;To do this, we simply execute the command below, which runs a command inside the container in interactive mode (the &lt;code&gt;-it&lt;/code&gt; parameter).&lt;br&gt;
Then, we run &lt;code&gt;ollama pull&lt;/code&gt; to download the quantized &lt;code&gt;llama3.2:latest&lt;/code&gt; (3B) model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; ollama ollama pull llama3.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visit the Ollama website to check the list of &lt;a href="https://ollama.com/search" rel="noopener noreferrer"&gt;available models&lt;/a&gt;. Now, wait for the download to finish.&lt;/p&gt;

&lt;p&gt;You'll get this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffquglga0f92236szqw73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffquglga0f92236szqw73.png" alt="Download model progress bar screenshot" width="705" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5 - Check the downloaded models
&lt;/h2&gt;

&lt;p&gt;To list the locally available models, just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get this output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fas4wsn0486h12gbrdzsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fas4wsn0486h12gbrdzsl.png" alt="List of available downloaded models" width="607" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So you're done! Now you have Ollama running (using only the CPU), with the &lt;code&gt;llama3.2:latest&lt;/code&gt; model available locally. To run it with a GPU, check the documentation link in &lt;strong&gt;Step 3&lt;/strong&gt;.&lt;/p&gt;
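&lt;p&gt;As a quick smoke test, you can also hit Ollama's REST API from the host. This assumes the container from Step 3 is up with port 11434 published; the guard keeps the command harmless when the server isn't reachable:&lt;/p&gt;

```shell
# Send a one-off prompt to the generate endpoint; print a note if the server is down.
if curl -fsS --max-time 2 http://localhost:11434/ >/dev/null 2>/dev/null; then
  curl -s http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'
else
  echo "Ollama is not reachable on localhost:11434"
fi
```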

&lt;p&gt;I'll share more short notes on working with Ollama and LangChain in the next few days. Stay tuned!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>langchain</category>
      <category>llm</category>
    </item>
    <item>
      <title>How to Setup Ollama on Google Colab</title>
      <dc:creator>Gabriel Lavoura</dc:creator>
      <pubDate>Tue, 11 Mar 2025 04:32:17 +0000</pubDate>
      <link>https://dev.to/gabriellavoura/setup-ollama-on-google-colab-4hng</link>
      <guid>https://dev.to/gabriellavoura/setup-ollama-on-google-colab-4hng</guid>
      <description>&lt;p&gt;Ok, this is just another "how-to on Google Colab" tutorial... but the purpose of this post is more like a note for my future self.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 - Install colab-xterm
&lt;/h2&gt;

&lt;p&gt;First, we need to have access to a terminal within the Google Colab code cell. This way, we can install Ollama using the shell script approach.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;colab-xterm  
%load_ext colabxterm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script will install the &lt;a href="https://github.com/InfuseAI/colab-xterm" rel="noopener noreferrer"&gt;colab-xterm&lt;/a&gt; package (what an awesome tool! Thanks @popcornylu, you're awesome!), which allows us to use a terminal within our Colab notebook, even on the free tier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0a5ehdiqttoov0xlijl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0a5ehdiqttoov0xlijl.gif" alt="thanks mr. popcornylu, applauses" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2 - Open a terminal and install Ollama
&lt;/h2&gt;

&lt;p&gt;To open a terminal, just run the following command in a new cell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;%xterm
&lt;span class="c"&gt;#curl -fsSL https://ollama.com/install.sh | sh&lt;/span&gt;
&lt;span class="c"&gt;#ollama serve &amp;amp; ollama pull llama3.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, paste the curl and ollama commands into the terminal and wait. Once it finishes, you're done!&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3 - Test the environment
&lt;/h2&gt;

&lt;p&gt;Create a new cell and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;ollama list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is good, it should return the name, ID, size, and last modified date of the model, as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkw9cgr8pbffkkuhfw805.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkw9cgr8pbffkkuhfw805.png" alt="A screenshot from Google Colab showing the result of ollama list" width="666" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, you should have a running Ollama server with the &lt;code&gt;llama3.2:latest&lt;/code&gt; model (3B parameters) ready to be used.&lt;/p&gt;
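&lt;p&gt;You can also try a one-shot prompt straight from a cell (prefix the command with &lt;code&gt;!&lt;/code&gt; in Colab). The guard below only avoids an error when Ollama isn't installed yet, and the prompt text is just an example:&lt;/p&gt;

```shell
# Run a single prompt against the local model; print a note if ollama is missing.
if command -v ollama >/dev/null 2>/dev/null; then
  ollama run llama3.2 "Say hello in five words"
else
  echo "ollama is not installed in this environment"
fi
```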

&lt;h2&gt;
  
  
  Step 4 - Clean up
&lt;/h2&gt;

&lt;p&gt;Once your session is over, you might want to remove the Ollama binary to free up space, so create a new cell and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /usr/local/bin/ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it's done!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow9au2favoc5edrz42uw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow9au2favoc5edrz42uw.gif" alt="A Looney Tunes gif" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PS: A note from the past: you're welcome, future Gabriel! :)&lt;/p&gt;

</description>
      <category>llm</category>
      <category>googlecolab</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Como usar Matplotlib com WSL2</title>
      <dc:creator>Gabriel Lavoura</dc:creator>
      <pubDate>Sun, 31 Jan 2021 04:59:11 +0000</pubDate>
      <link>https://dev.to/gabriellavoura/como-usar-matplotlib-com-wsl2-4ijn</link>
      <guid>https://dev.to/gabriellavoura/como-usar-matplotlib-com-wsl2-4ijn</guid>
      <description>&lt;p&gt;Nos últimos meses tenho migrado meu ambiente de desenvolvimento para &lt;a href="https://docs.microsoft.com/pt-br/windows/wsl/install-win10" rel="noopener noreferrer"&gt;WSL2&lt;/a&gt; (Windows Subsystem for Linux 2), sempre fui um usuário assíduo de &lt;em&gt;dual boot&lt;/em&gt;, mas tenho trabalhado em diversos projetos em paralelo, sendo necessário mudar de sistema varias vezes ao dia, seja devido ao &lt;em&gt;Home Office&lt;/em&gt; ou por reuniões.&lt;br&gt;
Neste caso Windows torna-se um sistema amigável e com "&lt;em&gt;Batteries Included&lt;/em&gt;", porém sinto falta da liberdade e agilidade que o pinguim me oferece.&lt;br&gt;
WSL2 veio para suprir essa demanda, atualmente tenho utilizado como sistema principal de desenvolvimento, a experiência tem sido muito positiva, algumas horas para configurar tudo de primeira, mas para a próxima vez que for necessário reinstalar já está tudo automatizado, utilizando &lt;em&gt;shell script&lt;/em&gt; para configurar o ambiente.&lt;br&gt;
Hoje precisei rodar um script Python direto pelo WSL2, um plot simples de uma distribuição gaussiana e me deparei com um problema clássico:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix is quite simple; let's get to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pick an X server of your choice!
&lt;/h2&gt;

&lt;p&gt;I tested it with Xming and with &lt;a href="https://sourceforge.net/projects/vcxsrv/" rel="noopener noreferrer"&gt;VcXsrv&lt;/a&gt;; both worked like a charm. I'll stick with Xming in this post.&lt;/p&gt;

&lt;p&gt;Download the latest available version of &lt;a href="https://sourceforge.net/projects/xming/" rel="noopener noreferrer"&gt;Xming&lt;/a&gt; for Windows and just install it with next, next, next, install. I recommend allowing the tool through the Windows firewall to avoid headaches.&lt;/p&gt;

&lt;p&gt;Now you need to pay a bit of attention; here are the illustrated steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open XLaunch and choose &lt;em&gt;Multiple Windows&lt;/em&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0lxt3uxud96b1p5q1mlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0lxt3uxud96b1p5q1mlc.png" alt="Choose Multiple Windows" width="497" height="383"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave this screen as it is.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ft1p3gm0qmnyxtt82xyei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ft1p3gm0qmnyxtt82xyei.png" alt="Leave it as is, just click next" width="499" height="386"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the box named "No Access Control".&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9hhmcai2e90kyx7ss2xy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9hhmcai2e90kyx7ss2xy.png" alt="Check the box named No Access Control" width="494" height="385"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now just click next, finish, and you're done!&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2tgc9ip8q89p061nhwsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2tgc9ip8q89p061nhwsh.png" alt="Now just click next, finish, and you're done!" width="499" height="387"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Let's move to WSL2
&lt;/h2&gt;

&lt;p&gt;In this case I'm using the Ubuntu 20.04 image available in the Microsoft Store.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install python3-tk:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;python3-tk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Point the DISPLAY variable at the IP listed in the resolv.conf file. I recommend putting this line in your .zshrc / .bashrc file so you don't have to enter it again, since the WSL2 IP is not fixed.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DISPLAY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;grep &lt;/span&gt;nameserver /etc/resolv.conf | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/nameserver //'&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;:0"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
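&lt;p&gt;To check that the variable resolves to something sensible, you can print the value the snippet above computes (the host IP changes per boot; the &lt;code&gt;:0&lt;/code&gt; suffix selects the first X display):&lt;/p&gt;

```shell
# Build the DISPLAY value from the nameserver entry in /etc/resolv.conf and show it.
DISPLAY="$(grep nameserver /etc/resolv.conf | sed 's/nameserver //'):0"
echo "$DISPLAY"
```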



&lt;p&gt;And that's it, relatively simple and efficient. Below is an image of the final result:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqq4rvp4h1v0agjvsewlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqq4rvp4h1v0agjvsewlz.png" alt="WSL2 with matplotlib result" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, I couldn't leave out the code I used; I took one of the &lt;a href="https://matplotlib.org/3.2.1/tutorials/introductory/sample_plots.html" rel="noopener noreferrer"&gt;matplotlib&lt;/a&gt; examples to illustrate this post:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib&lt;/span&gt;
  &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;
  &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

  &lt;span class="c1"&gt;# Data for plotting
&lt;/span&gt;  &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;arange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="n"&gt;fig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ax&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subplots&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="n"&gt;ax&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;plot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="n"&gt;ax&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;xlabel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;time (s)&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ylabel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;voltage (mV)&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;About as simple as it gets, folks&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;ax&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

  &lt;span class="n"&gt;fig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;savefig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So that's it; I hope this helped and saved you a bit of your time! :)&lt;/p&gt;

</description>
      <category>wsl2</category>
      <category>pyplot</category>
      <category>python</category>
      <category>matplotlib</category>
    </item>
  </channel>
</rss>
