<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: a2n</title>
    <description>The latest articles on DEV Community by a2n (@a2nof).</description>
    <link>https://dev.to/a2nof</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3590982%2F55c5f386-4dd6-4c99-ad5a-7b96a929f70f.png</url>
      <title>DEV Community: a2n</title>
      <link>https://dev.to/a2nof</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/a2nof"/>
    <language>en</language>
    <item>
      <title>Enabling dead key on Ghostty/KDE</title>
      <dc:creator>a2n</dc:creator>
      <pubDate>Wed, 29 Apr 2026 11:12:15 +0000</pubDate>
      <link>https://dev.to/a2nof/enabling-dead-key-on-ghosttykde-2n10</link>
      <guid>https://dev.to/a2nof/enabling-dead-key-on-ghosttykde-2n10</guid>
      <description>&lt;p&gt;It's been some month since this issue land on GTK app: &lt;a href="https://github.com/ghostty-org/ghostty/discussions/8899" rel="noopener noreferrer"&gt;https://github.com/ghostty-org/ghostty/discussions/8899&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It hit me in particular: I wanted to switch to Ghostty, and whenever I typed &lt;code&gt;~&lt;/code&gt; I was left with a blank space. &lt;/p&gt;

&lt;p&gt;So, according to the accepted answer, the most reliable way of fixing this on KDE while waiting for an upstream fix is the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;first you need to install all the packages needed for &lt;code&gt;fcitx5&lt;/code&gt; to work&lt;/li&gt;
&lt;li&gt;then update the config&lt;/li&gt;
&lt;li&gt;log out / log in&lt;/li&gt;
&lt;li&gt;you can now enjoy your dead keys working again!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;pacman &lt;span class="nt"&gt;-S&lt;/span&gt; fcitx5 fcitx5-qt fcitx5-gtk fcitx5-configtool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create or open the config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nvim ~/.config/environment.d/im.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you can put the config in the &lt;code&gt;im.conf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;GTK_IM_MODULE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;fcitx
&lt;span class="nv"&gt;QT_IM_MODULE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;fcitx
&lt;span class="nv"&gt;XMODIFIERS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;@im&lt;span class="o"&gt;=&lt;/span&gt;fcitx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, log out / log in and everything should be fine :)&lt;/p&gt;

&lt;p&gt;If needed, you can check that the layout &lt;code&gt;fcitx5&lt;/code&gt; uses is the same as the one KDE is using. You may also want to auto-start it on session startup, but for me that was already the case by default!&lt;/p&gt;
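
&lt;p&gt;As a quick sanity check after logging back in (a minimal sketch, assuming a POSIX shell), the variables from &lt;code&gt;im.conf&lt;/code&gt; should be visible in your session and &lt;code&gt;fcitx5&lt;/code&gt; should be running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;echo "$GTK_IM_MODULE $QT_IM_MODULE $XMODIFIERS"   # expected: fcitx fcitx @im=fcitx
pgrep -a fcitx5                                    # a running fcitx5 process should show up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;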

&lt;p&gt;See ya!&lt;/p&gt;

</description>
      <category>archlinux</category>
      <category>kde</category>
      <category>ghostty</category>
    </item>
    <item>
      <title>Local coding with AI: It works… Just not on my laptop</title>
      <dc:creator>a2n</dc:creator>
      <pubDate>Sat, 25 Apr 2026 14:04:22 +0000</pubDate>
      <link>https://dev.to/a2nof/local-coding-with-ai-it-works-just-not-on-my-laptop-1n7m</link>
      <guid>https://dev.to/a2nof/local-coding-with-ai-it-works-just-not-on-my-laptop-1n7m</guid>
      <description>&lt;h2&gt;
  
  
  Digression
&lt;/h2&gt;

&lt;p&gt;It's been a while since AI hit the public in a way where some users plan vacations with it and use it to do basic additions. That was pretty much the same time that we (developers) discovered a new tool that can help (&lt;del&gt;to ruin the market&lt;/del&gt;) us work a little bit faster.&lt;/p&gt;

&lt;p&gt;I have a bit of a love/hate relationship with this tool, mainly because I wonder where we (as humanity) are going with our "pollution" problem. But I also love this tool because, you know, it has lowered many barriers for smart people to be able to build stuff. And I love the idea that people, whatever their country and background, are able to build something for fun or otherwise. For me that is also the main purpose of the internet: sharing knowledge, being able to learn something that isn't accessible where we live, or because of money.&lt;/p&gt;

&lt;p&gt;Another subject is privacy.&lt;br&gt;
And what a great privacy job we get with AI! All the data is sent somewhere, and nobody (at least nobody honest) has read the terms of use and the other boring stuff. But that got me thinking: right now we have all the tools we need to run a local model to help us work with code, so why not. &lt;/p&gt;
&lt;h2&gt;
  
  
  What we're doing
&lt;/h2&gt;

&lt;p&gt;So the plan was pretty basic: we use &lt;a href="https://github.com/ggml-org/llama.cpp" rel="noopener noreferrer"&gt;&lt;code&gt;llama.cpp&lt;/code&gt;&lt;/a&gt; to get a local model working as an API (basically), and we use &lt;a href="https://pi.dev/" rel="noopener noreferrer"&gt;&lt;code&gt;pi&lt;/code&gt;&lt;/a&gt; (or &lt;a href="https://opencode.ai/" rel="noopener noreferrer"&gt;&lt;code&gt;opencode&lt;/code&gt;&lt;/a&gt;) to get the "chat"-like experience.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting up
&lt;/h2&gt;

&lt;p&gt;I'm using Arch (btw), but all the steps here are pretty much the same for any distro or OS.&lt;/p&gt;

&lt;p&gt;The first step is to install all the tools we need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yay &lt;span class="nt"&gt;-S&lt;/span&gt; llama.cpp-vulkan
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @mariozechner/pi-coding-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first command installs the version of &lt;code&gt;llama&lt;/code&gt; that is optimized for my machine; the AUR offers a huge variety of optimized builds.&lt;/p&gt;

&lt;p&gt;The second one installs the &lt;code&gt;pi&lt;/code&gt; harness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the model
&lt;/h2&gt;

&lt;p&gt;Like the rest of this little experiment, running the model with &lt;code&gt;llama&lt;/code&gt; is actually very simple; the only "hard" step is choosing the model.&lt;/p&gt;

&lt;p&gt;Sorry in advance, no big reveal here, no "blow your socks off" website: I chose this model because other people have already used it, and it works reasonably well. Some websites propose rankings for models and I think that's great, but we also know for a fact that rating LLMs is a pretty hard task, mainly because of the nature of how LLMs work. &lt;/p&gt;

&lt;p&gt;Also, this is a fun little experiment, so if we really have to choose we can always switch to another model later :)&lt;/p&gt;

&lt;p&gt;Let's get back to action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;llama-server &lt;span class="nt"&gt;-hf&lt;/span&gt; AaryanK/Qwen3.6-27B-GGUF:Q4_K_M &lt;span class="nt"&gt;--port&lt;/span&gt; 8080 &lt;span class="nt"&gt;--host&lt;/span&gt; 127.0.0.1 &lt;span class="nt"&gt;-c&lt;/span&gt; 8192 &lt;span class="nt"&gt;-t&lt;/span&gt; 8 &lt;span class="nt"&gt;-ngl&lt;/span&gt; 40
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I guess you already know what most of this command does, but let me explain the weird ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-c&lt;/code&gt; controls the context size&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-t&lt;/code&gt; controls the number of CPU threads&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-ngl&lt;/code&gt; controls GPU offload (how many layers run on the GPU)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the most important settings here, because they are the values we need to tweak and test to get the most performance out of our model/PC pairing.&lt;/p&gt;

&lt;p&gt;If everything is working correctly you should now see something like this in your console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;main: model loaded
main: server is listening on http://127.0.0.1:8080
main: starting the main loop...
srv  update_slots: all slots are idle
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
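
&lt;p&gt;Before wiring anything up, you can also poke the server directly; &lt;code&gt;llama-server&lt;/code&gt; exposes an OpenAI-compatible API, so a quick check against the port chosen above looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://127.0.0.1:8080/v1/models
# or fire a one-shot chat request:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hello"}]}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;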



&lt;h2&gt;
  
  
  LLM talks to Pi
&lt;/h2&gt;

&lt;p&gt;OK, right after that we have to actually get the model "linked" to our chat. With &lt;code&gt;pi&lt;/code&gt; that's pretty simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/.pi/agent
nvim ~/.pi/agent/models.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;~/.pi/agent/models.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"providers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"llama-cpp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"baseUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://127.0.0.1:8080/v1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"api"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai-completions"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"apiKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"none"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"models"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Qwen3.6-27B"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"contextWindow"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8192&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"maxTokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4096&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here the two most important values are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;contextWindow: how much the model can remember&lt;/li&gt;
&lt;li&gt;maxTokens: response size limit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can now check if the model is actually linked by launching &lt;code&gt;pi&lt;/code&gt; and selecting it via the &lt;code&gt;/model&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn60seidnirk96z36wm49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn60seidnirk96z36wm49.png" alt=" " width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Say "hello" and go for a walk
&lt;/h2&gt;

&lt;p&gt;Okay, my machine is a TUXEDO notebook (InfinityBook Pro something) without a discrete graphics card, and certainly without an Apple Silicon chip.&lt;/p&gt;

&lt;p&gt;So here's the reality check for me: asking the model "what's the day?" takes a really long time to actually get a response. And a pretty damn big chunk of this PC's compute capacity:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dfx7h25bcem4h9bft4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dfx7h25bcem4h9bft4w.png" alt=" " width="800" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  So what next?
&lt;/h2&gt;

&lt;p&gt;For me, I guess this is the end. Without a good PC for this kind of compute I'm pretty much not able to use a local model, or only a much smaller one, so the results are not going to be as good compared to a Claude Code or a ChatGPT. And I don't want to set up a VPS for that (maybe another time?).&lt;/p&gt;

&lt;h2&gt;
  
  
  Mellum
&lt;/h2&gt;

&lt;p&gt;Another task where a local model can help: code completion.&lt;/p&gt;

&lt;p&gt;I'm going to use &lt;a href="https://www.jetbrains.com/mellum/" rel="noopener noreferrer"&gt;Mellum&lt;/a&gt; from JetBrains to test that.&lt;/p&gt;

&lt;p&gt;The setup is as easy as before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;llama-server &lt;span class="nt"&gt;-hf&lt;/span&gt; JetBrains/Mellum-4b-base-gguf &lt;span class="nt"&gt;--port&lt;/span&gt; 8080 &lt;span class="nt"&gt;--host&lt;/span&gt; 127.0.0.1 &lt;span class="nt"&gt;-c&lt;/span&gt; 8192 &lt;span class="nt"&gt;-t&lt;/span&gt; 8 &lt;span class="nt"&gt;-ngl&lt;/span&gt; 40
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and for our &lt;code&gt;pi&lt;/code&gt; config we just add another model entry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://127.0.0.1:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "Qwen3.6-27B",
          "contextWindow": 8192,
          "maxTokens": 4096
        },
        {
          "id": "Mellum-4B",
          "contextWindow": 8192,
          "maxTokens": 4096
        }
      ]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If needed you can always run the two models on different ports, but if I did that I guess my PC would melt.&lt;/p&gt;
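
&lt;p&gt;For reference, that would just be two &lt;code&gt;llama-server&lt;/code&gt; instances, each on its own port (illustrative; the port numbers are arbitrary), plus one provider entry per port in &lt;code&gt;models.json&lt;/code&gt; with the matching &lt;code&gt;baseUrl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# terminal 1
llama-server -hf AaryanK/Qwen3.6-27B-GGUF:Q4_K_M --port 8080 --host 127.0.0.1
# terminal 2
llama-server -hf JetBrains/Mellum-4b-base-gguf --port 8081 --host 127.0.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;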

&lt;p&gt;We can select it via the same command: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwtm72nsafi77nwm5azm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwtm72nsafi77nwm5azm.png" alt=" " width="526" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The response to &lt;code&gt;hey&lt;/code&gt; was much quicker than with the other model, for the same amount of fan speed: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo3hj8lr6pxgs94cadz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo3hj8lr6pxgs94cadz7.png" alt=" " width="777" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So what's the deal? This is a code completion model, so it can't answer questions I guess:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u1ep4bxja2aictsst8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u1ep4bxja2aictsst8o.png" alt=" " width="301" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But! Let's try code completion then:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwsahm8ux1ykote6h9px.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwsahm8ux1ykote6h9px.png" alt=" " width="240" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even on my machine this is quick enough to use in Neovim, for example with a code completion plugin.&lt;/p&gt;

&lt;h2&gt;
  
  
  You know what, let's try it in a real project
&lt;/h2&gt;

&lt;p&gt;OK, the model is fast enough (it seems), so let's try it in a real project. &lt;/p&gt;

&lt;p&gt;For that I'm going to add this to my Neovim config (with Lazy):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"milanglacier/minuet-ai.nvim"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="nb"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"minuet"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="n"&gt;provider&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"openai_fim_compatible"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;n_completions&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;-- Use 1 for local models to save resources&lt;/span&gt;
        &lt;span class="n"&gt;context_window&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4096&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;-- Adjust based on your GPU's capability&lt;/span&gt;
        &lt;span class="n"&gt;throttle&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;-- Minimum time between requests in ms&lt;/span&gt;
        &lt;span class="n"&gt;debounce&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;-- Wait time after typing stops before requesting&lt;/span&gt;

        &lt;span class="n"&gt;provider_options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="n"&gt;openai_fim_compatible&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;api_key&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TERM"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;-- Ollama doesn't need a real API key&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Ollama"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;end_point&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"http://localhost:8080/v1/completions"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"JetBrains/Mellum-4b-base-gguf"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

            &lt;span class="n"&gt;optional&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="n"&gt;max_tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;-- Maximum tokens to generate&lt;/span&gt;
              &lt;span class="n"&gt;stop&lt;/span&gt;       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="c1"&gt;-- Stop at double newlines&lt;/span&gt;
              &lt;span class="n"&gt;top_p&lt;/span&gt;      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;-- Nucleus sampling parameter&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="c1"&gt;-- Virtual text display settings&lt;/span&gt;
        &lt;span class="n"&gt;virtualtext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="n"&gt;auto_trigger_ft&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="c1"&gt;-- Enable for all filetypes&lt;/span&gt;

          &lt;span class="n"&gt;keymap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;accept&lt;/span&gt;      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;Tab&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;accept_line&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;C-y&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nb"&gt;next&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;C-n&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;prev&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;C-p&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;dismiss&lt;/span&gt;     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;C-e&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Saghen/blink.cmp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After loading that into a project of mine (&lt;a href="https://postier.a2n.dev/" rel="noopener noreferrer"&gt;postier&lt;/a&gt;) I can try to use it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwldq0mjlogx023h6lx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwldq0mjlogx023h6lx1.png" alt=" " width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hey, that's not bad! After some experimenting, the results are not always very good, but I guess the model is maybe not the most capable one, and the settings can also be tuned to allow for better results.&lt;/p&gt;

&lt;p&gt;But, hey, you get mostly free AI auto-completion in your Neovim, that's cool, and it can spare some token usage I guess ^^&lt;/p&gt;

&lt;p&gt;And this works when you're offline too, so it can be a good option for certain use cases as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  So… is it worth it?
&lt;/h2&gt;

&lt;p&gt;Yeah I guess, just maybe not on this machine.&lt;/p&gt;

&lt;p&gt;What I’ve built here does work. It proves the point. You can run your own local models, wire them into tools like &lt;code&gt;pi&lt;/code&gt;, plug them into Neovim, and get something that feels very close to the "big AI experience"… without sending your data somewhere else.&lt;/p&gt;

&lt;p&gt;But hardware matters. A lot. (&lt;a href="https://pcpartpicker.com/trends/price/memory/" rel="noopener noreferrer"&gt;ouch&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Right now I’m running this on a CPU-only laptop, and honestly, it shows. The experience is slow, sometimes frustrating, and clearly not comparable to cloud models. So it's not a great daily driver for work.&lt;/p&gt;

&lt;p&gt;But if you take the exact same setup and drop it onto a more modern machine, things can change drastically.&lt;/p&gt;

&lt;p&gt;Take something like a MacBook Pro with Apple Silicon. These chips come with powerful integrated GPUs and unified memory, which are insanely good for this kind of workload. You can offload a large part of the model to the GPU, increase context size, and suddenly responses go from "go grab a coffee" to "this is actually usable."&lt;/p&gt;

&lt;p&gt;Same story on the PC side. A desktop or laptop with a decent GPU will run circles around a CPU-only setup. With proper GPU offloading (&lt;code&gt;-ngl&lt;/code&gt;), quantized models, and a bit of tuning, you can get real-time or near real-time responses, even with larger models.&lt;/p&gt;
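
&lt;p&gt;Concretely, that tuning is mostly the same flags we used earlier, pushed harder (illustrative values; &lt;code&gt;-ngl 99&lt;/code&gt; just asks to offload as many layers as fit on the GPU):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;llama-server -hf AaryanK/Qwen3.6-27B-GGUF:Q4_K_M --port 8080 --host 127.0.0.1 -c 32768 -ngl 99
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;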

&lt;p&gt;What I like most about this whole thing isn’t just performance, it’s control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;code stays local&lt;/li&gt;
&lt;li&gt;prompts stay private&lt;/li&gt;
&lt;li&gt;you can tweak, break, and rebuild everything&lt;/li&gt;
&lt;li&gt;you’re not tied to an API or pricing model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That feels very close to what I want: owning your tools and understanding how they work.&lt;/p&gt;

&lt;p&gt;So yeah, my poor CPU-only laptop is struggling.&lt;br&gt;
But the experiment itself? Totally worth it.&lt;/p&gt;

&lt;p&gt;And honestly… it kind of makes me want to upgrade my hardware (if I win the lottery).&lt;/p&gt;

&lt;p&gt;So I'll leave you there, have fun!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>neovim</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Flashing the Lily58 R2G with QMK</title>
      <dc:creator>a2n</dc:creator>
      <pubDate>Sun, 12 Apr 2026 14:36:46 +0000</pubDate>
      <link>https://dev.to/a2nof/flashing-the-lily58-r2g-with-qmk-1nef</link>
      <guid>https://dev.to/a2nof/flashing-the-lily58-r2g-with-qmk-1nef</guid>
      <description>&lt;p&gt;Lately I got a Lily58 R2G (&lt;a href="https://mechboards.co.uk/" rel="noopener noreferrer"&gt;from mechboards uk&lt;/a&gt;) and broke the code for some reason. So to fix that I have to flash the keyboard with the default layout for Vial.&lt;/p&gt;

&lt;p&gt;The process is not so hard, but the lack of clear examples can lead to lost time and errors, so here is the full to-do:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first step is quite simple: you have to install QMK MSYS exactly as described here at step #2: &lt;a href="https://docs.qmk.fm/newbs_getting_started#set-up-your-environment" rel="noopener noreferrer"&gt;https://docs.qmk.fm/newbs_getting_started#set-up-your-environment&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;After having QMK MSYS installed, the next step is to set it up, but not from the original repo, instead from the MechboardsLTD one (&lt;a href="https://github.com/MechboardsLTD/vial-qmk/tree/r2g/keyboards/mechboards/lily58/r2g" rel="noopener noreferrer"&gt;https://github.com/MechboardsLTD/vial-qmk/tree/r2g/keyboards/mechboards/lily58/r2g&lt;/a&gt;). For that we have to execute the following command: &lt;code&gt;qmk setup MechboardsLTD/vial-qmk&lt;/code&gt;. If this command doesn't work, another solution is to execute &lt;code&gt;qmk setup&lt;/code&gt;, then clone the repository and use the clone to set up QMK with &lt;code&gt;qmk setup -H path/to/the/clone&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Here we need to switch to the &lt;code&gt;r2g&lt;/code&gt; branch with a good old &lt;code&gt;git checkout r2g&lt;/code&gt; (obviously you have to be in the repository to do that)&lt;/li&gt;
&lt;li&gt;To test that everything is correct, you can execute this command and check if the result lists &lt;code&gt;mechboards/lily58/r2g&lt;/code&gt;: &lt;code&gt;qmk list-keyboards | grep mechboards&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If everything is correct, you can now edit the config file if wanted and then build the &lt;code&gt;uf2&lt;/code&gt; file with &lt;code&gt;qmk compile -kb mechboards/lily58/r2g -km vial&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;In my case, the &lt;code&gt;uf2&lt;/code&gt; file was generated in the root folder of the vial-qmk repository we set up at step 2&lt;/li&gt;
&lt;li&gt;When you have this file, you have to plug the main part &lt;strong&gt;only&lt;/strong&gt; of the keyboard into your PC while pressing the bootloader physical button located at the back of the board, the button is here:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr72sbc4l1kuenvca6rvy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr72sbc4l1kuenvca6rvy.jpeg" alt="photo of the flash button at the back of the keyboard" width="800" height="1067"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;While pressing the button and plugging in the keyboard, a new USB device should appear on your system, the name should be &lt;code&gt;RPI-RP2&lt;/code&gt;. If this is the case, you simply have to place the &lt;code&gt;uf2&lt;/code&gt; file in the folder and the keyboard should reset itself&lt;/li&gt;
&lt;li&gt;Unplug the keyboard, replug the right part, replug the keyboard and voilà :)&lt;/li&gt;
&lt;/ol&gt;
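
&lt;p&gt;For reference, steps 2 to 5 condense into a few commands (a sketch assuming the default clone location &lt;code&gt;~/qmk_firmware&lt;/code&gt;; adjust the path if you set up QMK elsewhere):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;qmk setup MechboardsLTD/vial-qmk                 # step 2: set up from the Mechboards fork
cd ~/qmk_firmware                                # default clone location
git checkout r2g                                 # step 3: switch to the r2g branch
qmk list-keyboards | grep mechboards             # step 4: should list mechboards/lily58/r2g
qmk compile -kb mechboards/lily58/r2g -km vial   # step 5: build the uf2 file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;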

</description>
      <category>keyboard</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>opensource</category>
    </item>
    <item>
      <title>VPN, Docker, and a cold coffee</title>
      <dc:creator>a2n</dc:creator>
      <pubDate>Fri, 31 Oct 2025 09:35:42 +0000</pubDate>
      <link>https://dev.to/a2nof/vpn-docker-and-a-cold-coffee-3la6</link>
      <guid>https://dev.to/a2nof/vpn-docker-and-a-cold-coffee-3la6</guid>
      <description>&lt;p&gt;Today I had a hard time understanding why my VPN was connected but I couldn’t reach my DB sitting just behind it.&lt;/p&gt;

&lt;p&gt;Very simple situation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you launch your PC, coffee’s hot&lt;/li&gt;
&lt;li&gt;start your VPN&lt;/li&gt;
&lt;li&gt;open your code and do some stuff&lt;/li&gt;
&lt;li&gt;need to check something in a tool protected by the VPN&lt;/li&gt;
&lt;li&gt;hu ho, why can’t I reach the site?&lt;/li&gt;
&lt;li&gt;check the VPN logs — nothing weird&lt;/li&gt;
&lt;li&gt;double-check the VPN config&lt;/li&gt;
&lt;li&gt;recreate a VPN user&lt;/li&gt;
&lt;li&gt;try with another Wi-Fi&lt;/li&gt;
&lt;li&gt;coffee’s cold&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I reboot my PC, connect to the VPN… and suddenly the site works.&lt;/p&gt;

&lt;p&gt;Why that?&lt;/p&gt;

&lt;p&gt;"Simply" because of Docker! More specifically: Docker networks.&lt;/p&gt;

&lt;p&gt;When I launched Docker, it created a network using the same IP range as my VPN.&lt;/p&gt;

&lt;p&gt;We can confirm the problem with a few commands.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ip route show&lt;/code&gt; (while connected to the VPN) gives us an idea of the VPN’s IP range.&lt;/p&gt;
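
&lt;p&gt;For example, a VPN route can look like this (an illustrative line; the interface name depends on your VPN client):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;172.17.0.0/16 dev tun0 scope link
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;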

&lt;p&gt;If we run it after launching Docker, we’ll usually also see Docker networks in the &lt;code&gt;172.17.0.0/16&lt;/code&gt; to &lt;code&gt;172.31.0.0/16&lt;/code&gt; range.&lt;/p&gt;

&lt;p&gt;You can double-check with &lt;code&gt;ip addr&lt;/code&gt;; the Docker bridge also shows up in the routing table like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;172.17.0.0/16 dev docker0 proto kernel scope &lt;span class="nb"&gt;link &lt;/span&gt;src 172.17.0.1 linkdown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, if needed, we can find which Docker network is using which IP by listing all networks with &lt;code&gt;docker network list&lt;/code&gt; and inspecting them using &lt;code&gt;docker network inspect {network_id}&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the resulting JSON, look at the &lt;code&gt;Subnet&lt;/code&gt; and &lt;code&gt;Gateway&lt;/code&gt; keys (under &lt;code&gt;IPAM.Config&lt;/code&gt;) to identify which network is conflicting with your local one.&lt;/p&gt;
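
&lt;p&gt;To scan every network’s subnet in one pass, a small loop works (a sketch using the &lt;code&gt;--format&lt;/code&gt; option of &lt;code&gt;docker network inspect&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;for net in $(docker network ls -q); do
  docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$net"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;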

&lt;p&gt;If you created the network manually, you can simply edit your config.&lt;br&gt;
If not (like in my case), we can take a more general approach by updating Docker’s configuration itself.&lt;/p&gt;

&lt;p&gt;Open (or create) &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; and edit the &lt;code&gt;default-address-pools&lt;/code&gt; key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"default-address-pools"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"base"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.240.0.0/16"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;choose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;any&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;IP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;range&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;prefer&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before restarting Docker, run &lt;code&gt;docker network prune&lt;/code&gt; to remove all unused networks from your setup — those can still hold onto old IPs even after changing the config.&lt;/p&gt;

&lt;p&gt;If the prune doesn’t remove the “bad” network, you can delete it manually.&lt;/p&gt;
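
&lt;p&gt;Manual removal is a one-liner; any containers still attached to the network have to be stopped or disconnected first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network rm {network_id}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;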

&lt;p&gt;Finally, restart Docker (e.g. with systemd): &lt;code&gt;systemctl restart docker&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And voilà — now I just need to reheat my coffee, but at least my setup works! ☕&lt;/p&gt;

</description>
      <category>docker</category>
      <category>vpn</category>
      <category>linux</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
