<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nicholas Wiseman</title>
    <description>The latest articles on DEV Community by Nicholas Wiseman (@nicholaswiseman).</description>
    <link>https://dev.to/nicholaswiseman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3802443%2F45296279-5271-4cfb-bcdb-9842b251f65d.png</url>
      <title>DEV Community: Nicholas Wiseman</title>
      <link>https://dev.to/nicholaswiseman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nicholaswiseman"/>
    <language>en</language>
    <item>
      <title>Local LLM Inference on Windows 11 and AMD GPU using WSL and llama.cpp</title>
      <dc:creator>Nicholas Wiseman</dc:creator>
      <pubDate>Wed, 04 Mar 2026 16:07:51 +0000</pubDate>
      <link>https://dev.to/nicholaswiseman/local-llm-inference-on-windows-11-and-amd-gpu-using-wsl-and-llamacpp-36e7</link>
      <guid>https://dev.to/nicholaswiseman/local-llm-inference-on-windows-11-and-amd-gpu-using-wsl-and-llamacpp-36e7</guid>
      <description>&lt;h2&gt;
  
  
  Part 1: Config
&lt;/h2&gt;

&lt;p&gt;GPU: AMD Radeon RX 7800 XT&lt;br&gt;
Driver Version: 25.30.27.02-260217a-198634C-AMD-Software-Adrenalin-Edition&lt;br&gt;
llama.cpp SHA: ecd99d6a9acbc436bad085783bcd5d0b9ae9e9e9&lt;br&gt;
OS: Windows 11 (10.0.26200 Build 26200)&lt;br&gt;
Ubuntu version: 24.04&lt;/p&gt;

&lt;p&gt;Consult the ROCm compatibility matrix (linked in Part 4) to confirm that your ROCm version, GPU, graphics driver, and Ubuntu version are a supported combination.&lt;/p&gt;
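&lt;p&gt;If you want to check these values on your own machine, the Windows build and WSL versions can be read from a Windows terminal (&lt;code&gt;winver&lt;/code&gt; opens the OS build dialog), and the Ubuntu release from inside the VM:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;winver
wsl --version
# inside Ubuntu:
lsb_release -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;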
&lt;h2&gt;
  
  
  Part 2: CPU Inference Baseline
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Setup WSL and Ubuntu VM:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsl --install -d Ubuntu-24.04
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Launch "Ubuntu" from windows start menu&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grab some utilities&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install -y git build-essential cmake curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Clone llama.cpp&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;ecd99d6a9acb was the latest commit at the time of writing; for maximum reproducibility, you can check out that exact SHA.&lt;/p&gt;
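&lt;p&gt;For example, to pin your checkout to the exact commit from Part 1:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout ecd99d6a9acbc436bad085783bcd5d0b9ae9e9e9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;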

&lt;p&gt;&lt;strong&gt;Grab the model&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd models
curl -L -o mistral.gguf \
https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf
cd ..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Build llama.cpp&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmake -B build
cmake --build build --config Release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Run CPU inference&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./build/bin/llama-cli -m models/mistral.gguf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz94e5lrbqli0t497hpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz94e5lrbqli0t497hpd.png" alt=" " width="710" height="981"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Part 3: GPU Acceleration
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Install ROCm&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
wget https://repo.radeon.com/amdgpu-install/7.2/ubuntu/noble/amdgpu-install_7.2.70200-1_all.deb
sudo apt install ./amdgpu-install_7.2.70200-1_all.deb
amdgpu-install -y --usecase=wsl,rocm --no-dkms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Check your ROCm install&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rocminfo | grep "gfx"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You should see output confirming that ROCm detects your GPU.&lt;/p&gt;
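&lt;p&gt;On this config the relevant line is the agent name for the 7800 XT (exact formatting varies between ROCm versions, but it should look roughly like):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Name:                    gfx1101
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;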

&lt;p&gt;&lt;strong&gt;Build llama.cpp with HIP support&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rm -rf build
cmake -B build -DGGML_HIP=ON
cmake --build build --config Release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
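&lt;p&gt;If CMake picks up the wrong GPU architecture, you can pin it explicitly. &lt;code&gt;AMDGPU_TARGETS&lt;/code&gt; is the variable referenced in llama.cpp's HIP build docs; &lt;code&gt;gfx1101&lt;/code&gt; matches the RX 7800 XT used here:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1101
cmake --build build --config Release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;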


&lt;p&gt;&lt;strong&gt;Run inference on the GPU&lt;/strong&gt; (the &lt;code&gt;-ngl&lt;/code&gt; flag sets how many layers to offload to the GPU; 999 simply offloads all of them)&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./build/bin/llama-cli -m models/mistral.gguf -ngl 999
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nick@NickWiseman-PC:~/llama/llama.cpp$ ./build/bin/llama-cli -m models/mistral.gguf -ngl 999
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon RX 7800 XT, gfx1101 (0x1101), VMM: no, Wave Size: 32

Loading model...


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8199-d969e933e
model      : mistral.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read               add a text file


&amp;gt; Write a short love poem

 In the quiet of the moonlit night,

Two hearts entwined, a tender sight,

A dance of souls in gentle grace,

In love's sweet embrace, we find our place.

Your eyes, a mirror to my own,

Reflecting passion, love, and home,

Your voice, a melody that sings,

In every beat, my heart takes wings.

Together we weave a tapestry,

Of promises, of memories,

A bond that's woven strong and bright,

A love that shines, a beacon of light.

In this moment, in this stolen time,

Our hearts unite, two souls entwined,

A love so pure, a love so true,

A love that's mine, a love that's you.

[ Prompt: 149.0 t/s | Generation: 79.7 t/s ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Note the line confirming that the gfx1101 device is in use.&lt;br&gt;
&lt;strong&gt;Mistral 7B Inference Perf Comparison&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Device&lt;/th&gt;
&lt;th&gt;Prompt Speed (tok/sec)&lt;/th&gt;
&lt;th&gt;Generation Speed (tok/sec)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AMD Ryzen 5 3600 (CPU)&lt;/td&gt;
&lt;td&gt;1.5&lt;/td&gt;
&lt;td&gt;6.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AMD Radeon RX 7800 XT (HIP / ROCm)&lt;/td&gt;
&lt;td&gt;149.0&lt;/td&gt;
&lt;td&gt;79.7&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
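&lt;p&gt;The numbers above come from &lt;code&gt;llama-cli&lt;/code&gt;'s summary line on single interactive runs. For more controlled measurements, the same build ships &lt;code&gt;llama-bench&lt;/code&gt;, which averages prompt-processing and generation speed over repeated runs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./build/bin/llama-bench -m models/mistral.gguf -ngl 999
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;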
&lt;h2&gt;
  
  
  Part 4: Resources
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://learn.microsoft.com/en-us/windows/wsl/setup/environment" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fmedia%2Fopen-graph-image.png" height="420" class="m-0" width="800"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://learn.microsoft.com/en-us/windows/wsl/setup/environment" rel="noopener noreferrer" class="c-link"&gt;
            Set up a WSL development environment | Microsoft Learn
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Set up a WSL development environment using best practices from this step-by-step guide. Learn how to run Ubuntu, Visual Studio Code or Visual Studio, Git, Windows Credential Manager, MongoDB, MySQL, Docker remote containers and more.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
          learn.microsoft.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;



&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ggml-org" rel="noopener noreferrer"&gt;
        ggml-org
      &lt;/a&gt; / &lt;a href="https://github.com/ggml-org/llama.cpp" rel="noopener noreferrer"&gt;
        llama.cpp
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      LLM inference in C/C++
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;llama.cpp&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F1991296%2F230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png" alt="llama"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensource.org/licenses/MIT" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/7013272bd27ece47364536a221edb554cd69683b68a46fc0ee96881174c4214c/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d4d49542d626c75652e737667" alt="License: MIT"&gt;&lt;/a&gt;
&lt;a href="https://github.com/ggml-org/llama.cpp/releases" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e6b8287fcfee35b19cb6445b354371e6c41a7a0cb476b792453247b373bc92d9/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f67676d6c2d6f72672f6c6c616d612e637070" alt="Release"&gt;&lt;/a&gt;
&lt;a href="https://github.com/ggml-org/llama.cpp/actions/workflows/server.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/ggml-org/llama.cpp/actions/workflows/server.yml/badge.svg" alt="Server"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/ggml-org/llama.cpp/discussions/205" rel="noopener noreferrer"&gt;Manifesto&lt;/a&gt; / &lt;a href="https://github.com/ggml-org/ggml" rel="noopener noreferrer"&gt;ggml&lt;/a&gt; / &lt;a href="https://github.com/ggml-org/llama.cpp/blob/master/docs/ops.md" rel="noopener noreferrer"&gt;ops&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;LLM inference in C/C++&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Recent API changes&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ggml-org/llama.cpp/issues/9289" rel="noopener noreferrer"&gt;Changelog for &lt;code&gt;libllama&lt;/code&gt; API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ggml-org/llama.cpp/issues/9291" rel="noopener noreferrer"&gt;Changelog for &lt;code&gt;llama-server&lt;/code&gt; REST API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Hot topics&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/ggml-org/llama.cpp/discussions/16938" rel="noopener noreferrer"&gt;guide : using the new WebUI of llama.cpp&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ggml-org/llama.cpp/discussions/15396" rel="noopener noreferrer"&gt;guide : running gpt-oss with llama.cpp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ggml-org/llama.cpp/discussions/15313" rel="noopener noreferrer"&gt;[FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Support for the &lt;code&gt;gpt-oss&lt;/code&gt; model with native MXFP4 format has been added | &lt;a href="https://github.com/ggml-org/llama.cpp/pull/15091" rel="noopener noreferrer"&gt;PR&lt;/a&gt; | &lt;a href="https://blogs.nvidia.com/blog/rtx-ai-garage-openai-oss" rel="nofollow noopener noreferrer"&gt;Collaboration with NVIDIA&lt;/a&gt; | &lt;a href="https://github.com/ggml-org/llama.cpp/discussions/15095" rel="noopener noreferrer"&gt;Comment&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Multimodal support arrived in &lt;code&gt;llama-server&lt;/code&gt;: &lt;a href="https://github.com/ggml-org/llama.cpp/pull/12898" rel="noopener noreferrer"&gt;#12898&lt;/a&gt; | &lt;a href="https://github.com/ggml-org/llama.cpp/./docs/multimodal.md" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;VS Code extension for FIM completions: &lt;a href="https://github.com/ggml-org/llama.vscode" rel="noopener noreferrer"&gt;https://github.com/ggml-org/llama.vscode&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Vim/Neovim plugin for FIM completions: &lt;a href="https://github.com/ggml-org/llama.vim" rel="noopener noreferrer"&gt;https://github.com/ggml-org/llama.vim&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Hugging Face Inference Endpoints now support GGUF out of the box! &lt;a class="issue-link js-issue-link" href="https://github.com/ggml-org/llama.cpp/discussions/9669" rel="noopener noreferrer"&gt;#9669&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Hugging Face GGUF editor: &lt;a href="https://github.com/ggml-org/llama.cpp/discussions/9268" rel="noopener noreferrer"&gt;discussion&lt;/a&gt; | &lt;a href="https://huggingface.co/spaces/CISCai/gguf-editor" rel="nofollow noopener noreferrer"&gt;tool&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Quick start&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Getting started with llama.cpp is straightforward. Here are several ways to install it on your machine:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install &lt;code&gt;llama.cpp&lt;/code&gt; using &lt;a href="https://github.com/ggml-org/llama.cpp/docs/install.md" rel="noopener noreferrer"&gt;brew, nix or winget&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Run with Docker - see our &lt;a href="https://github.com/ggml-org/llama.cpp/docs/docker.md" rel="noopener noreferrer"&gt;Docker documentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Download pre-built binaries from the &lt;a href="https://github.com/ggml-org/llama.cpp/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Build…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ggml-org/llama.cpp" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/tree/main" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-thumbnails.huggingface.co%2Fsocial-thumbnails%2Fmodels%2FTheBloke%2FMistral-7B-Instruct-v0.1-GGUF.png" height="432" class="m-0" width="800"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/tree/main" rel="noopener noreferrer" class="c-link"&gt;
            TheBloke/Mistral-7B-Instruct-v0.1-GGUF at main
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            We’re on a journey to advance and democratize artificial intelligence through open source and open science.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
          huggingface.co
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/docs/install/installrad/wsl/install-radeon.html" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;rocm.docs.amd.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/docs/compatibility/compatibilityrad/wsl/wsl_compatibility.html" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;rocm.docs.amd.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>linux</category>
      <category>llm</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
