WSL2 is a fantastic development environment on Windows. It's also a system with sharp edges that the official docs rarely highlight — the kind you only discover after losing an afternoon to a process eating 300% CPU for no apparent reason.
This guide documents four specific problems I've hit repeatedly over the last year while using WSL2 as my main development environment for Docker-based projects. For each one: the root cause, why the obvious fix doesn't work, and what actually solves it.
This isn't an introduction to WSL2. If you're already using it daily and something feels off, keep reading.
## 1. Docker Desktop, cgroups, and processes that ignore resource limits

### The symptom
You run a container on Docker Desktop for Windows (which uses WSL2 under the hood). The container executes a CPU-intensive process — a vulnerability scanner, a compiler, a batch job.
You watch htop and the process is consuming 300%+ CPU, dragging the entire system down.
You think: "no problem, I'll throttle it."
```yaml
services:
  heavy-worker:
    image: my-scanner:latest
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
    cpu_count: 1
```
You restart the container.
Still 300% CPU.
You try `nice`, `ionice`, `cpulimit`... nothing works.
### The root cause
Docker Desktop runs containers inside a WSL2-hosted VM using cgroup v2, often with limited controllers.
The result:
- `deploy.resources.limits` → ignored
- `cpu_count` → ignored
- `nice` / `ionice` → ineffective
You can verify:
```shell
cat /sys/fs/cgroup/cgroup.controllers
```
You'll often see fewer controllers than on a native Linux system.
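A quick way to see whether CPU limits even *can* be enforced is to check for the `cpu` controller at the cgroup root. A minimal sketch:

```shell
# If the "cpu" controller is missing from the root cgroup, Docker's CPU
# limits have nothing to hook into and will be silently ignored.
if grep -qw cpu /sys/fs/cgroup/cgroup.controllers 2>/dev/null; then
    echo "cpu controller available"
else
    echo "cpu controller missing: CPU limits will be ignored"
fi
```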
### What actually works

#### Option A (best): Limit the WSL2 VM

In `.wslconfig` (the path is given at the end of this post):

```ini
[wsl2]
processors=4
memory=8GB
swap=2GB
```
Then:

```shell
wsl --shutdown
```
Now the VM is capped → containers behave predictably.
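Once WSL is back up, you can confirm the cap from inside the VM:

```shell
# With processors=4 in .wslconfig, the restarted VM should report 4 CPUs.
# nproc prints the number of CPUs actually visible to Linux.
nproc
```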
#### Option B: Use tool-level throttling

Examples (flag names vary by tool):

```
--parallel=1
--low-mem
```

These bypass the scheduler problem entirely: the tool never asks for more CPU than you want it to use.
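Concretely, many tools respect environment variables that cap parallelism. These are generic, well-known examples, not flags of the hypothetical scanner above:

```shell
# Cap parallelism at the tool level; each variable is read by the named tool.
export MAKEFLAGS="-j1"       # make: run one job at a time
export GOMAXPROCS=2          # Go binaries: at most 2 threads running Go code
export OMP_NUM_THREADS=1     # OpenMP-based tools: single thread
export UV_THREADPOOL_SIZE=2  # Node.js: shrink the libuv worker pool
```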
#### Option C: Replace the tool

If a tool is designed to max out every core and can't be tuned, it's the wrong tool for WSL2.

### Key takeaway

Don't trust container limits on WSL2. Control the VM itself, or use self-throttling tools.
## 2. Disk performance: /mnt/c vs native WSL filesystem

### The symptom

Working in:

```
/mnt/c/Users/you/projects
```
- `npm install` → 8 minutes
- `git status` → 4 seconds
Move to:

```
~/projects
```

- `npm install` → 25 seconds
- `git status` → instant
### The root cause

`/mnt/c` uses the 9P file protocol, so every filesystem call crosses the Windows ↔ Linux boundary. Heavy IO workloads (Node, Git, Docker builds) get destroyed by that per-call latency.

The native WSL filesystem is ext4 inside a VHDX → near-native Linux speed.
### Real benchmark

| Operation | /mnt/c | WSL native |
|---|---|---|
| `npm install` | ~8 min | ~25 sec |
| `git status` (10k files) | ~4 sec | < 100 ms |
| `docker build` context | ~90 sec | ~3 sec |
### What actually works

Rule: keep code inside WSL.

```
~/projects/myapp
```

Access options:

- VS Code + WSL extension ✅ (best)
- `\\wsl$\Ubuntu\home\you\projects` (OK)
If you MUST use `/mnt/c`:

- Move heavy dirs (`node_modules`, `.git`) into WSL
- Symlink them back
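A sketch of the symlink workaround. In practice the slow side would be something like `/mnt/c/Users/you/projects/myapp` and the fast side a directory under `~`; temp directories stand in here so the commands run anywhere:

```shell
# Move a heavy directory to the fast (ext4) side and symlink it back.
slow=$(mktemp -d)   # stands in for your /mnt/c project directory
fast=$(mktemp -d)   # stands in for a cache directory under ~ in WSL
mkdir -p "$fast/node_modules"
ln -s "$fast/node_modules" "$slow/node_modules"
# Tools running in $slow now do their node_modules IO on ext4.
ls -ld "$slow/node_modules"
```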
### Key takeaway

`/mnt/c` is for compatibility, not performance.
## 3. Networking: ports, shifting IPs, and host access

### The symptom

- `localhost:3000` → sometimes works, sometimes not
- WSL IP changes every reboot
- LAN access → broken
### The root cause

WSL2 networking is:

- NATed via Hyper-V
- Not bridged
- On a dynamic IP
- Only partially forwarding `localhost`
### What actually works

Ensure localhost forwarding in `.wslconfig`:

```ini
[wsl2]
localhostForwarding=true
```
Get the current WSL IP:

```shell
ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -d/ -f1
```
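When `localhost:3000` is flaky, the first thing worth checking is whether anything inside WSL is actually listening on that port at all. A runnable sketch using a throwaway Python test server (substitute your real app):

```shell
# Start a disposable server on 3000, then check the listen table with ss.
python3 -m http.server 3000 --bind 0.0.0.0 >/dev/null 2>&1 &
srv=$!
sleep 1
listening=$(ss -tln | grep -c ':3000')
[ "$listening" -ge 1 ] && echo "port 3000 is listening"
kill "$srv"
```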
Expose to LAN (PowerShell, run as Administrator):

```powershell
$wslIP = (wsl hostname -I).Trim().Split(' ')[0]
netsh interface portproxy add v4tov4 `
  listenport=3000 listenaddress=0.0.0.0 `
  connectport=3000 connectaddress=$wslIP
New-NetFirewallRule -DisplayName "WSL 3000" `
  -Direction Inbound -LocalPort 3000 `
  -Protocol TCP -Action Allow
```
**Best solution (modern WSL):** mirrored networking, available on Windows 11 22H2+ with WSL 2.0 or later:

```ini
[wsl2]
networkingMode=mirrored
```

- ✔ Same network as the host
- ✔ No NAT issues
- ✔ LAN access works directly
### Key takeaway

WSL2 networking = NAT. Use mirrored mode if available.
## 4. Memory: the vmmem problem

### The symptom

- Start the day → 4 GB used
- Work with Docker
- Stop everything
- `vmmem` still using 12 GB

Never released.
### The root cause

WSL2:

- Allocates memory dynamically as the VM needs it
- Does not release it back to Windows
- Linux keeps page cache around (normal Linux behavior)
- Windows cannot reclaim it on its own
### What actually works

Cap memory in `.wslconfig`:

```ini
[wsl2]
memory=8GB
swap=4GB
```
Enable auto reclaim (newer WSL versions):

```ini
[experimental]
autoMemoryReclaim=gradual
sparseVhd=true
```
Manual reclaim (drops Linux page caches):

```shell
sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
```
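To see whether the drop is even worth running, check how much of the VM's memory is page cache first; that cache column is the reclaimable part:

```shell
# Show total / used / cache from free(1). The buff/cache column is what
# drop_caches (and autoMemoryReclaim) can hand back.
free -m | awk '/^Mem:/ {printf "total=%dMB used=%dMB cache=%dMB\n", $2, $3, $6}'
```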
Last resort:

```shell
wsl --shutdown
```
### Key takeaway

WSL will NOT give memory back unless you force it or configure it to.
## Minimal .wslconfig (recommended)

```ini
[wsl2]
memory=8GB
processors=4
swap=2GB
localhostForwarding=true
networkingMode=mirrored

[experimental]
autoMemoryReclaim=gradual
sparseVhd=true
```

📍 Path: `C:\Users\<you>\.wslconfig`

Apply with:

```shell
wsl --shutdown
```
## Closing thoughts

Most of these issues come from one fact:

**WSL2 is a VM pretending to be native Linux.**
And the cracks show when:
- You push CPU
- You do heavy IO
- You rely on networking assumptions
- You expect Linux memory behavior to match Windows
WSL2 is still excellent, but only if you understand:

- cgroup quirks
- the `/mnt/c` performance trap
- NAT networking
- memory ballooning
Once you do, most "random issues" become predictable.
If you've hit other WSL2 gotchas, drop them in the comments 👇
The one that surprised me most? Spending 3 days tuning container limits… that were being completely ignored.
💡 Did this save you an afternoon? A follow or reaction helps me write more of these.