<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: özkan pakdil</title>
    <description>The latest articles on DEV Community by özkan pakdil (@ozkanpakdil).</description>
    <link>https://dev.to/ozkanpakdil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F302594%2Fe6338387-2374-431d-8e72-c364cdcad84a.jpg</url>
      <title>DEV Community: özkan pakdil</title>
      <link>https://dev.to/ozkanpakdil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ozkanpakdil"/>
    <language>en</language>
    <item>
      <title>Accelerating LLMs on Debian 13: Setting up CUDA for llama.cpp</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Thu, 19 Mar 2026 22:35:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/accelerating-llms-on-debian-13-setting-up-cuda-for-llamacpp-22lb</link>
      <guid>https://dev.to/ozkanpakdil/accelerating-llms-on-debian-13-setting-up-cuda-for-llamacpp-22lb</guid>
      <description>&lt;p&gt;Setting up NVIDIA CUDA on Debian 13 (Trixie/Sid) to run Large Language Models (LLMs) can be a bit of a journey, especially if you’re transitioning from the default open-source drivers to the proprietary stack required for GPGPU workloads.&lt;/p&gt;

&lt;p&gt;Over the last few days, I’ve been working on getting &lt;code&gt;llama.cpp&lt;/code&gt; to run with CUDA on my laptop to see how much of a difference it makes compared to pure CPU execution.&lt;/p&gt;

&lt;p&gt;Initially, I tested a &lt;strong&gt;35B model on macOS&lt;/strong&gt;, where it responded in about &lt;strong&gt;17 seconds&lt;/strong&gt;. When I moved that same 35B model to my old laptop running Debian 13 (on CPU), the response time ballooned to &lt;strong&gt;4 minutes and 30 seconds&lt;/strong&gt;. This massive gap was my main motivation to try to enable CUDA on the laptop’s GPU.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem: Nouveau vs. Proprietary Drivers
&lt;/h3&gt;

&lt;p&gt;By default, Debian might use the open-source &lt;code&gt;nouveau&lt;/code&gt; driver. While great for basic display tasks, it doesn’t support CUDA. To run &lt;code&gt;llama-server&lt;/code&gt; with GPU acceleration, you need the official NVIDIA drivers and the CUDA toolkit.&lt;/p&gt;

&lt;p&gt;I followed the &lt;a href="https://docs.nvidia.com/datacenter/tesla/driver-installation-guide/debian.html" rel="noopener noreferrer"&gt;NVIDIA Tesla Driver Installation Guide for Debian&lt;/a&gt;, which is a critical resource for getting the right packages.&lt;/p&gt;

&lt;p&gt;One specific hurdle with Secure Boot enabled was the need to trust the DKMS-generated keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;mokutil &lt;span class="nt"&gt;--import&lt;/span&gt; /var/lib/dkms/mok.pub

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a reboot and enrolling the key in the MOK manager, the driver was finally active and recognized by the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compiling llama.cpp with CUDA Support
&lt;/h3&gt;

&lt;p&gt;Once the drivers and &lt;code&gt;nvcc&lt;/code&gt; were ready, I recompiled &lt;code&gt;llama.cpp&lt;/code&gt; with CUDA enabled (see the &lt;a href="https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md#cuda" rel="noopener noreferrer"&gt;official CUDA build documentation&lt;/a&gt; for more details).&lt;/p&gt;

&lt;p&gt;The compilation process is quite resource-intensive and took about &lt;strong&gt;15 minutes&lt;/strong&gt; on my laptop:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gz1nn2wivunyjusccbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gz1nn2wivunyjusccbs.png" alt="llama.cpp compilation" width="800" height="369"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CUDACXX&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/cuda/bin/nvcc
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CUDA_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/cuda
cmake &lt;span class="nt"&gt;-B&lt;/span&gt; build &lt;span class="nt"&gt;-DGGML_CUDA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ON
cmake &lt;span class="nt"&gt;--build&lt;/span&gt; build &lt;span class="nt"&gt;--config&lt;/span&gt; Release

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The VRAM Reality Check (OOM Errors)
&lt;/h3&gt;

&lt;p&gt;My laptop has an &lt;strong&gt;NVIDIA GeForce MX450&lt;/strong&gt; with &lt;strong&gt;2 GB of VRAM&lt;/strong&gt;. This is quite modest for modern LLMs.&lt;/p&gt;

&lt;p&gt;Initially, I tried running that 35B model that was so slow on the CPU:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;llama-server &lt;span class="nt"&gt;-hf&lt;/span&gt; unsloth/Qwen3.5-35B-A3B-GGUF &lt;span class="nt"&gt;--jinja&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; 16384 &lt;span class="nt"&gt;--host&lt;/span&gt; 127.0.0.1 &lt;span class="nt"&gt;--port&lt;/span&gt; 8033

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It failed with a &lt;code&gt;cudaMalloc failed: out of memory&lt;/code&gt; error. Looking at the logs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Model parameters: ~857 MiB&lt;/li&gt;
&lt;li&gt;Context/CLIP buffers: ~899 MiB&lt;/li&gt;
&lt;li&gt;Total requested: &amp;gt; 1.7 GB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the OS and display driver already taking up some of that 2 GB, there just wasn’t enough room. The 35B model was simply too large for this specific hardware’s VRAM. Even though CUDA would have been faster than the CPU-only 4.5 minutes, the hardware limit forced me to pivot.&lt;/p&gt;
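The same back-of-the-envelope budget can be scripted. This is a rough sketch under my own assumptions: the 300 MiB reserve for the OS and display driver is a guessed ballpark, not a measured value, and real llama.cpp allocations vary with context size and quantization.

```shell
# Rough VRAM budget check: do the model weights plus context buffers
# fit in total VRAM once a reserve for the OS/display driver is set aside?
# All sizes in MiB. The 300 MiB reserve is an assumed ballpark, not measured.
fits_in_vram() {
  local model_mib=$1 ctx_mib=$2 total_mib=$3 reserve_mib=${4:-300}
  local needed=$(( model_mib + ctx_mib ))
  local available=$(( total_mib - reserve_mib ))
  if (( needed <= available )); then
    echo "OK: need ${needed} MiB, ${available} MiB free"
  else
    echo "OOM: need ${needed} MiB, only ${available} MiB free"
  fi
}

# The numbers from the failed run: ~857 MiB weights + ~899 MiB buffers on 2048 MiB.
fits_in_vram 857 899 2048
```

With these figures the request lands just over the line, which matches the `cudaMalloc failed: out of memory` seen in the logs.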

&lt;h3&gt;
  
  
  The Result: 2B Model Benchmark
&lt;/h3&gt;

&lt;p&gt;I switched to a smaller 2B model to stay within the VRAM limits. The results were impressive and clearly showed why we go through this trouble.&lt;/p&gt;

&lt;p&gt;Asking Qwen 2B to &lt;strong&gt;“write me hello world in rust”&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setup&lt;/th&gt;
&lt;th&gt;Time to Complete&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CPU Only&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1 minute 32 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CUDA (GPU)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;24 seconds&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That’s nearly a &lt;strong&gt;4x speed improvement&lt;/strong&gt; on an entry-level mobile GPU!&lt;/p&gt;
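For reference, the "nearly 4x" figure falls straight out of the two timings in the table (92 s on CPU vs 24 s on GPU); a quick sketch in shell:

```shell
# Derive the CPU -> GPU speedup from the two wall-clock times (in seconds).
cpu_s=92   # 1 minute 32 seconds
gpu_s=24

# Bash only does integer arithmetic, so keep one decimal place
# by scaling the ratio by 10 before dividing.
speedup_x10=$(( cpu_s * 10 / gpu_s ))
echo "Speedup: $(( speedup_x10 / 10 )).$(( speedup_x10 % 10 ))x"   # prints "Speedup: 3.8x"
```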

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;While the setup is admittedly complicated (dealing with drivers, Secure Boot, and compilation), the performance gains are undeniable. Even on a low-end GPU like the MX450, offloading the heavy lifting to CUDA makes the local LLM experience much more interactive.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bonus: NVIDIA GPU Diagnostic Script
&lt;/h3&gt;

&lt;p&gt;To help troubleshoot my setup, I wrote a small script &lt;code&gt;nvidia_check_and_run.sh&lt;/code&gt; to verify the driver, kernel modules, and &lt;code&gt;llama.cpp&lt;/code&gt; support.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# Configuration&lt;/span&gt;
&lt;span class="nv"&gt;LLAMA_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/nix/store/wr7vi3957cx751la7q490h9v2m6q71fm-llama-cpp-8255/bin"&lt;/span&gt;
&lt;span class="nv"&gt;LLAMA_SERVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LLAMA_PATH&lt;/span&gt;&lt;span class="s2"&gt;/llama-server"&lt;/span&gt;
&lt;span class="nv"&gt;LLAMA_BENCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LLAMA_PATH&lt;/span&gt;&lt;span class="s2"&gt;/llama-bench"&lt;/span&gt;
&lt;span class="nv"&gt;LLAMA_CLI&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LLAMA_PATH&lt;/span&gt;&lt;span class="s2"&gt;/llama-cli"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"--- NVIDIA GPU Diagnostic ---"&lt;/span&gt;

&lt;span class="c"&gt;# 1. Check for the NVIDIA device via PCI&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[1/4] Checking PCI devices for NVIDIA GPU..."&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;lspci | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; nvidia&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - NVIDIA hardware detected via lspci."&lt;/span&gt;
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - No NVIDIA hardware found on PCI bus."&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# 2. Check for the driver status&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;[2/4] Checking NVIDIA driver status..."&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; nvidia-smi &amp;amp;&amp;gt; /dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - nvidia-smi found. Running..."&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; nvidia-smi&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - CRITICAL: nvidia-smi failed. Kernel modules might not be loaded."&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - ACTION: Try running 'sudo modprobe nvidia' and then 'nvidia-smi' again."&lt;/span&gt;
    &lt;span class="k"&gt;fi
else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - nvidia-smi NOT found. Driver might not be installed or active."&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# 3. Check for the kernel modules&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;[3/4] Checking for loaded NVIDIA kernel modules..."&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; /sbin/lsmod | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; nvidia &amp;amp;&amp;gt; /dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - NVIDIA kernel modules are loaded."&lt;/span&gt;
    /sbin/lsmod | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; nvidia
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - CRITICAL: No NVIDIA kernel modules loaded."&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - ACTION: Run 'sudo modprobe nvidia' to load the driver."&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# 4. Check for llama.cpp device support&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;[4/4] Checking llama.cpp device support..."&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LLAMA_CLI&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - Checking llama-cli with -ngl flag..."&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LLAMA_CLI&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-ngl&lt;/span&gt; 1 &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" - llama-cli not found at &lt;/span&gt;&lt;span class="nv"&gt;$LLAMA_CLI&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this script gave me a clear picture of what was missing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;--- NVIDIA GPU Diagnostic ---
[1/4] Checking PCI devices for NVIDIA GPU...
0000:01:00.0 3D controller: NVIDIA Corporation TU117M [GeForce MX450] (rev a1)
  - NVIDIA hardware detected via lspci.

[2/4] Checking NVIDIA driver status...
  - nvidia-smi found. Running...
Fri Mar 20 01:55:51 2026       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 595.45.04 Driver Version: 595.45.04 CUDA Version: 13.2 |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce MX450           On  |   00000000:01:00.0 Off |                  N/A |
| N/A   53C    P8             N/A / 5001W |        5MiB / 2048MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

[3/4] Checking for loaded NVIDIA kernel modules...
  - NVIDIA kernel modules are loaded.
&lt;/span&gt;&lt;span class="c"&gt;...
&lt;/span&gt;&lt;span class="go"&gt;[4/4] Checking llama.cpp device support...
  - Checking llama-cli with -ngl flag...
warning: no usable GPU found, --gpu-layers option will be ignored
warning: one possible reason is that llama.cpp was compiled without GPU support
&lt;/span&gt;&lt;span class="c"&gt;...
&lt;/span&gt;&lt;span class="go"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are on Debian 13 and want to try this, make sure you check your VRAM limits before picking a model, and don’t forget that &lt;code&gt;mokutil&lt;/code&gt; step if you have Secure Boot enabled!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>linux</category>
      <category>llm</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Tuning Podman on macOS to Match OrbStack Performance</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Sun, 08 Mar 2026 19:02:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/tuning-podman-on-macos-to-match-orbstack-performance-3f39</link>
      <guid>https://dev.to/ozkanpakdil/tuning-podman-on-macos-to-match-orbstack-performance-3f39</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; These are suggested optimizations I have not personally tried yet. I’m blogging them as I’m planning to test them throughout this week.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;OrbStack is highly optimized for macOS, using a proprietary, high-performance networking stack and a custom VirtioFS implementation with aggressive caching. Podman, while being open-source and standard-compliant, can be tuned to significantly bridge the performance gap.&lt;/p&gt;

&lt;p&gt;The following plan outlines key areas where Podman’s performance can be improved on macOS:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Enable Rosetta 2 for x86_64 Emulation (Apple Silicon only)
&lt;/h3&gt;

&lt;p&gt;If you are on an Apple Silicon (M1/M2/M3/M4) Mac, running x86_64 containers is often much slower than ARM64. Podman supports Apple’s native Rosetta 2 for Linux, which is substantially faster than QEMU-based emulation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Check if Rosetta is enabled:&lt;/strong&gt; Run &lt;code&gt;podman machine inspect&lt;/code&gt; and look for &lt;code&gt;"Rosetta": true&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable Rosetta:&lt;/strong&gt; When creating a new machine, use the &lt;code&gt;--rosetta&lt;/code&gt; flag (if your Podman version and macOS version support it, typically macOS 13+):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;podman machine init &lt;span class="nt"&gt;--rosetta&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: If you have an existing machine, you may need to recreate it to enable Rosetta.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Optimize Volume Mounting (VirtioFS)
&lt;/h3&gt;

&lt;p&gt;Podman uses &lt;code&gt;virtiofs&lt;/code&gt; by default on macOS, which is the fastest way to share files between the host and the VM using Apple’s Virtualization.framework. However, file system I/O can still be a bottleneck.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Avoid Deeply Nested Mounts:&lt;/strong&gt; Minimize the number of files synced by mounting only the necessary sub-directories instead of the entire home directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Named Volumes:&lt;/strong&gt; For high-I/O workloads (like database storage or &lt;code&gt;node_modules&lt;/code&gt;), use named volumes instead of bind mounts. Named volumes reside within the VM’s disk image and operate at near-native speeds.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;podman volume create my-data
podman run &lt;span class="nt"&gt;-v&lt;/span&gt; my-data:/app/data ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Tuning Resource Allocation
&lt;/h3&gt;

&lt;p&gt;Ensure the Podman machine has sufficient resources. The default settings might be conservative for demanding workloads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Increase CPUs and Memory:&lt;/strong&gt; Adjust the machine’s resources to match your workload.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;podman machine &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;--cpus&lt;/span&gt; 4 &lt;span class="nt"&gt;--memory&lt;/span&gt; 8192

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;(This requires the machine to be stopped first: &lt;code&gt;podman machine stop&lt;/code&gt;.)&lt;/em&gt;&lt;/p&gt;
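If you are unsure what value to pick, one rough heuristic (my own rule of thumb, not a Podman recommendation) is to give the VM half the host's memory, capped at 8 GiB:

```shell
# Suggest a VM memory size in MiB: half of host RAM, capped at 8192 MiB.
# On macOS the host total would normally come from
# `sysctl -n hw.memsize` (bytes, divide by 1048576 for MiB);
# it is passed in here so the sizing logic itself is easy to test.
suggest_vm_memory() {
  local host_mib=$1
  local half=$(( host_mib / 2 ))
  if (( half > 8192 )); then half=8192; fi
  echo "$half"
}

suggest_vm_memory 16384   # 16 GiB host -> 8192
suggest_vm_memory 8192    # 8 GiB host  -> 4096
```

The result can then be fed straight into `podman machine set --memory`.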

&lt;h3&gt;
  
  
  4. Networking Performance (gvproxy)
&lt;/h3&gt;

&lt;p&gt;Podman uses &lt;code&gt;gvproxy&lt;/code&gt; for user-mode networking. This is often the primary reason OrbStack feels faster for network-heavy tasks, as OrbStack uses a more direct networking approach.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduce Network Hops:&lt;/strong&gt; If possible, avoid complex port mappings or heavy network traffic through the user-mode proxy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MTU Tuning:&lt;/strong&gt; In some environments, increasing the MTU within the container can improve throughput, though this is dependent on the host’s network configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Experiment with the &lt;code&gt;libkrun&lt;/code&gt; Provider
&lt;/h3&gt;

&lt;p&gt;Podman on macOS supports multiple virtualization backends. While &lt;code&gt;applehv&lt;/code&gt; (default) is stable, &lt;code&gt;libkrun&lt;/code&gt; (based on &lt;code&gt;krun&lt;/code&gt;) can sometimes offer better performance for specific workloads, especially those involving GPU acceleration or specialized Virtio devices.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Try libkrun:&lt;/strong&gt; You can initialize a machine with the &lt;code&gt;libkrun&lt;/code&gt; provider:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;podman machine init &lt;span class="nt"&gt;--provider&lt;/span&gt; libkrun

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Summary of Recommended Configuration for Speed
&lt;/h3&gt;

&lt;p&gt;To get the best performance today, use the following initialization command (on Apple Silicon):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stop and remove existing machine if necessary&lt;/span&gt;
podman machine stop
podman machine &lt;span class="nb"&gt;rm&lt;/span&gt;

&lt;span class="c"&gt;# Initialize with optimized settings&lt;/span&gt;
podman machine init &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cpus&lt;/span&gt; 4 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--memory&lt;/span&gt; 8192 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--disk-size&lt;/span&gt; 50 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rosetta&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rootful&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By applying these optimizations, Podman’s performance on macOS should come significantly closer to OrbStack’s, especially for CPU-intensive emulation and file-system-heavy development workflows.&lt;/p&gt;

&lt;p&gt;Happy containerizing!&lt;/p&gt;

</description>
      <category>containers</category>
      <category>opensource</category>
      <category>performance</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Atlassian MCP</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Sun, 08 Feb 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/atlassian-mcp-229b</link>
      <guid>https://dev.to/ozkanpakdil/atlassian-mcp-229b</guid>
      <description>&lt;p&gt;I have been using &lt;a href="https://hub.docker.com/r/mcp/atlassian" rel="noopener noreferrer"&gt;Atlassian MCP&lt;/a&gt; with internal Confluence and Jira, and it has been wonderful.&lt;/p&gt;

&lt;p&gt;Finding internal information is often challenging and time-consuming. To be honest, searching through Jira or Confluence and locating the right information can be really difficult.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create Jira and Confluence API tokens from your internal site profile page. For example: &lt;code&gt;https://internalconfluence.company.com/profile/personal&lt;/code&gt; for Confluence and &lt;code&gt;https://jira.company.com/secure/admin/CreateAPIToken!default.jspa&lt;/code&gt; for Jira. These URLs may vary depending on your setup.&lt;/li&gt;
&lt;li&gt;Create an &lt;code&gt;mcp.json&lt;/code&gt; file in the &lt;code&gt;.vscode&lt;/code&gt; folder for Visual Studio Code, or place this MCP configuration in the appropriate folder for your IDE of choice:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mcp-atlassian"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mcp-atlassian"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"JIRA_URL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://jira.company.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"JIRA_USERNAME"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your.email@company.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"JIRA_API_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your_api_token"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CONFLUENCE_URL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://internalconfluence.company.com/wiki"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CONFLUENCE_USERNAME"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your.email@company.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CONFLUENCE_API_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your_api_token"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember to have Podman Desktop or Docker Desktop running, because this MCP runs as a Docker container.&lt;/p&gt;
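Before restarting the IDE, it is worth checking that the file is well-formed JSON, since a stray trailing comma is a common reason an MCP server silently fails to load. A small sketch using Python's standard `json.tool` module (the helper function name is mine):

```shell
# Validate an mcp.json file before the IDE tries to parse it.
# `python3 -m json.tool` exits non-zero on malformed JSON.
check_mcp_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid JSON: $1"
  else
    echo "syntax error in $1"
  fi
}

check_mcp_json .vscode/mcp.json
```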

&lt;p&gt;After that, open GitHub Copilot in your IDE and instruct it to use the Atlassian MCP to search Confluence and Jira. This makes finding internal information incredibly easy—it goes through pages systematically and retrieves all the details you need.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>mcp</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Building a Real-time File I/O Heatmap with eBPF and Java 25</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Wed, 21 Jan 2026 05:52:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/building-a-real-time-file-io-heatmap-with-ebpf-and-java-25-2fnb</link>
      <guid>https://dev.to/ozkanpakdil/building-a-real-time-file-io-heatmap-with-ebpf-and-java-25-2fnb</guid>
      <description>&lt;p&gt;Have you ever wondered exactly which files are being hammered by your Linux system in real-time? While tools like &lt;code&gt;iotop&lt;/code&gt; or &lt;code&gt;lsof&lt;/code&gt; are great, sometimes you want something more visual, custom, and lightweight.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through how I built a &lt;strong&gt;Real-time File I/O Heatmap&lt;/strong&gt; using the power of &lt;strong&gt;eBPF&lt;/strong&gt; for data collection and &lt;strong&gt;Java 25&lt;/strong&gt; for a modern Terminal UI (TUI).&lt;/p&gt;

&lt;h3&gt;
  
  
  What is eBPF and Why Use It?
&lt;/h3&gt;

&lt;p&gt;eBPF (Extended Berkeley Packet Filter) is a revolutionary technology that allows you to run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules.&lt;/p&gt;

&lt;p&gt;Think of it as &lt;strong&gt;JavaScript for the Kernel&lt;/strong&gt;. It allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Observe&lt;/strong&gt;: Attach to almost any function in the kernel (kprobes) or userspace (uprobes).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filter&lt;/strong&gt;: Process data efficiently at the source, inside the kernel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perform&lt;/strong&gt;: It’s extremely fast because it avoids expensive context switches between kernel and userspace for every event.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this project, we use eBPF to hook into &lt;code&gt;vfs_read&lt;/code&gt; and &lt;code&gt;vfs_write&lt;/code&gt;, the gatekeepers of all filesystem activity in Linux.&lt;/p&gt;

&lt;p&gt;If you want to dive deeper into eBPF, I highly recommend reading &lt;a href="https://web.archive.org/web/20251129115431/https://cilium.isovalent.com/hubfs/Learning-eBPF%20-%20Full%20book.pdf" rel="noopener noreferrer"&gt;Learning eBPF by Liz Rice&lt;/a&gt;, it’s an excellent resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Architecture
&lt;/h3&gt;

&lt;p&gt;Our project consists of three main layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The eBPF Program (C)&lt;/strong&gt;: Sits in the kernel, intercepts VFS calls, and aggregates stats (reads, writes, bytes) into a BPF Hash Map.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Java Backend (Java 25 + JNA)&lt;/strong&gt;: Uses &lt;code&gt;libbpf&lt;/code&gt; via Java Native Access (JNA) to load the BPF program into the kernel and poll the maps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The TUI (Lanterna)&lt;/strong&gt;: A Terminal User Interface that renders the data as a color-coded heatmap.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. The Kernel Side: eBPF in C
&lt;/h3&gt;

&lt;p&gt;We use BPF CO-RE (Compile Once – Run Everywhere) to ensure our program works across different kernel versions without recompilation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SEC("kprobe/vfs_read")
int BPF_KPROBE(vfs_read, struct file *file, char *buf, size_t count) {
    // Extract filename from the file struct
    // Filter out non-file noise (sockets/pipes)
    // Update the BPF map with bytes read
    return 0;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The magic happens in &lt;code&gt;file_heatmap.bpf.c&lt;/code&gt;, where we traverse the kernel’s &lt;code&gt;dentry&lt;/code&gt; structures to reconstruct partial file paths so we can actually see &lt;em&gt;what&lt;/em&gt; is being accessed.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The Bridge: JNA and libbpf
&lt;/h3&gt;

&lt;p&gt;Interfacing Java with the kernel might sound scary, but &lt;code&gt;libbpf&lt;/code&gt; makes it manageable. We defined a JNA interface to map the C functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public interface LibBpf extends Library {
    Pointer bpf_object__open(String path);
    int bpf_object__load(Pointer obj);
    int bpf_map_lookup_elem(int fd, Pointer key, Pointer value);
    // ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows our Java app to behave like a first-class Linux observability tool.&lt;/p&gt;
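<p>To make the “poll the maps” step concrete, here is a minimal sketch of how the backend could walk a BPF hash map through that interface. This is an illustration under assumptions: the map name &lt;code&gt;file_stats&lt;/code&gt;, the 8-byte key, and the 24-byte value layout are hypothetical, and it assumes the JNA interface also declares &lt;code&gt;bpf_object__find_map_fd_by_name&lt;/code&gt; and &lt;code&gt;bpf_map_get_next_key&lt;/code&gt; (both real &lt;code&gt;libbpf&lt;/code&gt; functions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical polling loop; assumes JNA on the classpath and libbpf installed.
LibBpf bpf = Native.load("bpf", LibBpf.class);
Pointer obj = bpf.bpf_object__open("file_heatmap.bpf.o");
bpf.bpf_object__load(obj);
int mapFd = bpf.bpf_object__find_map_fd_by_name(obj, "file_stats"); // map name assumed

Memory key = new Memory(8);    // assumed key size
Memory next = new Memory(8);
Memory value = new Memory(24); // assumed stat-struct size
Pointer cur = null;            // a null key starts the iteration
while (bpf.bpf_map_get_next_key(mapFd, cur, next) == 0) {
    if (bpf.bpf_map_lookup_elem(mapFd, next, value) == 0) {
        long bytesRead = value.getLong(0); // first field of the stat struct
        // ... feed into the TUI model
    }
    key.write(0, next.getByteArray(0, 8), 0, 8);
    cur = key; // continue from the key we just visited
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;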

&lt;h3&gt;
  
  
  3. The Frontend: A Terminal Heatmap
&lt;/h3&gt;

&lt;p&gt;Using the &lt;strong&gt;Lanterna&lt;/strong&gt; library, we created a TUI that updates every 2 seconds. The heatmap effect is achieved by calculating the “intensity” of I/O for each file and mapping it to a color gradient from white to red.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;float intensity = (float) currentVal / maxVal;
int green = (int) (255 * (1 - intensity));
int blue = (int) (255 * (1 - intensity));
tg.setBackgroundColor(new TextColor.RGB(255, green, blue));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
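<p>As a self-contained illustration of that gradient (the helper and class names here are hypothetical, not from the project), the same mapping can be written as a small pure-Java function that also clamps the ratio:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class HeatColor {
    // Maps an I/O ratio (0.0 = idle, 1.0 = hottest file) to a white-to-red RGB triple.
    static int[] heatColor(long currentVal, long maxVal) {
        float intensity = maxVal == 0 ? 0f : (float) currentVal / maxVal;
        intensity = Math.max(0f, Math.min(1f, intensity)); // guard against stale maxima
        int cool = (int) (255 * (1 - intensity)); // green and blue fade together
        return new int[] { 255, cool, cool };     // red channel stays saturated
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;heatColor(0, max)&lt;/code&gt; is white, &lt;code&gt;heatColor(max, max)&lt;/code&gt; is pure red, and everything in between interpolates linearly.&lt;/p&gt;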



&lt;h3&gt;
  
  
  Challenges Overcome
&lt;/h3&gt;

&lt;p&gt;Building this wasn’t without its hurdles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SDKMAN &amp;amp; Sudo&lt;/strong&gt;: BPF requires root privileges, but &lt;code&gt;sudo&lt;/code&gt; often strips the environment variables (like &lt;code&gt;JAVA_HOME&lt;/code&gt;) set by SDKMAN. I solved this in the &lt;code&gt;Makefile&lt;/code&gt; by using absolute paths and passing environment variables explicitly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BPF Verifier&lt;/strong&gt;: The kernel is very strict. Reconstructing file paths required careful use of &lt;code&gt;bpf_probe_read_kernel_str&lt;/code&gt; and &lt;code&gt;bpf_snprintf&lt;/code&gt; to keep the verifier happy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vmlinux.h Size&lt;/strong&gt;: The standard &lt;code&gt;vmlinux.h&lt;/code&gt; is over 2MB. I optimized this by using &lt;code&gt;bpftool gen min_core_btf&lt;/code&gt; to generate a &lt;strong&gt;minified&lt;/strong&gt; header (~2KB) containing only the types we actually use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Noise Filtering&lt;/strong&gt;: Initially, the heatmap was flooded with TCP/UDP socket activity. Adding a filter for &lt;code&gt;S_IFREG&lt;/code&gt; (regular files) made the output much cleaner.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How to Run It
&lt;/h3&gt;

&lt;p&gt;You can find the full source code for this project on GitHub: &lt;a href="https://github.com/ozkanpakdil/java-examlpes/tree/master/ebpf-file-heatmap" rel="noopener noreferrer"&gt;ozkanpakdil/java-examples/ebpf-file-heatmap&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’re on a Linux machine with &lt;code&gt;clang&lt;/code&gt;, &lt;code&gt;bpftool&lt;/code&gt;, and &lt;code&gt;maven&lt;/code&gt; installed, you can try it out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/ozkanpakdil/java-examples.git
cd java-examples/ebpf-file-heatmap
sudo make run

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it’s running, you can press &lt;strong&gt;1-5&lt;/strong&gt; to sort by different metrics (Reads, Writes, Bytes) and watch your system’s I/O come to life!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/user-attachments/assets/8fee6e53-6a2a-41b9-aa70-33c2002c4dc2" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27ibaxoiov6dug6cmbmg.png" alt="Image" width="800" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Combining eBPF’s low-level performance with Java’s high-level productivity (and the latest features in Java 25!) is a powerful way to build Linux tooling. Whether you’re debugging a database or just curious about what your IDE is doing in the background, this heatmap gives you a unique window into your system.&lt;/p&gt;

&lt;p&gt;Happy hacking!&lt;/p&gt;

</description>
      <category>java</category>
      <category>linux</category>
      <category>monitoring</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Bun Joins the Microservice Framework Benchmark: Surprisingly Fast JavaScript Runtime</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Sat, 10 Jan 2026 17:00:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/bun-joins-the-microservice-framework-benchmark-surprisingly-fast-javascript-runtime-119d</link>
      <guid>https://dev.to/ozkanpakdil/bun-joins-the-microservice-framework-benchmark-surprisingly-fast-javascript-runtime-119d</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Today I’m excited to announce the addition of &lt;strong&gt;Bun&lt;/strong&gt; to our &lt;a href="https://ozkanpakdil.github.io/test-microservice-frameworks/" rel="noopener noreferrer"&gt;microservice framework benchmark suite&lt;/a&gt;. The results are nothing short of remarkable: Bun has proven to be one of the fastest runtimes in our entire test suite, competing directly with Rust frameworks!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Bun?
&lt;/h2&gt;

&lt;p&gt;Bun is a modern JavaScript runtime built from scratch using &lt;a href="https://ziglang.org/" rel="noopener noreferrer"&gt;Zig&lt;/a&gt; and &lt;a href="https://developer.apple.com/documentation/javascriptcore" rel="noopener noreferrer"&gt;JavaScriptCore&lt;/a&gt; (the engine that powers Safari). It’s designed to be a drop-in replacement for Node.js with a focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt; - Native code execution and optimized I/O&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TypeScript support&lt;/strong&gt; - First-class TypeScript without transpilation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All-in-one toolkit&lt;/strong&gt; - Runtime, bundler, test runner, and package manager&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementation Details
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Bun Version:&lt;/strong&gt; 1.3.5&lt;/p&gt;

&lt;p&gt;The implementation uses Bun’s native HTTP server API, which is incredibly simple and performant:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const port = 8080;

const server = Bun.serve({
    port: port,
    fetch(req) {
        const url = new URL(req.url);

        if (url.pathname === "/hello") {
            const info = {
                name: "bun",
                releaseYear: new Date().getFullYear()
            };
            return new Response(JSON.stringify(info), {
                headers: { "Content-Type": "application/json" }
            });
        }

        return new Response("Not Found", { status: 404 });
    }
});

console.log(`Bun server started on port ${server.port}`);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The build process is straightforward. Bun can compile TypeScript directly to a standalone executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bun build --compile ./main.ts --outfile bun-demo

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Benchmark Results: The Numbers Speak
&lt;/h2&gt;

&lt;p&gt;Here are the complete benchmark results for Bun:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---- Global Information --------------------------------------------------------
&amp;gt; request count 32000 (OK=32000 KO=0 )
&amp;gt; min response time 0 (OK=0 KO=- )
&amp;gt; max response time 569 (OK=569 KO=- )
&amp;gt; mean response time 157 (OK=157 KO=- )
&amp;gt; std deviation 115 (OK=115 KO=- )
&amp;gt; response time 50th percentile 148 (OK=148 KO=- )
&amp;gt; response time 75th percentile 208 (OK=208 KO=- )
&amp;gt; response time 95th percentile 403 (OK=402 KO=- )
&amp;gt; response time 99th percentile 483 (OK=483 KO=- )
&amp;gt; mean requests/sec 6400 (OK=6400 KO=- )

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;157ms mean response time&lt;/strong&gt;: faster than Golang (227ms), .NET 9 AOT (255ms), and all JVM frameworks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;6,400 requests/sec&lt;/strong&gt;: matching the throughput of Rust frameworks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;0 failed requests&lt;/strong&gt;: 100% success rate under load (unlike Express.js, which had a 75% failure rate)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;569ms max response time&lt;/strong&gt;: excellent consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Performance Comparison
&lt;/h2&gt;

&lt;p&gt;Let’s put Bun’s performance in perspective with the top performers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Mean Response Time (ms)&lt;/th&gt;
&lt;th&gt;Requests/sec&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Rust (Warp)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;144&lt;/td&gt;
&lt;td&gt;6,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Rust (Actix)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;154&lt;/td&gt;
&lt;td&gt;6,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Rust (Axum)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;154&lt;/td&gt;
&lt;td&gt;6,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Bun&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;157&lt;/td&gt;
&lt;td&gt;6,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Golang&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;227&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Rust (Rocket)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;238&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;.NET 9 AOT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;255&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;.NET 7 AOT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;284&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;.NET 8 AOT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;285&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;GraalVM Micronaut&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;339&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Bun vs Express.js: A JavaScript Runtime Showdown
&lt;/h2&gt;

&lt;p&gt;The comparison between Bun and Express.js (Node.js) is particularly striking:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Bun&lt;/th&gt;
&lt;th&gt;Express.js (Node.js)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Mean Response Time&lt;/td&gt;
&lt;td&gt;157ms&lt;/td&gt;
&lt;td&gt;815ms (3,247ms for OK requests)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Requests/sec&lt;/td&gt;
&lt;td&gt;6,400&lt;/td&gt;
&lt;td&gt;667 (successful only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failed Requests&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;24,000 (75% failure rate)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max Response Time&lt;/td&gt;
&lt;td&gt;569ms&lt;/td&gt;
&lt;td&gt;10,719ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Bun is approximately &lt;strong&gt;5x faster&lt;/strong&gt; than Express.js in mean response time and handles &lt;strong&gt;~10x more successful requests per second&lt;/strong&gt;. Most importantly, Bun maintained 100% stability under load while Express.js struggled significantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is Bun So Fast?
&lt;/h2&gt;

&lt;p&gt;Several factors contribute to Bun’s impressive performance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;JavaScriptCore Engine&lt;/strong&gt;: Safari’s JS engine is highly optimized and often outperforms V8 in certain workloads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zig Implementation&lt;/strong&gt;: Low-level systems language with minimal overhead&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native HTTP Server&lt;/strong&gt;: Built-in server implementation bypasses the overhead of frameworks like Express&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized I/O&lt;/strong&gt;: Uses io_uring on Linux for efficient async I/O operations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Transpilation Overhead&lt;/strong&gt;: Native TypeScript execution&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Updated Performance Tiers
&lt;/h2&gt;

&lt;p&gt;With Bun’s addition, our performance tiers now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Performance Tiers:

🥇 TIER 1 (&amp;lt; 200ms): Rust frameworks, Bun
   - Native compilation or highly optimized runtimes
   - Minimal overhead, maximum throughput

🥈 TIER 2 (200-300ms): Golang, .NET AOT, GraalVM Native
   - Excellent performance with broader ecosystem

🥉 TIER 3 (300-600ms): GraalVM Java frameworks
   - Native compilation benefits for JVM

🏅 TIER 4 (&amp;gt; 600ms): JVM frameworks, Node.js/Express.js
   - Full-featured but with more overhead

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Bun’s benchmark results are genuinely surprising. A JavaScript/TypeScript runtime competing with Rust frameworks was not something I expected to see. Here are the key takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bun is production-ready for high-performance workloads&lt;/strong&gt;: The 157ms mean response time and 0% failure rate prove it can handle serious traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;JavaScript doesn’t have to be slow&lt;/strong&gt;: Bun demonstrates that with the right architecture, JavaScript can achieve near-native performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consider Bun for new projects&lt;/strong&gt;: If you’re starting a new microservice and your team knows JavaScript/TypeScript, Bun offers an excellent balance of developer experience and performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The gap between Bun and Node.js is massive&lt;/strong&gt;: If you’re currently using Express.js and need better performance, Bun is worth serious consideration.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For the complete benchmark results including all frameworks, GraalVM native builds, and detailed statistics, check out the &lt;a href="https://ozkanpakdil.github.io/test-microservice-frameworks/posts/2026/2026-01-10-microservice-framework-test-25/" rel="noopener noreferrer"&gt;full benchmark report&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://github.com/ozkanpakdil/test-microservice-frameworks" rel="noopener noreferrer"&gt;Source code for tests&lt;/a&gt; 👈 &lt;a href="https://github.com/ozkanpakdil/rust-examples" rel="noopener noreferrer"&gt;Rust examples&lt;/a&gt; 👈&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>microservices</category>
      <category>performance</category>
    </item>
    <item>
      <title>Eclipse Collections vs JDK Collections: A Performance Deep Dive</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Mon, 29 Dec 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/eclipse-collections-vs-jdk-collections-a-performance-deep-dive-21pi</link>
      <guid>https://dev.to/ozkanpakdil/eclipse-collections-vs-jdk-collections-a-performance-deep-dive-21pi</guid>
      <description>&lt;h3&gt;
  
  
  The Spark
&lt;/h3&gt;

&lt;p&gt;The other day I came across a fascinating post on Substack by &lt;a href="https://substack.com/@skilledcoder/note/c-190793397" rel="noopener noreferrer"&gt;Skilled Coder&lt;/a&gt; about Java data structure performance. The post showed some eye-opening numbers for 10M operations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get operations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;HashMap.get()&lt;/code&gt; → ~140 ms&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TreeMap.get()&lt;/code&gt; → ~420 ms&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ArrayList.get(i)&lt;/code&gt; → ~40 ms&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LinkedList.get(i)&lt;/code&gt; → ~2.5 s&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Insertion (10M elements):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ArrayList.add()&lt;/code&gt; → ~180 ms&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;HashMap.put()&lt;/code&gt; → ~300 ms&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LinkedList.add()&lt;/code&gt; → ~900 ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This got me thinking: how do these numbers compare to &lt;a href="https://eclipse.dev/collections/" rel="noopener noreferrer"&gt;Eclipse Collections&lt;/a&gt;? And more importantly, how can we calculate these numbers ourselves using open source tools?&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Eclipse Collections?
&lt;/h3&gt;

&lt;p&gt;Eclipse Collections (EC) has an interesting &lt;a href="https://eclipse.dev/collections/#history" rel="noopener noreferrer"&gt;history&lt;/a&gt;. It started around 2004 (probably for Java 1.4) because of buggy and slow implementations in the JDK at the time. Goldman Sachs originally developed it as GS Collections before donating it to the Eclipse Foundation.&lt;/p&gt;

&lt;p&gt;Today, EC provides drop-in replacements for JDK collections with additional functionality and, as we’ll see, slightly better performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Benchmark Setup
&lt;/h3&gt;

&lt;p&gt;I used &lt;a href="https://github.com/openjdk/jmh" rel="noopener noreferrer"&gt;JMH (Java Microbenchmark Harness)&lt;/a&gt; to run proper benchmarks. You can see the full results on my &lt;a href="https://ozkanpakdil.github.io/java-benchmarks/docs/eclipse-collections.html" rel="noopener noreferrer"&gt;Java Benchmarks page&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Comparison
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Get (avg per operation):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ArrayList.get()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~0.833 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;HashMap.get()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~4.324 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;TreeMap.get()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~272.823 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;LinkedList.get()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~6,036,876.394 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Insertion (avg per operation):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ArrayList.add()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~133.370 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;HashMap.put()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~378.101 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;TreeMap.put()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~432.432 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;LinkedList.add()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~408.091 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Detailed Comparison: JDK vs Eclipse Collections
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Structure&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Insertion (ns/op)&lt;/th&gt;
&lt;th&gt;Get (ns/op)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ArrayList&lt;/td&gt;
&lt;td&gt;JDK&lt;/td&gt;
&lt;td&gt;~133.370 ns&lt;/td&gt;
&lt;td&gt;~0.833 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MutableList (FastList)&lt;/td&gt;
&lt;td&gt;EC&lt;/td&gt;
&lt;td&gt;~129.426 ns&lt;/td&gt;
&lt;td&gt;~0.831 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HashMap&lt;/td&gt;
&lt;td&gt;JDK&lt;/td&gt;
&lt;td&gt;~378.101 ns&lt;/td&gt;
&lt;td&gt;~4.324 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MutableMap (UnifiedMap)&lt;/td&gt;
&lt;td&gt;EC&lt;/td&gt;
&lt;td&gt;~371.230 ns&lt;/td&gt;
&lt;td&gt;~3.796 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TreeMap&lt;/td&gt;
&lt;td&gt;JDK&lt;/td&gt;
&lt;td&gt;~432.432 ns&lt;/td&gt;
&lt;td&gt;~272.823 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TreeSortedMap&lt;/td&gt;
&lt;td&gt;EC&lt;/td&gt;
&lt;td&gt;~480.139 ns&lt;/td&gt;
&lt;td&gt;~271.022 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LinkedList&lt;/td&gt;
&lt;td&gt;JDK&lt;/td&gt;
&lt;td&gt;~408.091 ns&lt;/td&gt;
&lt;td&gt;~6,036,876.394 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Eclipse Collections is slightly faster overall&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the most commonly used collections (List and Map), EC shows consistent improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FastList&lt;/strong&gt; beats ArrayList by ~3% on insertion and is essentially equal on get&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UnifiedMap&lt;/strong&gt; beats HashMap by ~2% on insertion and ~12% on get&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. TreeMap vs TreeSortedMap is a wash&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;TreeSortedMap is slightly slower on insertion (~11%) but marginally faster on get. If you need sorted maps, either choice works well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. LinkedList is still terrible for random access&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Look at that &lt;code&gt;LinkedList.get()&lt;/code&gt; number: &lt;strong&gt;~6 million nanoseconds&lt;/strong&gt; per operation! This is because LinkedList has O(n) complexity for random access — it must traverse the list from the beginning (or end) to find each element.&lt;/p&gt;

&lt;p&gt;As Skilled Coder wisely noted: &lt;em&gt;“Once you know this, you stop misusing LinkedList forever.”&lt;/em&gt;&lt;/p&gt;
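<p>You can reproduce the effect with a quick, self-contained demo (class name hypothetical; the list size is scaled down so it finishes quickly, and for rigorous numbers you should use JMH as above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

class RandomAccessDemo {
    // Sums list elements by index: O(1) per get for ArrayList, O(n) for LinkedList.
    static long indexedSum(List&amp;lt;Integer&amp;gt; list) {
        long sum = 0;
        for (int i = 0; i &amp;lt; list.size(); i++) {
            sum += list.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        List&amp;lt;Integer&amp;gt; array = new ArrayList&amp;lt;&amp;gt;();
        List&amp;lt;Integer&amp;gt; linked = new LinkedList&amp;lt;&amp;gt;();
        for (int i = 0; i &amp;lt; 30_000; i++) {
            array.add(i);
            linked.add(i);
        }
        long t0 = System.nanoTime();
        indexedSum(array);
        long arrayNs = System.nanoTime() - t0;
        t0 = System.nanoTime();
        indexedSum(linked);
        long linkedNs = System.nanoTime() - t0;
        System.out.printf("ArrayList: %d ms, LinkedList: %d ms%n",
                arrayNs / 1_000_000, linkedNs / 1_000_000);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Even at this modest size, the LinkedList pass should be dramatically slower, despite both lists holding identical data.&lt;/p&gt;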

&lt;h3&gt;
  
  
  Why These Performance Differences?
&lt;/h3&gt;

&lt;p&gt;Understanding the “why” helps you make better choices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ArrayList/FastList&lt;/strong&gt; = contiguous memory, cache-friendly. The CPU can prefetch data efficiently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HashMap/UnifiedMap&lt;/strong&gt; = hashing + pointer chasing. UnifiedMap uses a more compact memory layout.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TreeMap/TreeSortedMap&lt;/strong&gt; = O(log n) + rebalancing. Red-black tree operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedList&lt;/strong&gt; = worst cache locality + pointer traversal. Every access is a cache miss.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When to Use Eclipse Collections
&lt;/h3&gt;

&lt;p&gt;Consider EC when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You’re doing heavy collection operations and every nanosecond counts&lt;/li&gt;
&lt;li&gt;You want additional APIs like &lt;code&gt;select()&lt;/code&gt;, &lt;code&gt;reject()&lt;/code&gt;, &lt;code&gt;collect()&lt;/code&gt;, &lt;code&gt;groupBy()&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;You need primitive collections (avoiding boxing overhead)&lt;/li&gt;
&lt;li&gt;You want immutable collections with a rich API&lt;/li&gt;
&lt;/ul&gt;
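<p>For a taste of that API, here is a minimal sketch (assuming the &lt;code&gt;eclipse-collections&lt;/code&gt; dependency is on the classpath; the values are made up for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.api.list.primitive.MutableIntList;
import org.eclipse.collections.impl.factory.Lists;
import org.eclipse.collections.impl.factory.primitive.IntLists;

MutableList&amp;lt;Integer&amp;gt; nums = Lists.mutable.of(1, 2, 3, 4, 5);
MutableList&amp;lt;Integer&amp;gt; evens  = nums.select(n -&amp;gt; n % 2 == 0); // keep matches: [2, 4]
MutableList&amp;lt;Integer&amp;gt; odds   = nums.reject(n -&amp;gt; n % 2 == 0); // drop matches: [1, 3, 5]
MutableList&amp;lt;String&amp;gt; labels  = nums.collect(n -&amp;gt; "#" + n);   // transform each element

// Primitive lists avoid Integer boxing entirely:
MutableIntList ints = IntLists.mutable.of(1, 2, 3);
long total = ints.sum();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;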

&lt;h3&gt;
  
  
  Running Your Own Benchmarks
&lt;/h3&gt;

&lt;p&gt;Want to reproduce these results? Check out the &lt;a href="https://github.com/openjdk/jmh" rel="noopener noreferrer"&gt;JMH documentation&lt;/a&gt; and my benchmark code at &lt;a href="https://ozkanpakdil.github.io/java-benchmarks/" rel="noopener noreferrer"&gt;java-benchmarks&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrap-up
&lt;/h3&gt;

&lt;p&gt;This was a fun research project! The numbers confirm what the Eclipse Collections team has been saying for years: their implementations are well-optimized and can provide meaningful performance improvements over JDK collections.&lt;/p&gt;

&lt;p&gt;For most applications, the difference won’t be noticeable. But if you’re building high-performance systems or processing large datasets, EC is worth considering.&lt;/p&gt;

&lt;p&gt;I shared these findings on &lt;a href="https://substack.com/@thejvmbender/note/c-192541890/stats" rel="noopener noreferrer"&gt;my Substack&lt;/a&gt; — feel free to check it out and share your own benchmark experiences!&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://eclipse.dev/collections/" rel="noopener noreferrer"&gt;Eclipse Collections&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://eclipse.dev/collections/#history" rel="noopener noreferrer"&gt;Eclipse Collections History&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openjdk/jmh" rel="noopener noreferrer"&gt;JMH - Java Microbenchmark Harness&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ozkanpakdil.github.io/java-benchmarks/docs/eclipse-collections.html" rel="noopener noreferrer"&gt;My Java Benchmarks Page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://substack.com/@skilledcoder/note/c-190793397" rel="noopener noreferrer"&gt;Skilled Coder’s Original Post&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>algorithms</category>
      <category>java</category>
      <category>performance</category>
    </item>
    <item>
      <title>From PKIX errors to a clean mTLS + Feign + IAM demo</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Fri, 05 Dec 2025 18:50:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/from-pkix-errors-to-a-clean-mtls-feign-iam-demo-56oe</link>
      <guid>https://dev.to/ozkanpakdil/from-pkix-errors-to-a-clean-mtls-feign-iam-demo-56oe</guid>
      <description>&lt;h3&gt;
  
  
  Why this post
&lt;/h3&gt;

&lt;p&gt;I started this mini‑project after seeing a common roadblock: &lt;code&gt;PKIX path building failed&lt;/code&gt; when calling HTTPS services with OpenFeign. The goal was to create a tiny, runnable example that eliminates guesswork, shows how to configure client certificates and trust properly, and layers basic IAM policies on top.&lt;/p&gt;

&lt;p&gt;Reference: &lt;a href="https://stackoverflow.com/questions/79835509/unable-to-configure-ssl-context-for-open-feign-client-getting-pkix-error" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/79835509/unable-to-configure-ssl-context-for-open-feign-client-getting-pkix-error&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s inside the example
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Two Spring Boot apps:

&lt;ul&gt;
&lt;li&gt;Server: HTTPS on 8443, requires client certs (mTLS), and recognizes/authorizes callers with Spring Security’s X.509 support.&lt;/li&gt;
&lt;li&gt;Client: Spring Cloud OpenFeign calling the server via Apache HttpClient5 with a custom &lt;code&gt;SSLContext&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;A one‑command cert toolchain (local CA → server/client certs → PKCS#12 keystores/truststores).&lt;/li&gt;

&lt;li&gt;An automated test script that runs a positive call (expected 200) and a negative call with an unauthorized client (expected 403).&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Project repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;github.com/ozkanpakdil/spring-examples/mtls-feignclient-client-server-iam&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why this avoids PKIX
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The Feign client explicitly uses a truststore that contains the CA that signed the server certificate.&lt;/li&gt;
&lt;li&gt;The client presents its own certificate (keystore) during TLS handshake for mutual auth.&lt;/li&gt;
&lt;li&gt;No reliance on default JDK trust settings; the &lt;code&gt;SSLContext&lt;/code&gt; is built explicitly and injected into Feign’s HttpClient5.&lt;/li&gt;
&lt;/ul&gt;
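<p>In plain JDK terms, building that explicit &lt;code&gt;SSLContext&lt;/code&gt; boils down to something like the following sketch (class and method names are placeholders; the repository’s &lt;code&gt;SslFeignConfig.java&lt;/code&gt; is the authoritative version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

class MutualTlsContext {
    // Loads a PKCS#12 store from disk.
    static KeyStore loadPkcs12(Path path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (InputStream in = Files.newInputStream(path)) {
            ks.load(in, password);
        }
        return ks;
    }

    // Builds an SSLContext that presents our client cert and trusts only our CA.
    static SSLContext build(KeyStore keyStore, char[] keyPassword, KeyStore trustStore) throws Exception {
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyPassword);   // client identity for the mTLS handshake
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);              // contains the CA that signed the server cert
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The resulting context is then handed to Apache HttpClient5 and from there into Feign, so no call ever falls back to the default JDK trust store.&lt;/p&gt;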

&lt;h3&gt;
  
  
  How to run (quick)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1) Generate demo certs
cd certs &amp;amp;&amp;amp; chmod +x gen-certs.sh &amp;amp;&amp;amp; ./gen-certs.sh

# 2) Start server (terminal A)
cd server &amp;amp;&amp;amp; mvn spring-boot:run

# 3) Start client (terminal B)
cd client &amp;amp;&amp;amp; mvn spring-boot:run
# Look for: "Received from server: Hello from secure server!"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or run the end‑to‑end script (boots server, runs client, then negative test with an unauthorized cert → 403 as expected):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x client/scripts/test-mtls.sh
client/scripts/test-mtls.sh --tail 200

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  IAM in one paragraph
&lt;/h3&gt;

&lt;p&gt;mTLS answers “who is calling?” at the transport layer using X.509 certificates from a trusted CA. Many systems also need app‑level IAM: mapping that certificate to an application principal and enforcing authorization policies. Here, Spring Security X.509 maps the Subject CN (e.g., &lt;code&gt;demo-client&lt;/code&gt;) to a user and requires role &lt;code&gt;CLIENT&lt;/code&gt; for &lt;code&gt;/api/hello&lt;/code&gt;. Our negative test shows a different CN gets a clean &lt;code&gt;403&lt;/code&gt; — proving authorization on top of TLS validation.&lt;/p&gt;
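<p>Condensed, the server-side idea looks roughly like this Spring Security sketch (a simplified illustration, not the project’s exact code; the repository’s &lt;code&gt;SecurityConfig.java&lt;/code&gt; linked below is the real implementation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Bean
SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
    http
        // Extract the principal from the client certificate's Subject CN
        .x509(x509 -&amp;gt; x509.subjectPrincipalRegex("CN=(.*?)(?:,|$)"))
        .authorizeHttpRequests(auth -&amp;gt; auth
            .requestMatchers("/api/hello").hasRole("CLIENT")
            .anyRequest().authenticated());
    return http.build();
}

@Bean
UserDetailsService users() {
    // Only the CN "demo-client" maps to a principal with role CLIENT;
    // any other certificate passes TLS validation but gets 403 here.
    return new InMemoryUserDetailsManager(
        User.withUsername("demo-client").password("{noop}unused").roles("CLIENT").build());
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;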

&lt;h3&gt;
  
  
  Key files (direct links)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Top‑level overview (README)

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ozkanpakdil/spring-examples/tree/main/mtls-feignclient-client-server-iam/README.md" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/tree/main/mtls-feignclient-client-server-iam/README.md&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Certificate toolchain

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/certs/gen-certs.sh" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/certs/gen-certs.sh&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Client (Feign over mTLS)

&lt;ul&gt;
&lt;li&gt;SSLContext wiring: &lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/client/src/main/java/dev/demo/client/config/SslFeignConfig.java" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/client/src/main/java/dev/demo/client/config/SslFeignConfig.java&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Properties: &lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/client/src/main/resources/application.yml" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/client/src/main/resources/application.yml&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Server (mTLS + X.509 security)

&lt;ul&gt;
&lt;li&gt;HTTPS/mTLS config: &lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/resources/application.yml" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/resources/application.yml&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Security (X.509 mapping + authorization): &lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/SecurityConfig.java" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/SecurityConfig.java&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Clean certificate access in controllers:&lt;/li&gt;
&lt;li&gt;Annotation: &lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/security/ClientCert.java" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/security/ClientCert.java&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Resolver: &lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/security/ClientCertArgumentResolver.java" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/security/ClientCertArgumentResolver.java&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;MVC config: &lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/WebConfig.java" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/WebConfig.java&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Example endpoint: &lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/HelloController.java" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/server/src/main/java/dev/demo/server/HelloController.java&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Automated E2E script (positive + negative):

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/client/scripts/test-mtls.sh" rel="noopener noreferrer"&gt;https://github.com/ozkanpakdil/spring-examples/blob/main/mtls-feignclient-client-server-iam/client/scripts/test-mtls.sh&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Troubleshooting PKIX fast
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PKIX path building failed&lt;/code&gt; → Your client truststore must include the CA that signed the server cert.&lt;/li&gt;
&lt;li&gt;Hostname verification → Ensure SANs cover the hostname you call (e.g., &lt;code&gt;localhost&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Is Feign using your &lt;code&gt;SSLContext&lt;/code&gt;? → Provide a Feign &lt;code&gt;Client&lt;/code&gt; bean backed by HttpClient5 configured with your keystore and truststore.&lt;/li&gt;
&lt;/ul&gt;
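&lt;p&gt;As a rough sketch of that last point: the core of the wiring is building an &lt;code&gt;SSLContext&lt;/code&gt; from your client keystore and truststore. The snippet below uses only the JDK and illustrative names (it is not the repository's actual code; see the linked &lt;code&gt;SslFeignConfig.java&lt;/code&gt; for the real Feign/HttpClient5 wiring):&lt;/p&gt;

```java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.security.KeyStore;

public class SslContextSketch {
    // Builds an SSLContext that presents the client cert (keystore) and
    // trusts the CA that signed the server cert (truststore).
    static SSLContext build(KeyStore keyStore, char[] keyPass, KeyStore trustStore) throws Exception {
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyPass);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore); // must contain the server's CA, or PKIX path building fails
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // Empty in-memory stores just to show the wiring; real code loads PKCS12 files.
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null);
        KeyStore ts = KeyStore.getInstance("PKCS12");
        ts.load(null, null);
        System.out.println(build(ks, new char[0], ts).getProtocol()); // prints "TLS"
    }
}
```

&lt;p&gt;The resulting context is what a Feign &lt;code&gt;Client&lt;/code&gt; bean (for example, one backed by Apache HttpClient 5) should be configured with, so Feign stops falling back to the default JVM truststore.&lt;/p&gt;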

&lt;h3&gt;
  
  
  Wrap‑up
&lt;/h3&gt;

&lt;p&gt;If you’re battling PKIX with OpenFeign, start from this working baseline. It shows the complete chain — certs → TLS → Feign SSL → X.509 auth → endpoint authorization — plus a negative test to validate policy. The code is intentionally small, and the repository README goes deeper if you need more detail.&lt;/p&gt;

</description>
      <category>java</category>
      <category>security</category>
      <category>tutorial</category>
      <category>networking</category>
    </item>
    <item>
      <title>Adding Golang and Express.js to the Microservice Framework Benchmark Suite</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Wed, 03 Dec 2025 21:49:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/adding-golang-and-expressjs-to-the-microservice-framework-benchmark-suite-194c</link>
      <guid>https://dev.to/ozkanpakdil/adding-golang-and-expressjs-to-the-microservice-framework-benchmark-suite-194c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ozkanpakdil.github.io/test-microservice-frameworks/posts/2025/2025-12-03-microservice-framework-test-25/" rel="noopener noreferrer"&gt;Test results for this benchmark run →&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Over the last two days, I’ve expanded our microservice framework benchmark suite to include two new contenders: &lt;strong&gt;Golang&lt;/strong&gt; and &lt;strong&gt;Express.js&lt;/strong&gt;. This addition allows us to compare performance across a broader spectrum of technologies, from compiled languages like Rust and Go to JVM-based frameworks and Node.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  New Additions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Golang (Go 1.24.10)
&lt;/h3&gt;

&lt;p&gt;Go was added using the &lt;strong&gt;standard library only&lt;/strong&gt; - no external frameworks. The implementation uses the &lt;code&gt;net/http&lt;/code&gt; package, which is known for its excellent performance and simplicity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Go Version:&lt;/strong&gt; 1.24.10&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Server:&lt;/strong&gt; Standard library &lt;code&gt;net/http&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No external dependencies&lt;/strong&gt; - pure Go implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binary size:&lt;/strong&gt; ~7.6 MB
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "encoding/json"
    "log"
    "net/http"
    "time"
)

type ApplicationInfo struct {
    Name string `json:"name"`
    ReleaseYear int `json:"releaseYear"`
}

func helloHandler(w http.ResponseWriter, r *http.Request) {
    info := ApplicationInfo{
        Name: "golang",
        ReleaseYear: time.Now().Year(),
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(info)
}

func main() {
    http.HandleFunc("/hello", helloHandler)
    log.Println("Golang server started on port 8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Express.js (Node.js v20.19.6)
&lt;/h3&gt;

&lt;p&gt;Express.js was added using Node.js 20 with the &lt;strong&gt;Single Executable Application (SEA)&lt;/strong&gt; feature, which allows bundling the application into a standalone executable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Express.js Version:&lt;/strong&gt; 4.21.0&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js Version:&lt;/strong&gt; v20.19.6&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bundler:&lt;/strong&gt; esbuild 0.24.0&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Packaging:&lt;/strong&gt; Node.js SEA (Single Executable Application) using postject 1.0.0-alpha.6&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binary size:&lt;/strong&gt; Self-contained executable
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const app = express();
const port = 8080;

app.get('/hello', (req, res) =&amp;gt; {
    const info = {
        name: 'expressjs',
        releaseYear: new Date().getFullYear()
    };
    res.json(info);
});

app.listen(port, () =&amp;gt; {
    console.log(`Express.js server started on port ${port}`);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Benchmark Results Overview
&lt;/h2&gt;

&lt;p&gt;The results align with expectations based on each technology’s characteristics:&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance Ranking (by mean response time, lower is better):
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Mean Response Time (ms)&lt;/th&gt;
&lt;th&gt;Requests/sec&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Rust (Warp)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;135&lt;/td&gt;
&lt;td&gt;6,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Rust (Axum)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;141&lt;/td&gt;
&lt;td&gt;6,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Rust (Actix)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;171&lt;/td&gt;
&lt;td&gt;6,400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Rust (Rocket)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;191&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Golang&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;212&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;.NET 8 AOT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;261&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;.NET 9 AOT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;285&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;.NET 7 AOT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;353&lt;/td&gt;
&lt;td&gt;5,333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Java Robaho&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;474&lt;/td&gt;
&lt;td&gt;4,571&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Express.js&lt;/strong&gt; *&lt;/td&gt;
&lt;td&gt;789&lt;/td&gt;
&lt;td&gt;2,667&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Micronaut&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;823&lt;/td&gt;
&lt;td&gt;3,556&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Avaje Jex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;854&lt;/td&gt;
&lt;td&gt;2,133&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Ktor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;920&lt;/td&gt;
&lt;td&gt;1,684&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Vertx&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1,019&lt;/td&gt;
&lt;td&gt;4,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Quarkus&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1,133&lt;/td&gt;
&lt;td&gt;3,200&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Spring Boot Web&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1,238&lt;/td&gt;
&lt;td&gt;2,909&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Spring WebFlux&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1,279&lt;/td&gt;
&lt;td&gt;2,462&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Kumuluz&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1,384&lt;/td&gt;
&lt;td&gt;2,667&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Key Observations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Golang Performance
&lt;/h3&gt;

&lt;p&gt;Golang delivered excellent results with a &lt;strong&gt;212ms mean response time&lt;/strong&gt; and &lt;strong&gt;5,333 requests/sec&lt;/strong&gt;, placing it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Just behind the Rust frameworks&lt;/li&gt;
&lt;li&gt;Ahead of all .NET versions&lt;/li&gt;
&lt;li&gt;Significantly faster than all JVM-based frameworks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This performance comes from Go’s efficient runtime, goroutine-based concurrency model, and the highly optimized standard library HTTP server. The fact that we achieved these results with zero external dependencies is impressive.&lt;/p&gt;

&lt;h3&gt;
  
  
  Express.js Performance and Stability Issues
&lt;/h3&gt;

&lt;p&gt;Express.js showed a &lt;strong&gt;789ms overall mean response time&lt;/strong&gt;, but this number is misleading due to severe stability issues under load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical concern:&lt;/strong&gt; Express.js had a &lt;strong&gt;75% failure rate&lt;/strong&gt; - out of 32,000 total requests, &lt;strong&gt;24,000 failed (KO)&lt;/strong&gt; and only 8,000 succeeded (OK). For successful requests only, the mean response time was actually &lt;strong&gt;3,137ms&lt;/strong&gt;. This is dramatically worse than all other frameworks in our test suite, which typically show 0 KO (failed) requests. The successful request rate was only &lt;strong&gt;667 requests/sec&lt;/strong&gt; compared to Golang’s 5,333 requests/sec.&lt;/p&gt;

&lt;p&gt;The errors could be attributed to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Single-threaded nature of Node.js&lt;/strong&gt; - Under heavy concurrent load, the event loop can become saturated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connection handling limits&lt;/strong&gt; - Default configuration may not be optimized for high concurrency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SEA packaging&lt;/strong&gt; - The experimental Single Executable Application feature might have some performance implications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory pressure&lt;/strong&gt; - Node.js garbage collection under load&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Technology Stack Comparison
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Performance Tiers:

🥇 TIER 1 (&amp;lt; 250ms): Rust frameworks, Golang
   - Native compilation, minimal runtime overhead

🥈 TIER 2 (250-500ms): .NET AOT, Java Native (Robaho)
   - AOT compilation benefits, optimized runtimes

🥉 TIER 3 (500-1200ms): Micronaut, Ktor, Avaje, Vertx, Express.js, Quarkus
   - JVM frameworks with varying optimizations
   - Node.js with event-driven I/O

🏅 TIER 4 (&amp;gt; 1200ms): Spring Boot, Kumuluz
   - Full-featured frameworks with more overhead

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Build and Packaging Details
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Golang Build
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CGO_ENABLED=0 go build -o golang-demo .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simple, fast compilation producing a statically linked binary.&lt;/p&gt;

&lt;h3&gt;
  
  
  Express.js Build (Node.js SEA)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
npx esbuild main.js --bundle --platform=node --outfile=bundle.js
node --experimental-sea-config sea-config.json
cp $(command -v node) expressjs-demo
chmod 755 expressjs-demo
npx postject expressjs-demo NODE_SEA_BLOB sea-prep.blob \
    --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A multi-step process to create a self-contained executable from a Node.js application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The addition of Golang and Express.js to our benchmark suite provides valuable insights:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Golang&lt;/strong&gt; proves to be an excellent choice for high-performance microservices, offering near-Rust performance with a gentler learning curve and excellent developer experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Express.js&lt;/strong&gt; delivers acceptable performance at moderate load, but in this test it showed severe stability problems under heavy concurrency. For high-throughput scenarios, consider alternatives like Fastify or native solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The performance hierarchy&lt;/strong&gt; is now clearer; see the tier breakdown above.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;a href="https://github.com/ozkanpakdil/test-microservice-frameworks" rel="noopener noreferrer"&gt;Source code for tests&lt;/a&gt; 👈 &lt;a href="https://github.com/ozkanpakdil/rust-examples" rel="noopener noreferrer"&gt;Rust examples&lt;/a&gt; 👈&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>performance</category>
      <category>node</category>
      <category>go</category>
    </item>
    <item>
      <title>How to solve macos hdmi sound control problem</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Fri, 21 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/how-to-solve-macos-hdmi-sound-control-problem-4dd5</link>
      <guid>https://dev.to/ozkanpakdil/how-to-solve-macos-hdmi-sound-control-problem-4dd5</guid>
      <description>&lt;p&gt;So I got my Mac Mini M4 from Amazon for £500 and started using it. I had so many problems with the shortcuts I normally use even Ctrl+A wasn’t working, I had to use Win+A, and many other shortcuts were different. One of the biggest problems was using the sound keys on the keyboard. On Linux they worked fine: sound up and down controlled the output volume. But on macOS it didn’t work. Very strange policy Apple has macOS doesn’t allow the user to control end devices connected through HDMI.&lt;/p&gt;

&lt;p&gt;The solution is &lt;a href="https://github.com/briankendall/proxy-audio-device" rel="noopener noreferrer"&gt;proxy-audio-device&lt;/a&gt;, installed via brew. Now the system output sound is controlled over HDMI. This proxy audio device shows itself as a sound output option, and we can control it. The sound configuration looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdm8pr79benqvgvxhvduu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdm8pr79benqvgvxhvduu.png" alt="Image" width="800" height="695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As seen in the picture, the proxy audio device is selected and you can easily change the volume. It is open source and works nicely; I am happy with it.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://apple.stackexchange.com/a/336751/683942" rel="noopener noreferrer"&gt;https://apple.stackexchange.com/a/336751/683942&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://apple.stackexchange.com/a/374380/683942" rel="noopener noreferrer"&gt;https://apple.stackexchange.com/a/374380/683942&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>macos</category>
    </item>
    <item>
      <title>MacOS on debian QEMU KVM</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Thu, 13 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/macos-on-debian-qemu-kvm-15fi</link>
      <guid>https://dev.to/ozkanpakdil/macos-on-debian-qemu-kvm-15fi</guid>
      <description>&lt;h2&gt;
  
  
  From Frustration to Breakthrough: Running macOS on KVM
&lt;/h2&gt;

&lt;p&gt;For years, I chased the dream of running &lt;strong&gt;macOS in a virtual machine&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
On Windows, I tried VMware and VirtualBox countless times with different tutorials and blogs. Each attempt ended in frustration: crashes, unsupported hardware, endless configuration rabbit holes. It felt like a goal always just out of reach. Finally I found &lt;a href="https://github.com/kholia/OSX-KVM" rel="noopener noreferrer"&gt;https://github.com/kholia/OSX-KVM&lt;/a&gt;, whose README explains the setup steps.&lt;/p&gt;

&lt;p&gt;First couple of attempts failed as usual.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Breakthrough
&lt;/h2&gt;

&lt;p&gt;After many failed experiments, I paired with &lt;strong&gt;GitHub Copilot&lt;/strong&gt; to help refine the setup. This time, things clicked.&lt;/p&gt;

&lt;p&gt;The key changes were subtle but powerful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adjusting CPU flags and thread allocation for better compatibility&lt;/li&gt;
&lt;li&gt;Increasing RAM and core counts to give macOS breathing room&lt;/li&gt;
&lt;li&gt;Adding a no‑reboot option and restructuring QEMU arguments for stability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can see the details of these tweaks &lt;a href="https://github.com/ozkanpakdil/OSX-KVM/pull/1/files" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Moment of Success
&lt;/h2&gt;

&lt;p&gt;After days of trial and error, I woke up one morning, applied the final tweaks, and it worked. macOS booted smoothly inside my QEMU VM. A moment of triumph after years of effort.&lt;/p&gt;

&lt;p&gt;Here’s the screenshot I shared on the &lt;a href="https://techhub.social/@thejvmbender/115541503397049297" rel="noopener noreferrer"&gt;Fediverse&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xp3ejx46ydgw3r3ykbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xp3ejx46ydgw3r3ykbj.png" alt="macOS VM running GitHub page in QEMU on Debian" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Reflections
&lt;/h2&gt;

&lt;p&gt;Running macOS on KVM isn’t just about virtualization.&lt;br&gt;&lt;br&gt;
For me, it’s proof that with patience, experimentation, and the right guidance, even long‑standing technical goals can be achieved.&lt;/p&gt;

&lt;p&gt;Thanks to &lt;strong&gt;Debian&lt;/strong&gt; for the rock‑solid foundation, and &lt;strong&gt;Copilot&lt;/strong&gt; for being the companion that helped me cross the finish line.&lt;/p&gt;

&lt;p&gt;I am still thinking of buying a Mac Mini though; the VM is too slow 😄&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>linux</category>
      <category>tooling</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Paint, a lightweight image editor for quick edits</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Wed, 12 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/paint-a-lightweight-image-editor-for-quick-edits-3d6a</link>
      <guid>https://dev.to/ozkanpakdil/paint-a-lightweight-image-editor-for-quick-edits-3d6a</guid>
      <description>&lt;h2&gt;
  
  
  Why I built Paint
&lt;/h2&gt;

&lt;p&gt;Whenever I wanted to make a very small edit on my Debian laptop (crop a screenshot, add a quick arrow, or block out a small area), the only image editor on the system was GIMP. Powerful, but heavy and slow for tiny, frequent tasks. I wanted something nimble: quick to open, easy to use, and focused on the common one-off edits people do dozens of times a day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ozkanpakdil/paint" rel="noopener noreferrer"&gt;This project&lt;/a&gt; started from that itch and grew into a polished, compact Swing-based application (with GraalVM native builds and platform installers). The goal remains the same: make the small workflows lightning fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Short feature summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Core drawing tools&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Editing &amp;amp; image handling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;UI &amp;amp; UX&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;File formats &amp;amp; packaging&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Development timeline (high-level commit history)
&lt;/h2&gt;

&lt;p&gt;This timeline is extracted from the project’s git history to tell the story of how Paint evolved.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2025-11-03, initial commit, project foundation and first implementation.&lt;/li&gt;
&lt;li&gt;2025-11-04, undo/redo functionality and keyboard shortcuts were added, UI/ribbon alignment improved.&lt;/li&gt;
&lt;li&gt;2025-11-05, File→Open added; installer scaffolding and CI release flows introduced (installer dev release commits).&lt;/li&gt;
&lt;li&gt;2025-11-05, reachability metadata updates and Linux packaging/icon tweaks for installers.&lt;/li&gt;
&lt;li&gt;2025-11-06, FlatLaf integration for modern look &amp;amp; feel; canvas size controls wired into status bar; tests improved.&lt;/li&gt;
&lt;li&gt;2025-11-06, OS-specific native-image packaging settings and multi-line app description support for installers.&lt;/li&gt;
&lt;li&gt;2025-11-07, reachability metadata and DEB packaging refinements.&lt;/li&gt;
&lt;li&gt;2025-11-08, Transparent Highlighter and Drawing Arrows added; crop behavior and image placement refined; open image from CLI arguments.&lt;/li&gt;
&lt;li&gt;2025-11-10, CI consolidation: native-image builds and OS packaging unified into a single workflow; Linux/macOS packaging improvements.&lt;/li&gt;
&lt;li&gt;2025-11-11, final release staging and small fixes for canvas sizing and GUI status synchronization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the exact commit list and messages, see the repository’s &lt;code&gt;git log&lt;/code&gt; history.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works (implementation notes)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Canvas model: the app uses a persistent &lt;code&gt;BufferedImage&lt;/code&gt; as the backing canvas (&lt;code&gt;DrawArea.cache&lt;/code&gt;) and a separate transparent &lt;code&gt;highlightLayer&lt;/code&gt; for the highlighter tool. This makes highlights non-accumulating and easy to composite.&lt;/li&gt;
&lt;li&gt;Drawing: continuous tools (pencil, eraser, highlighter) commit during dragging for a responsive feel; shapes/lines/arrow/bucket commit at mouse release (with an undo snapshot taken appropriately).&lt;/li&gt;
&lt;li&gt;Image placement: pasted or opened images are shown as a pending overlay with a dashed border; the user can drag them, press Enter to accept (commits to the backing image), or Esc to cancel (restores any cut area if moving a selection).&lt;/li&gt;
&lt;li&gt;Undo/Redo: snapshot-based (base + highlight) with a capped history size to keep memory usage bounded.&lt;/li&gt;
&lt;li&gt;Packaging: Maven profiles for native (&lt;code&gt;-Pnative&lt;/code&gt;) and installer (&lt;code&gt;-Pinstaller&lt;/code&gt;) builds; CI workflows prepare and upload native and installer artifacts automatically.&lt;/li&gt;
&lt;/ul&gt;
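&lt;p&gt;The capped, snapshot-based history from the notes above can be sketched like this (an illustrative class, not the project's actual code; real snapshots would be pairs of base + highlight &lt;code&gt;BufferedImage&lt;/code&gt;s):&lt;/p&gt;

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a snapshot-based undo/redo history with a capped size.
class UndoHistory<T> {
    private final Deque<T> undoStack = new ArrayDeque<>();
    private final Deque<T> redoStack = new ArrayDeque<>();
    private final int cap;

    UndoHistory(int cap) { this.cap = cap; }

    // Call before every edit with the current canvas snapshot.
    void push(T snapshot) {
        if (undoStack.size() == cap) undoStack.removeLast(); // drop the oldest to bound memory
        undoStack.push(snapshot);
        redoStack.clear(); // a new edit invalidates the redo chain
    }

    // Returns the previous snapshot, stashing the current one for redo.
    T undo(T current) {
        if (undoStack.isEmpty()) return current;
        redoStack.push(current);
        return undoStack.pop();
    }

    T redo(T current) {
        if (redoStack.isEmpty()) return current;
        undoStack.push(current);
        return redoStack.pop();
    }
}
```

&lt;p&gt;Dropping the oldest snapshot once the cap is reached is what keeps memory bounded even for large canvases.&lt;/p&gt;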

&lt;h2&gt;
  
  
  Try it locally
&lt;/h2&gt;

&lt;p&gt;Requirements: Java (JDK 25+ recommended), Maven 3.6+.&lt;/p&gt;

&lt;p&gt;Build and run the JAR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ozkan/projects/paint
mvn -q clean package
java -jar target/paint-1.0.0.jar

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open an image directly from the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -jar target/paint-1.0.0.jar /path/to/screenshot.png

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GraalVM native (optional):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export JAVA_HOME=/path/to/graalvm
export PATH="$JAVA_HOME/bin:$PATH"
gu install native-image # if needed
mvn -Pnative -DskipTests -q clean package
./target/paint

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Packaging: there are helper scripts in &lt;code&gt;src/installer/ci/bin/&lt;/code&gt; and a Maven &lt;code&gt;installer&lt;/code&gt; profile that uses &lt;code&gt;jpackage&lt;/code&gt; to produce platform installers (DEB, DMG, MSI). See the README for details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Roadmap (next steps)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;UX&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distribution&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tests &amp;amp; CI&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Notes &amp;amp; credits
&lt;/h2&gt;

&lt;p&gt;This app is intentionally lightweight and focuses on the frequent day-to-day editing tasks where a full image editor is overkill. The project uses FlatLaf for modern theming and includes helper scripts and CI workflows to build native binaries and installers.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>java</category>
    </item>
    <item>
      <title>Java imperative vs functional in 2025 — revisiting a 2015 microbenchmark</title>
      <dc:creator>özkan pakdil</dc:creator>
      <pubDate>Mon, 29 Sep 2025 20:22:00 +0000</pubDate>
      <link>https://dev.to/ozkanpakdil/java-imperative-vs-functional-in-2025-revisiting-a-2015-microbenchmark-hin</link>
      <guid>https://dev.to/ozkanpakdil/java-imperative-vs-functional-in-2025-revisiting-a-2015-microbenchmark-hin</guid>
      <description>&lt;p&gt;Quick numbers (avg; smaller is faster)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/ozkanpakdil/ozkanpakdil.github.io/blob/ec3bbcee3a1fcd28c673d4bcca8138b878a4a2be/scripts/java25-bench/Benchmark.java#L119" rel="noopener noreferrer"&gt;I (imperative nested)&lt;/a&gt;: 3.28 µs&lt;/li&gt;
&lt;li&gt;I2 (imperative freq-map): 1.93 µs&lt;/li&gt;
&lt;li&gt;F (streams grouping): 127.37 µs&lt;/li&gt;
&lt;li&gt;FP (parallel streams grouping): 599.28 µs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Winner: &lt;a href="https://github.com/ozkanpakdil/ozkanpakdil.github.io/blob/ec3bbcee3a1fcd28c673d4bcca8138b878a4a2be/scripts/java25-bench/Benchmark.java#L130" rel="noopener noreferrer"&gt;I2 — imperative freq-map&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: These are sample numbers from the run below on my machine; yours will differ. I/F labels mirror the 2015 post for a simple visual compare.&lt;/p&gt;
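&lt;p&gt;For context, the fastest and slowest single-threaded styles differ roughly like this; this is an illustrative contrast with hypothetical method names, not the harness code itself (that is linked above):&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CountStyles {
    // I2: imperative frequency map. One pass, one HashMap, no stream machinery.
    static Map<Integer, Integer> freqMap(int[] data) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int v : data) freq.merge(v, 1, Integer::sum);
        return freq;
    }

    // F: streams grouping. Concise, but boxes every element and builds collector plumbing.
    static Map<Integer, Long> streamGroup(int[] data) {
        return IntStream.of(data).boxed()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        int[] data = IntStream.range(0, 12).map(i -> i % 4).toArray();
        // Each of the values 0..3 appears three times in data.
        System.out.println(freqMap(data).get(0) + " " + streamGroup(data).get(0)); // prints "3 3"
    }
}
```

&lt;p&gt;Both produce the same counts; the gap in the numbers above comes from the boxing and collector overhead of the stream pipeline on such tiny inputs.&lt;/p&gt;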

&lt;p&gt;== 2015-style harness (I:/F: lines) ==&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ozkan@ozkan-debian:~/projects/ozkanpakdil.github.io/scripts/compare-2015-25&lt;span class="nv"&gt;$ &lt;/span&gt;./run.sh
javac 25
I:5372
F:22032373
I:5816
F:186352
F:144816
F:134903
F:107685
I:4919
I:4903
I:4698
I:4147
F:104857
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;== 2025 benchmark summary (fastest → slowest) ==&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;javac 25
Java version: 25
CPU cores: 8
&lt;span class="nv"&gt;Size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;12, &lt;span class="nv"&gt;warmup&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5, &lt;span class="nv"&gt;iters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10

imperativeNested:
avg  &lt;span class="o"&gt;=&lt;/span&gt; 3.28 µs
p50  &lt;span class="o"&gt;=&lt;/span&gt; 1.82 µs
p90  &lt;span class="o"&gt;=&lt;/span&gt; 3.62 µs
p99  &lt;span class="o"&gt;=&lt;/span&gt; 14.36 µs
raw  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1769, 1772, 1787, 1788, 1797, 1818, 1827, 2300, 3622, 14357]

imperativeFreqMap:
avg  &lt;span class="o"&gt;=&lt;/span&gt; 1.93 µs
p50  &lt;span class="o"&gt;=&lt;/span&gt; 1.77 µs
p90  &lt;span class="o"&gt;=&lt;/span&gt; 1.98 µs
p99  &lt;span class="o"&gt;=&lt;/span&gt; 3.18 µs
raw  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;1715, 1716, 1724, 1727, 1762, 1771, 1845, 1900, 1979, 3182]

streamGrouping:
avg  &lt;span class="o"&gt;=&lt;/span&gt; 127.37 µs
p50  &lt;span class="o"&gt;=&lt;/span&gt; 127.73 µs
p90  &lt;span class="o"&gt;=&lt;/span&gt; 164.10 µs
p99  &lt;span class="o"&gt;=&lt;/span&gt; 176.66 µs
raw  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;88484, 88562, 96348, 121006, 125770, 127731, 138254, 146769, 164102, 176661]

parallelStreamGrouping:
avg  &lt;span class="o"&gt;=&lt;/span&gt; 599.28 µs
p50  &lt;span class="o"&gt;=&lt;/span&gt; 576.47 µs
p90  &lt;span class="o"&gt;=&lt;/span&gt; 927.77 µs
p99  &lt;span class="o"&gt;=&lt;/span&gt; 931.11 µs
raw  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;365417, 384289, 411498, 519010, 523009, 576465, 580818, 773447, 927771, 931106]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Summary (fastest → slowest):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;imperativeFreqMap — avg=1.93 µs (x1.00)&lt;/li&gt;
&lt;li&gt;imperativeNested — avg=3.28 µs (x1.70)&lt;/li&gt;
&lt;li&gt;streamGrouping — avg=127.37 µs (x65.92)&lt;/li&gt;
&lt;li&gt;parallelStreamGrouping — avg=599.28 µs (x310.17)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Winner (2025 run): imperativeFreqMap&lt;/p&gt;
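
&lt;p&gt;The avg/p50/p90/p99 figures come from a plain nanoTime harness: discard some warmup rounds so the JIT has compiled the hot path, time each measured round, then sort the raw timings and read the percentiles off by index. A minimal sketch of that pattern (class and method names here are illustrative, not the actual harness):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.Arrays;
import java.util.function.IntSupplier;

public class BenchSketch {
    // Run warmup rounds (discarded), then time each measured round.
    static long[] measure(IntSupplier workload, int warmup, int iters) {
        for (int i = 0; i &amp;lt; warmup; i++) workload.getAsInt(); // JIT warmup
        long[] nanos = new long[iters];
        for (int i = 0; i &amp;lt; iters; i++) {
            long t0 = System.nanoTime();
            workload.getAsInt();
            nanos[i] = System.nanoTime() - t0;
        }
        Arrays.sort(nanos); // sorted raw timings, like the raw = [...] lines
        return nanos;
    }

    // Percentile by index into the sorted timings.
    static long percentile(long[] sorted, int p) {
        int idx = Math.min(sorted.length - 1, p * sorted.length / 100);
        return sorted[idx];
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3};
        long[] sorted = measure(() -&amp;gt; Arrays.stream(a).sum(), 5, 10);
        System.out.printf("p50=%d ns, p90=%d ns, p99=%d ns%n",
                percentile(sorted, 50), percentile(sorted, 90), percentile(sorted, 99));
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
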

&lt;p&gt;Tip: You can change SIZE/WARMUP/ITERS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;./run.sh 12 5 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See also (2015): &lt;a href="https://dev.to/java-performance/2015/09/19/java-imperative-vs-functional/"&gt;Java imperative and functional approach performance test&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2015-style run numbers (same format)&lt;/p&gt;

&lt;p&gt;If you prefer the exact I:/F: lines from the original 2015 post, run this tiny harness that prints the same format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./scripts/legacy-2015-run/run.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;It compiles a small Test2015.java and prints lines like &lt;code&gt;I:12345&lt;/code&gt; and &lt;code&gt;F:67890&lt;/code&gt; (your real numbers will vary by machine/load).&lt;/li&gt;
&lt;/ul&gt;
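
&lt;p&gt;If the repo script isn't handy, the shape of that harness is roughly the following. This is a sketch, not the actual Test2015.java; only the I:/F: output format is reproduced, and the real workload may differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.stream.IntStream;

public class Test2015 {
    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};

        long t0 = System.nanoTime();
        int imperative = 0;
        for (int v : a) imperative += v;        // plain loop
        System.out.println("I:" + (System.nanoTime() - t0));

        long t1 = System.nanoTime();
        int functional = IntStream.of(a).sum(); // stream pipeline
        System.out.println("F:" + (System.nanoTime() - t1));
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
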

&lt;p&gt;Minimal code (like 2015)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Imperative (nested loops, 2015-style)&lt;/span&gt;
&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="o"&gt;++)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;++)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="o"&gt;])&lt;/span&gt; &lt;span class="n"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;];&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
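
&lt;p&gt;The winner, imperativeFreqMap, isn't shown above. The idea is to replace the O(n²) pair scan with a single pass that remembers how many times each value has appeared so far; every earlier duplicate forms exactly one matching pair. A sketch of that shape (the actual benchmark method may differ in details):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;// Frequency-map single pass (sketch, names assumed): each element pairs
// once with every earlier occurrence of the same value, so adding
// a[i] * (occurrences seen so far) gives the same total as the nested loops.
int[] a = {3, 1, 3, 3, 2, 1}; // example input
java.util.Map&amp;lt;Integer, Integer&amp;gt; seen = new java.util.HashMap&amp;lt;&amp;gt;();
int sum = 0;
for (int v : a) {
    sum += v * seen.getOrDefault(v, 0); // one pair per earlier duplicate
    seen.merge(v, 1, Integer::sum);     // record this occurrence
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
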





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Functional-ish with Streams (grouping)&lt;/span&gt;
&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;java&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;util&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;IntStream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;boxed&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;collect&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;util&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;Collectors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;groupingBy&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;entrySet&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getValue&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;mapToInt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getValue&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;mapToInt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;Integer:&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;intValue&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;sum&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sum&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
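
&lt;p&gt;For reference, parallelStreamGrouping is presumably the same pipeline with .parallel() added; splitting a 12-element array across the common ForkJoinPool is pure coordination overhead, which is why it finishes last. Roughly (assuming the benchmark differs only in that call):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;// Parallel variant (sketch): identical grouping pipeline, with .parallel()
// fanning the work out across the common ForkJoinPool.
int[] a = {1, 2, 1, 1, 2, 3}; // example input
int total = java.util.stream.IntStream.of(a).parallel().boxed()
    .collect(java.util.stream.Collectors.groupingBy(i -&amp;gt; i))
    .entrySet().stream()
    .filter(e -&amp;gt; e.getValue().size() &amp;gt; 1)
    .mapToInt(e -&amp;gt; e.getValue().stream().mapToInt(Integer::intValue).sum())
    .sum();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
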



&lt;p&gt;&lt;a href="https://github.com/ozkanpakdil/ozkanpakdil.github.io/blob/4dd36a1b07b982dbad8e8283bc28efc7ebc8bb24/scripts/java25-bench/run.sh#L1" rel="noopener noreferrer"&gt;Run it yourself&lt;/a&gt; (no Maven/JMH)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./scripts/java25-bench/run.sh 12 5 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Arguments are: size, warmup, iters. Try sizes of 12, 1,000, and 100,000 to see where Streams catch up and where parallelism starts to help.&lt;/li&gt;
&lt;li&gt;Prints a ranked summary (fastest → slowest) with real timings from your machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaways&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small data + simple work: tight loops are still fastest and allocate less.&lt;/li&gt;
&lt;li&gt;Streams improved since 2015 but have overhead on tiny inputs.&lt;/li&gt;
&lt;li&gt;Parallel streams: only useful for big inputs or heavy per-element work.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Side-by-side: 2015 vs 2025&lt;/p&gt;

&lt;p&gt;Run one command to see both the original 2015-style I:/F: lines and the 2025 ranked summary together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./scripts/compare-2015-25/run.sh 12 5 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Args are still: size warmup iters (for the 2025 part). The 2015 harness always uses the original array of 12 elements.&lt;/li&gt;
&lt;li&gt;Output shows two blocks: “2015-style harness” and “2025 benchmark summary”, plus a Winner line.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>performance</category>
      <category>algorithms</category>
      <category>programming</category>
      <category>java</category>
    </item>
  </channel>
</rss>
