<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kendrick B. Jung</title>
    <description>The latest articles on DEV Community by Kendrick B. Jung (@sonim1).</description>
    <link>https://dev.to/sonim1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3772700%2Fa93f9bcb-42a5-4678-a9e8-b26f841ff3f6.png</url>
      <title>DEV Community: Kendrick B. Jung</title>
      <link>https://dev.to/sonim1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sonim1"/>
    <language>en</language>
    <item>
      <title>Codex Fast Mode vs Claude Fast Mode: What’s Actually Different?</title>
      <dc:creator>Kendrick B. Jung</dc:creator>
      <pubDate>Tue, 31 Mar 2026 14:08:35 +0000</pubDate>
      <link>https://dev.to/sonim1/codex-fast-mode-vs-claude-fast-mode-whats-actually-different-2kf5</link>
      <guid>https://dev.to/sonim1/codex-fast-mode-vs-claude-fast-mode-whats-actually-different-2kf5</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Both Codex and Claude support a fast mode, but the way they achieve speed is completely different. Codex has two tracks: either it serves the same GPT-5.4 model about 1.5× faster, or it runs a separate small model called Spark on Cerebras wafer-scale hardware at more than 1,000 tokens per second. Claude keeps the same Opus 4.6 model and speeds it up through infrastructure-level prioritization, with output speed improving by up to 2.5×. The tradeoffs around price, speed, and intelligence retention are subtle, and which option is better depends on your workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  What got me curious
&lt;/h2&gt;

&lt;p&gt;Since I use both Codex and Claude Code, I already knew both sides offered a fast mode. But the pricing felt different, the speed felt different, and the user experience felt different. Sean Goedecke’s post, "Two different tricks for fast LLM inference," made it clear that the two companies were solving the problem in fundamentally different ways, so I started digging deeper.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codex fast mode: really two different tracks
&lt;/h2&gt;

&lt;p&gt;On the Codex side, there are actually two things that can reasonably be called fast.&lt;/p&gt;

&lt;p&gt;The first is GPT-5.4 fast mode. It serves the same GPT-5.4 model about 1.5× faster while consuming 2× the credits. Since the model itself does not change, there is no intelligence drop. In the CLI, it is just a simple &lt;code&gt;/fast on&lt;/code&gt; toggle.&lt;/p&gt;

&lt;p&gt;Nathan Lambert noted that even when running GPT-5.4 fast mode at xhigh reasoning effort, he never hit the Codex usage limit, while Claude could still hit its limits occasionally. Whether that comes from better token efficiency or looser limits on OpenAI’s side, it does feel noticeably roomier in practice.&lt;/p&gt;

&lt;p&gt;The second is GPT-5.3-Codex-Spark, which is a separate model entirely. This is the truly ultra-fast path, running on Cerebras WSE-3 (Wafer-Scale Engine 3) hardware. It can generate more than 1,000 tokens per second. Right now, it is available as a research preview for ChatGPT Pro subscribers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cerebras WSE-3: a different world from GPUs
&lt;/h2&gt;

&lt;p&gt;Cerebras WSE-3 is fundamentally different from a conventional GPU. NVIDIA’s flagship B200 has around 208 billion transistors, while the Cerebras chip packs 4 trillion transistors across roughly 900,000 cores on a single silicon wafer. The core advantage is memory bandwidth: up to 27 petabytes per second on chip. Since memory bandwidth is one of the real bottlenecks in LLM inference, Cerebras attacks that bottleneck directly at the hardware level.&lt;/p&gt;
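&lt;p&gt;A rough roofline sketch shows why bandwidth dominates. In memory-bound autoregressive decoding, every generated token streams the model weights through memory once, so tokens per second per stream is bounded by bandwidth divided by weight bytes. The model size and the GPU bandwidth figure below are illustrative assumptions, not measured numbers; only the 27 PB/s on-chip figure comes from the text above:&lt;/p&gt;

```python
# Roofline-style upper bound for memory-bound decoding:
# tokens/sec per stream is at most bandwidth / bytes streamed per token.
# Concrete numbers are illustrative assumptions, not benchmarks.

def max_tokens_per_sec(bandwidth_bytes_per_s, weight_bytes):
    return bandwidth_bytes_per_s / weight_bytes

weights = 20e9   # hypothetical 20B-parameter model at 1 byte per weight
hbm     = 8e12   # assumed ~8 TB/s, HBM-class GPU memory bandwidth
sram    = 27e15  # 27 PB/s, the on-chip figure cited above

print(max_tokens_per_sec(hbm, weights))   # 400.0 tok/s ceiling per stream
print(max_tokens_per_sec(sram, weights))  # 1350000.0; other limits bind first
```

The point of the sketch is only the ratio: with the same model, the on-chip bandwidth ceiling sits orders of magnitude above the HBM one, which is why the bottleneck moves elsewhere.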

&lt;p&gt;That said, WSE-3 only has 44GB of on-chip memory, so it is difficult to place a very large model like GPT-5.3-Codex on it wholesale. That is why Spark is a smaller model. In real use, some people say it still carries that familiar "small model smell," especially when tool calls get messy.&lt;/p&gt;

&lt;p&gt;OpenAI and Cerebras have also announced a multi-year partnership worth up to $10B, including plans for a 750MW data center. The longer-term direction seems clear: Spark is likely just the beginning of putting bigger frontier models onto Cerebras hardware.&lt;/p&gt;

&lt;p&gt;OpenAI also shared infrastructure-level optimizations around Spark. By introducing persistent WebSocket connections and optimizing the Responses API internals, they say they reduced client-server roundtrip overhead by 80%, token overhead by 30%, and TTFT by 50%. So the speedup is not only about the model itself. It is also about tightening the whole pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude fast mode: same model, different infrastructure
&lt;/h2&gt;

&lt;p&gt;Claude’s approach is much simpler. The Opus 4.6 model stays exactly the same. If you set &lt;code&gt;speed: "fast"&lt;/code&gt; in the API, Anthropic prioritizes the request at the infrastructure layer. According to the official docs, output token speed can improve by up to 2.5×. The focus is on output throughput rather than TTFT.&lt;/p&gt;
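&lt;p&gt;As a concrete sketch, a request body might look like this. Only the &lt;code&gt;speed&lt;/code&gt; field comes from the fast-mode docs; the model identifier and the surrounding fields are assumptions for illustration, so check the API reference before relying on them:&lt;/p&gt;

```python
# Sketch of a Messages API request body using the speed field described
# above. Model name and field layout are assumptions for illustration.
payload = {
    "model": "claude-opus-4-6",  # assumed identifier for Opus 4.6
    "max_tokens": 1024,
    "speed": "fast",             # same model, prioritized serving
    "messages": [
        {"role": "user", "content": "Summarize this diff."},
    ],
}

print(payload["speed"])  # fast
```

Note that nothing else in the request changes: the model, the prompt, and the sampling parameters are identical to a normal call, which is exactly the point of this approach.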

&lt;p&gt;Anthropic has not publicly disclosed the full implementation details, but the likely explanation is smaller-batch inference with more dedicated GPU capacity. Smaller batches are less efficient for overall GPU utilization, but they improve response speed for individual requests. That inefficiency is then recouped by the 6× premium pricing.&lt;/p&gt;

&lt;p&gt;In Claude Code, fast mode is toggled with &lt;code&gt;/fast&lt;/code&gt;, and it requires version 2.1.36 or later. When enabled, it automatically switches to Opus 4.6 and shows a ↯ icon next to the prompt.&lt;/p&gt;

&lt;p&gt;One important detail is that fast mode usage is not included in the normal subscription usage bucket. It is billed as extra usage. Pricing kicks in from the very first token, so cost management matters.&lt;/p&gt;

&lt;p&gt;Fast mode and effort level are also completely different axes. If you lower effort, the model simply spends less time reasoning and quality may drop. Fast mode, by contrast, serves the same reasoning process faster at the infrastructure level. You can combine them: fast mode plus lower effort for simpler tasks, fast mode plus higher effort for more complex ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  The core difference
&lt;/h2&gt;

&lt;p&gt;The most important distinctions look like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Codex GPT-5.4 fast mode: about 1.5× speed, 2× credits, same model&lt;/li&gt;
&lt;li&gt;Codex Spark: 1,000+ tokens per second (roughly 15× or more), served by a separate, smaller model&lt;/li&gt;
&lt;li&gt;Claude fast mode: up to 2.5× speed, 6× price, same Opus 4.6 model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sean Goedecke captures the difference well. Anthropic is still serving the actual Opus 4.6 model, while OpenAI’s Spark path uses a separate lower-capability model. In terms of raw speed, Spark is dramatically faster. In terms of quality retention, Claude has the stronger position.&lt;/p&gt;

&lt;p&gt;There is also a broader point here: the value of an AI agent is often determined less by raw speed and more by how rarely it makes mistakes. If something is 6× faster but increases mistakes by 20%, that can easily be a net loss, because fixing those mistakes may take much longer than waiting for the model.&lt;/p&gt;
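&lt;p&gt;A toy expected-cost model makes that arithmetic concrete. All numbers here are illustrative assumptions, not measurements:&lt;/p&gt;

```python
# Toy expected-cost model for the tradeoff above: a speedup only pays off
# if the extra mistakes do not eat the savings. Numbers are illustrative.

def expected_minutes(gen_minutes, mistake_pct, fix_minutes):
    # expected total = generation time + P(mistake) * time to fix it
    return gen_minutes + mistake_pct * fix_minutes / 100

slow = expected_minutes(6.0, 10, 30.0)  # 6 min generation, 10% mistakes
fast = expected_minutes(1.0, 30, 30.0)  # 6x faster, 20 points more mistakes

print(slow)  # 9.0
print(fast)  # 10.0 -- faster generation, but a net loss overall
```

With these assumed numbers, the 6× faster path loses despite generating in a sixth of the time, because the occasional half-hour fix dominates the expectation.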

&lt;p&gt;So if you compare same-model fast modes only, Claude offers a bigger speed bump than Codex, but it is also much more expensive. If you include Spark, OpenAI has the more extreme speed story, but you have to remember it is not the same model.&lt;/p&gt;

&lt;h2&gt;
  
  
  What about speculative decoding?
&lt;/h2&gt;

&lt;p&gt;Early in my research, I came across claims that Codex fast mode used speculative decoding. That does not seem accurate. Speculative decoding itself is a real and widely used inference optimization technique, but I could not find official confirmation that Codex fast mode specifically uses it.&lt;/p&gt;

&lt;p&gt;The idea behind speculative decoding is elegant. A small draft model predicts upcoming tokens first, and then the larger main model verifies them in a single pass. Google published work on this in 2022 and later discussed using it in products like AI Overviews, where it can deliver 2–3× speedups while preserving the same output distribution.&lt;/p&gt;

&lt;p&gt;For Codex Spark, though, the main speed story seems much more tied to the hardware characteristics of Cerebras itself. The model benefits from staying close to on-chip SRAM and avoiding the usual memory bandwidth bottlenecks. It is possible that speculative decoding is also used somewhere internally, but there is no official confirmation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;Peter Steinberger is one of the most fascinating examples of where this kind of workflow can go. He reportedly runs four OpenAI subscriptions and one Anthropic subscription, spends around $1,000 per month, runs 3–8 Codex CLI sessions in a 3×3 terminal grid, and can hit 600 commits in a day. That is a completely different scale. By his own estimate, API usage would cost about 10× more, so running multiple subscriptions is actually the more rational option. More recently, he has even joined OpenAI.&lt;/p&gt;

&lt;p&gt;What is especially interesting is that Peter used to be a serious Claude Code power user but gradually shifted toward Codex. His reason was surprisingly relatable: Claude Code kept saying things like "absolutely right" and "100% production ready" even when tests were failing, and he found that unbearable. Codex, by contrast, felt more like an introverted engineer quietly doing the work. He also said Codex tends to read far more code before starting, which lets it infer intent well even from short prompts. Eventually he canceled additional Anthropic subscriptions and made Codex his main driver, even though he still uses Claude in a smaller role.&lt;/p&gt;

&lt;p&gt;Whether I am on Claude Max or Codex Pro, I usually cannot even consume the full weekly quota. But people like that are running five subscriptions at once. If you listen to AI podcasts, there are quite a few people using even more. A while ago I had to force myself to adapt to a kind of parallel-project brain just to burn through huge amounts of tokens, and it was honestly exhausting. Now I do not really get the headache anymore. Instead, I get stuck wondering what else I could even do with all this capacity. That is how one project leads to another, and another task appears from there.&lt;/p&gt;

&lt;p&gt;In the end, running several projects at once becomes a kind of refresh loop. If I look away from one blocked project for a while and work on another, ideas tend to come back. Peter described it as doing one thing while another is "cooking," then switching again while that one cooks too. My scale is obviously smaller, but I recognize the pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  Refs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developers.openai.com/codex/speed" rel="noopener noreferrer"&gt;Codex Speed - OpenAI Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/index/introducing-gpt-5-3-codex-spark/" rel="noopener noreferrer"&gt;Introducing GPT-5.3-Codex-Spark - OpenAI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/index/introducing-gpt-5-4/" rel="noopener noreferrer"&gt;Introducing GPT-5.4 - OpenAI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.claude.com/docs/en/build-with-claude/fast-mode" rel="noopener noreferrer"&gt;Fast mode - Claude API Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/fast-mode" rel="noopener noreferrer"&gt;Speed up responses with fast mode - Claude Code Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.seangoedecke.com/fast-llm-inference/" rel="noopener noreferrer"&gt;Two different tricks for fast LLM inference - Sean Goedecke&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.interconnects.ai/p/openai-codex-gpt54" rel="noopener noreferrer"&gt;GPT 5.4 is a big step for Codex - Nathan Lambert&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cerebras.ai/blog/openai-codexspark" rel="noopener noreferrer"&gt;Introducing GPT-5.3-Codex-Spark - Cerebras Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://research.google/blog/looking-back-at-speculative-decoding/" rel="noopener noreferrer"&gt;Looking back at speculative decoding - Google Research&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://steipete.me/posts/2026/just-talk-to-it" rel="noopener noreferrer"&gt;Just Talk To It - Peter Steinberger&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>infrastructure</category>
      <category>llm</category>
      <category>performance</category>
    </item>
    <item>
      <title>Using git worktree for parallel AI agent development</title>
      <dc:creator>Kendrick B. Jung</dc:creator>
      <pubDate>Tue, 24 Mar 2026 12:45:18 +0000</pubDate>
      <link>https://dev.to/sonim1/using-git-worktree-for-parallel-ai-agent-development-44nb</link>
      <guid>https://dev.to/sonim1/using-git-worktree-for-parallel-ai-agent-development-44nb</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;If you want to run multiple AI coding agents in parallel, &lt;code&gt;git worktree&lt;/code&gt; is the answer. It gives each branch its own working directory inside the same repository, so you do not need stash gymnastics or multiple clones.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Git Worktree?
&lt;/h2&gt;

&lt;p&gt;Even if you are juggling several tasks, a human developer can still only work in one context at a time. The old pattern was to stash your current changes, check out another branch, do some work there, and then come back and pop the stash later.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git worktree&lt;/code&gt; changes that entire flow. It lets one Git repository have multiple working directories attached to it. Normally, a repository has a single working tree. With worktree, you can keep the same &lt;code&gt;.git&lt;/code&gt; history and object database while checking out different branches into separate folders.&lt;/p&gt;

&lt;p&gt;The structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/projects/
├── my-app/                 ← main worktree (main branch)
│   └── .git/               ← real git data
├── my-app-feature/         ← linked worktree (feature/auth branch)
│   └── .git                ← not a directory, but a file pointing to the main .git
└── my-app-hotfix/          ← linked worktree (hotfix/login branch)
    └── .git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each worktree has its own HEAD, index, and working files, but they all share commit history and Git objects. In terms of Git objects, extra disk usage is minimal. But dependencies like &lt;code&gt;node_modules&lt;/code&gt; or &lt;code&gt;.venv&lt;/code&gt; still need to exist per worktree, so heavy projects can consume disk space quickly if you keep many worktrees around.&lt;/p&gt;

&lt;p&gt;There is also one important limitation: you cannot check out the same branch in two worktrees at once (unless you override the safeguard with &lt;code&gt;--force&lt;/code&gt;). This is intentional. It prevents the confusion of having the same branch diverge across multiple active directories.&lt;/p&gt;
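&lt;p&gt;You can see the safeguard in a throwaway repository. This sketch assumes a reasonably recent &lt;code&gt;git&lt;/code&gt; (2.28+ for &lt;code&gt;init -b&lt;/code&gt;) on &lt;code&gt;PATH&lt;/code&gt; and uses hypothetical temp-dir paths:&lt;/p&gt;

```python
# Demonstrates the one-branch-per-worktree rule in a throwaway repo.
# Requires git on PATH; all paths are temp-dir assumptions.
import os
import subprocess
import tempfile

def git(*args, cwd):
    return subprocess.run(("git",) + args, cwd=cwd, text=True,
                          capture_output=True)

root = tempfile.mkdtemp()
repo = os.path.join(root, "my-app")
os.mkdir(repo)
git("init", "-b", "main", cwd=repo)
git("-c", "user.email=x@example.com", "-c", "user.name=x",
    "commit", "--allow-empty", "-m", "init", cwd=repo)

# First checkout of feature/auth into a linked worktree succeeds.
ok = git("worktree", "add", os.path.join(root, "wt1"),
         "-b", "feature/auth", cwd=repo)
# A second worktree on the same branch is refused.
dup = git("worktree", "add", os.path.join(root, "wt2"),
          "feature/auth", cwd=repo)
print(ok.returncode, dup.returncode)  # 0, then a nonzero error code
```

The second call fails with a message along the lines of the branch already being used by another worktree, which is exactly the divergence protection described above.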

&lt;h2&gt;
  
  
  When did it arrive?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;git worktree&lt;/code&gt; officially landed with Git 2.5 in July 2015. A major contributor was Nguyễn Thái Ngọc Duy, who had been refining the idea for years. At launch it still wore an experimental label and had some submodule compatibility issues, but those rough edges have largely been resolved over time.&lt;/p&gt;

&lt;p&gt;Later releases added more lifecycle commands. Git 2.7 brought &lt;code&gt;git worktree list&lt;/code&gt;, Git 2.13 added &lt;code&gt;git worktree lock&lt;/code&gt; and &lt;code&gt;git worktree unlock&lt;/code&gt;, and Git 2.17 introduced &lt;code&gt;git worktree move&lt;/code&gt; and &lt;code&gt;git worktree remove&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I only started paying real attention to it recently, but clearly many people had already been quietly using it for years. It spent nearly a decade as one of those “great if you know it” features. Once AI coding agents became normal, though, it suddenly started feeling essential.&lt;/p&gt;

&lt;h2&gt;
  
  
  Harness engineering: why worktree matters
&lt;/h2&gt;

&lt;p&gt;Harness engineering is not about building the AI agent itself. It is about designing and orchestrating the environment you delegate work into. &lt;code&gt;git worktree&lt;/code&gt; becomes incredibly powerful once that environment exists.&lt;/p&gt;

&lt;p&gt;Agents like Claude Code and Codex read and write files directly in the working directory. If an agent is working on the &lt;code&gt;feature/payments&lt;/code&gt; branch, that directory may be sitting in a half-modified state at any moment.&lt;/p&gt;

&lt;p&gt;What happens if you check out another branch in that same directory, or launch a second agent into it? Best case, you create confusion. Worst case, you end up with conflicting file states and agents working from the wrong code snapshot.&lt;/p&gt;

&lt;p&gt;The old solution was &lt;code&gt;git stash&lt;/code&gt;, but once several stashes pile up, it becomes annoying to remember which one belongs to which task. Cloning the repository multiple times also works, but now you are duplicating repo state and losing the convenience of sharing local history and objects directly.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git worktree&lt;/code&gt; solves this cleanly. Each AI session gets a fully independent directory tied to its own branch, while history and objects remain shared. Claude Code made this even more explicit by adding an official &lt;code&gt;--worktree&lt;/code&gt; flag in February 2026, effectively promoting this workflow to a first-class citizen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dooi48v9a372amjc6r5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3dooi48v9a372amjc6r5.webp" alt="A conceptual diagram of multiple AI agents working in parallel on separate Git worktrees" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting a worktree-based workflow
&lt;/h2&gt;

&lt;p&gt;The basic commands are simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a new worktree with a new branch&lt;/span&gt;
git worktree add ../my-app-feature &lt;span class="nt"&gt;-b&lt;/span&gt; feature/auth

&lt;span class="c"&gt;# Create a worktree from an existing branch&lt;/span&gt;
git worktree add ../my-app-hotfix hotfix/login

&lt;span class="c"&gt;# List all attached worktrees&lt;/span&gt;
git worktree list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two common directory layouts. The first puts worktrees next to the main project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/projects/
├── my-app/
├── my-app-feature-auth/
└── my-app-hotfix-login/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second keeps them inside the project under a &lt;code&gt;trees/&lt;/code&gt; folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/my-app/
├── src/
├── .git/
└── trees/
    ├── feature-auth/
    └── hotfix-login/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you use the second pattern, do not forget to add &lt;code&gt;trees/&lt;/code&gt; to &lt;code&gt;.gitignore&lt;/code&gt;. Otherwise the main worktree will see them as untracked files.&lt;/p&gt;

&lt;p&gt;There is one more thing to handle when creating worktrees. Files ignored by Git, such as &lt;code&gt;.env&lt;/code&gt;, are not copied automatically. A plain &lt;code&gt;cp&lt;/code&gt; works, but then you need to repeat that every time the main &lt;code&gt;.env&lt;/code&gt; changes. A symlink is often more convenient because updates in the main worktree are reflected everywhere:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create worktree, then link .env&lt;/span&gt;
git worktree add trees/feature-auth &lt;span class="nt"&gt;-b&lt;/span&gt; feature/auth
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/.env"&lt;/span&gt; trees/feature-auth/.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have multiple environment files like &lt;code&gt;.env.local&lt;/code&gt; and &lt;code&gt;.env.development&lt;/code&gt;, it helps to wrap this in a shell function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ~/.zshrc or ~/.bashrc&lt;/span&gt;
wt&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  git worktree add &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;f &lt;span class="k"&gt;in&lt;/span&gt; .env .env.local .env.development&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"linked &lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;done&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Usage&lt;/span&gt;
wt trees/feature-auth feature/auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you use Claude Code, the official &lt;code&gt;--worktree&lt;/code&gt; flag makes the flow even simpler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a worktree and start Claude Code in one step&lt;/span&gt;
claude &lt;span class="nt"&gt;--worktree&lt;/span&gt; feature-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That single command creates &lt;code&gt;.claude/worktrees/feature-auth/&lt;/code&gt;, checks out a new branch there, and starts the Claude session inside it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working inside a worktree
&lt;/h2&gt;

&lt;p&gt;Once the worktree exists, you just move into that directory and work as usual. IDEs and editors can also open each worktree as a separate project.&lt;/p&gt;

&lt;p&gt;With AI agents, it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terminal 1 - feature work&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ../my-app-feature-auth
claude &lt;span class="c"&gt;# or codex, gemini-cli, etc.&lt;/span&gt;

&lt;span class="c"&gt;# Terminal 2 - hotfix work, at the same time&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ../my-app-hotfix-login
claude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While one agent is working, you can review the output from another. You stop being the person waiting for code and start being the person directing parallel work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg883t1qujwy56zfz8fgq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg883t1qujwy56zfz8fgq.webp" alt="A repository graph showing multiple agent branches connected to a shared Git history" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Commits inside each worktree work exactly the same as usual. Since the branch is already separated, you do not have to think much about context switching.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"feat: add auth middleware"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  After the work is done
&lt;/h2&gt;

&lt;p&gt;Once the task is finished, the rest looks like a normal PR workflow. If you are already inside the worktree directory, push naturally goes to that branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ../my-app-feature-auth
git push &lt;span class="nt"&gt;-u&lt;/span&gt; origin feature/auth
gh &lt;span class="nb"&gt;pr &lt;/span&gt;create &lt;span class="nt"&gt;--title&lt;/span&gt; &lt;span class="s2"&gt;"Add auth middleware"&lt;/span&gt; &lt;span class="nt"&gt;--base&lt;/span&gt; main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After opening the PR, there are three common ways to integrate the branch back into the main worktree.&lt;/p&gt;

&lt;h3&gt;
  
  
  Squash merge — usually the cleanest for AI-generated work
&lt;/h3&gt;

&lt;p&gt;Inside the worktree, the agent may have made several exploratory commits. Those process commits usually do not need to live forever in main history. Squash merge compresses everything into one clean commit. On GitHub you can choose “Squash and merge”, or in the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git merge &lt;span class="nt"&gt;--squash&lt;/span&gt; feature/auth
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"feat: add auth middleware"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Rebase merge — when you want perfectly linear history
&lt;/h3&gt;

&lt;p&gt;This rebases the worktree branch on top of main and then fast-forwards it in. It is useful when the commits are already clean and meaningful on their own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Inside the worktree (or use master instead of main if needed)&lt;/span&gt;
git rebase main

&lt;span class="c"&gt;# Back in the main worktree&lt;/span&gt;
git checkout main
git merge feature/auth &lt;span class="nt"&gt;--ff-only&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Merge commit — when you want to preserve branch history
&lt;/h3&gt;

&lt;p&gt;This creates an explicit merge commit, leaving a visible record that &lt;code&gt;feature/auth&lt;/code&gt; was integrated at that point in time. It is useful for larger work units or when branch-level traceability matters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout main
git merge feature/auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For many small tasks handled by AI through harness engineering, squash merge tends to fit best. There is usually no reason to keep all the intermediate trial commits. From the perspective of the main worktree, one clean commit that says “this feature was added” is often enough.&lt;/p&gt;

&lt;p&gt;Once the merge is done, clean up the worktree.&lt;/p&gt;

&lt;h2&gt;
  
  
  Removing a worktree
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Remove the worktree&lt;/span&gt;
git worktree remove ../my-app-feature-auth

&lt;span class="c"&gt;# Delete the branch too&lt;/span&gt;
git branch &lt;span class="nt"&gt;-d&lt;/span&gt; feature/auth

&lt;span class="c"&gt;# Clean up stale worktree metadata&lt;/span&gt;
git worktree prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the worktree was created through Claude Code’s &lt;code&gt;--worktree&lt;/code&gt; option, it will automatically delete the worktree and branch when the session ends with no changes. If commits exist, Claude asks whether to keep them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to watch out for
&lt;/h2&gt;

&lt;p&gt;Do not parallelize tasks that edit the same files unless you are prepared to handle merge conflicts later. Worktree does not magically solve overlapping changes. If two agents touch the same file, the merge conflict still exists. You still need to split work along sane boundaries.&lt;/p&gt;

&lt;p&gt;Servers using the same port will also collide. If you run multiple dev servers from multiple worktrees at once, make sure they use different ports or only run one at a time.&lt;/p&gt;

&lt;p&gt;It is also worth running &lt;code&gt;git worktree prune&lt;/code&gt; periodically. If you manually delete directories, stale worktree metadata can linger and clutter the list. &lt;code&gt;git worktree prune&lt;/code&gt; cleans those invalid references up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;git worktree&lt;/code&gt; first appeared in 2015, but this may be the exact era it was waiting for. Once AI coding agents become a normal part of development, running multiple isolated workspaces in parallel stops being a niche trick and starts feeling like the default.&lt;/p&gt;

&lt;p&gt;Instead of repeatedly stashing and checking out branches, you can switch context just by changing directories. That is why &lt;code&gt;git worktree&lt;/code&gt; feels less like a neat Git feature now, and more like core infrastructure for parallel AI-assisted development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Refs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/docs/git-worktree" rel="noopener noreferrer"&gt;Git Official Documentation - git-worktree&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://superset.sh/blog/git-worktrees-history-deep-dive" rel="noopener noreferrer"&gt;Git Worktrees: The Feature That Waited a Decade for Its Moment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/common-workflows" rel="noopener noreferrer"&gt;Claude Code Common Workflows - Worktree Support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.dandoescode.com/blog/parallel-vibe-coding-with-git-worktrees" rel="noopener noreferrer"&gt;Parallel Vibe Coding: Using Git Worktrees with Claude Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://incident.io/blog/shipping-faster-with-claude-code-and-git-worktrees" rel="noopener noreferrer"&gt;How we're shipping faster with Claude Code and Git Worktrees&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@dtunai/mastering-git-worktrees-with-claude-code-for-parallel-development-workflow-41dc91e645fe" rel="noopener noreferrer"&gt;Mastering Git Worktrees with Claude Code for Parallel Development Workflow&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>git</category>
      <category>productivity</category>
    </item>
    <item>
      <title>fractional-indexing: Implementing Drag-and-Drop Ordering and Avoiding Index Collisions</title>
      <dc:creator>Kendrick B. Jung</dc:creator>
      <pubDate>Mon, 23 Mar 2026 12:46:47 +0000</pubDate>
      <link>https://dev.to/sonim1/fractional-indexing-implementing-drag-and-drop-ordering-and-avoiding-index-collisions-g3</link>
      <guid>https://dev.to/sonim1/fractional-indexing-implementing-drag-and-drop-ordering-and-avoiding-index-collisions-g3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Avoiding index collisions in sortable lists&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The limits of integer indices
&lt;/h2&gt;

&lt;p&gt;If you have ever built a drag-and-drop list, you have probably stored the order like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happens if you move &lt;code&gt;b&lt;/code&gt; to the front? &lt;code&gt;b&lt;/code&gt; becomes 0 and &lt;code&gt;a&lt;/code&gt; stays 1, so at first glance it seems fine. But now try to insert a new item between &lt;code&gt;b&lt;/code&gt; (0) and &lt;code&gt;a&lt;/code&gt; (1): there is no integer left between them, so you have to shift &lt;code&gt;a&lt;/code&gt; to 2, and possibly every item after it. In other words, changing one item often forces you to update several others.&lt;/p&gt;
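
&lt;p&gt;A quick sketch makes the cost concrete: with dense integer indices, one insert can force a rewrite of every later item. The helper below is illustrative, not from any library.&lt;/p&gt;

```javascript
// Insert an item at position i, then renumber everything after it.
// With dense integer orders this is O(n) writes for a single insert.
function insertAt(list, i, item) {
  list.splice(i, 0, item);
  list.forEach((it, idx) => { it.order = idx + 1; }); // rewrite all orders
  return list;
}

const list = [
  { id: 'a', order: 1 },
  { id: 'b', order: 2 },
  { id: 'c', order: 3 },
];
insertAt(list, 1, { id: 'new', order: 0 });
// orders are now 1, 2, 3, 4: both 'b' and 'c' had to change
```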

&lt;p&gt;In collaborative tools where multiple users can reorder items at the same time, that structure tends to create collisions. If two people modify the same part of the list concurrently, the final order can become inconsistent or trigger large update conflicts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrkm2e79e3z35g3eaoqs.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrkm2e79e3z35g3eaoqs.webp" alt="Drag-and-drop UI example with multiple items being reordered" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is fractional-indexing?
&lt;/h2&gt;

&lt;p&gt;David Greenspan introduced this approach in &lt;a href="https://observablehq.com/@dgreensp/implementing-fractional-indexing" rel="noopener noreferrer"&gt;Implementing Fractional Indexing&lt;/a&gt;. The core idea is simple: instead of using integers for order, use &lt;strong&gt;sortable string keys&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a0"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a1"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a2"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Want to insert an item between &lt;code&gt;a1&lt;/code&gt; and &lt;code&gt;a2&lt;/code&gt;? You can generate a middle key like &lt;code&gt;a1V&lt;/code&gt;. Everything else stays unchanged.&lt;/p&gt;

&lt;p&gt;Figma uses this idea in its multiplayer editing system. It manages child-node ordering with fractional indexing, which means reordering typically updates only the moved node.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the library
&lt;/h2&gt;

&lt;p&gt;In JavaScript, you can use the &lt;a href="https://www.npmjs.com/package/fractional-indexing" rel="noopener noreferrer"&gt;&lt;code&gt;fractional-indexing&lt;/code&gt;&lt;/a&gt; package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;fractional-indexing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;generateKeyBetween&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;generateNKeysBetween&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fractional-indexing&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// First key&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;first&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generateKeyBetween&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// → 'a0'&lt;/span&gt;

&lt;span class="c1"&gt;// Insert at the beginning&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;zeroth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generateKeyBetween&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;first&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// → 'Zz'&lt;/span&gt;

&lt;span class="c1"&gt;// Insert at the end&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;second&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generateKeyBetween&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;first&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// → 'a1'&lt;/span&gt;

&lt;span class="c1"&gt;// Generate a key between two existing keys&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;third&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generateKeyBetween&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;second&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// 'a2'&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generateKeyBetween&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;second&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;third&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// → 'a1V'&lt;/span&gt;

&lt;span class="c1"&gt;// Generate multiple keys at once&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;keys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generateNKeysBetween&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;a0&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;a2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// → ['a0G', 'a0V']&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You store the key as a string in the database and sort lexicographically with &lt;code&gt;ORDER BY&lt;/code&gt;. The scheme is designed so that alphabetical order matches the intended item order.&lt;/p&gt;
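
&lt;p&gt;On the client side, the same property means a plain string comparison is enough to restore the order. A minimal sketch with made-up rows (deliberately using plain comparison rather than &lt;code&gt;localeCompare&lt;/code&gt;, since locale rules can disagree with the database):&lt;/p&gt;

```javascript
// Rows as they might come back from a database, in arbitrary order.
const rows = [
  { id: 'c', order: 'a1'  },
  { id: 'b', order: 'a0V' }, // inserted between 'a0' and 'a1' at some point
  { id: 'a', order: 'a0'  },
];

// Plain code-unit comparison, matching a byte-wise collation in the database.
rows.sort((x, y) => (x.order < y.order ? -1 : x.order > y.order ? 1 : 0));

rows.map(r => r.id); // ['a', 'b', 'c']
```

&lt;p&gt;If your database uses a locale-aware collation for the order column, switch it to a binary one so all layers agree.&lt;/p&gt;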

&lt;h2&gt;
  
  
  Other ways to manage ordering
&lt;/h2&gt;

&lt;p&gt;fractional-indexing is not the only option. There are a few common alternatives, and each comes with tradeoffs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gap strategy with integers
&lt;/h3&gt;

&lt;p&gt;This is the simplest approach. You start with generous spacing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"order"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To insert between &lt;code&gt;a&lt;/code&gt; and &lt;code&gt;b&lt;/code&gt;, you assign &lt;code&gt;order: 1500&lt;/code&gt;. It is simple and fast. The downside is that once the gaps are exhausted, you eventually need to reindex everything. If inserts keep happening in the same region, you end up with values like &lt;code&gt;1500 → 1250 → 1375 → ...&lt;/code&gt;, and a full rebalance becomes unavoidable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwhcwtykb4gpiyichzel.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwhcwtykb4gpiyichzel.webp" alt="Illustration of inserting 1.5 between 1 and 2 with the gap strategy" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Timestamp-based ordering
&lt;/h3&gt;

&lt;p&gt;Another approach is to use insertion time as the order value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;a&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;order&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;// 1700000001000&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the easiest implementation. The problem is that if two clients insert at nearly the same time, ordering becomes ambiguous. For a single-user app, that may be fine. In collaborative environments, it is usually not reliable enough.&lt;/p&gt;
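
&lt;p&gt;If you do go this route, one common mitigation is to break ties deterministically, for example with a client id suffix. The helper and names below are illustrative, not from any particular library:&lt;/p&gt;

```javascript
// Sketch: make timestamp ordering deterministic by breaking ties with a
// client id. Padding keeps lexicographic order aligned with numeric order.
function orderKey(timestampMs, clientId) {
  return `${String(timestampMs).padStart(15, '0')}:${clientId}`;
}

const a = orderKey(1700000001000, 'client-a');
const b = orderKey(1700000001000, 'client-b'); // same millisecond
// a sorts before b on every client, so concurrent inserts resolve the same way
```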

&lt;h3&gt;
  
  
  Linked list ordering
&lt;/h3&gt;

&lt;p&gt;In this model, each item points to the next item.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"next"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The nice part is that insertion only touches nearby nodes, so the update scope stays small. The downside is that reading the full order requires traversal, and you lose the convenience of a simple database &lt;code&gt;ORDER BY&lt;/code&gt;. If your service reads ordered lists frequently, query complexity can become a real cost.&lt;/p&gt;
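
&lt;p&gt;That traversal cost is easy to see in code. A minimal sketch of rebuilding the display order from &lt;code&gt;next&lt;/code&gt; pointers, assuming a well-formed, acyclic list:&lt;/p&gt;

```javascript
// Rebuild the ordered id list by walking next pointers: O(n) per read,
// and not expressible as a simple ORDER BY in the database.
function toOrderedIds(items) {
  const byId = new Map(items.map(it => [it.id, it]));
  const pointedTo = new Set(items.map(it => it.next).filter(Boolean));
  let cur = items.find(it => !pointedTo.has(it.id)); // head: nothing points to it
  const ordered = [];
  while (cur) {
    ordered.push(cur.id);
    cur = cur.next ? byId.get(cur.next) : null;
  }
  return ordered;
}

toOrderedIds([
  { id: 'a', next: 'b' },
  { id: 'b', next: 'c' },
  { id: 'c', next: null },
]); // ['a', 'b', 'c']
```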

&lt;h2&gt;
  
  
  How to choose
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Implementation complexity&lt;/th&gt;
&lt;th&gt;Collaboration safety&lt;/th&gt;
&lt;th&gt;Long-term operation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;fractional-indexing&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Rebalancing needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;linked list&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;More complex queries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;integer gaps&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Reindexing needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;timestamps&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Collision risk&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If your product involves frequent reordering or multiple users interacting with the same list, fractional-indexing is close to a practical default. For simpler single-user apps, a gap strategy with integers can still be perfectly sufficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to watch out for
&lt;/h2&gt;

&lt;p&gt;Keys can grow longer over time. If you keep generating new keys inside the same narrow interval, the string length increases. That is why long-running systems often need a &lt;strong&gt;rebalancing&lt;/strong&gt; step that periodically rewrites the ordering keys.&lt;/p&gt;

&lt;p&gt;Another important detail is consistency in string comparison. Your database, server, and client should all treat ordering the same way. If different layers compare keys differently, the rendered order can drift from the intended one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;If you manage ordering with plain integers, you eventually run into friction. fractional-indexing is a fairly elegant way to avoid that problem. It is especially worth considering when you need realtime collaboration, optimistic updates, or frequent drag-and-drop reordering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Refs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/fractional-indexing" rel="noopener noreferrer"&gt;fractional-indexing - npm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://observablehq.com/@dgreensp/implementing-fractional-indexing" rel="noopener noreferrer"&gt;Implementing Fractional Indexing - David Greenspan&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.figma.com/blog/realtime-editing-of-ordered-sequences/" rel="noopener noreferrer"&gt;Figma: Realtime Editing of Ordered Sequences&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.figma.com/blog/how-figmas-multiplayer-technology-works/" rel="noopener noreferrer"&gt;Figma: How Figma's multiplayer technology works&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Why AI Gives You a Headache: Managing Cognitive Fatigue for Developers</title>
      <dc:creator>Kendrick B. Jung</dc:creator>
      <pubDate>Wed, 18 Feb 2026 20:17:02 +0000</pubDate>
      <link>https://dev.to/sonim1/why-ai-gives-you-a-headache-managing-cognitive-fatigue-for-developers-12dg</link>
      <guid>https://dev.to/sonim1/why-ai-gives-you-a-headache-managing-cognitive-fatigue-for-developers-12dg</guid>
      <description>&lt;h2&gt;
  
  
  A New Kind of Fatigue in the AI Era
&lt;/h2&gt;

&lt;p&gt;Recently, I've been subscribing to Claude Code Max, Codex (ChatGPT Pro), and Antigravity (Google AI Pro), which has dramatically increased my workload. At some point, I started getting headaches. I first blamed lack of sleep, but then our CTO asked, unprompted, whether I was getting headaches too, and I had in fact taken Tylenol just the day before. When I talked to other heavy AI users, several said they occasionally get headaches as well, so I decided to investigate. It turns out I'm not alone: community posts asking "Does anyone get headaches when using AI? Planning and directing takes so much brainpower" are becoming common.&lt;/p&gt;

&lt;p&gt;A 2025 academic study also found that deeper engagement with GenAI doesn't reduce cognitive burden—it actually amplifies it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7s2zx0vhhm51n9hpyqq8.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7s2zx0vhhm51n9hpyqq8.webp" alt="Decision Fatigue" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI Exhausts Your Brain
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Decision Fatigue Explosion
&lt;/h3&gt;

&lt;p&gt;In traditional development, you'd spend a day diving deep into one design problem. Implementation took time, giving you the luxury of slowly making architectural decisions. AI flips this dynamic. When you can prototype three approaches in the time it previously took to build one, you must constantly make architecture-level decisions. The bottleneck shifts from "can we build this?" to "should we build this, and how?"&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Task Initiation Burden
&lt;/h3&gt;

&lt;p&gt;AI doesn't move on its own. "Remove this," "redo it," "change direction": you must constantly direct the next action. Issuing that stream of instructions is a high-intensity cognitive task that leans heavily on your brain's executive function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prompt Fatigue
&lt;/h3&gt;

&lt;p&gt;A 2025 study of 832 GenAI users found that uncertainty about how to write prompts causes emotional fatigue, while unexpected responses cause cognitive fatigue. The process of choosing words and designing context to get desired results consumes a new type of energy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Switching Costs
&lt;/h3&gt;

&lt;p&gt;Prompt writing → result review → revision instruction → re-review. This loop repeats dozens or hundreds of times daily. While AI doesn't tire from context switching, the human brain pays a transition cost each time it changes modes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Solutions That Work
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The 20-20-20 Rule
&lt;/h3&gt;

&lt;p&gt;Every 20 minutes, look at something 20 feet (6 m) away for 20 seconds. Proposed by optometrist Jeffrey Anshel in the 1990s, this rule is recommended by both the American Optometric Association (AOA) and the American Academy of Ophthalmology (AAO). A 2022 study found that following the rule for two weeks significantly reduced digital eye strain symptoms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55dwdk7nls9u3uiwicvp.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55dwdk7nls9u3uiwicvp.webp" alt="20-20-20 Rule" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I happen to have a view of the Mississauga skyline from my place, so every 20 minutes I look out at the open landscape for 20 seconds. Having a distant view to rest your eyes on makes practicing this rule much easier than trying to focus on a wall or nearby objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshy8zrp2w6qp19uyphnz.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshy8zrp2w6qp19uyphnz.webp" alt="View from the window" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Batch Prompting
&lt;/h3&gt;

&lt;p&gt;Instead of continuously micro-directing, give broad guidelines once, let AI draft the solution, then review the results in batches. This reduces the number of brain transitions. For example, tools like oh-my-claudecode's autopilot or ralplan's autonomous execution modes let you review outputs without directing every step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Intentional Downtime
&lt;/h3&gt;

&lt;p&gt;After 50 minutes of focus, you need 10 minutes away from screens entirely. This allows your brain's Default Mode Network (DMN) to activate, consolidating and organizing information—a completely different brain activity from continuously reading and judging AI outputs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Posture and Environment Check
&lt;/h3&gt;

&lt;p&gt;An easily overlooked aspect. When concentrating on AI conversations, you may unconsciously tense your neck and shoulders, leading to tension headaches. Simply positioning your monitor at eye level and maintaining at least 63cm (arm's length) from the screen makes a noticeable difference.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Key: Not "Use Less" but "Use Differently"
&lt;/h2&gt;

&lt;p&gt;The solution to AI fatigue isn't to use AI less. The key is using it with boundaries, intention, and awareness that you're not a machine.&lt;/p&gt;

&lt;p&gt;Acknowledging that productivity gains come with increased cognitive costs, and managing those costs, has become the new essential skill for developers in the AI era.&lt;/p&gt;

&lt;h2&gt;
  
  
  Refs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Siddhant Khare, "AI fatigue is real and nobody talks about it" (2025) — &lt;a href="https://siddhantkhare.com/writing/ai-fatigue-is-real" rel="noopener noreferrer"&gt;siddhantkhare.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;WarpedVisions, "The hidden cost of AI-assisted development: cognitive fatigue" (2025) — &lt;a href="https://warpedvisions.org/blog/2025/hitting-the-wall-at-ai-speed/" rel="noopener noreferrer"&gt;warpedvisions.org&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;ScienceDirect, "Fatigued by uncertainties: Exploring the cognitive and emotional costs of generative AI usage" (2025) — &lt;a href="https://www.sciencedirect.com/science/article/abs/pii/S0268401225001422" rel="noopener noreferrer"&gt;sciencedirect.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;MDPI, "Generative AI and Cognitive Challenges in Research" (2025) — &lt;a href="https://www.mdpi.com/2227-7080/13/11/486" rel="noopener noreferrer"&gt;mdpi.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Human Clarity Institute, "Cognitive Load, Fatigue &amp;amp; Decision Offloading 2025 Data Summary" — &lt;a href="https://humanclarityinstitute.com/data/ai-fatigue-decision-2025/" rel="noopener noreferrer"&gt;humanclarityinstitute.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Healthline, "20-20-20 Rule: Does It Help Prevent Digital Eyestrain?" (2025) — &lt;a href="https://www.healthline.com/health/eye-health/20-20-20-rule" rel="noopener noreferrer"&gt;healthline.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;ScienceDirect, "The effects of breaks on digital eye strain, dry eye and binocular vision: Testing the 20-20-20 rule" (2022) — &lt;a href="https://www.sciencedirect.com/science/article/pii/S1367048422001990" rel="noopener noreferrer"&gt;sciencedirect.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>health</category>
      <category>devrel</category>
    </item>
  </channel>
</rss>
