<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adam Poulemanos</title>
    <description>The latest articles on DEV Community by Adam Poulemanos (@adam-knitli).</description>
    <link>https://dev.to/adam-knitli</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1363010%2Fae846a70-0b57-4779-ba67-02e076a34242.png</url>
      <title>DEV Community: Adam Poulemanos</title>
      <link>https://dev.to/adam-knitli</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adam-knitli"/>
    <language>en</language>
    <item>
      <title>AI Killed One Open Source Business Model; Not Open Source</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Tue, 13 Jan 2026 19:06:34 +0000</pubDate>
      <link>https://dev.to/adam-knitli/ai-killed-one-open-source-business-model-not-open-source-2mp0</link>
      <guid>https://dev.to/adam-knitli/ai-killed-one-open-source-business-model-not-open-source-2mp0</guid>
      <description>&lt;p&gt;The developer world panicked this week when Tailwind CSS—one of the most popular open source projects with 75M monthly downloads—announced an 80% revenue drop and questioned whether their business model can survive the AI era.&lt;/p&gt;

&lt;p&gt;As someone building an open source company right now (&lt;a href="https://knitli.com" rel="noopener noreferrer"&gt;Knitli&lt;/a&gt;), this hit close to home. I needed to know: is this the beginning of the end for open source, or something else entirely?&lt;/p&gt;

&lt;p&gt;I spent the weekend digging into the data. Here's what I found: &lt;strong&gt;AI killed one specific business model. Meanwhile, other open source companies are having their best years ever.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Winners Are Winning BIG
&lt;/h2&gt;

&lt;p&gt;While Tailwind struggled, here's what happened to other open source companies over the past 12-18 months:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Databricks&lt;/strong&gt;: Raised $10B at $62B valuation in Dec 2024, then another $4B at $134B in Dec 2025. Revenue hit $4.8B with 55% YoY growth.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vercel&lt;/strong&gt; (Next.js): Reached $9.3B valuation with 82% ARR growth. Their AI tool v0 attracted 3.5M users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supabase&lt;/strong&gt;: Hit $2B valuation with revenue exploding from $20M to $70M—that's 250% growth.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporal&lt;/strong&gt;: $2.5B valuation with 4.4x revenue growth in 18 months.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hugging Face&lt;/strong&gt;: Maintained $4.5B valuation, revenue grew from $70M to $130M.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HashiCorp&lt;/strong&gt;: Sold to IBM for $6.4B.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostHog&lt;/strong&gt;: Just hit unicorn status at ~$1.2B valuation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't small wins. These are &lt;strong&gt;record-breaking valuations happening RIGHT NOW&lt;/strong&gt;, during the supposed "death of open source."&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Changed?
&lt;/h2&gt;

&lt;p&gt;The difference isn't that Tailwind makes bad software. It's that &lt;strong&gt;AI fundamentally changed how people consume documentation-heavy products.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Tailwind's Model vs. AI
&lt;/h3&gt;

&lt;p&gt;The Tailwind model:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create amazing documentation&lt;/li&gt;
&lt;li&gt;People visit your website&lt;/li&gt;
&lt;li&gt;Convert some percentage to paid customers (UI Kit, Catalyst, etc.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The content itself drives the business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;: AI doesn't visit websites and doesn't click "buy." It caches documentation, extracts what it needs, and generates code without ever hitting your servers. The traffic disappears. The conversion funnel breaks.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Winners' Models
&lt;/h3&gt;

&lt;p&gt;The thriving companies? They're selling &lt;strong&gt;cloud services and enterprise contracts&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Databricks sells data processing&lt;/li&gt;
&lt;li&gt;Vercel sells deployment infrastructure&lt;/li&gt;
&lt;li&gt;Supabase sells hosted Postgres&lt;/li&gt;
&lt;li&gt;Temporal sells workflow orchestration&lt;/li&gt;
&lt;li&gt;Hugging Face sells model hosting and inference&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are services AI can't cache away. When a developer needs to deploy something or process actual data, &lt;strong&gt;they have to connect to the real service.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If your business model was "great documentation drives traffic drives sales," AI just killed your funnel. If your business model is "use our OSS locally, pay us to run it in production," you're probably having your best year ever.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Hidden Problem: Contribution Quality
&lt;/h2&gt;

&lt;p&gt;There's a second issue that's less talked about: &lt;strong&gt;AI is flooding popular open source projects with low-quality contributions.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yesterday, I checked GitHub's &lt;a href="https://github.com/github/spec-kit" rel="noopener noreferrer"&gt;Spec-Kit repo&lt;/a&gt;—a collection of markdown files and shell scripts to guide AI through spec-writing. Simple but effective, with ~62,000 stars.&lt;/p&gt;

&lt;p&gt;One thing caught my eye: &lt;strong&gt;507 open issues.&lt;/strong&gt; For a repo with 14 markdown files and a handful of scripts.&lt;/p&gt;

&lt;p&gt;I dove in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most issues are low-effort contributions or feature requests (only 10% are actual bugs)&lt;/li&gt;
&lt;li&gt;The majority aren't issues at all—they're people asking questions already covered in docs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/github/spec-kit/blob/main/CONTRIBUTING.md#ai-contributions-in-spec-kit" rel="noopener noreferrer"&gt;One third of the Contributing guidelines&lt;/a&gt; are dedicated to reducing low-quality AI contributions&lt;/li&gt;
&lt;li&gt;73 open pull requests, mostly simple copy-paste additions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem isn't really AI-generated contributions themselves—it's people unfamiliar with repository norms who can't be bothered to read the docs. A flood of "vibe coders" drowning out the signal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Good Examples Already Exist
&lt;/h3&gt;

&lt;p&gt;Many successful projects don't rely on mass contributions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Django&lt;/strong&gt; requires a Contributor License Agreement (CLA)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caddy&lt;/strong&gt; sets a high bar for contributions without being exclusionary&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostHog&lt;/strong&gt;, &lt;strong&gt;Plausible Analytics&lt;/strong&gt;, and others have clear guidelines: "contributions are welcome if they meet our standards and align with our goals"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The message is shifting from "you build it, they come, everyone wins" to &lt;strong&gt;"build in public, keep it open for good players, protect yourself and your time."&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Licensing Question
&lt;/h2&gt;

&lt;p&gt;This brings me to something I'm personally wrestling with right now.&lt;/p&gt;

&lt;p&gt;I'm building Knitli, which includes &lt;a href="https://github.com/knitli/codeweaver" rel="noopener noreferrer"&gt;CodeWeaver&lt;/a&gt; (semantic code search) and &lt;a href="https://github.com/knitli/thread" rel="noopener noreferrer"&gt;Thread&lt;/a&gt; (real-time codebase intelligence). Thread is AGPL-3.0. CodeWeaver is currently MIT OR Apache-2.0.&lt;/p&gt;

&lt;h3&gt;
  
  
  I'm about 90% decided to move CodeWeaver to AGPL-3.0
&lt;/h3&gt;

&lt;p&gt;Here's why the calculus has changed:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The old way:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MIT/Apache-2.0 were defaults for open source&lt;/li&gt;
&lt;li&gt;Permissive licenses drove adoption&lt;/li&gt;
&lt;li&gt;Get mass adoption, monetize later&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The new reality:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI companies train on your permissively licensed code&lt;/li&gt;
&lt;li&gt;Large players fork and wrap your work&lt;/li&gt;
&lt;li&gt;You have limited options to build a sustainable business&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What I'm seeing now:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Big growth in projects with commercial intent choosing AGPL-3.0 (&lt;a href="https://github.com/firecrawl/firecrawl" rel="noopener noreferrer"&gt;Firecrawl&lt;/a&gt;, &lt;a href="https://github.com/opendatalab/MinerU" rel="noopener noreferrer"&gt;MinerU&lt;/a&gt;, &lt;a href="https://github.com/daytonaio/daytona" rel="noopener noreferrer"&gt;Daytona&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Projects with many stars adopting "fair-code" (not open source) licenses (&lt;a href="https://github.com/n8n-io/n8n" rel="noopener noreferrer"&gt;n8n&lt;/a&gt;, &lt;a href="https://github.com/OpenHands/OpenHands" rel="noopener noreferrer"&gt;OpenHands&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Growing use of commons clause additions (&lt;a href="https://github.com/dokploy/dokploy" rel="noopener noreferrer"&gt;dokploy&lt;/a&gt;, &lt;a href="https://github.com/DavidHDev/react-bits" rel="noopener noreferrer"&gt;ReactBits&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;More fully proprietary code in public repositories&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;I expect these trends to accelerate. We'll see more "open code, closed source" projects soon.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Reasoning for CodeWeaver
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;It's a developer tool. Most developers can use or modify it without contributing back. AGPL-3.0 doesn't block that.&lt;/li&gt;
&lt;li&gt;The main risk is AI companies using CodeWeaver as a feature without contributing back. AGPL-3.0 limits that risk.&lt;/li&gt;
&lt;li&gt;AGPL-3.0 is still very much open source—it just requires anyone who modifies and hosts it to share their changes, keeping the ecosystem healthy.&lt;/li&gt;
&lt;li&gt;It protects against large players taking core ideas and building proprietary versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The decision is harder for CodeWeaver than Thread. Thread is infrastructure—it always needed strong copyleft protection. CodeWeaver is more of a developer tool, where permissive licenses traditionally made sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Maintainers
&lt;/h2&gt;

&lt;p&gt;If you're maintaining an open source project as a business, the rules have changed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Re-evaluate your licensing strategy.&lt;/strong&gt; Permissive licenses that once drove adoption may no longer protect your business. Consider AGPL-3.0 or a fair-code license that aligns with your goals. Don't let philosophy override practicality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tighten contribution guidelines.&lt;/strong&gt; Make low-effort contributions difficult. Set clear standards, require CLAs, and be prepared to reject contributions that don't meet your criteria.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Focus on building services, not just code.&lt;/strong&gt; If your business relies on documentation-driven traffic, pivot. Offer hosted services, enterprise features, or other value AI can't replicate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transparency still matters.&lt;/strong&gt; Even with restrictive licenses or proprietary code, keeping your repository public builds trust.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Watching Tailwind struggle while Databricks raises $14B has clarified something:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open source isn't dying. One business model is dying. The companies that adapt are thriving.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm still figuring this out in real time. But the data are clear: if you're building open source software as a business, the path forward isn't about abandoning open source—it's about choosing the right model and protecting what you build.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Adam, founder of &lt;a href="https://knitli.com" rel="noopener noreferrer"&gt;Knitli&lt;/a&gt;. I'm building AI context management tools and figuring out open source business models in real time. If you're wrestling with similar questions, I'd love to hear from you on &lt;a href="https://bsky.app/profile/adam.knitli.com" rel="noopener noreferrer"&gt;Bluesky&lt;/a&gt;, &lt;a href="https://twitter.com/SpyToFounder" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, or &lt;a href="https://www.linkedin.com/in/adampoulemanos/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://github.com/knitli/codeweaver" rel="noopener noreferrer"&gt;CodeWeaver&lt;/a&gt; is in alpha and I need users! Try it out, break it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This post originally appeared on the Knitli blog at &lt;a href="https://blog.knitli.com/ai-killed-one-open-sorce-business-model-not-open-source/" rel="noopener noreferrer"&gt;https://blog.knitli.com/ai-killed-one-open-sorce-business-model-not-open-source/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>opensource</category>
      <category>discuss</category>
    </item>
    <item>
      <title>When it's all you have, bad code review is better than no code review.

... but please don't let this be the future.</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Mon, 12 Jan 2026 18:15:42 +0000</pubDate>
      <link>https://dev.to/adam-knitli/when-its-all-you-have-bad-code-review-is-better-than-no-code-review-but-please-dont-let-5enb</link>
      <guid>https://dev.to/adam-knitli/when-its-all-you-have-bad-code-review-is-better-than-no-code-review-but-please-dont-let-5enb</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/adam-knitli" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1363010%2Fae846a70-0b57-4779-ba67-02e076a34242.png" alt="adam-knitli"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/adam-knitli/i-let-four-ai-code-reviews-fight-over-my-prs-17je" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;I let four AI code reviewers fight over my PRs&lt;/h2&gt;
      &lt;h3&gt;Adam Poulemanos ・ Jan 12&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#vibecoding&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#coding&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#githubcopilot&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>coding</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>I let four AI code reviewers fight over my PRs</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Mon, 12 Jan 2026 18:11:08 +0000</pubDate>
      <link>https://dev.to/adam-knitli/i-let-four-ai-code-reviews-fight-over-my-prs-17je</link>
      <guid>https://dev.to/adam-knitli/i-let-four-ai-code-reviews-fight-over-my-prs-17je</guid>
      <description>&lt;h2&gt;
  
  
  AI code reviewers are annoying
&lt;/h2&gt;

&lt;p&gt;Many developers complain about AI code review. It's noisy. It's repetitive. It misunderstands context. It suggests bad patterns. It never shuts up.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Four&lt;/em&gt; AI systems review every push I make, &lt;strong&gt;and they will not shut up.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;GitHub Copilot. Sourcery. GitHub Code Quality. Claude. All via CI. Every push.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Every pull request becomes a battlefield of automated opinions. &lt;strong&gt;My most recent architectural PR had 72 comments. I'm the only human&lt;/strong&gt; participating.&lt;/p&gt;

&lt;p&gt;This is annoying. I'm going to tell you why I do it anyway.&lt;/p&gt;

&lt;h3&gt;
  
  
  The solo developer's dilemma
&lt;/h3&gt;

&lt;p&gt;I'm building &lt;a href="https://github.com/knitli/codeweaver" rel="noopener noreferrer"&gt;CodeWeaver&lt;/a&gt;, an open-source MCP server for semantic code search. It's just me. No co-founder, no team, no code review buddy.&lt;/p&gt;

&lt;p&gt;This is a problem. Not because I need validation (though that's nice), but because code review serves a specific function: it creates friction. Someone else looks at your work and asks "why?" before it ships. That friction catches bugs. It surfaces assumptions. It forces you to defend decisions you might have made on autopilot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When you're solo, that friction disappears.&lt;/strong&gt; You write code, you merge code. Nobody asks questions. Your bad habits calcify into architectural decisions.&lt;/p&gt;

&lt;p&gt;So I outsourced the friction to robots.&lt;/p&gt;

&lt;h3&gt;
  
  
  What AI code review actually looks like
&lt;/h3&gt;

&lt;p&gt;Let me be clear about what I'm dealing with here.&lt;/p&gt;

&lt;p&gt;On a recent PR introducing daemon architecture (17 commits, 2000 lines changed), I got 37 comments from Copilot &lt;em&gt;alone&lt;/em&gt;. Add in the other reviewers and we're at &lt;strong&gt;60 line-level comments plus 12 general PR reviews&lt;/strong&gt;. One PR.&lt;/p&gt;

&lt;p&gt;Most of them are about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Unused imports and variables&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Empty except blocks missing explanatory comments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Health check loops that sleep before checking instead of after&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parameter names that could be "clearer"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The same suggestions, repeated by three different reviewers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's a lot. Most of it is noise. The signal-to-noise ratio is genuinely terrible.&lt;/p&gt;

&lt;p&gt;But here's the thing about noise: it takes two seconds to dismiss. I scan a comment, think "no, that's intentional," and move on. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The cognitive overhead is low once you accept that most suggestions won't matter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  The .variable vs .value war
&lt;/h3&gt;

&lt;p&gt;I have a custom enum pattern in CodeWeaver. Instead of accessing enum values with &lt;code&gt;.value&lt;/code&gt; (Python's default), I use a custom &lt;code&gt;.variable&lt;/code&gt; property on my &lt;code&gt;BaseEnum&lt;/code&gt; class. There are good reasons for this — it gives me more control over serialization and string representation and not all &lt;code&gt;BaseEnum&lt;/code&gt;s are strings.&lt;/p&gt;
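
&lt;p&gt;For the curious, here's a minimal sketch of the pattern. The real &lt;code&gt;BaseEnum&lt;/code&gt; is more involved; the &lt;code&gt;Example&lt;/code&gt; enum and the normalization rule below are invented for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
from enum import Enum

class BaseEnum(Enum):
    """Sketch of a base enum with a custom accessor (illustrative only)."""

    @property
    def variable(self):
        # Unlike .value, .variable can normalize on the way out,
        # keeping serialization logic in one place.
        value = self.value
        return value.strip().lower() if isinstance(value, str) else value

class Example(BaseEnum):
    RED = "Red"
    WEIGHT = 10  # not all members are strings

# Example.RED.value       -&gt; "Red"
# Example.RED.variable    -&gt; "red"
# Example.WEIGHT.variable -&gt; 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;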

&lt;p&gt;Copilot flags this. Every time. In one PR, it flagged the same &lt;code&gt;.variable&lt;/code&gt; → &lt;code&gt;.value&lt;/code&gt; suggestion ten times across different files.&lt;/p&gt;

&lt;p&gt;"Consider using .value for consistency with Python enum conventions."&lt;/p&gt;

&lt;p&gt;This is maddening. It's an intentional design decision. I've made it. I'm committed to it. &lt;code&gt;BaseEnum&lt;/code&gt; actually has a &lt;code&gt;.variable&lt;/code&gt; property — Copilot just doesn't know about my custom base class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first time I saw this comment, it made me stop and think. Why am I using &lt;code&gt;.variable&lt;/code&gt;? Is there actually a good reason? I spent a few minutes writing up the justification in my head. Turned out yes, there was a good reason. But I hadn't consciously articulated it until the robot asked.&lt;/p&gt;

&lt;p&gt;That's the value buried in the annoyance. Being forced to defend a decision — even to a machine that won't understand your defense — clarifies your own thinking.&lt;/p&gt;

&lt;h3&gt;
  
  
  The catches that matter
&lt;/h3&gt;

&lt;p&gt;Buried in the noise, real bugs surface.&lt;/p&gt;

&lt;p&gt;In the daemon PR I mentioned, I had a stop command that was supposed to kill the daemon process. The code was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getpid&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SIGTERM&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That kills &lt;em&gt;the CLI process&lt;/em&gt; itself, not the daemon. Multiple AI reviewers caught this. I'd have shipped a command that literally did nothing useful.&lt;/p&gt;
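
&lt;p&gt;The fix is to signal the daemon's PID rather than your own. A minimal sketch, assuming the daemon writes its PID to a pidfile at startup (the path here is invented):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
import os
import signal
from pathlib import Path

PID_FILE = Path.home() / ".codeweaver" / "daemon.pid"  # hypothetical location

def stop_daemon() -&gt; None:
    # Signal the *daemon's* process, not the CLI process running this code.
    pid = int(PID_FILE.read_text().strip())
    os.kill(pid, signal.SIGTERM)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;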

&lt;p&gt;GitHub Code Quality caught me using &lt;code&gt;asyncio.suppress(asyncio.CancelledError)&lt;/code&gt; — which doesn't exist. The correct form is &lt;code&gt;contextlib.suppress()&lt;/code&gt;. I use the latter all the time, but an AI assistant suggested the former and I missed it in my review. Would have caused a runtime error. &lt;strong&gt;My AI reviewers caught my AI developers' mistakes&lt;/strong&gt;.&lt;/p&gt;
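
&lt;p&gt;For reference, the working form looks like this (&lt;code&gt;contextlib.suppress&lt;/code&gt; is real; the task-cancellation setting is just an illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
import asyncio
import contextlib

async def shutdown(task: asyncio.Task) -&gt; None:
    task.cancel()
    # contextlib.suppress, not asyncio.suppress -- asyncio has no such helper.
    with contextlib.suppress(asyncio.CancelledError):
        await task
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;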

&lt;p&gt;Copilot flagged an inverted conditional:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;project&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;()):&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The logic was backwards. I'd have been showing warnings when I shouldn't and staying silent when I should have warned.&lt;/p&gt;
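
&lt;p&gt;One plausible corrected form, assuming the intent was to warn whenever the project path is unset or invalid, simply drops the outer &lt;code&gt;not&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
from pathlib import Path

def should_warn(project) -&gt; bool:
    # Warn when the project is unset, not a Path, or missing on disk.
    return not project or not isinstance(project, Path) or not project.exists()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;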

&lt;p&gt;Sourcery caught that my systemd service file generator wasn't quoting paths. Any user with a space in their home directory would have had a broken service file.&lt;/p&gt;
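
&lt;p&gt;The standard-library fix is small: quote anything user-derived before interpolating it into the unit file. A sketch (the template line is invented; systemd's quoting rules only approximate shell quoting, but &lt;code&gt;shlex.quote&lt;/code&gt; covers the space-in-path case):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
import shlex
from pathlib import Path

def exec_start_line(binary: Path, config: Path) -&gt; str:
    # Quoted paths survive spaces in the user's home directory.
    return f"ExecStart={shlex.quote(str(binary))} --config {shlex.quote(str(config))}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;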

&lt;p&gt;In another PR, Copilot found a format string error — &lt;code&gt;%r% s%r&lt;/code&gt; instead of &lt;code&gt;%r, %s, %r&lt;/code&gt; — and a type error where I was dividing a boolean by an integer instead of dividing lengths.&lt;/p&gt;

&lt;p&gt;None of these were catastrophic. All of them would have wasted my time later. Some would have reached users before I noticed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why multiple reviewers?
&lt;/h3&gt;

&lt;p&gt;If AI code review is annoying, why run four of them?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coverage.&lt;/strong&gt; They catch different things.&lt;/p&gt;

&lt;p&gt;Looking at the actual data, each reviewer has a distinct personality:&lt;/p&gt;

&lt;p&gt;Copilot generates the highest volume (37 comments on one PR) but has the lowest signal-to-noise ratio. About 55% of suggestions get implemented. It's good at catching unused code, potential AttributeErrors, and logic issues. It's also the most repetitive — it'll flag the same pattern ten times if it appears ten times.&lt;/p&gt;

&lt;p&gt;Sourcery is the opposite: fewer comments (12 on that same PR), but nearly all of them matter. About 90% implementation rate. It catches security issues like path quoting, architecture problems, and it tracks which suggestions you've addressed with a "✅ Addressed" marker, which is nice.&lt;/p&gt;

&lt;p&gt;GitHub Code Quality goes deep on static analysis. It caught the &lt;code&gt;asyncio.suppress&lt;/code&gt; misuse that no other reviewer flagged. It also provides autofix capabilities — click a button and the fix is applied.&lt;/p&gt;

&lt;p&gt;Claude (via CI) writes comprehensive architectural reviews. Good for cross-cutting concerns. The downside: I had 11 nearly identical reviews on one PR because my CI workflow triggers it multiple times. That's a configuration problem, not a Claude problem.&lt;/p&gt;

&lt;p&gt;The enum &lt;code&gt;.variable&lt;/code&gt; thing gets flagged by Copilot every time. The &lt;code&gt;asyncio.suppress&lt;/code&gt; bug? Only Code Quality caught that one.&lt;/p&gt;

&lt;p&gt;Running multiple reviewers also creates a consensus signal. When all of them flag the same thing, it's probably worth a closer look.&lt;/p&gt;

&lt;h3&gt;
  
  
  This isn't for everyone
&lt;/h3&gt;

&lt;p&gt;I want to be honest: this approach requires a specific temperament.&lt;/p&gt;

&lt;p&gt;You have to be okay with noise. Lots of it. If seeing "wrong" suggestions irritates you, this will drive you crazy. You need to be able to scan, dismiss, and move on without getting emotionally hooked by bad advice.&lt;/p&gt;

&lt;p&gt;You also have to be solo (or nearly solo). On a team, AI code review creates a different dynamic. Human reviewers might feel their feedback is redundant. The comment threads become unreadable. The value proposition changes entirely.&lt;/p&gt;

&lt;p&gt;And you have to accept that you're trading time for coverage. Reading through 60 comments takes time, even if most are quick dismissals. That's time you could spend actually coding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For me, the trade is worth it.&lt;/strong&gt; I don't have teammates to catch my mistakes. The AI reviewers are bad teammates, but they're the teammates I have.&lt;/p&gt;

&lt;h3&gt;
  
  
  The real point
&lt;/h3&gt;

&lt;p&gt;Here's what I've learned from this experiment: AI code review isn't good. The tools are noisy, repetitive, and often wrong about intent. Developers who find it annoying are correct.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;But "not good" isn't the same as "not useful."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you're a solo developer, your options are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;No code review at all&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Annoying, imperfect, robotic code review&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I pick option two. Not because it's good, but because it's better than nothing. It creates friction where there would otherwise be none. It catches some bugs that would otherwise ship. It forces me to articulate decisions I might otherwise make unconsciously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is this the future of code review? God, I hope not.&lt;/strong&gt; Human reviewers who understand context and intent will always be better than pattern-matching robots.&lt;/p&gt;

&lt;p&gt;But until I have those human reviewers, I'll keep letting the robots fight over my PRs.&lt;/p&gt;

&lt;p&gt;I'm building &lt;a href="https://github.com/knitli/codeweaver" rel="noopener noreferrer"&gt;CodeWeaver&lt;/a&gt;, semantic code search for AI agents. If you want to see what 60+ AI comments on a PR look like, &lt;a href="https://github.com/knitli/codeweaver/pull/184" rel="noopener noreferrer"&gt;check out PR #184&lt;/a&gt;. It's not pretty, but it works.&lt;/p&gt;




&lt;p&gt;Post originally appeared on the Knitli blog at &lt;a href="https://blog.knitli.com/i-let-four-ai-code-reviewers-fight-over-my-prs/" rel="noopener noreferrer"&gt;https://blog.knitli.com/i-let-four-ai-code-reviewers-fight-over-my-prs/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>coding</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>A beginner's level introduction to context engineering and context management.</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Tue, 06 Jan 2026 16:02:13 +0000</pubDate>
      <link>https://dev.to/adam-knitli/a-beginners-level-introduction-to-context-engineering-and-context-management-1dc</link>
      <guid>https://dev.to/adam-knitli/a-beginners-level-introduction-to-context-engineering-and-context-management-1dc</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/knitli" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F11638%2F17be09da-9c27-4359-aba9-e9fa88a0e3ca.png" alt="Knitli" width="768" height="769"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1363010%2Fae846a70-0b57-4779-ba67-02e076a34242.png" alt="" width="768" height="769"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/knitli/context-engineering-how-we-work-around-the-goldfish-problem-252i" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Context Engineering: How We Work Around the Goldfish Problem&lt;/h2&gt;
      &lt;h3&gt;Adam Poulemanos for Knitli ・ Jan 6&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#contextengineering&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#mcp&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#development&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>contextengineering</category>
      <category>mcp</category>
      <category>ai</category>
      <category>development</category>
    </item>
    <item>
      <title>A beginner's level introduction to context engineering and context management.</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Tue, 06 Jan 2026 15:52:04 +0000</pubDate>
      <link>https://dev.to/adam-knitli/a-beginners-level-introduction-to-context-engineering-and-context-management-a8n</link>
      <guid>https://dev.to/adam-knitli/a-beginners-level-introduction-to-context-engineering-and-context-management-a8n</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/knitli" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F11638%2F17be09da-9c27-4359-aba9-e9fa88a0e3ca.png" alt="Knitli" width="768" height="769"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1363010%2Fae846a70-0b57-4779-ba67-02e076a34242.png" alt="" width="768" height="769"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/knitli/context-engineering-how-we-work-around-the-goldfish-problem-252i" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Context Engineering: How We Work Around the Goldfish Problem&lt;/h2&gt;
      &lt;h3&gt;Adam Poulemanos for Knitli ・ Jan 6&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#contextengineering&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#mcp&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#development&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>contextengineering</category>
      <category>mcp</category>
      <category>ai</category>
      <category>development</category>
    </item>
    <item>
      <title>Context Engineering: How We Work Around the Goldfish Problem</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Tue, 06 Jan 2026 15:50:43 +0000</pubDate>
      <link>https://dev.to/knitli/context-engineering-how-we-work-around-the-goldfish-problem-252i</link>
      <guid>https://dev.to/knitli/context-engineering-how-we-work-around-the-goldfish-problem-252i</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.knitli.com/context-engineering-how-we-work-around-the-golfish-problem" rel="noopener noreferrer"&gt;blog.knitli.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  tl;dr
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Context engineering is the practice of deciding what information goes into a large language model's (LLM's) context window and when&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The dominant approach today is summarization&lt;/strong&gt;: using an LLM to compress context when the window fills up&lt;/li&gt;
&lt;li&gt;Summarization works well for some tasks but loses critical details in others, forcing agents to re-retrieve the same information repeatedly&lt;/li&gt;
&lt;li&gt;Other approaches like RAG and fine-tuning exist, each with real tradeoffs&lt;/li&gt;
&lt;li&gt;Understanding these tradeoffs helps you choose the right tools and know when to trust them&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  If Context is King, Context Engineering is Kingmaking
&lt;/h2&gt;

&lt;p&gt;In my last post, I explained that LLMs are goldfish. They can only see what fits in their context window, and they forget everything else. I also showed how context poisoning happens when you dump too much irrelevant information into that window, making it harder for the model to find what matters.&lt;/p&gt;

&lt;p&gt;So how do engineers actually deal with this? That's where context engineering comes in.&lt;/p&gt;

&lt;p&gt;Context engineering is the practice of deciding what information an LLM sees, when it sees it, and how much of it gets included. It's the difference between an AI that gives you useful answers and one that hallucinates or misses obvious details.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Summarization Approach (What Most Tools Do Today)
&lt;/h2&gt;

&lt;p&gt;Here's how most AI coding tools handle context limits:&lt;/p&gt;

&lt;p&gt;The agent starts working on your task. It reads files, makes changes, runs commands, and accumulates history. All of this fills the context window.&lt;/p&gt;

&lt;p&gt;When the window approaches its limit—usually around 95% full—the system needs to make room for more.&lt;/p&gt;

&lt;p&gt;The standard solution: call another LLM to summarize everything that's happened so far. The summarization LLM gets the entire conversation history and a prompt like "compress this to save tokens." It produces a shortened version, the system discards the original details, and the agent continues with this compressed summary as its only record of what came before.&lt;/p&gt;

&lt;p&gt;This is called "auto-compact" or "context compression" or "hierarchical summarization," but it's all the same basic idea. Claude Code does it. Cursor does it. Most agent frameworks do it.&lt;/p&gt;

&lt;p&gt;Why is this approach so common? Because it's a reasonable response to a hard constraint. Context windows are finite. Work sessions aren't. Something has to give, and summarization is cheap to implement and works surprisingly well for many tasks.&lt;/p&gt;

&lt;p&gt;But it has real limitations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Summarization Works and Where It Doesn't
&lt;/h2&gt;

&lt;p&gt;Summarization works well when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The task is mostly linear (do step A, then B, then C)&lt;/li&gt;
&lt;li&gt;Earlier details genuinely don't matter once completed&lt;/li&gt;
&lt;li&gt;The agent won't need to revisit specific information from early in the session&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Summarization struggles when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The task involves debugging or iterative refinement&lt;/li&gt;
&lt;li&gt;The agent needs to compare current state to earlier state&lt;/li&gt;
&lt;li&gt;Specific details (exact error messages, variable names, code snippets) matter more than general narrative&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a concrete example of the second case:&lt;/p&gt;

&lt;p&gt;An agent is debugging a function. It reads the function definition, identifies a bug, makes a fix, tests it, sees a new error, and reads the function again to understand the new error.&lt;/p&gt;

&lt;p&gt;Then the context window fills up. The system summarizes.&lt;/p&gt;

&lt;p&gt;The summary might say: "Fixed bug in calculate_total function, encountered new error."&lt;/p&gt;

&lt;p&gt;But it doesn't include the actual function code, the specific error message, or the change that was made. That detail is gone.&lt;/p&gt;

&lt;p&gt;Two turns later, the agent needs to understand why the new error is happening. It doesn't have the function code anymore—that got summarized away. So it re-reads the file, re-retrieving context it already had.&lt;/p&gt;

&lt;p&gt;This happens often in debugging workflows. Agents spend time and tokens re-reading information they've already seen because summarization discarded the details they need.&lt;/p&gt;

&lt;p&gt;It's like taking notes during a meeting by writing "discussed the budget" and then, when someone asks you what the actual numbers were, having to go back and re-watch the recording.&lt;/p&gt;

&lt;p&gt;The deeper problem: summarization is lossy in unpredictable ways. The LLM doing the compression has to guess what's important. Sometimes it guesses wrong. When that happens, the agent either fails or has to backtrack and reconstruct context from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Approaches and Their Tradeoffs
&lt;/h2&gt;

&lt;p&gt;Summarization dominates because it is easy and often 'good enough.' There are other approaches, each with their own pros and cons:&lt;/p&gt;

&lt;h3&gt;
  
  
  RAG: Retrieval Augmented Generation
&lt;/h3&gt;

&lt;p&gt;RAG treats your codebase (or other data) like a searchable database. It breaks everything into chunks, converts them into numerical representations called embeddings (essentially coordinates in a high-dimensional space where similar content clusters together), and stores them. When the agent needs information, it searches for relevant chunks and adds them to the context.&lt;/p&gt;
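
&lt;p&gt;Conceptually, the naive version fits in a few lines. A sketch with a toy embedding (a real system would call an embedding model instead):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
import numpy as np

def embed(text: str) -&gt; np.ndarray:
    # Toy stand-in for an embedding model: a character-frequency vector.
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec

def top_k_chunks(query: str, chunks: list[str], k: int = 5) -&gt; list[str]:
    # Naive RAG retrieval: rank chunks by cosine similarity to the query.
    q = embed(query)

    def score(chunk: str) -&gt; float:
        v = embed(chunk)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        return float(np.dot(q, v) / denom) if denom else 0.0

    return sorted(chunks, key=score, reverse=True)[:k]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;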

&lt;p&gt;&lt;strong&gt;The appeal:&lt;/strong&gt; RAG lets you work with massive codebases without loading everything at once. You retrieve only what's relevant for each query.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tradeoff:&lt;/strong&gt; The quality of RAG depends entirely on how you implement it. Naive implementations use simple similarity matching—essentially asking "which chunks of text sound most like this query?" This works okay for documentation but breaks down for code. A function definition might have low textual similarity to a query about debugging an error that function causes. Dependencies three files away don't "sound like" the immediate problem, even when they're critical to understanding it.&lt;/p&gt;

&lt;p&gt;More sophisticated RAG systems understand code structure: they know about function calls, imports, type definitions, and can traverse these relationships. This makes retrieval much more accurate but is significantly harder to build.&lt;/p&gt;

&lt;p&gt;The practical result: RAG quality varies enormously between tools. When evaluating a tool that uses RAG, the question isn't "does it use RAG" but "how smart is its retrieval?"&lt;/p&gt;

&lt;h3&gt;
  
  
  Caching: Remember What You've Already Seen
&lt;/h3&gt;

&lt;p&gt;Some systems cache frequently-accessed context so they don't have to re-retrieve or re-process it. If an agent reads the same file five times during a session, caching means you only pay the retrieval cost once.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The appeal:&lt;/strong&gt; Caching directly addresses the re-retrieval problem that summarization creates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tradeoff:&lt;/strong&gt; Caches take memory. They can become stale if files change. And deciding what to cache (and when to invalidate it) adds complexity.&lt;/p&gt;
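
&lt;p&gt;The classic shape of this idea is a cache keyed on file modification time. A sketch, not any specific tool's implementation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
from pathlib import Path

class FileCache:
    """Cache file contents; invalidate an entry when the file changes."""

    def __init__(self) -&gt; None:
        self._entries: dict[Path, tuple[float, str]] = {}

    def read(self, path: Path) -&gt; str:
        mtime = path.stat().st_mtime
        cached = self._entries.get(path)
        if cached and cached[0] == mtime:
            return cached[1]  # still fresh: skip the re-read
        text = path.read_text()
        self._entries[path] = (mtime, text)
        return text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;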

&lt;h3&gt;
  
  
  Agents: Let the Model Search for Itself
&lt;/h3&gt;

&lt;p&gt;Agent systems give the LLM tools to retrieve its own context. Instead of pre-selecting information, you let the model search files, run commands, or call APIs to find what it needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The appeal:&lt;/strong&gt; Agents can adapt. They search for what they need in the moment and course-correct based on what they find.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tradeoff:&lt;/strong&gt; Agents are slower and more expensive. Every search is another API call (called an "inference call"), which means more tokens—the basic units that AI providers charge you for—and more compute. Agents also make mistakes: they search for the wrong things, miss obvious information, or get stuck in loops. And because the model has to reason about what to retrieve at each step, the whole process uses tokens fast.&lt;/p&gt;
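
&lt;p&gt;In skeleton form, the loop looks like this. A cartoon sketch: &lt;code&gt;model&lt;/code&gt; stands in for an inference call that picks the next action, and the single &lt;code&gt;read_file&lt;/code&gt; tool is invented:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
from dataclasses import dataclass
from pathlib import Path
from typing import Callable

@dataclass
class Action:
    name: str      # e.g. "read_file" or "answer"
    argument: str

def agent_loop(model: Callable[[list[str]], Action],
               task: str, max_steps: int = 10) -&gt; str:
    context = [task]
    for _ in range(max_steps):
        action = model(context)  # every step is another paid inference call
        if action.name == "answer":
            return action.argument
        if action.name == "read_file":
            # The tool result goes straight back into the context window.
            context.append(Path(action.argument).read_text())
    return "step budget exhausted"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;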

&lt;h3&gt;
  
  
  Fine-tuning: Bake It Into the Model
&lt;/h3&gt;

&lt;p&gt;Fine-tuning means retraining the model on your specific codebase or domain so it "learns" your patterns and doesn't need them in the context window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The appeal:&lt;/strong&gt; Once fine-tuned, the model already "knows" your code. No retrieval needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tradeoff:&lt;/strong&gt; Fine-tuning is expensive and inflexible. You need GPU time, training data, and constant retraining as your codebase changes. Fine-tuned models also aren't great at specific details—they learn general patterns but still hallucinate function names or recent changes. For fast-moving projects, fine-tuning can't keep up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Challenge: Context Engineering is a Hard Problem
&lt;/h2&gt;

&lt;p&gt;Good context engineering for coding tasks requires several things that are genuinely difficult:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding code structure:&lt;/strong&gt; What depends on what? Which files matter for which tasks? How does information flow through the system? This requires parsing and analyzing code, not just treating it as text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic decision-making:&lt;/strong&gt; Different questions need different context. Understanding what a function does requires different information than debugging why it crashes, which requires different information than refactoring it for performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Precision:&lt;/strong&gt; Pulling the right information without including noise. Every irrelevant token makes it harder for the model to find what matters.&lt;/p&gt;

&lt;p&gt;Most tools make pragmatic tradeoffs here. They use approaches that are cheap to implement and work well enough for common cases, even if they break down on complex tasks. That's not incompetence—it's engineering under constraints.&lt;/p&gt;

&lt;p&gt;But it does mean that for complex, real-world work, context engineering is often the limiting factor. Not model capability. Not prompt quality. Whether the model has the right information to work with.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Costs of Poor Context Engineering
&lt;/h2&gt;

&lt;p&gt;When context engineering breaks down, the costs show up in three places:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Money:&lt;/strong&gt; Every token you process costs money. When you re-retrieve the same information repeatedly, you're paying to process those tokens over and over. When you include irrelevant context "just in case," you're paying for all of it. For teams using AI at scale, this can significantly increase infrastructure costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speed:&lt;/strong&gt; Processing large contexts takes time. The more tokens you feed the model, the longer it takes to respond. When agents have to search repeatedly for information they've already seen, tasks stretch out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliability:&lt;/strong&gt; When the model has to work with lossy summaries or sift through irrelevant information, it makes mistakes. It latches onto the wrong details, misses important nuance, or hallucinates. This is why AI coding tools sometimes confidently suggest fixes that break your code or miss bugs that are obvious if you have the right context.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Can Do About It
&lt;/h2&gt;

&lt;p&gt;If you're using AI coding tools, here are some practical things to keep in mind:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch for re-retrieval patterns.&lt;/strong&gt; If you notice an agent reading the same file multiple times in a session, that's a sign that context is being lost. Some tools handle this better than others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Match tools to tasks.&lt;/strong&gt; Summarization-based tools work fine for straightforward, linear tasks. For debugging or iterative work, look for tools with smarter context management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask about context strategy.&lt;/strong&gt; When evaluating AI coding tools, ask: How do they handle long sessions? What happens when the context window fills up? Do they use RAG, and if so, how sophisticated is the retrieval?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep sessions focused.&lt;/strong&gt; Shorter, focused sessions are less likely to hit context limits than sprawling multi-hour sessions. If you're doing complex work, sometimes starting fresh with targeted context is more effective than continuing a bloated session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provide explicit context.&lt;/strong&gt; Don't assume the tool will find what it needs. If you know a specific file or function is relevant, mention it directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I'm Trying to Fix It
&lt;/h2&gt;

&lt;p&gt;With Knitli, I'm working on context engineering that understands code structure—tracking dependencies, call graphs, repository patterns, and type relationships so retrieval is precise and adaptive rather than approximate and sweeping. My goal: assemble exactly the context each task needs, avoiding both the re-retrieval problem and context pollution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My first attempt at that is &lt;a href="https://github.com/knitli/codeweaver" rel="noopener noreferrer"&gt;CodeWeaver&lt;/a&gt;, which you can try today.&lt;/strong&gt; It's rough around the edges and doesn't achieve that goal yet, but it goes much further toward attacking the problem than existing tools. It's also fully open source and free.&lt;/p&gt;

&lt;p&gt;I'm not claiming I've solved context engineering. It's a genuinely hard problem. But I think current approaches leave a lot of room for improvement, and I'm focused on closing that gap.&lt;/p&gt;

&lt;p&gt;If you're interested in following along, you can learn more at &lt;a href="https://knitli.com" rel="noopener noreferrer"&gt;knitli.com&lt;/a&gt;, or try &lt;a href="https://github.com/knitli/codeweaver" rel="noopener noreferrer"&gt;CodeWeaver&lt;/a&gt; and get involved in making something better.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What context engineering challenges have you run into with AI coding tools? I'd love to hear your experiences in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>contextengineering</category>
      <category>mcp</category>
      <category>ai</category>
      <category>development</category>
    </item>
    <item>
      <title>From Intelligence Expert to AI Business Leader: A Surprising Path</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Thu, 09 Oct 2025 02:34:40 +0000</pubDate>
      <link>https://dev.to/adam-knitli/from-intelligence-expert-to-ai-business-leader-a-surprising-path-4gj7</link>
      <guid>https://dev.to/adam-knitli/from-intelligence-expert-to-ai-business-leader-a-surprising-path-4gj7</guid>
      <description>&lt;h2&gt;
  
  
  Tinker, Tailor, Soldier, Spy
&lt;/h2&gt;

&lt;p&gt;I've been a lifelong technologist and tinkerer. Linux CLI user for 25 years. Built a web design business at 13 (long before JavaScript existed). Even assembled a basic Linux distro from scratch once.&lt;/p&gt;

&lt;p&gt;Despite decades of solving computer problems and writing the occasional bash script, I never quite broke through to serious programming. I'd tried to learn a few times and failed - with the tools available, the learning curve was too steep to hold my interest.&lt;/p&gt;

&lt;p&gt;And I didn't work in tech. I was an 18-year intelligence officer and senior leader of a global workforce. My work was spying (well, leading it).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Philosophy That Shaped Everything
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;My entire leadership ethos came down to one principle: remove barriers so people can do their best work.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let the people in the field--the doers--do what they do best. Give them the safety and decision space to succeed. &lt;strong&gt;Make innovation feel safe, not risky.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was obsessive about plain language because anything else created uncertainty, and uncertainty breeds caution. &lt;strong&gt;When people aren't sure if they're allowed to try something, they don't try it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Intelligence is a heavily regulated 'business': law and policy are deeply linked to intelligence outcomes. I dug deep into every law and policy that could impact our work and systematically questioned everything that might be a myth. "We can't do that" barriers often turned out to be misunderstandings (or complete fabrications), not rules or laws. &lt;strong&gt;My job was finding those barriers and destroying them, along with any others that kept good people from doing good things.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This philosophy worked. We turned around failing teams, made a (relatively) small $50M program into a blockbuster that garnered White House attention -- attention that exceeded programs 100x our size and got me a couple of surprise phone calls that skipped about 8 layers of management.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Solved a Problem and Changed Everything
&lt;/h2&gt;

&lt;p&gt;Two things happened in close proximity that changed my path.&lt;/p&gt;

&lt;p&gt;First, I found ChatGPT right after its first public release, when it was already powerful enough to fuel my insatiable appetite for learning. As someone with ADHD who learns by diving deep into problems, this was transformative. ChatGPT could work with my hands-on, tinkering style and accelerate my learning process dramatically.&lt;/p&gt;

&lt;p&gt;Second, I hit a recurring data problem at work that I couldn't shake. I was managing operations for a 300+ person global organization, constantly frustrated that I couldn't get solid analysis of workforce activity and performance when all the data I needed was there. It just wasn't in a usable form. I tried to find expertise to help, but it didn't exist or wasn't available.&lt;/p&gt;

&lt;p&gt;I did what came naturally: I learned to do it myself in Python, using AI-accelerated learning to iterate and solve real problems.&lt;/p&gt;

&lt;p&gt;The limitations of AI were actually a hidden benefit. AI could help me dig deep into complex problems - like building and automating feature extraction and inference pipelines - but it couldn't dig me out of trouble or debug its own output. It forced me to understand the fundamentals, to learn to shape my questions and context for better outcomes, and to backfill the 300-ish things I'd skipped along the way.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I was hooked. The AI-driven programming loop was incredibly powerful. I was already a fast learner, but AI let me accelerate that by 3x (maybe 10x...).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I spent every spare moment coding, learning, iterating. Literally every week I'd look back at my code and think "what was I thinking? This is amateur hour stuff." Three years later, I still have that experience, though now it's more like a month or two.&lt;/p&gt;

&lt;p&gt;By February 2025, I was professionally proficient in Python and TypeScript, with some Rust under my belt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvgfhhyulqim673hmdq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvgfhhyulqim673hmdq9.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  When You Can't Lead Authentically Anymore
&lt;/h2&gt;

&lt;p&gt;Then came the crisis. The new administration issued executive orders that conflicted with my core leadership principles - the ones that had made me effective for 18 years.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I believe deeply in what Amazon calls "have backbone; disagree and commit" - if I don't like a decision, I'll tell you why and try to convince you to change it. If I fail to change your mind, I'll turn around and champion it completely.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;That's how effective organizations work, with leaders who put outcomes before themselves.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But this was different. I couldn't do that anymore without sacrificing who I was.&lt;/p&gt;

&lt;p&gt;I couldn't lead genuinely while being asked to implement policies that &lt;strong&gt;contradicted the values I'd built my career on. Transparency. Psychological safety. Empathy. Speaking truth to power.&lt;/strong&gt; The same values that made me effective at removing barriers and enabling people to do their best work.&lt;/p&gt;

&lt;p&gt;In my mind, I had to go. There was no choice. I could stay and compromise my principles - either by championing values I deeply disagreed with or by breaking my commitment to effective followership. Or I could leave, keep my principles, and use them to build something.&lt;/p&gt;

&lt;p&gt;When the administration offered its infamous "fork in the road" deferred resignation initiative - continue receiving salary and benefits until September 30th in exchange for resigning - I jumped.&lt;/p&gt;

&lt;h3&gt;
  
  
  Same Values, Different Problem
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;At the same time, I'd been watching something troubling: AI was creating a new divide.&lt;/strong&gt; The technical few who understood prompt engineering, context windows, and model selection were getting incredible value. Everyone else was struggling.&lt;/p&gt;

&lt;p&gt;All the most powerful AI tools required high technical expertise to use effectively. This bothered me. It was the same barrier problem I'd spent 18 years solving - just in a different domain. So, &lt;strong&gt;I started building Knitli with a clear mission: democratize AI for normal people. Remove the barriers between technical and non-technical users.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Then I hit the wall myself.&lt;/p&gt;

&lt;p&gt;As I was building, I kept getting frustrated by how &lt;strong&gt;AI coding tools were simultaneously powerful and really, really dumb&lt;/strong&gt;. They'd generate code that ignored my architecture, my patterns, my existing conventions, my dependencies. A quick conversation about this problem, and a lot of reflection, ballooned over a couple days into a realization: &lt;strong&gt;the fundamental issue was context.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI agents needed carefully tailored context that shifts and adapts throughout interactions. The "dump all the context" approach everyone was using made agents both ineffective and unnecessarily expensive. What was missing was a context layer - systematic, intelligent, adaptive.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The insight clicked: this was the same problem I'd been trying to solve for non-technical users, just one level deeper. &lt;strong&gt;And solving it for developers first would actually enable the broader vision later.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Building With the Same Philosophy
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Now I'm building Knitli to solve the fundamental context problem in AI-driven software development.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We're building tools that help AI agents understand codebases the way I helped my teams understand their mission space: deeply, systematically, and with enough context to be confident.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Thread&lt;/strong&gt; provides intelligent, adaptive context to AI agents so they understand your architecture, patterns, and existing code. &lt;strong&gt;CodeWeaver&lt;/strong&gt; makes code navigation and comprehension actually pleasant for humans.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My philosophy hasn't changed: remove barriers. Make complex things accessible. Use plain language. Question assumptions about what's possible. Enable people--and AI--to do their best work without friction.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The vision evolved, but the values stayed constant.&lt;/p&gt;

&lt;p&gt;The journey from intelligence executive to AI entrepreneur wasn't planned. But it makes perfect sense in hindsight. I'm solving the same problems I've always solved - just in a different domain, with the freedom to practice my values without compromise.&lt;/p&gt;

&lt;p&gt;Sometimes the best solutions come from understanding problems from multiple angles. And sometimes you need uncomfortable realities to force the leap.&lt;/p&gt;




&lt;p&gt;(I'm not immune to the problems I'm trying to solve, either. I had to completely scrap the first versions of Thread &lt;em&gt;and&lt;/em&gt; CodeWeaver. Building at AI speed produced code that would have been impossible to maintain. I now assign AI agents much smaller tasks while I deliberately shape their context to get what I need from them -- a process Thread and CodeWeaver hope to eventually automate.)&lt;/p&gt;




&lt;p&gt;I'd like to hear about similar decisions other people made - &lt;strong&gt;when did you realize the only way forward was to do something completely different?&lt;/strong&gt;&lt;/p&gt;


</description>
      <category>founderstory</category>
      <category>founder</category>
      <category>founderjourney</category>
      <category>contextengineering</category>
    </item>
    <item>
      <title>Tree-Sitter Grammars Explained: Leveraging Data for Clarity</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Mon, 06 Oct 2025 14:22:20 +0000</pubDate>
      <link>https://dev.to/knitli/tree-sitter-grammars-explained-leveraging-data-for-clarity-3hgf</link>
      <guid>https://dev.to/knitli/tree-sitter-grammars-explained-leveraging-data-for-clarity-3hgf</guid>
      <description>&lt;h2&gt;
  
  
  How a Week of Jargon and 25 Languages Resulted in Creating the Parser I Needed
&lt;/h2&gt;




&lt;h4&gt;
  
  
  Clarity Engineering
&lt;/h4&gt;




&lt;h2&gt;
  
  
  TL;DR: If You're Here Because Tree-sitter's &lt;code&gt;node-types.json&lt;/code&gt; Makes No Sense
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;You're not alone.&lt;/strong&gt; Tree-sitter's terminology is confusing because it evolved from internal implementation details, not developer clarity.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Core Problems
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;"Named" doesn't mean "has a name"&lt;/strong&gt; (everything has a name). It means "corresponds to a named grammar rule" an internal detail that's noise for most use cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Fields" and "children" are both parent-child relationships&lt;/strong&gt; but the distinction is unclear. Fields are semantic ("this node's &lt;em&gt;condition&lt;/em&gt;"), children are positional ("this node's first child").&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Everything is a "type":&lt;/strong&gt; Nodes, edges, and abstract categories all use the same terminology, obscuring the differences that matter (see the sketch after this list).&lt;/li&gt;
&lt;/ul&gt;
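
&lt;p&gt;To make those distinctions concrete, here's a minimal sketch using the Python bindings. It assumes recent &lt;code&gt;tree-sitter&lt;/code&gt; (0.22+) and &lt;code&gt;tree-sitter-python&lt;/code&gt; packages are installed; constructor signatures have shifted across versions, so treat it as illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import tree_sitter_python as tspython
from tree_sitter import Language, Parser

# Load the Python grammar and parse a tiny snippet.
PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)
tree = parser.parse(b"if x == 1:\n    pass\n")
if_node = tree.root_node.children[0]

# "Named" means "backed by a named grammar rule," not "has a name":
print(if_node.type, if_node.is_named)    # if_statement True
print(if_node.children[0].type,
      if_node.children[0].is_named)      # if False (an anonymous keyword)

# Fields are semantic edges; children are positional:
condition = if_node.child_by_field_name("condition")
print(condition.type)                    # comparison_operator
print([child.type for child in if_node.children])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running it makes the confusion obvious: the &lt;code&gt;if&lt;/code&gt; keyword plainly "has a name," yet &lt;code&gt;is_named&lt;/code&gt; is &lt;code&gt;False&lt;/code&gt;, while the condition is reachable both semantically (by field) and positionally (as a child).&lt;/p&gt;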




&lt;p&gt;This post uses formatting unsupported by dev.to; please read the rest at &lt;a href="https://blog.knitli.com/tree-sitter-grammars-explained-leveraging-data-for-clarity" rel="noopener noreferrer"&gt;our website&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>clarityengineering</category>
      <category>codeweaver</category>
      <category>astgrep</category>
      <category>contextengineering</category>
    </item>
    <item>
      <title>Context and Context Windows: What You Need to Know</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Fri, 26 Sep 2025 18:17:18 +0000</pubDate>
      <link>https://dev.to/knitli/context-and-context-windows-what-you-need-to-know-h4k</link>
      <guid>https://dev.to/knitli/context-and-context-windows-what-you-need-to-know-h4k</guid>
      <description>&lt;h2&gt;
  
  
  Why Your AI is a Goldfish
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Part 2 of Knitli's 101 introductions to AI and the economics of AI&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  tl;dr
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Large language models (LLMs) use a fixed-size context window to process input and generate responses, but they don't have memory like humans.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The context window contains all the information the model can consider at once, and when it overflows, older information is lost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LLMs are trained on outdated data, leading to a preference for older information and potential hallucinations when asked about unknown topics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Context management is crucial, as including too much or irrelevant information can hinder response accuracy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Engineers use techniques like prioritizing recent information and filtering out irrelevant details to manage context effectively.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  LLMs and Their 'Memories'
&lt;/h2&gt;

&lt;p&gt;When people talk about 'AI' today, they usually mean ChatGPT, Claude, or Gemini. These tools all use &lt;strong&gt;large language models (LLMs)&lt;/strong&gt;. LLMs consist of billions of &lt;em&gt;parameters&lt;/em&gt; -- think of each parameter as a number in a massive mathematical equation. The model combines these parameters with your input to generate responses. It's a huge statistical machine: predicting the most &lt;em&gt;likely&lt;/em&gt; output based on its training parameters and the context you gave it (intentionally or otherwise).&lt;/p&gt;

&lt;h3&gt;
  
  
  The Context Window 'Container'
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;LLMs don't remember like humans do (or at all).&lt;/strong&gt; Instead, they work with a fixed-size container called a &lt;strong&gt;context window&lt;/strong&gt;. Everything you send the model (every word, file, or bit of data) &lt;em&gt;and&lt;/em&gt; all of the model's previous responses fill this container. When the container overflows, the oldest information disappears. The model can no longer see it, even if you can still see it on your screen.&lt;/p&gt;

&lt;p&gt;Think of it this way: &lt;strong&gt;the context window contains &lt;em&gt;all&lt;/em&gt; the information the LLM can consider at once.&lt;/strong&gt; Any information not in the window, or that doesn't fit in it, doesn't exist to the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Time Bias: Why LLMs Live in the Past
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The context window is the only way to provide LLMs with recent or specific information.&lt;/strong&gt; Companies train these models on huge datasets, but collecting and processing this data takes years. Most of the training data is 2-3 years old or even older. A model might say it was trained up to a month or two ago, but recent information is only a tiny part of its overall training data. Most of what it knows is outdated.&lt;/p&gt;

&lt;p&gt;For instance, if you ask an LLM about a programming framework released last month, it won't know about it unless you include the documentation in your context window. The model's training data just doesn't have that recent information. This leads to a strong preference for older information, even when newer details might be more important.&lt;/p&gt;

&lt;p&gt;It's also important to note that if you ask LLMs to provide information on something they haven't been trained on and have no context for, they will likely produce &lt;em&gt;hallucinations&lt;/em&gt;. A &lt;em&gt;hallucination&lt;/em&gt; occurs when the LLM generates false or made-up information that can sometimes sound real or nearly true. This happens because you asked it to provide information about something it &lt;em&gt;can't&lt;/em&gt;, so it creates something &lt;em&gt;similar&lt;/em&gt; based on its training.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model &lt;em&gt;training&lt;/em&gt; is permanent. &lt;em&gt;Context&lt;/em&gt; is &lt;em&gt;temporary&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Your AI Friend is a Goldfish: It Has No Memory
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Models can't save their context between messages.&lt;/strong&gt; The system that feeds data to the LLM rebuilds and reprocesses the entire context history every single turn. This process of generating output from input combined with the model's training is called &lt;em&gt;inference&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Here's what actually happens: You send "hey there" to ChatGPT. Your buddy ChatGPT replies with a friendly response. When you send your second message, the model doesn't just process that new message: &lt;strong&gt;it processes your first message, its first response, AND your second message all at once&lt;/strong&gt;. This context grows with each exchange.&lt;/p&gt;

&lt;p&gt;The model treats this entire conversation thread as one giant input until the window reaches its limit and forces older turns to drop. That's why responses can change tone or forget earlier details as conversations grow longer.&lt;/p&gt;
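
&lt;p&gt;A toy sketch of that loop in Python (the &lt;code&gt;call_model&lt;/code&gt; stub is hypothetical -- substitute any real LLM API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical stub standing in for a real LLM API call.
def call_model(messages):
    return f"(model saw {len(messages)} messages this turn)"

history = []

def send(user_message):
    history.append({"role": "user", "content": user_message})
    # The ENTIRE history is re-sent and re-processed every turn;
    # the model itself keeps nothing between calls.
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(send("hey there"))      # model saw 1 message
print(send("remember me?"))   # model saw 3: turn 1 + reply + turn 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;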

&lt;h2&gt;
  
  
  The Context Window Paradox
&lt;/h2&gt;

&lt;p&gt;Context window sizes have grown dramatically. A few years ago, models handled only a few thousand words (8,192 or 16,384 &lt;a href="https://blog.knitli.com/understanding-tokens-what-they-are-and-why-theyre-important" rel="noopener noreferrer"&gt;tokens&lt;/a&gt;). Today's top models can process 128,000 to 2 million tokens worth of information.&lt;/p&gt;

&lt;p&gt;Bigger windows allow more context, but they create new problems. Fill a window with irrelevant information, and you're giving the model junk data that makes accurate responses harder. Processing large contexts also takes more time and costs more money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This creates a paradox: any information you exclude might be crucial, but including too much information can poison the model's ability to respond accurately.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Context Poisoning Problem
&lt;/h3&gt;

&lt;p&gt;Most current tools don't handle context well. For coding tasks, many systems add everything that might be relevant into the model's context without careful selection. They might include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Entire codebases when only a few functions are needed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Outdated documentation along with current specs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Error logs mixed with successful runs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multiple conflicting examples&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This adds more confusion than clarity, making it difficult for the model to find useful information among irrelevant details.&lt;/p&gt;
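
&lt;p&gt;Here's a toy illustration of the difference (the file contents and keyword matching are invented for the example; real tools use much smarter relevance ranking):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Toy example: naive "dump everything" vs. selecting relevant context.
codebase = {
    "auth.py": "def login(user): ...",
    "billing.py": "def charge(card): ...",
    "README.md": "Project overview ...",
}

def naive_context(files):
    # Everything goes in, relevant or not -- large and noisy.
    return "\n".join(files.values())

def selective_context(files, keywords):
    # Keep only files that mention the task at hand.
    relevant = [text for text in files.values()
                if any(word in text for word in keywords)]
    return "\n".join(relevant)

print(len(naive_context(codebase)))            # the whole codebase
print(selective_context(codebase, ["login"]))  # just auth.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;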

&lt;h2&gt;
  
  
  Why This Matters for You
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Understanding context windows helps explain common AI frustrations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Why an AI assistant "forgets" something you mentioned earlier in a long conversation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why providing too much background information sometimes makes responses worse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why the same prompt can give different results depending on what else is in the context (since models are probabilistic, meaning they create output based on statistical likelihood, even the exact same context can lead to different results).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why AI coding tools sometimes suggest outdated approaches despite having access to current documentation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Working Around the Limitations
&lt;/h2&gt;

&lt;p&gt;Engineers use several techniques to manage context effectively, like prioritizing recent information, summarizing older exchanges, and filtering out irrelevant details. But these approaches have their own trade-offs and limitations.&lt;/p&gt;
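
&lt;p&gt;The simplest of these -- keep the most recent turns that fit -- looks roughly like this sketch (the word-count "tokenizer" is a crude stand-in; real systems count actual tokens):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def count_tokens(text):
    # Crude stand-in: real systems use the model's own tokenizer.
    return len(text.split())

def fit_context(history, budget):
    # Walk backwards from the newest turn, keeping whatever fits.
    kept, used = [], 0
    for turn in reversed(history):
        cost = count_tokens(turn)
        if used + cost &amp;gt; budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = ["old question", "old answer", "new question", "new answer"]
print(fit_context(history, budget=4))  # drops the oldest turns first
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;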

&lt;p&gt;&lt;strong&gt;The bottom line: "context is king" for LLMs.&lt;/strong&gt; Feeding the right amount of the right information in the right order matters more than raw context window size. &lt;strong&gt;This makes context management the central engineering challenge for anyone building with LLMs.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Our next post in this series will explore current solutions to these context problems including their strengths, weaknesses, and why even the best approaches today aren't quite "good enough" for complex, long-running tasks.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Visit us at &lt;a href="https://knitli.com" rel="noopener noreferrer"&gt;knitli.com&lt;/a&gt; to learn how we're fixing the context problem, and sign up for our waitlist!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>deeplearning</category>
      <category>learning</category>
    </item>
    <item>
      <title>Understanding Tokens: What They Are and Why They're Important</title>
      <dc:creator>Adam Poulemanos</dc:creator>
      <pubDate>Fri, 26 Sep 2025 02:08:14 +0000</pubDate>
      <link>https://dev.to/knitli/understanding-tokens-what-they-are-and-why-theyre-important-876</link>
      <guid>https://dev.to/knitli/understanding-tokens-what-they-are-and-why-theyre-important-876</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv5ruwrttr6nayzzcy3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv5ruwrttr6nayzzcy3f.png" alt="a graphic showing how tokens are processed" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Part 1 of &lt;a href="https://knitli.com" rel="noopener noreferrer"&gt;Knitli's&lt;/a&gt; 101 introductions to AI and the economics of AI&lt;/p&gt;




&lt;h2&gt;
  
  
  Tokens are &lt;em&gt;Parts&lt;/em&gt; of Words
&lt;/h2&gt;

&lt;p&gt;Most people think AI, like ChatGPT, reads words. It doesn't.&lt;/p&gt;

&lt;p&gt;It reads &lt;strong&gt;tokens&lt;/strong&gt; -- invisible chunks of text that power every interaction.&lt;/p&gt;

&lt;p&gt;When you type something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello, world!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model doesn't see two words. It sees four tokens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Hello&lt;/code&gt; -- 1 token&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;,&lt;/code&gt; -- 1 token&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt; world&lt;/code&gt; (note the leading space) -- 1 token&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;!&lt;/code&gt; -- 1 token&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That simple greeting is &lt;strong&gt;4 tokens&lt;/strong&gt;, not &lt;strong&gt;2 words&lt;/strong&gt;. Code fragments break into even more tokens because punctuation, brackets, and symbols all get split up. (What does and doesn't become a token depends on the model's tokenizer, so our example isn't exact.)&lt;/p&gt;
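
&lt;p&gt;You can check this yourself with OpenAI's open source &lt;code&gt;tiktoken&lt;/code&gt; library. The exact split depends on the encoding you pick; this sketch uses &lt;code&gt;cl100k_base&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Hello, world!")

print(len(ids))                        # 4 tokens
print([enc.decode([i]) for i in ids])  # ['Hello', ',', ' world', '!']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;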

&lt;h2&gt;
  
  
  Tokens Aren't Expensive. Processing them is.
&lt;/h2&gt;

&lt;p&gt;When you send your tokens to get processed, &lt;em&gt;each one&lt;/em&gt; must be run through &lt;strong&gt;billions of math operations on very expensive GPUs&lt;/strong&gt; every single time. That's where the cost comes from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Power-hungry hardware&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data center space&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cooling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Staff to maintain and secure it&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More tokens → more GPU time → higher costs.&lt;/p&gt;

&lt;p&gt;Fewer tokens → less GPU time → lower costs.&lt;/p&gt;

&lt;p&gt;Right now, you probably don't see the meter running. You pay a flat subscription; someone else covers the token bill.&lt;/p&gt;

&lt;p&gt;Under the hood, &lt;strong&gt;tokens are the biggest driver of compute costs at every AI company&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tokens are the Foundation for Everything Else
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context windows&lt;/strong&gt;, or how much a model can see at one time, are measured in tokens.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API pricing&lt;/strong&gt; is per million tokens (API access is when companies or developers access an AI model to provide their own service, like a chatbot on a website, or just for internal use). A cost sketch follows this list.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory&lt;/strong&gt;, efficiency, and much of prompt engineering are all about how tokens are used.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
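
&lt;p&gt;To make the pricing point concrete, here's a small sketch. The per-million-token rates are invented for illustration; real prices vary widely by model and provider:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative only: these per-million-token rates are made up.
PRICE_PER_M_INPUT = 3.00    # USD per million input tokens
PRICE_PER_M_OUTPUT = 15.00  # USD per million output tokens

def estimate_cost(input_tokens, output_tokens):
    return (input_tokens / 1_000_000 * PRICE_PER_M_INPUT
            + output_tokens / 1_000_000 * PRICE_PER_M_OUTPUT)

# A large-context request: 120k tokens in, 4k tokens out.
print(f"${estimate_cost(120_000, 4_000):.2f}")  # $0.42
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;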

&lt;p&gt;&lt;strong&gt;If you want to understand AI -- how it really works, or why it sometimes &lt;em&gt;costs so much&lt;/em&gt; -- you have to start with tokens.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Learn more about how &lt;strong&gt;Knitli&lt;/strong&gt; is tackling the hidden economics of AI at the source: visit us at &lt;a href="http://knitli.com" rel="noopener noreferrer"&gt;&lt;strong&gt;knitli.com&lt;/strong&gt;&lt;/a&gt; and subscribe to our waitlist for updates!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>openai</category>
    </item>
  </channel>
</rss>
