<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nicola Alessi</title>
    <description>The latest articles on DEV Community by Nicola Alessi (@nicolalessi).</description>
    <link>https://dev.to/nicolalessi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3053111%2F30a9e7d3-9045-46e0-822e-26ebfcfd738f.jpeg</url>
      <title>DEV Community: Nicola Alessi</title>
      <link>https://dev.to/nicolalessi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nicolalessi"/>
    <language>en</language>
    <item>
      <title>I tracked every token my AI coding agent consumed for a week. 70% was waste.</title>
      <dc:creator>Nicola Alessi</dc:creator>
      <pubDate>Sat, 28 Mar 2026 17:04:11 +0000</pubDate>
      <link>https://dev.to/nicolalessi/i-tracked-every-token-my-ai-coding-agent-consumed-for-a-week-70-was-waste-465</link>
      <guid>https://dev.to/nicolalessi/i-tracked-every-token-my-ai-coding-agent-consumed-for-a-week-70-was-waste-465</guid>
      <description>&lt;p&gt;Last week Anthropic announced tighter usage limits for Claude during peak hours. My timeline exploded with developers asking why they're hitting limits after 2-3 prompts.&lt;/p&gt;

&lt;p&gt;I'm the developer behind vexp, a local context engine for AI coding agents. Before building it, I did something nobody seems to do: I actually measured what's happening under the hood.&lt;/p&gt;

&lt;h2&gt;The experiment&lt;/h2&gt;

&lt;p&gt;I tracked token consumption on FastAPI v0.115.0 — the real open-source framework, ~800 Python files. Not a toy project.&lt;/p&gt;

&lt;p&gt;7 tasks (bug fixes, features, refactors, code understanding). 3 runs per task. 42 total executions. Claude Sonnet 4.6. Full isolation between runs.&lt;/p&gt;

&lt;h2&gt;What I found&lt;/h2&gt;

&lt;p&gt;Every single prompt, Claude Code did this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Glob pattern &lt;code&gt;*&lt;/code&gt; — found all files&lt;/li&gt;
&lt;li&gt;Glob pattern &lt;code&gt;**/*.{py,js,ts,...}&lt;/code&gt; — found code files&lt;/li&gt;
&lt;li&gt;Read file 1&lt;/li&gt;
&lt;li&gt;Read file 2&lt;/li&gt;
&lt;li&gt;Read file 3&lt;/li&gt;
&lt;li&gt;...repeat 20+ times&lt;/li&gt;
&lt;li&gt;Finally start thinking about my actual question&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Average per prompt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;23 tool calls (Read/Grep/Glob)&lt;/li&gt;
&lt;li&gt;~180,000 tokens consumed&lt;/li&gt;
&lt;li&gt;~50,000 tokens actually relevant to the question&lt;/li&gt;
&lt;li&gt;70% waste rate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That 70% is why you're hitting usage limits. You're not asking too many questions. Your agent is reading too many files.&lt;/p&gt;
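&lt;p&gt;The waste rate is nothing more than the ratio of the two averages above:&lt;/p&gt;

```python
# Averages per prompt, from the measurements above.
tokens_consumed = 180_000   # everything Claude pulled into context
tokens_relevant = 50_000    # tokens that actually bore on the question

waste_rate = 1 - tokens_relevant / tokens_consumed
print(f"waste rate: {waste_rate:.0%}")   # prints "waste rate: 72%" (the ~70% above)
```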

&lt;h2&gt;Why this happens&lt;/h2&gt;

&lt;p&gt;AI coding agents don't have a map of your codebase. They don't know which files are relevant to your question before they start reading. So they do what any new developer would do on their first day: read everything.&lt;/p&gt;

&lt;p&gt;The difference is that a new developer reads the codebase once. Your AI agent reads it on every single prompt.&lt;/p&gt;

&lt;p&gt;And it gets worse. As your session continues, context accumulates. By turn 15, each prompt is re-processing your full conversation history plus the codebase reads. The cost per prompt climbs with every turn, so total session cost grows quadratically, not linearly.&lt;/p&gt;

&lt;h2&gt;What actually helps&lt;/h2&gt;

&lt;p&gt;Free fixes (do these today):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Scope your prompts. "Fix the auth error in src/auth/login.ts" triggers 3-5 file reads. "Fix the auth error" triggers 20+.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Short sessions. Start a new session for each task. Don't do 15 things in one conversation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use /compact before context bloats. Don't wait for auto-compaction at 167K tokens.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Audit your MCPs. Every loaded MCP server adds token overhead on every prompt, even when you don't use it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use /model opusplan. Planning with Opus, implementation with Sonnet.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These get you 20-30% savings. The structural fix gets you 58-74%.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;The idea: instead of letting the agent explore your codebase file-by-file, pre-index the project and serve only the relevant code per query.&lt;/p&gt;
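&lt;p&gt;vexp itself is a Rust binary, but the core idea (index once, answer each query from the index) fits in a few lines of Python. The sketch below is a toy illustration using the stdlib &lt;code&gt;ast&lt;/code&gt; module, not vexp's actual implementation: map every top-level symbol to the files that define it, then serve only the files whose symbols appear in the query.&lt;/p&gt;

```python
import ast
from pathlib import Path


def build_index(root: str) -> dict[str, set[str]]:
    """Map each top-level function/class name to the files that define it."""
    index: dict[str, set[str]] = {}
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                index.setdefault(node.name, set()).add(str(path))
    return index


def relevant_files(index: dict[str, set[str]], query: str) -> set[str]:
    """Serve only the files whose symbols are mentioned in the query."""
    words = {w.strip("?.,").lower() for w in query.split()}
    hits: set[str] = set()
    for symbol, files in index.items():
        if symbol.lower() in words:
            hits |= files
    return hits
```

&lt;p&gt;A real engine needs ranking, dependency expansion, and a token budget on top of this, but even the naive version turns "read 20 files" into "read the two that define the symbols you asked about."&lt;/p&gt;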

&lt;p&gt;I built this as an MCP server called vexp. Rust binary, tree-sitter AST parsing, dependency graph, SQLite. Runs 100% locally. Your code never leaves your machine.&lt;/p&gt;

&lt;p&gt;Here's what changed on the FastAPI benchmark:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Change&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tool calls/task&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;2.3&lt;/td&gt;
&lt;td&gt;-90%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost/task&lt;/td&gt;
&lt;td&gt;$0.78&lt;/td&gt;
&lt;td&gt;$0.33&lt;/td&gt;
&lt;td&gt;-58%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output tokens&lt;/td&gt;
&lt;td&gt;504&lt;/td&gt;
&lt;td&gt;189&lt;/td&gt;
&lt;td&gt;-63%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Task duration&lt;/td&gt;
&lt;td&gt;170s&lt;/td&gt;
&lt;td&gt;132s&lt;/td&gt;
&lt;td&gt;-22%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Total across 42 runs: $16.29 without vexp, $6.89 with.&lt;/p&gt;

&lt;p&gt;The output token drop surprised me. Claude doesn't just read less — it generates less irrelevant output too. Focused input context leads to focused responses. I didn't design for that, but it makes sense: less noise in, less noise out.&lt;/p&gt;

&lt;h2&gt;The output quality didn't drop. It improved.&lt;/h2&gt;

&lt;p&gt;I also ran this on SWE-bench Verified — 100 real GitHub bugs, Claude Opus 4.5, same $3 budget per task:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;73% pass rate (highest in the lineup)&lt;/li&gt;
&lt;li&gt;$0.67/task vs $1.98 average&lt;/li&gt;
&lt;li&gt;8 bugs only vexp solved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same model. Same budget. The only variable was context quality.&lt;/p&gt;

&lt;h2&gt;What this means for the usage limits debate&lt;/h2&gt;

&lt;p&gt;Everyone's arguing about whether Anthropic should raise limits or lower prices. Both miss the point.&lt;/p&gt;

&lt;p&gt;The real issue is architectural: AI coding agents don't know your codebase. They compensate by reading everything. You pay for that compensation with tokens — and now, with tighter session limits.&lt;/p&gt;

&lt;p&gt;Cheaper tokens help. Higher limits help. But reducing what goes into the context window in the first place is the only fix that works regardless of what Anthropic does with pricing or limits.&lt;/p&gt;

&lt;p&gt;Full benchmark data (open source, reproducible): &lt;a href="https://vexp.dev/benchmark" rel="noopener noreferrer"&gt;https://vexp.dev/benchmark&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FastAPI methodology: &lt;a href="https://www.reddit.com/r/ClaudeCode/comments/1rjra2w/i_built_a_context_engine_that_works_with_claude/" rel="noopener noreferrer"&gt;https://www.reddit.com/r/ClaudeCode/comments/1rjra2w/i_built_a_context_engine_that_works_with_claude/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Free tier available, no account needed. I'm curious what numbers you see on your own projects — especially on repos larger than FastAPI.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>ai</category>
      <category>productivity</category>
      <category>devtools</category>
    </item>
    <item>
      <title>How I Cut My AI Coding Agent's Token Usage by 65% (Without Changing Models)</title>
      <dc:creator>Nicola Alessi</dc:creator>
      <pubDate>Thu, 26 Feb 2026 21:25:42 +0000</pubDate>
      <link>https://dev.to/nicolalessi/how-i-cut-my-ai-coding-agents-token-usage-by-65-without-changing-models-47m</link>
      <guid>https://dev.to/nicolalessi/how-i-cut-my-ai-coding-agents-token-usage-by-65-without-changing-models-47m</guid>
      <description>&lt;p&gt;I've been using Claude Code on a 200-file TypeScript project. The model is great. The token bill was not.&lt;/p&gt;

&lt;p&gt;The problem wasn't the model — it was what I was feeding it. Every session, the agent would read 30-40 files trying to orient itself before doing any actual work. Same files, same discoveries, same wasted tokens. Every single time.&lt;/p&gt;

&lt;p&gt;After a lot of trial and error, I got my average input tokens per query from about 8,200 down to 2,100. Here's what worked, in order of impact.&lt;/p&gt;




&lt;h2&gt;Step 1: Write a real CLAUDE.md (not a vague one)&lt;/h2&gt;

&lt;p&gt;Most people write something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This is a TypeScript project using Express and React.
Please follow best practices.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells the agent almost nothing. It's going to read your whole codebase anyway.&lt;/p&gt;

&lt;p&gt;What actually works is being specific about decisions, not descriptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Auth&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Auth uses middleware in src/auth/middleware.ts
&lt;span class="p"&gt;-&lt;/span&gt; JWT tokens, not sessions. Refresh token rotation in src/auth/refresh.ts
&lt;span class="p"&gt;-&lt;/span&gt; DO NOT touch src/auth/legacy.ts — deprecated, will be removed Q2

&lt;span class="gu"&gt;## Database&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Prisma ORM, schema in prisma/schema.prisma
&lt;span class="p"&gt;-&lt;/span&gt; All migrations must be backward-compatible
&lt;span class="p"&gt;-&lt;/span&gt; Connection pooling handled by src/db/pool.ts, do not create new connections

&lt;span class="gu"&gt;## Conventions&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; All API handlers in src/handlers/, one file per resource
&lt;span class="p"&gt;-&lt;/span&gt; Error handling through src/lib/errors.ts, do not use try/catch in handlers
&lt;span class="p"&gt;-&lt;/span&gt; Tests mirror src/ structure in tests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key: tell the agent what it would otherwise spend 10 minutes figuring out. Decisions, not descriptions. "We use Express" is useless. "Auth uses JWT with refresh rotation in this specific file" saves the agent from reading your entire auth directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact: about 20% token reduction.&lt;/strong&gt; Significant, but not enough.&lt;/p&gt;




&lt;h2&gt;Step 2: Stop letting the agent grep your whole project&lt;/h2&gt;

&lt;p&gt;Here's what happens when you ask "how does authentication work in this project" without any context management:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Agent searches for "auth" across the codebase&lt;/li&gt;
&lt;li&gt;Gets 40+ hits across middleware, tests, configs, legacy code, node_modules if you're unlucky&lt;/li&gt;
&lt;li&gt;Reads 15-20 files to piece together the picture&lt;/li&gt;
&lt;li&gt;Burns 8,000+ tokens before writing a single line of code&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The agent doesn't need 40 files. It needs the auth middleware, the two things it depends on, and the three things that depend on it. That's maybe 5 files.&lt;/p&gt;

&lt;p&gt;The question is: how do you give the agent the right 5 files instead of all 40?&lt;/p&gt;

&lt;p&gt;This is where I stopped being able to solve it with prompting alone.&lt;/p&gt;




&lt;h2&gt;Step 3: Give the agent a dependency graph&lt;/h2&gt;

&lt;p&gt;I built a tool called &lt;a href="https://vexp.dev" rel="noopener noreferrer"&gt;vexp&lt;/a&gt; that pre-computes a dependency graph of your codebase at the AST level. Not grep, not text search — actual parsed relationships: who imports what, who calls what, what types flow where.&lt;/p&gt;

&lt;p&gt;When the agent asks about authentication, instead of grep-matching "auth" across 40 files, it gets the relevant subgraph: the auth function, its dependencies, and its dependents, packed into a token budget you control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before&lt;/strong&gt; (grep approach):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Agent reads: 40 files, 8,247 tokens
Relevant files: 5
Wasted: about 80% of input tokens
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;After&lt;/strong&gt; (dependency graph):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Agent reads: capsule with 5 relevant nodes, 2,140 tokens
Relevant files: 5
Wasted: near zero
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same information, 74% fewer tokens.&lt;/p&gt;
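&lt;p&gt;To make the before/after concrete, here is a minimal, hypothetical version of the graph step in Python (stdlib &lt;code&gt;ast&lt;/code&gt; only; vexp's real graph also tracks calls and type flow). It resolves module-level imports and returns the one-hop neighborhood around a target module: the target, what it imports, and what imports it.&lt;/p&gt;

```python
import ast
from pathlib import Path


def import_graph(root: str) -> dict[str, set[str]]:
    """Map each module (by file stem) to the project-local modules it imports."""
    files = {p.stem: p for p in Path(root).rglob("*.py")}
    graph: dict[str, set[str]] = {}
    for name, path in files.items():
        deps: set[str] = set()
        for node in ast.walk(ast.parse(path.read_text(encoding="utf-8"))):
            if isinstance(node, ast.Import):
                deps.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module.split(".")[0])
        # Keep only edges that point at files inside this project.
        graph[name] = deps.intersection(files)
    return graph


def subgraph(graph: dict[str, set[str]], target: str) -> set[str]:
    """The target module, its direct dependencies, and its direct dependents."""
    dependents = {mod for mod, deps in graph.items() if target in deps}
    return {target} | graph.get(target, set()) | dependents
```

&lt;p&gt;Treating file stems as unique module names is a shortcut a real tool cannot take (think &lt;code&gt;__init__.py&lt;/code&gt;), but it keeps the sketch honest about the shape of the idea: the agent gets &lt;code&gt;subgraph(graph, "auth")&lt;/code&gt; instead of 40 grep hits.&lt;/p&gt;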




&lt;h2&gt;Step 4: Solve the "amnesia" problem&lt;/h2&gt;

&lt;p&gt;Token reduction is half the problem. The other half: every new session starts from zero.&lt;/p&gt;

&lt;p&gt;Monday the agent spends 20 minutes discovering that your payment module has a non-obvious dependency on a legacy Redis cache. Tuesday, new session, same 20 minutes. Wednesday, same again.&lt;/p&gt;

&lt;p&gt;I tried every approach to make agents save their own notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"After completing a task, save your observations" — ignored 90% of the time&lt;/li&gt;
&lt;li&gt;Detailed save instructions in CLAUDE.md — maybe 15% compliance&lt;/li&gt;
&lt;li&gt;Making it a "required step" — agent writes "completed successfully, no issues" and moves on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The models are optimized for current-task completion. A tool that only benefits future sessions has zero value to the current context window. The incentive structure works against you.&lt;/p&gt;

&lt;p&gt;What actually worked: &lt;strong&gt;passive observation&lt;/strong&gt;. Instead of asking the agent to save things, watch what it does. Track which files it reads, what changes it makes at the AST level, and infer observations from its behavior. The agent that spent 20 minutes on your Redis dependency didn't save a note about it — but the tool call pattern and code changes tell you exactly what it learned.&lt;/p&gt;

&lt;p&gt;These observations get linked to the code graph. When the underlying code changes, linked observations automatically go stale. So you're never feeding the agent outdated context.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Session 1:&lt;/strong&gt; Agent discovers Redis dependency — observation saved passively&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session 2:&lt;/strong&gt; Agent gets the observation immediately — skips the 20-minute rediscovery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session 3:&lt;/strong&gt; Someone refactors the Redis cache out — observation flagged stale — agent re-explores&lt;/li&gt;
&lt;/ul&gt;
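&lt;p&gt;The staleness mechanism is the part worth copying even if you build your own tooling. A minimal sketch of the idea (names are illustrative, not vexp's API): tie each observation to a content hash of the code it describes, and stop serving it the moment the hash changes.&lt;/p&gt;

```python
import hashlib
from pathlib import Path


def fingerprint(path: str) -> str:
    """Content hash that ties an observation to the code it describes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


class ObservationStore:
    """Toy passive-memory store: each note is linked to a file's content hash."""

    def __init__(self) -> None:
        self._notes: list[tuple[str, str, str]] = []   # (file, hash, note)

    def record(self, path: str, note: str) -> None:
        """Called by the observer after a session, never by the agent itself."""
        self._notes.append((path, fingerprint(path), note))

    def fresh(self, path: str) -> list[str]:
        """Notes whose linked code is unchanged; anything else is stale."""
        current = fingerprint(path)
        return [note for f, h, note in self._notes if f == path and h == current]
```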




&lt;h2&gt;The combined result&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Avg input tokens/query&lt;/td&gt;
&lt;td&gt;8,200&lt;/td&gt;
&lt;td&gt;2,100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Session start orientation&lt;/td&gt;
&lt;td&gt;5-10 min&lt;/td&gt;
&lt;td&gt;under 30 sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Repeated discoveries&lt;/td&gt;
&lt;td&gt;Every session&lt;/td&gt;
&lt;td&gt;Once&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Token reduction&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;65-74%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;On a practical level this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;If you're on Claude Max/Pro&lt;/strong&gt;: 2-3x more work before hitting usage caps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If you're on API&lt;/strong&gt;: direct cost savings on input tokens&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;On any plan&lt;/strong&gt;: the agent starts working immediately instead of spending the first 10 minutes reading&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Setup&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://vexp.dev" rel="noopener noreferrer"&gt;vexp&lt;/a&gt; works as a VS Code extension or standalone CLI. It's an MCP server, so it works with any agent that speaks MCP: Claude Code, Cursor, Windsurf, Cline, Roo Code, Copilot, aider, Codex.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# VS Code&lt;/span&gt;
Search &lt;span class="s2"&gt;"vexp"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;the extension marketplace

&lt;span class="c"&gt;# CLI (for Claude Code, terminal agents)&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; vexp-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Free tier: 2,000 nodes, full memory tools, no account needed. Runs 100% local — single Rust binary, SQLite, zero network calls.&lt;/p&gt;

&lt;p&gt;Pro ($19/mo): multi-repo support, 50k nodes, priority updates.&lt;/p&gt;




&lt;h2&gt;What I'd do if I were starting today&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Write a specific CLAUDE.md&lt;/strong&gt; — decisions, not descriptions. 30 minutes, 20% improvement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up a dependency graph&lt;/strong&gt; — stop letting the agent grep. This is where the real token savings are.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Let memory accumulate&lt;/strong&gt; — don't try to make the agent save notes. Observe passively and let the context build itself over 3-4 sessions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first step is free and takes 30 minutes. The rest takes about 5 minutes to install.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm the developer behind vexp. Happy to answer questions about the architecture, MCP integration, or anything else in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>vscode</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Show me your AI side project</title>
      <dc:creator>Nicola Alessi</dc:creator>
      <pubDate>Tue, 15 Apr 2025 18:47:35 +0000</pubDate>
      <link>https://dev.to/nicolalessi/show-me-your-ai-side-project-omo</link>
      <guid>https://dev.to/nicolalessi/show-me-your-ai-side-project-omo</guid>
      <description>&lt;p&gt;AI is exploding right now, and I’m sure many of you are tinkering with cool side projects—whether it’s LLM experiments, automation tools, or something completely unexpected.&lt;/p&gt;

&lt;p&gt;Let’s share what we’re building! Drop a comment with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name/description of your project&lt;/li&gt;
&lt;li&gt;Tech stack you’re using&lt;/li&gt;
&lt;li&gt;Biggest challenge you faced (or are still facing)&lt;/li&gt;
&lt;li&gt;Demo link (if public)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ll go first:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lumigo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A mood-based movie discovery tool that helps you find films and shows by describing how you feel or what kind of emotional vibe you’re looking for (instead of relying on generic genre tags).&lt;br&gt;
&lt;strong&gt;Tech stack:&lt;/strong&gt; Next.js, Python, Neon database&lt;br&gt;
&lt;strong&gt;Challenge:&lt;/strong&gt; Building a system that accurately interprets user mood descriptions and maps them to the right titles without making the experience feel generic or repetitive.&lt;br&gt;
&lt;strong&gt;Try it:&lt;/strong&gt; &lt;a href="https://lumigo.tv" rel="noopener noreferrer"&gt;https://lumigo.tv&lt;/a&gt;&lt;br&gt;
 (still evolving — feedback is always useful!)&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>webdev</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
