<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Łukasz Holc</title>
    <description>The latest articles on DEV Community by Łukasz Holc (@luzgan).</description>
    <link>https://dev.to/luzgan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1629864%2F157f9944-2df3-4002-a5e3-958ec610a4b2.jpeg</url>
      <title>DEV Community: Łukasz Holc</title>
      <link>https://dev.to/luzgan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/luzgan"/>
    <language>en</language>
    <item>
      <title>I built an ESLint-like tool for composing AI agent rules — here's why</title>
      <dc:creator>Łukasz Holc</dc:creator>
      <pubDate>Wed, 04 Mar 2026 21:27:51 +0000</pubDate>
      <link>https://dev.to/luzgan/i-built-an-eslint-like-tool-for-composing-ai-agent-rules-heres-why-gfd</link>
      <guid>https://dev.to/luzgan/i-built-an-eslint-like-tool-for-composing-ai-agent-rules-heres-why-gfd</guid>
      <description>&lt;p&gt;If you use AI coding agents — Claude Code, Cursor, Copilot, Codex, Windsurf — you already know the pain: every agent wants its own context file. Claude Code reads &lt;code&gt;CLAUDE.md&lt;/code&gt;, Cursor wants &lt;code&gt;.cursorrules&lt;/code&gt;, Copilot expects &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;, Codex needs &lt;code&gt;AGENTS.md&lt;/code&gt;. The rules inside are usually the same, but you end up maintaining them separately across files you can never remember the name of.&lt;/p&gt;

&lt;p&gt;I kept copying the same guidelines between projects and agents, tweaking formatting, forgetting to update one file when I changed another. So I built &lt;strong&gt;&lt;a href="https://github.com/Luzgan/ai-rulesmith" rel="noopener noreferrer"&gt;ai-rulesmith&lt;/a&gt;&lt;/strong&gt; — a CLI that lets you define your rules once and compose them into the right output for each agent.&lt;/p&gt;

&lt;h2&gt;The ESLint analogy&lt;/h2&gt;

&lt;p&gt;The mental model is borrowed directly from ESLint. Rules are small, focused atoms — each one enforces a single practice (like &lt;code&gt;code-style/strict-typescript&lt;/code&gt; or &lt;code&gt;workflow/verify-before-completing&lt;/code&gt;). You pick the ones you need, skip the ones you don't, and compose them into a config. The tool generates the right file for each target agent.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; ai-rulesmith
rulesmith init
rulesmith build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One config (&lt;code&gt;AI_RULES.json&lt;/code&gt;), multiple agents, consistent rules everywhere.&lt;/p&gt;
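
&lt;p&gt;For a feel of the config, a hypothetical &lt;code&gt;AI_RULES.json&lt;/code&gt; might look like this. The field names are illustrative guesses, not the tool's actual schema (see the repo for that):&lt;/p&gt;

```json
{
  "targets": ["claude-code", "cursor", "copilot"],
  "rules": [
    "code-style/strict-typescript",
    "workflow/verify-before-completing"
  ]
}
```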

&lt;h2&gt;What makes it different&lt;/h2&gt;

&lt;p&gt;There are a few tools in this space now, but ai-rulesmith focuses on two ideas I haven't seen elsewhere:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Priority Zones&lt;/strong&gt; — LLMs pay most attention to the beginning and end of their context window. The middle is a lower-attention zone. ai-rulesmith lets you explicitly place rules in &lt;code&gt;before_start&lt;/code&gt; (top of context) and &lt;code&gt;before_finish&lt;/code&gt; (bottom of context) sections, so critical behavioral rules like "understand the codebase before changing anything" don't get buried between coding standards.&lt;/p&gt;
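
&lt;p&gt;The placement idea can be sketched in a few lines of Python. This is a minimal illustration, not the tool's implementation; it assumes each rule is tagged with one of three zones:&lt;/p&gt;

```python
def compose(rules):
    """Arrange rules so high-priority ones land at the context edges.

    Each rule is a (zone, text) pair; zone is one of
    'before_start', 'middle', 'before_finish'. Illustrative sketch,
    not the tool's actual composer.
    """
    zones = {"before_start": [], "middle": [], "before_finish": []}
    for zone, text in rules:
        zones[zone].append(text)
    # Critical rules go first and last; everything else sits in between.
    ordered = zones["before_start"] + zones["middle"] + zones["before_finish"]
    return "\n\n".join(ordered)

output = compose([
    ("middle", "Prefer small, focused functions."),
    ("before_finish", "Verify the build passes before completing."),
    ("before_start", "Understand the codebase before changing anything."),
])
```

&lt;p&gt;The point is purely positional: whatever lands in &lt;code&gt;before_start&lt;/code&gt; is emitted first, and &lt;code&gt;before_finish&lt;/code&gt; last.&lt;/p&gt;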

&lt;p&gt;&lt;strong&gt;Multi-step workflows&lt;/strong&gt; — Instead of dumping everything into one file, you can define a stepped workflow where each step gets its own rule file. The main output instructs the agent to read step-specific files as it progresses. Think: Step 1 (Create) → Step 2 (Review) → Step 3 (Ship), each with its own rules.&lt;/p&gt;
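
&lt;p&gt;A stepped build could be sketched like this (illustrative Python with hypothetical file names; only the read-a-file-per-step idea comes from the post):&lt;/p&gt;

```python
def build_workflow(steps):
    """Emit one rule file per step plus a main file that tells the
    agent which file to read at each stage (illustrative sketch)."""
    files = {}
    index_lines = ["# Workflow", ""]
    for i, (name, rules) in enumerate(steps, start=1):
        fname = f"step-{i}-{name.lower()}.md"
        files[fname] = f"# Step {i}: {name}\n\n" + "\n".join(rules)
        index_lines.append(f"{i}. {name}: read {fname} before starting this step.")
    files["AGENT_RULES.md"] = "\n".join(index_lines)
    return files

files = build_workflow([
    ("Create", ["Write the smallest change that works."]),
    ("Review", ["Check edge cases and error paths."]),
    ("Ship", ["Run the full test suite."]),
])
```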

&lt;h2&gt;Built-in ruleset&lt;/h2&gt;

&lt;p&gt;The tool ships with 29 rules across 9 categories, distilled from patterns found across the AI coding community — awesome-cursorrules, cursor.directory, Addy Osmani's spec writing guide, Trail of Bits' Claude Code config, and others. Categories include code style, testing, error handling, git workflow, security, architecture, and AI behavior.&lt;/p&gt;

&lt;p&gt;But the real value is the composability model. You can override built-in rules at the project or global level, add your own custom rules as plain markdown files, and even use rule variables for project-specific values (like a project name or tracker URL).&lt;/p&gt;
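
&lt;p&gt;Rule variables amount to simple template substitution. A sketch, assuming a hypothetical &lt;code&gt;{{name}}&lt;/code&gt; placeholder syntax (the tool's actual syntax may differ):&lt;/p&gt;

```python
import re

def render(rule_text, variables):
    """Substitute {{name}} placeholders with project-specific values.
    The placeholder syntax here is assumed for illustration."""
    def sub(match):
        return variables[match.group(1)]
    return re.sub(r"\{\{(\w+)\}\}", sub, rule_text)

rendered = render(
    "Reference issues as {{tracker_url}}/ISSUE-ID in commits for {{project_name}}.",
    {"tracker_url": "https://issues.example.com", "project_name": "acme-api"},
)
```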

&lt;h2&gt;Testing your rules&lt;/h2&gt;

&lt;p&gt;One feature I'm particularly happy about: &lt;code&gt;rulesmith test&lt;/code&gt; lets you define scenarios that verify your rules actually influence agent behavior. It uses an LLM to simulate a prompt against your composed rules, then a judge model evaluates whether assertions pass. You can catch regressions in your AI workflow the same way you'd catch regressions in code.&lt;/p&gt;
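
&lt;p&gt;The shape of such a test loop can be sketched with injected simulate and judge functions. Everything here is illustrative; stubs stand in for the real LLM calls:&lt;/p&gt;

```python
def run_scenario(prompt, rules, assertions, simulate, judge):
    """Run one rule-test scenario: `simulate` plays the agent given the
    composed rules, `judge` scores each assertion against the response.
    Both are injected so a real LLM (or a stub) can back them."""
    response = simulate(rules + "\n\nUser: " + prompt)
    return {a: judge(response, a) for a in assertions}

# Stub models for demonstration; a real run would call an LLM API.
results = run_scenario(
    prompt="Add a feature",
    rules="Always verify the build before completing.",
    assertions=["mentions verification"],
    simulate=lambda ctx: "I will verify the build before completing.",
    judge=lambda resp, a: "verify" in resp,
)
```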

&lt;h2&gt;Try it out&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; ai-rulesmith
rulesmith init
&lt;span class="c"&gt;# edit AI_RULES.json&lt;/span&gt;
rulesmith build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub: &lt;a href="https://github.com/Luzgan/ai-rulesmith" rel="noopener noreferrer"&gt;https://github.com/Luzgan/ai-rulesmith&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's MIT licensed and contributions are welcome — especially new rules. Good rules are focused (one practice per rule), universal (not tied to a specific stack), and actionable (concrete guidelines, not vague principles).&lt;/p&gt;

&lt;p&gt;I'd love to hear how others are managing their AI agent rules — are you maintaining separate files per agent? Using a different tool? Just copy-pasting and hoping for the best?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devtools</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I Built a Tool That Feeds AI Assistants Real Big-O Analysis Instead of Letting Them Guess</title>
      <dc:creator>Łukasz Holc</dc:creator>
      <pubDate>Thu, 26 Feb 2026 21:35:11 +0000</pubDate>
      <link>https://dev.to/luzgan/i-built-a-tool-that-feeds-ai-assistants-real-big-o-analysis-instead-of-letting-them-guess-2kji</link>
      <guid>https://dev.to/luzgan/i-built-a-tool-that-feeds-ai-assistants-real-big-o-analysis-instead-of-letting-them-guess-2kji</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Frontier AI models &lt;em&gt;can&lt;/em&gt; analyze time complexity. But they do it by reading your code, reasoning through it, and burning tokens in the process. Sometimes they get it right. Sometimes they confidently tell you a function is O(n) when it's actually O(n²) because there's a &lt;code&gt;.contains()&lt;/code&gt; hiding inside a loop.&lt;/p&gt;

&lt;p&gt;What if we skip all that and just feed them the answer through static analysis?&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;Time Complexity MCP&lt;/strong&gt; — an MCP server that parses your code into ASTs using &lt;a href="https://tree-sitter.github.io/tree-sitter/" rel="noopener noreferrer"&gt;tree-sitter&lt;/a&gt;, walks the syntax tree to detect complexity patterns, and reports per-function Big-O with line-level annotations.&lt;/p&gt;

&lt;p&gt;It works as a tool that Claude Code or GitHub Copilot can call on demand — no tokens spent on analysis, just structured results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/Luzgan/time-complexity-mcp" rel="noopener noreferrer"&gt;Luzgan/time-complexity-mcp&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What It Detects&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Loop nesting&lt;/strong&gt; — &lt;code&gt;for&lt;/code&gt;, &lt;code&gt;while&lt;/code&gt;, &lt;code&gt;do-while&lt;/code&gt; with depth tracking. Constant-bound loops (e.g., &lt;code&gt;for i in range(10)&lt;/code&gt;) are recognized as O(1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recursion&lt;/strong&gt; — linear recursion (O(n)) vs branching recursion like fibonacci (O(2^n))&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Known stdlib methods&lt;/strong&gt; — &lt;code&gt;.sort()&lt;/code&gt; as O(n log n), &lt;code&gt;.filter()/.map()&lt;/code&gt; as O(n), &lt;code&gt;.push()/.pop()&lt;/code&gt; as O(1). Each language has its own pattern set&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Combined complexity&lt;/strong&gt; — a &lt;code&gt;.indexOf()&lt;/code&gt; inside a &lt;code&gt;for&lt;/code&gt; loop correctly reports O(n²), not O(n)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Supports &lt;strong&gt;JavaScript, TypeScript, Python, Java, Kotlin, and Dart&lt;/strong&gt;.&lt;/p&gt;
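
&lt;p&gt;The per-language pattern sets can be pictured as a lookup from method name to cost. A minimal sketch assuming a dict-of-dicts shape (not the tool's actual data structures):&lt;/p&gt;

```python
# A small excerpt of a per-language cost table; the entries are assumed
# for illustration, the real pattern sets live in the tool's analyzers.
KNOWN_CALLS = {
    "javascript": {"sort": "O(n log n)", "indexOf": "O(n)", "push": "O(1)"},
    "python": {"sorted": "O(n log n)", "index": "O(n)", "append": "O(1)"},
}

def call_cost(language, method):
    """Look up a known stdlib method's cost; unknown calls default to O(1)."""
    return KNOWN_CALLS.get(language, {}).get(method, "O(1)")
```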

&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;p&gt;Each language has an analyzer class that implements 9 template methods from a shared &lt;code&gt;BaseAnalyzer&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Parse the source file into an AST via tree-sitter&lt;/li&gt;
&lt;li&gt;Extract all function nodes&lt;/li&gt;
&lt;li&gt;For each function, walk the tree to find:

&lt;ul&gt;
&lt;li&gt;Loops (and their nesting depth)&lt;/li&gt;
&lt;li&gt;Recursive calls (and whether they branch)&lt;/li&gt;
&lt;li&gt;Known stdlib calls (e.g., &lt;code&gt;.sort()&lt;/code&gt;, &lt;code&gt;.indexOf()&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Combine the results: an O(n) method inside an O(n) loop = O(n²)&lt;/li&gt;
&lt;li&gt;Return structured results with per-line annotations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No code is ever executed — it's pure structural analysis.&lt;/p&gt;
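
&lt;p&gt;Step 4's combination rule can be modeled with a tiny complexity algebra: represent each class as (polynomial degree, log power) and add component-wise when nesting. A sketch of the idea, not the tool's code:&lt;/p&gt;

```python
# Model each complexity class as a (polynomial degree, log power) pair
# so that nesting is component-wise addition. Illustrative model of
# step 4 above, not the tool's actual algebra.
DEGREES = {"O(1)": (0, 0), "O(log n)": (0, 1), "O(n)": (1, 0), "O(n log n)": (1, 1)}
NAMES = {pair: name for name, pair in DEGREES.items()}
NAMES[(2, 0)] = "O(n^2)"
NAMES[(3, 0)] = "O(n^3)"

def nest(outer, inner):
    """Cost of running `inner` on every iteration of `outer`."""
    p1, l1 = DEGREES[outer]
    p2, l2 = DEGREES[inner]
    return NAMES[(p1 + p2, l1 + l2)]
```

&lt;p&gt;So an O(n) method inside an O(n) loop composes to O(n²), exactly the &lt;code&gt;.indexOf()&lt;/code&gt;-in-a-loop case from the intro.&lt;/p&gt;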

&lt;h2&gt;The Fun Part: Self-Analysis&lt;/h2&gt;

&lt;p&gt;I pointed the tool at its own codebase. 27 files, 150 functions.&lt;/p&gt;

&lt;p&gt;The breakdown:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Complexity&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;td&gt;102&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;O(n)&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;O(n log n)&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;O(n²)&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;O(n³)&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;O(2^n)&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;It found real issues:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The directory scanner was O(n³) — because &lt;code&gt;.indexOf()&lt;/code&gt; was being called inside a &lt;code&gt;.sort()&lt;/code&gt; comparator, which was called after a nested file → function iteration loop. A formatting utility was O(n²) for the same &lt;code&gt;.indexOf()&lt;/code&gt; reason.&lt;/p&gt;

&lt;p&gt;I fixed those based on its own report. Replaced &lt;code&gt;.indexOf()&lt;/code&gt; lookups with a &lt;code&gt;Map&lt;/code&gt; for O(1) access. The tool ate its own dog food.&lt;/p&gt;
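
&lt;p&gt;The fix generalizes to any repeated positional lookup. Here is the pattern in Python (a dict playing the role of the &lt;code&gt;Map&lt;/code&gt;):&lt;/p&gt;

```python
# list.index() is an O(n) scan, so calling it inside a sort comparator
# multiplies costs. Precomputing a position map makes each lookup O(1).
ordering = ["src/index.ts", "src/analyzer.ts", "src/formatter.ts"]

def slow_key(name):
    return ordering.index(name)      # O(n) scan per lookup

position = {name: i for i, name in enumerate(ordering)}

def fast_key(name):
    return position[name]            # O(1) dict lookup

names = ["src/formatter.ts", "src/index.ts", "src/analyzer.ts"]
result = sorted(names, key=fast_key)
```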

&lt;h2&gt;Setup&lt;/h2&gt;

&lt;p&gt;Download a prebuilt release for your platform from &lt;a href="https://github.com/Luzgan/time-complexity-mcp/releases/latest" rel="noopener noreferrer"&gt;GitHub Releases&lt;/a&gt;, extract it, and add to your MCP config:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"time-complexity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/path/to/time-complexity-mcp/dist/index.js"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Claude Code and the tools are available. No npm install, no C++ compiler — just Node.js 18+.&lt;/p&gt;

&lt;p&gt;Or install from source:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Luzgan/time-complexity-mcp.git
&lt;span class="nb"&gt;cd&lt;/span&gt; time-complexity-mcp
npm &lt;span class="nb"&gt;install&lt;/span&gt; &amp;amp;&amp;amp; npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;Open to feedback and language requests. The architecture makes it straightforward to add new languages — each one is ~3 files implementing the template methods for that language's AST structure.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/Luzgan/time-complexity-mcp" rel="noopener noreferrer"&gt;github.com/Luzgan/time-complexity-mcp&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
