<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Christian C. Berclaz</title>
    <description>The latest articles on DEV Community by Christian C. Berclaz (@chrisgve).</description>
    <link>https://dev.to/chrisgve</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2470077%2F1bd6d999-de79-4ac2-90fc-bcbfefb00909.PNG</url>
      <title>DEV Community: Christian C. Berclaz</title>
      <link>https://dev.to/chrisgve</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chrisgve"/>
    <language>en</language>
    <item>
      <title>Party On. Keep Coding. And Make It Last.</title>
      <dc:creator>Christian C. Berclaz</dc:creator>
      <pubDate>Tue, 17 Mar 2026 23:00:00 +0000</pubDate>
      <link>https://dev.to/chrisgve/party-on-keep-coding-and-make-it-last-3a5e</link>
      <guid>https://dev.to/chrisgve/party-on-keep-coding-and-make-it-last-3a5e</guid>
      <description>&lt;p&gt;&lt;em&gt;How AI-powered hyperfocus loops can quietly burn you out — and why the danger extends far beyond developers.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;A few days ago, I came across a Reddit post titled "I think we need a name for this new dev behavior: Slurm coding." I expected a joke, maybe a meme. By the time I finished reading, something had shifted.&lt;/p&gt;

&lt;p&gt;The author, referencing Futurama's Slurms MacKenzie — the party worm who just kept going forever — was describing a pattern anyone using AI coding tools would recognize. You start with a small idea. You ask an LLM to scaffold a few pieces. It works. Suddenly you're refactoring the architecture, adding features, building a cross-platform version of something that didn't exist a week ago. You sit down after dinner and resurface at 3am with an entire system running.&lt;/p&gt;

&lt;p&gt;It was well written, energetic, charming, and deeply relatable. I expected to laugh and move on.&lt;/p&gt;

&lt;p&gt;Instead, it stuck with me for days. Because the author was describing, with admiration and humor, a pattern that nearly destroyed me. Not once — three times.&lt;/p&gt;

&lt;h2&gt;
  
  
  The loop before AI
&lt;/h2&gt;

&lt;p&gt;I need to be honest about something: this loop is not new. AI has supercharged it, but the fundamental pattern has been running in my brain since I was a teenager.&lt;/p&gt;

&lt;p&gt;In the late 1980s, I was writing Turbo Pascal on MS-DOS. Self-taught, no computer science education, just a kid fascinated by what these machines could do. I built a text windowing library — overlapping windows, resizable, pretty sophisticated for the time. Then a menu system on top of it, configurable by file instead of hardcoded. Then a generic library for single and double linked lists and binary trees. Then a printing library for arbitrary report generation. Then I started on a database module.&lt;/p&gt;

&lt;p&gt;Each piece triggered the next. One idea fed another. I was learning and building simultaneously, and the combination was intoxicating.&lt;/p&gt;

&lt;p&gt;The project died when I hit Turbo Pascal's 64KB binary limit. My libraries consumed about 95% of the available space, leaving no room for actual applications. I still have a large binder of printed code somewhere. But the lesson I didn't learn — couldn't learn, at that age — was that the loop itself was the product. Not the software. The feeling.&lt;/p&gt;

&lt;p&gt;That feeling followed me through decades. In my thirties, coding late into the night. In my forties, spending weekends on work projects instead of being fully present with my family. By then we were living in Asia — Hong Kong first, then Singapore — where I held senior roles at a major bank. In 2011, I left the bank to start a company, writing Objective-C for iOS with the same all-consuming intensity. When that venture failed, the loop didn't vanish — it shifted. I poured my energy into wildlife photography for over a year, until I found a new role at the same bank and resumed my usual pattern: adjacent coding, Excel pivot tables and formulas, always finding ways to build systems even without a compiler. The medium changed. The wiring didn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AuDHD has to do with it
&lt;/h2&gt;

&lt;p&gt;I was diagnosed with ADHD and autism — AuDHD — at 55. My daughter had been diagnosed about ten years earlier. My wife was diagnosed just before we left Singapore to return to Switzerland in 2022, after sixteen years in Asia. We'd recognized the shared traits for years. The autism part was more of a formality for me; I'd known for a while.&lt;/p&gt;

&lt;p&gt;The ADHD part hit differently.&lt;/p&gt;

&lt;p&gt;In early 2023, hoping to get a handle on my procrastination, I found a YouTube channel called "&lt;a href="https://www.youtube.com/@HowtoADHD" rel="noopener noreferrer"&gt;How to ADHD.&lt;/a&gt;" After one video, I was stunned. She was talking about me. I watched it again. At 55 years old, I couldn't watch her other videos without my wife sitting next to me. I needed someone there to help me digest what I was learning about myself.&lt;/p&gt;

&lt;p&gt;Think about that for a moment. A grown man, decades of professional experience across continents, and I needed support to watch YouTube videos about my own brain.&lt;/p&gt;

&lt;p&gt;The diagnosis didn't explain one single thing in my life. It explained all of it. The special interests. The inability to make small talk without alcohol to smooth the edges. The compulsion to dive into any rabbit hole and go deep, deep down. The meltdowns when technology fails — because computers aren't just tools for me, they're part of my identity. I learned English from computer user manuals as a French-speaking teenager. They've been woven into who I am since my early teens.&lt;/p&gt;

&lt;p&gt;It also explained the coping mechanisms I'd built over a lifetime. Smoking. Drinking. Spending money I shouldn't have. One by one, I dismantled them. I quit smoking in 2012. My alcohol consumption dropped gradually to once or twice a year. And when I recognized the spending pattern as a danger to my family — we'd bought a house, we had a life to protect — I asked for an administrative guardianship with my wife as guardian.&lt;/p&gt;

&lt;p&gt;That took everything I had. It also left me exposed. All the buffers were gone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three burnouts and what I didn't see coming
&lt;/h2&gt;

&lt;p&gt;Here's what I know about burnout: you don't see it coming. You especially don't see it coming when you have a brain that's wired for hyperfocus, perfectionism, and an overwhelming need not to let the team down.&lt;/p&gt;

&lt;p&gt;The first burnout hit at the end of my physics studies. I never finished my Master's. At the time, I didn't have a name for what happened. I just couldn't continue.&lt;/p&gt;

&lt;p&gt;The second came in 2011 when I left the bank to start a company. The timing was terrible. The experience was rich; the outcome was financially disastrous. I coded through it with the same intensity I'd always had, and when it collapsed, so did I.&lt;/p&gt;

&lt;p&gt;The third, in mid-2023, was the one that finally got named. My wife saw it before I did. "You need to see a doctor. I think you are having a burnout." She was right. She usually is.&lt;/p&gt;

&lt;p&gt;I'm now working at 60% capacity. The remaining 40% is about to be declared a legal partial disability. I'm 58, and I won't fully get back on the horse. That's not self-pity — it's arithmetic. Each recovery has taken years. The reserves aren't what they were.&lt;/p&gt;

&lt;p&gt;There's a fallacy we all fall for: these things only happen to other people. We ignore the signs and keep going. And we tend to be the last ones to know.&lt;/p&gt;

&lt;h2&gt;
  
  
  The LLM asymmetry
&lt;/h2&gt;

&lt;p&gt;Now here's where the Reddit post becomes something more than a relatable anecdote.&lt;/p&gt;

&lt;p&gt;When I used to code with another person — a colleague, a collaborator — there were natural breaks. They'd go home. They'd forget what we discussed and we'd need to re-establish context. They'd get tired and say "let's pick this up tomorrow."&lt;/p&gt;

&lt;p&gt;An LLM does none of that.&lt;/p&gt;

&lt;p&gt;It waits indefinitely. It loses nothing. It never signals fatigue. And when you have a brain that's already wired to ignore its own needs in pursuit of an interesting problem, this asymmetry is genuinely dangerous.&lt;/p&gt;

&lt;p&gt;I know this because I've lived it. In the past few months, working with AI tools, I've watched projects explode in exactly the way the Reddit post describes.&lt;/p&gt;

&lt;p&gt;I started building a semantic search tool for my codebase — a side project. Three months later, it had grown into a full knowledge management system with code intelligence, document storage, a background daemon, a CLI, and text search with trigrams optimized to outperform ripgrep. From an idea to an ecosystem, organically, each feature triggering the next.&lt;/p&gt;

&lt;p&gt;I wanted to build a database for my fountain pen collection — how to manage rotations of pens and inks. I ran into the problem of qualifying colors objectively. That opened a color science rabbit hole. Out came a Munsell color space library implementing mathematical transformations from public-domain research dating back to the 1930s, plus an overlay system from a recent academic paper. The library is open source now. The pen database still isn't finished.&lt;/p&gt;

&lt;p&gt;I wanted to relearn how to use a slide rule. That project spawned a Lua-Swift integration library, a numerical methods library inspired by SciPy, an array library with NumPy-style broadcasting, a plotting library modeled on Matplotlib and Seaborn, a computer algebra system in Rust, and several other open-source packages. The slide rule app itself is still in progress.&lt;/p&gt;

&lt;p&gt;Each of these is a textbook example of the Slurm coding loop. And none of them involved a single sleepless night — because I've learned, at great cost, where that leads. But the pull is constant. The ideas don't stop. And the AI is always ready to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  The guardrails I have, and the ones I don't
&lt;/h2&gt;

&lt;p&gt;I use a break timer — Break Time, from Parallels — set for every 50 minutes with a 10-minute break. It starts automatically in the morning and stops in the early evening. When it fires, the screen dims and a countdown begins. I can snooze or skip it, but doing so requires a deliberate action. I can't pretend I didn't see it.&lt;/p&gt;

&lt;p&gt;In theory, this works. In practice, when I'm in a conversation with an AI and ideas are flowing, it's always too easy to skip. "We're in the flow, we have so many things to explore, why stop now?" The fact that the LLM will simply wait — that nothing is lost by taking a break — rarely registers in the moment.&lt;/p&gt;

&lt;p&gt;I have a strict bedtime. Around 10pm, sometimes a bit later, never past midnight. I wind down with an audiobook and a game on my phone. Medication helps me fall asleep. When I feel the effect: stop the book, phone on the charger, lights off. This routine is non-negotiable, and it's probably the single most important guardrail I have.&lt;/p&gt;

&lt;p&gt;My wife is the ultimate safety net. She's observant in ways I'm not. She sees the patterns before I can even imagine them. But I don't want to be solely reliant on her. She has enough on her plate, and adding friction to her life isn't something I'm willing to do.&lt;/p&gt;

&lt;p&gt;So I'm experimenting. I'm considering physical programming books — Rust, Swift, TypeScript, Zig, color science — as a way to keep learning without the LLM amplification loop. Slower, more deliberate, and it splits the day into smaller, more manageable chunks. I'm also looking at tools that create stronger friction than my current timer.&lt;/p&gt;

&lt;p&gt;My intuition tells me the solution won't be a single thing. It'll be a series of things: some creating forceful friction, others creating dilution — keeping the content interesting while splitting it into pieces that are easier to manage. But I have to be careful. Building habits can trigger something called PDA — Pathological Demand Avoidance — and if that happens, everything has to be rebuilt from the ground up. I can't use the same trick twice.&lt;/p&gt;

&lt;p&gt;As a physicist by training, I think of it as two competing potential wells. One voice says "take a breath." The other says "you're in the flow, don't stop now." Which one wins depends on their relative depth and distance, and I can't always predict what positions them. It's not binary. It's not simple. It's always a combination.&lt;/p&gt;

&lt;h2&gt;
  
  
  This isn't just about developers
&lt;/h2&gt;

&lt;p&gt;Right now, Slurm coding lives in the tech world. Developers with AI tools, building things at a pace that wasn't possible a few years ago. But I don't think it stays there.&lt;/p&gt;

&lt;p&gt;Agentic AI systems are going to spread. They'll reach people who don't write code — professionals of all kinds setting up automated workflows, building things, getting caught in the same feedback loop. Idea, build, it works, dopamine, bigger idea, keep going.&lt;/p&gt;

&lt;p&gt;We talk about the dangers of AI in abstract terms — job displacement, misinformation, autonomy. But there's a quieter risk that lives at the human level: the risk of what happens when a tool that never gets tired meets a person who doesn't know when to stop.&lt;/p&gt;

&lt;p&gt;Will the people around them understand what's happening? Will they see the signs? Are there universal signs, or are we all different enough that a one-size-fits-all warning wouldn't work?&lt;/p&gt;

&lt;p&gt;I don't have all the answers. I'm not sure anyone does yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  We are not machines
&lt;/h2&gt;

&lt;p&gt;My burnout wasn't caused by Slurm coding. But if the tools I have now had existed then, I'm certain it would have come faster and hit harder.&lt;/p&gt;

&lt;p&gt;What I wish someone had told me — what I want to say clearly — is this: it's not a lesson, and it's not a moral. It's about sharing what I've learned. It's about giving clues, not having all the answers. It's about being honest that the thing you love doing can also be the thing that breaks you, and that you'll likely be the last person to notice.&lt;/p&gt;

&lt;p&gt;There are people who sleep four hours a night and bulldoze through their days without consequence. They exist. They're annoyed when they have to take holidays. Despite what we'd like to think, most of us are not built that way.&lt;/p&gt;

&lt;p&gt;There can still be enormous fun in these fascinating, immersive activities. So much fun that the longer you can sustain them, the better life is. But the key word is &lt;em&gt;sustain&lt;/em&gt;. Giving your body and mind a break is not too high a price to keep doing what you love for as long as humanly possible — or until the next shiny thing captures your attention entirely.&lt;/p&gt;

&lt;p&gt;No matter what, we are not machines. We are humans.&lt;/p&gt;

&lt;p&gt;The original Reddit post ends with: "Party on. Keep coding."&lt;/p&gt;

&lt;p&gt;I'd add three words: &lt;strong&gt;And make it last.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The original Reddit post that inspired this article is reproduced below with the author's permission.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  "&lt;a href="https://www.reddit.com/r/ClaudeCode/comments/1rp6iya/i_think_we_need_a_name_for_this_new_dev_behavior/" rel="noopener noreferrer"&gt;I think we need a name for this new dev behavior: Slurm coding&lt;/a&gt;"
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;By &lt;a href="https://www.reddit.com/user/Khr0mZ/" rel="noopener noreferrer"&gt;u/Khr0mZ&lt;/a&gt; — posted March 9, 2026 on &lt;a href="https://www.reddit.com/r/ClaudeCode/" rel="noopener noreferrer"&gt;r/ClaudeCode&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A few years ago if you had told me that a single developer could casually start building something like a Discord-style internal communication tool on a random evening and have it mostly working a week later, I would have assumed you were either exaggerating or running on dangerous amounts of caffeine.&lt;/p&gt;

&lt;p&gt;Now it's just Monday.&lt;/p&gt;

&lt;p&gt;Since AI coding tools became common I've started noticing a particular pattern in how some of us work. People talk about "vibe coding", but that doesn't quite capture what I'm seeing. Vibe coding feels more relaxed and exploratory. What I'm talking about is more… intense.&lt;/p&gt;

&lt;p&gt;I've started calling it Slurm coding.&lt;/p&gt;

&lt;p&gt;If you remember Futurama, Slurms MacKenzie was the party worm powered by Slurm who just kept going forever. That's basically the energy of this style of development.&lt;/p&gt;

&lt;p&gt;Slurm coding happens when curiosity, AI coding tools, and a brain that likes building systems all line up. You start with a small idea. You ask an LLM to scaffold a few pieces. You wire things together. Suddenly the thing works. Then you notice the architecture could be cleaner so you refactor a bit. Then you realize adding another feature wouldn't be that hard.&lt;/p&gt;

&lt;p&gt;At that point the session escalates.&lt;/p&gt;

&lt;p&gt;You tell yourself you're just going to try one more thing. The feature works. Now the system feels like it deserves a better UI. While you're there you might as well make it cross platform. Before you know it you're deep into a React Native version of something that didn't exist a week ago.&lt;/p&gt;

&lt;p&gt;The interesting part is that these aren't broken weekend prototypes. AI has removed a lot of the mechanical work that used to slow projects down. Boilerplate, digging through documentation, wiring up basic architecture. A weekend that used to produce a rough demo can now produce something actually usable.&lt;/p&gt;

&lt;p&gt;That creates a very specific feedback loop.&lt;/p&gt;

&lt;p&gt;Idea. Build something quickly. It works. Dopamine. Bigger idea. Keep going.&lt;/p&gt;

&lt;p&gt;Once that loop starts it's very easy to slip into coding sessions where time basically disappears. You sit down after dinner and suddenly it's 3 in the morning and the project is three features bigger than when you started.&lt;/p&gt;

&lt;p&gt;The funny part is that the real bottleneck isn't technical anymore. It's energy and sleep. The tools made building faster, but they didn't change the human tendency to get obsessed with an interesting problem.&lt;/p&gt;

&lt;p&gt;So you get these bursts where a developer just goes full Slurms MacKenzie on a project.&lt;/p&gt;

&lt;p&gt;Party on. Keep coding.&lt;/p&gt;

&lt;p&gt;I'm curious if other people have noticed this pattern since AI coding tools became part of the workflow. It feels like a distinct mode of development that didn't really exist a few years ago.&lt;/p&gt;

&lt;p&gt;If you've ever sat down to try something small and resurfaced 12 hours later with an entire system running, you might be doing Slurm coding.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>mentalhealth</category>
      <category>burnout</category>
      <category>ai</category>
      <category>audhd</category>
    </item>
    <item>
      <title>Why I built codesize: enforcing function length limits with an AST</title>
      <dc:creator>Christian C. Berclaz</dc:creator>
      <pubDate>Sat, 14 Mar 2026 19:53:00 +0000</pubDate>
      <link>https://dev.to/chrisgve/why-i-built-codesize-enforcing-function-length-limits-with-an-ast-4fe2</link>
      <guid>https://dev.to/chrisgve/why-i-built-codesize-enforcing-function-length-limits-with-an-ast-4fe2</guid>
      <description>&lt;p&gt;Every team I have worked on had some version of the rule: "keep functions short." It shows up in style guides, code review comments, and onboarding docs. It almost never shows up in CI.&lt;/p&gt;

&lt;p&gt;When it does get automated, the tooling usually reaches for &lt;code&gt;wc -l&lt;/code&gt; on the whole file. That is a rough proxy at best. A 300-line file might contain five short, readable functions and a big block of comments. A 150-line file might contain one function that does the work of three. File length and function length are different problems.&lt;/p&gt;

&lt;p&gt;With the advent of AI, these constraints have become even harder to enforce.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://github.com/ChrisGVE/codesize" rel="noopener noreferrer"&gt;codesize&lt;/a&gt; to check the thing that actually matters: how long each function is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why existing tools fall short
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;cloc&lt;/code&gt; and similar tools count lines of code, but at the file level. Linters can flag long functions, but they are language-specific — you need a different one for every language in a polyglot repo, each with its own config format and output schema.&lt;/p&gt;

&lt;p&gt;What I wanted was a single binary that could scan a mixed-language project and produce a uniform report: which files exceed their limit, which functions exceed their limit, sorted by how far over the line they are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using tree-sitter to find function boundaries
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;codesize&lt;/code&gt; uses &lt;a href="https://tree-sitter.github.io/tree-sitter/" rel="noopener noreferrer"&gt;tree-sitter&lt;/a&gt; to parse each source file and walk the AST to find function boundaries. Instead of counting lines in a text buffer, it counts the lines that belong to each function from signature to closing brace.&lt;/p&gt;

&lt;p&gt;This handles a number of cases that trip up simpler approaches: blank lines inside a function body, comments interleaved with code, string literals that happen to contain braces, and nested functions. Arrow functions in JavaScript and TypeScript are counted as functions. Constructors in Java are counted. Nested functions in Rust and Python are each counted independently.&lt;/p&gt;
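&lt;p&gt;The counting rule can be sketched in a few lines. What follows is an illustrative analogue using Python's standard-library &lt;code&gt;ast&lt;/code&gt; module, not &lt;code&gt;codesize&lt;/code&gt;'s actual tree-sitter implementation; the principle is the same: every line from a function's signature to its last statement counts, blanks and comments included, and nested functions are counted independently.&lt;/p&gt;

```python
# Illustrative sketch only: codesize uses tree-sitter, not Python's ast,
# but the per-function counting principle is the same.
import ast

SRC = '''
def short():
    return 1

def long_one():
    total = 0

    # comments and blank lines inside the body still count
    for i in range(10):
        total += i
    return total
'''

def function_lengths(source):
    """Return {name: line_count} for every def, including nested ones."""
    tree = ast.parse(source)
    lengths = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno spans signature through the last statement,
            # so interleaved blanks and comments are included
            lengths[node.name] = node.end_lineno - node.lineno + 1
    return lengths

print(function_lengths(SRC))
```

&lt;p&gt;Here &lt;code&gt;short&lt;/code&gt; counts as 2 lines and &lt;code&gt;long_one&lt;/code&gt; as 7, even though its body contains a blank line and a comment.&lt;/p&gt;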

&lt;p&gt;Ten languages have built-in grammars: Rust, TypeScript, JavaScript, Python, Go, Java, C, C++, Swift, and Lua. For any other language you can add an extension mapping in config and still get file-level enforcement — you just won't get per-function analysis until a grammar is available.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the output looks like
&lt;/h2&gt;

&lt;p&gt;Results go to a CSV file (or stdout with &lt;code&gt;--stdout&lt;/code&gt;). Six columns:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;language&lt;/th&gt;
&lt;th&gt;exception&lt;/th&gt;
&lt;th&gt;function&lt;/th&gt;
&lt;th&gt;codefile&lt;/th&gt;
&lt;th&gt;lines&lt;/th&gt;
&lt;th&gt;limit&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Rust&lt;/td&gt;
&lt;td&gt;function&lt;/td&gt;
&lt;td&gt;build_report&lt;/td&gt;
&lt;td&gt;src/scanner.rs&lt;/td&gt;
&lt;td&gt;95&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;td&gt;file&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;src/legacy/monolith.py&lt;/td&gt;
&lt;td&gt;450&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;exception&lt;/code&gt; column is either &lt;code&gt;file&lt;/code&gt; (the whole file is over the limit) or &lt;code&gt;function&lt;/code&gt; (a specific function is). For file-level violations, &lt;code&gt;function&lt;/code&gt; is empty. Rows are sorted by language, then by line count descending, so the worst offenders are at the top.&lt;/p&gt;

&lt;p&gt;This format was a deliberate choice. CSV goes everywhere: spreadsheets, GitHub issue imports, Jira, Linear, a shell pipeline. The intent is not to fail a build on day one — it is to generate a list of violations you can work through over time, treating function length as technical debt to be retired gradually rather than a gate that blocks you immediately.&lt;/p&gt;
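&lt;p&gt;As a quick illustration of the pipeline idea, here is a hypothetical &lt;code&gt;awk&lt;/code&gt; filter over a report with the six columns shown above (a header row is assumed), listing each Rust function violation with how far over the limit it is:&lt;/p&gt;

```shell
# Hypothetical example report with the six columns shown above
# (header row assumed; the real report comes from running codesize).
printf '%s\n' \
  'language,exception,function,codefile,lines,limit' \
  'Rust,function,build_report,src/scanner.rs,95,80' \
  'Python,file,,src/legacy/monolith.py,450,300' \
  > report.csv

# Print each Rust function violation and its overage in lines
awk -F, 'NR == 1 { next }
$1 == "Rust" {
    if ($2 == "function") {
        over = $5 - $6
        print $3 " in " $4 ": " over " lines over"
    }
}' report.csv
```

&lt;p&gt;For the sample row above this prints &lt;code&gt;build_report in src/scanner.rs: 15 lines over&lt;/code&gt;.&lt;/p&gt;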

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;Limits are per-language and fully configurable in a TOML file at &lt;code&gt;~/.config/codesize/config.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[limits.Rust]&lt;/span&gt;
&lt;span class="py"&gt;file&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;
&lt;span class="py"&gt;function&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;

&lt;span class="nn"&gt;[limits.Python]&lt;/span&gt;
&lt;span class="py"&gt;function&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;   &lt;span class="c"&gt;# leave file limit at the default 300&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also add languages that have no built-in grammar:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[languages]&lt;/span&gt;
&lt;span class="py"&gt;".rb"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Ruby"&lt;/span&gt;

&lt;span class="nn"&gt;[limits.Ruby]&lt;/span&gt;
&lt;span class="py"&gt;file&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
&lt;span class="py"&gt;function&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you are onboarding an existing codebase with a lot of violations, the &lt;code&gt;--tolerance&lt;/code&gt; flag lets you start with headroom and tighten the limits over time. With a function limit of 80 and a tolerance of 20, for example, only functions longer than 96 lines are reported:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Report only functions more than 20% over the limit&lt;/span&gt;
codesize &lt;span class="nt"&gt;--tolerance&lt;/span&gt; 20 &lt;span class="nt"&gt;--gitignore&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the backlog is clear, drop the tolerance and the limits become exact.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI integration
&lt;/h2&gt;

&lt;p&gt;For GitHub Actions, there is a companion action that installs and runs &lt;code&gt;codesize&lt;/code&gt; with no setup beyond a checkout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/codesize.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Code size check&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;codesize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ChrisGVE/codesize-action@v1.0.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--fail&lt;/code&gt; flag makes &lt;code&gt;codesize&lt;/code&gt; exit with status 1 when violations are found, which is what you want for a blocking CI check. Without it, the tool always exits 0 and just writes the report — useful for the gradual rollout approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;Install via Homebrew (macOS and Linux):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;ChrisGVE/tap/codesize
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or from crates.io:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo &lt;span class="nb"&gt;install &lt;/span&gt;codesize
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Shell completions for zsh, bash, and fish are included. When installed via Homebrew they are set up automatically; otherwise run &lt;code&gt;codesize init &amp;lt;shell&amp;gt;&lt;/code&gt; to generate them.&lt;/p&gt;

&lt;p&gt;Source, issues, and the full CLI reference are at &lt;a href="https://github.com/ChrisGVE/codesize" rel="noopener noreferrer"&gt;github.com/ChrisGVE/codesize&lt;/a&gt;. The companion GitHub Action is at &lt;a href="https://github.com/ChrisGVE/codesize-action" rel="noopener noreferrer"&gt;ChrisGVE/codesize-action&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>codequality</category>
      <category>showdev</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
