<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bakunin</title>
    <description>The latest articles on DEV Community by Bakunin (@bakunin).</description>
    <link>https://dev.to/bakunin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3840592%2F88516453-dae6-4389-bb09-a87574b8163c.png</url>
      <title>DEV Community: Bakunin</title>
      <link>https://dev.to/bakunin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bakunin"/>
    <language>en</language>
    <item>
      <title>This Python script creates technical docs in one command</title>
      <dc:creator>Bakunin</dc:creator>
      <pubDate>Sun, 29 Mar 2026 16:55:59 +0000</pubDate>
      <link>https://dev.to/bakunin/i-got-tired-of-updating-docs-by-hand-so-i-built-a-one-file-python-script-44am</link>
      <guid>https://dev.to/bakunin/i-got-tired-of-updating-docs-by-hand-so-i-built-a-one-file-python-script-44am</guid>
      <description>&lt;p&gt;Keeping project docs up to date is one of those tasks that always sounds reasonable until you actually have to do it.&lt;/p&gt;

&lt;p&gt;I kept ending up in the same loop: change the code, forget to update the docs, remember later, then spend way too much time rebuilding the project context for myself or for an AI chat.&lt;/p&gt;

&lt;p&gt;At some point, I got tired of it and built a tiny open-source script for my own workflow.&lt;/p&gt;

&lt;p&gt;The idea was simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;one Python file, zero dependencies, one command, two output files&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It generates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;CodebaseDump.md&lt;/code&gt; — a full markdown snapshot of the codebase that is easy to paste into AI chats&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ReadmeDev.md&lt;/code&gt; — a technical developer doc generated from the repository&lt;/li&gt;
&lt;/ul&gt;
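The dump half of the idea fits in a few lines. This is only an illustrative sketch, not the actual SumAI implementation; the function name, the skip list, and the `.py`-only filter are assumptions for the example.

```python
from pathlib import Path

# Directories a dump like this would normally skip (illustrative list).
SKIP_DIRS = {".git", "__pycache__", ".venv", "node_modules"}

def dump_codebase(root, out="CodebaseDump.md"):
    """Concatenate a repo's Python files into one markdown snapshot."""
    root = Path(root)
    parts = []
    for path in sorted(root.rglob("*.py")):
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        rel = path.relative_to(root)
        # One section per file: a heading with the relative path, then the source.
        parts.append(f"## {rel}\n\n{path.read_text(encoding='utf-8')}\n")
    text = "\n".join(parts)
    Path(out).write_text(text, encoding="utf-8")
    return text
```

A single file like this is trivial to paste into a chat window, which is the whole point of the first output.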

&lt;h2&gt;Why this design?&lt;/h2&gt;

&lt;p&gt;Because I did &lt;strong&gt;not&lt;/strong&gt; want to build a huge agent system just to solve a boring problem.&lt;/p&gt;

&lt;p&gt;I wanted something that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;works with plain Python&lt;/li&gt;
&lt;li&gt;is easy to drop into any project&lt;/li&gt;
&lt;li&gt;does not need extra setup&lt;/li&gt;
&lt;li&gt;stays compact and hackable&lt;/li&gt;
&lt;li&gt;can still work on free or rate-limited models&lt;/li&gt;
&lt;li&gt;makes sense even when your provider has tight RPM / RPS limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I kept the AI layer simple too.&lt;/p&gt;

&lt;p&gt;Instead of asking one model call to do everything at once, the script uses two passes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a focused research pass that extracts grounded project context from the codebase dump&lt;/li&gt;
&lt;li&gt;a final aggregator pass that turns that into a usable technical doc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That makes the whole thing faster, cheaper, and more realistic to use with smaller models.&lt;/p&gt;

&lt;p&gt;Honestly, that part mattered a lot to me.&lt;/p&gt;

&lt;p&gt;A lot of AI tooling still quietly assumes you have access to a big paid model with comfortable limits.&lt;/p&gt;

&lt;p&gt;I did not want to build around that assumption.&lt;/p&gt;

&lt;p&gt;I wanted something that still feels practical when you are using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;free-tier API keys&lt;/li&gt;
&lt;li&gt;smaller models&lt;/li&gt;
&lt;li&gt;slow provider backends&lt;/li&gt;
&lt;li&gt;narrow request limits&lt;/li&gt;
&lt;li&gt;side projects, internal tools, or research prototypes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The slowest part of tools like this is usually &lt;strong&gt;not Python&lt;/strong&gt;.&lt;br&gt;
It is waiting on model responses.&lt;/p&gt;

&lt;p&gt;So I tried to avoid unnecessary AI overhead and keep the workflow practical for people using free keys, small models, and providers that can take a few seconds even on lightweight requests.&lt;/p&gt;

&lt;p&gt;That also shaped the philosophy of the project a bit.&lt;/p&gt;

&lt;p&gt;It is &lt;strong&gt;not&lt;/strong&gt; trying to be the ultimate autonomous documentation agent.&lt;/p&gt;

&lt;p&gt;It is trying to be useful in the messy real world where API latency is annoying, limits are tight, and sometimes the best model for the job is just the one that is cheap, available, and good enough.&lt;/p&gt;

&lt;p&gt;It is just a useful little script that lets me run one command and get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a repo dump for AI chats&lt;/li&gt;
&lt;li&gt;a readable technical doc for the project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Good enough to save me from doing this manually over and over again.&lt;/p&gt;

&lt;p&gt;Could this become a much bigger agent system later? Sure.&lt;/p&gt;

&lt;p&gt;But I actually think there is value in starting with the boring version first:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;one file, one command, no dependency drama&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is the version people might genuinely use.&lt;/p&gt;

&lt;p&gt;The project is open source here: &lt;a href="https://github.com/Bakunin-dev/SumAI" rel="noopener noreferrer"&gt;Bakunin-dev/SumAI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this sounds useful for your workflow, feel free to explore it, fork it, or make it better.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Yes — this post was created by AI.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Praise the Omnissiah. ⚙️&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
