<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Summonair</title>
    <description>The latest articles on DEV Community by Summonair (@talbalash).</description>
    <link>https://dev.to/talbalash</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1036402%2Fcbe6c926-4f98-4ae7-916f-e529935500ed.jpg</url>
      <title>DEV Community: Summonair</title>
      <link>https://dev.to/talbalash</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/talbalash"/>
    <language>en</language>
    <item>
      <title>How I handle complex tasks with Claude Code</title>
      <dc:creator>Summonair</dc:creator>
      <pubDate>Mon, 09 Mar 2026 20:13:54 +0000</pubDate>
      <link>https://dev.to/talbalash/how-i-handle-complex-tasks-with-claude-code-1gp5</link>
      <guid>https://dev.to/talbalash/how-i-handle-complex-tasks-with-claude-code-1gp5</guid>
      <description>&lt;p&gt;For every big task that spans several repos, I like to set up a fresh, isolated folder for Claude with everything it needs. Searching for, fetching, and cloning the right repos every single time is repetitive and annoying. That's why I built &lt;code&gt;claude-clone&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;The Real Problem With Multi-Repo Work&lt;/h2&gt;

&lt;p&gt;When Claude Code opens a folder, it sees everything in it. Clone multiple repos into the same workspace and Claude understands how they all relate — your API and frontend, your shared lib and the services using it. The context quality jumps.&lt;/p&gt;

&lt;p&gt;But nobody talks about the setup cost. You have to manually find each repo, clone it, and assemble the workspace every single time.&lt;/p&gt;

&lt;h2&gt;One Command Instead&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude-clone create my-feature
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Choose your desired repos for the task:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrkh9iiddox6r83k8hqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrkh9iiddox6r83k8hqy.png" alt="Choose repos" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it. Behind the scenes, the command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetches all your GitHub repos (org or personal)&lt;/li&gt;
&lt;li&gt;Shows a searchable list, space to select&lt;/li&gt;
&lt;li&gt;Clones everything you picked in parallel&lt;/li&gt;
&lt;li&gt;Writes a &lt;code&gt;CLAUDE.md&lt;/code&gt; describing the workspace&lt;/li&gt;
&lt;li&gt;Launches Claude with full context across all repos&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No config files to edit. No paths to remember. No repetitive setup.&lt;/p&gt;

&lt;h2&gt;Save Repo Groups as Presets&lt;/h2&gt;

&lt;p&gt;Working on the same stack repeatedly? Save it once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude-clone preset save backend
claude-clone create my-feature &lt;span class="nt"&gt;--preset&lt;/span&gt; backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Works for Orgs and Personal Repos&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude-clone create my-feature &lt;span class="nt"&gt;--org&lt;/span&gt; acme   &lt;span class="c"&gt;# org repos&lt;/span&gt;
claude-clone create my-feature              &lt;span class="c"&gt;# your own repos&lt;/span&gt;
claude-clone open my-feature               &lt;span class="c"&gt;# reopen later&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Install&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; claude-clone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt; &lt;code&gt;gh&lt;/code&gt; CLI authenticated, Claude Code in PATH.&lt;/p&gt;
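&lt;p&gt;A quick way to verify both prerequisites before the first run (a sketch; adjust to your shell):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Confirm the GitHub CLI is installed and authenticated
gh auth status

# Confirm the Claude Code binary is on your PATH
command -v claude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;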




&lt;p&gt;&lt;em&gt;Open source: &lt;a href="https://github.com/Summonair/claude-clone" rel="noopener noreferrer"&gt;github.com/Summonair/claude-clone&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>development</category>
      <category>programming</category>
    </item>
    <item>
      <title>Agent builds, RPG style</title>
      <dc:creator>Summonair</dc:creator>
      <pubDate>Sat, 24 Jan 2026 12:51:20 +0000</pubDate>
      <link>https://dev.to/talbalash/agent-build-rpg-style-1amb</link>
      <guid>https://dev.to/talbalash/agent-build-rpg-style-1amb</guid>
      <description>&lt;p&gt;I built &lt;strong&gt;World of Claudecraft&lt;/strong&gt; to replace &lt;strong&gt;agent configuration files&lt;/strong&gt; with something more visual (and honestly, more fun).&lt;/p&gt;

&lt;p&gt;Instead of editing configs, you equip your Claude agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Roles as helmets 🪖&lt;/li&gt;
&lt;li&gt;Behaviors as armor 🛡️&lt;/li&gt;
&lt;li&gt;Skills, constraints, formats, tools, context - each in its own slot&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You drag &amp;amp; drop items, mix builds, track token budgets 💰, save loadouts, and export straight to ~/.claude/agents.&lt;/p&gt;

&lt;p&gt;Each item is tokenized with GPT-Tokenizer and ranked by rarity ✨ according to its token cost.&lt;br&gt;
Inspired by my favorite RPG games and their loadout systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Summonair/world-of-claudecraft" rel="noopener noreferrer"&gt;https://github.com/Summonair/world-of-claudecraft&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Resolving Storage Space Issues on AWS RDS Postgres</title>
      <dc:creator>Summonair</dc:creator>
      <pubDate>Thu, 12 Sep 2024 15:59:52 +0000</pubDate>
      <link>https://dev.to/talbalash/resolving-storage-space-issues-on-aws-rds-postgres-19am</link>
      <guid>https://dev.to/talbalash/resolving-storage-space-issues-on-aws-rds-postgres-19am</guid>
      <description>&lt;p&gt;As a DevOps engineer, I consider maintaining optimal database performance crucial. Recently, we encountered a persistent issue with our Postgres RDS instance: free storage space was dropping rapidly, and the root cause wasn't obvious. This post outlines the investigation process, the routes we explored, and how we ultimately resolved the issue.&lt;/p&gt;

&lt;h3&gt;1. Temporary Files&lt;/h3&gt;

&lt;p&gt;One of the first things we checked was the use of temporary files. Large temporary files can consume significant storage, especially when &lt;code&gt;work_mem&lt;/code&gt; is set too low or when long-running queries spill sorts and hashes to disk. We ran the following query to check temporary file usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;datname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temp_files&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nv"&gt;"Temporary files"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temp_bytes&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nv"&gt;"Size of temporary files"&lt;/span&gt; 
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_stat_database&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, in our case, this was ruled out early because our configuration uses a storage-optimized image. Temporary files were not written to the DB's EBS storage but to the local storage instead, so they were not contributing to the space issue on the database storage.&lt;/p&gt;
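&lt;p&gt;Had temporary files been the culprit, a common mitigation is raising &lt;code&gt;work_mem&lt;/code&gt; for the offending sessions so sorts stay in memory. A minimal sketch (the value is illustrative, not a recommendation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Inspect the current per-operation sort/hash memory budget
SHOW work_mem;

-- Raise it for the current session only
SET work_mem = '64MB';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;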

&lt;h3&gt;2. Dead Tuples&lt;/h3&gt;

&lt;p&gt;Next, we considered dead tuples, which can accumulate over time and bloat the database size. Dead tuples occur when rows are marked for deletion or updated, but the space they occupy isn’t immediately reclaimed. This can lead to bloat, where disk space is consumed by data that’s no longer needed but remains invisible to transactions. We checked the tables with the most dead tuples using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;relname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n_dead_tup&lt;/span&gt; 
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_stat_all_tables&lt;/span&gt; 
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;n_dead_tup&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt; 
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although we found a few million dead tuples, running a vacuum didn't recover as much space as anticipated. This indicated that while dead tuples were present, they weren't the main factor behind the excessive storage consumption.&lt;/p&gt;
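&lt;p&gt;For reference, this is the kind of vacuum we ran (the table name is hypothetical). Note that plain &lt;code&gt;VACUUM&lt;/code&gt; only marks dead-tuple space as reusable inside the table's files; returning it to the operating system requires &lt;code&gt;VACUUM FULL&lt;/code&gt;, at the cost of an exclusive lock:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Reclaim dead-tuple space for reuse within the table (no exclusive lock)
VACUUM (VERBOSE, ANALYZE) my_bloated_table;

-- Rewrite the table and return space to the OS (takes an ACCESS EXCLUSIVE lock)
-- VACUUM FULL my_bloated_table;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;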

&lt;h3&gt;3. Orphaned Files&lt;/h3&gt;

&lt;p&gt;We also checked for orphaned files, which can occur if files remain in the database directory without corresponding objects pointing to them. This situation might happen if the instance runs out of storage or the engine crashes during operations like ALTER TABLE, VACUUM FULL, or CLUSTER. To check for orphaned files, we compared the database size occupied by files against the real size retrieved by summing the objects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Size of the database occupied by files&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;pg_size_pretty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pg_database_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'DATABASE_NAME'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="c1"&gt;-- Size of database retrieved by summing the objects (real size)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;pg_size_pretty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;SUM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pg_relation_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;oid&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt; 
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_class&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a significant difference exists between these sizes, orphaned files might be using up storage space. In our case, the two sizes matched, so orphaned files were not the cause.&lt;/p&gt;

&lt;h3&gt;4. Replication Slots&lt;/h3&gt;

&lt;p&gt;Replication slots are another common cause of storage bloat: a slot forces the server to retain WAL (write-ahead log) segments until its consumer has read them, so a lagging or inactive consumer makes WAL pile up.&lt;br&gt;
To check replication slots, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_replication_slots&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, upon checking, we confirmed that all slots were active and replication was keeping up, so WAL retention was ruled out.&lt;/p&gt;
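&lt;p&gt;A useful follow-up query (PostgreSQL 10+) measures how much WAL each slot is actually holding back; an inactive slot retaining a large amount of WAL would be the smoking gun:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- WAL retained on disk on behalf of each replication slot
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;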

&lt;h3&gt;5. Log Files&lt;/h3&gt;

&lt;p&gt;The breakthrough came when we investigated log files. We discovered that our log files were consuming an enormous amount of space:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log sizes over the past three days included 20GB, 18GB, 10GB, and 8GB files.&lt;/li&gt;
&lt;/ul&gt;
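&lt;p&gt;One way to get per-file sizes without leaving psql is &lt;code&gt;pg_ls_logdir()&lt;/code&gt; (PostgreSQL 10+; it is restricted by default, so it may require &lt;code&gt;pg_monitor&lt;/code&gt; membership or an explicit grant):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Largest files in the server's log directory
SELECT name, pg_size_pretty(size) AS size, modification
FROM pg_ls_logdir()
ORDER BY size DESC
LIMIT 10;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;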

&lt;p&gt;Our logging configuration was set to log queries that ran for a minimum of 1 second, including their bind parameters. This resulted in very large log entries, especially when complex queries with extensive parameters were executed frequently.&lt;/p&gt;

&lt;h4&gt;The Solution&lt;/h4&gt;

&lt;p&gt;To resolve the storage issue, we made the following adjustments:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increased Minimum Run Time for Logging (&lt;em&gt;log_min_duration_statement&lt;/em&gt;)&lt;/strong&gt;: We changed the minimum run time for logging queries from 1 second to 5 seconds. This reduced the number of logged queries significantly, focusing on longer-running, potentially more impactful queries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Adjusted Log Retention Policy (&lt;em&gt;rds.log_retention_period&lt;/em&gt;)&lt;/strong&gt;: We reduced the retention period of log files from the default 3 days to just 1 day. This change immediately helped in freeing up storage space by reducing the volume of retained log data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Also worth mentioning is the &lt;strong&gt;&lt;em&gt;log_parameter_max_length&lt;/em&gt;&lt;/strong&gt; parameter, which caps how many bytes of each logged bind parameter value are written to the log.&lt;/p&gt;

&lt;p&gt;These changes resulted in a significant reduction in the size of the log files, thereby freeing up the storage space and preventing further rapid consumption.&lt;/p&gt;
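&lt;p&gt;On RDS these settings live in the instance's DB parameter group rather than in postgresql.conf. A sketch with the AWS CLI (the parameter group name is hypothetical; note that &lt;code&gt;log_min_duration_statement&lt;/code&gt; is in milliseconds and &lt;code&gt;rds.log_retention_period&lt;/code&gt; in minutes, so 5 seconds and 1 day become 5000 and 1440):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws rds modify-db-parameter-group \
  --db-parameter-group-name my-postgres-params \
  --parameters "ParameterName=log_min_duration_statement,ParameterValue=5000,ApplyMethod=immediate" \
               "ParameterName=rds.log_retention_period,ParameterValue=1440,ApplyMethod=immediate"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;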

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;When faced with storage space issues in PostgreSQL RDS, it's crucial to systematically investigate all possible sources of storage consumption. In our case, the log files were the unexpected culprit, and a strategic adjustment to logging parameters and retention policies helped us reclaim valuable storage space.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
      <category>postgres</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
