<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Serge Levin</title>
    <description>The latest articles on DEV Community by Serge Levin (@srgylvn).</description>
    <link>https://dev.to/srgylvn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3871170%2F7ea2f4bd-2161-4930-be94-0fc846452622.png</url>
      <title>DEV Community: Serge Levin</title>
      <link>https://dev.to/srgylvn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/srgylvn"/>
    <language>en</language>
    <item>
      <title>Classifying 33K Reddit posts on a laptop: anchor first, exclude nothing</title>
      <dc:creator>Serge Levin</dc:creator>
      <pubDate>Mon, 27 Apr 2026 09:29:06 +0000</pubDate>
      <link>https://dev.to/srgylvn/notes-from-building-a-reddit-signal-classifier-on-a-laptop-f4e</link>
      <guid>https://dev.to/srgylvn/notes-from-building-a-reddit-signal-classifier-on-a-laptop-f4e</guid>
      <description>&lt;h2&gt;
  
  
  Anchor-first prompts: the fix for noisy local-LLM classifiers
&lt;/h2&gt;

&lt;p&gt;If you're using a local LLM as a YES/NO classifier and seeing too many false positives, don't fix it by enumerating off-domain categories to exclude. Flip the prompt to require an explicit named anchor from the target domain - answer YES only when a specific in-domain term appears, otherwise NO. The same prompt shape transfers across model sizes; I've applied it to phi3.5, qwen2.5:7b, and phi4:14b.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;I'm building an agent that watches Reddit (and eventually HN, Lobsters, Mastodon) for posts and comments matching specific signals. Not for lead gen - more for community intelligence. One signal I'm working on: someone comparing or migrating between S3-compatible object storage providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Reddit RSS/JSON  →  keyword pre-filter  →  Bayes classifier  →  LLM (Ollama)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most posts get rejected at the pre-filter. The Bayes classifier handles the bulk of the obvious YES/NO calls. On the first run, the LLM labels everything and those labels seed the Bayes weights; on later runs the LLM only sees the ambiguous cases.&lt;/p&gt;
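
&lt;p&gt;In sketch form, the routing looks like this (a minimal sketch with assumed names and thresholds - &lt;code&gt;keyword_prefilter&lt;/code&gt;, &lt;code&gt;bayes&lt;/code&gt;, &lt;code&gt;ask_llm&lt;/code&gt;, and the 0.9/0.1 cutoffs are illustrations, not the project's actual API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Cheapest check first, LLM last. All names here are assumptions.
def classify(post_text, keyword_prefilter, bayes, ask_llm):
    if not keyword_prefilter(post_text):
        return "NO"                    # rejected before any model runs
    p_yes = bayes.predict_proba(post_text)
    if p_yes &gt;= 0.9:
        return "YES"                   # high-confidence, skip the LLM
    if p_yes &lt;= 0.1:
        return "NO"
    return ask_llm(post_text)          # only ambiguous cases hit Ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;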

&lt;h2&gt;
  
  
  Naive approach
&lt;/h2&gt;

&lt;p&gt;I started with phi3.5:latest (~2 GB on disk, snappy in Ollama). Almost immediately, two false-positive classes appeared with hundreds of posts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kubernetes infra posts. Threads like "comparing Kafka deployment options on K8s" got flagged YES even though there's no object-storage angle.&lt;/li&gt;
&lt;li&gt;Microsoft Fabric / Copilot / data warehouse posts. "Snowflake vs Fabric for our team".&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The smaller model was pattern-matching on the shape of "user comparing options" and dropping the domain anchor entirely.&lt;/p&gt;

&lt;p&gt;My first attempt at a fix was to add exclusions to the prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Answer YES if the post is about object storage migration.
Answer NO if the post is about Kubernetes.
Answer NO if the post is about data warehouses.
Answer NO if the post is about AI tooling.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The negative categories started acting as their own relevance signal. The model treated "is this Kubernetes-y?" as a classification axis - meaning a Kafka-on-K8s post was now being classified along the wrong dimension entirely. The result was even more noise. It's hard to describe the world by what it isn't.&lt;/p&gt;

&lt;p&gt;Next, instead of &lt;em&gt;NO if {growing list of off-domain things}&lt;/em&gt;, structure the prompt as: &lt;em&gt;YES only if {short positive list of in-domain anchors} AND {intent clause}. Otherwise, NO. No exclusions.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I also switched the model to qwen2.5:7b, since a prompt like this turned out to be too heavy for phi3.5. Here's what it looks like for my signal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Answer YES only if the text explicitly names:
  - S3, or an S3-compatible provider (AWS S3, MinIO, Ceph, Garage,
    SeaweedFS, Backblaze B2, Cloudflare R2, Wasabi, Storj),
  - or a tool for moving data between them (rclone, s5cmd, mc mirror,
    AWS DataSync, Cyberduck, boto3, aws cli),
AND the author is comparing options or planning to migrate.
Otherwise answer NO.
Do not infer. If no such name appears, answer NO.
Output a single token: YES or NO.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No "ignore Kubernetes." Nothing about what NOT to match. The prompt only describes what counts. As a result, false positives dropped significantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;VRAM&lt;/th&gt;
&lt;th&gt;Verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;phi3.5:latest&lt;/td&gt;
&lt;td&gt;~2 GB&lt;/td&gt;
&lt;td&gt;Too small to hold the domain anchor reliably even with the positive gate. Dropped.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;qwen2.5:7b&lt;/td&gt;
&lt;td&gt;~5 GB&lt;/td&gt;
&lt;td&gt;Large step up. Honored the gate. Fast enough to experiment with the prompt.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;phi4:14b&lt;/td&gt;
&lt;td&gt;~9 GB&lt;/td&gt;
&lt;td&gt;Settled here for production. Needed one more prompt iteration.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Bayes classifier in front of the LLM
&lt;/h2&gt;

&lt;p&gt;Most posts get a high-confidence YES or NO from the Bayes classifier and never see the LLM. Using the LLM as the initial labeler already saves time compared to manual classification, and once the Bayesian filter's weights are good enough, only the ambiguous cases are passed on to the LLM.&lt;/p&gt;

&lt;p&gt;I pushed roughly 33,000 Reddit records through the full pipeline in under an hour on a laptop GPU. LLM-only would have taken so long that manual filtering would have been faster. If a post doesn't name an in-domain thing AND express comparison/migration intent, it's rejected before the LLM is even invoked - this saves tokens and mirrors the prompt's logic.&lt;/p&gt;
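
&lt;p&gt;A minimal sketch of that gate (term lists abbreviated from the prompt above; the regex approach is an illustration, not the project's exact matcher):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import re

# Abbreviated from the anchor and tool lists in the prompt; illustrative only.
ANCHORS = re.compile(
    r"\b(s3|minio|ceph|garage|seaweedfs|backblaze b2|cloudflare r2"
    r"|wasabi|storj|rclone|s5cmd)\b", re.I)
INTENT = re.compile(r"\b(migrat\w*|switch\w*|compar\w*|vs|moving to)\b", re.I)

def keyword_prefilter(text):
    """Mirror the prompt's logic: in-domain anchor AND intent, else reject."""
    return bool(ANCHORS.search(text)) and bool(INTENT.search(text))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;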

&lt;p&gt;Every new signal I've written since follows this shape - anchor first, everything else after.&lt;/p&gt;

&lt;h2&gt;
  
  
  Seeding
&lt;/h2&gt;

&lt;p&gt;Running experiments on this needs a lot of seed data. I tried Google's and Bing's search APIs first - both have been shut down. I ended up with the Brave Search API; the free tier was enough to pull more than 30K seed posts and comments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;All of this is part of an open-source project I'm building - AGPL, runs on a laptop. Some restrictions baked in on purpose: no auto-posting and no DM outreach.&lt;/p&gt;

&lt;p&gt;The current approach with the Bayes filter is lean: it does incremental retraining, not drift handling. Every 50 LLM labels, the system pulls the entire historical dataset from the DB and recalculates the weights from scratch. That's the whole adaptation mechanism for now.&lt;/p&gt;
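
&lt;p&gt;In sketch form (the table name and classifier interface are assumptions, not the project's actual schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;RETRAIN_EVERY = 50  # full recompute cadence, per the current design

def maybe_retrain(db, bayes, labels_since_retrain):
    """Every 50 new LLM labels, refit on the whole history. No decay."""
    if labels_since_retrain &lt; RETRAIN_EVERY:
        return labels_since_retrain
    rows = db.execute("SELECT text, label FROM labeled_posts").fetchall()
    bayes.fit([text for text, _ in rows], [label for _, label in rows])
    return 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;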

&lt;p&gt;What's missing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A label from six months ago counts the same as yesterday's.&lt;/li&gt;
&lt;li&gt;No distribution-shift detection.&lt;/li&gt;
&lt;li&gt;No automatic relabeling of old data when prompts change. I added a &lt;code&gt;--reset&lt;/code&gt; command to trigger a full retrain; I'm not sure that's enough in the long run.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's coming, among other things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Recency-weighted Bayes retraining. Time-decay on label weights so vocabulary drift is followed automatically, plus optional pruning of stale examples that disagree with current labels (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;Cross-source aggregation. Same signal hitting three different subreddits is stronger than three hits in one sub. Cross-channel multiplier on the score.&lt;/li&gt;
&lt;li&gt;HN, Lobsters, Mastodon as additional sources. The same pain showing up across communities with different cultures, wording, and blind spots is an even stronger signal.&lt;/li&gt;
&lt;/ol&gt;
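
&lt;p&gt;For the first item, a minimal sketch of time-decayed label weights (exponential decay with an assumed 90-day half-life; a proposal sketch, not shipped code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time

HALF_LIFE_DAYS = 90  # assumed; would be tuned per signal

def label_weight(labeled_at, now=None):
    """A label HALF_LIFE_DAYS old counts half as much as a fresh one."""
    now = time.time() if now is None else now
    age_days = (now - labeled_at) / 86400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

# In the Bayes recount, add label_weight(ts) instead of 1 to the
# per-token class counts, so stale labels fade without a hard cutoff.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;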




&lt;p&gt;&lt;em&gt;This post was drafted by the author and refined with AI for clarity and structure.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>llm</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Migrate Multiple S3 Buckets in Parallel</title>
      <dc:creator>Serge Levin</dc:creator>
      <pubDate>Fri, 10 Apr 2026 10:11:40 +0000</pubDate>
      <link>https://dev.to/godwitio/migrate-multiple-s3-buckets-in-parallel-4hj4</link>
      <guid>https://dev.to/godwitio/migrate-multiple-s3-buckets-in-parallel-4hj4</guid>
      <description>&lt;p&gt;Once you know how to migrate &lt;a href="https://godwit.io/blog/s3-migration-guide" rel="noopener noreferrer"&gt;one S3 bucket&lt;/a&gt;, the next problem is migrating 5 or 20 at once.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Hands-on lab available:&lt;/strong&gt; Run a 4-bucket parallel migration against two RustFS S3 endpoints in Docker, with Prometheus and Grafana pre-configured. &lt;a href="https://github.com/godwitio/godwit-labs/tree/main/multi-bucket-migration-lab" rel="noopener noreferrer"&gt;Go to the lab&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Each Bucket Pair Deserves Its Own Migration
&lt;/h2&gt;

&lt;p&gt;A multi-bucket migration needs to handle bucket pairs that differ in credentials, rate limits, object count, and object size. One bucket might have millions of small files, another a few hundred multi-gigabyte objects, a third might sit behind a strict RPS limit, and a fourth might require cross-provider transfer. A bash loop over &lt;code&gt;aws s3 sync&lt;/code&gt; or a set of &lt;code&gt;rclone&lt;/code&gt; instances can move the bytes, but both re-list source and destination on every run instead of resuming from a local checkpoint. Neither tool lets you describe a complete transfer job -- source, destination, rate limits, parallelism, state path, metrics port -- in a single config file. You end up encoding that in wrapper scripts.&lt;/p&gt;

&lt;p&gt;The right primitive is one run equals one bucket pair. Each run has its own state, its own resume point, its own metrics endpoint. To migrate multiple S3 buckets, you orchestrate N independent runs - not one run that tries to be N things. Orchestration belongs outside the sync tool, in a script you own.&lt;/p&gt;

&lt;h2&gt;
  
  
  One S3 Migration Config per Bucket Pair
&lt;/h2&gt;

&lt;p&gt;Godwit Sync supports YAML config files via the &lt;code&gt;-f&lt;/code&gt; flag, including &lt;code&gt;${ENV_VAR}&lt;/code&gt; expansion for credentials. Create one config per bucket pair and drop them in a folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;migrations/
  app-data.yml
  ml-models.yml
  logs-archive.yml
  user-uploads.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each file is a standard Godwit config. Here is &lt;code&gt;app-data.yml&lt;/code&gt; from the &lt;a href="https://github.com/godwitio/godwit-labs/tree/main/multi-bucket-migration-lab" rel="noopener noreferrer"&gt;companion lab&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3://prod-app-data"&lt;/span&gt;
  &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3.us-east-1.amazonaws.com"&lt;/span&gt;
  &lt;span class="na"&gt;access_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${AWS_SOURCE_ACCESS_KEY}"&lt;/span&gt;
  &lt;span class="na"&gt;secret_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${AWS_SOURCE_SECRET_KEY}"&lt;/span&gt;

&lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3://backup-app-data"&lt;/span&gt;
  &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3.us-east-1.amazonaws.com"&lt;/span&gt;
  &lt;span class="na"&gt;access_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${AWS_DEST_ACCESS_KEY}"&lt;/span&gt;
  &lt;span class="na"&gt;secret_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${AWS_DEST_SECRET_KEY}"&lt;/span&gt;

&lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;parallel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16&lt;/span&gt;

&lt;span class="na"&gt;rate_limit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;read_bps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;209715200&lt;/span&gt; &lt;span class="c1"&gt;# 200 MB/s&lt;/span&gt;

&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;run_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app-data"&lt;/span&gt;
  &lt;span class="na"&gt;state_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./state/app-data.db"&lt;/span&gt;

&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;addr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:9100"&lt;/span&gt;
  &lt;span class="na"&gt;drain_timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15&lt;/span&gt;

&lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;brief&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each file is self-contained: credentials, endpoints, parallelism, rate limits, state path, and metrics port. A cross-provider pair (say, AWS to an on-prem S3-compatible store) would use a different endpoint, &lt;code&gt;secure: false&lt;/code&gt;, and lower &lt;code&gt;read_bps&lt;/code&gt; -- but the structure is identical. Godwit expands &lt;code&gt;${...}&lt;/code&gt; variables at load time, so the files themselves contain no secrets. The full config schema is documented in the &lt;a href="https://godwit.io/docs/config" rel="noopener noreferrer"&gt;configuration reference&lt;/a&gt;.&lt;/p&gt;
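
&lt;p&gt;As an illustration, a hypothetical on-prem destination block might look like this (the endpoint, key names, and rate value are invented for the example; the configuration reference is authoritative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;destination:
  url: "s3://backup-app-data"
  endpoint: "minio.internal.example.com:9000"
  secure: false                      # plain HTTP inside the datacenter
  access_key: "${ONPREM_ACCESS_KEY}"
  secret_key: "${ONPREM_SECRET_KEY}"

rate_limit:
  read_bps: 52428800                 # 50 MB/s, a lower cross-provider budget
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;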

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpsfzqj3udn21fxj3p9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpsfzqj3udn21fxj3p9y.png" alt="The orchestrator enumerates config files in a folder and fans out to N independent godwit sync processes, each with its own state database and Prometheus metrics endpoint." width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Parallel S3 Migration Script
&lt;/h2&gt;

&lt;p&gt;The orchestrator enumerates every &lt;code&gt;.yml&lt;/code&gt; file in the migrations folder, spawns one &lt;code&gt;godwit sync -f &amp;lt;file&amp;gt;&lt;/code&gt; per pair in parallel, and collects exit codes. No custom YAML parsing, no env var resolution -- Godwit handles all of that natively.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is a simplified version of the script. The full version with retry, verify, and status commands is in the &lt;a href="https://github.com/godwitio/godwit-labs/tree/main/multi-bucket-migration-lab" rel="noopener noreferrer"&gt;companion lab&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;#!/usr/bin/env python3
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;migrate-many.py -- fan out godwit sync across config files in a folder.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;concurrent.futures&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ProcessPoolExecutor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;as_completed&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pathlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Path&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_pair&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;cmd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;godwit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sync&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_path&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;plan&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--plan-only&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;retry&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--resume&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;config_path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stem&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;returncode&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;folder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;migrations&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;run&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;configs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sorted&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;folder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*.yml&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;configs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;No .yml files found in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;folder&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;ProcessPoolExecutor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_workers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;configs&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;futures&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;run_pair&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stem&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;configs&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;failed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;future&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;as_completed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;futures&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;future&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;result&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OK&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;rc&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FAIL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;] &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;failed&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;rc&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

    &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;failed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each pair runs as an independent &lt;code&gt;godwit sync&lt;/code&gt; process with its own state database, run ID, and metrics port -- all defined in its config file. The script prints which pairs succeeded and which failed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling Failures Without All-or-Nothing
&lt;/h2&gt;

&lt;p&gt;Each &lt;code&gt;godwit sync&lt;/code&gt; process maintains its own SQLite state database tracking every object: planned, transferred, verified, or failed. A failure in one process does not touch another process's state.&lt;/p&gt;

&lt;p&gt;To retry a failed pair, re-run the script with the &lt;code&gt;retry&lt;/code&gt; action. The script passes &lt;code&gt;--resume&lt;/code&gt;, which tells Godwit to skip planning and pick up from the last successful object. Resume works at the object level, not the bucket level. A fresh &lt;code&gt;run&lt;/code&gt; plans and transfers from scratch; &lt;code&gt;retry&lt;/code&gt; resumes where the previous attempt stopped.&lt;/p&gt;
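
&lt;p&gt;With the simplified script above, the two invocations look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 migrate-many.py migrations          # fresh run: plan + transfer
python3 migrate-many.py migrations retry    # passes --resume to every pair
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;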

&lt;p&gt;For details on state tracking and checksum verification, see &lt;a href="https://godwit.io/blog/verifying-s3-migrations" rel="noopener noreferrer"&gt;Verifying S3 Migrations&lt;/a&gt;. The &lt;a href="https://github.com/godwitio/godwit-labs/tree/main/multi-bucket-migration-lab" rel="noopener noreferrer"&gt;companion lab&lt;/a&gt; extends this script with retry, verify, and status commands and walks through a failure-and-retry scenario step by step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Observability: One Dashboard, Four Concurrent Runs
&lt;/h2&gt;

&lt;p&gt;Each &lt;code&gt;godwit sync&lt;/code&gt; process exposes Prometheus metrics on the port defined in its config file (&lt;code&gt;status.addr&lt;/code&gt;). The four configs use ports 9100, 9101, 9102, 9103. Prometheus scrapes all four targets. A single Grafana dashboard shows every run side by side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipdr2o9p1istb69b9xm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipdr2o9p1istb69b9xm9.png" alt="Grafana dashboard showing throughput, progress, errors, and ETA for four concurrent migration pairs." width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This dashboard is from the &lt;a href="https://github.com/godwitio/godwit-labs/tree/main/multi-bucket-migration-lab" rel="noopener noreferrer"&gt;companion lab&lt;/a&gt;, which ships it pre-provisioned so you can see it live during the walkthrough.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Godwit exposes a wide set of Prometheus metrics. The lab dashboard uses eight of them to keep things simple:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;godwit_run_transfer_bytes_total&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Counter&lt;/td&gt;
&lt;td&gt;Cumulative bytes; &lt;code&gt;rate()&lt;/code&gt; gives per-pair throughput&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;godwit_run_objects_completed&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Objects finished per pair&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;godwit_run_objects_total&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Planned objects per pair&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;godwit_run_objects_failed&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Failed objects per pair&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;godwit_run_bytes_transferred&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Bytes moved per pair&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;godwit_run_stage&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Current phase: planning, transferring, verifying, completed, failed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;godwit_objects_total{status}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Counter&lt;/td&gt;
&lt;td&gt;Global objects by status; used for error rate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;godwit_eta_seconds&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Estimated time to completion per pair&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These metrics drive five dashboard panels:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Throughput by Pair (time series).&lt;/strong&gt; Four lines showing &lt;code&gt;rate(godwit_run_transfer_bytes_total[15s])&lt;/code&gt; grouped by &lt;code&gt;pair&lt;/code&gt;. The lines diverge because each pair has different bandwidth limits. The &lt;code&gt;app-data&lt;/code&gt; pair sustains higher throughput than &lt;code&gt;logs-archive&lt;/code&gt; -- expected, given the different &lt;code&gt;read_bps&lt;/code&gt; limits in their config files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Migration Progress (bar gauge).&lt;/strong&gt; Each bar represents &lt;code&gt;godwit_run_objects_completed / godwit_run_objects_total&lt;/code&gt; for one &lt;code&gt;pair&lt;/code&gt;. At a glance you see that &lt;code&gt;ml-models&lt;/code&gt; is at 95% while &lt;code&gt;logs-archive&lt;/code&gt; is at 12% -- expected, given the object counts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Pair Status (table).&lt;/strong&gt; Columns: Pair, Stage (from &lt;code&gt;godwit_run_stage&lt;/code&gt;), Bytes Transferred (&lt;code&gt;godwit_run_bytes_transferred&lt;/code&gt;), and Failed Objects (&lt;code&gt;godwit_run_objects_failed&lt;/code&gt;). This is the single-pane summary an operator needs during the migration window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Error Rate by Pair (time series).&lt;/strong&gt; &lt;code&gt;rate(godwit_objects_total{status="failed"}[1m])&lt;/code&gt; grouped by &lt;code&gt;pair&lt;/code&gt;. When &lt;code&gt;logs-archive&lt;/code&gt; hits NoSuchBucket errors, this panel spikes for that pair while the other three stay flat. Immediate signal, no log parsing required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Estimated Time Remaining (stat panel).&lt;/strong&gt; &lt;code&gt;godwit_eta_seconds&lt;/code&gt; per &lt;code&gt;pair&lt;/code&gt;, displayed as human-readable remaining time. Operators see that &lt;code&gt;app-data&lt;/code&gt; finishes in 8 minutes while &lt;code&gt;logs-archive&lt;/code&gt; needs another 2 hours.&lt;/p&gt;

&lt;p&gt;The Prometheus label scheme is simple: each scrape target corresponds to one port, and Prometheus relabeling adds a &lt;code&gt;pair&lt;/code&gt; label from the target's port. No custom service discovery is needed.&lt;/p&gt;
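
&lt;p&gt;A sketch of what that scrape config could look like (one relabel rule per port; the lab's actual Prometheus config may differ in details):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;scrape_configs:
  - job_name: "godwit"
    static_configs:
      - targets: ["localhost:9100", "localhost:9101",
                  "localhost:9102", "localhost:9103"]
    relabel_configs:
      # Map each metrics port to its pair name.
      - source_labels: [__address__]
        regex: ".*:9100"
        target_label: pair
        replacement: "app-data"
      - source_labels: [__address__]
        regex: ".*:9101"
        target_label: pair
        replacement: "ml-models"
      - source_labels: [__address__]
        regex: ".*:9102"
        target_label: pair
        replacement: "logs-archive"
      - source_labels: [__address__]
        regex: ".*:9103"
        target_label: pair
        replacement: "user-uploads"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;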

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Parallel Migration Uses One Process per Bucket Pair
&lt;/h3&gt;

&lt;p&gt;Create one config file per source/destination bucket pair, then use an orchestrator script to spawn one &lt;code&gt;godwit sync -f &amp;lt;file&amp;gt;&lt;/code&gt; process per pair. Each process runs independently with its own state database, resume point, and Prometheus metrics endpoint. The pattern works for any number of pairs.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Failed Pair Does Not Affect Other Pairs
&lt;/h3&gt;

&lt;p&gt;Only that pair stops. Every other pair continues unaffected because each runs as a separate process with its own SQLite state. Re-run the failed pair with &lt;code&gt;--resume&lt;/code&gt; to pick up from the last successfully transferred object, not from the beginning.&lt;/p&gt;

&lt;h3&gt;
  
  
  One Grafana Dashboard Monitors All Pairs
&lt;/h3&gt;

&lt;p&gt;Each Godwit process exposes Prometheus metrics on a separate port. Point Prometheus at all ports and build a single Grafana dashboard that groups panels by &lt;code&gt;run_id&lt;/code&gt;. The companion lab ships a pre-built dashboard that shows throughput, progress, errors, and ETA for all pairs side by side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It: Multi-Bucket S3 Migration Lab
&lt;/h2&gt;

&lt;p&gt;The companion lab runs the entire workflow on your laptop in 15 minutes: two RustFS S3 endpoints (source with four pre-populated buckets, target empty), Prometheus, Grafana with a pre-provisioned dashboard, the config files, and the orchestrator script. You run all four pairs in parallel, watch per-pair progress in Grafana, see one pair fail due to an injected permission error, retry it, and verify all four.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/godwitio/godwit-labs/tree/main/multi-bucket-migration-lab" rel="noopener noreferrer"&gt;Go to the Multi-Bucket Migration Lab&lt;/a&gt;&lt;/p&gt;

</description>
      <category>s3</category>
      <category>devops</category>
      <category>prometheus</category>
      <category>abotwrotethis</category>
    </item>
  </channel>
</rss>
