I used AI to rewrite my entire Go engine in Rust in 2 days. 60+ files, zero shortcuts.

A few weeks ago I had a working analysis engine written in Go, compiled to WASM. It worked. It was 60+ files, thousands of lines, battle-tested. And I decided to throw it all away and rewrite it in Rust.

I did it in 2 days, with AI doing most of the heavy lifting.

This is the honest account of how that went: what worked, what didn't, and whether the result was actually worth it.


The setup

smplogs is a tool that analyzes AWS CloudWatch log exports entirely in your browser. The analysis engine is compiled to WASM and runs client-side. Logs never touch my server.

The Go engine had been running in production for months. It handled Lambda, API Gateway, and ECS parsing. It had anomaly detection, log clustering, T-Digest streaming percentiles, cold start detection, and a full parity test suite. It wasn't small.

55 Go source files. 17,000+ lines. All of it needed to produce byte-for-byte identical output to pass the parity tests.


Why bother?

Honest reasons, not post-hoc rationalization:

Speed. No GC, no goroutine scheduler, and a zero-copy scanner meant the Rust engine ended up 3x faster on the same files.

The binary size. The Go WASM binary was 4.2 MB. The Rust binary came in at 470 KB, almost 9x smaller. For something that loads on every page visit, that's a meaningful difference in cold-load time.

GC pauses. Go has a garbage collector. In a browser tab, GC pauses show up as UI jank when analyzing large files. Rust has no GC, which makes latency predictable from start to finish.

Memory. The Go engine's WASM target uses the standard Go runtime which brings in a full garbage collector and goroutine scheduler - overhead you pay even for a single-threaded WASM module. The Rust build is lean: only what you explicitly use ends up in the binary.

Those were the real reasons. I could also say "I wanted to learn Rust better," which is true, but let's be real, that's not why you rewrite production code.


The approach: one phase at a time

I didn't try to rewrite everything at once. I broke it into 8 phases and did one per session with Claude:

Phase 1 — Foundation: types, parser, basic lib.rs scaffolding
Phase 2 — Core accumulation: service detection, metrics
Phase 3 — Findings, risk scoring, anomaly, clustering, correlation
Phase 4 — API Gateway + ECS parsers, finalize dispatch, parity tests
Phase 5 — Session validation, T-Digest streaming, smplogs_analyze
Phase 6 — Frontend dual-engine loader (Rust primary, Go fallback)
Phase 7 — Zero-copy chunk scanner, enhanced WASM bindings
Phase 8 — Final migration: drop Go entirely, Rust-only CI

Each phase had one clear deliverable: build passes, parity tests pass against Go's golden output, move on.

The key thing I told Claude at the start of each session: "Mirror the Go implementation exactly. Same logic, same thresholds, same output format. Don't improve it. Don't optimize it. Just port it." That instruction saved a lot of pain. Every time AI tried to "improve" a threshold or restructure a calculation, the parity tests caught it.


What AI was good for

Mechanical translation. Go struct → Rust struct, Go method → Rust function. The Go code was clean and well-commented, which made this straightforward. Claude would take a 300-line Go file and produce a correct Rust port in one shot. Not perfect, but close enough that a short review + test run caught the rest.
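To show what "mechanical translation" means here, a sketch with invented field names (not from the actual engine): a Go struct maps field-for-field onto a Rust struct, with Go's exported CamelCase fields becoming Rust's snake_case.

```rust
// Illustrative only: a hypothetical Go struct like
//
//   type Invocation struct {
//       RequestID  string
//       DurationMs float64
//       ColdStart  bool
//   }
//
// ports mechanically to:

#[derive(Debug, Clone, PartialEq)]
pub struct Invocation {
    pub request_id: String,
    pub duration_ms: f64,
    pub cold_start: bool,
}
```

This kind of one-to-one mapping is exactly where AI rarely goes wrong, which is why the per-file ports were mostly correct in one shot.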

Lifetime annotations. I would have spent days figuring out the right lifetime bounds for the zero-copy scanner. The signature looks like this:

pub fn scan_chunk<'a, F>(src: &'a [u8], mut on_event: F) -> Result<&'a str, ()>
where
    F: FnMut(i64, &'a str),

The 'a on the closure parameter (meaning "the &str slices you receive are borrowed from src and live as long as src") is exactly the kind of thing that takes Rust beginners a while to reason through. Claude got it right on the first try, and more importantly, explained why it was correct.
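A reduced analog of the same lifetime pattern (scan_lines here is illustrative, not the real scanner): because the callback's parameter is tied to 'a, the slices it receives outlive the scan call itself, so the caller can collect them.

```rust
// Minimal sketch of the lifetime pattern: the closure gets &'a str
// slices borrowed from src, not copies.
fn scan_lines<'a, F>(src: &'a str, mut on_line: F)
where
    F: FnMut(usize, &'a str),
{
    for (i, line) in src.lines().enumerate() {
        on_line(i, line);
    }
}

// The 'a bound is what makes this legal: the collected slices remain
// valid after scan_lines returns, because they borrow from src.
fn collect_lines(src: &str) -> Vec<&str> {
    let mut out = Vec::new();
    scan_lines(src, |_, line| out.push(line));
    out
}
```

If the bound were `F: FnMut(usize, &str)` (a fresh lifetime per call), the compiler would reject storing the slices, which is precisely the subtlety the signature encodes.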

Boilerplate. wasm-bindgen exports, #[cfg(target_arch = "wasm32")] conditional compilation, Cargo.toml feature flags - all stuff that's tedious and error-prone to write by hand. AI handled all of it.
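The shape of that boilerplate, sketched with hypothetical names (not the actual smplogs exports): the cfg gate keeps the binding layer out of native builds, while the core logic compiles for every target.

```rust
// Illustrative pattern only. On wasm32 this module exposes the function
// to JS via wasm-bindgen; on native targets the whole module vanishes.
#[cfg(target_arch = "wasm32")]
mod wasm {
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    pub fn analyze_chunk(input: &str) -> String {
        super::analyze(input)
    }
}

// Shared core logic, compiled for every target and testable natively.
pub fn analyze(input: &str) -> String {
    format!("{} bytes analyzed", input.len())
}
```

Keeping the bindgen layer this thin means parity tests can run against the native build, with the WASM boundary exercised separately.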

T-Digest port. The Go T-Digest implementation was non-trivial. Porting it to Rust and maintaining the exact centroid compression algorithm, the quantile interpolation, the seed_from_sorted transition path would have taken me a full day alone. With AI it took about an hour, including fixing a subtle off-by-one in the compress_static function.
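For readers unfamiliar with T-Digests, a hedged sketch of the centroid-compression idea (this is the general technique, not the smplogs port): adjacent centroids are merged while the combined weight stays under a bound, keeping the digest small without losing the quantile shape.

```rust
// Simplified centroid compression: merge neighbors while the merged
// weight stays under max_weight. Real T-Digests use a scale function
// that allows more weight near the median than at the tails.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Centroid {
    mean: f64,
    weight: f64,
}

fn compress(mut cs: Vec<Centroid>, max_weight: f64) -> Vec<Centroid> {
    cs.sort_by(|a, b| a.mean.partial_cmp(&b.mean).unwrap());
    let mut out: Vec<Centroid> = Vec::new();
    for c in cs {
        match out.last_mut() {
            Some(last) if last.weight + c.weight <= max_weight => {
                // Weighted-average merge keeps the combined mean exact.
                let w = last.weight + c.weight;
                last.mean = (last.mean * last.weight + c.mean * c.weight) / w;
                last.weight = w;
            }
            _ => out.push(c),
        }
    }
    out
}
```

The off-by-one class of bug mentioned above lives in exactly this kind of loop: boundary conditions on when a centroid joins the previous one versus starting a new one.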


Where it got tricky

Parity failures. This was the main source of friction. The Go engine had accumulated edge case handling over months of production use. Things like: timestamps sometimes come as quoted strings in CloudWatch exports. The Go code handled it silently. The Rust port didn't, until I added the same handling. Parity tests caught every single one of these, which is why writing them first was the right call.
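The timestamp case looks roughly like this (an illustrative version, not the engine's actual parser): accept the value whether or not CloudWatch wrapped it in quotes.

```rust
// Lenient timestamp parsing: CloudWatch exports sometimes carry the
// epoch-millis timestamp as a bare number, sometimes as a quoted string.
fn parse_timestamp(raw: &str) -> Option<i64> {
    let trimmed = raw.trim();
    // Strip one layer of surrounding quotes if both are present.
    let unquoted = trimmed
        .strip_prefix('"')
        .and_then(|s| s.strip_suffix('"'))
        .unwrap_or(trimmed);
    unquoted.parse::<i64>().ok()
}
```

Each such edge case is invisible until a real export hits it, which is why the golden output from the battle-tested Go engine was the only reliable spec.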

The unsafe blocks. WASM is single-threaded but Rust's borrow checker doesn't know that. Static mutable state (INCR_STATE, INPUT_BUF, DOMAIN_CANARY) requires unsafe. Claude was correct about when unsafe was needed, but I made sure to review every single one. I wasn't going to ship unsafe I didn't understand.
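The pattern in question looks something like this (the INCR_STATE name is from the engine; the body here is illustrative): a module-level static mut that only unsafe code can touch, sound in practice only because the WASM target is single-threaded.

```rust
// Single-threaded WASM state. Rust can't verify the no-concurrency
// invariant, so every access needs an unsafe block and a justification.
static mut INCR_STATE: u64 = 0;

fn bump_state(delta: u64) -> u64 {
    // SAFETY: the wasm32 target runs this module on a single thread,
    // so there is no concurrent access to INCR_STATE.
    unsafe {
        INCR_STATE += delta;
        INCR_STATE
    }
}
```

The SAFETY comment is the part worth reviewing by hand: it states the invariant the compiler can't check, and it's the thing that breaks first if threading assumptions ever change.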

wasm-bindgen error messages. When you get a type wrong at the JS<->Rust boundary, the error is sometimes just "unreachable executed" in WASM: no stack trace, no type info. This isn't AI's fault, it's just the toolchain, but it meant debugging sessions that AI couldn't help with because there was no error message to analyze.

The Windows build. wasm-pack works, but getting wasm-bindgen-cli at exactly the right version to match wasm-bindgen in Cargo.toml on Windows took an embarrassing amount of time. I ended up with a build.ps1 that handles the version dance explicitly.


The zero-copy scanner (the one thing I pushed for)

Once the basic port was done and parity tests passed, I asked Claude to go further on one specific thing: memory.

The original Go engine used encoding/json to parse CloudWatch exports. That allocates a string per log event, which is fine in Go but painful in WASM, where every allocation lives forever (WASM linear memory never shrinks). With 500K events in a file, you're doing 500K allocations for strings you inspect for a millisecond.

The Rust port initially did the same with serde_json. I asked Claude to write a custom scanner that walks the raw bytes and returns borrowed slices instead:

pub fn scan_chunk<'a, F>(src: &'a [u8], mut on_event: F) -> Result<&'a str, ()>
where
    F: FnMut(i64, &'a str),
{
    // walk src as bytes, locate "logEvents" array
    // for each object: parse timestamp, return &str slice for message
    // no String allocated — caller gets a pointer into src
}

The insight that made it work: Lambda control lines (START, END, REPORT) are plain ASCII. You can check starts_with(b"REPORT") on raw bytes without decoding JSON escape sequences. The only time we need to decode is when storing the first log line of an invocation for display - which happens once per invocation, not once per event.
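That check is a one-liner on raw bytes, sketched here (the function name is illustrative):

```rust
// Lambda control lines are plain ASCII, so a byte-level prefix test
// classifies them without decoding any JSON escape sequences.
fn is_control_line(msg: &[u8]) -> bool {
    msg.starts_with(b"START") || msg.starts_with(b"END") || msg.starts_with(b"REPORT")
}
```

Because the test never touches escape sequences, it works directly on the borrowed slices the scanner hands out, keeping the hot path allocation-free.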

before: 1GB file -> ~2GB peak memory (string-per-event)
after:  1GB file -> ~50MB peak memory (borrowed slices)

This was a legitimate collaboration: I knew what I wanted, I could describe the constraint (escape sequences, ASCII prefixes), and Claude wrote the implementation. I reviewed it, understood it, and it's now in production.


The result in numbers

Final migration commit stats:

  • 68 files changed
  • 17,952 lines deleted (Go engine, Go tests, Go bridge, wasm_exec.js)
  • ~180 lines added (mostly CI and build script changes, the Rust was already done in prior commits)
  • Binary: 4.2 MB → 470 KB
  • 3x faster analysis
  • All parity tests passing

The 8 phases took roughly 2 days of focused work. Not 2 days of AI running autonomously but rather 2 days of me directing, reviewing, running tests, and debugging the gaps. The AI probably saved me 2–3 weeks of solo work.


What I actually learned

AI is good at translation, not design. The Go engine had years of design decisions baked in: thresholds, error handling, data structures. Those decisions came from production experience. AI can port them, but it can't create them from scratch. The port worked because the design already existed.

Parity tests are the whole game. Without a golden-output test suite to compare against, I would have had no confidence the Rust engine was correct. Every time a parity test failed, it pointed directly at the divergence. Write parity tests before you start any migration.
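The core of such a test is simple; a minimal shape (function name hypothetical, not from the repo): run the new engine on a fixture and compare byte-for-byte against output captured from the Go engine, reporting the first divergence.

```rust
// Golden-output comparison: exact match passes; otherwise point at the
// first diverging line so the failure localizes the porting gap.
fn check_parity(new_output: &str, golden: &str) -> Result<(), String> {
    if new_output == golden {
        return Ok(());
    }
    for (i, (got, want)) in new_output.lines().zip(golden.lines()).enumerate() {
        if got != want {
            return Err(format!("line {}: got {:?}, want {:?}", i + 1, got, want));
        }
    }
    Err("outputs differ in length".into())
}
```

Pointing at the first diverging line is what made each failure actionable: the error message named the exact calculation or threshold the port got wrong.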

"Don't improve it" is the right instruction. Every unsolicited improvement AI made introduced a parity failure. The instruction "port this exactly, don't optimize" was the single most useful constraint I set.

Review everything with unsafe. AI got the unsafe blocks right, but I still read every one. That's not paranoia, it's the deal you make when you use unsafe in production code.


Is it worth it?

For me: yes. The engine is 3x faster, the binary is almost 9x smaller, GC pauses are gone, and I now have a Rust codebase I can extend in ways the Go WASM target wouldn't have allowed (zero-copy scanner, T-Digest streaming transition, reusable input buffers).

Would I do it without AI? Probably not on that timeline. The mechanical translation work alone, porting 55 files of Go to idiomatic Rust, would have been a month-long project, not a 2-day one.

The thing I'd tell anyone considering a similar migration: AI does the typing, you do the thinking. You need to understand the source code well enough to know when the port is wrong. If you're treating it as "AI will handle it," you'll end up with a codebase you can't debug.


If you want to see the result, smplogs.com runs the Rust engine. The free plan gets you 3 analyses/day: upload a CloudWatch export and see what it finds. The browser extension also lets you analyze directly from the AWS Console without exporting anything.

Happy to answer questions about the migration in the comments - especially if you've done something similar with a large Go or Python codebase.
