Dan Guisinger

Building FluentDynamoDB in 45 Days with Kiro - A Hackathon Story

This is the story of how a massive .NET DynamoDB library went from an empty repo to production-ready in 45 days.

When I started building FluentDynamoDB, I expected it to take months or even a year to reach a usable state. It’s an extremely large .NET library: source generators, expression translation, composite keys, geospatial indexing, encryption, stream handling, and thousands of tests across nine packages.

Instead… it took 45 days.

The only reason that was possible is Kiro.

This post covers what I built, how Kiro enabled it, and why the project literally could not exist without an AI-augmented workflow.

Project Overview: FluentDynamoDB

FluentDynamoDB is a modern, strongly-typed DynamoDB toolkit for .NET. It includes:

  • Source-generated entity models
  • Lambda-expression to DynamoDB request translation
  • Composite key modeling
  • Encryption + S3-backed blob storage
  • A complete stream-processing framework
  • Geospatial lookup via GeoHash, S2, and H3
  • Thousands of automated tests
  • Nine NuGet packages that work together as a unified ecosystem

This is not a “weekend hackathon” project.
It is a production-grade library, built fast, because the workflow was different.

The Kiro Workflow: Requirements → Design → Tasks

Kiro’s “Spec” system is the most important part of how this project got built.

requirements.md → What the system must do

Every major feature begins with EARS-style requirement statements defining:

  • expected behaviors
  • constraints
  • edge cases
  • business rules and functional intent
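
EARS (“Easy Approach to Requirements Syntax”) statements follow fixed sentence templates. As a purely illustrative example (mine, not copied from the project’s specs), an event-driven requirement for a hypothetical composite-key feature might read:

```
WHEN a query supplies only the partition key of a composite key,
THE query builder SHALL match every item whose sort key begins with
the serialized prefix of that composite key.
```

The value of the template is that each statement names a trigger, a responsible component, and a testable response, which is exactly what a task list can later reference.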

design.md → How it should be built

This is where architectural decisions took shape:

  • source generation boundaries
  • lambda-expression normalization rules
  • composite key representation
  • consistency models
  • geospatial data structures
  • AOT safety considerations

tasks.md → What Kiro should implement

Each task references:

  • the requirement it fulfills
  • the design decision that governs how it should be implemented

Kiro then generates the implementation while staying aligned to the spec.
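
A task entry in that shape might look like the following (a hypothetical example, not taken from the actual repo; the requirement and section names are invented for illustration):

```markdown
- [ ] 4.2 Implement composite sort-key serialization
  - Fulfills: Requirement 3.1 (composite key modeling)
  - Governed by: design.md, "Composite key representation"
```

Because every task points back at a requirement and a design decision, the generated implementation has an explicit paper trail instead of drifting from the spec.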

For a solo developer, that scale is nearly impossible without something like Kiro.

The Hardest Part: Geospatial Encoding with Multiple LLMs

I had zero geospatial background going in. I knew GeoHash existed, but that was the extent of it.

Kiro made it possible to build a complete geospatial feature set — but not without major challenges.

Phase 1: Auto Model + GeoHash

Kiro’s default “Auto” model implemented GeoHash reasonably well. But during development, Kiro suggested:

“You may also want to consider Google S2 or Uber H3 for geospatial precision.”

Great idea... in theory.
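
For context on what Phase 1 involved, the core of GeoHash encoding is small enough to sketch. This is a minimal Python illustration of the standard algorithm (the library itself is C#, and this is not its actual code): bits alternate between longitude and latitude, each bit halves the remaining interval, and every 5 bits become one base-32 character.

```python
# Standard GeoHash base-32 alphabet (note: no "a", "i", "l", "o").
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat: float, lon: float, precision: int = 9) -> str:
    """Encode a lat/lon pair into a GeoHash string of `precision` characters."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    chars, even, bit_count, ch = [], True, 0, 0
    while len(chars) < precision:
        rng = lon_range if even else lat_range
        val = lon if even else lat
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:                 # take the upper half -> bit 1
            ch = (ch << 1) | 1
            rng[0] = mid
        else:                          # take the lower half -> bit 0
            ch <<= 1
            rng[1] = mid
        even = not even                # alternate lon/lat bits
        bit_count += 1
        if bit_count == 5:             # 5 bits -> one base-32 character
            chars.append(BASE32[ch])
            bit_count, ch = 0, 0
    return "".join(chars)
```

Longer hashes share the prefix of shorter ones for the same point, which is what makes GeoHash a natural fit for DynamoDB `begins_with` sort-key queries.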

Phase 2: First Attempts at S2 and H3 (Failure)

I let Kiro implement S2 and H3, and both implementations failed badly. It produced:

  • inconsistent cell conversions
  • mismatched bit shifts
  • incorrect spherical coordinate handling
  • decoding errors on specific edge-case coordinates

Auto-mode models simply got lost in the complexity.

Phase 3: Switching to Claude Opus 4.5 + the Clear-Thought MCP

When Anthropic released Claude Opus 4.5, I retried the work with a different approach:

1. Break the problem into steps

Using the clear-thought MCP plugin, I forced structured workflows:

  • normalization rules
  • intermediate coordinate transforms
  • step-by-step verification
  • invariants for S2 and H3 cell relationships

2. Opus’s reasoning was noticeably stronger

It could explain concepts coherently:

  • Hilbert curves
  • space-filling patterns
  • face/cell transformations
  • spherical geometry relationships
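
To make the space-filling idea concrete, here is the textbook 2-D Hilbert mapping in Python, which is the concept S2 builds on when it orders cells along each cube face. This is my own sketch of the classic algorithm, not S2’s actual 64-bit cell ID code:

```python
def hilbert_xy_to_d(n: int, x: int, y: int) -> int:
    """Map (x, y) on an n-by-n grid (n a power of two) to its Hilbert-curve index."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)   # which quadrant, in curve order
        # Rotate/reflect so the recursion always sees a canonical orientation.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

The property that matters for geospatial indexing is that consecutive indices are always adjacent cells, so a contiguous index range covers a spatially compact region.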

3. But we still had edge cases

A handful of stubborn coordinates would not encode/decode correctly.

Phase 4: Deep Debugging Using Reference C Code

This is where Kiro became absolutely essential.

I loaded the original Google S2 and Uber H3 C reference code into the Kiro workspace and had Opus:

  • read them side-by-side
  • trace execution paths
  • compare intermediate values
  • identify mismatched conversions
  • reason through floating-point differences
  • reconcile the two implementations

After several cycles, it found the issues: subtle mathematical transformation details I could not have discovered on my own.
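
The verification loop behind those cycles can be approximated by a simple round-trip harness. This is a hypothetical Python sketch of the technique, not the project’s actual tooling: sample coordinates, push each through an encode/decode pair, and collect every coordinate that fails to survive the round trip within a tolerance.

```python
import random

def roundtrip_failures(encode, decode, samples=1000, tol_deg=1e-6, seed=42):
    """Return every sampled (lat, lon) whose encode->decode round trip drifts past tol_deg."""
    rng = random.Random(seed)          # fixed seed -> reproducible failures
    failures = []
    for _ in range(samples):
        lat = rng.uniform(-90.0, 90.0)
        lon = rng.uniform(-180.0, 180.0)
        cell = encode(lat, lon)
        lat2, lon2 = decode(cell)
        if abs(lat - lat2) > tol_deg or abs(lon - lon2) > tol_deg:
            failures.append(((lat, lon), (lat2, lon2)))
    return failures
```

The same harness can run a ported implementation against the C reference instead of against itself, which is essentially what the side-by-side tracing in Kiro accomplished, one intermediate value at a time.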

Result

S2 + H3 now both work in FluentDynamoDB.

And I will be very blunt:

I do not have the mathematical or domain background to implement S2 or H3 manually. This feature simply would not exist in FluentDynamoDB without Kiro.

It also consumed over 1,000 Kiro credits on this set of Specs alone, but it was worth every one.

Steering Documents: Quietly Powerful

This project didn’t need as many steering rules as full-stack work, but one rule was essential:

“After any public API change, update the corresponding documentation and append a note to the documentation changelog.”

Why?

Because I have a separate Kiro workspace that:

  • reads that changelog
  • updates the web documentation
  • keeps everything in sync

This is still an experiment, but so far it has been extremely successful.
It ensures documentation never lags behind the code.
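
As a concrete illustration, a changelog entry emitted under that steering rule might look like this (a hypothetical example with an invented date and API, not a real entry from the project):

```markdown
## 2025-11-28
- Public API change: query builder gained an optional read-consistency option.
- Docs updated: "Querying" page revised to cover the new option.
```

The second workspace only has to read entries in this shape to know which web-documentation pages need regenerating.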

Project Scale (Powered by Kiro)

Here’s what was produced during Kiroween:

  • 160,000+ lines of C#
  • 100,000+ lines of documentation, specs, and steering documents
  • ~4,000 automated tests
  • 9 NuGet packages
  • 5 example applications
  • 40+ Specs
  • Multiple LLMs per feature
  • Over 10,000 Kiro credits consumed

This is the output of a multi-team engineering project, delivered by one person using AI-augmented development, for a total cost of around $400.

Kiroween Hackathon Video

GitHub

What I Learned During Kiroween

1. AI-assisted engineering isn’t the future, it’s the present.

This wasn’t theory. This was shipping real software.

2. Specialized, expertise-heavy features become accessible to developers who lack the domain background.

The S2/H3 experience is proof.

3. Specs > prompts.

Kiro’s structured workflow is genuinely transformative.

4. Good engineering + good AI > either alone.

What’s Next

FluentDynamoDB’s first stable NuGet release is coming soon.
We'll be continuing to:

  • add runtime DynamoDB schema validation
  • improve our FluentResults coverage over the full API surface
  • build more sample applications
  • keep scaling the spec-driven development workflow

If you’re participating in Kiroween, I’d love to compare notes.

— Dan
