Naveen V

Building an AI-Powered App Entirely in Go: From Simple Prompt to Smart Pipeline

The Challenge

I've shipped AI features in many stacks, but over a weekend, I wanted to answer one question: "Can I build a complete, production-quality AI app using only Go?"

Not just a proof of concept. A real application with:

  • Structured AI flows
  • Content moderation
  • Smart interpretation
  • Reactive UI
  • Type safety end-to-end

The result?

An AI Welcome Note Generator that evolved from a 10-line prompt to a multi-stage pipeline with safety filters and natural language understanding—all without leaving Go.

This article walks through how everything fits together — from the simplest flow to a smart, multi-stage LLM pipeline.

See It In Action

Welcome Note Generator Demo

Watch the application in action: from simple prompts to smart, moderated AI flows

The Stack

Backend:

  • Genkit — AI flow orchestration (the star of the show)
  • Gin — HTTP routing and middleware
  • Gemini 2.0 Flash — Fast, powerful LLM
  • Ollama — Local model support

Frontend (yes, in Go):

  • Templ — Type-safe HTML templates
  • Datastar — Reactive UI via Server-Sent Events (zero JavaScript!)
  • Tailwind CSS — Clean, responsive styling

Production:

  • Rate limiting (configurable, per-IP)
  • CSRF protection (with a clever workaround)
  • Docker deployment
  • Structured logging

Deployment:

  • Docker
  • Google Cloud Run

Architecture Overview

Before we dive into flows, here’s the full system at a glance:


          ┌──────────────────────────────┐
          │          Browser UI          │
          │      (Templ + Datastar)      │
          └──────────────┬───────────────┘
                         │
                HTTP Form / SSE Streams
                         │
              ┌──────────▼───────────┐
              │         Gin          │
              │   (Handlers & API)   │
              └──────────┬───────────┘
                         │
                  Call Genkit Flow
                         │
            ┌────────────▼────────────┐
            │         Genkit          │
            │  Flows / Prompts / AI   │
            └────────────┬────────────┘
                         │
            ┌────────────▼─────────────┐
            │      Model Provider      │
            │   (Gemini / Ollama)      │
            └──────────────────────────┘

Everything downstream of Gin is strongly typed, observable, traceable, and testable thanks to Genkit flows.

The Journey: Five Versions, Five Lessons

Version 1: Keep It Simple

I start every AI project with the bare minimum:

  • a flow that takes a string
  • generates text
  • returns it directly

genkit.DefineFlow(g, "welcomeV1",
    func(ctx context.Context, occasion string) (string, error) {

        system := "You write simple, warm welcome notes."

        resp, err := genkit.Generate(
            ctx, g,
            ai.WithSystem(system),
            ai.WithPrompt(
                fmt.Sprintf(`Generate a welcome note for "%s".`, occasion),
            ),
        )
        if err != nil {
            return "", err
        }

        return resp.Text(), nil
    })


Input: "birthday party"
Output: A friendly welcome message

This version teaches the foundation:

  • flows are typed Go functions
  • prompt → model → response is explicit
  • Genkit generates schemas, observability, and HTTP endpoints automatically

The UI for this version is a single input box and a result area — powered purely by Templ + Datastar, no JS.

Lesson: Start with string → string. Get the basics working before adding complexity.

V1 Flow Screenshot


Version 2: Add Structure

Real apps need more than a text box. Users want control. Let's move from raw text to structured fields:

  • occasion or context
  • language of choice
  • length of the generated note
  • tone or style of the generated note

type WelcomeInput struct {
    Occasion string `json:"occasion"` // Occasion or Context
    Language string `json:"language"` // English, Spanish, etc.
    Length   string `json:"length"`   // Short, Medium, Long
    Tone     string `json:"tone"`     // Formal, Casual, Friendly
}

The flow signature changes to:

func(ctx context.Context, in *WelcomeInput) (string, error)

This step lets users customize the note through clean dropdowns.
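Behind the dropdowns, the structured fields simply feed the prompt. A minimal sketch of that step (the exact prompt wording is illustrative, and `buildPrompt` is a name of my choosing, not from the project):

```go
package main

import "fmt"

// WelcomeInput mirrors the struct above.
type WelcomeInput struct {
	Occasion string `json:"occasion"`
	Language string `json:"language"`
	Length   string `json:"length"`
	Tone     string `json:"tone"`
}

// buildPrompt turns the structured fields into a single user prompt.
func buildPrompt(in WelcomeInput) string {
	return fmt.Sprintf(
		`Generate a %s, %s welcome note in %s for "%s".`,
		in.Length, in.Tone, in.Language, in.Occasion,
	)
}

func main() {
	p := buildPrompt(WelcomeInput{
		Occasion: "birthday party",
		Language: "English",
		Length:   "Short",
		Tone:     "Friendly",
	})
	fmt.Println(p)
	// → Generate a Short, Friendly welcome note in English for "birthday party".
}
```

Because the input is a typed struct, adding a new dropdown is just a new field plus one more format verb.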

Lesson: Structured input = better UX. Genkit handles validation automatically.

V2 Flow Screenshot


Version 3: Structured Output + Metadata

This is the moment Genkit's flows become truly powerful. Instead of parsing text, we tell the LLM to return typed JSON: the model returns typed metadata, not just a string, and Genkit parses the response into typed data and validates it against the schema for us.

V3 Output

type WelcomeOutput struct {
    Note     string            `json:"note"`
    Occasion string            `json:"occasion"`
    Language string            `json:"language"`
    Length   string            `json:"length"`
    Tone     string            `json:"tone"`
    Metadata map[string]string `json:"metadata"`
}

Flow V3 (structured input → structured output)

resp, err := genkit.GenerateData[WelcomeOutput](
    ctx, g,
    ai.WithSystem(systemPrompt),
    ai.WithPrompt(userPrompt),
)

Now we get:

  • The welcome note
  • Extracted occasion
  • Metadata (sentiment, safety score, comments)

I updated the UI to show metadata panels and the structured JSON output.

Lesson: GenerateData[T] gives you type-safe AI responses. The LLM becomes a structured API.

V3 Flow Screenshot


Safe Flow: Add Content Moderation

Production AI needs guardrails. I built a two-stage pipeline:

  1. Generate the welcome note
  2. Moderate it with a second LLM call; if flagged, sanitize and return both versions

Safety Sequence Diagram


      ┌────────┐      ┌───────────────┐      ┌───────────────┐
Input │  User  │ ---> │ Generate Note │ ---> │ Moderate Note │
      └────────┘      └───────────────┘      └───────┬───────┘
                                                     │
                                          Block?  Sanitize?
                                                     │
                                      ┌──────────────▼──────────────┐
                                      │   SafeWelcomeNoteOutput     │
                                      └─────────────────────────────┘

Moderation Prompt (small but effective)

You are a content safety filter. Remove or rewrite:

- toxicity
- insults
- hate speech
- threats
- explicit content
- sensitive details

Return JSON:
{
"note": "... sanitized note ...",
"blocked": false,
"moderationNote": "reason"
}

Safe Flow Logic

// Stage 1: Generate
note := generateWelcomeNote(input)

// Stage 2: Moderate
moderation := moderateContent(note)

if moderation.Blocked {
    // Return the sanitized note, but keep the original for transparency.
    return SafeOutput{
        Note:           moderation.SanitizedNote,
        OriginalNote:   note,
        Blocked:        true,
        ModerationNote: moderation.Reason,
    }
}

// Clean content passes through unchanged.
return SafeOutput{Note: note}

The UI gains an amber sanitization banner with a collapsible "View original (flagged)" detail.

Lesson: Don't trust raw LLM output. Use a second model to validate safety.

Content Moderation Screenshot


Smart Flow: Interpret Natural Language

The final evolution: let users describe what they want in plain English.

User input:

I need a short, friendly welcome note for my hotel guests arriving this weekend

What happens:

  1. Interpret the description → extract structured parameters
  2. Generate the note using those parameters
  3. Moderate the output for safety

Smart Flow Pipeline

Raw Description
      │
      ▼
Interpretation Flow (LLM → structured input)
      │
      ▼
V3 Generator (structured → JSON note)
      │
      ▼
Safe Flow (moderation + sanitization)
      │
      ▼
SmartWelcomeFlowOutput
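The three stages compose as plain function calls. A sketch with stubbed stages (in the real app each stub is an LLM-backed flow; the stub bodies and keyword matching here are mine, purely to show the shape of the chain):

```go
package main

import (
	"fmt"
	"strings"
)

type WelcomeNoteInput struct{ Occasion, Language, Length, Tone string }

// interpret: in the real app, an LLM extracts parameters; here a keyword stub.
func interpret(desc string) WelcomeNoteInput {
	in := WelcomeNoteInput{Language: "English", Length: "Medium", Tone: "Neutral"}
	if strings.Contains(desc, "short") {
		in.Length = "Short"
	}
	if strings.Contains(desc, "friendly") {
		in.Tone = "Friendly"
	}
	if strings.Contains(desc, "hotel") {
		in.Occasion = "hotel guest arrival"
	}
	return in
}

// generate: stands in for the V3 structured generator.
func generate(in WelcomeNoteInput) string {
	return fmt.Sprintf("[%s/%s/%s] Welcome note for %s",
		in.Length, in.Tone, in.Language, in.Occasion)
}

// moderate: stands in for the safe flow; nothing flagged in this stub.
func moderate(note string) (string, bool) { return note, false }

func main() {
	desc := "I need a short, friendly welcome note for my hotel guests arriving this weekend"
	note, blocked := moderate(generate(interpret(desc)))
	fmt.Println(blocked, note)
}
```

Swapping any stub for a real Genkit flow doesn't change the shape of the pipeline, which is what makes each stage independently testable.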

Smart Flow Output

This includes:

  • note (possibly sanitized)
  • structured fields
  • metadata
  • original note (if sanitized)
  • moderation reason
  • raw user description
  • parsed interpretation

type SmartFlowOutput struct {
    *SafeWelcomeNoteOutput           // everything from the safe flow output
    RawDescription string            // what the user typed
    ParsedInput    *WelcomeNoteInput // structured result of the interpretation flow
}

The UI gains extra sections such as "Here's how the AI interpreted your request".
This feels magical to the user.

Lesson: Chain flows together. Each step is clean, testable, and observable.

Smart Flow Screenshot


The Interesting Technical Bits

1. CSRF Challenges: Combining Gin and Gorilla

Gin is great for routing and middleware. Gorilla has battle-tested CSRF protection. But they don't play nicely out of the box.

Gorilla CSRF expects http.Handler, while Gin uses its own handler chain. I needed to combine the best of both.

The solution: Wrap the Gin router with CSRF middleware:

// In cmd/web/main.go
router := gin.New()
// use router as needed to define routes

// Wrap Gin with Gorilla CSRF
handler := csrf.Protect(
    cfg.CSRF.Key,
    csrf.SameSite(csrf.SameSiteStrictMode),
    // ... other options
)(router)

// Helper middleware to add CSRF token to Gin context
router.Use(func(c *gin.Context) {
    c.Set("csrf_token", csrf.Token(c.Request))
    c.Next()
})
// handler is of type http.Handler and ready to be used
mux := http.NewServeMux()
mux.Handle("/", handler)

2. Configurable Rate Limiting

I built per-IP rate limiting with zero rebuilds for config changes:

# Environment variables
RATE_LIMIT_REQUESTS_PER_MINUTE=30
RATE_LIMIT_BURST_SIZE=5

The middleware uses Go's golang.org/x/time/rate for token bucket limiting. Clean, efficient, and Docker-friendly.


3. Reactive UI Without JavaScript

Datastar + Templ = reactive UI via Server-Sent Events:

<form data-on-submit="@post('/api/smart/generate')">
  <textarea name="description"></textarea>
  <button>Generate</button>
</form>

<div data-signal-loading>Loading...</div>
<div data-signal-result>{result.note}</div>

The server streams updates, the browser reacts—all type-safe in Go.


Why This Architecture Works (and scales)

1. Go is the perfect LLM backend

  • Fast
  • Typed
  • Memory-efficient
  • Simple concurrency model
  • Deployable anywhere

2. Genkit flows > hand-rolled AI wrappers

  • Observability
  • Schemas
  • Generation tracing
  • Safe replayability
  • Validation
  • Type inference

3. Templ + Datastar feel like React without React

  • Full reactivity
  • No client bundles
  • No JS toolchain

4. Multi-stage flows model real AI workloads

  • Generation
  • Interpretation
  • Moderation
  • Structured output
  • Pipeline composition

This mirrors real systems: customer support bots, agents, RAG pipelines, content tools, internal automation.


Try It Yourself

The full code is open source. You can:

  • Run it locally with Ollama (no API keys needed)
  • Deploy with Docker (one command)
  • Swap Gemini for other models
  • Extend the flows with your own logic

Key files:

  • internal/flows/ — All 5 flow versions
  • web/middleware/ — Rate limiting, CSRF, logging
  • web/templates/ — Templ components
  • Dockerfile — Production-ready Alpine build

Final Thoughts

This project started as a small experiment:

“What if I built an AI product entirely in Go?”

It ended up demonstrating something bigger. Go's simplicity, Genkit's type safety, and a few well-chosen tools can give you:

  • Fast iteration
  • Clean architecture
  • Production-ready features
  • Type safety from frontend to LLM

Start with V1. Add structure when you need it. Layer in safety and smart features as you go.

The code is simple and the results are powerful.


Built with Go 1.25, Genkit 1.2, and a love for clean code.

🔗 GitHub: View the code
🚀 Live Demo: Try it live

Originally published on Medium: https://medium.com/p/0d9be75d3d00 — posting here for the dev.to community as well.
