# Building a Clean REST API in Go: From Spaghetti Code to Clean Architecture

I'll be honest—my first REST API in Go was a disaster. Everything lived in main.go: database queries, business logic, HTTP handlers, all tangled together in a 1,500-line monstrosity. Adding a feature meant scrolling through endless code, hoping I wouldn't break something unrelated.

Then I discovered Clean Architecture, and it changed everything. This is the story of how I rebuilt my inventory management API with proper separation of concerns, making it actually maintainable. No fluff, no theory-only content—just the real lessons from building BMG-Go-Backend.

The Problem: Why I Needed Better Architecture

Picture this: you need to add OAuth login to your API. In my original spaghetti code, I'd have to:

  1. Hunt through main.go for where users are created
  2. Hope the password hashing wasn't hardcoded somewhere random
  3. Add OAuth logic... where? Next to the database code? In a new file? Who knows!
  4. Cross my fingers that I hadn't broken the existing email/password login

Sound familiar? That pain drove me to redesign everything with layers.

The Solution: Clean Architecture in Go

Here's the key insight that clicked for me: separate what something does from how it does it.

Instead of:

// ❌ BAD: Everything mixed together
func CreateUser(w http.ResponseWriter, r *http.Request) {
    var user User
    json.NewDecoder(r.Body).Decode(&user)  // HTTP stuff

    if user.Email == "" {                  // Validation
        http.Error(w, "bad email", 400)
        return
    }

    db.Exec("INSERT INTO users...")        // Database stuff

    json.NewEncoder(w).Encode(user)        // More HTTP stuff
}

We do this:

// ✅ GOOD: Each layer does ONE thing
// Handler: HTTP concerns only
func (h *UserHandler) Create(w http.ResponseWriter, r *http.Request) {
    var dto CreateUserDTO
    parseJSON(r.Body, &dto)

    user, err := h.userService.Create(r.Context(), dto)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    writeJSON(w, 201, user)
}

// Service: Business logic only  
func (s *UserService) Create(ctx context.Context, dto CreateUserDTO) (*User, error) {
    if err := validateEmail(dto.Email); err != nil {
        return nil, err
    }
    return s.repo.Create(ctx, userFromDTO(dto))
}

// Repository: Database only
func (r *UserRepo) Create(ctx context.Context, user *User) (*User, error) {
    _, err := r.db.ExecContext(ctx, "INSERT INTO users...", user.Email, user.Name)
    return user, err
}

Now, when the PM asks for OAuth? I just add a new method in UserService. The handler and repository don't even know it happened. Beautiful.
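
To make that concrete, the new capability could look roughly like this. This is a sketch only: verifyOAuthToken and FindOrCreateByEmail are made-up names standing in for whatever a real provider integration needs.

// Sketch of "just add a method": verifyOAuthToken and FindOrCreateByEmail
// are illustrative names, not code that exists in the project yet.
func (s *UserService) LoginWithOAuth(ctx context.Context, provider, token string) (*User, error) {
    email, err := verifyOAuthToken(ctx, provider, token) // ask the provider to confirm the token
    if err != nil {
        return nil, fmt.Errorf("verify oauth token: %w", err)
    }
    return s.repo.FindOrCreateByEmail(ctx, email) // same repository, same handler wiring as before
}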

The Architecture: A Tour Through the Layers

Let me walk you through how requests flow through BMG. I'll use a real example: creating an inventory item.

Layer 1: Handler (The HTTP Bouncer)

Job: Convert HTTP requests into something the business logic can understand.

func (app *application) createItemHandler(w http.ResponseWriter, r *http.Request) {
    // 1. Parse JSON from HTTP request
    var input CreateItemDTO
    err := app.readJSON(w, r, &input)
    if err != nil {
        app.badRequestResponse(w, r, err)
        return
    }

    // 2. Ask the service to do the work
    item, err := app.itemService.Create(r.Context(), input)
    if err != nil {
        app.serverErrorResponse(w, r, err)
        return
    }

    // 3. Convert back to JSON and send HTTP response
    app.writeJSON(w, http.StatusCreated, item, nil)
}

Notice what's NOT here:

  • ❌ No business rules ("quantity can't be negative")
  • ❌ No SQL queries
  • ❌ No password hashing or complex logic

Just: receive HTTP → call service → return HTTP. That's it.
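
One thing the handler leans on: app.readJSON and app.writeJSON, thin wrappers around encoding/json so every handler parses and responds the same way. A minimal sketch of what such helpers typically look like (the project's real ones may add size limits and friendlier error messages):

// Sketch of the JSON helpers, kept deliberately small.
func (app *application) readJSON(w http.ResponseWriter, r *http.Request, dst any) error {
    dec := json.NewDecoder(r.Body)
    dec.DisallowUnknownFields() // reject fields the DTO doesn't declare
    return dec.Decode(dst)
}

func (app *application) writeJSON(w http.ResponseWriter, status int, data any, headers http.Header) error {
    js, err := json.Marshal(data)
    if err != nil {
        return err
    }
    for key, value := range headers {
        w.Header()[key] = value
    }
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(status)
    _, err = w.Write(js)
    return err
}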

Layer 2: Service (The Brain)

Job: Enforce business rules, orchestrate complex operations.

func (s *ItemService) Create(ctx context.Context, dto CreateItemDTO) (*Item, error) {
    // Business rule: can't create items with negative quantity
    if dto.Quantity < 0 {
        return nil, ErrInvalidQuantity
    }

    // Business rule: prices must be positive
    if dto.Price <= 0 {
        return nil, ErrInvalidPrice
    }

    // Transform DTO into domain model
    item := &Item{
        Name:        dto.Name,
        Description: dto.Description,
        Quantity:    dto.Quantity,
        Price:       dto.Price,
        CreatedAt:   time.Now(),
    }

    // Ask repository to save it
    return s.repo.Create(ctx, item)
}

This layer knows what should happen, but not how it happens. It doesn't care if we're using PostgreSQL, MongoDB, or a text file—that's the repository's problem.
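
That independence isn't magic: it comes from the service depending on a small interface rather than a concrete database type. A minimal sketch (ItemStore and GetByID are illustrative names here; the interface in the actual repo may be shaped differently):

// The service only sees this contract. Any type that satisfies it works:
// the Postgres repository below, a MySQL version, or an in-memory fake in tests.
type ItemStore interface {
    Create(ctx context.Context, item *Item) (*Item, error)
    GetByID(ctx context.Context, id string) (*Item, error)
}

type ItemService struct {
    repo ItemStore
}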

Layer 3: Repository (The Database Whisperer)

Job: Talk to the database. Nothing else.

func (r *ItemRepository) Create(ctx context.Context, item *Item) (*Item, error) {
    query := `
        INSERT INTO items (name, description, quantity, price, created_at)
        VALUES ($1, $2, $3, $4, $5)
        RETURNING id, created_at
    `

    err := r.db.QueryRowContext(ctx, query,
        item.Name,
        item.Description,
        item.Quantity,
        item.Price,
        item.CreatedAt,
    ).Scan(&item.ID, &item.CreatedAt)

    return item, err
}

All the SQL lives here. If we switch from PostgreSQL to MySQL tomorrow, we only change this file. The service and handler don't even know we use a database.

The Secret Sauce: DTOs (Data Transfer Objects)

Here's something that confused me for months: why not just use domain models everywhere?

Bad idea. Here's why:

// Domain model: internal representation
type User struct {
    ID           string
    Email        string
    PasswordHash string  // ⚠️ We DO NOT want this in API responses!
    CreatedAt    time.Time
    LastLoginAt  *time.Time
}

// DTO: what we actually send over the wire
type UserResponseDTO struct {
    ID        string    `json:"id"`
    Email     string    `json:"email"`
    CreatedAt time.Time `json:"created_at"`
    // Notice: no password hash!
}

DTOs give you:

  1. Security: Don't accidentally leak sensitive fields
  2. Flexibility: API shape doesn't force database schema
  3. Versioning: Support multiple API versions easily

I learned this the hard way when I accidentally returned password hashes from the /users endpoint. Good times.
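
One simple guard against that kind of leak: never encode a domain model directly, and instead route every response through a single explicit mapping function, so a missing field is an omission rather than an accident waiting to happen. A sketch of the pattern:

// The only way a User leaves the API is through this function.
func toUserResponseDTO(u *User) UserResponseDTO {
    return UserResponseDTO{
        ID:        u.ID,
        Email:     u.Email,
        CreatedAt: u.CreatedAt,
        // PasswordHash has no field to map into, so it simply can't leak
    }
}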

Middleware: The Pipeline Pattern

Middleware in Go is elegant. Each request passes through a chain of functions before hitting your handler:

Request → Logger → CORS → RateLimit → Auth → Handler → Response
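
There's no magic in that pipeline: each middleware takes the next http.Handler and returns a wrapped one, so chaining is just nesting function calls. Without a router it could look like this (reusing the Logger and RateLimiter shown below, plus an Auth middleware assumed to have the same shape):

// Equivalent manual wiring: the outermost wrapper sees the request first.
// appRouter stands in for whatever http.Handler serves your routes.
handler := Logger(RateLimiter(Auth(appRouter)))
log.Fatal(http.ListenAndServe(":4000", handler))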

Here's how simple it is with Chi router:

router := chi.NewRouter()

// Apply to ALL routes
router.Use(middleware.Logger)
router.Use(middleware.CORS)
router.Use(middleware.RateLimiter)

// Protected routes only
router.Group(func(r chi.Router) {
    r.Use(middleware.Auth)  // JWT validation

    r.Post("/items", app.createItemHandler)
    r.Put("/items/{id}", app.updateItemHandler)
})

// Public routes (no auth needed)
router.Get("/healthcheck", app.healthcheckHandler)

The Middleware I Wish I'd Built Earlier

1. Request Logger

func Logger(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()

        next.ServeHTTP(w, r)

        log.Printf("%s %s %v", r.Method, r.URL.Path, time.Since(start))
    })
}

Seeing "POST /items 245ms" in logs saved my bacon when debugging slow requests.

2. Rate Limiter

func RateLimiter(next http.Handler) http.Handler {
    limiter := rate.NewLimiter(10, 20) // 10 req/s, burst of 20

    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !limiter.Allow() {
            http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}

Prevented a junior dev's infinite loop from killing the server. True story.

Error Handling: The Go Way

Go's error handling gets mocked, but I've grown to love it. Here's my approach:

// Domain errors (business rules violated)
var (
    ErrInvalidQuantity = errors.New("quantity cannot be negative")
    ErrInvalidPrice    = errors.New("price must be positive")
    ErrNotFound        = errors.New("item not found")
)

// Service layer
func (s *ItemService) Create(ctx context.Context, dto CreateItemDTO) (*Item, error) {
    if dto.Quantity < 0 {
        return nil, ErrInvalidQuantity  // Business rule
    }

    item := &Item{Name: dto.Name, Quantity: dto.Quantity, Price: dto.Price}

    created, err := s.repo.Create(ctx, item)
    if err != nil {
        return nil, fmt.Errorf("create item: %w", err)  // Wrap for context
    }

    return created, nil
}

// Handler layer
func (app *application) createItemHandler(w http.ResponseWriter, r *http.Request) {
    var input CreateItemDTO
    // ...parsed with app.readJSON and checked as in the earlier handler...
    item, err := app.itemService.Create(r.Context(), input)
    if err != nil {
        switch {
        case errors.Is(err, ErrInvalidQuantity):
            app.badRequestResponse(w, r, err)  // 400
        case errors.Is(err, ErrNotFound):
            app.notFoundResponse(w, r)         // 404
        default:
            app.serverErrorResponse(w, r, err) // 500
        }
        return
    }

    app.writeJSON(w, http.StatusCreated, item, nil)
}

The key: errors flow up, decisions flow down. Services return errors, handlers decide HTTP status codes.
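
For completeness: the badRequestResponse / notFoundResponse / serverErrorResponse helpers referenced above don't need to be fancy. They just pick a status code and a consistent JSON error shape. A minimal sketch (the project's real helpers probably log more and share an envelope type):

// Error helper sketch: one generic responder plus thin wrappers per status.
func (app *application) errorResponse(w http.ResponseWriter, r *http.Request, status int, message any) {
    app.writeJSON(w, status, map[string]any{"error": message}, nil)
}

func (app *application) badRequestResponse(w http.ResponseWriter, r *http.Request, err error) {
    app.errorResponse(w, r, http.StatusBadRequest, err.Error())
}

func (app *application) notFoundResponse(w http.ResponseWriter, r *http.Request) {
    app.errorResponse(w, r, http.StatusNotFound, "the requested resource could not be found")
}

func (app *application) serverErrorResponse(w http.ResponseWriter, r *http.Request, err error) {
    log.Printf("internal error: %v", err) // log the real cause, return a generic message to the client
    app.errorResponse(w, r, http.StatusInternalServerError, "the server encountered a problem")
}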

Database: Connection Pooling That Actually Works

This took me embarrassingly long to get right:

// ❌ BAD: Creating connection per request
func createItem(w http.ResponseWriter, r *http.Request) {
    db, _ := sql.Open("postgres", "...")  // Creates new connection!
    defer db.Close()
    db.Exec("INSERT...")
}

// ✅ GOOD: Pool created once at startup
func main() {
    db, err := sql.Open("postgres", connectionString)
    if err != nil {
        log.Fatal(err)
    }

    // Configure the pool
    db.SetMaxOpenConns(25)    // Max 25 connections
    db.SetMaxIdleConns(5)     // Keep 5 idle
    db.SetConnMaxLifetime(5 * time.Minute)

    // Share across all handlers
    app := &application{
        db: db,
    }

    // ...wire up routes and start the HTTP server using app...
}

Went from 200ms queries to 15ms just by fixing this. Connection overhead is real.

Testing: What Actually Gets Tested

I don't test everything. Hot take, I know. Here's what I do test:

1. Business Logic (Service Layer)

func TestItemService_Create_InvalidQuantity(t *testing.T) {
    service := &ItemService{repo: &mockRepo{}}

    _, err := service.Create(context.Background(), CreateItemDTO{
        Name:     "Widget",
        Quantity: -5,  // Invalid!
    })

    if !errors.Is(err, ErrInvalidQuantity) {
        t.Errorf("expected ErrInvalidQuantity, got %v", err)
    }
}
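
That mockRepo doesn't need a mocking framework: any struct that satisfies the repository interface will do. A minimal hand-rolled version, assuming an interface along the lines of the ItemStore sketch earlier:

// Fake repository: just enough to test service-level business rules in isolation.
type mockRepo struct{}

func (m *mockRepo) Create(ctx context.Context, item *Item) (*Item, error) {
    return item, nil // pretend the insert succeeded
}

func (m *mockRepo) GetByID(ctx context.Context, id string) (*Item, error) {
    return &Item{Name: "Widget"}, nil
}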

2. Repository Integration Tests

func TestItemRepo_Create(t *testing.T) {
    db := setupTestDB(t)
    defer db.Close()

    repo := NewItemRepository(db)
    item := &Item{Name: "Test", Quantity: 10}

    created, err := repo.Create(context.Background(), item)

    assert.NoError(t, err)
    assert.NotEmpty(t, created.ID)
}

What I DON'T test: Handlers. They're just glue code. If the service works and the repository works, the handler will work.

Performance: The Numbers That Matter

Here's what I learned from production:

Before Optimization

  • Avg Response Time: 450ms
  • P95: 1.2s
  • Throughput: ~100 req/s

After Optimization

  • Avg Response Time: 45ms (10x improvement!)
  • P95: 180ms
  • Throughput: ~800 req/s

What made the difference:

  1. Connection pooling (biggest win)
  2. Context timeouts (prevents slow queries from piling up)
  3. Proper indexing (added indexes on frequently queried columns)
  4. Middleware ordering (auth before expensive operations)

Deployment: From Local to Production

The beauty of this architecture? It deploys anywhere.

Local development:

export DATABASE_URL="postgres://localhost/bmginventory"
go run cmd/api/main.go

Docker:

FROM golang:1.25-alpine AS builder
WORKDIR /app
COPY . .
RUN go build -o api cmd/api/main.go

FROM alpine:latest
COPY --from=builder /app/api /api
EXPOSE 4000
CMD ["/api"]

Kubernetes/Cloud:
Same binary, different config. That's the power of the 12-factor app.
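
"Different config" in practice means reading everything environment-specific from env vars at startup. A sketch, assuming variable names like DATABASE_URL and PORT:

// All environment-specific values come from the environment; the binary never changes.
type config struct {
    port int
    dsn  string
}

func loadConfig() config {
    cfg := config{
        port: 4000, // default, matching the EXPOSE above
        dsn:  os.Getenv("DATABASE_URL"),
    }
    if p, err := strconv.Atoi(os.Getenv("PORT")); err == nil {
        cfg.port = p
    }
    return cfg
}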

Lessons Learned (The Hard Way)

1. Start with layers from day one
Refactoring spaghetti code is 10x harder than starting clean.

2. DTOs are worth the boilerplate
Yes, it's extra typing. No, it's not premature optimization. Saved me from leaking sensitive data.

3. Middleware ordering matters

// ✅ GOOD: Auth after rate limiting
router.Use(RateLimit)
router.Use(Auth)

// ❌ BAD: Auth before rate limiting
// Attackers can spam your auth DB!
router.Use(Auth)
router.Use(RateLimit)

4. Context is your friend

ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
defer cancel()

item, err := app.itemService.Create(ctx, input)

Prevents runaway queries from killing your server.

5. Don't over-abstract
My first attempt had 5 layers. Ridiculous. Three is plenty: handler, service, repository.

What's Next?

This is just v1. The architecture makes it easy to add:

  • [ ] OAuth2 integration (just add a new auth method in service)
  • [ ] Caching with Redis (add a cache layer, the repository doesn't change; see the sketch below)
  • [ ] Event-driven features (publish events from service layer)
  • [ ] gRPC endpoints (reuse the service layer!)

That last one is key: good architecture is protocol-agnostic. Want both REST and gRPC? Just add handlers. Your business logic stays the same.
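
The caching item is a good illustration of why the interfaces matter: a cache can be a decorator that wraps the existing repository and satisfies the same interface, so the service layer never notices. A rough sketch, reusing the hypothetical ItemStore interface from earlier with an in-memory map standing in for Redis:

// Read-through cache decorator. Swap the map for a Redis client in practice.
type cachedItemStore struct {
    next  ItemStore
    cache map[string]*Item
}

func (c *cachedItemStore) GetByID(ctx context.Context, id string) (*Item, error) {
    if item, ok := c.cache[id]; ok {
        return item, nil // cache hit: no database round trip
    }
    item, err := c.next.GetByID(ctx, id)
    if err == nil {
        c.cache[id] = item
    }
    return item, err
}

func (c *cachedItemStore) Create(ctx context.Context, item *Item) (*Item, error) {
    return c.next.Create(ctx, item) // writes pass straight through (invalidate here if needed)
}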

The Code

Everything's on GitHub: BMG-Go-Backend

Clone it. Break it. Make it better. That's how we all learn.

The folder structure:

cmd/api/          # HTTP server entrypoint
internal/
  ├── handler/    # HTTP → Service
  ├── service/    # Business logic
  ├── repository/ # Service → Database
  ├── domain/     # Core entities
  └── dto/        # API contracts

Quick start:

make migrate-up  # Setup DB
make run         # Start server
curl localhost:4000/v1/healthcheck

Final Thoughts

Clean architecture isn't about following rules religiously. It's about making your future self's life easier.

When you get that 3 AM support call because something broke, you want to know exactly where to look. Handler? Service? Repository? Clear boundaries = faster debugging.

When the PM wants "just a small feature" that turns into refactoring half the codebase, you want layers that prevent cascading changes.

When you're onboarding a new dev, you want a structure so obvious they can contribute on day one.

That's what this architecture gave me. Hope it helps you too.


Questions? Disagree with my approach? Drop a comment. I'm especially curious about how others handle testing—I know my approach is minimal, and I'd love to hear alternatives.

Found this useful? Star the repo and follow me for more backend deep dives. Next up: adding OAuth2 to this exact API.

Want the gRPC version? Check out my UploadStream project where I use the same layered approach for high-performance file streaming.
