Athreya aka Maneshwar

Building a Producer–Consumer Pipeline in Go Using Goroutines and Channels

Hello, I'm Maneshwar. I'm working on FreeDevTools, currently building *one place for all dev tools, cheat codes, and TLDRs* — a free, open-source hub where developers can quickly find and use tools without the hassle of searching all over the internet.

When working with large volumes of SVG files, processing them in parallel while writing their metadata into SQLite can be challenging.

SQLite provides strong transactional guarantees, but it allows only one writer at a time.

If multiple goroutines write concurrently, the database becomes a bottleneck, producing contention, slowdowns, or lock timeouts.

To address this constraint, I implemented a producer–consumer pattern in Go.

The design dedicates multiple CPU cores to heavy CPU-bound work—SVG processing—and isolates all database writes into a single, linearized consumer stage.

This ensures high throughput without overloading SQLite.

Architecture Overview

The system contains three major components:

  1. Producers: Multiple goroutines that handle CPU-heavy SVG processing.
  2. Channels: Buffered pipelines that decouple producers from consumers.
  3. Consumers: Dedicated goroutines responsible only for database writes.

The goal was to saturate the CPU with parallel work while ensuring SQLite remains contention-free.

Why This Architecture?

  • My machine has 8 cores.
  • 7 cores were allocated to producers that parse SVGs, compute base64, extract metadata, and prepare insert payloads.
  • 1 core was effectively used by the consumers that perform sequential writes into SQLite.
  • This guarantees:

    • Maximum throughput in CPU-heavy tasks.
    • No parallel writes to SQLite.
    • Smooth, high-rate ingestion without DB lock errors.

Components in Detail

1. Buffered Channels

Two buffered channels serve as the communication mechanism:

  • iconChan for icon metadata
  • clusterChan for cluster metadata

These channels provide backpressure. Producers can continue working until buffers fill, and consumers drain them at their own pace.
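The payload types carried by these channels aren't shown in the article and are specific to the project; the structs below are hypothetical sketches of what such prepared payloads might contain, with every field precomputed by a producer so the writer does no CPU-heavy work:

```go
package main

import "fmt"

// IconInsertData is a hypothetical sketch of the payload a producer
// prepares for one icon. The real project's fields may differ.
type IconInsertData struct {
	Name     string // icon name derived from the file name
	Category string // category the icon belongs to
	Base64   string // base64-encoded SVG contents
}

// ClusterInsertData is the analogous hypothetical payload for
// cluster (category-level) metadata.
type ClusterInsertData struct {
	Name      string // cluster/category name
	IconCount int    // number of icons in this cluster
}

func main() {
	fmt.Printf("%+v\n", IconInsertData{Name: "home", Category: "ui", Base64: "PHN2Zz4="})
}
```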

```go
iconChan := make(chan IconInsertData, 100)
clusterChan := make(chan ClusterInsertData, 50)
```
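The backpressure here is plain channel semantics: a send on a full buffered channel blocks until a consumer makes room. A small demonstration, using `select` with a `default` case to observe fullness without actually blocking (`tryFill` is a helper invented for this illustration):

```go
package main

import "fmt"

// tryFill attempts n non-blocking sends into ch and reports how many
// the buffer accepted. Past that point a plain send would block,
// which is exactly the backpressure the pipeline relies on.
func tryFill(ch chan int, n int) int {
	accepted := 0
	for i := 0; i < n; i++ {
		select {
		case ch <- i:
			accepted++
		default: // buffer full: a blocking send would pause the producer here
		}
	}
	return accepted
}

func main() {
	ch := make(chan int, 2)
	fmt.Println(tryFill(ch, 5)) // a buffer of 2 accepts exactly 2
}
```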

2. Producer Goroutines (Workers)

Seven workers run concurrently. Each worker receives category jobs from a category channel and processes SVG assets:

  • Parsing each file
  • Converting SVG → base64
  • Extracting metadata
  • Sending prepared payloads into channels

Example structure:

```go
for i := 0; i < maxWorkers; i++ {
	wg.Add(1)
	go func(id int) {
		defer wg.Done()
		for cat := range categoryChan {
			// CPU-heavy SVG processing
			iconChan <- IconInsertData{ /*...*/ }
			clusterChan <- ClusterInsertData{ /*...*/ }
		}
	}(i)
}
```

By dedicating seven CPU cores to this processing stage, the throughput for heavy work is maximized.
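The 7-to-1 split matches an 8-core machine; to keep that ratio portable, the worker count can be derived from `runtime.NumCPU()` instead of being hard-coded. A sketch (the `producerWorkers` helper is an assumption, not from the article):

```go
package main

import (
	"fmt"
	"runtime"
)

// producerWorkers reserves one core for the database-writing side
// and gives the rest to CPU-bound SVG processing. It never returns
// less than 1, so the pipeline still works on a single-core box.
func producerWorkers() int {
	n := runtime.NumCPU() - 1
	if n < 1 {
		n = 1
	}
	return n
}

func main() {
	fmt.Printf("spawning %d producer workers\n", producerWorkers())
}
```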

3. Consumer Goroutines (Database Writers)

SQLite does not handle concurrent writes well. Instead of letting all producers write directly to the database, we use two dedicated consumer goroutines: one for icons and one for clusters.

Each consumer reads from its respective channel and writes to the database. Because each consumer is the only writer for its domain, transactional conflicts are eliminated.

```go
go func() {
	defer dbWg.Done()
	for iconData := range iconChan {
		// Insert iconData into SQLite
	}
}()
```

Similarly for cluster data:

```go
go func() {
	defer dbWg.Done()
	for clusterData := range clusterChan {
		// Insert clusterData into SQLite
	}
}()
```

This structure isolates responsibilities:

  • Producers do all compute-heavy tasks.
  • Consumers serialize database operations.

4. Synchronization

Two WaitGroups coordinate all goroutines:

  • wg waits for producer workers.
  • dbWg waits for consumer writers.

Once all producers finish reading categories:

```go
close(iconChan)
close(clusterChan)
```

Consumers detect closed channels, finish pending writes, and exit cleanly.
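The shutdown order matters: the channels may only be closed after every producer has exited, because a send on a closed channel panics. A condensed, self-contained toy version of the whole wiring (squaring integers stands in for SVG processing, summing stands in for SQLite inserts; `runPipeline` is invented for this sketch):

```go
package main

import (
	"fmt"
	"sync"
)

// runPipeline wires producers and a single consumer the same way the
// article does: workers read jobs, send results into a buffered
// channel, and one goroutine alone consumes ("writes") them.
func runPipeline(inputs []int, workers int) int {
	jobs := make(chan int, len(inputs)) // stand-in for categoryChan
	out := make(chan int, len(inputs))  // stand-in for iconChan

	var wg, dbWg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				out <- j * j // placeholder for CPU-heavy processing
			}
		}()
	}

	var total int
	dbWg.Add(1)
	go func() {
		defer dbWg.Done()
		for v := range out {
			total += v // placeholder for an SQLite insert
		}
	}()

	for _, j := range inputs {
		jobs <- j
	}
	close(jobs)

	wg.Wait()   // all producers done: no further sends on out are possible
	close(out)  // safe to close only after wg.Wait()
	dbWg.Wait() // consumer drains the buffer and exits cleanly

	return total
}

func main() {
	fmt.Println(runPipeline([]int{1, 2, 3, 4, 5}, 3)) // prints 55
}
```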

Why This Works Especially Well for SQLite

SQLite's write lock model is simple: only one write transaction at a time.

If multiple goroutines attempt writes concurrently:

  • You hit "database is locked" errors.
  • Writes are serialized anyway, but with unnecessary contention.
  • Throughput degrades heavily.

By designating a single writer for each table domain, writes become:

  • Predictable
  • Contention-free
  • Efficient

Since producers never touch the database directly, SQLite remains consistently available for inserts, with no risk of parallel write collisions.
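One refinement the single-writer design enables (not shown in the article): the consumer can batch many rows into one transaction, which in SQLite is far cheaper than one implicit transaction per insert. A minimal sketch of the batching logic, with a hypothetical `flush` callback standing in for BEGIN / INSERTs / COMMIT:

```go
package main

import "fmt"

// batchWriter collects payloads and flushes them in groups, so a real
// consumer could wrap each flush in a single SQLite transaction.
// flush is a hypothetical callback standing in for the actual inserts.
type batchWriter struct {
	buf   []string
	size  int
	flush func(batch []string)
}

// add buffers one item and flushes when the batch size is reached.
func (w *batchWriter) add(item string) {
	w.buf = append(w.buf, item)
	if len(w.buf) >= w.size {
		w.Flush()
	}
}

// Flush writes out any buffered items; the consumer calls it once
// more after its channel closes so nothing is left behind.
func (w *batchWriter) Flush() {
	if len(w.buf) == 0 {
		return
	}
	w.flush(w.buf)
	w.buf = nil
}

func main() {
	batches := 0
	w := &batchWriter{size: 2, flush: func(b []string) { batches++ }}
	for _, it := range []string{"a", "b", "c"} {
		w.add(it)
	}
	w.Flush() // final partial batch
	fmt.Println("batches:", batches) // prints "batches: 2"
}
```

Because only one goroutine owns the writer, no locking is needed around `buf`.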

Performance Characteristics

CPU Utilization
The seven producer workers fully occupy seven CPU cores during heavy SVG processing.

DB Stability
The consumer goroutines writing to SQLite consistently run with low CPU usage and no lock contention.

Throughput
This model enables high ingestion rates because:

  • Producers never wait on the database.
  • Consumers never contend with each other.
  • Channel buffers smooth out short workload bursts.

Final Notes

This architecture is a textbook producer–consumer pattern customized for Go’s concurrency model and SQLite’s constraints. It ensures that:

  • CPU-bound work happens in parallel.
  • I/O-bound database operations remain serialized.
  • The system utilizes hardware efficiently while operating within SQLite’s limitations.

FreeDevTools

I’ve been building FreeDevTools.

A collection of UI/UX-focused tools crafted to simplify workflows, save time, and reduce the friction of searching for tools and materials.

Feedback and contributors are welcome!

It’s online, open-source, and ready for anyone to use.

👉 Check it out: FreeDevTools
⭐ Star it on GitHub: freedevtools
