Hello, I'm Maneshwar. I'm working on git-lrc: a Git hook for checking AI-generated code.
When working with large volumes of SVG files, processing them in parallel while writing their metadata into SQLite can be challenging. SQLite provides strong transactional guarantees, but it allows only one writer at a time.
If multiple goroutines write concurrently, the database becomes a bottleneck, producing contention, slowdowns, or lock timeouts.
To address this constraint, I implemented a producer–consumer pattern in Go.
The design dedicates multiple CPU cores to heavy CPU-bound work—SVG processing—and isolates all database writes into a single, linearized consumer stage.
This ensures high throughput without overloading SQLite.
Architecture Overview
The system contains three major components:
- Producers: Multiple goroutines that handle CPU-heavy SVG processing.
- Channels: Buffered pipelines that decouple producers from consumers.
- Consumers: Dedicated goroutines responsible only for database writes.
The goal was to saturate the CPU with parallel work while ensuring SQLite remains contention-free.
Why This Architecture?
- My machine has 8 cores.
- 7 cores were allocated to producers that parse SVGs, compute base64, extract metadata, and prepare insert payloads.
- The remaining core was effectively used by consumer goroutines that perform sequential writes into SQLite.
This guarantees:
- Maximum throughput in CPU-heavy tasks.
- No parallel writes to SQLite.
- Smooth, high-rate ingestion without DB lock errors.
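As an aside, the 7/1 split need not be hard-coded; it can be derived from the machine's core count. A minimal sketch (the `producerWorkers` helper is hypothetical, not from the original code):

```go
package main

import (
	"fmt"
	"runtime"
)

// producerWorkers returns how many cores to dedicate to CPU-bound
// producers, reserving one core for the serialized SQLite consumer.
func producerWorkers(totalCores int) int {
	if totalCores <= 1 {
		return 1 // on a single-core box, producer and consumer share it
	}
	return totalCores - 1
}

func main() {
	fmt.Println(producerWorkers(runtime.NumCPU()))
}
```

On the 8-core machine described above, this yields 7 producer workers.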
Components in Detail
1. Buffered Channels
Two buffered channels serve as the communication mechanism:
- `iconChan` for icon metadata
- `clusterChan` for cluster metadata
These channels provide backpressure. Producers can continue working until buffers fill, and consumers drain them at their own pace.
```go
iconChan := make(chan IconInsertData, 100)
clusterChan := make(chan ClusterInsertData, 50)
```
2. Producer Goroutines (Workers)
Seven workers run concurrently. Each worker receives category jobs from a category channel and processes SVG assets:
- Parsing each file
- Converting SVG → base64
- Extracting metadata
- Sending prepared payloads into channels
Example structure:
```go
for i := 0; i < maxWorkers; i++ {
	wg.Add(1)
	go func(id int) {
		defer wg.Done()
		for cat := range categoryChan {
			// CPU-heavy SVG processing
			iconChan <- IconInsertData{ /*...*/ }
			clusterChan <- ClusterInsertData{ /*...*/ }
		}
	}(i)
}
```
Dedicating seven CPU cores to this processing stage maximizes throughput for the heavy work.
3. Consumer Goroutines (Database Writers)
SQLite does not handle concurrent writes well. Instead of letting all producers write directly to the database, we use two dedicated consumer goroutines: one for icons and one for clusters.
Each consumer reads from its respective channel and writes to the database. Because each consumer is the only writer for its domain, transactional conflicts are eliminated.
```go
go func() {
	defer dbWg.Done()
	for iconData := range iconChan {
		// Insert iconData into SQLite
	}
}()
```
Similarly for cluster data:
```go
go func() {
	defer dbWg.Done()
	for clusterData := range clusterChan {
		// Insert clusterData into SQLite
	}
}()
```
This structure isolates responsibilities:
- Producers do all compute-heavy tasks.
- Consumers serialize database operations.
4. Synchronization
Two WaitGroups coordinate all goroutines:
- `wg` waits for producer workers.
- `dbWg` waits for consumer writers.
Once all producers finish reading categories:
```go
wg.Wait() // all producers have exited; nothing more will be sent
close(iconChan)
close(clusterChan)
```
Consumers detect closed channels, finish pending writes, and exit cleanly.
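Putting the pieces together, the whole lifecycle can be sketched as a self-contained program. The struct fields, category list, and `runPipeline` wrapper below are stand-ins; the real SVG processing and SQLite inserts are stubbed with counters:

```go
package main

import (
	"fmt"
	"sync"
)

type IconInsertData struct{ Name string }
type ClusterInsertData struct{ Name string }

// runPipeline wires producers to single-writer consumers and returns
// how many icon and cluster rows the consumers "wrote".
func runPipeline(categories []string, maxWorkers int) (int, int) {
	categoryChan := make(chan string, len(categories))
	iconChan := make(chan IconInsertData, 100)
	clusterChan := make(chan ClusterInsertData, 50)

	var wg, dbWg sync.WaitGroup

	// Producers: the CPU-heavy SVG stage (stubbed here).
	for i := 0; i < maxWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for cat := range categoryChan {
				iconChan <- IconInsertData{Name: cat}
				clusterChan <- ClusterInsertData{Name: cat}
			}
		}()
	}

	// Consumers: each is the sole writer for its domain.
	var icons, clusters int
	dbWg.Add(2)
	go func() {
		defer dbWg.Done()
		for range iconChan {
			icons++ // stand-in for the SQLite INSERT
		}
	}()
	go func() {
		defer dbWg.Done()
		for range clusterChan {
			clusters++ // stand-in for the SQLite INSERT
		}
	}()

	for _, c := range categories {
		categoryChan <- c
	}
	close(categoryChan)

	wg.Wait()           // all producers done: nothing more will be sent
	close(iconChan)     // signal consumers to drain and exit
	close(clusterChan)
	dbWg.Wait()         // all pending writes flushed
	return icons, clusters
}

func main() {
	i, c := runPipeline([]string{"arrows", "weather", "ui"}, 7)
	fmt.Println(i, c)
}
```

The shutdown ordering is the important part: producers are waited on before the data channels are closed, and the database WaitGroup is waited on before results are read.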
Why This Works Especially Well for SQLite
SQLite's write lock model is simple: only one write transaction at a time.
If multiple goroutines attempt writes concurrently:
- You hit `database is locked` errors.
- Writes are serialized anyway, but with unnecessary contention.
- Throughput degrades heavily.
By designating a single writer for each table domain, writes become:
- Predictable
- Contention-free
- Efficient
Since producers never touch the database directly, SQLite remains consistently available for inserts, with no risk of parallel write collisions.
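A refinement the single-writer design makes easy (not part of the original code): batching several inserts per transaction, which amortizes SQLite's per-commit fsync cost. A sketch with the actual `BEGIN`/`INSERT`/`COMMIT` stubbed out as a counter:

```go
package main

import "fmt"

// flush is a stand-in for BEGIN; INSERT ...; COMMIT against SQLite.
func flush(batch []string, commits *int) {
	if len(batch) == 0 {
		return
	}
	*commits++ // one transaction per non-empty batch
}

// consumeBatched drains ch, committing every batchSize rows.
func consumeBatched(ch <-chan string, batchSize int) (rows, commits int) {
	batch := make([]string, 0, batchSize)
	for row := range ch {
		batch = append(batch, row)
		rows++
		if len(batch) == batchSize {
			flush(batch, &commits)
			batch = batch[:0]
		}
	}
	flush(batch, &commits) // final partial batch on channel close
	return
}

func main() {
	ch := make(chan string, 10)
	for i := 0; i < 10; i++ {
		ch <- fmt.Sprintf("icon-%d", i)
	}
	close(ch)
	rows, commits := consumeBatched(ch, 4)
	fmt.Println(rows, commits) // 10 rows in 3 transactions
}
```

Because the consumer is the only writer, it can hold a transaction open across many rows without blocking anyone else.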
Performance Characteristics
CPU Utilization
The seven producer workers fully occupy seven CPU cores during heavy SVG processing.
DB Stability
The consumer goroutine writing to SQLite consistently runs with low CPU usage and no lock contention.
Throughput
This model enables high ingestion rates because:
- Producers never wait on the database.
- Consumers never contend with each other.
- Channel buffers smooth out short workload bursts.
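The backpressure claim is easy to make concrete: a send on a full buffered channel blocks the producer, and a `select` with a `default` branch makes that condition visible without actually blocking. A small sketch:

```go
package main

import "fmt"

// tryEnqueue attempts a non-blocking send and reports whether the
// buffer had room. A plain send would block here, which is exactly
// the backpressure that throttles producers when consumers lag.
func tryEnqueue(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int, 2)
	fmt.Println(tryEnqueue(ch, 1)) // true: buffer has room
	fmt.Println(tryEnqueue(ch, 2)) // true: buffer is now full
	fmt.Println(tryEnqueue(ch, 3)) // false: a producer would block here
	<-ch                           // consumer drains one item
	fmt.Println(tryEnqueue(ch, 3)) // true: room again
}
```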
Final Notes
This architecture is a textbook producer–consumer pattern customized for Go’s concurrency model and SQLite’s constraints. It ensures that:
- CPU-bound work happens in parallel.
- I/O-bound database operations remain serialized.
- The system utilizes hardware efficiently while operating within SQLite’s limitations.
*AI agents write code fast. They also silently remove logic, change behavior, and introduce bugs -- without telling you. You often find out in production.
git-lrc fixes this. It hooks into git commit and reviews every diff before it lands. 60-second setup. Completely free.*
Feedback and contributions are welcome! It's online, source-available, and ready for anyone to use.
⭐ Star it on GitHub: HexmosTech/git-lrc (Free, Unlimited AI Code Reviews That Run on Commit)
Top comments (6)
One little suggestion: you can update your goroutine code using the brand new `WaitGroup.Go` method. `wg.Go` skips the `wg.Add(1)` and `wg.Done()` calls, and you also don't need to pass `i` into the function as `id`; that hasn't been necessary since Go 1.22.

Nice, thanks for the suggestion @goodevilgenius!
This is a clean setup. You let the CPU-heavy SVG work run in parallel, then serialize all SQLite writes through a single consumer so the DB never locks up. The buffered channels keep things smooth, producers stay fast, and SQLite stays happy.
Thank you! Yes, it's a nice way to do intensive data processing and store the results in a simple, structured manner.