Hello, I'm Shrijith. I'm building git-lrc, an AI code reviewer that runs on every commit. It's free, unlimited, and source-available on GitHub. Star us to help other devs discover the project, and do give it a try and share your feedback so we can improve it.
Concurrency in Go is powerful thanks to goroutines, but shared data can cause trouble. The sync package offers Mutex and RWMutex to manage it. This post explains what they do, how they work, and when to use them, with examples and real-world insights.
What Problems They Solve
Concurrency issues arise when goroutines access shared data at the same time. This leads to race conditions—unpredictable results from overlapping operations. Here’s a quick demo of the problem:
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var count int
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			count++
		}()
	}
	wg.Wait()
	fmt.Println("Count:", count) // Likely less than 100!
}
```
Run this with `go run -race main.go`, and you'll see the race condition flagged. The final count varies because increments overlap.
Example output:
```
==================
WARNING: DATA RACE
Read at 0x00c000014188 by goroutine 9:
  main.main.func1()
      /home/shrsv/bin/goconcurrency/race1.go:16 +0x84

Previous write at 0x00c000014188 by goroutine 7:
  main.main.func1()
      /home/shrsv/bin/goconcurrency/race1.go:16 +0x96

Goroutine 9 (running) created at:
  main.main()
      /home/shrsv/bin/goconcurrency/race1.go:14 +0x78

Goroutine 7 (finished) created at:
  main.main()
      /home/shrsv/bin/goconcurrency/race1.go:14 +0x78
==================
==================
WARNING: DATA RACE
Write at 0x00c000014188 by goroutine 9:
  main.main.func1()
      /home/shrsv/bin/goconcurrency/race1.go:16 +0x96

Previous write at 0x00c000014188 by goroutine 8:
  main.main.func1()
      /home/shrsv/bin/goconcurrency/race1.go:16 +0x96

Goroutine 9 (running) created at:
  main.main()
      /home/shrsv/bin/goconcurrency/race1.go:14 +0x78

Goroutine 8 (finished) created at:
  main.main()
      /home/shrsv/bin/goconcurrency/race1.go:14 +0x78
==================
Count: 98
Found 2 data race(s)
exit status 66
```
Mutex fixes this by allowing only one goroutine to access the data at a time, protecting both reads and writes.
RWMutex handles a specific case: when reads are more common than writes. It allows multiple readers simultaneously but locks fully for writes, improving efficiency in read-heavy scenarios.
Mutex Basics with an Example
A Mutex ensures exclusive access to shared data. Here’s a safe counter using it:
```go
package main

import (
	"fmt"
	"sync"
)

type SafeCounter struct {
	mu    sync.Mutex
	count int
}

func (c *SafeCounter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.count++
}

func (c *SafeCounter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.count
}

func main() {
	counter := SafeCounter{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter.Increment()
		}()
	}
	wg.Wait()
	fmt.Println("Final count:", counter.Value()) // Always 100
}
```
The Lock() and Unlock() pair ensures no overlap. It’s simple and effective for any shared resource.
RWMutex Basics with an Example
RWMutex separates read and write locks. Here’s a cache example where reads dominate:
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type SafeCache struct {
	mu    sync.RWMutex      // RWMutex to manage concurrent read/write access
	cache map[string]string // Shared map to store key-value pairs
}

// Set adds or updates a key-value pair in the cache.
func (c *SafeCache) Set(key, value string) {
	c.mu.Lock()         // Exclusive write lock: no readers or writers allowed
	defer c.mu.Unlock() // Unlock when done, using defer for safety
	c.cache[key] = value
}

// Get retrieves a value by key from the cache.
func (c *SafeCache) Get(key string) string {
	c.mu.RLock()         // Read lock: allows multiple readers, blocks writers
	defer c.mu.RUnlock() // Release read lock when done
	return c.cache[key]  // May return "" if the key is not set yet
}

func main() {
	// Initialize the cache with an empty map.
	cache := SafeCache{cache: make(map[string]string)}
	var wg sync.WaitGroup

	// Launch one writer goroutine.
	wg.Add(1)
	go func() {
		defer wg.Done()
		cache.Set("key1", "value1")
		time.Sleep(10 * time.Millisecond) // Simulate extra work after the write
	}()

	// Launch five reader goroutines.
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Prints "" or "value1" depending on whether the writer ran first.
			fmt.Println(cache.Get("key1"))
		}()
	}

	// Wait for all goroutines to finish.
	wg.Wait()
	// Expected output: a mix of "" (empty lines) and "value1".
	// Order is unpredictable due to goroutine scheduling.
}
```
RLock() allows multiple readers, while Lock() is for writes. This reduces waiting in read-heavy cases. In this program, readers race with the writer goroutine: any reader scheduled before the writer's Set call prints an empty string, while the rest print "value1". The output varies from run to run due to goroutine scheduling.
Methods and Examples Compared
Both types come from the sync package (docs). Here’s a method breakdown:
| Type | Method | Purpose | Example Use |
|---|---|---|---|
| Mutex | Lock() | Exclusive access | Updating a counter |
| Mutex | Unlock() | Releases the lock | After the update |
| RWMutex | Lock() | Exclusive write access | Modifying a map |
| RWMutex | Unlock() | Releases write lock | After writing |
| RWMutex | RLock() | Read-only access (multiple OK) | Fetching a value |
| RWMutex | RUnlock() | Releases read lock | After reading |
Mutex Example: Bank Account
```go
type Account struct {
	mu      sync.Mutex
	balance int
}

func (a *Account) Deposit(amount int) {
	a.mu.Lock()
	a.balance += amount
	a.mu.Unlock()
}
```
RWMutex Example: Config Reader
```go
type Config struct {
	mu   sync.RWMutex
	data string
}

func (c *Config) Update(newData string) {
	c.mu.Lock()
	c.data = newData
	c.mu.Unlock()
}

func (c *Config) Read() string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.data
}
```
Always match locks with unlocks to avoid deadlocks.
Real-World Use Cases
Mutex: Inventory System
For an online store, Mutex prevents overselling:
```go
type Inventory struct {
	mu    sync.Mutex
	stock int
}

func (i *Inventory) Sell() bool {
	i.mu.Lock()
	defer i.mu.Unlock()
	if i.stock > 0 {
		i.stock--
		return true
	}
	return false
}
```
Writes dominate here, so Mutex fits.
RWMutex: Dashboard Stats
A live dashboard with frequent reads benefits from RWMutex:
```go
type Stats struct {
	mu       sync.RWMutex
	visitors int
}

func (s *Stats) Update(newCount int) {
	s.mu.Lock()
	s.visitors = newCount
	s.mu.Unlock()
}

func (s *Stats) GetVisitors() int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.visitors
}
```
Readers access freely, writes lock fully.
Trade-Offs
| Scenario | Mutex | RWMutex |
|---|---|---|
| Mostly writes | Simple and fast | Extra overhead, no benefit |
| Mostly reads | Readers take turns | Readers run in parallel |
| Mixed | Test it | Test it |
Use `go test -bench` to measure.
Key Takeaways
Mutex is your go-to for simplicity and write-heavy tasks. RWMutex excels when reads outnumber writes. Test your workload to choose wisely.
For real-world examples, check these repos:
- Kubernetes: uses `Mutex` in scheduler code (e.g., `pkg/scheduler`).
- HashiCorp Vault: employs `RWMutex` for config management (e.g., `vault/core.go`).
Explore the sync docs or this GopherCon talk for more. Happy coding!
*AI agents write code fast. They also silently remove logic, change behavior, and introduce bugs -- without telling you. You often find out in production.
git-lrc fixes this. It hooks into git commit and reviews every diff before it lands. 60-second setup. Completely free.*
Any feedback or contributors are welcome! It's online, source-available, and ready for anyone to use.
⭐ Star it on GitHub: HexmosTech/git-lrc