Go (Golang) is known for its simplicity, performance, and concurrency model.
At the heart of this concurrency lies one of Go’s most elegant features — the goroutine.
In this blog, we’ll explore what goroutines are, how they work under the hood, and how you can use them to build concurrent, scalable applications like a seasoned Go developer.
🧩 What is a Goroutine?
A goroutine is a lightweight thread managed by the Go runtime.
When you run:
```go
go doSomething()
```
you’re telling Go:
“Run doSomething() asynchronously, and don’t wait for it to finish.”
That single keyword go launches a new goroutine.
Unlike traditional OS threads (which can take up megabytes of stack memory), a goroutine starts with only a few kilobytes — making it possible to spawn hundreds of thousands of them effortlessly.
⚙️ How Goroutines Work (Under the Hood)
Go uses an efficient M:N scheduler, meaning it multiplexes M goroutines onto N OS threads. Instead of each goroutine needing its own thread, the Go runtime schedules thousands of goroutines onto a small pool of OS threads, automatically managing scheduling, context switching, and load balancing. (In the runtime's own terminology: G = goroutine, M = machine/OS thread, P = processor context.)
This is why goroutines are incredibly lightweight and efficient.
🧠 The Lifecycle of a Goroutine
1. You launch it with `go ...`
2. The Go scheduler places it in a run queue
3. The scheduler assigns it to an available P (processor context)
4. The goroutine executes
5. When it finishes, the runtime reclaims its stack

Important: When the main function returns, the program exits and every remaining goroutine is terminated immediately, no exceptions.
🧪 Example 1: Your First Goroutine
Let’s start simple:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	go fmt.Println("Hello from a goroutine!")
	time.Sleep(time.Millisecond * 10)
}
```
Without the time.Sleep, this program will usually print nothing, because the main goroutine exits before the new one gets any CPU time. (Sleeping works for a demo, but it is not real synchronization; sync.WaitGroup, shown below, is the proper tool.)
✅ Lesson: The main goroutine must stay alive until the others finish.
🧱 Example 2: Launching Multiple Goroutines
```go
package main

import "fmt"

func main() {
	for i := 1; i <= 5; i++ {
		go fmt.Println("Goroutine:", i)
	}
	fmt.Scanln() // Wait for user input to prevent exit
}
```
You’ll notice the output order is unpredictable — and that’s expected!
Each goroutine runs concurrently, not sequentially.
🔗 Communication Between Goroutines — Channels
Goroutines are great, but they’re even more powerful when they communicate safely.
That’s where channels come in.
A channel is a pipe that allows goroutines to send and receive data.
Example:
```go
package main

import "fmt"

func main() {
	ch := make(chan string)

	go func() {
		ch <- "Hello from goroutine!"
	}()

	msg := <-ch
	fmt.Println(msg)
}
```
- `ch <- "Hello from goroutine!"` sends data
- `msg := <-ch` receives data
Both send and receive block until the other side is ready — ensuring synchronization.
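Because an unbuffered send blocks until a receiver is ready, a channel can double as a completion signal with no data at all. A minimal sketch (the `done` name is just illustrative):

```go
package main

import "fmt"

func main() {
	done := make(chan struct{}) // struct{} carries no data, only the signal

	go func() {
		fmt.Println("working...")
		close(done) // closing a channel unblocks every receiver
	}()

	<-done // blocks here until the goroutine closes the channel
	fmt.Println("goroutine finished")
}
```

Using `chan struct{}` and `close` (rather than sending a value) is a common idiom when the event itself is the only information.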
⚙️ Buffered Channels
Want to send without waiting immediately? Use a buffered channel:
```go
ch := make(chan int, 2)
ch <- 1           // doesn't block: buffer has room
ch <- 2           // doesn't block: buffer is now full
fmt.Println(<-ch) // prints 1 (FIFO order)
```
Here, the buffer lets you send two items before blocking.
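A quick sketch of those buffer semantics: `len` reports how many items are queued, `cap` the buffer size, and a third send would block until someone receives.

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	fmt.Println(len(ch), cap(ch)) // 2 2: buffer full, the next send would block

	fmt.Println(<-ch)             // 1: values come out in FIFO order
	fmt.Println(len(ch), cap(ch)) // 1 2: there is room for one more send
}
```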
🧰 Controlling Goroutines with sync.WaitGroup
In real-world apps, you’ll often need to wait for multiple goroutines to finish.
That’s where sync.WaitGroup comes in:
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Println("Worker", id, "done")
		}(i)
	}

	wg.Wait() // Wait until all goroutines finish
}
```
✅ This is the idiomatic Go way to synchronize goroutines.
🏗 Real-World Example: Worker Pool Pattern
A worker pool limits the number of concurrent workers — an essential pattern in production.
```go
package main

import (
	"fmt"
	"time"
)

func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs { // loop ends when jobs is closed and drained
		fmt.Println("Worker", id, "started job", j)
		time.Sleep(time.Second)
		fmt.Println("Worker", id, "finished job", j)
		results <- j * 2
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)

	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}

	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs) // no more jobs: lets the range loops in the workers exit

	for a := 1; a <= 5; a++ {
		<-results // drain every result before main returns
	}
}
```
💡 Conceptually:
Jobs channel → queue of tasks
Results channel → output
Workers (goroutines) → process jobs concurrently
⚠️ Common Pitfalls
❌ Ranging over a channel that is never closed — the receiving loop blocks forever (deadlock)
❌ Starting goroutines without any limit or control — they can overwhelm system memory
❌ Accessing shared memory without synchronization — causes race conditions
You can detect races by running:

```shell
go run -race main.go
```
🧭 Key Takeaways
✅ Goroutines are lightweight, cheap, and managed by Go’s scheduler
✅ Channels enable safe communication between them
✅ Always synchronize goroutines using WaitGroups, channels, or contexts
✅ Never start a goroutine you can’t stop
🚀 Final Thoughts
Goroutines are the foundation of Go’s simplicity and power.
Once you master them, you’ll start thinking in concurrent patterns — building systems that scale effortlessly.
In upcoming posts, I’ll dive deeper into:
- Context cancellation
- Pipelines
- Fan-out/Fan-in patterns
- Real-world concurrency debugging
✍️ Written by Yogesh Mathankar
💬 Modern MERN & Go developer passionate about building scalable systems and AI-integrated apps.