❓ What is Concurrency?
Concurrency is the ability of a program to manage multiple tasks at once. These tasks may not run exactly at the same time, but they are managed in such a way that it feels like they are happening simultaneously.
🧠 Real-Life Analogy
Imagine you’re cooking dinner:
- You put water on the stove to boil. While you wait for it to heat up, you chop vegetables. Once it’s boiling, you add pasta. While the pasta cooks, you start preparing the sauce.
- You’re not doing everything at once, but you’ve planned it so that while one task is happening in the background (like water boiling), you’re using that time to work on something else (like chopping vegetables). This way, you’re making steady progress on multiple things without wasting time.
That’s what concurrency means in programming: arranging tasks so they can make progress without blocking each other. Even if only one thing runs at a time, they move forward efficiently by taking turns when it makes sense.
🧑💻 Concurrency in Go
In Go, concurrency means:
- Your program runs multiple tasks (functions or processes).
- Each concurrent task runs in its own Goroutine.
- Go has a built-in scheduler that efficiently manages these Goroutines across system threads and CPU cores; the sketch below shows how to peek at what it's working with.
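If you're curious what the scheduler has to work with on your machine, here's a tiny sketch (my own, using only the standard runtime package) that prints a few of those numbers:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Logical CPU cores available to this process
	fmt.Println("CPU cores: ", runtime.NumCPU())

	// GOMAXPROCS(0) reports, without changing, how many cores the
	// scheduler may use to run goroutines simultaneously
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// Goroutines currently alive (just main here)
	fmt.Println("Goroutines: ", runtime.NumGoroutine())
}
```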
🧵 What is a Goroutine?
A Goroutine is a lightweight, independently executing function, managed by the Go runtime.
- You create one using the go keyword.
- It's much lighter than an OS thread (~2 KB of stack to start).
- Goroutines can scale massively; you can run thousands without major performance hits.
Here’s how you launch a goroutine:
go someFunction()
It starts running someFunction() in the background, quickly and efficiently.
🧪 Example: Goroutine in Action
package main
import (
"fmt"
"time"
)
func backgroundTask() {
time.Sleep(2 * time.Second)
fmt.Println("Finished background task")
}
func main() {
go backgroundTask() // Run task concurrently
fmt.Println("Main function done")
}
Output:
Main function done
😲 Wait, where's the output from backgroundTask()?
🧠 What Happened?
- main() runs in the main goroutine, the default thread of execution in a Go program.
- When go backgroundTask() is called, Go starts a new goroutine for backgroundTask(), but it doesn't wait for it to finish.
- The program immediately continues to the next line: fmt.Println("Main function done"), and prints it.
- Since there's nothing left after that, main ends.
- Once main() finishes, the entire program exits, and Go kills all running goroutines, even if they're still doing something.
So the backgroundTask() goroutine gets cut off before it can finish and print its message.
🛠️ Fixing It: Give It Time
Add a delay in main() to let the background goroutine finish:
func main() {
go backgroundTask()
fmt.Println("Main function done")
time.Sleep(3 * time.Second)
}
Output:
Main function done
Finished background task
✔️ Now you see both outputs because we gave enough time for the background task to complete.
🤔 But… What If We Don’t Know How Long the Task Takes?
Using time.Sleep() is a bad practice in real applications because:
- Unknown task duration: We can’t always predict how long a task will take. Tasks like API calls may vary in duration.
- Wasted resources: If you sleep for more time than necessary, you waste resources.
- Fragile code: Hardcoding sleep durations makes the program prone to bugs.
🔑 Instead, Go gives us a better tool: sync.WaitGroup
🧩 What is sync.WaitGroup?
WaitGroup lets you wait for a group of goroutines to finish; no guessing, no sleeping.
🎯 Think of it like:
A field trip leader keeping count of students:
- 🧑🎓 Each student going out = Add(1)
- ✅ Each student returning = Done()
- 🧍 The leader waits until everyone returns = Wait()
✅ Using WaitGroup (Step-by-Step)
package main
import (
"fmt"
"log"
"net/http"
"sync"
)
func backgroundTask(wg *sync.WaitGroup) {
defer wg.Done() // 4. Tell WaitGroup this task is done
url := "https://jsonplaceholder.typicode.com/posts"
resp, err := http.Get(url)
if err != nil {
log.Fatalf("Error fetching posts: %v", err)
}
defer resp.Body.Close()
fmt.Println("Background Task, Response Status:", resp.Status)
}
func main() {
var wg sync.WaitGroup // 1. Create a WaitGroup
wg.Add(1) // 2. We’re launching 1 goroutine
go backgroundTask(&wg) // 3. Start the goroutine
wg.Wait() // 5. Wait for all tasks to finish
fmt.Println("Main function done")
}
🧪 Output:
Background Task, Response Status: 200 OK
Main function done
🔍 Explanation:
- wg.Add(1) tells Go: "One goroutine is coming."
- Inside the goroutine, defer wg.Done() tells Go: "I'm finished."
- wg.Wait() blocks the main goroutine until the task finishes.
🚀 Running Multiple Goroutines with WaitGroup
You can use WaitGroup to manage multiple goroutines at once. Here's how you can launch multiple background tasks concurrently:
package main
import (
"fmt"
"log"
"net/http"
"sync"
)
func backgroundTask(id int, wg *sync.WaitGroup) {
defer wg.Done()
url := "https://jsonplaceholder.typicode.com/posts"
resp, err := http.Get(url)
if err != nil {
log.Printf("Task %d failed: %v\n", id, err)
return
}
defer resp.Body.Close()
fmt.Printf("Task %d, Response Status: %s\n", id, resp.Status)
}
func main() {
var wg sync.WaitGroup
totalTasks := 20
for i := 1; i <= totalTasks; i++ {
wg.Add(1)
go backgroundTask(i, &wg)
}
wg.Wait()
fmt.Println("All background tasks completed")
}
🧵 Race Conditions and Synchronization
A race condition happens when two or more goroutines try to use or change the same variable at the same time, and the result depends on who gets there first. This can lead to wrong or unexpected results.
❌ Example: Race Condition Without Synchronization
package main
import (
"fmt"
"sync"
)
func main() {
counter := 0 // Shared variable
var wg sync.WaitGroup // Used to wait for all goroutines to finish
for i := 0; i < 1000; i++ {
wg.Add(1) // Increase WaitGroup counter
go func() {
defer wg.Done() // Decrease WaitGroup counter when done
counter++ // 🔥 Race condition happens here!
}()
}
wg.Wait() // Wait for all goroutines to finish
fmt.Println("Final counter:", counter) // 😬 Unpredictable result!
}
🧠 What’s Happening?
That line counter++ looks simple, but it's not safe when many goroutines run it at the same time.
Here's what really happens inside counter++:
- Read the current value of counter
- Add 1 to it
- Save the new value back
If two goroutines do this at the same time, they might both read the same old value before either writes the new one. So, one increment gets lost.
That’s why you’ll often see a final count that’s less than 1000. This is called a race condition.
✅ How to Fix It: Use a Mutex
To safely share data between goroutines, we use a mutex (short for mutual exclusion). It ensures that only one goroutine can access the critical section (the shared resource) at a time.
package main
import (
"fmt"
"sync"
)
func main() {
counter := 0
var wg sync.WaitGroup
var mu sync.Mutex // 👈 Mutex to protect the counter
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
mu.Lock() // 👈 Lock before accessing counter
counter++
mu.Unlock() // 👈 Unlock after done
}()
}
wg.Wait()
fmt.Println("Final counter:", counter) // ✅ Always 1000
}
🔐 What mu.Lock() and mu.Unlock() Do:
Imagine a single key to a room where the shared variable (like counter) lives.
- 🧑🔧 mu.Lock() means: "I need the key to go into the room and do something important. No one else can come in while I'm inside."
- 🧑💼 mu.Unlock() means: "I'm done! Here's the key, someone else can go in now."
So when you write:
mu.Lock()
counter++
mu.Unlock()
You're saying:
- Lock the door so no one else can touch counter.
- Safely update counter.
- Unlock the door so others can take their turn.
✅ This makes sure only one goroutine at a time is changing counter, which keeps things safe and correct.
🔁 If another goroutine tries to mu.Lock() while it's already locked, it will wait until it's unlocked.
👉 Pro tip: Use defer mu.Unlock() right after mu.Lock() to make sure the lock is always released, even if something goes wrong (see the sketch below).
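Here is a minimal sketch of that pattern, reworking the counter example with a small increment helper (the helper name is my own, not from the article):

```go
package main

import (
	"fmt"
	"sync"
)

// increment relies on defer: the mutex is guaranteed to be released
// when the function returns, even if it returned early or panicked.
func increment(mu *sync.Mutex, counter *int) {
	mu.Lock()
	defer mu.Unlock()
	*counter++
}

func main() {
	counter := 0
	var wg sync.WaitGroup
	var mu sync.Mutex

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			increment(&mu, &counter)
		}()
	}

	wg.Wait()
	fmt.Println("Final counter:", counter) // still always 1000
}
```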
📌 Go's Race Detector
Go includes a built-in race detector. Just add the -race flag when running your program:
go run -race yourprogram.go
It will tell you if and where race conditions occur in your code!
🔁 Recap: What We Learned So Far
We learned:
- Goroutines run functions in the background.
- WaitGroups wait until all goroutines are done.
- Mutexes prevent race conditions when multiple goroutines access shared data.
🤔 But what if we want goroutines to send back some result or communicate with each other?
That's where channels come in.
📬 What is a Channel?
A channel is a built-in Go feature that allows goroutines to talk to each other.
Think of a channel like a message pipe. One goroutine puts data in, and another takes it out.
✅ Creating a Channel
ch := make(chan int) // creates a channel of type int
📤 Sending Data to a Channel
ch <- 42 // send 42 to channel
📥 Receiving Data from a Channel
value := <-ch // receive value from channel
fmt.Println(value) // prints: 42
🧪 Basic Example
Let’s write a small program with a goroutine that sends a message back to the main function.
package main
import (
"fmt"
)
func greet(ch chan string) {
ch <- "Hello from goroutine!"
}
func main() {
// 1. Create a channel of type string
messageChannel := make(chan string)
// 2. Start a goroutine and pass the channel to it
go greet(messageChannel)
// 3. Receive the message from the channel
message := <-messageChannel
fmt.Println("Received:", message)
}
✅ Output:
Received: Hello from goroutine!
Even if the goroutine takes time (e.g., sleeps), the main function waits until the message is received.
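To see that waiting behavior for yourself, here's a small variation of the example (the delay is my own addition, purely for illustration) where the goroutine sleeps before sending:

```go
package main

import (
	"fmt"
	"time"
)

func greet(ch chan string) {
	time.Sleep(2 * time.Second) // simulate slow work before sending
	ch <- "Hello after a delay!"
}

func main() {
	messageChannel := make(chan string)
	go greet(messageChannel)

	// This receive blocks until the goroutine finally sends,
	// so no extra time.Sleep is needed in main.
	message := <-messageChannel
	fmt.Println("Received:", message)
}
```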
🔍 Code Explanation
- make(chan string): We create a channel that can carry strings.
- go greet(messageChannel): We start a goroutine and give it the channel.
- ch <- "Hello...": Inside the goroutine, we send a message into the channel.
- <-messageChannel: In the main function, we wait and receive the message.
🔹 Blocking Behavior of Channels
Channels in Go are synchronous by default. That means when you send or receive a value, your code waits (or blocks) until the other side is ready.
Operation | Blocks Until |
---|---|
ch <- value | Another goroutine is ready to receive |
value := <-ch | Another goroutine sends a value |
Because of this, you can't send and receive on an unbuffered channel in the same goroutine: the send blocks and never reaches the receive, since both sides need each other to proceed. This is why we usually use goroutines with channels.
🔥 Here is an example of this
package main
import (
"fmt"
)
func main() {
ch := make(chan string) // create a string channel
ch <- "Hello from goroutine!" // send message to channel
msg := <-ch // receive message from channel
fmt.Println(msg) // Output: Hello from goroutine!
}
Output:
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan send]:
main.main()
/home/iamismile/Desktop/development/golang/helloworld/main.go:10 +0x36
exit status 2
💡 Why This Happens:
You created an unbuffered channel, which means:
- The send operation ch <- "..." will block until someone is ready to receive the value.
- But in your code, the main goroutine tries to send before any goroutine is receiving.
- Since no other goroutine is receiving yet, the main goroutine just waits forever, which causes a deadlock.
🤔 You might be wondering…
I've used the term unbuffered channel. So far, all the channels we’ve used were unbuffered, which means the sender and receiver had to be ready at the same time for communication to happen.
But Go also gives us another type of channel called a buffered channel.
Let’s break down both types in a simple way:
🔹 Unbuffered Channels — "Direct Delivery"
Think of an unbuffered channel like a handshake:
The sender holds out a value, but can’t let go until someone is there to take it.
ch := make(chan int) // unbuffered
📦 How it works:
- ch <- 10 → blocks until another goroutine does <-ch
- <-ch → blocks until another goroutine does ch <- value
📌 Example:
ch := make(chan string)
go func() {
ch <- "Hello" // waits until someone receives
}()
msg := <-ch
fmt.Println(msg) // "Hello"
🧾 When to use:
- You want strict synchronization between goroutines.
- You want the sender to wait for the receiver.
🔷 Buffered Channels — "Mailboxes"
Buffered channels act like a mailbox:
The sender can drop messages into it and walk away, unless the mailbox is full.
ch := make(chan int, 2) // buffer size = 2
📦 How it behaves:
- ch <- value blocks only if the buffer is full
- <-ch blocks only if the buffer is empty
📌 Example:
ch := make(chan string, 2)
ch <- "one" // ✅ doesn't block
ch <- "two" // ✅ doesn't block
// ch <- "three" // ❌ blocks — buffer is full
fmt.Println(<-ch) // "one"
fmt.Println(<-ch) // "two"
🧾 When to use:
- You want to separate the timing between sender and receiver
- You need to store a few values temporarily
- Producer works in bursts (faster than consumer), as in the sketch below
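Here's a rough sketch of that bursty-producer case (the numbers and delays are my own choices, just for illustration):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 5) // the buffer absorbs a burst of up to 5 values

	// Producer: fires off a quick burst and finishes almost instantly,
	// because none of these sends block while the buffer has room.
	go func() {
		for i := 1; i <= 5; i++ {
			ch <- i
			fmt.Println("produced", i)
		}
	}()

	// Consumer: slower than the producer, drains the buffer one by one.
	for i := 0; i < 5; i++ {
		time.Sleep(200 * time.Millisecond) // simulate slow handling
		fmt.Println("consumed", <-ch)
	}
}
```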
🔐 Closing Channels in Go
Now let's focus on closing channels, a super important part of managing communication between goroutines. It's not just about stopping data; it's about doing it the right way so your program stays safe, efficient, and bug-free.
🧠 Why Closing a Channel Matters
Imagine you’re running a conveyor belt in a factory. Workers (goroutines) put boxes (data) on the belt, and quality checkers (other goroutines) take those boxes off and inspect them.
But here’s the thing:
If the workers finish their job and leave without telling anyone, the checkers will keep waiting, thinking more boxes will come. Forever. 😬
In Go, closing a channel is how workers say, "Hey, I’m done sending data!"
That way, receivers (the checkers) know it's safe to stop waiting.
🤔 What Happens When a Channel Is Closed?
Here’s what you need to know:
- ✅ Receiving from a closed channel still works! You'll get:
  - All remaining values in the channel.
  - Then zero values (0, "", nil, etc.) after it's empty.
- ❌ Sending to a closed channel causes a panic.
You can also check whether a channel is closed using the ok idiom:
value, ok := <-ch
if !ok {
fmt.Println("Channel is closed!")
}
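Here's a small runnable sketch (my own) that shows both behaviors at once: the leftover values come out first, and then ok turns false once the closed channel is empty:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch) // no more sends after this point

	for i := 0; i < 3; i++ {
		value, ok := <-ch
		if !ok {
			fmt.Println("Channel is closed and empty!")
			continue
		}
		fmt.Println("Got:", value)
	}
}
```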
⏳ When Should You Close a Channel?
Here’s the golden rule:
🟢 Only the sender should close the channel.
And only when all data has been sent.
- ✅ Do close the channel when sending is done.
- ❌ Don't close from multiple places.
- ❌ Don't close a channel if you're only receiving from it.
Think of closing a channel like turning off a faucet; only the person using it should do it.
✅ How to Close a Channel
You use the built-in close() function:
close(ch)
🧪 Example: Properly Closing a Channel
package main
import "fmt"
func greet(ch chan string) {
ch <- "Hello from goroutine!"
close(ch) // Close after sending
}
func main() {
messageChannel := make(chan string)
go greet(messageChannel)
for message := range messageChannel {
fmt.Println("Received:", message)
}
fmt.Println("All messages received!")
}
Output
Received: Hello from goroutine!
All messages received!
The for range loop automatically stops when the channel is closed.
Nice and clean!
❗ What If We Don’t Close the Channel?
Let’s say you forget to close the channel:
package main
import "fmt"
func greet(ch chan string) {
ch <- "Hello from goroutine!"
// ⚠️ No close
}
func main() {
messageChannel := make(chan string)
go greet(messageChannel)
for message := range messageChannel { // ⚠️ This will block forever
fmt.Println("Received:", message)
}
}
😱 This will cause a deadlock!
The for range loop keeps waiting for new messages that will never come, because the sender is done but didn't signal it. Your program will just hang.
📬 Analogy: Letters in the Mailbox
Think of a channel like a mailbox:
- 📨 Senders put letters inside.
- 📭 Receivers check and collect them.
But if no one ever puts up a “no more letters” sign, the mailman keeps checking the box... forever. 🕳️
Closing the channel = putting up that “no more mail” sign.
🧠 Summary of Best Practices
✅ Do | ❌ Don't |
---|---|
Close the channel only from the sender | ❌ Don't close the channel from the receiver |
Close only once, in one place | ❌ Don't close from multiple goroutines |
Use for range ch to receive safely | ❌ Don't send on a closed channel; panic! |
🧵 What If Multiple Goroutines Are Sending?
Here’s a common scenario: You have multiple goroutines sending data into the same channel.
The problem?
If more than one of them tries to close the channel — ❌ Panic alert!
🔑 Solution:
Use a sync.WaitGroup to:
- Track when all senders are done.
- Let a single, dedicated goroutine close the channel after that.
🧪 Example: Safe Channel Closing with Multiple Senders
package main
import (
"fmt"
"sync"
)
func worker(id int, ch chan<- string, wg *sync.WaitGroup) {
defer wg.Done()
ch <- fmt.Sprintf("worker %d done", id)
}
func main() {
var wg sync.WaitGroup
ch := make(chan string)
numWorkers := 5
wg.Add(numWorkers)
// Start multiple sender goroutines
for i := 1; i <= numWorkers; i++ {
go worker(i, ch, &wg)
}
// 🔒 Dedicated goroutine to close the channel
go func() {
wg.Wait() // Wait until all workers are done
close(ch) // ✅ Only one closer
}()
// Receive from channel until it's closed
for msg := range ch {
fmt.Println("Received:", msg)
}
}
Output:
Received: worker 1 done
Received: worker 2 done
Received: worker 3 done
Received: worker 4 done
Received: worker 5 done
(The exact order may vary between runs, since the workers run concurrently.)
🧠 Why This Pattern Works:
1. Goroutines (Workers)
Each worker runs in its own goroutine, sends a message into ch, and then calls wg.Done() to signal it's finished.
2. WaitGroup
The WaitGroup keeps track of all running workers.
We start by calling wg.Add(numWorkers) (5 here) to tell it we're waiting for 5 workers.
Each worker calls wg.Done() when it's finished.
3. Closing the Channel
We spawn a separate goroutine whose only job is to wait until all workers are done (wg.Wait()), and then safely close the channel.
This ensures only one goroutine closes the channel, and it does so after all sends are complete.
4. Receiving Messages
The for msg := range ch loop reads messages as long as the channel is open.
Once it's closed and empty, the loop ends, cleanly and safely.
🎯 A Quick Note: What's chan<- string?
You might have noticed this weird-looking function signature in our example:
func worker(id int, ch chan<- string, wg *sync.WaitGroup)
What's that chan<- string thing? 🤔
This is a send-only channel, meaning the worker function can only send data into the channel, not receive from it.
It's a good practice because:
- It makes your code safer and easier to understand.
- It prevents accidental reads from the channel inside the sender.
Here’s a quick comparison:
Syntax | Meaning |
---|---|
chan string | Read and write (send + receive) |
chan<- string | Send-only |
<-chan string | Receive-only |
This kind of type narrowing helps Go enforce better separation of concerns between senders and receivers. 🛡️
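As a quick illustration (my own sketch, not from the example above), here's how the compiler enforces those directions:

```go
package main

import "fmt"

// producer may only send on ch.
func producer(ch chan<- int) {
	ch <- 42
	// v := <-ch // ❌ compile error: cannot receive from send-only channel
}

// consumer may only receive from ch.
func consumer(ch <-chan int) int {
	return <-ch
}

func main() {
	ch := make(chan int, 1) // bidirectional; narrowed automatically at each call
	producer(ch)
	fmt.Println(consumer(ch))
}
```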
🧭 Enter select: Choosing Between Channels
Sometimes, you're listening to multiple channels and want to act as soon as any one of them sends data.
That's exactly what select is for.
🧠 Motivation: Pick the First to Reply
Imagine you’re waiting for two friends to text you.
Whoever replies first, you’ll go hang out with them.
That's what select does in Go. It waits for any one of multiple channels to send data, and responds immediately.
🧪 Example: First Response Wins
package main
import (
"fmt"
"time"
)
func main() {
fast := make(chan string)
slow := make(chan string)
go func() {
time.Sleep(1 * time.Second)
fast <- "I'm fast!"
}()
go func() {
time.Sleep(2 * time.Second)
slow <- "I'm slow!"
}()
select {
case msg := <-fast:
fmt.Println("Got:", msg)
case msg := <-slow:
fmt.Println("Got:", msg)
}
}
Output:
Got: I'm fast!
💡 What Happens?
- The first goroutine sleeps for 1 second and sends "I'm fast!"
- The second goroutine sleeps for 2 seconds and sends "I'm slow!"
- 🔍 select waits until any one of the channels is ready.
- Because fast sends first, that case runs, and we skip the slower one!
🔁 What if Nothing Is Ready Yet?
You can use a default case inside select to avoid blocking. Useful when you want to do something else if no channel is ready right now.
🧪 Example: select with default (Non-blocking)
package main
import (
"fmt"
"time"
)
func main() {
fast := make(chan string)
slow := make(chan string)
go func() {
time.Sleep(2 * time.Second)
fast <- "I'm fast!"
}()
go func() {
time.Sleep(3 * time.Second)
slow <- "I'm slow!"
}()
// Try to receive before any goroutine sends
select {
case msg := <-fast:
fmt.Println("Got:", msg)
case msg := <-slow:
fmt.Println("Got:", msg)
default:
fmt.Println("No messages yet. Doing something else.")
}
// Wait for messages to arrive
time.Sleep(3 * time.Second)
// Try again after waiting
select {
case msg := <-fast:
fmt.Println("Later got:", msg)
case msg := <-slow:
fmt.Println("Later got:", msg)
default:
fmt.Println("Still nothing...")
}
}
Output:
No messages yet. Doing something else.
Later got: I'm fast!
🧠 What Happens in This Program?
- The first select runs immediately, but no message has arrived, so default is chosen.
- Later, both channels are ready. When more than one case is ready, Go picks one at random, so which one runs may change each time you run the program:
Later got: I'm fast!
or
Later got: I'm slow!
Use default when you don't want to wait around, perfect for non-blocking checks or responsive UIs.
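If you want to see that random choice in action, here's a small sketch (mine, using buffered channels so both cases are always ready) that tallies which case wins over a few iterations:

```go
package main

import "fmt"

func main() {
	a := make(chan string, 1)
	b := make(chan string, 1)

	counts := map[string]int{}
	for i := 0; i < 10; i++ {
		a <- "from a"
		b <- "from b"

		// Both cases are ready, so Go picks one pseudo-randomly.
		select {
		case msg := <-a:
			counts[msg]++
			<-b // drain the other channel so the next round starts empty
		case msg := <-b:
			counts[msg]++
			<-a
		}
	}
	fmt.Println(counts) // roughly split between "from a" and "from b"
}
```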
🚦 Timeouts with select
Another common pattern is using select with a timeout channel:
package main
import (
"fmt"
"time"
)
func main() {
ch := make(chan string) // Create a channel to receive a string
// Start a goroutine that waits for 2 seconds and then sends a message
go func() {
time.Sleep(2 * time.Second) // Simulate a delay
ch <- "Finally got data!" // Send message after delay
}()
// Use select to either receive from the channel or timeout
select {
case msg := <-ch:
// If data is received from the channel before timeout
fmt.Println("Received:", msg)
case <-time.After(1 * time.Second):
// If no data arrives in 1 second, this case runs
fmt.Println("Timeout! Moving on...")
}
}
Output:
Timeout! Moving on...
🧠 What's Happening?
- The goroutine sleeps for 2 seconds before sending data.
- But time.After(1 * time.Second) creates a channel that sends a signal after 1 second.
- The select waits for whichever comes first.
- Since the timeout comes before the message, the program prints: "Timeout! Moving on..."
🧰 This pattern is super useful for:
- Timing out slow network calls
- Canceling tasks that take too long
- Preventing your app from getting stuck waiting (see the sketch below)
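As an example of the first two uses, here's a hedged sketch (the slowLookup helper and the delays are my own, purely illustrative) that gives up on a slow call after one second:

```go
package main

import (
	"fmt"
	"time"
)

// slowLookup simulates a network call that takes a while.
func slowLookup() string {
	time.Sleep(3 * time.Second)
	return "result from the slow service"
}

func main() {
	resultCh := make(chan string, 1) // buffered, so the goroutine never leaks on timeout

	go func() {
		resultCh <- slowLookup()
	}()

	select {
	case res := <-resultCh:
		fmt.Println("Got:", res)
	case <-time.After(1 * time.Second):
		fmt.Println("Timed out, using a fallback instead")
	}
}
```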
🏁 Advanced Pattern: Fan-Out, Fan-In
The Fan-Out, Fan-In pattern is a powerful concurrency pattern in Go, designed to help distribute work across multiple workers and collect their results efficiently.
💡 What is Fan-Out, Fan-In?
- 👨🍳 Fan-Out: Distributing tasks across multiple workers (like assigning jobs to different team members).
- 🧾 Fan-In: Gathering the results from all workers into one place (like collecting the completed work at the end of the day).
🍕 Think of it like a pizza restaurant:
- Fan-Out: Customer orders are sent to multiple chefs, who each make different pizzas simultaneously.
- Fan-In: All completed pizzas are gathered at the same pickup counter for delivery.
That's the Fan-Out, Fan-In pattern! It's a way to:
- Break a big task into smaller parts
- Work on those parts at the same time
- Combine all the results at the end
✅ A Clearer Example (with Code)
package main
import (
"fmt"
"sync"
"time"
)
func main() {
// Step 1: Create channels for jobs and results
jobs := make(chan int, 5) // Channel to send work
results := make(chan int, 5) // Channel to collect results
// Step 2: Start multiple workers (Fan-Out)
var wg sync.WaitGroup
numberOfWorkers := 3
// Launch 3 workers
fmt.Println("Starting workers...")
wg.Add(numberOfWorkers)
for workerId := 1; workerId <= numberOfWorkers; workerId++ {
// Start each worker in its own goroutine
go worker(workerId, jobs, results, &wg)
}
// Step 3: Send jobs to the workers
fmt.Println("Sending jobs...")
jobsToProcess := 6 // We'll process 6 jobs
for jobId := 1; jobId <= jobsToProcess; jobId++ {
jobs <- jobId
}
close(jobs) // Close jobs channel to signal no more jobs
// Step 4: Wait for all workers to finish and close results channel
go func() {
wg.Wait() // Wait for all workers to finish
close(results) // Signal that all results are collected
}()
// Step 5: Collect and print all results (Fan-In)
fmt.Println("Collecting results...")
totalProcessed := 0
for result := range results {
totalProcessed++
fmt.Printf("Got result: %d\n", result)
}
fmt.Printf("\nAll done! Processed %d jobs with %d workers\n",
totalProcessed, numberOfWorkers)
}
// worker function: processes jobs and sends back results
func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
defer wg.Done() // Mark this worker as done when the function exits
// Process all jobs assigned to this worker
for job := range jobs {
fmt.Printf("Worker %d started job %d\n", id, job)
// Simulate actual work with different durations
workTime := 300 * time.Millisecond
if job%2 == 0 {
workTime = 500 * time.Millisecond // Even numbered jobs take longer
}
time.Sleep(workTime)
// Send the result (job × 10) back through results channel
result := job * 10
results <- result
fmt.Printf("Worker %d finished job %d → result: %d\n", id, job, result)
}
fmt.Printf("Worker %d completed all assigned jobs\n", id)
}
🔍 How the Fan-Out, Fan-In Example Works (Step-by-Step)
1.🛠 Setting Up the Channels
We create two channels:
- jobs channel: To send work to workers
- results channel: To collect completed work
2.🧯 Fan-Out Process
We create 3 worker goroutines that run simultaneously:
- Each worker runs independently in the background
- All workers watch the same jobs channel for incoming work
- This distributes the workload across multiple processors
3.📦 Sending Jobs
We push 6 jobs into the jobs channel:
- Each worker picks up jobs as they become available
- Workers might process different numbers of jobs depending on their speed
- We close the jobs channel to signal that no more work is coming
4.📥 Fan-In Process
We collect all results through the single results channel:
- As each worker finishes a job, it sends the result to the results channel
- The main program reads all results from the results channel
- We only close the results channel after all workers are done
5.🔒 Coordination with WaitGroup
sync.WaitGroup ensures that we:
- Know when all workers are done.
- Only close the results channel once all processing is finished.
🛠 Real-World Applications
This pattern is useful when you need to:
- Process many items in parallel (e.g., analyzing multiple files).
- Make multiple API calls simultaneously
- Break a large task into smaller independent pieces
Think of it like multiple cashiers at a store (fan-out) all putting money into the same safe at the end of their shift (fan-in).
🚀 Key Benefits of This Pattern
- Speed: Work happens in parallel, making better use of multiple CPU cores
- Scalability: You can easily adjust the number of workers based on your needs
- Resource Control: Channels act as buffers to prevent overwhelming the system
🔍 What's Happening Under the Hood
When you run this code, you'll see workers picking up jobs at different times and finishing at different speeds. Some might do more work than others, but together they finish all the jobs much faster than doing them one by one!
Think of it like multiple checkout lanes at a grocery store versus a single lane: everything gets done much faster.
🎯 Conclusion
Go makes it easy to write programs that handle many tasks at once, which is what concurrency means. It helps your programs run faster and stay responsive.
Here are the main things to remember:
- Goroutines are like super-lightweight threads; they let your code run in the background without much cost.
- Channels are the safe way for goroutines to talk to each other and share data.
- WaitGroups help you wait until a group of goroutines finishes their work.
- Mutexes are tools that make sure only one goroutine can use a shared resource at a time; this prevents bugs called race conditions.
- Select lets you listen to multiple channels and respond to whichever one is ready first.
Mastering these tools step by step will help you build faster, more efficient Go programs.
🚀 Happy concurrent programming!