Concurrency is a buzzword often thrown around these days in the world of programming, but what does it really mean, especially in the context of the Go programming language?
Let us start by defining concurrency.
Concurrency
Concurrency is an approach that involves handling multiple tasks in a seemingly simultaneous fashion to improve the overall performance of our program.
This should not be confused with parallelism. Parallelism is running two tasks simultaneously (at the same time) using multiple processors or cores, whereas concurrency is about managing different tasks and optimally utilizing resources to maximize efficiency. For example, you might have two tasks, one of which is IO-intensive, such as fetching data from an API. With concurrency, we can make the API call for the first task and, instead of waiting for the call to complete, proceed with the second task during that waiting time using the same core.
Goroutine
Goroutines allow us to run functions concurrently in Go.
Goroutines are lightweight threads managed by the Go runtime. They are smaller in terms of memory footprint compared to traditional threads, allowing for the creation of thousands of goroutines without excessive overhead.
Example:
package main

import (
	"fmt"
	"time"
)

func computeNumbers() {
	s := 0
	for i := 1; i <= 5; i++ {
		time.Sleep(1 * time.Second)
		s = s + i
		fmt.Println(i)
	}
}

func main() {
	go computeNumbers() // Launching a goroutine
	fmt.Println("I am not waiting for computeNumbers to finish")
	fmt.Println("Exiting program")
}
In this example, computeNumbers is executed as a goroutine, which allows the main function to continue running without waiting for computeNumbers to complete. In fact, because main returns almost immediately and exiting main terminates all remaining goroutines, this program will usually print nothing from computeNumbers at all.
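One quick (if crude) way to actually see the goroutine run is to make main sleep long enough for it to finish. A minimal sketch, with the delays shortened for illustration (a WaitGroup, covered later in this post, is the proper tool for this):

```go
package main

import (
	"fmt"
	"time"
)

func computeNumbers() {
	s := 0
	for i := 1; i <= 5; i++ {
		time.Sleep(10 * time.Millisecond) // shortened delay for illustration
		s = s + i
		fmt.Println(i)
	}
}

func main() {
	go computeNumbers()
	fmt.Println("I am not waiting for computeNumbers to finish")
	// Crude workaround: sleep long enough for the goroutine to finish.
	// A sync.WaitGroup is the idiomatic way to wait for goroutines.
	time.Sleep(100 * time.Millisecond)
	fmt.Println("Exiting program")
}
```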
Note: the keyword go is used to run a function in a goroutine.
It is important to note that we can't return data from a goroutine. So how do we pass data between goroutines? That is, if we have delegated a task to a goroutine, how do we get the result back?
That brings us to the next concept: channels
Channels
Channels are a core part of Go's concurrency model: they allow goroutines to communicate and synchronize their execution. In simple terms, channels are used to pass data between goroutines.
Example:
Following the previous example, suppose we wanted to get the result of the computeNumbers function:
package main

import (
	"fmt"
	"time"
)

func computeNumbers(ch chan int) {
	s := 0
	for i := 1; i <= 5; i++ {
		time.Sleep(1 * time.Second)
		s = s + i
		fmt.Println(i)
	}
	ch <- s
}

func main() {
	ch := make(chan int)
	go computeNumbers(ch)
	result := <-ch
	fmt.Println(result)
}
Here, a channel ch is created for transmitting the result: ch := make(chan int). The goroutine computeNumbers sends the result to the channel with ch <- s, and the main goroutine waits for and reads the result with result := <-ch.
One thing to note about these channels: when you send a value, there must be another goroutine ready to receive it. Otherwise, the sending goroutine blocks until the value written to the channel has been read by another goroutine.
Example
package main

import "fmt"

func main() {
	ch := make(chan int)
	ch <- 1           // This line will block and cause a deadlock!
	fmt.Println(<-ch) // This line is supposed to read from the channel
}
This program blocks indefinitely: the value written on ch <- 1 can only be sent once another goroutine is ready to receive it, so the send blocks forever and execution never reaches fmt.Println(<-ch).
We have to do the sending and receiving in separate goroutines like in the previous example.
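For completeness, the deadlocking program above can be fixed by moving the send into its own goroutine, so the send and the receive can meet:

```go
package main

import "fmt"

func main() {
	ch := make(chan int)
	// Send from a separate goroutine so the main goroutine
	// is free to receive; the two operations rendezvous.
	go func() {
		ch <- 1
	}()
	fmt.Println(<-ch) // prints 1
}
```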
A way to avoid this blocking effect is to use what is called a buffered channel. The channels we have discussed so far are unbuffered: they have no capacity and cannot store any values, so you have to write to one and read from it at the same time or it blocks.
But with a buffered channel, you can declare a capacity n, and the channel will allow you to send n values without a matching receive before it blocks. You can think of it as a queue of size n: sending a value to the channel adds to the queue, and receiving from the channel removes from it. You can send up to n values without removing any until the queue is full, at which point the channel blocks until you receive from it.
Example:
package main

import "fmt"

func main() {
	ch := make(chan int, 2) // buffered channel with capacity of 2
	ch <- 1
	ch <- 2
	fmt.Println(<-ch)
	fmt.Println(<-ch)
}
In ch := make(chan int, 2) we create a new channel with capacity 2, which is what makes it a buffered channel. If we run the code, it completes successfully even though we are sending and receiving in the same goroutine, because the channel ch can hold up to two values to be read later.
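To make the queue analogy concrete: len(ch) reports how many values are currently buffered and cap(ch) reports the capacity. A third unreceived send on this channel would block (and, with no other goroutine running, deadlock). A sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	fmt.Println(len(ch), cap(ch)) // 2 2: the buffer is full
	// ch <- 3 would block here: the buffer already holds 2 values.
	fmt.Println(<-ch)             // 1: receiving frees a slot (FIFO order)
	fmt.Println(len(ch), cap(ch)) // 1 2
}
```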
WaitGroups
When you have multiple goroutines in Go, a WaitGroup is a simple way to wait for your goroutines to finish their execution before proceeding. A WaitGroup has three methods: Add, Done, and Wait.
Think of it like a counter: when you launch a goroutine, you increase the counter (Add), and when a goroutine finishes its execution, it decreases the counter (Done). The Wait method blocks the execution of the program until the counter is zero, meaning all the goroutines have finished.
Example:
package main

import (
	"fmt"
	"sync"
)

func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("Worker %d starting\n", id)
	// perform some work...
	fmt.Printf("Worker %d done\n", id)
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go worker(i, &wg)
	}
	wg.Wait()
	fmt.Println("All workers completed")
}
In the example we create a WaitGroup with var wg sync.WaitGroup, then spin up 5 goroutines, adding 1 to the WaitGroup for each with wg.Add(1). When each goroutine finishes, it calls wg.Done(), indicating that one of the worker goroutines has completed. In the main goroutine we call wg.Wait() to wait for all the worker goroutines to complete before proceeding with execution.
Again, the WaitGroup works by maintaining a counter that is incremented for each launched goroutine with wg.Add(1); the counter tracks how many goroutines are still running. Each worker goroutine, upon completion, calls wg.Done() to decrement the counter. The main function then calls wg.Wait(), which blocks until the counter reaches zero, indicating all worker goroutines have finished.
Mutexes
A mutex (short for mutual exclusion) safely allows multiple goroutines to access and modify shared resources by ensuring that only one goroutine can access a resource at a time. It lets you lock access to a shared resource so that no other goroutine can touch it until it is freed (unlocked). This prevents problems like race conditions, which can occur when multiple goroutines try to access and modify the same resource at the same time.
Example:
package main

import (
	"fmt"
	"sync"
)

var counter int
var lock sync.Mutex

func increment(wg *sync.WaitGroup) {
	defer wg.Done()
	lock.Lock() // Lock the mutex before accessing the shared variable
	counter++
	lock.Unlock() // Unlock the mutex after updating
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go increment(&wg) // launch each increment as a goroutine
	}
	wg.Wait()
	fmt.Printf("Final counter value: %d\n", counter)
}
We declare a mutex with var lock sync.Mutex and use it to guard access to our counter. Each increment goroutine safely increases the counter: the increment function locks the mutex (lock.Lock()) before modifying the counter, then releases the lock (lock.Unlock()) after the modification is complete. This ensures that only one goroutine can modify the counter at any given time, maintaining data integrity.
Mutexes are useful in scenarios where multiple threads or goroutines need to access and modify a shared resource such as a global counter, a cache, a shared configuration setting, etc. For instance, in a web server, mutexes can be used to synchronize access to a shared in-memory cache, ensuring that updates to the cache don't corrupt its state when handled concurrently by multiple request handlers.
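As a rough sketch of that cache scenario (the Cache type and its methods here are illustrative, not from any particular library; real servers often reach for sync.RWMutex so concurrent readers don't block each other):

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a minimal mutex-guarded in-memory cache (illustrative sketch).
type Cache struct {
	mu   sync.Mutex
	data map[string]string
}

func NewCache() *Cache {
	return &Cache{data: make(map[string]string)}
}

// Set locks the mutex so concurrent writes can't corrupt the map.
func (c *Cache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value
}

// Get also locks, since reading a map during a concurrent write is unsafe.
func (c *Cache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.data[key]
	return v, ok
}

func main() {
	cache := NewCache()
	var wg sync.WaitGroup
	// Simulate many request handlers updating the cache concurrently.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			cache.Set("user", fmt.Sprintf("handler-%d", n))
		}(i)
	}
	wg.Wait()
	v, _ := cache.Get("user")
	fmt.Println("last value:", v)
}
```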