Mahamed Belkheir


Things I learned about Go

Go is a statically typed, compiled, imperative language with support for elements of functional and object-oriented programming. Its main goals are readability, first-class concurrency support, minimal build times, performance, and runtime efficiency.

While readability is somewhat subjective, Go manages to achieve the rest of its goals. Go has seen the most success in web services and tooling; notable projects built with it include Docker, Kubernetes, Hugo, and CockroachDB.

This is an attempt at an unbiased evaluation, but it's still an opinion piece. I'm a backend developer, and JavaScript and Python were my primary languages when I first picked up Go; I mostly use TypeScript and Go now.

Criteria

I already had an idea of what kind of language I needed to complement my tech stack, which helped narrow down my options.

  • Performant at CPU-heavy tasks
  • First class concurrency support
  • Compiles to native binary
  • Statically typed

This left me to pick between Rust and Go. I opted for Go for multiple reasons:

  1. A lower learning curve: not only was it easier to learn, it would also leave simpler legacy codebases behind me.
  2. At the time I made my choice, Rust's async implementations didn't seem as mature as Go's (a built-in language feature vs. a third-party library at v0.*).
  3. A slightly larger job market for Go developers.

The neutral parts

Go is similar to C in syntax: it's an imperative language with some elements of other paradigms. Go supports functions as first-class values, but it does not support generics yet, which leaves you with the following:

func mapInt(arr []int, cb func(a int) int) []int {
    newArr := make([]int, len(arr))
    for i := range arr {
        newArr[i] = cb(arr[i])
    }
    return newArr
}

You're able to define higher-order functions, but only for specific types, or, alternatively, with the empty interface type, which leads to more verbose type assertion code.
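
For illustration, here's a rough sketch of the empty interface version (the mapAny name and example are my own, not from any standard library):

// mapAny accepts any slice of interface{} values, but both the caller and the
// callback lose static type information and have to assert types themselves
func mapAny(arr []interface{}, cb func(a interface{}) interface{}) []interface{} {
    newArr := make([]interface{}, len(arr))
    for i := range arr {
        newArr[i] = cb(arr[i])
    }
    return newArr
}

doubled := mapAny([]interface{}{1, 2, 3}, func(a interface{}) interface{} {
    return a.(int) * 2 // a type assertion is needed inside the callback
})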

Go uses C-style structs for objects, but you can define methods on structs, as well as embed struct types into each other, which allows for a limited form of inheritance.

type Human struct {
    name string
}

func (h Human) Greet() {
    fmt.Printf("my name is %v \n", h.name)
}

type Employee struct {
    Human
    salary int
}

func (e Employee) ShowSalary() {
    fmt.Printf("%v's salary is %v \n", e.name, e.salary)
}

func main() {
    employee := Employee{Human{"bob"}, 10} // you must assign an instance of the embedded struct
    employee.Greet()
    employee.ShowSalary()
}

Struct embedding isn't common; I have yet to need it. It seems more prevalent in library code than in application code, e.g. with ORM models.

The bad parts

I prefer to start with the bad news: Go is quite opinionated, and it will not be to everyone's liking. Most of its opinions are enforced by the language rather than suggested as practice.

Error handling

Errors in Go are values; there are no exceptions, with the exception of panics.

The default way of handling errors is for functions to return an error as their last return value; if the error is nil (the null value), the function executed successfully.

val, err := doTheThing()
if err != nil {
    log.Fatalf("encountered an error: %v", err)
}
fmt.Printf("received value: %v\n", val)

Go does not let you accidentally forget about functions that return an error:

val := doTheThing() // compile error: doTheThing returns two values
val, _ := doTheThing() // compiles, but the error is explicitly ignored

You're free to explicitly ignore the error, but that means val is left as a zero value, possibly a nil pointer, and dereferencing a nil pointer will cause a panic.
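
For example, assuming a hypothetical doTheThing that returns a pointer and an error:

// doTheThing is assumed to return (*Result, error)
val, _ := doTheThing()    // the error is explicitly ignored
fmt.Println(val.Name)     // if doTheThing failed, val is likely nil, and dereferencing it panics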

Errors as values avoid the pitfalls of uncaught errors without the strictness of declared exceptions like Java's, and exceptions also carry a performance penalty. But Go does not attempt to abstract the problem away; it decided the existing solutions are not good enough, so you're left to deal with every error manually.

Take, for example, creating multiple SQL prepared statements. Creating a prepared statement requires a network request to the DB and a correct SQL statement, so many things can go wrong.
Thus the function sql.DB.Prepare() can error out, but when you're preparing multiple statements, the error handling is the same for each one.

conn, err := sql.Open("mysql", connectionString)
if err != nil {
    log.Fatalf("error opening database: %v", err)
}

selectAllUsers, err := conn.Prepare("SELECT * FROM users;")
if err != nil {
    log.Fatalf("error preparing statement: %v", err)
}
selectUserById, err := conn.Prepare("SELECT * FROM users WHERE id = ?;")
if err != nil {
    log.Fatalf("error preparing statement: %v", err)
}
selectAllActiveUsers, err := conn.Prepare("SELECT * FROM users WHERE status = 'active';")
if err != nil {
    log.Fatalf("error preparing statement: %v", err)
}
updateUserStatus, err := conn.Prepare("UPDATE users SET status = ? WHERE id = ?;")
if err != nil {
    log.Fatalf("error preparing statement: %v", err)
}
deleteUser, err := conn.Prepare("DELETE FROM users WHERE id = ?;")
if err != nil {
    log.Fatalf("error preparing statement: %v", err)
}

The number of redundant error checks increases with every SQL statement you need. There is, however, a pattern you can use.

Whenever you have a number of calls that share the same error handling, with no other operations taken between them, you can wrap the operations in a struct type that performs them and handles the error checking.

// this struct wraps creating prepared statements against the DB
type sqlStatement struct {
    conn *sql.DB
    err  error
}

// a method on the sqlStatement struct to create prepared statements
func (s *sqlStatement) prepare(query string) *sql.Stmt {
    // make sure no error was encountered previously
    if s.err == nil {
        // store the error result in the struct's error field;
        // if an error is encountered, stmt will be nil
        var stmt *sql.Stmt
        stmt, s.err = s.conn.Prepare(query)
        return stmt
    }
    return nil
}

statement := sqlStatement{conn: conn}

selectAllUsers := statement.prepare("SELECT * FROM users;") // succeeded
selectUserById := statement.prepare("SELECT * FROM users WHER id = ?;") // error! sqlStatement.err was set to the error
selectAllActiveUsers := statement.prepare("SELECT * FROM users WHERE status = 'active';") // sqlStatement.err is not nil, do nothing
updateUserStatus := statement.prepare("UPDATE users SET status = ? WHERE id = ?;") // sqlStatement.err is not nil, do nothing
deleteUser := statement.prepare("DELETE FROM users WHERE id = ?;") // sqlStatement.err is not nil, do nothing

// first error we caught is preserved
if statement.err != nil {
    fmt.Fatalf("error preparing sql statements: %v", statement.err)
}


This pattern gets rid of the redundant error checking, but it's not perfect. First, you have to manually extract the operation into its own type.

Secondly, if you need to use the value returned by any of the operations, you have to check for errors again at that point, e.g.:

// this struct wraps user API calls and keeps the first error encountered
type userApi struct {
    api *apiRepository
    err error
}

func (a *userApi) query(id string) *User {
    if a.err == nil {
        var user *User
        user, a.err = a.api.QueryUser(id)
        return user
    }
    return nil
}

func (a *userApi) update(id string, user *User) {
    if a.err == nil {
        a.err = a.api.UpdateUser(id, user)
    }
}

api := userApi{api: apiConn}

// incorrect usage:
user := api.query("1") // errors out, api.err is set to the error
if user.Status == "VIP" { // user is nil because the query failed; dereferencing the nil pointer (user.Status) causes a panic
    user.Balance += 100
    api.update("1", user)
}

// correct usage:
user := api.query("1")
if api.err != nil {
    log.Fatalf("error retrieving user: %v", api.err)
}
if user.Status == "VIP" {
    user.Balance += 100
    api.update("1", user)
}

As you can see, if we rely on the value returned from any of those operations, we would have to error check again.

Go chooses maintenance and readability over writability in the case of error handling, preferring verbosity over unexpected control flow changes.

Type system limitations

Go's built-in types are generic: maps (map[KeyType]ValueType), slices ([]Type), and channels (chan Type). But user-defined types cannot take type parameters yet. Take, for example, a Box type:

type Box[T any] struct {
    value T
}

func (b *Box[T]) Store(item T) {
    b.value = item
}

func (b *Box[T]) Retrieve() T {
    return b.value
}

intBox := Box[int]{}
intBox.Store(100)
var val int = intBox.Retrieve()


We don't care what's stored or retrieved until we actually use the Box. This isn't possible in Go yet; you'd have to use the empty interface type, interface{}.

type Box struct {
    value interface{}
}

func (b *Box) Store(item interface{}) {
    b.value = item
}

func (b *Box) Retrieve() interface{} {
    return b.value
}

intBox := Box{}
intBox.Store(100)
val, ok := intBox.Retrieve().(int)
if !ok {
    log.Fatalf("failed to cast box value to int")
}


This quickly becomes a pain point when multiple possible types are involved: you need a type switch statement to check which type the value holds, and if your function has to return the value, it has to return an empty interface again and repeat the type assertions one layer up.

Generally, you'd avoid passing interface{} data between functions because of the redundant type checks you have to perform before doing anything with the data.
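
For completeness, this is roughly what branching on an interface{} value looks like with a type switch (the describe function is just an illustration):

func describe(value interface{}) string {
    // a type switch branches on the dynamic type stored in the interface
    switch v := value.(type) {
    case int:
        return fmt.Sprintf("an int: %d", v)
    case string:
        return fmt.Sprintf("a string: %q", v)
    default:
        return fmt.Sprintf("something else: %T", v)
    }
}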

You can reduce the clutter by asserting the type without checking for validity; this cuts boilerplate but forgoes type safety, as a failed unchecked type assertion panics at runtime.

type Car struct {
    price int
}

type Building struct {
    price int
}

x := Car{100}
var i interface{}
i = x
y := i.(Building) // panic: interface conversion
fmt.Println(y)


However, Go v1.18 is slated to introduce generics.

Another limitation is the lack of tuple types: despite functions effectively being able to return tuples (multiple return values), you're unable to replicate that elsewhere, for example in channels.

func results() (int, error) {
    return 10, nil
}

a, err := results() // works

resultsChannel := make(chan (int, error)) // parse error, a channel expects a single type

a, err := <-resultsChannel // does not work

To group results together, you're forced to declare your own tuple-like struct type.


type intResult struct {
    result int
    err error
}

resultsChannel := make(chan intResult)

res := <- resultsChannel
a, err := res.result, res.err


A relatively minor inconvenience, and arguably solvable with generics, but first-class tuple types would have been more consistent with Go's error handling.
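
For instance, with the syntax from the generics proposal, a reusable result type could look roughly like this (a sketch, not something current Go will compile):

// Result pairs a value with an error, so it can travel through a channel as one unit
type Result[T any] struct {
    Value T
    Err   error
}

resultsChannel := make(chan Result[int])

res := <-resultsChannel
a, err := res.Value, res.Err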

The good parts

Now on to the parts that Go does well.

References and Values

In contrast to languages like JavaScript, Java, and Python, Go lets you choose whether to pass a value or a reference.

Passing a value clones it, while passing a reference passes a pointer to the memory holding the value. This is true for both primitives and struct types, with the exception of reference types like slices and maps.

In type declarations, int denotes an integer and *int denotes a pointer to an integer; the same applies to every type, e.g. string and *string, Person and *Person.

func PureFunction(num int) {
    num += 10
    fmt.Println("Pure: ", num)
}
a := 10
PureFunction(a) // "Pure: 20"
PureFunction(a) // "Pure: 20"
PureFunction(a) // "Pure: 20"
fmt.Println(a) // 10

func MutatingFunction(num *int) {
    *num += 10
    fmt.Println("Mutate: ", *num)
}
b := 10
MutatingFunction(&b) // "Mutate: 20"
MutatingFunction(&b) // "Mutate: 30"
MutatingFunction(&b) // "Mutate: 40"
fmt.Println(b) // 40

This applies to structs too: you're able to define struct methods that either copy the receiver or mutate it.

type Human struct {
    firstName string
    lastName  string
    age       int
}

func (h Human) NewKid(name string) Human {
    h.firstName = name
    h.age = 0
    return h
}

func (h *Human) AgeUp() {
    h.age += 1
}

bob := Human{"bob", "johnson", 32}
adam := bob.NewKid("adam")
fmt.Println(bob, adam) // {bob johnson 32} {adam johnson 0}
bob.AgeUp()
adam.AgeUp()
fmt.Println(bob, adam) // {bob johnson 33} {adam johnson 1}

Go makes copying structs easy and efficient, which allows for straightforward immutable and pure-function code even with objects/structs. Copy-by-default means you can pass the same struct to multiple functions without worrying about its state mutating in unpredictable ways.

If your struct represents a value rather than an entity, prefer reassigning values over mutating state.

Concurrency

Go's concurrency model is based around goroutines: lightweight green threads controlled by the Go runtime instead of the operating system.

Go is smart enough to spread your goroutines across the available resources: you can have thousands of goroutines multiplexed over any number of OS threads, with blocked goroutines making way for goroutines that are ready to run.

Cooperation between goroutines is done through channels or shared memory space.

Let's take for example the following function:

func filterBadFruit(fruits []Fruit) ([]Fruit, error) {
    cleanFruit := make([]Fruit, 0, len(fruits))
    for _, fruit := range fruits {
        if fruit.IsHealthy() {
            cleanFruit = append(cleanFruit, fruit)
        }
    }
    if len(cleanFruit) < 1 {
        return nil, errors.New("no healthy fruit found!")
    }
    return cleanFruit, nil
}

goodApples, err := filterBadFruit(mixedBag)
if err != nil {
    log.Fatal(err)
}
sendToCustomer(goodApples)

Filtering the bad apples is a compute-bound operation: we have to iterate over all the elements and conditionally perform operations. Let's assume we have another call for filtering bad oranges.

goodApples, err := filterBadFruit(mixedAppleBag)
if err != nil {
    log.Fatal(err)
}
goodOranges, err := filterBadFruit(mixedOrangeBag)
if err != nil {
    log.Fatal(err)
}

Here we have to filter all the apples first and then all the oranges. We can split the workload into another goroutine, and Go's runtime will spread the goroutines over OS threads, allowing for parallel execution.

To spawn a new goroutine, we use the go keyword before a function call.

goodApples, err := go filterBadFruit(mixedAppleBag) // syntax error
if err != nil {
    log.Fatal(err)
}
goodOranges, err := go filterBadFruit(mixedOrangeBag) // syntax error
if err != nil {
    log.Fatal(err)
}

This does not work, and the compiler will complain about it too. Once a goroutine is started, it's no longer connected to the goroutine it was started from, so you can't expect it to return values like a normal call.

As mentioned earlier, to have goroutines communicate we either use shared memory or channels. We'll attempt shared memory first.

func filterBadFruits(fruits []Fruit, results *[]Fruit) {
    cleanFruits := make([]Fruit, 0, len(fruits))
    for _, fruit := range fruits {
        if fruit.IsHealthy() {
            cleanFruits = append(cleanFruits, fruit)
        }
    }
    if len(cleanFruits) < 1 {
        *results = nil // signal the error case with a nil slice
        return
    }
    *results = cleanFruits
}

// mixedAppleBag and mixedOrangeBag are []Fruit slices holding Apple and Orange values,
// both of which fulfill the Fruit interface
var goodApples []Fruit
var goodOranges []Fruit
go filterBadFruits(mixedAppleBag, &goodApples)
go filterBadFruits(mixedOrangeBag, &goodOranges)
if goodApples == nil {
    log.Fatal("no good apples were found!")
}
if goodOranges == nil {
    log.Fatal("no good oranges were found!")
}
go sendToCustomer(goodApples)
go sendToCustomer(goodOranges)

Here we changed our filtering function to take a reference where it stores the results; references can be shared safely between goroutines as long as they're only read. But we have multiple problems here.

Firstly, we're writing to and reading from the same reference without any synchronization, which can lead to undefined behavior (a data race).

Secondly, and related to the first point, we read the results before making sure the two goroutines have finished. Remember that the goroutine running if goodApples == nil has no reason to block and wait for filterBadFruits to finish before checking. In this case, we will almost always find goodApples == nil to be true and crash, because filterBadFruits takes far longer than a single check.

Thirdly, we have no way of sending any errors encountered back to the main goroutine. We circumvented the issue here because we only had one error case, which we represented with the nil value; not a good idea.

Had you run this code and gotten the error no good apples were found!, you'd assume you somehow have a problem with the apple supply, not that you have incorrect concurrent code.
We could pass another pointer for errors, but that doesn't solve our first two issues.

One solution here is channels. A channel is a built-in Go type, chan Type; you can use it to send values (channel <- value) or receive values (value := <-channel).

There are two kinds of channels, unbuffered and buffered. Unbuffered channels block on both send and receive until another goroutine receives or sends on the other end.

That means your code stops at value := <-channel until another goroutine passes a value with channel <- value, and vice versa: you're unable to send until another goroutine accepts your message.

Buffered channels have a buffer size, allowing you to send without waiting; receiving is still a blocking operation, and once the buffer is full, sends block as usual.
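
A minimal sketch of the difference:

unbuffered := make(chan int)  // sends block until another goroutine receives
buffered := make(chan int, 2) // sends succeed immediately while the buffer has room

buffered <- 1 // does not block
buffered <- 2 // does not block
// a third send would block here until another goroutine received a value

go func() { unbuffered <- 1 }() // sent from another goroutine, otherwise this send would block forever
fmt.Println(<-unbuffered, <-buffered, <-buffered)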

For our use case here, either would work.

func filterBadFruits(fruits []Fruit, results *[]Fruit, errChan chan error) {
    cleanFruit := make([]Fruit, 0, len(fruits))
    for _, fruit := range fruits {
        if fruit.IsHealthy() {
            cleanFruit = append(cleanFruit, fruit)
        }
    }
    if len(cleanFruit) < 1 {
        errChan <- errors.New("no healthy fruit found!")
        return
    }
    *results = cleanFruit
    errChan <- nil
}

var goodApples []Fruit
var goodOranges []Fruit

appleError, orangeError := make(chan error), make(chan error)

go filterBadFruits(mixedAppleBag, &goodApples, appleError)
go filterBadFruits(mixedOrangeBag, &goodOranges, orangeError)

err := <-appleError
if err != nil {
    log.Fatalf("error getting apples: %v", err)
}
go sendToCustomer(goodApples)

err = <-orangeError
if err != nil {
    log.Fatalf("error getting oranges: %v", err)
}
go sendToCustomer(goodOranges)


The difference here is that the main goroutine will block and wait at err := <- appleError, while the two filterBadFruits goroutines continue their work.

Once the apple-filtering goroutine finishes and responds, either with an error or with nil (meaning no error), we can access the goodApples result safely, because we know the other goroutine has finished its work and won't write to it again. The same is repeated with the orange results.

That is just one example of a concurrency pattern in Go. If your goroutines return multiple results at different times, it's better to use a channel for the results too, as sketched below.
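
A rough sketch of that variation: instead of writing into shared slices, each goroutine sends its result (or error) over a channel, and the main goroutine collects them as they arrive. The fruitResult type and filterToChannel wrapper are my own names for illustration:

type fruitResult struct {
    fruits []Fruit
    err    error
}

func filterToChannel(fruits []Fruit, out chan<- fruitResult) {
    clean, err := filterBadFruit(fruits) // reuse the plain, synchronous version
    out <- fruitResult{clean, err}
}

results := make(chan fruitResult, 2)
go filterToChannel(mixedAppleBag, results)
go filterToChannel(mixedOrangeBag, results)

// collect both results in whichever order they finish
for i := 0; i < 2; i++ {
    res := <-results
    if res.err != nil {
        log.Fatalf("error filtering fruit: %v", res.err)
    }
    go sendToCustomer(res.fruits)
}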

Another thing to note: in this case, our goroutines were handling compute-bound workloads, so running this code on a single-threaded machine wouldn't yield any speedup.

That said, it would still work as intended. If your goroutines handle IO-bound workloads, you still benefit, as the runtime swaps out blocked goroutines, meaning you can run multiple concurrent network or file system requests on a single thread, akin to Node.js's Promise.all/allSettled.
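
As a rough sketch, here's what running several HTTP requests concurrently and waiting for all of them looks like, similar in spirit to Promise.all (the URLs are placeholders):

urls := []string{"https://example.com/a", "https://example.com/b", "https://example.com/c"}
statuses := make(chan string, len(urls))

for _, url := range urls {
    go func(u string) {
        resp, err := http.Get(u) // the goroutine is parked while waiting on the network
        if err != nil {
            statuses <- fmt.Sprintf("%v: %v", u, err)
            return
        }
        defer resp.Body.Close()
        statuses <- fmt.Sprintf("%v: %v", u, resp.Status)
    }(url)
}

// wait for every request to report back
for range urls {
    fmt.Println(<-statuses)
}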

While Go makes concurrency easier, it doesn't make it easy; concurrent programming is a non-trivial problem.

Standard Library

Go has a very comprehensive standard library that defines several standard interfaces, including the SQL and HTTP APIs.

Go's database/sql package defines a standard way to interact with SQL databases, the only external dependency being the driver; this is similar to Java's JDBC and C#'s SQL packages. Although it can be quite verbose, third-party libraries like sqlx and sqlc abstract away the redundancy of the standard API without breaking compatibility.
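
As a rough illustration of the shape of the API (the driver, connection string, and query are placeholder assumptions, not code from this article):

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql" // the driver registers itself with database/sql via its init function
)

func countUsers() int {
    db, err := sql.Open("mysql", "user:password@/appdb")
    if err != nil {
        log.Fatalf("error opening database: %v", err)
    }
    defer db.Close()

    var count int
    // QueryRow runs the query and Scan copies the single result column into count
    if err := db.QueryRow("SELECT COUNT(*) FROM users;").Scan(&count); err != nil {
        log.Fatalf("error counting users: %v", err)
    }
    return count
}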

The same holds true for the net/http package, which provides both client and server implementations. The server is quite comprehensive compared to the default HTTP servers I've tried in Node.js and Java: net/http provides conveniences out of the box like parsing of query parameters and multipart/url-encoded request bodies.

While it suffers from the same verbosity problem, it's considerably more serviceable than the average language's built-in HTTP server, and it's entirely viable to use only the standard net/http package for creating APIs and websites.
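
A minimal sketch of a server built with nothing but net/http (the route and port are arbitrary choices):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        // query parameters are parsed for you
        name := r.URL.Query().Get("name")
        if name == "" {
            name = "world"
        }
        fmt.Fprintf(w, "hello, %v\n", name)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}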

Go also comes with well-rounded templating libraries, one for general text (text/template) and one for generating HTML (html/template); the latter works well with the net/http package for creating server-side rendered web apps.
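
For instance, html/template pairs naturally with net/http for server-side rendering; a small sketch (the template and handler are my own example):

import (
    "html/template"
    "net/http"
)

// html/template escapes values automatically, protecting against HTML injection
var page = template.Must(template.New("page").Parse("<h1>Hello, {{.Name}}</h1>"))

func helloHandler(w http.ResponseWriter, r *http.Request) {
    page.Execute(w, struct{ Name string }{Name: r.URL.Query().Get("name")})
}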

Overall, the Go standard library makes it possible to build entire applications with the bare minimum dependencies, without relying on frameworks.

This is often the general opinion online; people shun framework usage. I would personally suggest using a framework when you feel the need arises: a framework where it's not really justified is not as bad as incorrectly implemented abstractions over the standard library.

Language Tooling

The tooling Go provides out of the box is great, and it covers many aspects of development.

gofmt is the default Go formatter. Having a language-wide style ensures uniformity between codebases and adds to readability; it may not be to everyone's liking, but a set standard has more positives than negatives IMO.

The language server gopls has improved a lot recently and provides a very good out-of-the-box experience with VS Code: auto imports, autocompletion of function arguments based on type inference, and instant formatting on save. The debugger and breakpoints also work without any configuration.

Go also provides a test runner built into the go toolchain: files suffixed with _test.go containing functions prefixed with Test that take a *testing.T parameter are run by the go test command. While the standard library doesn't offer comprehensive assertion helpers, you can easily build some with the reflect package, so Go allows for thorough testing without any third-party libraries.
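
A minimal sketch of what such a test looks like, assuming the mapInt function from earlier lives in the same package:

// file: mapint_test.go
package main

import (
    "reflect"
    "testing"
)

func TestMapInt(t *testing.T) {
    got := mapInt([]int{1, 2, 3}, func(a int) int { return a * 2 })
    want := []int{2, 4, 6}
    // reflect.DeepEqual stands in for an assertion library
    if !reflect.DeepEqual(got, want) {
        t.Errorf("mapInt returned %v, want %v", got, want)
    }
}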

Go also offers other built-in tools, including a race detector and benchmarking support.
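
Benchmarks follow the same convention as tests, and the race detector is a flag on the usual commands; a sketch reusing the mapInt helper from earlier:

func BenchmarkMapInt(b *testing.B) {
    input := []int{1, 2, 3, 4, 5}
    for i := 0; i < b.N; i++ {
        mapInt(input, func(a int) int { return a * 2 })
    }
}

// run with:
//   go test -bench=.      // run benchmarks
//   go test -race ./...   // run tests with the race detector enabled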

Conclusion

Go chooses not to be a flexible language, and that is by design; for the sake of build speeds and readability, its designers took a very opinionated route.

The more you try to bend the language to your own mold, the more disappointed you will become; the best option is to accept Go's opinions, at least while you're writing Go code. I believe the positives of Go outweigh its shortcomings, but that's a subjective take that depends on how many of Go's opinions go against your own.

Top comments (7)

Venkatesh KL

Loved the article. The conclusion is true.
I've used TypeScript for quite some time, and Java long ago, so I'm comfortable with typed languages. There should be a nice balance between standards and customisation.
Go takes away the customisation part, so it looks like a kind of monopoly to people coming from other languages where there are multiple ways to do the same thing.
Go targets eventual consistency rather than absolute consistency. It's easier to onboard developers, as you don't need to explain your entire webpack setup and the bunch of lint rules that you use.
So that's a win for me as someone who talks to a lot of onboarding engineers in our organization.

Even though the article is amazingly written, I think anyone who doesn't know Go well might be a little hesitant to finish it, as it's a big list of things that you've put together. I'm not suggesting you write smaller articles, but an article with so much information may not be well understood by people who are new, due to the amount of technicality involved.
Bookmarking it right away.
Cheers πŸ‘

Mahamed Belkheir

Thanks, I chose to go with a long article because most blog posts I've seen about Go never go into enough detail to really help make a decision.

Venkatesh KL

That's true. We mostly find articles that are too short to decide whether we can use a tool or not. Also, I would like to know your views on channels, as I always see people say that channels are great, but I never saw an article that demonstrates with an example what problems a channel solves.

I think you can probably go with a series of articles, which I have seen people do a lot these days on dev.to to solve the issue of big articles. I personally feel that people tend not to read to the end if articles are too long. That's the reason I brought up the article's length.

Mahamed Belkheir

Oh yeah, I plan to start a small series of articles; I just don't have a common theme for them yet. I'd like to talk about types, references and interfaces, and concurrency patterns.

Venkatesh KL

I wanted to write something about how similar and different TypeScript and Go are, but it never took off as I didn't go deep into Go.
That might be a good starting point that you can try. πŸ‘

Pavel Bisse

I really like your POV. Thanks for the interesting article. I can only confirm the things you described here :)

AdamMomen

This is such a valuable and concise article! Not only is it exhilarating to read, it's like a kickass movie trailer for Go.

Coming from Node.js, admittedly, shipping a ton of code to production is fun. Nevertheless, the more I dove into the software industry, the more I realized the importance of having maintainable and bug-free code.
It is a discipline, something that we, as craftsmen, should respect no matter how much it sucks.

Although TypeScript solves type-safety problems with JS, it's not fast relative to Go, especially when you would like to squeeze every last hertz out of your CPU.

To give an example, I am working on developing a parser; while writing an MVP in Python is awfully convenient, it's not fast enough. I didn't try CPython yet, so that could provide better results!

Moreover, the thought of refactoring the parser in C++ had crossed my mind before; you inspired me to look at Go as a potential option.