Learning Backend #1

Building Systems, Not Just Running Them

Hi, I’m Gru.

I work as a Site Reliability Engineer.

My daily job is to keep systems alive — make sure services don’t crash, latency doesn’t spike, alerts don’t explode at 3 AM, and users never feel the chaos underneath.

But over time, one question started bothering me:

How can I truly make systems reliable if I only understand them from the outside?

As SREs and DevOps engineers, we often operate black boxes. We deploy them, monitor them, scale them, and debug them when they break. Yet many of us never actually build them.

We tune JVM flags without knowing how memory is allocated.

We autoscale APIs without understanding how request handling really works.

We debug “high CPU” without knowing what the code is doing underneath.

That’s like being a mechanic who has never opened an engine.

This blog is my attempt to open that engine. Not as an expert, but as a learner who wants to understand systems from the inside out.


Why Backend?

Let’s strip everything down to first principles.

A backend is just a program that listens for requests, applies rules, reads or writes data, and sends a response. That’s it. Every API, every microservice, every “platform” is just a more evolved version of this loop.

So learning backend is really about learning how programs talk over the network, how memory and CPU behave under load, how concurrency actually works, how failures propagate, and how design choices slowly turn into production incidents.

These are exactly the forces we deal with in SRE.

If I want to make systems reliable, I must understand how they are born.

We’re going to learn it in Go.
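
To make that listen, apply rules, respond loop concrete before we go any deeper, here is a minimal sketch of a backend in Go. It uses only the standard library's net/http package; the /hello route and the greeting logic are placeholders I made up for illustration.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// handler is the whole backend loop in miniature:
// a request arrives, we apply a rule, we send a response.
func handler(w http.ResponseWriter, r *http.Request) {
	// "Apply rules": the rule here is trivial, greet by query parameter.
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "world"
	}
	// "Send a response": write back to the client.
	fmt.Fprintf(w, "hello, %s\n", name)
}

func main() {
	http.HandleFunc("/hello", handler)
	// "Listen for requests": serve on port 8080 until the process dies.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Run it with go run main.go, then curl "localhost:8080/hello?name=gru". That tiny loop is the seed of every API and microservice we'll talk about.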


A Short History of Go (and Why It Exists)


Go is young when you place it next to giants like C, Java, or Python. But age is not what made it interesting to me. Its origin is. Go was not born in a lab or as part of some academic experiment. It was born inside Google, under real production pressure, where systems were already operating at a scale most engineers never experience.

Around 2007, engineers at Google were dealing with enormous codebases, painfully slow build times, multi-core machines that existing languages did not fully exploit, and systems that were increasingly network-driven and distributed. The tools they had were showing cracks. Languages were either fast but painful to work with, or pleasant to use but slow and unpredictable at scale. They needed something that could compile fast, run efficiently, and still be easy for humans to reason about. No existing language satisfied all three at once.

Instead of patching old ideas, they asked a deeper question: What would a language look like if it were designed for modern systems from the very beginning? That question led three engineers—Rob Pike, Ken Thompson, and Robert Griesemer—to design Go. It was open-sourced in 2009, reached its first stable release in 2012, and has been quietly shaping the infrastructure of the internet ever since.

Go did not grow because it was clever or flashy. It grew because it matched reality. It fit the world of large teams, distributed systems, and production software that has to be understood, debugged, and trusted under pressure.


What Go Was Designed For


Most languages were designed in a world of single-core CPUs, local programs, small codebases, and few collaborators. Go was designed in a world of multi-core machines, networked services, massive codebases, large teams, and distributed systems.

So its goals were different.

Not “How expressive can we make this language?”

But: How fast can teams build together? How easily can code be read by others? How naturally can we model concurrency? How predictable is this in production? How little magic exists when debugging at 3 AM?

This led to some strong design choices: few keywords, no inheritance hierarchies, composition over abstraction, explicit dependencies, built-in concurrency, single static binaries, and garbage collection with low latency.

Many features that exist in other languages are intentionally missing in Go. No inheritance. No constructors. No exceptions. No pointer arithmetic. No implicit type conversions.
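
Two of those omissions are easy to see in a few lines. Below is a minimal sketch (the Logger and Server types are invented for illustration): Go refuses implicit numeric conversions, and code reuse happens through struct embedding, which is composition rather than an inheritance hierarchy.

```go
package main

import "fmt"

// Logger is a small capability we want to reuse.
type Logger struct{ prefix string }

func (l Logger) Log(msg string) { fmt.Println(l.prefix, msg) }

// Server gains Log by embedding Logger: composition, not inheritance.
type Server struct {
	Logger
	addr string
}

func main() {
	var n int = 3
	ratio := float64(n) / 2 // the explicit float64(n) is required
	// var bad float64 = n  // compile error: no implicit int to float64 conversion

	s := Server{Logger: Logger{prefix: "[server]"}, addr: ":8080"}
	s.Log(fmt.Sprintf("listening on %s, ratio=%v", s.addr, ratio))
}
```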

At first, that feels like a limitation. But the intention is clarity.

Go’s philosophy is simple:

Code is read more than it is written.

The reader matters more than the writer.

When you read Go code, it should be obvious what happens. When you debug Go in production, it should behave predictably.

Less magic. More truth.


Why Go Fits the Real World

At a high level, Go shines in areas where systems are long-running, network-heavy, concurrent, and deployed at scale. That’s why you see Go everywhere in web backends, cloud and network services, DevOps and SRE tooling, and command-line interfaces.

But what actually makes Go suitable for these?


Single Binary Executable

One of the most underrated aspects of Go is that it compiles into a single binary file. No interpreter. No runtime environment. No dependency jungle. Just a file you can run.

This matters because containers become smaller and faster, deployments become simpler, startup time is near-instant, and failures are easier to reason about. In production, fewer moving parts mean fewer ways to break. A Go service is not “an app plus a runtime plus a config maze.” It is just a program.


Minimalism by Design

Go is not trying to impress you. It doesn’t offer ten ways to do the same thing. It gives you just enough to build real systems.

That forces discipline. You model problems directly. You write what you mean. You don’t hide behavior behind clever abstractions. In large teams and long-lived systems, this matters more than elegance.


Automatic Garbage Collection

Go manages memory for you. You don’t manually allocate and free like in C, and you don’t fight object lifetimes all day. This lets you focus on data flow, system behavior, and failure modes.

Some people dislike garbage collection, but Go’s goal is productivity without sacrificing too much performance. In backend systems, clarity and safety often beat micro-optimizations.
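
One concrete illustration of what that buys you (newUser is a made-up helper): in Go you can return a pointer to a local variable, and escape analysis plus the garbage collector handle its lifetime. The same pattern in C would hand you a dangling pointer.

```go
package main

import "fmt"

type User struct {
	Name string
}

// newUser returns a pointer to a local value. The compiler's escape
// analysis moves it to the heap, and the GC frees it once nothing
// references it. There is no free() to call, and nothing to forget.
func newUser(name string) *User {
	u := User{Name: name}
	return &u
}

func main() {
	u := newUser("gru")
	fmt.Println(u.Name) // safe: the runtime owns the lifetime
}
```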


One Way to Format Code

Go ships with a single formatter: gofmt. Everyone uses it. There are no style wars, no Prettier configs, no debates.

Code looks the same everywhere. This sounds small, but at scale it is huge. It removes friction between humans.


Built-in Testing & Benchmarking

Testing in Go is not an afterthought. You don’t install frameworks or wire tools. You just write tests. The same is true for benchmarking.

This nudges you toward measuring behavior, thinking in performance, and treating correctness as part of development. That’s how production-grade systems are built.
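
As a sketch of how little ceremony is involved (Sum is a stand-in for real logic, and the file would be named something like sum_test.go): a test and a benchmark live next to the code, and go test and go test -bench=. run them with nothing extra installed.

```go
package sum

import "testing"

// Sum is the function under test, a stand-in for real logic.
func Sum(nums []int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

// go test discovers and runs this automatically.
func TestSum(t *testing.T) {
	if got := Sum([]int{1, 2, 3}); got != 6 {
		t.Fatalf("Sum = %d, want 6", got)
	}
}

// go test -bench=. runs this and reports ns/op.
func BenchmarkSum(b *testing.B) {
	nums := []int{1, 2, 3, 4, 5}
	for i := 0; i < b.N; i++ {
		Sum(nums)
	}
}
```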


Concurrency That Matches Reality

Modern systems are concurrent by nature. Requests arrive in parallel. IO waits overlap. Tasks run independently.

Go models this directly. Goroutines are cheap. Channels make communication explicit. WaitGroups and Mutexes give structure. You don’t “bolt on” concurrency. You think in it.

Compared to callback-based or promise-based models, this feels closer to how systems actually behave.
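
Here is a minimal sketch of those primitives working together (the task names and the sleep are placeholders for real IO): goroutines do the work in parallel, a WaitGroup tracks completion, and a channel carries results back explicitly.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	tasks := []string{"fetch-user", "fetch-orders", "fetch-billing"}
	results := make(chan string, len(tasks)) // channel: explicit communication
	var wg sync.WaitGroup                    // WaitGroup: structured completion

	for _, task := range tasks {
		wg.Add(1)
		go func(t string) { // goroutine: a cheap concurrent unit
			defer wg.Done()
			time.Sleep(10 * time.Millisecond) // stand-in for real IO
			results <- t + ": done"
		}(task)
	}

	wg.Wait()      // block until every goroutine has finished
	close(results) // safe to close: no senders remain

	for r := range results {
		fmt.Println(r)
	}
}
```

Notice there are no callbacks and no promise chains. Each goroutine reads top to bottom, the way you would describe the work out loud.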


Why This Series Exists

This is not a “copy-paste tutorial” blog.

You won’t see:

“Run this command and boom, backend ready.”

Instead, every post will ask: What problem does this concept exist to solve? Why does this problem exist in the first place? What breaks if we ignore it? How do real systems behave under load? How does Go approach this problem? How should we think about it as engineers?

We’ll go from “What is a server?” to “How does a request move through memory?” to “Why does concurrency shape system design?” to “How do small design choices become outages?”

Slow. Grounded. From first principles.


Who This Is For


This is for beginners who feel overwhelmed by buzzwords, for DevOps and SRE engineers who want to build, not just operate, and for anyone who wants to understand systems from the inside out.

I’m not teaching from mastery.

I’m learning in public.

And I’ll follow one rule throughout this journey:

If I can’t explain it simply and clearly, I don’t understand it yet.

Let’s stop treating systems like magic.

Let’s build them.
