Originally published on The Go Engineer:
Building The Go Engineer: Teaching Go as a Software Engineering Discipline
Most Go learning material teaches the language as a sequence of syntax lessons.
Variables. Slices. Structs. Interfaces. Goroutines. Channels. HTTP handlers. Maybe a database example near the end.
That kind of material is useful. I learned from resources like that too.
But at some point, I started noticing a gap.
You can finish many Go tutorials and still not know how to think about service boundaries, tenant isolation, graceful shutdown, background workers, migrations, rate limiting, observability, validation, CI, or how to keep documentation and code from drifting apart.
That gap is why I built The Go Engineer.
I did not want to build another folder of Go snippets.
I wanted to build a repository-first Go engineering curriculum.
The core idea behind the project is simple:
A repository should not only contain the curriculum.
The repository should be the curriculum.
That single idea shaped almost every decision: the curriculum structure, the machine-readable registry, the validator, the code standards, the tests, the CI workflow, the documentation, the known limitations, and the flagship backend project called Opslane.
The Go Engineer is my attempt to teach Go as engineering, not just programming.
Why syntax is not enough
Go is a small language, but production Go is not small.
The syntax is intentionally simple. That is one of Go’s strengths.
But the hard parts of backend engineering usually live somewhere else:
- How should packages be organized?
- Where should transactions begin and end?
- How should request cancellation flow through a system?
- Who owns a goroutine?
- Who closes a channel?
- What happens when the process receives SIGTERM?
- How should a service expose metrics?
- How should migrations stay synchronized?
- How do you prevent tenant data leaks?
- How do you prove the repository still matches its own documentation?
Most beginner tutorials do not need to answer those questions.
A serious engineering curriculum eventually does.
That is why The Go Engineer is structured as more than a language walkthrough. The early sections teach fundamentals, but the deeper goal is to help learners build the instincts required to read, design, test, review, and maintain Go systems.
The curriculum is not just asking:
“Do you understand this Go feature?”
It is also asking:
“Can you use this feature inside a system that has boundaries, failures, tests, documentation, deployment, and maintenance pressure?”
That is the difference between learning Go syntax and becoming a Go engineer.
Why I made the repository part of the lesson
One of the most important decisions I made was to treat the repository itself as a teaching surface.
The Go Engineer is not only a place where lessons live. It is a system with:
- a locked curriculum architecture
- a machine-readable curriculum registry
- runnable lessons
- README-first explanations
- tests and verification surfaces
- code standards
- testing standards
- CI validation
- a curriculum validator
- known limitations
- a flagship backend project
That matters because real engineering work is never only about code.
A real project has many surfaces that must stay aligned:
- source code
- tests
- documentation
- examples
- module maps
- migration files
- CI workflows
- architecture decisions
- release notes
- contribution rules
- validation scripts
When those surfaces drift, trust drops.
The README says one thing.
The code does another.
The validator misses it.
The learner gets stuck.
The maintainer loses confidence.
The repository stops teaching clearly.
I wanted The Go Engineer to fight that drift directly.
So I started treating consistency as something the repository should enforce, not something I should simply remember.
That is why the project has proof surfaces.
A proof surface is any part of a repository that helps prove the system is still what it claims to be.
In The Go Engineer, proof surfaces include tests, CI, curriculum metadata, validation scripts, progress maps, documentation standards, module READMEs, and known limitations.
That may sound strict for an educational project.
I think it is exactly the point.
Good engineering is not about having no constraints. It is about building the right constraints into the system.
Why Opslane exists
The flagship project in The Go Engineer is called Opslane.
I built Opslane because isolated exercises are not enough.
Exercises are useful for learning individual concepts. But they do not force enough system-level decisions.
A real backend, even an educational backend, forces questions like:
- Where does configuration live?
- How does the application start?
- How are dependencies wired?
- Where does authentication belong?
- How is tenant scope enforced?
- How are migrations applied?
- How are background workers stopped?
- How are metrics exposed?
- How does rate limiting work across multiple instances?
- How does the service shut down safely?
- What should be production-grade, and what should remain a teaching implementation?
That is why Opslane exists.
Opslane is where the curriculum becomes concrete.
It is not a toy “hello world” API. It is a production-shaped backend that brings together configuration, PostgreSQL, authentication, tenant isolation, order workflows, payments, caching, workers, observability, rate limiting, migrations, Docker, CI, and graceful shutdown.
The goal is not to pretend Opslane is a drop-in SaaS template.
The goal is to teach the shape of production code.
That distinction matters.
Architecture should make ownership visible
One of my favorite principles in Go is this:
Architecture should make ownership visible.
That idea shows up throughout Opslane.
The project uses a clear application entrypoint and internal implementation boundaries:
cmd/server
internal/auth
internal/config
internal/db
internal/events
internal/handlers
internal/logging
internal/metrics
internal/middleware
internal/otel
internal/payment
internal/ratelimit
internal/services
internal/workers
The exact folder names are less important than the lesson behind them.
When someone opens the project, I want them to quickly understand:
- Where does the server start?
- Where does configuration load?
- Where does authentication live?
- Where are persistence boundaries?
- Where are background workers?
- Where are metrics collected?
- Where is rate limiting enforced?
- Where is shutdown coordinated?
A codebase should not make readers reverse-engineer ownership.
Good structure reduces guesswork.
That is especially important in a learning repository. The structure teaches before the code is even read.
I chose explicit composition over hidden magic
In Opslane, dependency wiring is intentionally direct.
The server builds its dependencies in one visible place: database, store, services, event bus, worker pools, metrics, tracing, rate limiter, and HTTP application.
That style is not flashy.
That is why I like it.
For teaching code, explicit composition is more valuable than hiding everything behind framework magic.
When dependencies are assembled directly, a learner can ask useful questions:
- Who owns the database connection?
- Who owns the metrics registry?
- Who owns the root application context?
- Who stops the workers?
- What happens if a worker pool fails to start?
- What does the HTTP layer depend on?
- What is optional, and what is required?
These questions are not distractions from engineering.
They are engineering.
I want learners to see that a backend is not only a collection of handlers. It is a runtime system with ownership, lifecycle, and failure modes.
Tenant isolation is not decoration
One of the core backend lessons in Opslane is tenant isolation.
Many simple examples start with:
Find user by email.
That is fine for a small demo.
But in a tenant-aware system, identity is not just about the user. It is also about the boundary where that user is allowed to exist.
A better question is:
Find this user inside this tenant boundary.
That is why tenant scope appears throughout Opslane: in the models, repository contracts, authentication flow, handlers, and service methods.
This is a deliberate teaching choice.
Tenant isolation should not be sprinkled into a codebase later as a patch. It should be part of the system’s shape from the beginning.
That is the lesson I want learners to absorb:
Security and tenancy are architectural concerns, not decorations.
Migrations are part of the product
One important hardening step in Opslane was making migrations more explicit.
The project now has formal SQL migrations for tenants, users, orders, payments, seed data, and rate limits.
It also keeps startup migrations aligned with those SQL files.
That alignment matters.
Migration drift is one of those problems that looks small until it breaks a real environment.
The application starts one way.
The manual migration path behaves another way.
A table exists locally but not in CI.
A feature depends on a schema that only one path creates.
That is not just a database problem. It is a proof problem.
If the repository claims to teach production-shaped backend engineering, migrations must be treated as a first-class surface.
That is why I added validation to detect Opslane consistency issues, including progress drift and migration drift.
The repository should not rely on me remembering to keep those surfaces aligned.
The repository should help prove they are aligned.
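One possible shape for such a check, sketched with illustrative file names (a real validator would read the migrations directory and the startup migration list from the code):

```go
package main

import (
	"fmt"
	"sort"
)

// checkMigrationDrift reports names that exist on one surface but
// not the other, so CI can fail loudly instead of drifting silently.
func checkMigrationDrift(sqlFiles, startupMigrations []string) []string {
	seen := map[string]int{}
	for _, f := range sqlFiles {
		seen[f] |= 1 // bit 1: present as a SQL file
	}
	for _, m := range startupMigrations {
		seen[m] |= 2 // bit 2: present in the startup path
	}
	var drift []string
	for name, mask := range seen {
		if mask != 3 { // present on only one surface
			drift = append(drift, name)
		}
	}
	sort.Strings(drift)
	return drift
}

func main() {
	drift := checkMigrationDrift(
		[]string{"001_tenants.sql", "002_users.sql"},
		[]string{"001_tenants.sql"},
	)
	fmt.Println(drift) // [002_users.sql]
}
```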
Observability should be wired, not just described
A backend that cannot explain itself is not production-shaped.
That is why Opslane includes structured logging, correlation IDs, metrics, and tracing concepts.
But adding observability packages is not enough.
A common mistake is to create observability code and never wire it into the running application. That is not observability. That is unused code.
Opslane now wires metrics into the HTTP stack and exposes a Prometheus-compatible /metrics endpoint.
The application records request counts, response classes, and latency histograms. The metrics are not just sitting in a package. They are part of the server path.
The project also includes an OpenTelemetry teaching boundary.
The tracer is wired into application startup, but the OTLP export remains intentionally documented as a teaching stub. That is an important distinction.
I want the repository to be honest about what is production-shaped and what is intentionally simplified.
A learning project should not pretend a stub is a complete production exporter.
It should say:
- This is the concept.
- This is what it teaches.
- This is where production systems go further.
That honesty is part of the curriculum.
Rate limiting should respect deployment reality
Another hardening step was rate limiting.
Opslane originally had an in-memory rate limiter. That is useful for teaching the concept, but it has an obvious limitation: each process has its own counters.
That is not enough for a horizontally scaled service.
So Opslane now includes a PostgreSQL-backed rate limiter. The running API wires that limiter into the request path.
This teaches a better backend lesson:
Rate limiting is not only an algorithm. It is a deployment concern.
If you run multiple instances, rate limit state needs to be shared or intentionally scoped.
Opslane uses PostgreSQL for this because the project already uses PostgreSQL as its system of record. That avoids adding Redis or another dependency just to teach the distributed rate-limiting concept.
The rate limiter also uses trusted-proxy-aware IP extraction.
That matters because X-Forwarded-For should not be trusted blindly. A service should only honor forwarded headers when the direct peer is a trusted proxy.
That is the kind of detail that turns a simple middleware into a real engineering lesson.
Graceful shutdown is a lifecycle problem
Graceful shutdown is one of the best teaching surfaces in backend engineering.
A beginner might think shutdown means:
The process received SIGTERM.
A backend engineer needs to think differently:
The service must stop accepting new work, let in-flight requests finish, stop event publishing, drain background workers, cancel the application context, release resources, and exit within a bounded time.
Opslane models that explicitly.
The server has a shutdown coordinator that:
- listens for termination signals
- marks the app as draining
- lets /health report the drain state
- shuts down the HTTP server with a configured timeout
- closes the event bus to new publications
- cancels the root application context
- drains worker pools
- lets the main goroutine close final resources
This is not glamorous code.
It is important code.
Bad shutdown behavior causes dropped work, unreliable deployments, broken assumptions, and confusing production incidents.
That is why I wanted graceful shutdown to be part of the flagship.
A service lifecycle should be visible.
CI is part of the curriculum too
The Go Engineer uses CI as another teaching surface.
The project does not only say “write tests.”
It runs:
go build ./...
go vet ./...
gofmt checks
go mod tidy checks
go test ./...
go test -race ./...
govulncheck ./...
go test -coverprofile=coverage.out ./...
go run ./scripts/validate_curriculum.go
docker build ...
It also enforces a coverage threshold and keeps benchmarks in a separate workflow.
This matters because CI teaches engineering priorities.
If CI only checks that code compiles, the repository teaches that compilation is enough.
I do not want that.
I want the repository to teach that release-quality work needs stronger evidence:
- formatting
- static checks
- tests
- race detection
- vulnerability scanning
- coverage
- curriculum validation
- Docker build validation
- benchmark visibility
CI is not just automation.
CI is a statement about what the project refuses to silently break.
The validator is the hidden backbone
One of the most important parts of The Go Engineer is the curriculum validator.
A repository-first curriculum has a lot of structure. That structure can drift.
Lesson paths can break.
Run commands can become stale.
README links can point nowhere.
Curriculum metadata can disagree with folders.
Module progress can lie.
Migration documentation can fall behind implementation.
The validator exists to catch those problems.
Recently, I extended validation deeper into Opslane itself. The validator now checks that Opslane progress surfaces stay aligned and that migration surfaces do not drift silently.
That is the kind of tooling I want this project to model.
A mature repository does not depend only on human memory.
It encodes important expectations into checks.
This is one of the biggest lessons I want learners to take away:
Quality should not depend only on personal discipline.
Quality should be supported by the shape of the repository.
Known limitations are a feature, not a weakness
I added a KNOWN_LIMITATIONS.md document because I do not want The Go Engineer to pretend that every teaching implementation is production-complete.
That would be dishonest.
Some implementations are intentionally simplified to make the underlying mechanics visible.
For example:
- The custom JWT-compatible token manager teaches signing, base64url encoding, and identity extraction. In production, you would usually use mature libraries or managed identity infrastructure.
- The in-memory metrics registry teaches counters, histograms, synchronization, and instrumentation mechanics. In production, you would usually use the official Prometheus client or OpenTelemetry metrics.
- The worker pools teach bounded concurrency and graceful draining. In production, durable background work often needs a queue or outbox pattern.
- The event bus teaches in-process publish/subscribe boundaries. In distributed systems, you would likely use Kafka, NATS, EventBridge, or another external event backbone.
- The OpenTelemetry exporter boundary teaches tracing concepts, but the final network dispatch is intentionally stubbed for educational clarity.
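As one concrete illustration of the first item, the signing mechanics a JWT-compatible token manager teaches can be sketched like this. This is a teaching sketch, not Opslane's implementation, and real systems should use a vetted library with proper claims, algorithm pinning, and expiry handling:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// sign base64url-encodes the payload and appends an HMAC-SHA256
// signature over the encoded form.
func sign(payload, secret string) string {
	enc := base64.RawURLEncoding.EncodeToString([]byte(payload))
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(enc))
	sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return enc + "." + sig
}

// verify recomputes the MAC and compares it in constant time.
func verify(token, secret string) (string, bool) {
	parts := strings.SplitN(token, ".", 2)
	if len(parts) != 2 {
		return "", false
	}
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(parts[0]))
	want := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	if !hmac.Equal([]byte(want), []byte(parts[1])) {
		return "", false
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[0])
	if err != nil {
		return "", false
	}
	return string(payload), true
}

func main() {
	tok := sign(`{"sub":"u1","tenant":"acme"}`, "dev-secret")
	payload, ok := verify(tok, "dev-secret")
	fmt.Println(ok, payload) // the payload round-trips
	_, ok = verify(tok, "wrong-secret")
	fmt.Println(ok) // false
}
```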
This is not a weakness.
It is part of the teaching contract.
Learners should know which parts are production-shaped, which parts are simplified, and what production systems usually require next.
That distinction builds judgment.
And judgment is the real goal.
Source-available, not traditional open source
Another important part of the project is licensing.
The Go Engineer is source-available for personal, educational, and non-commercial use. Commercial use requires permission.
That means I should be careful with language.
I do not describe the project as open source in the traditional sense.
I describe it as:
A source-available educational curriculum.
That framing is more accurate and more honest.
Good engineering communication includes accurate project positioning, not only accurate code.
What “release-quality” means to me
I do not like calling software “perfect.”
Perfect is not how serious engineering works.
A better standard is release-quality.
For The Go Engineer, release-quality means:
- The curriculum architecture is locked.
- The machine-readable registry is validated.
- Lessons have runnable surfaces.
- Documentation is intentionally structured.
- Opslane modules have clear progress surfaces.
- The flagship backend integrates real backend concerns.
- Known limitations are documented.
- CI checks build, tests, race conditions, vulnerabilities, coverage, Docker builds, and curriculum consistency.
- The repository can prove more of its own claims.
That is the bar I care about.
Not perfection.
Proof.
What I learned while building it
Building The Go Engineer taught me something important:
A curriculum is not only the content.
A curriculum is also:
- the path
- the structure
- the validation
- the examples
- the mistakes it prevents
- the questions it forces
- the boundaries it makes visible
- the proof it requires before moving forward
That is why I keep coming back to the same idea:
The repository is the product.
A repository teaches through everything it exposes.
The README teaches.
The folder structure teaches.
The CI workflow teaches.
The tests teach.
The comments teach.
The validator teaches.
The known limitations teach.
The flagship project teaches.
Even the constraints teach.
If the repository is going to teach, it should teach intentionally.
That is what I am trying to build with The Go Engineer.
Who The Go Engineer is for
The Go Engineer is for people who want to move beyond syntax.
It is for learners who do not only want to know how to write Go code, but how to reason about Go systems.
It is for engineers who want to understand:
- package boundaries
- backend architecture
- HTTP APIs
- PostgreSQL persistence
- migrations
- tenant isolation
- authentication
- service workflows
- payment reliability
- background workers
- caching
- metrics
- tracing
- rate limiting
- graceful shutdown
- CI validation
- release discipline
It is also for me.
Because building this project forces me to be more disciplined.
If I claim the repository is the curriculum, then every inconsistency matters.
That pressure is useful.
It makes the project better.
Conclusion
I built The Go Engineer because I wanted to teach Go beyond syntax.
I wanted a curriculum that moves from language fundamentals into the decisions engineers actually make when building backend systems: boundaries, lifecycle, persistence, concurrency, observability, validation, security, deployment, and maintenance.
Opslane exists because isolated examples are not enough. At some point, learners need to see how decisions interact inside one integrated backend.
The validator exists because documentation, metadata, examples, and implementation drift unless the repository actively checks them.
The CI pipeline exists because release quality needs evidence.
The known limitations exist because honest teaching should explain where simplified implementations stop and production systems begin.
The Go Engineer is not just a Go tutorial.
It is a repository-first Go engineering curriculum.
And the biggest lesson I learned while building it is this:
A repository is not just where the curriculum lives.
The repository is the curriculum.
If you want to follow the project, explore the repository here:
swe-labs/the-go-engineer: a complete Go backend engineering path where learners go from machine fundamentals to building Opslane, a production-shaped SaaS backend.