Django developers bring Django mental models into Go - and that is the problem. The two frameworks are built on entirely divergent philosophies: Django hides complexity behind layers of ORM, a middleware stack, and magic conventions, whereas Go demands explicit, low-level control at virtually every system boundary. Carrying Django instincts into Go codebases does not just produce inefficient Go - it produces broken Go. The runtime model, the concurrency model, and the error model are all categorically different. This article breaks down the most technically harmful Django habits, explains why they break down at the Go layer, and shows what idiomatic Go looks like instead.
Treating the ORM as the Data Layer
Django's ORM is an active record system built on Python's dynamic type system. QuerySet objects are lazy: they do not execute SQL until iteration, and filters can be assembled dynamically at runtime. It is natural for a Django developer to reach for .filter, .select_related, and .prefetch_related without thinking about the SQL being emitted, because the ORM makes that translation transparent.
Go has no built-in ORM. Libraries such as GORM exist, but using them the way you used Django's ORM produces performance problems that are subtle and hard to debug. The more dangerous habit is conceptual: treating data access as something that happens automatically on the basis of struct relationships.
In Go, every interaction with a database is explicit. With database/sql, you write the raw SQL yourself, scan rows into structs manually, and deal with connection pool exhaustion on your own:
rows, err := db.QueryContext(ctx, "SELECT id, name FROM users WHERE active = $1", true)
if err != nil {
	return nil, fmt.Errorf("querying users: %w", err)
}
defer rows.Close()

var users []User
for rows.Next() {
	var u User
	if err := rows.Scan(&u.ID, &u.Name); err != nil {
		return nil, fmt.Errorf("scanning user row: %w", err)
	}
	users = append(users, u)
}
if err := rows.Err(); err != nil {
	return nil, fmt.Errorf("iterating user rows: %w", err)
}
The performance implication here is not trivial. Django's N+1 query problem is a famous footgun, but Go developers who reach for GORM's Preload() or auto-loaded associations recreate the same pathology with less insight into the queries being emitted - and Go's performance profile depends on that insight.
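One hedged way to sidestep the N+1 pattern with plain database/sql is to batch the child lookups into a single IN-clause query. This is a sketch, not the article's code: the table and column names ("addresses", "user_id", "street") and the helper name are hypothetical, and only the placeholder-building step is shown.

```go
package main

import (
	"fmt"
	"strings"
)

// batchAddressQuery builds one IN-clause query for all parent IDs at once,
// replacing the N+1 pattern of issuing one query per user. The table and
// columns are hypothetical stand-ins.
func batchAddressQuery(userIDs []int) (string, []any) {
	placeholders := make([]string, len(userIDs))
	args := make([]any, len(userIDs))
	for i, id := range userIDs {
		placeholders[i] = fmt.Sprintf("$%d", i+1) // PostgreSQL-style placeholders
		args[i] = id
	}
	query := "SELECT user_id, street FROM addresses WHERE user_id IN (" +
		strings.Join(placeholders, ", ") + ")"
	return query, args
}
```

The resulting query and args would be passed to db.QueryContext, so the database does one round trip instead of N.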
Relying on Django's Request/Response Lifecycle Assumptions
Django is a synchronous HTTP server. Every request runs in a thread (or a greenlet under uWSGI/gevent setups), and middleware is a stack applied sequentially to every request. Django developers write middleware on the assumption that requests do not share state with each other - a request enters the stack, data is attached to the request object, and it is processed. Much of this implicit state passing is backed by thread-local storage.
Go's net/http package uses goroutines, not threads. The runtime spins up a goroutine for each incoming request, and the Go scheduler (with parallelism governed by GOMAXPROCS) multiplexes those goroutines over OS threads. The fatal mistake Django developers make is passing request-scoped data through package-level variables or global state, replicating Django's thread-local model. That is a data race waiting to happen.
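As a minimal sketch of the anti-pattern (with hypothetical handler names), consider request-scoped data stored in a package-level variable. A second request's goroutine interleaving with the first silently clobbers the value; the interleaving is simulated here with a direct call so the failure is deterministic.

```go
package main

// currentUserID mimics Django's thread-local model with a package-level
// variable. Under net/http every request runs in its own goroutine, so
// concurrent writes to this variable are a data race.
var currentUserID string

// handleProfile stores its request's user, then reads it back later.
func handleProfile() string {
	currentUserID = "alice"
	handleLogin() // simulates another request's goroutine running in between
	return currentUserID // now "bob": the other request clobbered the value
}

func handleLogin() {
	currentUserID = "bob"
}
```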

The idiomatic solution is context.Context. Request-scoped values are bound to the context at the handler boundary and passed explicitly down the call stack:
type contextKey string

const userIDKey contextKey = "userID"

func AuthMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		userID := extractUserIDFromToken(r)
		ctx := context.WithValue(r.Context(), userIDKey, userID)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
The context package also carries cancellation signals and deadlines. If a client disconnects mid-request, ctx.Done() fires and any downstream database query or HTTP call that respects the context will abort. Django's request model has no such mechanism - requests either succeed or fail at the server level. Go's model is cancellation-aware by design.
Ignoring Explicit Error Handling
Python exceptions propagate up the call stack automatically. Django's view layer runs inside a framework-level try/except, and unhandled exceptions across the system can be routed to custom exception handlers configured in settings.py. The practical consequence is that Django developers tend to write code that lets errors propagate implicitly, trusting that some higher-level code will handle them.
Go has no exceptions. Errors are values returned by functions. The Go compiler does not enforce error checking - it will happily let you discard an error return - but discarding errors in production Go code is functionally identical to a bare pass in Python. A failed operation continues silently, and the corruption shows up later in downstream state.
Idiomatic Go handles errors at the call site, wraps them with context using fmt.Errorf and the %w verb, and propagates them up explicitly:
func GetUserProfile(ctx context.Context, db *sql.DB, id int) (*Profile, error) {
	user, err := fetchUser(ctx, db, id)
	if err != nil {
		return nil, fmt.Errorf("GetUserProfile: fetching user %d: %w", id, err)
	}
	prefs, err := fetchPreferences(ctx, db, user.ID)
	if err != nil {
		return nil, fmt.Errorf("GetUserProfile: fetching preferences for user %d: %w", id, err)
	}
	return buildProfile(user, prefs), nil
}
errors.Is() and errors.As() unwind the error chain for handling. Sentinel errors (package-level var ErrNotFound = errors.New("not found")) replace Django's exception hierarchy. This is neither boilerplate nor accident - it is the mechanism by which Go programs stay debuggable under production load.
Misunderstanding Memory Semantics and Object Lifetimes
Django developers work on a runtime that abstracts memory management away behind Python's reference counting and garbage collection. Objects are created dynamically, passed around freely, and rarely need to be thought about in terms of copying, sharing, or in-place writes. This breeds a mental model in which every object is a reference, and mutation is implicit and global.
Go violates this assumption at a fundamental level.
Go has a value-semantics-first model. Structs are copied by default, not referenced. When a struct is passed to a function, it is copied unless a pointer is used explicitly. This distinction is not a matter of style; it affects both correctness and performance.
A Python/Django developer might write code assuming mutation propagates:
type User struct {
	Name string
}

func updateName(u User) {
	u.Name = "updated"
}
This function does nothing useful. Because u is passed by value, the caller's copy is unchanged. The correct technique uses a pointer explicitly:
func updateName(u *User) {
	u.Name = "updated"
}
The implications go beyond mutation. Large data structures passed by value incur silent copying costs. In high-throughput systems this is a measurable performance problem, particularly when structs contain large arrays or nested data.
The core of the shift is this: in Go, you must always know the ownership, copying, and mutation boundaries. Memory is not an implementation detail; it is part of the API contract.
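To make the copying cost concrete, here is a sketch with a hypothetical struct embedding a large inline array: passing it by value copies the whole array, while a pointer copies only a machine word.

```go
package main

import "unsafe"

// Report embeds a large fixed-size array, so its data lives inline:
// 4096 float64s = 32 KB per value.
type Report struct {
	Rows [4096]float64
}

// sumByValue receives a full 32 KB copy of the struct on every call.
func sumByValue(r Report) float64 {
	var total float64
	for _, v := range r.Rows {
		total += v
	}
	return total
}

// sumByPointer receives only an 8-byte pointer; no array copy is made.
func sumByPointer(r *Report) float64 {
	var total float64
	for _, v := range r.Rows {
		total += v
	}
	return total
}

// reportSize documents the per-call copy made by sumByValue.
var reportSize = unsafe.Sizeof(Report{})
```

Both functions compute the same result; only the copying cost differs, which is exactly the kind of boundary Go forces you to decide consciously.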
Misusing Goroutines as Celery-Style Task Workers
Background jobs in Django are commonly handled with Celery, where tasks are serialized and sent to a broker such as Redis or RabbitMQ. Worker processes consume these tasks, providing durability, retry semantics, and fault tolerance.
Developers moving to Go often encounter goroutines and use them as fire-and-forget task runners, attempting to replicate a Celery-style dispatch pattern. However, goroutines are in-memory concurrency primitives and provide no persistence, retries, or delivery guarantees. They are closer to Python's asyncio tasks than to Celery workers.
Goroutines are lightweight and start with a small stack, but they are not free. Spawning unbounded goroutines in response to request volume can lead to resource exhaustion, including memory pressure and downstream system overload.
Additionally, an unhandled panic inside a goroutine will crash the entire process unless it is explicitly recovered within that goroutine. Unlike Celery workers, which are isolated and can be restarted without losing queued tasks, goroutines do not provide fault isolation or recovery guarantees.
Bounded concurrency in Go should instead use a worker pool pattern backed by channels:
func NewWorkerPool(size int, jobs <-chan Job) {
var wg sync.WaitGroup
for i := 0; i < size; i++ {
wg.Add(1)
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
log.Printf("worker panic recovered: %v", r)
}
}()
for job := range jobs {
job.Execute()
}
}()
}
wg.Wait()
}
While recover() can prevent crashes and allow logging, it does not provide retry, persistence, or task durability. For Celery-like behavior in Go, task queue systems such as asynq (Redis-backed) provide features like retries and at-least-once delivery.
Applying Django's Settings Module Pattern to Go Configuration
Django puts configuration in settings.py, a Python module imported when the Django process starts. The rest of the codebase accesses it via from django.conf import settings. Django developers often emulate this in Go with a global config struct, populated at init() time and read from anywhere in the package graph.
The problem with this approach is testability. Go package-level globals are shared state between tests: go test runs a package's tests in the same process, so a global config struct mutated by one test bleeds into another. Django's test runner isolates state through fixtures; Go's does not.
The idiomatic Go approach is dependency injection via constructor functions. Configuration is loaded when main runs, and dependencies are explicitly wired:
type Config struct {
	DatabaseURL string
	Port        int
	JWTSecret   []byte
}

func main() {
	cfg := loadConfig() // reads env vars or config file
	db, err := sql.Open("pgx", cfg.DatabaseURL)
	if err != nil {
		log.Fatalf("opening database: %v", err)
	}
	svc := NewUserService(db, cfg.JWTSecret)
	srv := NewServer(cfg.Port, svc)
	srv.ListenAndServe()
}
Each layer receives only the configuration it needs. Tests construct their own instances with test-specific configuration. There is no mutable global state. The Django settings pattern simply does not fit Go structurally and has to be discarded entirely.
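The testing payoff can be sketched with minimal stand-ins for the article's hypothetical Config and NewUserService (redefined here so the snippet is self-contained): each test wires its own instance, so nothing leaks between tests.

```go
package main

import "database/sql"

// Minimal stand-ins for the article's hypothetical Config and UserService.
type Config struct {
	JWTSecret []byte
}

type UserService struct {
	db     *sql.DB
	secret []byte
}

// NewUserService receives its dependencies explicitly; there is no
// package-level state to reset between tests.
func NewUserService(db *sql.DB, secret []byte) *UserService {
	return &UserService{db: db, secret: secret}
}
```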
Conclusion
The transition from Django to Go is not about learning new syntax. It is a change in the assumptions your code makes about memory, concurrency, errors, and lifecycle. Django's conventions exist because they are natural in Python, as dictated by its runtime. Go's conventions exist because its runtime renders the alternatives unsafe. Bringing ORM thinking to a SQL-first system, ignoring error returns because exceptions used to handle them, spawning goroutines as if they were Celery tasks, or binding configuration to global state - these are category errors, not style choices. Go rewards developers who understand what the runtime is actually doing. Django taught you to trust the framework. Go requires you to be the framework.
Bye :)
