Cursor Rules for Go: 6 Rules That Make AI Write Idiomatic Go Code
If you use Cursor or Claude Code for Go development, you've watched the AI generate code that compiles but makes experienced Gophers cringe. Bare errors.New without wrapping. Naked returns hiding in 40-line functions. Missing context.Context parameters. Interfaces defined next to structs instead of at point of use.
The fix isn't better prompting. It's better rules.
Here are 6 Cursor rules for Go that make your AI assistant write code that passes code review the first time. Each one includes a before/after example so you can see exactly what changes.
1. Enforce Error Wrapping with fmt.Errorf and %w
Without this rule, AI assistants return bare errors.New or raw error strings. You lose the entire error chain, and debugging in production becomes a guessing game.
The rule:
Always wrap errors with fmt.Errorf and the %w verb to preserve the error chain.
Never return errors.New for errors that originate from a called function.
Include the function or operation name in the wrap message for traceability.
Bad — what the AI generates without the rule:
func GetUser(id int) (*User, error) {
    row := db.QueryRow("SELECT name, email FROM users WHERE id = $1", id)
    var u User
    err := row.Scan(&u.Name, &u.Email)
    if err != nil {
        return nil, errors.New("failed to get user")
    }
    return &u, nil
}
Good — what the AI generates with the rule:
func GetUser(id int) (*User, error) {
    row := db.QueryRow("SELECT name, email FROM users WHERE id = $1", id)
    var u User
    if err := row.Scan(&u.Name, &u.Email); err != nil {
        return nil, fmt.Errorf("GetUser(%d): %w", id, err)
    }
    return &u, nil
}
Now errors.Is and errors.As work through the entire call chain. When this fails at 3 AM, the log tells you exactly which function, which ID, and what the underlying database error was.
2. Ban Naked Returns — Use Named Returns Only When They Aid Clarity
AI loves naked returns. They save keystrokes and lose readability. In functions longer than a few lines, a return with no values forces you to scroll up to the signature to understand what's being returned.
The rule:
Never use naked returns in functions longer than 5 lines.
Named return values are allowed only when they improve documentation
(e.g., distinguishing multiple return values of the same type).
Always use explicit return values in the return statement.
Bad — naked returns hiding the actual values:
func ParseConfig(path string) (cfg Config, err error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return
    }
    err = json.Unmarshal(data, &cfg)
    if err != nil {
        return
    }
    return
}
Good — explicit return values, names used only for clarity:
func ParseConfig(path string) (Config, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return Config{}, fmt.Errorf("ParseConfig: read %s: %w", path, err)
    }
    var cfg Config
    if err := json.Unmarshal(data, &cfg); err != nil {
        return Config{}, fmt.Errorf("ParseConfig: unmarshal: %w", err)
    }
    return cfg, nil
}
Every return statement now tells you exactly what's going back to the caller. No scrolling, no guessing.
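When do named returns earn their keep? A hypothetical MinMax helper sketches the one case the rule allows — two results of the same type, documented by name, but still returned explicitly:

```go
package main

import "fmt"

// MinMax is an illustrative helper where named results genuinely document
// which int is which. Note the return statements still list values
// explicitly — the names are documentation, not a shortcut.
func MinMax(nums []int) (min, max int) {
	if len(nums) == 0 {
		return 0, 0
	}
	min, max = nums[0], nums[0]
	for _, n := range nums[1:] {
		if n < min {
			min = n
		}
		if n > max {
			max = n
		}
	}
	return min, max // explicit, never naked
}

func main() {
	lo, hi := MinMax([]int{3, 1, 4, 1, 5})
	fmt.Println(lo, hi) // 1 5
}
```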
3. Enforce Context Propagation — First Param Must Be ctx context.Context
Without this rule, AI generates functions that silently drop context. Your timeouts don't propagate. Your cancellations don't cancel. Your traces break mid-chain.
The rule:
All functions that perform I/O (database, HTTP, gRPC, file, or service calls) must
accept ctx context.Context as the first parameter. Never use context.Background()
or context.TODO() inside a function that already has access to a context.
Pass ctx through the entire call chain.
Bad — context dropped, timeouts ignored:
func CreateOrder(order Order) error {
    _, err := db.Exec("INSERT INTO orders (user_id, total) VALUES ($1, $2)",
        order.UserID, order.Total)
    if err != nil {
        return err
    }
    err = notifyService.Send(order.UserID, "Order created")
    return err
}
Good — context flows through the entire chain:
func CreateOrder(ctx context.Context, order Order) error {
    if _, err := db.ExecContext(ctx,
        "INSERT INTO orders (user_id, total) VALUES ($1, $2)",
        order.UserID, order.Total,
    ); err != nil {
        return fmt.Errorf("CreateOrder: insert: %w", err)
    }
    if err := notifyService.Send(ctx, order.UserID, "Order created"); err != nil {
        return fmt.Errorf("CreateOrder: notify: %w", err)
    }
    return nil
}
Now when a client disconnects or a deadline expires, every downstream call respects it. Traces propagate end-to-end. This is the single most impactful Go rule you can add.
4. Enforce Go Interface Segregation — Define Interfaces at Point of Use
AI models trained on Java and C# define big interfaces next to the structs that implement them. In Go, the idiom is the opposite: small interfaces defined where they're consumed.
The rule:
Define interfaces in the package that uses them, not the package that implements them.
Keep interfaces small — 1 to 3 methods maximum.
Name single-method interfaces with the -er suffix (Reader, Storer, Notifier).
Never define an interface alongside the concrete struct.
Bad — large interface defined in the implementation package:
// package storage

type UserStorage interface {
    GetUser(id int) (*User, error)
    CreateUser(u User) error
    UpdateUser(u User) error
    DeleteUser(id int) error
    ListUsers(filter Filter) ([]User, error)
    CountUsers() (int, error)
}

type PostgresUserStorage struct {
    db *sql.DB
}

// ... implements all 6 methods
Good — small interface at point of use:
// package orderservice

// UserGetter retrieves a user by ID. Defined here because this is
// the only capability this package needs from the user store.
type UserGetter interface {
    GetUser(ctx context.Context, id int) (*User, error)
}

type OrderService struct {
    users UserGetter
}

func NewOrderService(users UserGetter) *OrderService {
    return &OrderService{users: users}
}
The consumer asks for only what it needs. The concrete PostgresUserStorage satisfies UserGetter implicitly — no explicit implements keyword, no coupling. Testing is trivial because you mock 1 method, not 6.
5. Enforce Table-Driven Tests with testify
Without this rule, AI writes one test function per case with duplicated setup, no assertions library, and no subtests. The result: 200-line test files for 3 scenarios.
The rule:
Use table-driven tests for all functions with multiple input/output scenarios.
Use testify/assert for assertions (assert.Equal, assert.NoError, assert.ErrorIs).
Each test case must run as a named subtest using t.Run.
Name test cases with the pattern: "description of scenario".
Bad — one function per case, manual assertions:
func TestParseAge(t *testing.T) {
    result, err := ParseAge("25")
    if err != nil {
        t.Fatal(err)
    }
    if result != 25 {
        t.Errorf("expected 25, got %d", result)
    }
}

func TestParseAgeNegative(t *testing.T) {
    _, err := ParseAge("-1")
    if err == nil {
        t.Fatal("expected error")
    }
}

func TestParseAgeEmpty(t *testing.T) {
    _, err := ParseAge("")
    if err == nil {
        t.Fatal("expected error")
    }
}
Good — table-driven with testify:
func TestParseAge(t *testing.T) {
    tests := []struct {
        name    string
        input   string
        want    int
        wantErr error
    }{
        {name: "valid age", input: "25", want: 25},
        {name: "negative age returns error", input: "-1", wantErr: ErrInvalidAge},
        {name: "empty string returns error", input: "", wantErr: ErrInvalidAge},
        {name: "non-numeric returns error", input: "abc", wantErr: ErrInvalidAge},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := ParseAge(tt.input)
            if tt.wantErr != nil {
                assert.ErrorIs(t, err, tt.wantErr)
                return
            }
            assert.NoError(t, err)
            assert.Equal(t, tt.want, got)
        })
    }
}
One table. Four cases. Every scenario named, isolated, and running under t.Run, so go test -run 'TestParseAge/negative' targets a single case. Adding a new case is one struct literal.
6. Ban Global Var Mutation — Use Dependency Injection
AI reaches for var db *sql.DB at package level because it's easy. Then your tests fight over shared state, your init functions hide dependencies, and parallel tests break randomly.
The rule:
Never declare mutable global variables. Use const for true constants only.
Pass dependencies (DB connections, HTTP clients, loggers) through struct fields
or function parameters. Use constructor functions (NewXxx) to wire dependencies.
Never use init() for dependency setup.
Bad — global mutable state:
var db *sql.DB
var logger *log.Logger

func init() {
    var err error
    db, err = sql.Open("postgres", os.Getenv("DATABASE_URL"))
    if err != nil {
        log.Fatal(err)
    }
    logger = log.New(os.Stdout, "", log.LstdFlags)
}

func GetUser(id int) (*User, error) {
    row := db.QueryRow("SELECT name FROM users WHERE id = $1", id)
    var u User
    err := row.Scan(&u.Name)
    return &u, err
}
Good — dependencies injected through a struct:
type UserRepository struct {
    db     *sql.DB
    logger *slog.Logger
}

func NewUserRepository(db *sql.DB, logger *slog.Logger) *UserRepository {
    return &UserRepository{db: db, logger: logger}
}

func (r *UserRepository) GetUser(ctx context.Context, id int) (*User, error) {
    row := r.db.QueryRowContext(ctx, "SELECT name FROM users WHERE id = $1", id)
    var u User
    if err := row.Scan(&u.Name); err != nil {
        return nil, fmt.Errorf("UserRepository.GetUser(%d): %w", id, err)
    }
    return &u, nil
}
Now your tests create their own UserRepository with a test database, run in parallel, and never interfere with each other. Dependencies are visible in the constructor — no hidden coupling.
Put These Rules to Work
These 6 rules cover the patterns where AI coding assistants fail most often in Go projects. Add them to your .cursorrules or CLAUDE.md and the difference is immediate — fewer review comments, idiomatic code from the first generation, and less time rewriting AI output.
I've packaged these rules (plus 44 more covering Go microservices, gRPC, CLI tools, and concurrency patterns) into a ready-to-use rules pack: Cursor Rules Pack v2
Drop it into your project directory and stop fighting your AI assistant.