Olivia Craft

Cursor Rules for Python: 6 Rules That Fix the Most Common AI Coding Mistakes

If you use Cursor or Claude Code for Python development, you've seen the AI generate code that works but violates every convention your team cares about. Bare dict returns instead of typed models. Generic except Exception. Imports scattered randomly.

The fix isn't better prompting. It's better rules.

Here are 6 Cursor rules for Python that eliminate the most common AI coding mistakes. Each one includes a bad vs. good example so you can see exactly what changes.


1. Enforce Type Hints Everywhere

Without this rule, AI assistants default to untyped Python—bare dict, list, and tuple returns that make your codebase impossible to navigate.

The rule:

Always use type hints from the typing module. Never return bare dict, list, or tuple.
Use TypedDict, dataclass, or Pydantic models for structured data.
Function signatures must include parameter types and return types.

Bad — what the AI generates without the rule:

def get_user(user_id):
    result = db.query(user_id)
    return {"name": result.name, "email": result.email, "active": True}

Good — what the AI generates with the rule:

from dataclasses import dataclass

@dataclass
class UserResponse:
    name: str
    email: str
    active: bool

def get_user(user_id: int) -> UserResponse:
    result = db.query(user_id)
    return UserResponse(name=result.name, email=result.email, active=True)

The typed version gives you autocomplete, catches bugs at lint time, and makes refactoring safe.
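The rule also permits TypedDict when you need a plain dict at runtime (for example, for JSON serialization) but still want static checking. A minimal sketch; UserPayload and get_user_payload are illustrative names, not from the article's codebase:

```python
from typing import TypedDict


class UserPayload(TypedDict):
    """Typed shape for a user dict — still a plain dict at runtime."""
    name: str
    email: str
    active: bool


def get_user_payload(name: str, email: str) -> UserPayload:
    # Calling a TypedDict class constructs an ordinary dict,
    # but mypy/pyright will flag missing or misspelled keys.
    return UserPayload(name=name, email=email, active=True)
```

This is the middle ground between a bare dict and a full dataclass: zero runtime cost, full static checking.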


2. Google-Style Docstrings with Args and Returns

AI models love generating either no docstrings or inconsistent ones. This rule forces a standard format that tools like Sphinx and mkdocs actually parse.

The rule:

Use Google-style docstrings for all public functions and classes.
Always include Args and Returns sections. Include Raises if the function raises exceptions.
Keep the summary line under 80 characters.

Bad — vague or missing docstring:

def calculate_discount(price, tier):
    """Calculate discount."""
    multiplier = {"gold": 0.8, "silver": 0.9, "bronze": 0.95}
    return price * multiplier.get(tier, 1.0)

Good — complete Google-style docstring:

def calculate_discount(price: float, tier: str) -> float:
    """Apply tier-based discount to the given price.

    Args:
        price: Original price before discount.
        tier: Customer tier. One of "gold", "silver", "bronze".

    Returns:
        Discounted price. Returns original price if tier is unknown.

    Raises:
        ValueError: If price is negative.
    """
    if price < 0:
        raise ValueError(f"Price must be non-negative, got {price}")
    multiplier = {"gold": 0.8, "silver": 0.9, "bronze": 0.95}
    return price * multiplier.get(tier, 1.0)

3. Custom Exceptions, Never Bare Except

This is the rule that saves you at 2 AM. Without it, the AI writes except Exception blocks that silently swallow errors and make debugging impossible.

The rule:

Never use bare except or except Exception for flow control.
Define custom exception classes for each domain error.
Always log or re-raise; never silently swallow exceptions.

Bad — catches everything, hides the real error:

def process_payment(amount, card_token):
    try:
        charge = stripe.Charge.create(amount=amount, source=card_token)
        return charge.id
    except Exception:
        return None

Good — specific exceptions, clear error handling:

class PaymentError(Exception):
    """Base exception for payment failures."""

class PaymentDeclinedError(PaymentError):
    """Raised when the payment method is declined."""

class PaymentGatewayError(PaymentError):
    """Raised when the payment gateway is unreachable."""

def process_payment(amount: int, card_token: str) -> str:
    """Process a payment and return the charge ID.

    Args:
        amount: Amount in cents.
        card_token: Tokenized card identifier.

    Returns:
        The charge ID from the payment processor.

    Raises:
        PaymentDeclinedError: If the card is declined.
        PaymentGatewayError: If the gateway is unreachable.
    """
    try:
        charge = stripe.Charge.create(amount=amount, source=card_token)
        return charge.id
    except stripe.error.CardError as exc:
        raise PaymentDeclinedError(str(exc)) from exc
    except stripe.error.APIConnectionError as exc:
        raise PaymentGatewayError("Stripe unreachable") from exc
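The payoff of the custom hierarchy shows up at the call site: callers can catch the base class at the boundary while truly unexpected errors still propagate. A self-contained sketch; charge_card here is an illustrative stand-in for a real gateway call, not the Stripe API:

```python
class PaymentError(Exception):
    """Base exception for payment failures."""

class PaymentDeclinedError(PaymentError):
    """Raised when the payment method is declined."""


def charge_card(card_token: str) -> str:
    # Stand-in for a real gateway call (hypothetical behavior).
    if card_token == "tok_declined":
        raise PaymentDeclinedError("card declined")
    return "ch_123"


def checkout(card_token: str) -> str:
    try:
        return charge_card(card_token)
    except PaymentError as exc:
        # One handler covers every domain error; anything outside the
        # hierarchy (a bug, a typo) still surfaces loudly.
        return f"payment failed: {exc}"
```

Catching PaymentError instead of Exception means a NameError or AttributeError in the payment path crashes visibly instead of being reported as a declined card.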

4. Django/FastAPI Patterns Done Right

AI models mix raw SQL with ORM calls, skip Pydantic validation in FastAPI, and create Django views that bypass the queryset API. This rule keeps framework code idiomatic.

The rule:

In Django: use the ORM queryset API. Never write raw SQL unless explicitly required for performance.
In FastAPI: use Pydantic models for all request/response bodies. Never accept bare dicts from request bodies.
Always use select_related/prefetch_related to avoid N+1 queries.

Bad — raw SQL in Django, bare dict in FastAPI:

# Django — skips ORM, invites SQL injection risk
def get_active_users(request):
    from django.db import connection
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM users WHERE active = 1")
    rows = cursor.fetchall()
    return JsonResponse({"users": rows})

# FastAPI — no validation
@app.post("/users")
async def create_user(data: dict):
    name = data["name"]
    email = data["email"]
    return save_user(name, email)

Good — ORM in Django, Pydantic in FastAPI:

# Django — uses ORM, safe and readable
def get_active_users(request: HttpRequest) -> JsonResponse:
    users = User.objects.filter(active=True).values("id", "name", "email")
    return JsonResponse({"users": list(users)})

# FastAPI — validated with Pydantic
class CreateUserRequest(BaseModel):
    name: str = Field(min_length=1, max_length=100)
    email: EmailStr

class UserResponse(BaseModel):
    id: int
    name: str
    email: str

@app.post("/users", response_model=UserResponse)
async def create_user(data: CreateUserRequest) -> UserResponse:
    user = await save_user(data.name, data.email)
    return UserResponse(id=user.id, name=user.name, email=user.email)

5. Pytest Structure with Fixtures

Without this rule, AI generates tests with hardcoded data, no fixtures, and unittest.TestCase when you're using pytest. The tests pass but are unmaintainable.

The rule:

Use pytest for all tests. Never use unittest.TestCase.
Use fixtures for test data and setup; no hardcoded values in test functions.
Name tests: test_<function>_<scenario>_<expected_result>.
Use parametrize for testing multiple inputs.

Bad — hardcoded data, no fixtures:

import unittest

class TestDiscount(unittest.TestCase):
    def test_discount(self):
        result = calculate_discount(100, "gold")
        self.assertEqual(result, 80.0)

    def test_discount2(self):
        result = calculate_discount(100, "unknown")
        self.assertEqual(result, 100.0)

Good — pytest with fixtures and parametrize:

import pytest

@pytest.fixture
def base_price() -> float:
    return 100.0

@pytest.mark.parametrize("tier,expected", [
    ("gold", 80.0),
    ("silver", 90.0),
    ("bronze", 95.0),
])
def test_calculate_discount_known_tier_returns_discounted_price(
    base_price: float, tier: str, expected: float
) -> None:
    result = calculate_discount(base_price, tier)
    assert result == expected

def test_calculate_discount_unknown_tier_returns_original_price(
    base_price: float,
) -> None:
    result = calculate_discount(base_price, "unknown")
    assert result == base_price
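The same structure extends to the error path from rule 2 with pytest.raises. A self-contained sketch that repeats calculate_discount so it runs standalone:

```python
import pytest


def calculate_discount(price: float, tier: str) -> float:
    """Tier-based discount (copied from rule 2 for a runnable example)."""
    if price < 0:
        raise ValueError(f"Price must be non-negative, got {price}")
    multiplier = {"gold": 0.8, "silver": 0.9, "bronze": 0.95}
    return price * multiplier.get(tier, 1.0)


def test_calculate_discount_negative_price_raises_value_error() -> None:
    # pytest.raises asserts both the exception type and, via match,
    # a regex against the error message.
    with pytest.raises(ValueError, match="non-negative"):
        calculate_discount(-10.0, "gold")
```

Testing the message with `match` keeps the test honest: it fails if someone later replaces the ValueError with a silent fallback.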

6. Import Organization: stdlib, third-party, local

AI-generated imports are almost always a mess. Mixed ordering, relative imports, unused imports left behind. This rule makes every file consistent.

The rule:

Organize imports in three groups separated by blank lines:
1. Standard library
2. Third-party packages
3. Local/project imports
Use absolute imports only. Never use relative imports.
Sort alphabetically within each group.

Bad — random import order, relative imports:

from .utils import format_name
import os
from pydantic import BaseModel
import json
from datetime import datetime
from ..models import User
import requests

Good — organized, absolute, sorted:

import json
import os
from datetime import datetime

import requests
from pydantic import BaseModel

from myapp.models import User
from myapp.utils import format_name

Clean imports make merge conflicts rarer and code reviews faster. Tools like isort enforce this automatically, but the rule ensures the AI gets it right from the start.
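If you do run isort alongside the rule, a minimal pyproject.toml fragment that matches this three-group layout (the `myapp` package name is a placeholder for your own project):

```toml
[tool.isort]
# "black" profile: blank line between stdlib, third-party, and local groups.
profile = "black"
# Tells isort which packages count as "local/project" imports.
known_first_party = ["myapp"]
```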


Put These Rules to Work

These 6 rules cover the patterns where AI coding assistants fail most often in Python projects. Add them to your .cursorrules or CLAUDE.md file and the difference is immediate—fewer corrections, cleaner diffs, less time fixing generated code.

I've packaged these rules (plus 44 more covering Django, FastAPI, Flask, data science workflows, and async patterns) into a ready-to-use rules pack: Cursor Rules Pack v2

Drop it into your project directory and stop fighting your AI assistant.
