Elixir is the language where the right code looks almost boring — a function head, three tiny pattern-matched clauses, a with chain for the happy path, a supervisor that restarts anything that breaks, and a @spec that Dialyzer actually checks. The wrong code looks like someone translated a Ruby service class and a Node.js controller into BEAM syntax: cond blocks five levels deep, a try/rescue around every DB call because exceptions scare the author, GenServers used as global mutable dictionaries, Task.async/1 spawned bare into the void, Ecto queries scattered across controllers, and String.to_atom/1 called on user input so the VM's atom table fills and the node dies three weeks later during Black Friday.
Then you add an AI assistant.
Cursor and Claude Code were trained on a decade of BEAM code that includes some excellent OTP work but also a mountain of blog-post snippets — Phoenix controllers that talk to Ecto directly, GenServer.call/3 used as a database, defensive rescue clauses that catch Ecto.NoResultsError and return nil, processes spawned without a supervisor so they die silently when the node restarts, @spec omitted entirely because "the pattern match documents the types," Task.await/2 with no timeout, and Phoenix contexts that are thin wrappers over schemas with no business logic. Ask for "a function that fetches a user and charges their card," and you get a flat function with three case expressions, a try around the Stripe call, and a Repo.get!/2 that raises when the ID is wrong. The code runs on the happy path. It does not survive production.
The fix is .cursorrules — one file in the repo that tells the AI what idiomatic, production-grade Elixir and OTP look like. Eight rules below, each with the failure mode, the rule that prevents it, and a before/after. The complete copy-paste .cursorrules file is at the end. Examples target modern Elixir (1.16+) with Phoenix 1.7, Ecto 3, and standard OTP primitives.
How Cursor Rules Work for Elixir Projects
Cursor reads project rules from two locations: .cursorrules (a single file at the repo root, still supported) and .cursor/rules/*.mdc (modular files with frontmatter, recommended for any non-trivial Elixir app). For Elixir I recommend modular rules so the Phoenix web-layer conventions don't bleed into pure-OTP libraries in an umbrella and so Ecto patterns only fire near database code:
.cursor/rules/
ex-core.mdc # pattern matching, with, tagged tuples, @spec
ex-otp.mdc # GenServer, Supervisor, let-it-crash, restart strategies
ex-ecto.mdc # queries in contexts, Multi, changesets, migrations
ex-phoenix.mdc # controllers, contexts boundary, LiveView discipline
ex-concurrency.mdc # Task.Supervisor, async_stream, timeouts
ex-testing.mdc # ExUnit, async, Mox, property tests
Frontmatter controls activation: globs: ["**/*.ex", "**/*.exs", "**/mix.exs"] with alwaysApply: false. Now the rules.
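For reference, a minimal .mdc file might look like this (the description, globs, and body shown here are illustrative, not prescriptive):

```markdown
---
description: Core Elixir idioms (pattern matching, with, tagged tuples, @spec)
globs: ["**/*.ex", "**/*.exs"]
alwaysApply: false
---

- Prefer multi-clause function heads over case/cond/if.
- Fallible functions return {:ok, value} | {:error, reason}.
```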
Rule 1: Pattern Match Over Conditionals — Function Heads First, with for Chains
The single most common AI failure in Elixir is "a big flat function with if/case/cond at every branch." That pattern works in every language Cursor was trained on, but in Elixir the idiom is to push branching into function heads — which turns control flow into a table of patterns, makes an unhandled input fail loudly as a FunctionClauseError instead of sliding down an else branch, and dialyzes cleanly. When you genuinely do have a sequence of fallible steps, with replaces the tower-of-case.
The rule:
Prefer pattern matching over conditionals:
- Multiple function clauses with different heads BEAT case/cond/if
inside a single function body.
- Let the compiler warn about clauses that can never match, and let
Dialyzer flag impossible patterns (use @dialyzer :no_match or
:no_return sparingly).
- Prefer `case` over `cond` when the value is a single expression.
- Prefer `cond` over nested `if` when you have multiple independent
boolean branches.
- Use `with` to chain operations that each return {:ok, _} | {:error, _}.
An `else` clause catches specific error shapes.
Destructure in function heads where possible. Fall through to a generic
clause only when you want a runtime pattern guard.
Never write a 6-clause cond. That's always three functions in disguise.
Bad — control flow in a conditional tower:
def process_payment(user_id, amount) do
user = Accounts.get_user(user_id)
if user == nil do
{:error, :not_found}
else
if user.status == :active do
if amount > 0 do
case Stripe.charge(user.customer_id, amount) do
{:ok, charge} -> {:ok, charge}
{:error, e} -> {:error, e}
end
else
{:error, :invalid_amount}
end
else
{:error, :inactive}
end
end
end
Good — function heads + with:
def process_payment(user_id, amount) when amount > 0 do
with {:ok, user} <- fetch_active_user(user_id),
{:ok, charge} <- Stripe.charge(user.customer_id, amount) do
{:ok, charge}
end
end
def process_payment(_user_id, _amount), do: {:error, :invalid_amount}
defp fetch_active_user(user_id) do
case Accounts.get_user(user_id) do
%User{status: :active} = user -> {:ok, user}
%User{} -> {:error, :inactive}
nil -> {:error, :not_found}
end
end
Each clause does one thing. The happy path is a straight line. An unhandled case fails loudly with a FunctionClauseError instead of falling through silently. The AI can't talk you into a sixth if branch because the structure rejects it.
Rule 2: Tagged Tuples Are the Error Protocol — {:ok, value} / {:error, reason}
In Elixir, functions that can fail return {:ok, value} or {:error, reason}. Functions that cannot fail return the value directly. A bang version (function!/n) raises on error. AI assistants mix these idioms freely — they'll return {:ok, nil} when the row isn't found, false to mean "error," raise inside a function whose caller expects a tuple, or return {:error, "not found"} as a string instead of an atom. Every mismatch turns callers into a blob of defensive code.
The rule:
Error protocol:
- Fallible functions return {:ok, value} | {:error, reason}.
- `reason` is an atom (:not_found, :unauthorized) or a struct
(Ecto.Changeset, a custom error struct). Never a bare string.
- Infallible functions return the raw value.
- The bang version (`get_user!/1`) raises an exception on failure.
Use it only where the caller is okay crashing and letting its supervisor restart it.
NEVER:
- Return {:ok, nil} to mean "not found" — use {:error, :not_found}.
- Return `false` to mean "error" — it's indistinguishable from a valid
boolean result.
- Raise from a function that has a non-bang name.
- Use strings as error reasons — they can't be matched on.
- Mix {:ok, _} returns with bare-value returns in the same function
(different return shapes per branch). Callers can't handle it.
@spec every public function. Dialyzer catches return-shape bugs at
compile time, and the spec IS the contract.
Bad — inconsistent return shapes, string reasons, bang on a non-bang name:
def find_user(email) do
case Repo.get_by(User, email: email) do
nil -> {:ok, nil} # should be :error
user -> user # should be {:ok, user}
end
end
def authorize(user, action) do
if can?(user, action), do: true, else: {:error, "nope"}
end
Good — tagged tuples everywhere, atom reasons, specs:
@spec find_user(String.t()) :: {:ok, User.t()} | {:error, :not_found}
def find_user(email) do
case Repo.get_by(User, email: email) do
nil -> {:error, :not_found}
%User{} = user -> {:ok, user}
end
end
@spec authorize(User.t(), atom()) :: :ok | {:error, :forbidden}
def authorize(%User{} = user, action) when is_atom(action) do
if can?(user, action), do: :ok, else: {:error, :forbidden}
end
Callers can now write with {:ok, user} <- find_user(email), :ok <- authorize(user, :pay) do ... end. Dialyzer checks every call site against the spec.
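The protocol composes at the call site. Here is a self-contained sketch you can run as a script; the Demo module and its stubbed user map are illustrative stand-ins, not the article's app:

```elixir
defmodule Demo do
  defmodule User do
    defstruct [:id, :email, role: :member]
    @type t :: %__MODULE__{}
  end

  # Stubbed "database" so the sketch runs without Ecto.
  @users %{
    "a@b.com" => %User{id: 1, email: "a@b.com", role: :admin},
    "m@b.com" => %User{id: 2, email: "m@b.com"}
  }

  @spec find_user(String.t()) :: {:ok, User.t()} | {:error, :not_found}
  def find_user(email) do
    case Map.get(@users, email) do
      nil -> {:error, :not_found}
      %User{} = user -> {:ok, user}
    end
  end

  @spec authorize(User.t(), atom()) :: :ok | {:error, :forbidden}
  def authorize(%User{role: :admin}, _action), do: :ok
  def authorize(%User{}, _action), do: {:error, :forbidden}

  # Every failure shape flows out of the `with` untouched; callers
  # match on atoms, not strings.
  @spec pay(String.t()) :: {:ok, User.t()} | {:error, :not_found | :forbidden}
  def pay(email) do
    with {:ok, user} <- find_user(email),
         :ok <- authorize(user, :pay) do
      {:ok, user}
    end
  end
end
```

Demo.pay/1 returns {:ok, user}, {:error, :not_found}, or {:error, :forbidden}; nothing raises, and every shape is matchable.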
Rule 3: Let It Crash — Use Supervisors, Not try/rescue
The Elixir idiom that confuses every non-BEAM-trained AI is "let it crash." In most languages, you catch exceptions at every layer because the alternative is a process that dies and takes the program with it. In Elixir, processes are cheap and supervised — so catching Postgrex.Error inside your function and returning nil isn't defensive, it's actively harmful. You've swallowed the crash the supervisor was designed to respond to, turned a transient error into silent data loss, and hidden the failure from your monitoring.
The rule:
Let processes crash. A crashed process is restarted by its supervisor;
a silent error is a bug that festers.
try/rescue is acceptable only:
- At the boundary with a non-Elixir system (Port, NIF, external lib
with known exception contracts).
- In a top-level request handler (Plug, Phoenix endpoint) that must
translate an unexpected exception into a 500 response for the user.
- When you genuinely need to do cleanup and RE-RAISE — in which case
use try/after, not try/rescue.
NEVER:
- Wrap Repo calls in try/rescue to return nil/[] on error.
- Catch Ecto.NoResultsError — use Repo.get (not Repo.get!) if you
want {:ok, _} | {:error, _}.
- rescue _ or rescue Exception: a blanket rescue swallows bugs
(MatchError, ArgumentError, Postgrex.Error) that should crash the
process and alert your monitoring.
- Use try/rescue to implement control flow. If you find yourself
raising an exception to signal "not found," you wrote the wrong code.
Every long-running process (GenServer, Task, Agent) lives under a
supervisor with an explicit restart strategy (:one_for_one, :rest_for_one).
Bare `spawn`/`Task.async` without supervision is forbidden except in
one-off scripts.
Bad — rescue as control flow, swallowed crash:
def get_balance(user_id) do
try do
user = Repo.get!(User, user_id)
Accounts.calculate_balance(user)
rescue
_ -> 0
end
end
A deleted user now has a balance of zero. A DB outage returns zero. A bug in calculate_balance/1 returns zero. You'll never know.
Good — crash on unexpected failure, return tagged tuple for known outcomes:
@spec get_balance(pos_integer()) :: {:ok, Money.t()} | {:error, :not_found}
def get_balance(user_id) do
case Repo.get(User, user_id) do
nil -> {:error, :not_found}
%User{} = user -> {:ok, Accounts.calculate_balance(user)}
end
end
A deleted user returns {:error, :not_found}. A DB outage crashes, the supervisor restarts, telemetry fires, monitoring pages. A bug in calculate_balance/1 crashes immediately — you get a stack trace instead of a silent zero.
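When cleanup genuinely is needed, the allowed pattern is try/after: cleanup runs on every exit path and the exception still propagates upward. A minimal sketch (TempWork and the file-based example are illustrative):

```elixir
defmodule TempWork do
  # Opens a file, runs `fun` on it, and guarantees the handle is closed.
  # A raise inside `fun` still propagates: `after` does not swallow it.
  def with_tmp_file(path, fun) do
    file = File.open!(path, [:write])

    try do
      fun.(file)
    after
      # Runs on success AND on raise/exit.
      File.close(file)
    end
  end
end
```

Compare with try/rescue: after never converts a crash into a return value, so the supervisor and your telemetry still see the failure.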
Rule 4: GenServer Discipline — State Per Process, Not a Global Dictionary
GenServer is OTP's general-purpose stateful process. AI assistants wildly overuse it — as a global key-value store, as a cache, as a mutex around pure computation, as a wrapper for every module that has internal state. The result: a single GenServer serializing every call in the system, a message queue that grows unboundedly under load, and an architecture that's impossible to scale because one process is the bottleneck.
The rule:
Reach for GenServer when you need:
- Long-lived state that outlives any single call.
- Serialized access to a resource (a specific external handle,
a bounded connection pool).
- A process with a well-defined message protocol.
DO NOT use GenServer for:
- Pure stateless computation (use a plain module).
- Caching — use :ets, :persistent_term, or Cachex (built on :ets).
- "Singleton" patterns imported from OOP — usually wrong in BEAM.
- Storing configuration — use Application.get_env or a struct.
GenServer rules:
- init/1 is for SETUP, not WORK. Move slow work into a {:continue, _}
step or a handle_info(:init_complete, ...) after init.
- handle_call is for replies you need synchronously. Long-running work
in a call blocks every caller — use handle_cast or GenServer.reply/2
from a spawned task.
- Every call takes a timeout. The default 5_000 is rarely what you want.
- The state shape is a typed struct: `defstruct` + `@type t :: %__MODULE__{...}`.
- A GenServer that just wraps an Agent probably wants to be an Agent
(or neither — a plain module with :ets behind it).
- Supervise every GenServer. Its child_spec is explicit; its
restart: strategy is chosen (default :permanent is rarely right
for workers doing one job).
Bad — GenServer as a global dictionary that serializes every write:
defmodule Counters do
use GenServer
def start_link(_), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
def init(state), do: {:ok, state}
def bump(key), do: GenServer.call(__MODULE__, {:bump, key})
def handle_call({:bump, key}, _from, state) do
state = Map.update(state, key, 1, &(&1 + 1))
{:reply, :ok, state}
end
end
Every Counters.bump/1 in every request is a GenServer.call blocking on a single process. A 5ms call becomes 5s of tail latency under load.
Good — :counters or :ets for shared counters, no GenServer at all:
defmodule Counters do
@table :counters
def setup, do: :ets.new(@table, [:public, :named_table, :set, write_concurrency: true])
@spec bump(term()) :: :ok
def bump(key) do
:ets.update_counter(@table, key, {2, 1}, {key, 0})
:ok
end
@spec get(term()) :: non_neg_integer()
def get(key) do
case :ets.lookup(@table, key) do
[{^key, n}] -> n
[] -> 0
end
end
end
Concurrent writes. No serialization. No message queue. No process to supervise for what is fundamentally shared memory.
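When a GenServer is the right tool, the init-is-setup rule looks like this in practice. A sketch (PriceCache and its stubbed loader are illustrative) where the slow warm-up moves into handle_continue so start_link returns immediately:

```elixir
defmodule PriceCache do
  use GenServer

  defstruct prices: %{}, warmed: false
  @type t :: %__MODULE__{prices: map(), warmed: boolean()}

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    # Fast: build the initial state, then continue into the slow work.
    {:ok, %__MODULE__{}, {:continue, :warm_up}}
  end

  @impl true
  def handle_continue(:warm_up, state) do
    # Slow warm-up runs AFTER start_link returns, before any other message.
    {:noreply, %{state | prices: load_initial_prices(), warmed: true}}
  end

  @impl true
  def handle_call({:get, sku}, _from, state) do
    {:reply, Map.fetch(state.prices, sku), state}
  end

  # Stand-in for a slow external fetch.
  defp load_initial_prices, do: %{"sku-1" => 100}
end
```

start_link/1 returns as soon as init/1 does; the :warm_up continue is processed before any call or cast, so callers never observe a half-initialized state.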
Rule 5: Ecto in Contexts — Queries Live in the Domain, Transactions Use Multi
Phoenix's context boundary exists precisely so the web layer doesn't know about Ecto. AI-generated code breaks that boundary immediately: controllers import Ecto.Query and build queries inline, schemas embed business logic in before_insert hooks, and multi-step writes are a tower of Repo.transaction(fn -> ... end) with manual rollback. The result is a web layer that can't be tested without a database and a schema that can't be understood without reading three controllers.
The rule:
Context boundary:
- All Ecto queries live in context modules (Accounts, Billing, Catalog).
- Controllers, LiveViews, and channels call context functions only.
- Schemas hold types + changesets. NO database calls, NO business
rules beyond validation.
Changesets:
- Every write goes through a changeset that validates, casts, and
constrains. cast_assoc / put_assoc for associations.
- Never call Repo.insert with a bare struct or raw attrs; always go
through a changeset.
Transactions:
- Multi-step writes use `Ecto.Multi`. Each step is named; each
takes the accumulator of prior steps; a step failure rolls back
cleanly and returns {:error, step_name, changeset, changes_so_far}.
- Never nest Repo.transaction/2 manually with multiple inserts.
- Inside Repo.transaction/2 with a plain function, rollback happens
only on raise or an explicit Repo.rollback/1; returning {:error, _}
does NOT roll back. Multi avoids the trap.
Query rules:
- Prefer named/reusable query functions (by_user(query, user_id))
over inline Ecto.Query in controllers.
- `Repo.preload` at context boundaries, not scattered across views.
- N+1: never issue a Repo call per element of a list. Ecto has no lazy
loading; a non-preloaded association is %Ecto.Association.NotLoaded{}.
Preload in the original query.
- Never call `|> Repo.all() |> Enum.filter(...)` when a WHERE clause works.
Forbidden: Repo in a LiveView mount; `import Ecto.Query` in a view or
controller; `Repo.get!` in a controller (let the context return
{:error, :not_found}).
Bad — controller queries directly, manual transaction:
def create(conn, %{"order" => params}) do
Repo.transaction(fn ->
order = Repo.insert!(struct(Order, params))
_ = Repo.insert!(%OrderLine{order_id: order.id, ...})
_ = Repo.update!(Ecto.Changeset.change(%Inventory{id: params["item"]}, stock: ...))
order
end)
|> case do
{:ok, order} -> render(conn, "show.json", order: order)
{:error, _} -> send_resp(conn, 500, "")
end
end
Good — context with Ecto.Multi, controller stays thin:
defmodule Shop.Orders do
alias Ecto.Multi
alias Shop.Repo
alias Shop.Orders.{Order, OrderLine, Inventory}
@spec create_order(map()) ::
{:ok, Order.t()} | {:error, :insufficient_stock | Ecto.Changeset.t()}
def create_order(attrs) do
Multi.new()
|> Multi.insert(:order, Order.changeset(%Order{}, attrs))
|> Multi.insert(:line, fn %{order: o} -> OrderLine.changeset(%OrderLine{}, o, attrs) end)
|> Multi.run(:stock, fn repo, %{line: line} ->
Inventory.decrement(repo, line.item_id, line.quantity)
end)
|> Repo.transaction()
|> case do
{:ok, %{order: order}} -> {:ok, order}
{:error, :stock, :insufficient, _} -> {:error, :insufficient_stock}
{:error, _step, changeset, _changes} -> {:error, changeset}
end
end
end
def create(conn, %{"order" => params}) do
case Shop.Orders.create_order(params) do
{:ok, order} -> render(conn, "show.json", order: order)
{:error, :insufficient_stock} -> send_resp(conn, 409, "out of stock")
{:error, %Ecto.Changeset{} = cs} -> render(conn, "error.json", changeset: cs)
end
end
The controller is five lines and tests without a database. The transaction rolls back cleanly on stock failure. The Multi pipeline reads like a recipe.
Rule 6: Structured Concurrency — Task.Supervisor, Timeouts, Linked Tasks
Bare Task.async/1 followed by Task.await/1 works in tutorials. In production it leaks tasks on crash, trips over implicit five-second timeouts, and leaves no way for a supervisor to clean up. AI assistants love Task.async because it looks like JavaScript promises. It isn't — every spawned task is a linked process, and a crash in the task takes down the caller unless you use Task.Supervisor.async_nolink (or trap exits).
The rule:
Concurrency primitives:
- Task.Supervisor.async_stream/4 (or async_stream_nolink/4) for
bounded parallelism over a collection. Always pass max_concurrency.
- Task.Supervisor.start_child/2 for fire-and-forget; async_nolink/2
when the caller wants the result without being linked, observing
failure via handle_info({:DOWN, _, _, _, reason}).
- GenServer.call/3 for request/reply to a named process.
- Task.await/2 ONLY with an explicit timeout and an understanding
that a timeout means the task is still running (or has crashed).
Rules:
- Every external I/O call has an explicit timeout. Never rely on :infinity
or library defaults for network calls.
- Every Task.Supervisor.async_stream has max_concurrency and on_timeout
(:kill_task by default) chosen explicitly.
- Never Task.async_stream WITHOUT a supervisor in library code —
a crash would kill the caller.
- Never Task.async inside a GenServer.handle_call — you've now linked
the task to the GenServer; a task crash crashes the server.
- Use Task.Supervisor.async_stream_nolink when the caller should
survive a task crash (failures surface as {:exit, reason} in the
stream output).
NEVER:
- spawn/1 in library or service code. It's unlinked, unsupervised,
unmonitored — a silent ghost process.
- Task.await without an explicit timeout. The implicit default is
5_000 ms; pick a timeout deliberately, and never pass :infinity.
- Monitor a process and then drop the :DOWN message on the floor;
handle it, or Process.demonitor(ref, [:flush]) when you stop caring.
Bad — unbounded parallelism, no supervisor, crash takes down the caller:
def fetch_all(urls) do
urls
|> Enum.map(&Task.async(fn -> HTTPoison.get!(&1) end))
|> Enum.map(&Task.await/1) # implicit 5-second timeout
end
1000 URLs spawn 1000 linked tasks that all blast the network at once. The first failure crashes the parent process, and any task slower than await's implicit 5-second default exits the caller anyway.
Good — supervised, bounded, timeouts explicit:
@spec fetch_all([String.t()]) :: [{:ok, Response.t()} | {:error, term()}]
def fetch_all(urls) do
Task.Supervisor.async_stream_nolink(
MyApp.TaskSup,
urls,
&fetch_one/1,
max_concurrency: 8,
timeout: 5_000,
on_timeout: :kill_task
)
|> Enum.map(fn
{:ok, result} -> result
{:exit, reason} -> {:error, reason}
end)
end
defp fetch_one(url) do
case Finch.build(:get, url) |> Finch.request(MyApp.Finch, receive_timeout: 4_000) do
{:ok, %Finch.Response{status: s} = resp} when s in 200..299 -> {:ok, resp}
{:ok, %Finch.Response{status: s}} -> {:error, {:http, s}}
{:error, reason} -> {:error, reason}
end
end
Concurrency bounded at 8. Task crashes don't kill the caller. Every HTTP call has a timeout shorter than the task timeout. Timeouts produce a normal error return.
Rule 7: @spec and Dialyzer on Every Public API
Dialyzer is the single most underused tool in the Elixir ecosystem. Run it once and it will find calls that can't possibly succeed, case clauses that are unreachable, functions that always return the error branch, and nil leaking into places that don't handle it. AI-generated code ships without specs because "the pattern match is the documentation," which is fine until someone calls your function with the wrong shape and gets a generic FunctionClauseError at runtime.
The rule:
Every PUBLIC function has a @spec. Private helpers are encouraged but
not required.
Spec conventions:
- Use the most precise types available. t() for struct types,
String.t() for strings, pos_integer() / non_neg_integer() when
appropriate.
- Return types reflect every branch. {:ok, User.t()} | {:error, :not_found}.
- Never `any()` / `term()` unless truly unknown (JSON decoding, reflection).
- Opaque types for domain values: @opaque Money.t(); users interact
via module functions, not struct internals.
Dialyzer:
- Every release includes a Dialyzer run in CI. No new warnings pass.
- Use `mix dialyzer --plt` to build the PLT once; cache the PLT files
in CI (e.g. GitHub Actions) rather than rebuilding them every run.
- @dialyzer {:nowarn_function, ...} is acceptable ONLY with a comment
explaining why (usually a library returning too-broad a spec).
@doc and @moduledoc accompany every public function/module. At minimum
one line. `iex>` examples in @doc are doctests: ExUnit runs them, so
the docs can never drift from the code.
Bad — no spec, implicit contract, dialyzer cannot help:
def charge(user, amount) do
Stripe.create_charge(%{customer: user.customer_id, amount: amount})
end
Does it return {:ok, _} or the charge directly? Does it raise? What happens if amount is a float? What if user.customer_id is nil? Nobody knows without reading the implementation and chasing Stripe.
Good — specs, doc, clear contract:
@typedoc "An amount in the smallest currency unit (cents for USD)."
@type cents :: pos_integer()
@doc """
Charges the user's card for the given amount.
Returns `{:ok, charge}` on success, `{:error, reason}` on failure.
"""
@spec charge(User.t(), cents()) ::
{:ok, Stripe.Charge.t()} | {:error, :no_card | Stripe.Error.t()}
def charge(%User{customer_id: nil}, _amount), do: {:error, :no_card}
def charge(%User{customer_id: cid}, amount) when is_integer(amount) and amount > 0 do
case Stripe.create_charge(%{customer: cid, amount: amount}) do
{:ok, %Stripe.Charge{} = charge} -> {:ok, charge}
{:error, %Stripe.Error{} = e} -> {:error, e}
end
end
Dialyzer catches charge(user, "10.00") at compile time. The :no_card branch is handled in one clause, not a nil inside the function body. The spec IS the contract the AI should write to.
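The doctest convention above, sketched on a hypothetical Pricing module: the iex> example in @doc is executed by ExUnit via doctest, so the documentation cannot silently rot:

```elixir
defmodule Pricing do
  @doc """
  Converts a dollar amount to cents.

      iex> Pricing.to_cents(12.5)
      1250
  """
  @spec to_cents(number()) :: integer()
  def to_cents(dollars), do: round(dollars * 100)
end

# In test/pricing_test.exs:
#
#   defmodule PricingTest do
#     use ExUnit.Case, async: true
#     doctest Pricing   # runs every iex> example in Pricing's @doc
#   end
```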
Rule 8: Phoenix Contexts Are the Domain — Web Layer Stays Thin
Phoenix 1.3 introduced contexts because "everything in the controller" didn't scale. AI assistants, trained on Phoenix 1.0-era tutorials, still treat the controller as the place where business logic lives. The fix is brutal and simple: the controller does three things — parse input, call a context function, render output — and nothing else. Everything else lives in the context (or a LiveView's handle_event / handle_info for reactive flows).
The rule:
Controllers (and LiveView handle_* callbacks) do ONLY:
1. Parse and validate the shape of input (pattern match on params; Plug at the edge).
2. Call ONE context function.
3. Render the result (JSON, HTML, or reply to a LiveView client).
Controllers must NOT:
- Import Ecto.Query.
- Call Repo.
- Contain conditional logic beyond the pattern match of the context's
return value.
- Call a second context function "to enrich" — add a function to the
context that does both.
- Reach across contexts (Users calling Billing directly). Contexts are
coupling boundaries; cross-context work lives in an orchestrator
module.
LiveView discipline:
- mount/3 loads initial data via context calls. Use assign_async /
start_async (rendered with <.async_result>) for slow loads.
- Events flow through handle_event to a context function.
- Pub/sub subscriptions receive via handle_info and update assigns.
- No Ecto in the LiveView. Period.
Routing:
- Route naming follows Phoenix conventions (resources, nested scopes).
- Plug pipelines are composed, not copied.
- `action_fallback` handles {:error, _} uniformly.
Forbidden: business rules in templates; Repo calls in HEEx components;
Plug.Conn assigns leaking into contexts.
Bad — controller with logic, direct DB access, no context in sight:
def update(conn, %{"id" => id, "user" => params}) do
user = Repo.get!(User, id)
if user.id == conn.assigns.current_user.id or conn.assigns.current_user.admin do
changeset = User.changeset(user, params)
case Repo.update(changeset) do
{:ok, user} ->
Logger.info("updated user #{user.id}")
render(conn, "show.json", user: user)
{:error, cs} ->
conn |> put_status(422) |> render("error.json", changeset: cs)
end
else
send_resp(conn, 403, "")
end
end
Good — controller stays thin, context owns the work:
defmodule MyApp.Accounts do
@spec update_user(actor :: User.t(), user_id :: pos_integer(), attrs :: map()) ::
{:ok, User.t()} | {:error, :not_found | :forbidden | Ecto.Changeset.t()}
def update_user(%User{} = actor, user_id, attrs) do
with {:ok, user} <- fetch_user(user_id),
:ok <- authorize(actor, user),
{:ok, user} <- do_update(user, attrs) do
:telemetry.execute([:accounts, :user, :updated], %{count: 1}, %{user_id: user.id})
{:ok, user}
end
end
defp fetch_user(id), do: Repo.get(User, id) |> ok_or(:not_found)
defp authorize(%User{admin: true}, _), do: :ok
defp authorize(%User{id: id}, %User{id: id}), do: :ok
defp authorize(_, _), do: {:error, :forbidden}
defp do_update(user, attrs), do: user |> User.changeset(attrs) |> Repo.update()
defp ok_or(nil, tag), do: {:error, tag}
defp ok_or(value, _), do: {:ok, value}
end
# fallback_controller.ex handles every {:error, _} shape in one place.
def update(conn, %{"id" => id, "user" => params}) do
with {:ok, user} <- Accounts.update_user(conn.assigns.current_user, id, params) do
render(conn, "show.json", user: user)
end
end
The controller is two lines. The context can be unit-tested without a web connection. Authorization is a separate clause that composes. Telemetry fires from the domain, where the event has meaning.
The Complete .cursorrules File for Elixir
Drop this into your repo root as .cursorrules, or split into .cursor/rules/*.mdc files. It's the consolidated version of every rule above plus the tooling defaults.
# Elixir Cursor Rules
## Pattern Matching
- Prefer multi-clause function heads with pattern matching over if/case/cond.
- Use `with` for chained {:ok, _} | {:error, _} operations.
- Destructure in function heads where possible.
- No 6-clause cond — that's three functions in disguise.
## Error Protocol
- Fallible functions return {:ok, value} | {:error, reason}.
- `reason` is an atom or struct — never a bare string.
- Infallible functions return raw values; bang versions raise.
- Never return {:ok, nil} to mean "not found" — use {:error, :not_found}.
- Every public function has a @spec.
## Let It Crash
- Crash on unexpected failure; the supervisor restarts the process.
- try/rescue is for boundaries (Ports, NIFs, top-level handlers) and
re-raised cleanup only.
- Never `rescue _` or `rescue Exception`: a blanket rescue hides bugs that should crash the process.
- Every long-running process lives under a supervisor.
- No bare spawn/Task.async outside one-off scripts.
## GenServer
- GenServer is for long-lived state or serialized resource access.
- Not for pure computation, not for caching, not as a global dictionary.
- init/1 is for SETUP. Slow work moves to {:continue, _} or handle_info.
- Every call has an explicit timeout. State is a typed struct.
- Every GenServer is supervised; child_spec and restart strategy are explicit.
## Ecto and Contexts
- Ecto queries live in context modules, not controllers.
- Schemas hold types + changesets — no DB calls, no business logic.
- Multi-step writes use Ecto.Multi; never nest Repo.transaction manually.
- Preload at context boundaries; name reusable query functions.
- No `import Ecto.Query` in views, controllers, or LiveViews.
## Concurrency
- Use Task.Supervisor.async_stream_nolink with explicit max_concurrency
and timeout.
- Every external I/O call has an explicit timeout — never rely on :infinity.
- No Task.async inside GenServer.handle_call.
- No spawn/1 in library or service code.
## Specs and Dialyzer
- @spec on every public function, using the most precise types.
- CI runs mix dialyzer; no new warnings allowed.
- @doc + @moduledoc on every public function/module.
- Doctests: `iex>` examples in @doc wherever the API lends itself to them.
## Phoenix
- Controllers do three things: parse, call context, render. Nothing else.
- No Repo in controllers, views, LiveViews, or HEEx components.
- action_fallback handles {:error, _} uniformly.
- LiveView mount/3 loads via context; handle_event routes to context;
handle_info updates assigns from pub/sub.
- Cross-context work lives in an orchestrator module.
## Tooling
- `mix format` on every file; formatter config committed.
- `mix credo --strict` in CI.
- `mix dialyzer` in CI; PLT cached.
- Every public module has @moduledoc.
- Tests run `async: true` where state allows.
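A CI gate matching the tooling rules might look like this GitHub Actions fragment (the priv/plts path assumes dialyxir's plt_file is configured accordingly in mix.exs; versions and cache keys are illustrative):

```yaml
# Assumes dialyxir writes its PLT to priv/plts.
- run: mix format --check-formatted
- run: mix credo --strict
- uses: actions/cache@v4
  with:
    path: priv/plts
    key: plt-${{ runner.os }}-${{ hashFiles('**/mix.lock') }}
- run: mix dialyzer
- run: mix test
```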
Real Examples: AI-Generated Code Before and After Rules
Here's what changes in practice when the rules above are loaded into Cursor.
Example 1: "Write a function that creates an order and charges the customer."
Without rules — typical AI output:
def place_order(user_id, params) do
user = Repo.get!(User, user_id)
try do
order = Repo.insert!(%Order{user_id: user.id, total: params["total"]})
case Stripe.charge(user.stripe_id, order.total) do
{:ok, charge} ->
Repo.update!(Ecto.Changeset.change(order, paid: true, charge_id: charge.id))
{:ok, order}
{:error, err} ->
Repo.delete!(order)
{:error, err}
end
rescue
e -> {:error, e}
end
end
Sin count: Repo.get! raises on missing user, manual compensation instead of transaction, rescue e swallows every error including shutdown, no spec, direct Ecto in the domain function without a changeset, no timeout on Stripe, no Multi, bang functions used for control flow.
With rules in .cursorrules — same prompt, idiomatic output:
alias Ecto.Multi
alias MyApp.{Repo, Payments.Stripe}
@spec place_order(pos_integer(), map()) ::
{:ok, Order.t()}
| {:error, :not_found | :charge_failed | Ecto.Changeset.t()}
def place_order(user_id, params) do
with {:ok, user} <- Accounts.fetch_user(user_id),
{:ok, %{order: order}} <- insert_and_charge(user, params) do
{:ok, order}
end
end
defp insert_and_charge(user, params) do
Multi.new()
|> Multi.insert(:order, Order.changeset(%Order{}, Map.put(params, "user_id", user.id)))
|> Multi.run(:charge, fn _repo, %{order: order} ->
case Stripe.charge(user.stripe_id, order.total, timeout: 5_000) do
{:ok, %Stripe.Charge{} = charge} -> {:ok, charge}
{:error, _} = err -> err
end
end)
|> Multi.update(:paid, fn %{order: o, charge: c} ->
Order.mark_paid_changeset(o, c.id)
end)
|> Repo.transaction()
|> case do
{:ok, _} = ok -> ok
{:error, :charge, _reason, _} -> {:error, :charge_failed}
{:error, _step, changeset, _} -> {:error, changeset}
end
end
Transaction rolls back on any failure. Every branch has a tagged-tuple return. Repo.get! is gone. The spec documents every outcome. No try/rescue. Stripe has a timeout.
Example 2: "Fetch 100 product records from 3 upstream APIs in parallel."
With rules:
@spec fetch_catalog([product_id()]) :: [{product_id(), {:ok, Product.t()} | {:error, term()}}]
def fetch_catalog(product_ids) do
pairs = for source <- [A, B, C], id <- product_ids, do: {id, source}
MyApp.TaskSup
|> Task.Supervisor.async_stream_nolink(
pairs,
fn {id, source} -> {id, fetch_one(source, id)} end,
max_concurrency: 12,
timeout: 4_000,
on_timeout: :kill_task
)
|> Enum.zip(pairs)
|> Enum.map(fn
{{:ok, {id, result}}, _pair} -> {id, result}
{{:exit, reason}, {id, _source}} -> {id, {:error, reason}}
end)
end
Bounded concurrency, tasks supervised by the app's Task.Supervisor rather than linked to the caller, per-request timeout shorter than the surrounding task timeout, crashes isolated from the caller.
Get the Full Pack
These eight rules cover the highest-leverage Elixir patterns where AI assistants consistently fail — the ones that turn into silent data loss, unbounded message queues, and 3am pages. Drop them into .cursorrules and you'll see the difference on the very next prompt.
If you want the same depth for Rust, Go, Java, TypeScript, Python, React, Next.js, Phoenix LiveView, and more — all the rules I've packaged from a year of refining Cursor configs across production BEAM codebases — they're all at:
One pack. Twenty-plus languages and frameworks. Battle-tested rules with before/after examples for each. Stop fighting your AI assistant and start shipping idiomatic OTP code on the first try.