DEV Community

Olivia Craft
CLAUDE.md for PostgreSQL: 13 Rules That Make AI Write Safe, Production-Ready SQL

You ask Claude to "add a search endpoint that filters orders by customer email," and 30 seconds later you get back something that looks fine in review:

  • An f-string SQL query that interpolates the email directly into the WHERE clause.
  • A migration that adds a NOT NULL column with a default in one statement on a 50M-row table.
  • A new index dropped in without a single EXPLAIN ANALYZE.
  • A transaction that fans out to a third-party HTTP API between BEGIN and COMMIT.
  • A SELECT * against a table that has a 200KB bytea column nobody mentioned.
  • A DROP COLUMN in the same migration as the code that stopped writing to it — guaranteed pod crashes during the rolling deploy.

The model didn't fail. It pattern-matched on the millions of half-finished tutorials and Stack Overflow answers it was trained on, where none of those concerns existed because the demo table had 12 rows. Production is what makes them all explode.

A CLAUDE.md at the root of your repo fixes this. Claude Code reads it on every task. Cursor, Aider, Codex, and any AI assistant that respects context files do the same. Below are 13 rules I drop into every Postgres project. Each one closes a class of incident I have actually watched happen, and each one is short enough that the model won't quietly skip it.

The full ruleset is also live as a free gist: gist.github.com/oliviacraft/91e8b91ebe06585b27c91769cfb7c252.


Rule 1 — Never Interpolate User Input Into SQL — Always Parameterize

Why: Injection has sat at or near the top of the OWASP Top Ten for two decades for a reason. AI assistants reach for f-strings and template literals because they look cleaner in examples — but the moment that variable comes from a request body, you have SQL injection. Use the driver's parameter binding so the database parses values as values, never as SQL.

Bad:

# Python / psycopg
cur.execute(f"SELECT * FROM users WHERE email = '{email}'")

// Node / pg
client.query(`SELECT * FROM users WHERE email = '${email}'`)

Good:

# Python / psycopg — placeholders, driver binds the value
cur.execute("SELECT id, email, display_name FROM users WHERE email = %s", (email,))

// Node / pg — $1, $2 positional placeholders
client.query("SELECT id, email, display_name FROM users WHERE email = $1", [email])

Rule for CLAUDE.md:

All SQL parameters are bound through the driver's placeholder syntax
(%s for psycopg, $1/$2 for node-postgres, ? for many ORMs).
Never interpolate user input — or *any* runtime value — into a SQL string.
String concatenation is reserved for static identifiers (table names) that come
from a hardcoded allowlist, never from a request.
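The allowlist rule for identifiers can be sketched in a few lines. `ALLOWED_TABLES` and `build_lookup` are hypothetical names for illustration; the point is that identifiers are checked against a hardcoded set before concatenation, while values always stay bound parameters:

```python
# Hypothetical helper enforcing the rule above: identifiers come from a
# hardcoded allowlist; values always go through driver placeholders.
ALLOWED_TABLES = {"users", "orders", "invoices"}  # static, never from a request

def build_lookup(table: str, email: str) -> tuple[str, tuple]:
    if table not in ALLOWED_TABLES:
        raise ValueError(f"table {table!r} is not in the allowlist")
    # The identifier is concatenated only AFTER the allowlist check;
    # the email stays a bound parameter for the driver to escape.
    return f"SELECT id, email FROM {table} WHERE email = %s", (email,)
```

Pass the returned query and params straight to `cur.execute(query, params)`; the value never touches the SQL string.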

Rule 2 — Connect Through PgBouncer, Not Directly to Postgres

Why: Postgres allocates roughly 10 MB per backend and starts falling over around max_connections = 200. A serverless app that opens a fresh connection per request will exhaust the pool in minutes. AI tools default to "make a pg.Pool in the handler" because that's what the readme shows — fine for a side project, catastrophic at 1,000 RPS. Front every production cluster with a transaction-mode pooler.

Bad:

DATABASE_URL=postgres://app:****@db.prod.internal:5432/app
# Lambda / serverless: every cold start opens a fresh backend.
# Traffic spike → 800 connections → Postgres OOMs.

Good:

# App → PgBouncer (6432) → Postgres (5432)
DATABASE_URL=postgres://app:****@pgbouncer.prod.internal:6432/app
# pgbouncer.ini
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 25
reserve_pool_size = 5
server_idle_timeout = 60

Transaction-mode pooling breaks session-level features (SET, LISTEN, server-side prepared statements). Use pool_mode = session if your app depends on them, or — better — disable prepared statements at the driver layer.

Rule for CLAUDE.md:

Production apps connect via PgBouncer on port 6432 — never direct to Postgres.
`pool_mode = transaction` is the default; document any pool that needs `session` mode.
Driver-side pools are sized small (5–10) — PgBouncer owns the real pooling.
Disable server-side prepared statements when using transaction pooling
(psycopg: `prepare_threshold=None`; node-postgres: avoid named queries, since
passing `name` to `query()` creates a server-side prepared statement).

Rule 3 — Add Columns Nullable First — Never NOT NULL Without a Default in One Shot

Why: ALTER TABLE ... ADD COLUMN x int NOT NULL rewrites the entire table under an ACCESS EXCLUSIVE lock. On a 100M-row table that's a 30-minute outage. Even on Postgres 11+, where adding a column with a constant default is metadata-only, the AI will still write a default that triggers a rewrite (a now() expression, a subquery) without realizing it. The safe pattern is always: add nullable, backfill, enforce.

Bad:

-- Looks fine. On Postgres <= 10 this rewrites the whole table under an
-- ACCESS EXCLUSIVE lock; on 11+ a volatile default (e.g. now()) still does.
ALTER TABLE orders ADD COLUMN region text NOT NULL DEFAULT 'us-east';

Good:

-- 1. Add nullable. Metadata-only change, instant.
ALTER TABLE orders ADD COLUMN region text;

-- 2. Backfill in batches, off-peak, separate transactions.
UPDATE orders SET region = 'us-east' WHERE id BETWEEN 1 AND 100000 AND region IS NULL;
-- ... repeat in batches ...

-- 3. Once every row is populated, enforce.
ALTER TABLE orders ALTER COLUMN region SET NOT NULL;

-- 4. (Optional) Add the default for new inserts.
ALTER TABLE orders ALTER COLUMN region SET DEFAULT 'us-east';
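The batching in step 2 is mechanical enough to generate. A minimal sketch (`backfill_batches` is a hypothetical helper, not part of any migration framework) that yields the id ranges for each per-transaction UPDATE:

```python
def backfill_batches(max_id: int, batch_size: int = 50_000):
    """Yield (start, end) id ranges for the batched UPDATE in step 2.

    Each range runs in its own transaction:
        UPDATE orders SET region = 'us-east'
        WHERE id BETWEEN %s AND %s AND region IS NULL;
    """
    start = 1
    while start <= max_id:
        yield start, min(start + batch_size - 1, max_id)
        start += batch_size
```

Small, separate transactions keep lock durations short and let autovacuum keep up between batches.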

Rule for CLAUDE.md:

Adding a column to a table > 1M rows is a 3-step migration:
  1. ADD COLUMN nullable.
  2. Backfill in batches of ≤ 50k rows per transaction, off-peak.
  3. ALTER COLUMN ... SET NOT NULL after backfill is verified.
Never combine ADD COLUMN + NOT NULL + DEFAULT (non-constant) in one statement.
For Postgres 11+, a constant DEFAULT is safe; volatile defaults (now(), gen_random_uuid()) still rewrite.

Rule 4 — Never DROP COLUMN in the Same Release as the Code That Stops Using It

Why: Rolling deploys mean old and new pods serve traffic side by side for minutes. If the migration drops the column before the old pods are gone, every old-pod query that referenced that column fails. AI assistants love to bundle "code change + migration" in one PR because the diff looks tidy. It is also exactly how you get a 4-minute outage at 14:02 on a Tuesday. Use a multi-release expand/contract.

Bad:

-- Same PR as code that stopped reading legacy_status.
ALTER TABLE orders DROP COLUMN legacy_status;

Good:

-- Release N:    code stops WRITING to legacy_status (still reads as fallback).
-- Release N+1:  code stops READING legacy_status entirely. Verify in metrics.
-- Release N+2:  migration drops the column.
ALTER TABLE orders DROP COLUMN legacy_status;

For renames, use the same pattern: add new column, dual-write, backfill, switch reads, drop old.

Rule for CLAUDE.md:

DROP COLUMN and column renames ship across THREE releases:
  N:   stop writing to the column (or start dual-writing the new one).
  N+1: stop reading the column. Confirm zero references in app + analytics.
  N+2: migration removes the column.
Never DROP COLUMN in the same PR as the code change that stops using it.
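Release N's "still reads as fallback" path is easy to get subtly wrong. A minimal sketch: `legacy_status` comes from the example above, `status` is a hypothetical replacement column, and the dict stands in for a fetched row:

```python
def order_status(row: dict) -> str:
    """Release N read path: prefer the new column, fall back to the old one.

    `status` is the hypothetical new column; `legacy_status` is the column
    being retired. Only in release N+1 does the fallback branch get deleted.
    """
    status = row.get("status")
    if status is not None:
        return status
    return row.get("legacy_status", "unknown")
```

Once metrics confirm the fallback branch never fires, release N+1 deletes it, and N+2 drops the column.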

Rule 5 — Always EXPLAIN ANALYZE Before Adding (or Removing) an Index

Why: Indexes are not free. Every index slows INSERT, UPDATE, and DELETE, consumes disk, and bloats WAL. AI tools add indexes like seasoning — "this query feels slow, let's add an index on (customer_id, created_at, status)" — without checking what the planner is actually doing or whether an existing index already covers the access path. Verify, build concurrently, re-verify.

Bad:

-- Vibes-based indexing. May duplicate an existing index. May not help.
CREATE INDEX ON orders (customer_id, created_at, status);

Good:

-- 1. Look at the actual plan.
EXPLAIN (ANALYZE, BUFFERS)
  SELECT id, total FROM orders WHERE customer_id = $1 ORDER BY created_at DESC LIMIT 20;

-- 2. List existing indexes — you may already have one that covers this.
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'orders';

-- 3. Build without locking writes.
CREATE INDEX CONCURRENTLY idx_orders_customer_created
  ON orders (customer_id, created_at DESC);

-- 4. Re-run EXPLAIN ANALYZE — confirm the planner picked it.

Rule for CLAUDE.md:

Every new index ships with: (a) the EXPLAIN ANALYZE before, (b) the EXPLAIN ANALYZE after,
included in the PR description. No index is added on a hunch.
Use `CREATE INDEX CONCURRENTLY` for tables > 100k rows — never plain CREATE INDEX in prod.
Use `DROP INDEX CONCURRENTLY` for the same reason.
Before adding, query pg_indexes to confirm an equivalent doesn't already exist.
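The "equivalent index already exists" check from step 2 can be partly automated. A sketch (hypothetical helper; it assumes you have already parsed `pg_indexes.indexdef` into ordered column lists): a B-tree index whose columns are a strict leading prefix of another index on the same table is usually redundant.

```python
def redundant_indexes(indexes: dict[str, list[str]]) -> set[str]:
    """Return index names whose column list is a strict leading prefix of
    another index on the same table (usually redundant for B-tree lookups).

    `indexes` maps index name -> ordered column list.
    """
    redundant = set()
    for name, cols in indexes.items():
        for other, other_cols in indexes.items():
            if (other != name
                    and len(other_cols) > len(cols)
                    and other_cols[:len(cols)] == cols):
                redundant.add(name)
    return redundant
```

This is a heuristic, not a verdict: a prefix index can still be worth keeping if the wider index is partial, expression-based, or has a different sort order. EXPLAIN ANALYZE remains the final arbiter.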

Rule 6 — Wrap Multi-Statement Work in Explicit Transactions — Keep Them Short

Why: A transaction holds locks until commit. Code that does BEGIN, calls a third-party HTTP API, then COMMIT will hold row locks for hundreds of milliseconds (or seconds, on a slow Stripe day) and block every other writer touching those rows. The fix is mechanical: do all external I/O before opening the transaction, then commit fast.

Bad:

cur.execute("BEGIN")
cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", (amt, src))
response = requests.post("https://payments.api/charge", json={...})  # network I/O inside the tx
cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", (amt, dst))
cur.execute("COMMIT")

Good:

# All external I/O happens BEFORE the transaction.
charge_id = payments_api.charge(amount=amt, source=src)

with conn.transaction():
    cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", (amt, src))
    cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", (amt, dst))
    cur.execute(
        "INSERT INTO ledger (charge_id, amount, src, dst) VALUES (%s, %s, %s, %s)",
        (charge_id, amt, src, dst),
    )
# Commits and releases locks immediately.

Rule for CLAUDE.md:

Transactions wrap database work only — never network calls, file I/O, or human input.
Pattern: do all external work first, then open a short transaction to persist results.
Target transaction duration: < 50ms p99. Long transactions block VACUUM and bloat the DB.
Use SAVEPOINT for partial rollback; never simulate it with try/except + DELETE.
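The SAVEPOINT mentioned in the rule looks like this in plain SQL; `ledger` and `bonus_credits` are hypothetical tables used only for illustration:

```sql
BEGIN;
INSERT INTO ledger (charge_id, amount) VALUES ('ch_123', 100);

SAVEPOINT bonus;
INSERT INTO bonus_credits (charge_id, amount) VALUES ('ch_123', 5);
-- If the bonus insert fails, undo only back to the savepoint,
-- not the whole transaction:
ROLLBACK TO SAVEPOINT bonus;

COMMIT;  -- the ledger insert still commits
```

In psycopg 3, nesting `with conn.transaction():` blocks gives the same behavior: the inner block maps to a savepoint.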

Rule 7 — Run Schema Changes Inside a Transaction — and Know Which DDL Can't

Why: Most Postgres DDL is transactional, which is a superpower compared to MySQL: wrap the migration in BEGIN / COMMIT and a failure rolls back cleanly. AI-generated migrations often skip the BEGIN and you end up with half-applied state at 3am — column added, constraint failed, no way to retry without manual cleanup. The two exceptions — CREATE INDEX CONCURRENTLY and ALTER TYPE ... ADD VALUE (in older versions) — must run outside any transaction block. Document both.

Bad:

-- No BEGIN: each statement commits on its own. If any step fails,
-- you're left half-applied with no clean way to retry.
ALTER TABLE orders ADD COLUMN region text;
ALTER TABLE orders ADD CONSTRAINT region_check CHECK (region IN ('us','eu','ap'));
UPDATE orders SET region = 'xx';  -- violates the constraint, but the column is already there.

Good:

BEGIN;
ALTER TABLE orders ADD COLUMN region text;
ALTER TABLE orders ADD CONSTRAINT region_check CHECK (region IN ('us','eu','ap'));
COMMIT;

-- Concurrent index — must be its own statement, NOT inside a transaction.
CREATE INDEX CONCURRENTLY idx_orders_region ON orders (region);

Rule for CLAUDE.md:

Every migration file wraps its DDL in BEGIN / COMMIT (or the framework's equivalent).
Statements that cannot run in a transaction live in their own migration file:
  - CREATE INDEX CONCURRENTLY
  - DROP INDEX CONCURRENTLY
  - ALTER TYPE ... ADD VALUE (Postgres < 12)
  - REINDEX CONCURRENTLY
The migration filename or header comment makes the non-transactional nature explicit.

Rule 8 — Enable pg_stat_statements and Treat It as the Source of Truth for Slow Queries

Why: Application timing tells you which endpoint is slow. pg_stat_statements tells you which SQL statement is burning your CPU and IO budget — across every app, cron job, and ad-hoc psql session. AI assistants will happily add a Datadog histogram and call it a day; the database itself already has the data, you just have to turn on the extension.

Bad: guess at slow queries from APM alone, then optimize whichever query happened to be in the trace.

Good:

# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
pg_stat_statements.max = 10000
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 20 by total time — these are the queries to fix first.
SELECT
  substring(query, 1, 80) AS query,
  calls,
  round(total_exec_time::numeric, 0)        AS total_ms,
  round(mean_exec_time::numeric, 2)         AS mean_ms,
  round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 1) AS pct
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

-- Reset weekly so the data reflects current behavior, not three months ago.
SELECT pg_stat_statements_reset();

Rule for CLAUDE.md:

pg_stat_statements is enabled in every environment, including local dev.
Performance investigations start with `pg_stat_statements ORDER BY total_exec_time DESC`,
not with APM traces. APM tells you the endpoint; pg_stat_statements tells you the query.
The stats are reset on a known schedule (weekly) so historical comparisons are meaningful.

Rule 9 — Use JSONB (Not JSON) and Index It With GIN

Why: json stores raw text and re-parses it on every read. jsonb is binary, deduplicates keys, supports indexing, and exposes the @>, ?, ?& operators for real querying. The only correct choice for new columns is jsonb. AI tools often pick json because the docs example uses it, then wonder why the query is doing a sequential scan over 50M rows.

Bad:

CREATE TABLE events (id bigserial PRIMARY KEY, payload json);
-- Sequential scan, parses every row:
SELECT * FROM events WHERE payload->>'user_id' = '42';

Good:

CREATE TABLE events (id bigserial PRIMARY KEY, payload jsonb NOT NULL);

-- GIN index on the whole document — supports @> containment.
CREATE INDEX idx_events_payload ON events USING gin (payload);

-- Containment query uses the GIN index.
SELECT id, payload FROM events WHERE payload @> '{"user_id": 42}'::jsonb;

-- Or a targeted expression index for one hot key.
CREATE INDEX idx_events_user_id ON events ((payload->>'user_id'));

Rule for CLAUDE.md:

New JSON columns use jsonb, never json. (Migrate existing json columns when touched.)
For containment / existence queries, add a GIN index on the column.
For one hot key, prefer a B-tree expression index on `(payload->>'key')` — smaller and faster.
Validate jsonb shape with a CHECK constraint or a trigger if the schema is fixed
("the application owns the schema" is not a constraint Postgres knows about).
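The CHECK-constraint approach from the last line can be sketched like this, assuming the `events` table from the example above and a schema that requires a numeric `user_id` (adjust keys and types to your actual payload shape):

```sql
ALTER TABLE events
  ADD CONSTRAINT payload_has_user_id
  CHECK (payload ? 'user_id'
         AND jsonb_typeof(payload->'user_id') = 'number');
```

The `?` existence operator and `jsonb_typeof` only work on jsonb, which is one more reason the column type matters.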

Rule 10 — Enforce Row-Level Security on Multi-Tenant Tables

Why: A single forgotten WHERE tenant_id = ? leaks every other tenant's data. RLS makes the database itself enforce the boundary, so a missing predicate fails closed instead of leaking. AI tools generate code that filters in the application layer because that's what most tutorials show — and that's exactly the layer that gets bypassed by the next quick fix. Defense in depth: app filters, and the database enforces.

Bad: trust that every query in every code path will remember to filter by tenant.

# One forgotten WHERE → cross-tenant leak.
db.execute("SELECT id, total FROM invoices ORDER BY created_at DESC LIMIT 50")

Good:

ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
ALTER TABLE invoices FORCE ROW LEVEL SECURITY;  -- applies even to the table owner

CREATE POLICY tenant_isolation ON invoices
  USING (tenant_id = current_setting('app.tenant_id')::uuid);
# App sets the tenant once per request, before any query.
# (SET cannot take bound parameters — use set_config() instead.)
with conn.transaction():
    conn.execute("SELECT set_config('app.tenant_id', %s, true)", (str(request.tenant_id),))
    rows = conn.execute("SELECT id, total FROM invoices ORDER BY created_at DESC LIMIT 50").fetchall()
# Even without the WHERE, RLS scopes the result.

Rule for CLAUDE.md:

Every multi-tenant table has ROW LEVEL SECURITY enabled AND forced.
A USING policy ties row visibility to a session GUC (`app.tenant_id`, `app.user_id`).
The web layer sets the GUC inside the request transaction, before any query runs.
RLS is defense in depth — application code STILL filters, but the DB is the final guard.
Migration test suite includes a "wrong-tenant cannot read" assertion per RLS-protected table.

Rule 11 — Never SELECT * in Application Code — List the Columns You Need

Why: SELECT * ships every column over the wire — including the bytea blob, the text payload, and the deprecated columns nobody told the AI about. It breaks ORM caches when columns are added. It makes index-only scans impossible. And when somebody adds a 200KB column for "internal use," your latency p99 quietly doubles. List the columns.

Bad:

SELECT * FROM users WHERE id = $1;

Good:

SELECT id, email, display_name, created_at
FROM users
WHERE id = $1;

The narrower projection lets the planner satisfy the query from a covering index without touching the heap, and the diff stays stable when somebody adds a new column.

Rule for CLAUDE.md:

Application SQL lists explicit columns — `SELECT *` is forbidden in app code.
ORMs that default to SELECT * (Active Record, some SQLAlchemy patterns) override
with .select(:id, :email, ...) or load_only(...) on hot paths.
SELECT * is acceptable in ad-hoc psql, dump scripts, and migration verification — never in shipped code.

Rule 12 — Route Reads to a Replica Only If the Code Can Tolerate Replication Lag

Why: Read replicas multiply read throughput, but they're asynchronous — a row written to the primary may be 50ms (or 5s, on a busy day) behind on the replica. A "read after write" against the replica can return zero rows for a record you just inserted. The pattern AI tools generate by default — "always use the read pool for SELECTs" — silently breaks user-visible flows. Be explicit per call site.

Bad:

# Race: the SELECT can return zero rows because the replica hasn't caught up.
new_id = db.write.execute(
    "INSERT INTO orders (...) VALUES (...) RETURNING id"
).scalar()
order = db.read.execute("SELECT * FROM orders WHERE id = %s", (new_id,)).fetchone()

Good:

# Read-your-own-writes: stay on the primary inside the request.
with db.primary() as conn:
    new_id = conn.execute(
        "INSERT INTO orders (customer_id, total) VALUES (%s, %s) RETURNING id",
        (customer_id, total),
    ).scalar()
    order = conn.execute(
        "SELECT id, customer_id, total, created_at FROM orders WHERE id = %s",
        (new_id,),
    ).fetchone()

# Replica is fine for cross-request, eventually-consistent reads.
recent = db.replica.execute(
    "SELECT id, total FROM orders WHERE user_id = %s ORDER BY created_at DESC LIMIT 50",
    (user_id,),
).fetchall()
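One common middle ground between "always primary" and per-call-site routing is session pinning: after a write, the same session reads from the primary for a short window. A sketch (`ReadRouter` and `pin_seconds` are hypothetical names, not a library API):

```python
import time

class ReadRouter:
    """After a write, pin the session's reads to the primary for a short
    window; otherwise route to the replica.

    `pin_seconds` should comfortably exceed your observed replication lag.
    """
    def __init__(self, pin_seconds: float = 5.0):
        self.pin_seconds = pin_seconds
        self._last_write: dict[str, float] = {}

    def record_write(self, session_id: str) -> None:
        self._last_write[session_id] = time.monotonic()

    def pool_for_read(self, session_id: str) -> str:
        last = self._last_write.get(session_id)
        if last is not None and time.monotonic() - last < self.pin_seconds:
            return "primary"
        return "replica"
```

Pinning trades a little primary load for correctness; explicit per-call-site routing, as in the examples above, is still clearer to review.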

Rule for CLAUDE.md:

Reads after writes in the same request stay on the primary.
The replica handles cross-request, list / dashboard, and analytics reads.
Code paths that hit the replica annotate the call site (comment or named pool)
so reviewers can spot a "read-after-write on the replica" mistake instantly.
Replication lag is monitored; pages fire above a defined threshold (e.g. 5s).

Rule 13 — Let Autovacuum Work — and Watch pg_stat_user_tables When It Can't Keep Up

Why: Postgres MVCC creates dead tuples on every UPDATE and DELETE. Autovacuum reclaims them. If autovacuum falls behind, tables bloat, indexes bloat, queries slow down, and eventually you reach for VACUUM FULL at 4am. AI tools sometimes "solve" autovacuum CPU usage by disabling it on the noisy table — which guarantees the 4am page. Don't disable. Tune per-table thresholds and monitor.

Bad:

-- "Autovacuum is using too much CPU" → disable it on the hot table.
ALTER TABLE orders SET (autovacuum_enabled = false);
-- Three weeks later: 80% bloat, queries 10x slower, emergency VACUUM FULL needed.

Good:

-- Tune thresholds for write-heavy tables instead of disabling.
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor  = 0.05,   -- vacuum after 5% of rows change
  autovacuum_analyze_scale_factor = 0.02,
  autovacuum_vacuum_cost_limit    = 2000    -- give vacuum more IO budget on this table
);

-- Monitor what's getting starved.
SELECT relname,
       n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 1) AS dead_pct,
       last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY dead_pct DESC NULLS LAST
LIMIT 20;

Rule for CLAUDE.md:

Autovacuum is never disabled — not on hot tables, not "temporarily."
Hot tables get per-table tuning (autovacuum_vacuum_scale_factor, autovacuum_analyze_scale_factor,
autovacuum_vacuum_cost_limit) — document the values and the reasoning.
A weekly job alerts on any table with dead_pct > 20% or last_autovacuum > 7 days.
VACUUM FULL is a break-glass operation: locks the whole table, requires a maintenance window.
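The weekly alert check is a one-liner over the `pg_stat_user_tables` counters from the monitoring query above. A sketch (`needs_vacuum_alert` is a hypothetical helper name):

```python
def needs_vacuum_alert(n_live_tup: int, n_dead_tup: int,
                       threshold_pct: float = 20.0) -> bool:
    """Flag a table whose dead-tuple percentage, dead / (live + dead),
    exceeds the alert threshold from the rule above."""
    total = n_live_tup + n_dead_tup
    if total == 0:
        return False
    return 100.0 * n_dead_tup / total > threshold_pct
```

Feed it each row from the monitoring query and page on any `True`, alongside the `last_autovacuum > 7 days` check.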

How to Use These Rules

  1. Drop a CLAUDE.md at the root of the repo, next to your migrations/ and src/.
  2. Paste the rules above. Keep what fits, edit what doesn't, add anything specific to your stack (which ORM, which migration framework, which connection pooler, which monitoring vendor).
  3. Restart Claude Code in the project so it picks up the new context file.

CLAUDE.md is a per-repo contract. Vague guidance ("write safe SQL") gets ignored. Concrete guidance ("pool_mode = transaction via PgBouncer on 6432, RLS forced on every multi-tenant table, no SELECT * in app code, every new index ships with EXPLAIN ANALYZE before and after") changes every output the model produces.

The same file works for Cursor, Aider, Codex, Copilot Workspace, and any AI assistant that respects context files. Symlink it to .cursorrules and AGENTS.md if you want belt-and-braces coverage across tools.


Why This Pays Off Fast

Every rule above traces to a real production incident from an AI-generated PR. A SELECT * that doubled p99 latency the day a bytea column was added. A NOT NULL DEFAULT migration that took an order-management table offline for 28 minutes. A DROP COLUMN that crashed every old pod during a 4-minute rolling deploy. A multi-tenant query that forgot a single WHERE tenant_id = ? and shipped one customer's invoices to another customer's dashboard.

You can keep catching these in code review forever. Or you can write a CLAUDE.md, drop it at the repo root, and stop seeing 80% of them.


Want the Full Pack?

These 13 rules are one chapter of the CLAUDE.md Rules Pack — 35+ stacks (Go, Rust, Python, FastAPI, Next.js, React Native, Terraform, Docker, Kubernetes, PostgreSQL, and more) of production-tested AI guardrails, packaged as drop-in CLAUDE.md files.

Get the full pack on Gumroad: oliviacraftlat.gumroad.com/l/skdgt — one-time payment, lifetime updates.

Free Postgres gist with all 13 rules → gist.github.com/oliviacraft/91e8b91ebe06585b27c91769cfb7c252

Olivia · @OliviaCraftLat
