Production Convex: What 70 Modules Looks Like
What I learned building one Convex backend for web, mobile, and API
When I started building Client Commander, I had one Next.js app and maybe ten tables. A year later, it's serving a web dashboard, a mobile app, a background sync service, and a full REST API. 50+ tables. 70+ modules. Same deployment.
I didn't plan for any of that. I just kept building, and the architecture held up. That surprised me.
Here's how it happened.
What We Built
Client Commander is a multi-tenant CRM. Companies sign up, add their team, manage contacts and deals. Nothing revolutionary about the domain — but the technical requirements add up fast: permissions, real-time sync, mobile, API access, background jobs.
Quick terminology so the rest makes sense:
- Agents: The users (employees) using the system.
- Contacts: The people/customers being tracked.
- Companies: The tenants (customers of the SaaS).
Permissions That Update Themselves
Change someone's role, and their dashboard updates in real-time. They see different data instantly — no refresh, no logout, no waiting.
I didn't build that. I just wrote a normal permission check at the top of my queries, and it worked.
Took me a while to understand why. Convex queries are subscriptions, not one-off requests. They keep running — every time the underlying data changes, the query re-evaluates. So my permission check runs again when contacts change. It runs again when the user's role changes. Same code, always current.
const { agentId, companyId, role } = await getAuthContext(ctx);
if (!hasPermission(role, "contacts.view")) {
  throw new Error("Forbidden");
}
return await ctx.db.query("contacts")
  .withIndex("by_company", (q) => q.eq("companyId", companyId))
  .collect();
Adding new permission levels was just config after that. Owner sees everything. Team Leader sees their team. Agent sees only their own contacts. New roles? Add them to the config. Queries stay the same.
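For illustration, the config behind hasPermission can be as small as a role-to-permissions map. This is a sketch; the role names and permission strings below are placeholders, not the app's real ones.
// Illustrative sketch of a role-to-permissions map; the actual roles and
// permission strings in Client Commander may differ.
const ROLE_PERMISSIONS: Record<string, string[]> = {
  owner: ["contacts.view", "contacts.edit", "deals.view", "deals.edit"],
  team_leader: ["contacts.view", "contacts.edit", "deals.view"],
  agent: ["contacts.view"],
};

function hasPermission(role: string, permission: string): boolean {
  return ROLE_PERMISSIONS[role]?.includes(permission) ?? false;
}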
The part that impressed me: Convex tracks which data each query actually touches. My query reads the user's role, so the platform knows that query depends on that role record. Change the role, and only queries that care about it re-run — not everything. I didn't set up subscriptions or configure channels. It figured that out from the code.
With most backends, you'd need WebSocket infrastructure, cache invalidation, push logic — a whole thing. Here it's just... the default.
Adding Mobile Without Adding Backend
We shipped a mobile app. Zero new backend code.
But honestly, the time savings weren't the main thing. The main thing was this: we stopped discovering "mobile is out of sync" from user bug reports. We started catching it before the code even compiled.
The Expo app imports the exact same API as the Next.js app. Same queries, same mutations, same types. Fix a bug in a query? Fixed on both. Change the schema? Both apps break until they handle it — during development, not after users complain.
// Same import, whether you're in Next.js or Expo
import { api } from "@workspace/backend/convex/_generated/api";
const contacts = useQuery(api.contacts.list);
const createContact = useMutation(api.contacts.create);
Convex generates typed APIs from your schema. That generated code is the contract. Write a query, and it creates types for the arguments and return value. Call it from React — web or mobile, doesn't matter — you get autocomplete and type checking.
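To make that concrete, here's roughly what the backend side of api.contacts.list could look like. It's a minimal sketch that reuses the auth helpers from the permissions section; the import path for those helpers is an assumption, not the app's actual layout.
// convex/contacts.ts (sketch). Defining the query is what generates the typed
// api.contacts.list entry that both the web and mobile apps import.
import { query } from "./_generated/server";
// Assumed location of the helpers shown in the permissions section.
import { getAuthContext, hasPermission } from "./lib/permissions";

export const list = query({
  args: {},
  handler: async (ctx) => {
    const { companyId, role } = await getAuthContext(ctx);
    if (!hasPermission(role, "contacts.view")) {
      throw new Error("Forbidden");
    }
    return await ctx.db.query("contacts")
      .withIndex("by_company", (q) => q.eq("companyId", companyId))
      .collect();
  },
});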
The monorepo makes it work. Backend is a shared package. Both apps depend on it. Schema is the single source of truth.
Add a field? Both apps get type errors. Rename a query? TypeScript shows you every callsite that needs updating — across platforms, in one compile. I've caught so many things this way that would've been production bugs otherwise.
Most cross-platform setups have duplicated types, separate clients, manual syncing. Here, the generated API handles the sync. Change the source, and the types change everywhere. No process to remember; if you miss a callsite, the build just breaks.
Real-Time for Users, REST for Everything Else
Real-time is great for dashboards. But external integrations don't speak WebSocket. They want REST. Webhooks need HTTP endpoints.
So we added a REST API — 40+ endpoints. No separate service. The HTTP layer just authenticates, rate limits, then calls the same functions that power the UI.
http.route({
  path: "/v1/contacts",
  method: "GET",
  handler: httpAction(async (ctx, request) => {
    const auth = await verifyApiKey(request);
    if (!auth.ok) return auth.response;

    const contacts = await ctx.runQuery(internal.contacts.list, {
      companyId: auth.companyId,
    });

    return new Response(JSON.stringify(contacts), {
      status: 200,
      headers: { "Content-Type": "application/json" },
    });
  }),
});
The nice part: ctx.runQuery inside an HTTP action runs the exact same code as the real-time subscriptions. I'm not maintaining two implementations — the REST endpoint is just a thin wrapper around stuff that already exists.
Webhooks work the same way. Payment provider sends a POST, I verify the signature, call a mutation. Same mutation the UI calls. One code path.
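A sketch of that shape, with the route path, signature helper, and mutation name standing in for the real ones:
// Hypothetical webhook route; verifySignature and the mutation name are
// placeholders for the app's actual ones.
http.route({
  path: "/webhooks/payments",
  method: "POST",
  handler: httpAction(async (ctx, request) => {
    const payload = await request.text();
    if (!(await verifySignature(request, payload))) {
      return new Response("Invalid signature", { status: 401 });
    }

    // Same mutation the UI calls: one code path.
    await ctx.runMutation(internal.billing.applyPaymentEvent, {
      event: JSON.parse(payload),
    });

    return new Response(null, { status: 200 });
  }),
});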
No separate API server. No connection pooling headaches. HTTP routes deploy with everything else. We went from "we need an API" to live endpoints in a day.
Workflows That Outlive Deployments
We have workflows that wait three days before executing the next step. Some wait a week.
Here's the thing: I don't lose them when I deploy. I don't wake up to half-finished workflows. I don't build state machines to track what step we're on.
They just continue. Server restarts, new deployment happens, doesn't matter. The delay finishes, the next step runs, picks up exactly where it left off.
If you've ever used scheduler.runAfter, you know it works for one-off delayed functions. But when you need a chain — wait, then do X, then wait again, then do Y — suddenly you're managing state. What if step 2 fails? How do you know step 1 finished? How do you retry?
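For comparison, the one-off version is a single call (reusing the follow-up email function from the workflow example below):
// A single delayed call with the built-in scheduler. Fine on its own,
// but chaining several of these means tracking the state yourself.
await ctx.scheduler.runAfter(
  3 * 24 * 60 * 60 * 1000, // 3 days
  internal.emails.sendFollowUp,
  { userId }
);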
The workflow component handles that. Each step gets recorded. If something restarts mid-execution, it replays from where it stopped — skipping steps that already ran.
const myWorkflow = workflow.define({
  args: { userId: v.id("users") },
  handler: async (step, { userId }) => {
    await step.runMutation(internal.users.markOnboardingStarted, { userId });

    await step.runMutation(
      internal.emails.sendFollowUp,
      { userId },
      { runAfter: 3 * 24 * 60 * 60 * 1000 } // 3 days
    );

    await step.runMutation(internal.users.checkEngagement, { userId });
  },
});
Each step is checkpointed. Server restarts after step 2 is scheduled? Fine. It recovers and picks up where it was.
I build automations that span weeks now without worrying about them. Onboarding sequences, trial expirations — they just run. No job queue to maintain. No polling for stuck jobs. No "what state is this in?" debugging at 2am.
Search Without the Infrastructure
We needed search. Full-text, on contacts. My first thought was "okay, time to figure out Elasticsearch."
Nope. Three lines in the schema.
contacts: defineTable({
  companyId: v.id("companies"),
  fullName: v.string(),
})
  .searchIndex("search_name", {
    searchField: "fullName",
    filterFields: ["companyId"],
  })
That's it. Search works. Filters by company. Ranks by relevance. Deploy, it's live.
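Querying it is one more builder call. A minimal sketch, assuming term comes from the query's arguments and companyId from the auth context:
// Sketch: full-text search on contacts, scoped to the current company.
const results = await ctx.db
  .query("contacts")
  .withSearchIndex("search_name", (q) =>
    q.search("fullName", term).eq("companyId", companyId)
  )
  .take(20);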
Same pattern kept repeating. Need to find contacts by phone number? Add an index. Look up deals by stage? Index. Sort by next task due date? Store it denormalized and index it.
contactPhones: defineTable({
  contactId: v.id("contacts"),
  value: v.string(),
})
  .index("by_value", ["value"]),

contacts: defineTable({
  // ...
  nextTaskDueAt: v.optional(v.number()),
})
  .index("by_company_nextTask", ["companyId", "nextTaskDueAt"])
Normally you'd be deciding: which search service, how to sync data, how to handle the lag between your database and search index. Here, search indexes update transactionally with your data. No sync. No eventual consistency weirdness.
The trade-off: I denormalize more than I would elsewhere. fullName gets computed and stored. nextTaskDueAt gets copied from tasks to contacts. Writes get a bit messier. But queries stay fast, and I don't manage infrastructure.
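As an example of what "messier writes" looks like, a task mutation has to keep nextTaskDueAt current. This is a sketch under assumptions: the tasks table fields (contactId, dueAt, completed) and the by_contact_dueAt index are illustrative, not the app's actual schema.
// Sketch: completing a task recomputes the contact's denormalized due date.
// It runs in the same transaction as the task update, so the copy can't drift.
import { mutation } from "./_generated/server";
import { v } from "convex/values";

export const completeTask = mutation({
  args: { taskId: v.id("tasks") },
  handler: async (ctx, { taskId }) => {
    const task = await ctx.db.get(taskId);
    if (!task) throw new Error("Task not found");

    await ctx.db.patch(taskId, { completed: true });

    // Earliest remaining open task for this contact (index assumed).
    const next = await ctx.db
      .query("tasks")
      .withIndex("by_contact_dueAt", (q) => q.eq("contactId", task.contactId))
      .filter((q) => q.eq(q.field("completed"), false))
      .first();

    await ctx.db.patch(task.contactId, { nextTaskDueAt: next?.dueAt });
  },
});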
50+ tables, dozens of indexes. Every single one was a schema change, not a project.
That's what 70 modules looks like. One deployment. Web, mobile, REST, all hitting the same backend. Permissions that update in real-time. Types that catch drift before production. Workflows that survive restarts. Search that's three lines.
I didn't do anything clever to make this work. I just kept building, and the platform didn't get in the way.
If you're thinking about using Convex for something real — this is what happens when you do.
Hamza Saleem is the founder of Client Commander and a Convex Champion. Previously: Keeping Users in Sync with Convex.