I've been building CineLog — pre-production software for filmmakers — solo for over a year. It syncs data in real time across devices, works fully offline, handles media storage through a private CDN, and runs on a custom sync engine I designed from scratch. Here's what the architecture looks like.
This isn't a "look what I shipped" post. It's a technical deep dive into the architecture, the stack, and how I use LLMs as a solo developer — honestly, including where it helps and where it doesn't.
**The Product**
CineLog is a pre-production tool built for filmmakers and production companies. Film production involves a staggering amount of coordination — shot lists, scripts, storyboards, reference media — usually spread across a dozen different tools and formats. CineLog replaces all of that with a single platform.
The core features today: a visual shot list with drag-and-drop scene management, a Fountain script editor, storyboards, and configurable PDF exports for everything. The most recent update added call sheets and cast & crew management.
The key architectural decision — the one that defines every engineering choice in the product — is that CineLog is local-first. The app works fully offline. You can be on a remote location scout with no cell signal, rearrange your entire shot list, import a script, and assign shots to scenes. When you're back online, everything syncs seamlessly. The remote database is still the source of truth, but the app can work autonomously for a good amount of time. This isn't a feature toggle — it's a philosophy that shapes every layer of the stack.
Currently in public beta with real users on real productions. Built with Flutter — running natively on macOS, Windows, iOS, and Android.
**The Stack**
Client — Flutter + Dart
A single Dart codebase targeting macOS, Windows, iOS, and Android. The client is structured as a feature-driven architecture with three clean layers per feature: Data (Drift ORM for local SQLite — typed, reactive, cross-isolate capable), Domain (services that orchestrate business logic), and Presentation (Riverpod for reactive state management, StateNotifier stores for UI state).
Every feature — shot list, script editor, storyboard — follows the same layered pattern. This consistency is what makes it possible for one person to maintain a codebase of this size without drowning in complexity.
Backend — NestJS + TypeScript
The API runs on NestJS with TypeORM and PostgreSQL, deployed on Google Cloud Run. It's a feature-driven modular architecture with dependency injection, code-first migrations, and soft deletes everywhere (no data is ever truly gone).
Infrastructure is managed with Terraform — from Cloud SQL instances and CDN HMAC authentication to monitoring uptime checks that ping the health endpoint from three global regions every 60 seconds. Media goes through Google Cloud Storage with signed URLs for both uploads and downloads, served through Cloud CDN. Error tracking runs on Sentry across both client and server.
**The Sync Engine**
This is the heart of the product, and where the most interesting engineering lives.
If you're into local-first engineering, you've probably heard of Linear's sync engine. Linear's approach is genuinely world-class — the way they handle model metadata, transaction queuing, delta packets, and lazy hydration is something I deeply admire. Their sync engine is what convinced me that local-first, real-time sync is possible for a product company, not just an academic exercise. I study their work and aspire to reach that level of polish. My implementation is much simpler — built for the constraints and pace of a solo developer — but the core philosophy is similar.
The core idea, borrowed from Linear's architecture: transactions, not raw mutations. User changes are expressed as semantic actions that carry intent. These are queued, persisted, and synchronized — rather than applying low-level database writes directly.
A few of the more interesting technical decisions:
Hybrid Logical Clock (HLC) — Every action carries a timestamp in the format physicalMs:counter:nodeId. This gives you causal ordering that's resilient to clock skew between devices. When two users edit on devices with slightly different system clocks, the HLC still produces a total order over their actions. It's a well-known technique in distributed systems, but implementing it consistently across a Dart client and a TypeScript server required careful coordination.
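A minimal HLC for the format above can be sketched like this — names and structure are illustrative, not CineLog's implementation:

```typescript
// Minimal Hybrid Logical Clock sketch for the "physicalMs:counter:nodeId" format.
interface Hlc {
  physicalMs: number;
  counter: number;
  nodeId: string;
}

// Tick the clock for a new local event.
function hlcTick(prev: Hlc, wallClockMs: number): Hlc {
  if (wallClockMs > prev.physicalMs) {
    return { physicalMs: wallClockMs, counter: 0, nodeId: prev.nodeId };
  }
  // Wall clock stalled or went backwards: bump the logical counter instead.
  return { ...prev, counter: prev.counter + 1 };
}

// Merge a remote timestamp into the local clock on receive.
function hlcReceive(local: Hlc, remote: Hlc, wallClockMs: number): Hlc {
  const physicalMs = Math.max(local.physicalMs, remote.physicalMs, wallClockMs);
  let counter = 0;
  if (physicalMs === local.physicalMs && physicalMs === remote.physicalMs) {
    counter = Math.max(local.counter, remote.counter) + 1;
  } else if (physicalMs === local.physicalMs) {
    counter = local.counter + 1;
  } else if (physicalMs === remote.physicalMs) {
    counter = remote.counter + 1;
  }
  return { physicalMs, counter, nodeId: local.nodeId };
}

// Total order: physical time, then counter, then nodeId as a tiebreaker.
function hlcCompare(a: Hlc, b: Hlc): number {
  if (a.physicalMs !== b.physicalMs) return a.physicalMs - b.physicalMs;
  if (a.counter !== b.counter) return a.counter - b.counter;
  return a.nodeId < b.nodeId ? -1 : a.nodeId > b.nodeId ? 1 : 0;
}
```

The counter is what absorbs clock skew: even if a device's wall clock runs behind, its next event still sorts after the last event it has seen.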
Fractional Indexing — Lists (shot lists, script nodes, storyboard frames) are ordered using base-62 string sequences instead of integer positions. Insert an item between position a0 and a1? The new item gets sequence a0V. No rebalancing, no cascading updates, and always room for another insert — keys simply grow a digit longer when needed. This handles tens of thousands of items without performance degradation and makes conflict resolution between concurrent editors dramatically simpler.
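The core of the technique is a midpoint function over base-62 strings. This is a simplified sketch — it assumes keys never end in the smallest digit ('0'), the invariant real implementations maintain, and omits one edge case:

```typescript
// Base-62 alphabet, ordered to match lexicographic string comparison.
const ALPHABET =
  "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

// Returns a key strictly between a and b (a < b; b === "" means "no upper bound").
function midpoint(a: string, b: string): string {
  let i = 0;
  while (i < a.length && i < b.length && a[i] === b[i]) i++;
  const prefix = a.slice(0, i);
  const da = i < a.length ? ALPHABET.indexOf(a[i]) : -1;               // "" sorts lowest
  const db = i < b.length ? ALPHABET.indexOf(b[i]) : ALPHABET.length;  // "" = unbounded
  if (db - da > 1) {
    // Room for a single digit strictly between the two.
    return prefix + ALPHABET[Math.round((da + db) / 2)];
  }
  if (i < a.length) {
    // Digits are adjacent: keep a's digit and find room in the suffix.
    return prefix + a[i] + midpoint(a.slice(i + 1), "");
  }
  // b continues with '0' right after the prefix; a full implementation
  // descends into b here — omitted to keep the sketch short.
  throw new Error("unhandled case in this sketch");
}
```

Inserting between a0 and a1 yields a0V, as in the example above; repeatedly inserting into the same gap just grows the key one digit at a time instead of renumbering neighbors.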
Optimistic UI with Server Confirmation — When a user makes a change, the UI reflects it immediately, but the data doesn't persist to the local database until the server confirms it. This ensures data integrity — the server is always the authority on what's actually committed. If the server rejects an action (conflict, permissions), the optimistic state rolls back cleanly. Users get instant feedback without sacrificing consistency.
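The apply / confirm / rollback flow can be sketched with a generic state type — names here are illustrative, not CineLog's actual engine:

```typescript
// Sketch of optimistic UI with server confirmation.
interface PendingAction<T> {
  id: string;
  apply: (state: T) => T; // pure, replayable optimistic mutation
}

class OptimisticStore<T> {
  private pending: PendingAction<T>[] = [];
  constructor(private committed: T) {} // last server-confirmed state

  // What the UI renders: confirmed state with pending actions replayed on top.
  get view(): T {
    return this.pending.reduce((s, a) => a.apply(s), this.committed);
  }

  // User made a change: queue it. The UI updates instantly via `view`,
  // but nothing is folded into the committed state yet.
  submit(action: PendingAction<T>): void {
    this.pending.push(action);
  }

  // Server confirmed: fold the action into the committed state.
  confirm(id: string): void {
    const action = this.pending.find(a => a.id === id);
    if (!action) return;
    this.committed = action.apply(this.committed);
    this.pending = this.pending.filter(a => a.id !== id);
  }

  // Server rejected (conflict, permissions): drop the action.
  // `view` rolls back automatically, since it's recomputed from committed state.
  reject(id: string): void {
    this.pending = this.pending.filter(a => a.id !== id);
  }
}
```

The rollback is "clean" precisely because the rendered state is derived, not mutated: dropping a rejected action and re-deriving the view leaves no trace of it.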
HTTP for submission, WebSocket for broadcast — Actions are submitted via HTTP (reliable, retryable). WebSockets are used solely for broadcasting confirmed changes to other connected clients. A deliberate hybrid that keeps the protocol simple.
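The submission side of that hybrid might look like the following sketch — the transport is injected so the retry policy is visible, and the backoff and names are illustrative rather than CineLog's actual protocol:

```typescript
// Sketch of the HTTP submission path with retries.
type Submit = (action: object) => Promise<{ ok: boolean; retryable: boolean }>;

async function submitWithRetry(
  submit: Submit,
  action: object,
  maxAttempts = 3,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await submit(action);
      if (res.ok) return true;          // server confirmed the action
      if (!res.retryable) return false; // hard rejection: caller rolls back
    } catch {
      // Network error: HTTP gives a natural, idempotent retry point —
      // the reliability a broadcast-only WebSocket wouldn't provide.
    }
    await new Promise(r => setTimeout(r, 2 ** attempt * 100)); // simple backoff
  }
  return false;
}
```

Confirmed actions then fan out to other clients over the WebSocket, which only ever carries server-validated state.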
The server enforces a multi-layered security model: entity-level ownership checks, project-scoped data isolation on every query, TypeORM column immutability on sensitive fields, and a subscription guard that prevents lost echoes. Passwordless authentication (OTP), signed URLs for all media operations, and strict CI enforcement where case convention violations block PRs — no exceptions.
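The project-scoped isolation layer can be illustrated as a tiny guard — hypothetical names, not the actual NestJS code:

```typescript
// Illustrative sketch of project-scoped isolation: the authenticated context's
// project memberships gate every query, so data can never cross a project boundary.
interface AuthContext {
  userId: string;
  projectIds: Set<string>; // projects this user is a member of
}

// Produces the WHERE fragment merged into every repository query.
function scopedWhere(ctx: AuthContext, projectId: string): { projectId: string } {
  if (!ctx.projectIds.has(projectId)) {
    throw new Error("Forbidden: not a member of this project");
  }
  return { projectId };
}
```

Putting the check on the query path, rather than trusting each endpoint to remember it, is what makes the isolation hold "on every query" instead of on most of them.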
**How I Use LLMs**
I use multiple AI agents — specifically through Antigravity, an agentic coding assistant — as my planning and prototyping department. Not my engineering team. The distinction matters.
Here's the actual workflow:
I write a feature spec. Before any agent touches a line of code, I write a detailed spec — the same document I'd hand to an engineering team. Data models, UI behavior, sync considerations, edge cases. My /tech-docs folder has 73+ technical documents that define every pattern, convention, and hazard in the codebase.
Multiple agents plan independently. I spin up separate agent sessions, each analyzing the codebase and my documentation. They independently produce implementation plans — different architectural approaches to the same problem. In a few hours, I have 2–3 fully thought-through strategies that would have taken me days to prototype solo.
Each agent builds a proof of concept. Not a code snippet — a working PoC that follows the existing patterns in the codebase. The agents read my standardization docs, understand the naming conventions, know which layers should talk to which. They produce code that's structurally consistent with what's already there.
I evaluate and choose. I compare the PoCs side by side. Which approach integrates cleanest with the existing sync engine? Which handles the edge cases I care about? Which is simplest to maintain long-term? Sometimes none of them are right, but they've mapped the solution space for me.
I build by hand. This is the part that matters most: I take the winning direction and implement it myself, line by line. Most of the production code is written by hand. I understand every decision, every trade-off, every potential failure mode.
The value isn't in the code the agents produce. It's in the speed of exploration. Testing three architectural paths as a solo developer would take weeks. With agents, I compare them in hours. The agents read my 73+ tech docs, stay consistent with the patterns, and don't forget the naming conventions — even when I might. But they also make mistakes, produce suboptimal code, and miss edge cases. They're a planning tool, not a replacement for engineering judgment.
If I had to put it simply: AI didn't replace my engineering team — it became my planning department.
**What I've Learned**
Documentation is an investment, not overhead. My 73+ tech docs aren't bureaucracy — they're the context that makes everything else work. They make the AI agents effective. They make it possible to return to code I wrote months ago and immediately understand why. They're the institutional knowledge that a solo developer otherwise has to keep entirely in their head.
Local-first is hard but worth it. Once you commit to the local database being the source of truth, every architectural decision flows from that. Conflict resolution, offline support, optimistic UI — they're not features you bolt on. They're consequences of a philosophy. It constrains you in useful ways.
Solo doesn't mean alone. Between AI agents for fast planning, a tight feedback loop with real users in beta, and a discipline around documentation and testing, a solo developer can build systems that would normally require a small team. The key isn't working more hours — it's eliminating the bottlenecks that slow down the thinking, not just the typing.
CineLog is in public beta right now. If you're a filmmaker tired of managing productions across ten different tools, or if you're an engineer interested in local-first architecture and sync engines — I'd love to connect.
And if you're a solo developer wondering whether it's possible to build something this complex alone: it is. It just requires being very deliberate about where you spend your time, and very honest about where you need help.