This is part 3 of my series on AppReviews.
Part 1 is available here
Part 2 is available here
After talking about why AppReviews exists and how I forced myself to ship a focused v1, it feels natural to talk about the technical side.
Not in a “here’s my shiny stack” way, but as a snapshot of the decisions I made as a solo developer, working nights and weekends, with limited time and a very clear goal: build something reliable, understandable, and good enough to run in production without turning into a second full-time job.
This post isn’t about best practices. It’s about trade-offs.
A boring architecture on purpose
At a high level, AppReviews is a monolith.
It’s a single Next.js 16 application using the App Router. Frontend, API routes, background jobs, and scheduling all live in the same codebase and are deployed together as one unit.
- No microservices.
- No separate worker processes.
- No distributed queues.
Everything runs in one process.
That was a very conscious choice.
The problem AppReviews solves doesn’t require a complex architecture. Reviews are fetched on a schedule, stored, optionally processed, and pushed to Slack. Latency isn’t critical. If a review shows up a few seconds later, nobody cares.
What matters more is reliability and simplicity.
Backend: TypeScript, Next.js, and in-process jobs
The backend is written in TypeScript, running on Node.js 20+, using ES modules.
I chose Next.js App Router for everything, including APIs. All HTTP endpoints are standard route handlers living under src/app/api. It’s REST-style, straightforward, and easy to reason about.
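For illustration, a typical handler looks something like this (the route and response shape here are made up, not the actual AppReviews API):

```ts
// src/app/api/reviews/route.ts (illustrative path, not the real endpoint)
import { NextResponse } from 'next/server';

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const appId = searchParams.get('appId');
  if (!appId) {
    return NextResponse.json({ error: 'appId is required' }, { status: 400 });
  }
  // ...load reviews for this app from the database (hypothetical)...
  return NextResponse.json({ reviews: [] });
}
```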
Background work is handled with node-cron, started inside Next.js instrumentation when the server boots. This is probably the most “controversial” choice, but also one of the most pragmatic.
- There’s no Redis.
- No BullMQ.
- No external queue.
Cron jobs run in-process.
Review fetching runs every 5 minutes in development and every 30 minutes in production. Weekly Slack summaries go out on Monday mornings. Rating checks run at different frequencies depending on the subscription plan. Export cleanup runs hourly.
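To make that concrete, here's a sketch of what the instrumentation wiring can look like. The schedules match the ones above; the job modules are hypothetical stand-ins for the real ones:

```ts
// instrumentation.ts: register in-process cron jobs when the server boots.
export async function register() {
  // Only schedule in the Node.js runtime, never in the Edge runtime.
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { schedule } = await import('node-cron');

    const reviewCadence =
      process.env.NODE_ENV === 'production'
        ? '*/30 * * * *' // every 30 minutes in production
        : '*/5 * * * *'; // every 5 minutes in development

    schedule(reviewCadence, async () => {
      const { fetchAllReviews } = await import('./src/jobs/fetch-reviews'); // hypothetical module
      await fetchAllReviews();
    });

    // Weekly Slack summaries on Monday mornings.
    schedule('0 9 * * 1', async () => {
      const { sendWeeklySummaries } = await import('./src/jobs/weekly-summary'); // hypothetical module
      await sendWeeklySummaries();
    });

    // Hourly export cleanup.
    schedule('0 * * * *', async () => {
      const { cleanupExports } = await import('./src/jobs/cleanup-exports'); // hypothetical module
      await cleanupExports();
    });
  }
}
```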
Is this horizontally scalable? No.
Is it good enough for v1? Absolutely.
If the process crashes, Coolify restarts it and the jobs resume on the next tick. For this use case, that’s acceptable.
Fetching reviews: pragmatic over perfect
For the App Store, there were two possible paths: scraping public review data, or using the official App Store Connect API. AppReviews takes the official route: users configure their App Store Connect API credentials and the system fetches reviews with them.
For Google Play, it's the same story: the official Google Play Developer API with service account authentication.
Since official credentials are needed to reply to reviews anyway, why bother with scrapers?
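The Google Play side is a thin wrapper around the googleapis client. Roughly (function and variable names are mine, not the app's):

```ts
// Fetch recent Google Play reviews with a service account key (sketch).
import { google } from 'googleapis';

export async function fetchPlayReviews(packageName: string, keyFile: string) {
  const auth = new google.auth.GoogleAuth({
    keyFile, // path to the service account JSON key
    scopes: ['https://www.googleapis.com/auth/androidpublisher'],
  });

  const androidpublisher = google.androidpublisher({ version: 'v3', auth });

  // Note: this endpoint only returns reviews from roughly the last week,
  // which is fine when you poll on a schedule.
  const res = await androidpublisher.reviews.list({ packageName });
  return res.data.reviews ?? [];
}
```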
Integrations everywhere, but kept simple
AppReviews integrates with a few external services, but each one is intentionally scoped.
Slack is used to send new reviews, weekly summaries, and rating changes. It uses @slack/web-api. Messages are fire-and-forget. There’s no retry system yet. If Slack is down, the world doesn’t end.
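In code, fire-and-forget is about as simple as it sounds. Something like this (the helper name is illustrative):

```ts
// Send a review notification to Slack; failures are logged and dropped.
import { WebClient } from '@slack/web-api';

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

export async function notifyNewReview(channel: string, text: string) {
  try {
    await slack.chat.postMessage({ channel, text });
  } catch (err) {
    // No retry queue yet: a failed delivery is simply logged.
    console.error('Slack delivery failed', err);
  }
}
```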
Stripe handles subscriptions, checkout, and webhooks. This is one area where I didn’t try to be clever. Stripe’s patterns are well-documented and battle-tested, so I followed them closely.
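The webhook handler is the canonical pattern from Stripe's docs: verify the signature against the raw body, then branch on the event type. Sketched here with example events only:

```ts
// src/app/api/stripe/webhook/route.ts (illustrative path)
import Stripe from 'stripe';
import { NextResponse } from 'next/server';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(request: Request) {
  const body = await request.text(); // signature check needs the raw body
  const signature = request.headers.get('stripe-signature')!;

  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      body,
      signature,
      process.env.STRIPE_WEBHOOK_SECRET!,
    );
  } catch {
    return NextResponse.json({ error: 'Invalid signature' }, { status: 400 });
  }

  switch (event.type) {
    case 'customer.subscription.updated':
    case 'customer.subscription.deleted':
      // ...sync subscription state to the tenant (hypothetical)...
      break;
  }

  return NextResponse.json({ received: true });
}
```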
Translations are optional and handled via DeepL.
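When enabled, it's a single call through the deepl-node client. A minimal sketch, with the target language hardcoded for illustration:

```ts
// Translate a review with DeepL (sketch; error handling omitted).
import * as deepl from 'deepl-node';

const translator = new deepl.Translator(process.env.DEEPL_API_KEY!);

export async function translateReview(text: string) {
  // null source language means auto-detect; 'en-US' is just an example target.
  const result = await translator.translateText(text, null, 'en-US');
  return result.text;
}
```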
Transactional emails are sent using Gmail SMTP via Nodemailer. Not SendGrid. Not Postmark. Just something I already had access to and that works.
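Which, in Nodemailer terms, is just the Gmail service shortcut plus an app password, assumed here via env vars:

```ts
// Transactional email over Gmail SMTP (sketch; names are illustrative).
import nodemailer from 'nodemailer';

const transporter = nodemailer.createTransport({
  service: 'gmail',
  auth: {
    user: process.env.GMAIL_USER,
    pass: process.env.GMAIL_APP_PASSWORD, // an app password, not the account password
  },
});

export async function sendVerificationEmail(to: string, link: string) {
  await transporter.sendMail({
    from: process.env.GMAIL_USER,
    to,
    subject: 'Verify your email',
    text: `Click here to verify your account: ${link}`,
  });
}
```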
Analytics are tracked with Umami. Simple, privacy-friendly, and enough to know if people are using the product.
Data layer: Prisma and “good enough” modeling
The database is PostgreSQL in production and SQLite locally. Prisma abstracts the difference, which makes local development painless.
Schema changes are handled through Prisma migrations. There are already quite a few of them, which tells the real story: the model evolved over time.
The data model covers tenants, apps, reviews, replies, embeddings, analysis, rating history, teams, subscriptions, exports, and more.
One evolution I’m happy about is the introduction of an AppData model to deduplicate shared app metadata across tenants. That came later, once it became clear that multiple teams might track the same app.
There’s no Redis. No cache layer. No fancy optimization. Just indexes on the columns that are queried often and trust in Prisma to generate reasonable queries.
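As an example of what that means in practice, the hottest query path looks roughly like this (model and field names are illustrative), and all it needs is an index on (appId, createdAt):

```ts
// Latest reviews for one app, newest first (sketch with illustrative names).
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

export async function latestReviews(appId: string, take = 50) {
  return prisma.review.findMany({
    where: { appId },
    orderBy: { createdAt: 'desc' },
    take,
  });
}
```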
So far, it’s been fine.
Frontend: speed first, elegance later
The frontend is also built with Next.js App Router and React.
Most of it is client-side. Server components exist, but I didn’t force myself to use them everywhere just because they’re “the new thing”.
Authentication is handled by NextAuth v5 beta. OAuth providers include Google, Apple, GitHub, and LinkedIn. There’s also a custom email/password flow with bcrypt and email verification. The JWT strategy is mapped directly to the tenant model, which doubles as both user and organization.
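The overall shape, in NextAuth v5 terms, is roughly this; the tenant mapping is simplified and the provider list trimmed:

```ts
// auth.ts: rough shape of the NextAuth v5 configuration (sketch).
import NextAuth from 'next-auth';
import Google from 'next-auth/providers/google';
import GitHub from 'next-auth/providers/github';

export const { handlers, auth, signIn, signOut } = NextAuth({
  providers: [Google, GitHub], // plus Apple, LinkedIn, and credentials in reality
  session: { strategy: 'jwt' },
  callbacks: {
    async jwt({ token, user }) {
      // On first sign-in, remember which tenant this session belongs to.
      if (user) token.tenantId = user.id;
      return token;
    },
    async session({ session, token }) {
      // Expose the tenant id to the client (type augmentation omitted here).
      (session as { tenantId?: unknown }).tenantId = token.tenantId;
      return session;
    },
  },
});
```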
Styling is intentionally boring. Plain CSS and inline styles. No Tailwind. No CSS-in-JS. No component library.
The main dashboard page is a single, very large file. Over 2,500 lines. All the state, effects, and UI live together.
Is that ideal? No.
Is it fast to work with as a solo dev? Yes.
Forms use React Hook Form and Zod. Charts use ApexCharts, dynamically imported to avoid SSR issues.
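The ApexCharts part is the usual next/dynamic dance to keep it out of the server render. A sketch, with illustrative component and props:

```tsx
'use client';

// react-apexcharts touches `window`, so it's loaded client-side only.
import dynamic from 'next/dynamic';

const Chart = dynamic(() => import('react-apexcharts'), { ssr: false });

export function RatingsChart({ series }: { series: { name: string; data: number[] }[] }) {
  return <Chart type="line" height={300} series={series} options={{}} />;
}
```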
This is very much a “get it working, clean it later if needed” frontend.
Deployment and infrastructure: minimal and cheap
The app is built as a Docker image using Next.js standalone output. There’s a multi-stage Dockerfile and a small entrypoint script that runs Prisma migrations before starting the server.
Secrets are managed via environment variables. There’s no dedicated secrets manager.
Logging uses Winston. There's no APM, no Sentry, no Datadog, but I've set up Grafana with Loki for logs. Yep, being solo also means having to learn and set up those things yourself.
CI/CD isn’t visible in the repo. Deployments happen automatically on git push via Coolify.
This setup is intentionally boring and cost-conscious. The running cost is close to zero.
Constraints, compromises, and being honest about them
This codebase has a lot of “solo developer energy”.
There are no tests.
There’s no message queue.
There’s no horizontal scaling story.
There are big files.
There are shortcuts.
All of that is intentional.
I optimized for shipping something useful, keeping costs very low, and being able to maintain it alone.
Some things will be easy to change later. Others won’t. That’s fine. Perfect architectures don’t ship products. Imperfect ones do.
If AppReviews grows, the first things I’d revisit are the in-process cron jobs and the single-instance assumption. Background work would need a proper queue, Slack delivery would need retries, and monitoring would become more serious. The big dashboard page would also need to be split at some point.
None of this is urgent today. These are good problems to have, and I'd rather solve them when they're real.
And most importantly, I understand it.
In the next post, I’ll go deeper into the AI part: why it’s optional, how it’s implemented, and how I’m trying to use it to reduce noise instead of adding more dashboards.