Aoun Abu Hassan

Posted on • Originally published at nexcord.app

I Built a Discord SaaS in 3 Months at 19. Here's What It Actually Took.

I'm 19. I build from my bedroom. I have no co-founder, no investors, and no team.

Three months ago I started building Nexcord — a community management platform for Discord servers. Today it's live, it charges real money via Paddle, and real servers are using it.


Why Discord?

I've been building Discord bots since I was 16; JavaScript/Node.js is where I started. I'd watched server admins run the same broken workflows for years: manually copying ticket conversations into Google Docs, DMing users just to verify they're human, and having no way to search past discussions.

Nobody had built a clean, unified platform that treated community management as a serious product problem. That felt like the gap.


What Nexcord Does

  • Automated transcripts — every ticket or thread saved automatically, searchable
  • AI summarization — Mistral 7B running locally via Ollama. No data sent to OpenAI.
  • Web-based verification — custom flows in the browser, not bot commands

Pro is $4.99/month with a 14-day trial.


The Stack

Monorepo: Yarn Workspaces
Bot:       Discord.js v14
API:       Fastify 4 → Railway (all business logic here)
Dashboard: Next.js 16 → Vercel (UI only)
DB:        Supabase
Cache:     Upstash Redis (pay-as-you-go)
AI:        Ollama — Mistral 7B + LLaVA 7B (local RTX 3080 Ti)
Billing:   Paddle (Merchant of Record)

The hard rule: all business logic lives in Fastify. Next.js is UI only. If the dashboard breaks, billing and plan enforcement still work.
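In practice that rule means the API is the only place that knows what a plan allows. A minimal sketch of such a gate, with made-up tier names and limits (these are not Nexcord's real tiers):

```javascript
// Illustrative plan limits -- invented for this example, not Nexcord's real tiers.
const PLAN_LIMITS = {
  free: { maxTranscripts: 50, aiSummaries: false },
  pro:  { maxTranscripts: Infinity, aiSummaries: true },
}

// The Fastify API runs checks like this before acting;
// the Next.js dashboard only renders whatever the API returns.
function canUseFeature(plan, feature) {
  const limits = PLAN_LIMITS[plan] ?? PLAN_LIMITS.free
  return Boolean(limits[feature])
}
```

Because the dashboard never makes this decision itself, a broken or stale UI can't hand out features the plan doesn't include.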


The Mistake I Already Fixed

I built the summarization queue with BullMQ backed by Upstash Redis. Looked solid on paper.

The problem: Upstash's serverless Redis doesn't support the Lua scripts BullMQ uses internally for atomic operations. Jobs were failing silently. I was burning Redis commands on broken queue overhead.

The fix was simpler than I expected — I replaced the entire thing with a summarization_jobs table in Supabase and a polling worker:

// Poll for the oldest pending job that hasn't exhausted its retries
const { data: job } = await supabase
  .from('summarization_jobs')
  .select('*')
  .eq('status', 'pending')
  .lt('attempts', 3)
  .order('created_at')
  .limit(1)
  .maybeSingle() // null instead of an error when the queue is empty

if (!job) return

// Claim it -- the status filter is the optimistic lock:
// if another worker claimed it first, zero rows match
const { data: claimed } = await supabase
  .from('summarization_jobs')
  .update({ status: 'processing', started_at: new Date().toISOString() })
  .eq('id', job.id)
  .eq('status', 'pending')
  .select()

if (!claimed?.length) return // lost the race; pick up the next job instead

Deduplication, retry, stall recovery — all handled cleanly. Upstash now only does what it's good at: fast reads, rate limit counters, session caching.
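The claim and stall-recovery rules are simple enough to sketch as plain functions (the real versions are single `UPDATE ... WHERE` statements against the Supabase table; the helper names here are illustrative):

```javascript
// In-memory sketch of the queue's claim + stall-recovery rules.
function claimJob(job, now = Date.now()) {
  // Optimistic lock: only a pending job under the retry cap can be claimed
  if (job.status !== 'pending' || job.attempts >= 3) return false
  job.status = 'processing'
  job.started_at = now
  return true
}

function recoverStalled(jobs, staleMs, now = Date.now()) {
  // Any job stuck in 'processing' past the cutoff goes back to 'pending'
  let recovered = 0
  for (const job of jobs) {
    if (job.status === 'processing' && now - job.started_at > staleMs) {
      job.status = 'pending'
      job.attempts += 1
      recovered += 1
    }
  }
  return recovered
}
```

A crashed worker never needs cleanup code of its own: the next sweep sees its job stuck past the cutoff and requeues it, and the attempts counter caps the damage from a poison job.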

Lesson: if you're not at scale yet, start with Postgres for job queues. You probably don't need Redis for this.


One More Thing — Ollama Cold Start

LLaVA 7B was taking 20+ seconds on its first inference. The fix was simple:

# docker-compose.yml
environment:
  - OLLAMA_KEEP_ALIVE=30m

I also bumped the LLaVA request timeout to 60 seconds. The model now stays warm between requests.
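On the calling side, both settings can live on the request itself. A minimal sketch against Ollama's `/api/generate` endpoint (`keep_alive` can also be set per request; the wrapper function and model tag here are assumptions, not Nexcord's actual code):

```javascript
// Hypothetical helper: ask the local Ollama instance for a summary.
async function summarize(prompt) {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'mistral:7b',
      prompt,
      stream: false,
      keep_alive: '30m', // keep the model loaded between requests
    }),
    // Abort if load + generation exceeds the 60s budget
    signal: AbortSignal.timeout(60_000),
  })
  if (!res.ok) throw new Error(`Ollama error: ${res.status}`)
  const { response } = await res.json()
  return response
}
```

With the model held in VRAM, only the very first request after a restart pays the load cost; everything after that is pure inference time.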


Current State (honest numbers)

  • Servers on real bot: ~5
  • Bot verification with Discord: pending (required to scale past 100 servers)
  • Lighthouse: 91 / 92 / 96 / 100

Building in public means posting before things are perfect. This is that post.

If you want to follow along or try Nexcord: nexcord.app
