This is a submission for Weekend Challenge: Earth Day Edition
What I Built
Every year, billions of pages are printed, scanned, and shuffled
around offices: contracts nobody reads in full, reports that answer
questions nobody asked, catalogs searched by hand. KnowFlow is
my attempt to change that, one document at a time.
Upload any PDF, DOCX, or Excel file. Ask it a question in Arabic
or English. Get an answer pulled directly from your document — not
from the internet, not hallucinated, from your file. No printing.
No Ctrl+F. No reading 40 pages to find one clause.
The Earth Day angle is simple: the less we print, scan, and
physically shuffle documents, the better. KnowFlow makes digital
documents actually usable — which is the only way to make them
a real alternative to paper.
I built it specifically for Arabic speakers because every tool
I found treated Arabic as an afterthought. RTL from day one.
Answers in Arabic that actually make sense. The Arab market is
400 million people who deserve AI built for them, not translated
at them.
Demo
🔗 Live: tryknowflow.com
Code
KnowFlow
Chat with any document — Arabic & English AI agent, no setup required.
Upload a PDF or paste a URL. Ask questions in Arabic or English.
KnowFlow streams answers in real time — sourced directly from your document.
Features
- Bilingual: Arabic (RTL) + English in the same session
- Streaming responses via Claude Haiku
- PDF, DOCX, and URL ingestion
- Authentication via Supabase
- PRO tier with usage limits
Stack
Next.js 15 · FastAPI · Claude Haiku · Supabase · Railway · Vercel
Architecture
User → Next.js (Vercel)
└→ FastAPI (Railway)
└→ Claude Haiku (streaming)
└→ Supabase (auth + storage)
Run locally
```shell
# Frontend
cd frontend && npm install && npm run dev

# Backend
cd backend && pip install -r requirements.txt
uvicorn main:app --reload
```

Environment variables (add to `.env`):

```shell
ANTHROPIC_API_KEY=
SUPABASE_URL=
SUPABASE_ANON_KEY=
```
Built by
AboJad — Full Stack AI Engineer, Marrakesh 🇲🇦
How I Built It
I built KnowFlow solo in a few weeks using a stack I could
reason about completely:
Frontend: Next.js 15 + TypeScript + Tailwind CSS, deployed
on Vercel. RTL support built in from the start — not bolted on.
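Bilingual RTL support largely comes down to detecting which script dominates a message and setting the `dir` attribute accordingly. A minimal sketch of that detection logic (hypothetical helper names; KnowFlow's actual implementation lives in the Next.js frontend, but the logic is the same in any language):

```python
# Detect whether a message should render right-to-left.
# Arabic block: U+0600-U+06FF, plus the supplement U+0750-U+077F.
def is_arabic_char(ch: str) -> bool:
    return "\u0600" <= ch <= "\u06ff" or "\u0750" <= ch <= "\u077f"

def text_direction(text: str) -> str:
    """Return 'rtl' if Arabic letters outnumber Latin ones, else 'ltr'."""
    arabic = sum(1 for ch in text if is_arabic_char(ch))
    latin = sum(1 for ch in text if ch.isascii() and ch.isalpha())
    return "rtl" if arabic > latin else "ltr"
```

The frontend can then set `dir` per message bubble, which is what lets Arabic and English coexist cleanly in one session.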
Backend: Next.js API routes (serverless) handling ingestion,
agent queries, and billing webhooks.
AI: Anthropic's Claude API (claude-haiku) for streaming
responses. The model reads the document content directly — no
vector search, no embeddings in v1. Clean and fast.
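The no-RAG flow is easy to sketch: the whole converted document goes into the prompt, and the answer streams back chunk by chunk. A hedged approximation using Anthropic's Python SDK (function names and the exact prompt wording are my assumptions, not KnowFlow's code; the model alias may also differ):

```python
import os

def build_messages(document_md: str, question: str) -> list[dict]:
    """Put the entire converted document in context (no RAG in v1)."""
    return [{
        "role": "user",
        "content": (
            "Answer using ONLY the document below. "
            "Reply in the language of the question.\n\n"
            f"<document>\n{document_md}\n</document>\n\n"
            f"Question: {question}"
        ),
    }]

def stream_answer(document_md: str, question: str):
    """Yield answer text chunks from Claude Haiku as they arrive."""
    import anthropic  # deferred so the prompt builder runs without the SDK
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    with client.messages.stream(
        model="claude-3-5-haiku-latest",  # assumed model alias
        max_tokens=1024,
        messages=build_messages(document_md, question),
    ) as stream:
        for chunk in stream.text_stream:
            yield chunk
```

On the wire, each yielded chunk can be forwarded to the browser as a server-sent event, which is what makes the UI feel instant.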
Ingestion: A Python FastAPI service on Railway using
Microsoft's MarkItDown to convert any file format to Markdown
before storing it.
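The ingestion service is conceptually two steps: validate the upload, then hand it to MarkItDown. A minimal sketch under those assumptions (the supported-extension set and helper names are illustrative, not KnowFlow's actual code):

```python
from pathlib import Path

# Formats the service accepts (illustrative list).
SUPPORTED = {".pdf", ".docx", ".xlsx", ".xls"}

def check_supported(filename: str) -> str:
    """Validate the upload extension before attempting conversion."""
    ext = Path(filename).suffix.lower()
    if ext not in SUPPORTED:
        raise ValueError(f"unsupported file type: {ext}")
    return ext

def to_markdown(path: str) -> str:
    """Convert a supported file to Markdown via Microsoft's MarkItDown."""
    check_supported(path)
    from markitdown import MarkItDown  # deferred: heavy optional dependency
    return MarkItDown().convert(path).text_content
```

Normalizing everything to Markdown first is what lets a single prompt format cover PDFs, Word files, and spreadsheets alike.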
Database + Auth: Supabase (PostgreSQL + RLS + Storage).
Row-level security means users only ever see their own documents.
Billing: Paddle as Merchant of Record — handles VAT and
compliance so I don't have to.
The hardest decision was keeping v1 simple: no RAG, no
embeddings, no vector search. The full document goes into
context. This trades scale for accuracy — for the document
sizes most users have (contracts, reports, catalogs), it works
better than chunked retrieval.
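Skipping RAG does require one guardrail: checking that a document actually fits the context window before sending it. A rough sketch (the characters-per-token ratio and the budget are my assumptions; real ratios vary by script, and the budget should leave headroom below Claude's limit):

```python
def rough_token_count(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for mixed text."""
    return len(text) // 4

def fits_in_context(document_md: str, budget_tokens: int = 150_000) -> bool:
    """Reject documents too large for a single-prompt, no-RAG approach."""
    return rough_token_count(document_md) <= budget_tokens
```

Documents over the budget are where chunked retrieval would eventually earn its complexity; below it, whole-document context stays simpler and more accurate.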
The most interesting technical moment was implementing conversation
history. The backend was already persisting every message to
Supabase, but the frontend held chat state in useState alone, so
history was lost on navigation. Fixing it required decoupling the
component's remount key from the conversation ID — a subtle React
state-batching issue that would otherwise have degraded the UX
silently.
I'm one developer, building from Morocco, shipping tools for
a market that's been waiting for someone to build for it first.