Most "AI solutions" for small businesses are either ChatGPT wrappers or six-figure enterprise projects. We build something in between — and I want to show you exactly how.
At Connective Labs, we build what we call AI Workers: task-specific automation systems for Singapore SMEs. Not chatbots. Not dashboards. Actual workers that do a job — quoting, document review, inventory forecasting, data entry — end to end.
Here's what our stack looks like and why we made these choices.
## The Stack
| Layer | Tool | Why |
|---|---|---|
| Frontend | Next.js + Vercel | Fast deploys, edge functions, zero DevOps overhead for a small team |
| Backend / DB | Supabase (Postgres) | Auth, realtime, storage, Row Level Security — one platform instead of stitching 4 services |
| AI orchestration | Python + LangChain / custom agents | Flexibility for multi-step workflows. We tried n8n, Make, etc. — too brittle for production edge cases. |
| LLMs | Claude (Anthropic) + Groq (Llama, Mixtral) + OpenAI | Claude for long documents and reasoning. Groq-hosted open-source models for speed-sensitive tasks. OpenAI as needed. We route by task type. |
| Infra | AWS (via Supabase) + Vercel | Singapore region for data residency. Most SME clients care about where their data lives. |
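The "route by task type" row can be sketched as a simple lookup. This is an illustrative sketch, not our production router: the task categories, model ids, and fallback choice here are hypothetical.

```python
# Hypothetical per-task model routing. Task categories and model ids
# are illustrative placeholders, not our exact production config.
ROUTES = {
    "long_document": "claude-sonnet",        # long context + reasoning
    "latency_sensitive": "groq-llama-3.1",   # Groq-hosted open-source model
    "general": "gpt-4o-mini",                # default / as-needed
}

def pick_model(task_type: str) -> str:
    """Return the model id for a task type, falling back to the general model."""
    return ROUTES.get(task_type, ROUTES["general"])
```

The point is that routing lives in one place, so swapping a model for one task type never touches the rest of the pipeline.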
## What an AI Worker Actually Does
Let me walk through a real example. One of our clients is a maritime services company that quotes vessel repairs. Before us:
- Customer emails a job spec (PDF or free text)
- Operations manager reads it, checks 200+ line items against a rate card
- Manually builds a quotation in Excel
- Sends it back
Time: 2-4 hours per quote.
The AI Worker we built:
- Watches the inbox for incoming job specs
- Extracts line items using Claude (structured output, not regex — these PDFs are messy)
- Matches each item against the rate card in Supabase (fuzzy matching + embeddings for items that don't have exact names)
- Generates a draft quotation
- Flags items it's unsure about for human review
- Sends the draft to the ops manager for one-click approval
Time: 8 minutes, including human review.
The key design principle: the AI Worker doesn't replace the human. It does the grunt work and asks for help when it's uncertain. This is what makes it deployable in week 2 instead of month 6.
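The rate-card matching step above can be sketched in a few lines. This is a minimal illustration using stdlib fuzzy string matching only; the real system also uses embeddings, and the rate card, item names, and threshold here are made up.

```python
import difflib

# Illustrative rate card (made-up items and rates).
RATE_CARD = {
    "hull plate renewal": 120.0,
    "anchor chain inspection": 80.0,
    "ballast tank cleaning": 95.0,
}

def match_line_item(item: str, threshold: float = 0.8):
    """Return (rate_card_key, rate, needs_review) for an extracted line item.

    Exact match first, then fuzzy string matching; anything below the
    confidence threshold is flagged for human review instead of guessed.
    """
    key = item.strip().lower()
    if key in RATE_CARD:
        return key, RATE_CARD[key], False
    candidates = difflib.get_close_matches(key, RATE_CARD, n=1, cutoff=threshold)
    if candidates:
        return candidates[0], RATE_CARD[candidates[0]], False
    return None, None, True  # uncertain -> flag for the ops manager
```

The last branch is the design principle in code: when the worker isn't confident, it doesn't guess, it asks.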
## Why Not Just Use [Insert No-Code Tool]?
We get this a lot. "Why not Zapier? Why not Make? Why not just a GPT?"
Short answer: they break.
Longer answer:
- **Zapier/Make** — Great for simple A→B automation. Falls apart when you need conditional logic, retries, human-in-the-loop approvals, or any state management. We've rebuilt 3 client systems that started on Make and hit a wall.
- **Custom GPTs / ChatGPT** — No persistence, no integration with client systems, no way to enforce output structure reliably. Fine for internal exploration, not for client-facing production.
- **LangChain alone** — We use it, but it's a library, not a product. You still need auth, storage, a UI, monitoring, error handling. That's the 80% of the work that isn't the AI part.
The honest truth is that the AI is maybe 20% of what we build. The rest is plumbing — making the system reliable enough that a non-technical business owner can trust it to run without them watching.
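One concrete piece of that plumbing is retry handling for flaky external calls (email APIs, LLM endpoints). A tiny sketch, with hypothetical defaults; a real worker also persists state so a retry can resume after a crash:

```python
import random
import time

def with_retries(fn, attempts=4, base_delay=0.5):
    """Call fn(), retrying on exception with exponential backoff plus jitter.

    Re-raises the last exception once attempts are exhausted. The attempt
    count and delays here are illustrative defaults, not tuned values.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i) + random.random() * 0.1)
```

None of this is glamorous, but it's the difference between a demo and something a business owner can leave running overnight.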
## Our 2-Week Prototype Process
Every project starts the same way:
Week 1:
- Day 1-2: Map the current workflow with the client (screen share, not a 40-page requirements doc)
- Day 3-5: Build the core pipeline — input parsing, AI processing, output formatting
- End of week: Internal demo on real client data
Week 2:
- Day 6-7: Client feedback, edge case handling
- Day 8-9: UI polish, error states, monitoring
- Day 10: Deploy to production + handover
If the prototype doesn't work on real data by day 10, the client gets a full refund. This forces us to scope tightly and ship fast.
After the prototype is live, clients move to a monthly retainer for monitoring, tweaks, and new features. Most AI Workers need tuning for the first 2-3 months as edge cases surface from real usage.
## What I've Learned Building These
A few things that weren't obvious when we started:
- **SME owners don't care about AI.** They care about time and money. "This saves you 15 hours a week" lands. "We use retrieval-augmented generation" doesn't.
- **The hardest part is trust.** Getting a business owner to let an AI touch their client-facing output is a months-long trust-building exercise. The human-in-the-loop step isn't a compromise — it's the product.
- **Structured outputs > free text.** Every AI Worker we've built uses structured JSON outputs from the LLM, validated before anything hits the database. Free text generation is for content; automation needs deterministic structure.
- **Singapore's grant ecosystem is underused.** The Enterprise Innovation Scheme (EIS) gives a 400% tax deduction on qualifying AI projects up to S$50K. Most SMEs don't know this exists. At Singapore's 17% corporate tax rate, that deduction offsets roughly two-thirds of the project cost.
- **Start with the ugliest workflow.** Don't ask "where could AI help?" — ask "what task does your team hate the most?" That's where the ROI is highest and adoption is fastest.
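The structured-outputs point deserves a concrete sketch. This is a deliberately minimal stdlib-only version of "validate before it hits the database"; in practice you'd use a schema library, and the field names here are hypothetical.

```python
import json

# Hypothetical schema for one extracted quotation line item.
REQUIRED = {"description": str, "quantity": int, "unit": str}

def validate_line_item(raw: str) -> dict:
    """Parse one JSON line item from the LLM and enforce field presence/types.

    Raises ValueError on malformed JSON or a bad field, so invalid output
    never reaches the database.
    """
    item = json.loads(raw)  # json.JSONDecodeError is a ValueError subclass
    for field, typ in REQUIRED.items():
        if not isinstance(item.get(field), typ):
            raise ValueError(f"bad or missing field: {field}")
    return item
```

Anything that fails validation gets routed to the same human-review queue as low-confidence matches, rather than silently dropped.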
## Get in Touch
We're Connective Labs, based in Singapore. If you're building AI automation for SMEs (or thinking about it), I'd love to compare notes.
I also write about AI adoption in Southeast Asian businesses at GroundLevel — it's a free newsletter breaking down how local brands like Sheng Siong, Old Chang Kee, and Bee Cheng Hiang could use AI practically.
Questions? Drop a comment or reach me at leon@connective-labs.com.