How I architected and shipped features of a funding discovery platform for African women entrepreneurs, orchestrating a team of 7 while staying hands-on in the codebase.
Context
As introduced in last week's article, Ekehi is a resource discovery platform built for women-led businesses across Africa. It surfaces funding opportunities, training programmes, and credit products, all aggregated, vetted, and filterable in one place.
Note: Going through the repository while reading the article will provide more context.
As Engineering Lead, my job was to guide the team in shipping the three core features that would make Ekehi real:
- Feature 3.1 — Funding Opportunities: a searchable, filterable directory of active funding across VC, grants, accelerators, loans, and more
- Feature 3.3 — Training & Capacity Building: a curated listing of business programmes, bootcamps, and accelerators for women entrepreneurs
- Feature 3.4 — Sector Classification: a consistent taxonomy enabling precise filtering across all resource types
Seven frontend contributors. A backend to build from scratch. One week.
Designing the Architecture First
Before writing a line of feature code, I had to answer one question (really two rolled into one): where does the data live, and how does the frontend get it?
The stack constraint was already set — Netlify for the frontend, Supabase as the database. We had to introduce a Node.js/Express API layer on Render between them, rather than letting the frontend call Supabase directly.
Client (Netlify) → Node.js/Express API (Render) → Supabase (PostgreSQL + Auth)
This was deliberate. Calling Supabase directly from the frontend would have required exposing an API key in client-side JS — and even with RLS, that creates a surface area I didn't want. The Express server holds the service role key in an environment variable, never exposed to the client. The server became the single security boundary.
The tradeoff is an extra network hop. For this use case, which is mostly read operations on a discovery tool, it was the right call.
The Security Model
Supabase's Row Level Security is enabled on all tables, but the server bypasses it using the service role key. This might seem backwards — why enable RLS if you bypass it? The answer is defence in depth. RLS is a safety net in case something is misconfigured at the server layer. The real gate is the Node.js/Express server, which hardcodes approval_status = 'approved' into every list query. No matter what query params the frontend sends, unapproved records are never reachable.
Layered Architecture
The backend was structured as a strict four-layer system:
Route → Controller → Service → Supabase SDK
A controller never touches the database. A service never touches req or res. This isn't just clean code preference — it makes each layer independently replaceable and testable. When a Supabase query needed changing, I touched only the service file. When a response format needed updating, only the controller.
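The separation can be sketched in a few lines. The function and object names here (`listOpportunities`, `opportunityService`) are illustrative, not the actual Ekehi source; the point is the boundary, where the service owns data access and the controller owns `req`/`res`:

```javascript
// Service layer: owns the data access, never sees req or res.
// In the real service this would be a Supabase query; a stub stands in here.
const opportunityService = {
  async list({ sector }) {
    const rows = [{ title: 'Grant A', sectors: ['technology_digital'] }];
    return sector ? rows.filter((r) => r.sectors.includes(sector)) : rows;
  },
};

// Controller layer: owns req/res, never touches the database.
async function listOpportunities(req, res) {
  const data = await opportunityService.list({ sector: req.query.sector });
  res.json({ success: true, message: 'OK', data, meta: {} });
}

// Route layer wiring (Express):
// router.get('/opportunities', listOpportunities);
```

Swapping the data source means rewriting only `opportunityService`; reshaping the response means touching only the controller.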
Keeping the Free-Tier Server Alive
Render's free tier spins down after 15 minutes of inactivity — which would mean a 30-second cold start for the first user every morning. I set up a cron job to ping the meta endpoint every 15 minutes, keeping the instance warm. Small operational detail, significant user experience impact.
Building the Data Layer
The Filter-as-Query-Builder Pattern
Feature 3.1 and 3.3 both required multi-dimensional filtering. Rather than building raw SQL strings or a complex query DSL, I applied each filter conditionally to a Supabase query object:
```javascript
let query = supabase
  .from('funding_opportunities')
  .select(FIELDS, { count: 'exact' })
  .eq('approval_status', 'approved'); // always applied — not a client param

if (search) query = query.or(`opportunity_title.ilike.%${search}%,...`);
if (sector) query = query.contains('sectors', [sector]);
if (country) query = query.eq('country', country);
```
{ count: 'exact' } returns the total row count alongside the data in a single query — no second round-trip needed for pagination metadata. Every list endpoint returns a consistent meta object: { page, limit, total, totalPages, hasNextPage, hasPrevPage }.
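The meta object derives entirely from three numbers. A minimal sketch of that derivation, assuming a helper named `buildMeta` (the name is mine, not necessarily the repo's):

```javascript
// Derive pagination metadata from page, limit, and the total row count
// returned by { count: 'exact' }.
function buildMeta(page, limit, total) {
  const totalPages = Math.max(1, Math.ceil(total / limit));
  return {
    page,
    limit,
    total,
    totalPages,
    hasNextPage: page < totalPages,
    hasPrevPage: page > 1,
  };
}

// e.g. buildMeta(2, 10, 35) → { page: 2, limit: 10, total: 35,
//      totalPages: 4, hasNextPage: true, hasPrevPage: true }
```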
Feature 3.4 — Sector Taxonomy as a First-Class Design Decision
Sector classification isn't glamorous, but getting it wrong cascades into every filter in the system. I designed the taxonomy as enum slugs — agriculture_food, technology_digital, fashion_textiles — stored as arrays on each record. This meant one opportunity could span multiple sectors (a common real-world case), and filtering used Supabase's contains() operator against the array.
I also built a /meta endpoint that returns all enum values — opportunity types, sectors, stages, cost types, duration ranges — in a single call. Frontend components populate their dropdowns from this rather than having hardcoded option lists scattered across multiple files.
Consistent API Contract
Every endpoint — success or error — returns the same envelope shape:
{ "success": true, "message": "...", "data": [...], "meta": {} }
I documented every endpoint in endpoints.md with request/response examples. This wasn't just good practice — with 7 contributors building frontend integrations, a shared reference prevented mismatched field names and assumptions about response shapes from becoming bugs.
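Enforcing one envelope shape is easiest when controllers never hand-build responses. A sketch of the kind of helpers that make the contract hard to violate (the names `ok` and `fail` are assumptions for illustration):

```javascript
// Success and error responses share the same four keys, so frontend
// code can destructure any response without shape checks.
function ok(message, data = null, meta = {}) {
  return { success: true, message, data, meta };
}

function fail(message) {
  return { success: false, message, data: null, meta: {} };
}
```

A controller then ends with `res.json(ok('Fetched', rows, meta))` or `res.status(400).json(fail('Invalid sector'))`, and the contract holds by construction.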
Building the Frontend Foundation for a Team of 7
Before features could be built, the frontend needed an architecture that 7 contributors could work within without constant coordination. I approached this in three layers: a component library, a module system, and a database schema that wouldn't break filtering.
The Component Library
Rather than leaving each contributor to build UI primitives from scratch — and ending up with 7 different button styles — I built a shared component library under client/shared/components/, each component following the same static factory pattern:
const btn = Button.create({ label: 'Apply now', variant: 'primary' });
container.appendChild(btn);
Every component has one public method, create(), that returns a DOM element. Internal rendering logic is hidden behind ES2022 private class fields (#buildClasses(), #buildHTML(), #attachEventListeners()). Contributors couldn't accidentally break internals — the only surface they ever touched was the public API.
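A simplified sketch of the pattern. The real Button returns a DOM element; this version builds a markup string so the shape of the API is visible without a browser, and the internals shown are illustrative rather than the actual source:

```javascript
// Static factory with ES2022 private class fields: create() is the only
// public surface, the # members are unreachable from outside the class.
class Button {
  #label;
  #variant;

  constructor({ label, variant = 'primary' }) {
    this.#label = label;
    this.#variant = variant;
  }

  // Private: contributors cannot call or override these.
  #buildClasses() {
    return `btn btn--${this.#variant}`;
  }

  #buildHTML() {
    return `<button class="${this.#buildClasses()}">${this.#label}</button>`;
  }

  // The single public entry point.
  static create(options) {
    return new Button(options).#buildHTML();
  }
}
```

Attempting `btn.#buildHTML()` from outside the class is a syntax error, so the internals are sealed at the language level, not by convention.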
The library covered:
- Button — 4 variants (primary, secondary, outline, ghost), 3 sizes, icon support, renderable as `<a>` for link CTAs
- Input — form input with validation states
- Dropdown — custom styled select with keyboard dismissal, click-outside-to-close, and `onChange` callback
- SearchBar — input + search button, fires `onSearch` on button click or Enter
- Nav — self-mounting; drop `<nav id="nav-root">` anywhere and import the script, it renders itself. Handles mobile hamburger menu, active link detection, and authenticated vs unauthenticated CTA states
- Footer — same self-mounting pattern
Every component was documented in docs/components/ with a full API reference, usage examples, and instructions for extending it. The goal was that any contributor could pick up a component without asking me how it worked.
Migrating to ES Modules
Last sprint, every HTML page was loading 4–6 <script> tags in a specific order — api.js before auth.service.js before the page script, or things broke silently. With 7 contributors adding pages, this was a maintenance problem waiting to happen.
I migrated the entire client codebase to native ES modules. Every shared utility and component became an explicit import. Every page went from a stack of script tags to a single:
```html
<script type="module" src="page.js"></script>
```
type="module" is automatically deferred — no load-order issues. ES modules are cached — auth.service.js imported by both nav.js and login.js evaluates only once. Contributors could add a component to their page with a single import line, without touching HTML at all.
A full migration plan was written in docs/setup/es-modules-migration.md before executing it — mapping every file that needed changes, every new import/export statement, and every HTML page that needed its script tags collapsed. The migration was executed as a single PR (#75) to avoid a partial state where some pages used modules and others didn't.
The Database Refactor That Made Filtering Possible
This was the most consequential piece of work in the sprint, and the least visible.
When wiring the filter queries, I discovered the database schema would break filtering by design. Categorical fields like sector and stage_eligibility were stored as free-text varchar — values like "Technology & Digital Services, Financial Services & Fintech" comma-separated in a single column. A standard .eq('sector', 'technology_digital') would never match.
The schema was refactored from scratch:
- PostgreSQL enums for single-value categoricals (`opportunity_type`, `status`, `format`) — validation enforced at the database layer, not application code
- `text[]` arrays for multi-value fields (`sectors`, `stages`) — a single opportunity can belong to multiple sectors, which is the real-world case
- GIN indexes on every array column — PostgreSQL's `@>` operator with a GIN index turns a multi-sector filter into a fast indexed lookup
- Lookup tables (`sectors`, `stages`) as the canonical source of display names, decoupled from the enum slugs used in queries
I chose text[] arrays over junction tables deliberately. Supabase's JS SDK maps .contains('sectors', ['technology_digital']) directly to PostgreSQL's @> — one line, no JOINs, no raw SQL. Junction tables would have required supabase.rpc() or nested filters that broke the existing service layer pattern.
The migration ran as 8 sequential scripts, each documented with rollback considerations. The data mapping exercise (converting "Grant-NGO" to grant_ngo, "Rolling Applications" to rolling_applications, and fixing edge cases where values were stored without spaces after commas) took as long as writing the migration code itself.
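The mapping rules above can be sketched as a small slugify helper. This `toSlug` is my illustration of the conversion described, not the actual migration code:

```javascript
// Convert a free-text category value into an enum slug:
// lowercase, separators collapsed to underscores.
function toSlug(value) {
  return value
    .trim()
    .toLowerCase()
    .replace(/[&/]/g, ' ')        // treat "&" and "/" as word separators
    .replace(/[^a-z0-9]+/g, '_')  // collapse spaces, hyphens, commas to "_"
    .replace(/^_+|_+$/g, '');     // strip leading/trailing underscores
}

// toSlug('Grant-NGO')            → 'grant_ngo'
// toSlug('Rolling Applications') → 'rolling_applications'
```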
The result: filtering just works. .contains('sectors', [sector]) against a GIN-indexed text[] column is both correct and fast.
Documentation as a Force Multiplier
With 7 contributors and little time for daily standups, documentation was how I kept the team unblocked. By the end of the sprint, the docs/ directory contained:
- docs/components/ — full API reference for every shared component
- docs/api/endpoints.md — every endpoint with request/response examples, all query params, all error codes
- docs/setup/system-design-case-study.md — the full architectural rationale, for onboarding and for the team's own understanding of what they were building on
- docs/setup/es-modules-migration.md — the migration plan before execution
- docs/setup/db-refactor.md — the schema refactor with every migration script, data mapping, and verification query documented
A contributor building the training page filter section shouldn't need to ask me what the Dropdown API is, what query params the /trainings endpoint accepts, or what slug values are valid for programme_type. That information lived in the docs. The friction of building fell from "wait for the lead to answer" to "read the reference."
The Bug That Broke Everything After Login
Midway through the sprint, I caught a subtle but critical bug: the opportunities page would load correctly for unauthenticated users, but return an empty array immediately after login.
The root cause was a Supabase singleton contamination bug. The auth.service.js was calling signInWithPassword() on the shared service role client — the same singleton used for all database queries. Even with persistSession: false, the GoTrueClient stores the returned user JWT in memory as currentSession. Every subsequent database query then sent Authorization: Bearer <user_jwt> instead of the service role key, making PostgREST apply RLS. Since there's no permissive RLS policy for the authenticated role, queries returned empty.
The fix was architectural: a separate Supabase client initialised with the anon key, used exclusively for user-facing auth operations. The service role singleton is never touched by auth flows.
```javascript
// auth.service.js — separate client, never shared
const authClient = createClient(supabaseUrl, supabaseAnonKey, {
  auth: { autoRefreshToken: false, persistSession: false },
});
```
This is the kind of bug that's invisible in testing and devastating in production, because it only manifests after a user successfully logs in.
Orchestrating the Team
Breaking Features into Issues
I decomposed each feature into discrete GitHub issues with explicit acceptance criteria and assigned them across the team. The filter section for opportunities, the training page UI, the login wiring, the signup wiring, the navbar auth state — each became a separate issue with clear inputs and outputs.
Some contributors didn't complete their assignments before the sprint deadline. Rather than letting work stall, I reassigned and in several cases picked up the work myself.
PR Reviews — Holding the Bar
I reviewed every PR that touched the three core features. Two patterns emerged in reviews that I pushed back on consistently:
PR #63 — Signup wiring: Requested changes before approval. The initial implementation had issues with how the auth flow was handling the response from the server — needed corrections before merge.
PR #66 — Training & Resources filter section: Requested changes before approval. The initial UI wiring wasn't aligned with the established component API.
On both, the aim was consistency with the patterns the rest of the codebase had already established. Inconsistency at the integration layer is what creates bugs that take hours to trace.
Wiring the Frontend to the API
Once the backend was live, I oversaw the integration work. Two issues surfaced during review:
Shared utilities extracted to prevent duplication. Both pages needed the same date formatting and amount scaling logic. Rather than letting each page carry its own copy, I extracted formatAmount, formatDate, daysUntil, humanize, and buildQueryString into a shared opportunity.utils.js module — imported by both the listing and detail pages.
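Of those shared utilities, `buildQueryString` carries the most subtle behaviour: it has to drop empty filter values so the API never receives `?sector=&country=`. A sketch under that assumption (the actual implementation in `opportunity.utils.js` may differ):

```javascript
// Build a query string from a filters object, skipping empty values
// so unset filters never reach the API.
function buildQueryString(params) {
  const pairs = Object.entries(params).filter(
    ([, v]) => v !== undefined && v !== null && v !== ''
  );
  return pairs.length
    ? '?' + new URLSearchParams(Object.fromEntries(pairs)).toString()
    : '';
}

// buildQueryString({ sector: 'technology_digital', country: '' })
//   → '?sector=technology_digital'
```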
Intl.NumberFormat memoization. The original amount formatter was constructing a new Intl.NumberFormat instance on every card render. On a listing page with 20 results, that's 40 expensive constructor calls per page load. I added a Map-based cache keyed by currency code, one construction per currency, reused on every subsequent call.
What Shipped
By end of sprint:
- A live Express API on Render serving Features 3.1 and 3.3, with full filter support, pagination, and a consistent response contract
- An opportunity detail page with full listing data, deadline countdown, sector/stage tags, and an apply CTA
- Filter and search wired end-to-end on both the opportunities and resources pages
- A shared sector taxonomy (Feature 3.4) implemented as enum slugs across the database, API, and frontend filter components
- A /meta endpoint returning all filter enum values for dynamic dropdown population
- Auth flow (signup, login, logout) wired across the frontend, with a critical singleton bug patched in the backend
- PRs reviewed, 2 with requested changes before merge
- Endpoint documentation and system design case study written for the team
What I'd Do Differently
The filter state on both pages is duplicated — the same filters object shape, the same onFilterChange pattern, the same buildQueryString call. With more time I would extract a shared FilteredPage module that both pages compose from, rather than each carrying their own copy of the pattern. It works now. It will diverge later.
The /meta endpoint also isn't being consumed by the frontend yet — filter options are still hardcoded in the JS files. The infrastructure is there; it just needs to be wired in.
Closing Thought
The most important thing I did this sprint wasn't writing code; it was making decisions early enough that the team could move in parallel without stepping on each other. The layered backend architecture, the response envelope, the sector taxonomy, the component API — these were the guardrails that let 7 people build towards the same system without needing a daily sync to stay aligned.
Engineering leadership at this scale is mostly about removing ambiguity before it becomes a bug.