
Michael Onoja


Scaling to 100K Users: Architecture Lessons from Building Nigeria's Social Commerce Platform


I still remember the night our server crashed at 3 AM. We'd just hit 50K users, and I was convinced we'd made a terrible architecture decision six months earlier. Turns out, I was both right and wrong.

Let me tell you the real story of building Ayema, Nigeria's homegrown social commerce platform, from zero to 102,000 active users, 2 million engagement events, and vendors actually getting paid for their content and products. This isn't going to be a sanitized case study. I'm going to show you what worked, what failed spectacularly, and the specific technical decisions that made the difference.

The Problem We Set Out to Solve

Here's the thing about building tech in Nigeria: the infrastructure challenges force you to make different choices than developers in Silicon Valley.

Nigerian vendors, creators and small businesses were stuck using global platforms that:

  • Assumed consistent high-speed internet
  • Optimized for desktop-first experiences (90% of our market is mobile)
  • Handled payments through systems not integrated with local banks
  • Didn't understand the social commerce culture here

We needed to build something different: a platform where a Kogi-based fashion designer could post content, sell products, and actually get paid, all within one ecosystem that works on 4G connections.

The Initial Architecture Decision (That Everyone Questioned)

When we started designing Ayema's architecture in 2021, we made a controversial choice: PHP/Laravel instead of the trendy Node.js/React full stack everyone else was using.

My co-founder literally asked me: "Are you sure? PHP?"

Here's why I made that call:

Backend: PHP/Laravel

Architecture:
- Laravel 8+ (now on 10)
- Modular API structure
- MySQL database
- VPS: 6GB RAM, 120GB SSD, 2.93TB bandwidth

Why Laravel worked for us:

  1. Mature ecosystem - When you're building in Nigeria, you can't afford to debug framework issues at 2 AM. Laravel's maturity meant fewer surprises.

  2. Eloquent ORM - Managing relationships between users, posts, products, transactions, and payments would have been hell with raw SQL. Eloquent made it elegant:

// Getting a user's posts with products and payment status
$posts = Post::with(['products', 'transactions.payment'])
    ->where('user_id', $userId)
    ->whereHas('products', function($query) {
        $query->where('status', 'active');
    })
    ->latest()
    ->paginate(20);
  3. Built-in features we needed immediately - Authentication, job queues, event broadcasting. We didn't have time to wire these together from scratch.

  4. Modular from day one - We structured it so each major feature (social, marketplace, payments, content monetization) lived in its own module. When we hit scaling issues, we could optimize individual parts without refactoring everything.
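To make that concrete, a module-per-feature layout in a Laravel app looks roughly like this (directory names here are illustrative, not our exact tree):

```
app/
└── Modules/
    ├── Social/          # posts, feeds, reactions
    ├── Marketplace/     # products, orders, escrow
    ├── Payments/        # wallets, gateways, webhooks
    └── Monetization/    # creator earnings, payouts
        (each with its own Controllers/, Models/, routes)
```

Each module registers its own routes and service providers, so a slow marketplace query never forces you to touch payment code.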

Frontend: ReactJS + Blade Templates

Yes, we're using both. And yes, it's intentional.

ReactJS for the dynamic social features:

  • Real-time post feeds
  • Interactive comments and reactions
  • Marketplace product browsing

Blade templates for:

  • Static pages (faster load times)
  • SEO-critical content
  • Admin dashboard (doesn't need SPA complexity)

Here's the controversial part: we serve a hybrid. Critical user paths are ReactJS for interactivity. Everything else? Server-rendered Blade templates.

Why? Because a 2MB React bundle takes 30+ seconds to load on spotty 3G. A 50KB Blade-rendered page? 2 seconds.

// Our React entry point is lean (React 18 createRoot API)
import React from 'react';
import { createRoot } from 'react-dom/client';
import Feed from './components/Feed';
import Marketplace from './components/Marketplace';

// Only hydrate specific islands, not the entire page
const feedRoot = document.getElementById('feed-root');
if (feedRoot) {
    createRoot(feedRoot).render(<Feed />);
}

const marketplaceRoot = document.getElementById('marketplace-root');
if (marketplaceRoot) {
    createRoot(marketplaceRoot).render(<Marketplace />);
}

This "islands of interactivity" approach kept our initial page loads under 3 seconds even on slow connections.
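The arithmetic behind those load times is worth spelling out. Assuming an effective throughput of roughly 0.5 Mbps on congested 3G (an assumption, not a measurement — real-world speeds vary wildly):

```php
<?php
// Back-of-envelope download time at a given effective throughput.
// 0.5 Mbps is an assumed figure for congested 3G.
function downloadSeconds(int $bytes, float $mbps): float
{
    return ($bytes * 8) / ($mbps * 1_000_000);
}

printf("2 MB React bundle: %.0f s\n", downloadSeconds(2_000_000, 0.5)); // ~32 s
printf("50 KB Blade page:  %.1f s\n", downloadSeconds(50_000, 0.5));    // ~0.8 s
```

That 40x gap is the whole argument for the hybrid approach in one division.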

The Database Design That Saved Us

At 10K users, our naive database structure started showing cracks. By 30K, queries were taking 8+ seconds. Here's what we learned the hard way:

Initial Mistake: Over-Normalization

My computer science professors would be proud of my original schema - perfectly normalized to 5NF. My users? They were getting timeout errors.

The problem: To load a user's feed, we were joining 7 tables:

SELECT * FROM posts
JOIN users ON posts.user_id = users.id
JOIN post_media ON posts.id = post_media.post_id
JOIN reactions ON posts.id = reactions.post_id
JOIN comments ON posts.id = comments.post_id
JOIN products ON posts.id = products.post_id
JOIN transactions ON products.id = transactions.product_id
WHERE ...

This query took 6.4 seconds at 50K users.

The Fix: Strategic Denormalization

I know, I know. But hear me out. We added computed columns and caching:

// Add cached counts to posts table
Schema::table('posts', function (Blueprint $table) {
    $table->integer('reactions_count')->default(0);
    $table->integer('comments_count')->default(0);
    $table->integer('shares_count')->default(0);
    $table->timestamp('last_activity_at')->nullable();
});

// Update these via database triggers and Laravel events
class PostReacted
{
    public function handle($event)
    {
        DB::table('posts')
            ->where('id', $event->post->id)
            ->increment('reactions_count');

        Cache::tags(['feed', "user:{$event->post->user_id}"])
            ->flush();
    }
}

Result? Feed queries dropped from 6.4 seconds to 280ms.

The lesson: Denormalization isn't dirty when you're serving real users on real infrastructure.

The 2AM Crisis: When 50K Users Broke Everything

Remember that crash I mentioned? Here's what happened:

11:43 PM - Everything's running smoothly

12:18 AM - Response times creep up to 2 seconds

1:34 AM - Database connection pool exhausted

2:47 AM - Complete platform outage

I woke up to 47 WhatsApp messages and a dead server.

The Root Cause

Our image upload system was synchronous. When a user uploaded a photo:

  1. PHP received the file
  2. Validated it (held the connection)
  3. Resized it to 5 different sizes (held the connection)
  4. Uploaded to storage (held the connection)
  5. Updated database
  6. Returned response

With 50K+ users posting images simultaneously, our server ran out of worker processes. New requests just... queued. Forever.

The Solution: Async Job Processing

We moved heavy operations to Laravel queues:

// Before - synchronous nightmare
public function uploadImage(Request $request)
{
    $image = $request->file('image');

    // This took 8-15 seconds per image
    $resized = $this->resizeImage($image, [
        'thumbnail' => [150, 150],
        'small' => [480, 480],
        'medium' => [720, 720],
        'large' => [1080, 1080],
    ]);

    $this->uploadToStorage($resized);

    return response()->json(['success' => true]);
}

// After - async, returns immediately
public function uploadImage(Request $request)
{
    $image = $request->file('image');

    // Store original, return immediately
    $tempPath = $image->store('temp');

    // Process async
    ProcessImage::dispatch($tempPath, auth()->id());

    return response()->json([
        'success' => true,
        'message' => 'Image processing...'
    ]);
}

Impact:

  • Upload endpoint response time: 8s → 340ms
  • Server capacity increased 4x
  • User experience: Instant feedback with background processing
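The ProcessImage job itself isn't shown above, but the heart of it is just aspect-ratio math per variant. A minimal, framework-free sketch (the variant sizes mirror the ones in the controller; `fitWithin` is an illustrative helper, not our actual class):

```php
<?php
// Compute output dimensions for each variant, preserving aspect
// ratio and never upscaling beyond the original image.
function fitWithin(int $w, int $h, int $maxW, int $maxH): array
{
    $scale = min($maxW / $w, $maxH / $h, 1.0);
    return [(int) round($w * $scale), (int) round($h * $scale)];
}

$variants = [
    'thumbnail' => [150, 150],
    'small'     => [480, 480],
    'medium'    => [720, 720],
    'large'     => [1080, 1080],
];

// A typical 12 MP phone photo is 4000x3000
foreach ($variants as $name => [$mw, $mh]) {
    [$w, $h] = fitWithin(4000, 3000, $mw, $mh);
    echo "{$name}: {$w}x{$h}\n";
}
```

The queue worker then runs this math, writes each variant to storage, and updates the post record — none of which ever blocks a PHP-FPM worker serving requests.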

The Payment Integration That Nearly Killed Our Momentum

Building payments in Nigeria is... special. We integrated with:

  • Paystack - Our primary payment gateway for card payments and bank transfers
  • Flutterwave - Secondary gateway for redundancy and alternative payment methods
  • Internal wallet system - For peer-to-peer transfers and creator earnings
  • Bank transfer verification - Manual and automated reconciliation

The challenge wasn't the payment providers - Paystack and Flutterwave are excellent, with solid APIs and great documentation. The challenge was building the business logic around multiple payment flows while staying compliant with Nigerian financial regulations.

The Architecture Challenge

Users could earn money from:

  1. Content monetization (people reacting to their posts)
  2. Product sales (via marketplace)
  3. Affiliate commissions

And they needed to:

  • Withdraw to their bank accounts (via Paystack/Flutterwave)
  • Transfer to other Ayema users (internal wallet)
  • Pay vendors (escrow for marketplace transactions)

All while complying with SCUML (Special Control Unit against Money Laundering) regulations and maintaining accurate financial records.

Here's our wallet transaction table structure:

Schema::create('wallet_transactions', function (Blueprint $table) {
    $table->uuid('id')->primary();
    $table->foreignId('user_id')->constrained();
    $table->enum('type', [
        'content_earning',
        'product_sale',
        'affiliate_commission',
        'withdrawal',
        'refund'
    ]);
    $table->decimal('amount', 15, 2);
    $table->decimal('fee', 10, 2)->default(0);
    $table->string('status'); // pending, processing, completed, failed
    $table->string('reference')->unique();
    $table->string('payment_channel')->nullable(); // bank, wallet, etc
    $table->json('metadata')->nullable();
    $table->timestamp('processed_at')->nullable();
    $table->timestamps();

    $table->index(['user_id', 'status']);
    $table->index(['reference']);
    $table->index(['created_at']);
});

The key insight: Use UUIDs for transaction IDs. When reconciling with payment gateways, auto-incrementing IDs exposed how many transactions we were processing. UUIDs kept that private.
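If you're not pulling in a package like ramsey/uuid, a version-4 UUID can be generated with nothing but `random_bytes` — a sketch of what a `generateReference()` helper might wrap:

```php
<?php
// RFC 4122 version-4 UUID built from 16 random bytes.
function uuidv4(): string
{
    $bytes = random_bytes(16);
    $bytes[6] = chr((ord($bytes[6]) & 0x0f) | 0x40); // set version 4
    $bytes[8] = chr((ord($bytes[8]) & 0x3f) | 0x80); // set RFC variant
    return vsprintf('%s%s-%s-%s-%s-%s%s%s', str_split(bin2hex($bytes), 4));
}

echo uuidv4(), "\n"; // e.g. 9f3b2c1a-7d4e-4a2b-8c1d-0e5f6a7b8c9d
```

Unlike an auto-increment, two UUIDs leak nothing about how many rows sit between them.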

Integrating Multiple Payment Gateways

We use both Paystack and Flutterwave for redundancy and to give users options:

class PaymentService
{
    protected $providers = [
        'paystack' => PaystackProvider::class,
        'flutterwave' => FlutterwaveProvider::class,
    ];

    public function initiateWithdrawal($user, $amount, $bankDetails)
    {
        // Try primary provider first
        $provider = $this->getProvider('paystack');

        try {
            $transfer = $provider->transfer([
                'amount' => $amount * 100, // Convert to kobo
                'recipient' => $bankDetails['account_number'],
                'bank_code' => $bankDetails['bank_code'],
                'reason' => 'Ayema wallet withdrawal',
                'reference' => $this->generateReference(),
            ]);

            return $this->recordTransaction($user, $transfer, 'paystack');

        } catch (PaymentProviderException $e) {
            // Fallback to secondary provider
            Log::warning('Paystack failed, trying Flutterwave', [
                'error' => $e->getMessage()
            ]);

            return $this->initiateWithFlutterwave($user, $amount, $bankDetails);
        }
    }

    public function handleWebhook(Request $request, $provider)
    {
        // Verify webhook signature
        if (!$this->verifyWebhookSignature($request, $provider)) {
            return response()->json(['error' => 'Invalid signature'], 401);
        }

        $data = $request->all();

        // Handle different event types
        switch ($data['event']) {
            case 'charge.success':
                $this->handleSuccessfulPayment($data);
                break;

            case 'transfer.success':
                $this->handleSuccessfulWithdrawal($data);
                break;

            case 'transfer.failed':
                $this->handleFailedWithdrawal($data);
                break;
        }

        return response()->json(['status' => 'success']);
    }

    protected function verifyWebhookSignature($request, $provider)
    {
        if ($provider === 'paystack') {
            // Paystack signs the raw request body with your secret
            // key using HMAC SHA-512
            $signature = $request->header('X-Paystack-Signature');
            $computed = hash_hmac(
                'sha512',
                $request->getContent(),
                config('services.paystack.secret_key')
            );

            return $signature && hash_equals($computed, $signature);
        }

        // Flutterwave sends your configured secret hash back verbatim
        // in the verif-hash header — compare directly, no HMAC involved
        $signature = $request->header('verif-hash');

        return $signature
            && hash_equals(config('services.flutterwave.secret_hash'), $signature);
    }
}

Important lessons from payment integration:

  1. Always verify webhook signatures - We had an incident where someone sent fake webhook calls trying to credit accounts. Signature verification saved us.

  2. Idempotency is critical - Payment gateways sometimes send duplicate webhooks. Use unique references:

public function handleSuccessfulPayment($data)
{
    $reference = $data['data']['reference'];

    // Check if already processed
    if (WalletTransaction::where('reference', $reference)->exists()) {
        Log::info('Duplicate webhook received', ['reference' => $reference]);
        return;
    }

    // Process payment...
}
  3. Handle failures gracefully - Sometimes withdrawals fail after money is debited. We implemented automatic refunds:

public function handleFailedWithdrawal($data)
{
    $reference = $data['data']['reference'];
    $transaction = WalletTransaction::where('reference', $reference)->first();

    if ($transaction && $transaction->status === 'processing') {
        // Refund to wallet
        WalletService::credit(
            $transaction->user_id,
            amount: $transaction->amount,
            type: 'refund',
            reference: "refund:{$reference}"
        );

        $transaction->update(['status' => 'failed']);

        // Notify user
        $transaction->user->notify(new WithdrawalFailed($transaction));
    }
}
  4. Bank account verification before withdrawal - Both Paystack and Flutterwave provide APIs to verify account details. We call this before processing withdrawals to prevent sending money to wrong accounts:

public function verifyBankAccount($accountNumber, $bankCode)
{
    $response = Http::withHeaders([
        'Authorization' => 'Bearer ' . config('services.paystack.secret_key'),
    ])->get('https://api.paystack.co/bank/resolve', [
        'account_number' => $accountNumber,
        'bank_code' => $bankCode,
    ]);

    if ($response->successful()) {
        return [
            'valid' => true,
            'account_name' => $response->json('data.account_name'),
        ];
    }

    return ['valid' => false];
}

This prevents the frustrating experience of users withdrawing to wrong account numbers and losing money.

The Double-Entry Bookkeeping Pattern

To ensure money never vanished, we implemented double-entry bookkeeping:

class WalletService
{
    public function transfer($fromUser, $toUser, $amount, $type)
    {
        DB::transaction(function() use ($fromUser, $toUser, $amount, $type) {
            // Guard: lock the sender's rows and never let them go negative
            $balance = WalletTransaction::where('user_id', $fromUser->id)
                ->lockForUpdate()
                ->sum('amount');

            if ($balance < $amount) {
                throw new \RuntimeException('Insufficient wallet balance');
            }

            // Debit sender
            WalletTransaction::create([
                'user_id' => $fromUser->id,
                'type' => $type,
                'amount' => -$amount,
                'status' => 'completed',
                'reference' => $this->generateReference(),
            ]);

            // Credit recipient
            WalletTransaction::create([
                'user_id' => $toUser->id,
                'type' => $type,
                'amount' => $amount,
                'status' => 'completed',
                'reference' => $this->generateReference(),
            ]);

            // Update cached balances
            $this->updateBalance($fromUser);
            $this->updateBalance($toUser);
        });
    }
}

This pattern saved us during an incident where a payment gateway sent duplicate callbacks. Because we checked for unique references, no double-payments occurred.
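The property that makes double-entry auditable is simple: every transfer writes two entries that sum to zero, so the platform-wide ledger always nets to zero. A framework-free sketch of that invariant (the data here is made up for illustration):

```php
<?php
// Toy ledger: each transfer appends a debit and a matching credit.
$ledger = [];

function transfer(array &$ledger, string $from, string $to, float $amount, string $ref): void
{
    $ledger[] = ['user' => $from, 'amount' => -$amount, 'reference' => "{$ref}:debit"];
    $ledger[] = ['user' => $to,   'amount' =>  $amount, 'reference' => "{$ref}:credit"];
}

transfer($ledger, 'buyer_b', 'vendor_a', 1500.00, 'tx-001');
transfer($ledger, 'vendor_a', 'vendor_c', 300.00, 'tx-002');

// Invariant: the whole ledger nets to zero — money moves, it never appears.
echo array_sum(array_column($ledger, 'amount')), "\n"; // 0
```

A nightly job that asserts this sum is zero catches missing or duplicated entries before reconciliation with the payment gateways does.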

Optimizing for Nigerian Internet Reality

The biggest lesson: You cannot build for Nigerian users the same way you build for US/European users.

Image Optimization Pipeline

Original images from users' phones were 3-8MB. Loading a feed of 20 posts meant 60-160MB of data. On a 500MB/month plan (common in Nigeria), that's... not going to work.

Our solution:

class ImageOptimizationPipeline
{
    public function process($image)
    {
        return Pipeline::send($image)
            ->through([
                RemoveEXIFData::class,      // Remove location/camera data
                ResizeToMax1080::class,     // Cap maximum dimension
                ConvertToWebP::class,       // 30% smaller than JPEG
                CompressTo75Quality::class, // Sweet spot for size/quality
                GenerateThumbnail::class,   // Instant previews
                UploadToCDN::class,         // Serve from nearest edge
            ])
            ->thenReturn();
    }
}

Results:

  • Average image size: 3.2MB → 180KB (94% reduction)
  • Feed load on 3G: 45s → 4.2s
  • User complaints about data usage: 89% reduction

API Response Compression

We implemented aggressive response compression:

// Middleware for API responses
public function handle($request, Closure $next)
{
    $response = $next($request);

    if ($request->expectsJson()) {
        // Minify JSON
        $content = json_encode(
            json_decode($response->getContent()),
            JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE
        );

        // Gzip if client supports it
        if (str_contains($request->header('Accept-Encoding'), 'gzip')) {
            $content = gzencode($content, 6);
            $response->header('Content-Encoding', 'gzip');
        }

        $response->setContent($content);
    }

    return $response;
}

API payloads dropped from ~120KB to ~22KB average.
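You can see both steps of that middleware pay off in isolation — pretty-printed vs minified vs gzipped (the sample payload below is invented for illustration):

```php
<?php
// Compare payload sizes across the two compression steps.
$payload = ['data' => array_fill(0, 50, [
    'id' => 1, 'name' => 'Ankara gown', 'price' => 15000, 'in_stock' => true,
])];

$pretty   = json_encode($payload, JSON_PRETTY_PRINT);
$minified = json_encode($payload, JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE);
$gzipped  = gzencode($minified, 6);

printf("pretty:   %d bytes\n", strlen($pretty));
printf("minified: %d bytes\n", strlen($minified));
printf("gzipped:  %d bytes\n", strlen($gzipped));
```

Repetitive JSON (lists of similar records, which is most feed traffic) compresses especially well, which is why the average dropped so far.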

The Infrastructure Setup

Here's our actual production setup:

Server Configuration

Provider: Custom VPS 
OS: AlmaLinux 8
RAM: 12GB
Storage: 120GB SSD
Bandwidth: 2.93TB/month
Virtualization: KVM

Why not AWS/Google Cloud?

Simple math:

  • AWS t3.medium equivalent: $35-40/month + bandwidth costs
  • Our VPS: $25/month, all-inclusive
  • Savings over 2 years: $360+

At our stage, that money went into user acquisition instead.

Database Optimization

MySQL configuration tuned for our workload:

[mysqld]
# Connection pool
max_connections = 200
thread_cache_size = 16

# Buffer pool (4GB on 12GB server)
innodb_buffer_pool_size = 4G
innodb_log_file_size = 512M

# Query cache for repeated queries
# (removed entirely in MySQL 8.0 — omit these two lines there)
# query_cache_type = 1
# query_cache_size = 256M

# Temp tables
tmp_table_size = 64M
max_heap_table_size = 64M

Combined with proper indexing:

// Critical indexes for feed queries
Schema::table('posts', function (Blueprint $table) {
    $table->index(['user_id', 'created_at']);
    $table->index(['status', 'created_at']);
    $table->index('last_activity_at');
});

Schema::table('reactions', function (Blueprint $table) {
    $table->index(['post_id', 'created_at']);
    $table->unique(['post_id', 'user_id']); // Prevent duplicate reactions
});

Caching Strategy

Three-tier caching:

  1. Application cache (Redis) - User sessions, frequently accessed data
  2. Query cache - MySQL's query cache for repeated queries (removed in MySQL 8.0)
  3. Edge cache (Cloudflare) - Static assets and API responses with low mutation rates

// Example: Caching user profile data
public function getProfile($userId)
{
    return Cache::tags(['user', "profile:{$userId}"])
        ->remember("user.profile.{$userId}", 3600, function() use ($userId) {
            return User::with(['posts', 'products', 'followers'])
                ->findOrFail($userId);
        });
}

// Invalidate on profile update
public function updateProfile($userId, $data)
{
    $user = User::findOrFail($userId);
    $user->update($data);

    Cache::tags(["profile:{$userId}"])->flush();

    return $user;
}

Security & Compliance (The Boring But Critical Stuff)

Building a platform handling payments means you can't skip security. Here's what we implemented:

API Security

// Rate limiting per user
RateLimiter::for('api', function (Request $request) {
    return $request->user()
        ? Limit::perMinute(120)->by($request->user()->id)
        : Limit::perMinute(20)->by($request->ip());
});

// Encrypted API endpoints
Route::middleware(['auth:sanctum', 'throttle:api', 'verify.signature'])
    ->group(function() {
        Route::post('/transactions', [TransactionController::class, 'create']);
    });

Data Protection (NITDA Compliance)

Nigeria's NITDA requires:

  • User consent for data collection
  • Right to data deletion
  • Secure storage of personal information

Our implementation:

class UserDataExportJob implements ShouldQueue
{
    public function __construct(protected int $userId) {}

    public function handle()
    {
        $user = User::find($this->userId);

        $data = [
            'profile' => $user->only(['name', 'email', 'phone']),
            'posts' => $user->posts()->get(),
            'products' => $user->products()->get(),
            'transactions' => $user->transactions()->get(),
            'reactions' => $user->reactions()->get(),
        ];

        $exportPath = "exports/user-{$user->id}-" . now()->timestamp . ".json";

        Storage::put($exportPath, json_encode($data, JSON_PRETTY_PRINT));

        // Send download link to user
        Mail::to($user)->send(new DataExportReady($exportPath));
    }
}

SSL & Encryption

All data in transit encrypted via SSL certificates (Let's Encrypt):

# Auto-renewal script
certbot renew --nginx --non-interactive --agree-tos

Database credentials encrypted using Laravel's built-in encryption:

config([
    'database.connections.mysql.password' => 
        decrypt(env('DB_PASSWORD_ENCRYPTED'))
]);

Monitoring & Debugging at Scale

At 100K users, you can't manually check if things are working. We automated everything:

Application Monitoring

// Custom monitoring middleware
class PerformanceMonitor
{
    public function handle($request, Closure $next)
    {
        $start = microtime(true);

        $response = $next($request);

        $duration = (microtime(true) - $start) * 1000;

        // Log slow requests
        if ($duration > 1000) {
            Log::warning('Slow request detected', [
                'url' => $request->fullUrl(),
                'method' => $request->method(),
                'duration' => $duration,
                'user_id' => auth()->id(),
                'ip' => $request->ip(),
            ]);
        }

        // Track metrics
        Metrics::increment('api.requests', [
            'endpoint' => $request->path(),
            'status' => $response->status(),
        ]);

        Metrics::histogram('api.response_time', $duration, [
            'endpoint' => $request->path(),
        ]);

        return $response;
    }
}

Error Tracking

Every exception gets logged with context:

app()->bind(ExceptionHandler::class, function ($app) {
    return new class($app) extends Handler {
        public function report(\Throwable $exception)
        {
            if ($this->shouldReport($exception)) {
                Log::error($exception->getMessage(), [
                    'exception' => get_class($exception),
                    'file' => $exception->getFile(),
                    'line' => $exception->getLine(),
                    'trace' => $exception->getTraceAsString(),
                    'user_id' => auth()->id(),
                    'url' => request()->fullUrl(),
                    'input' => request()->except(['password']),
                ]);
            }

            parent::report($exception);
        }
    };
});

Automated Alerts

// Send Telegram alert for critical issues
if ($exception instanceof DatabaseConnectionException) {
    Http::post('https://api.telegram.org/bot' . env('TELEGRAM_BOT_TOKEN') . '/sendMessage', [
        'chat_id' => env('TELEGRAM_ADMIN_CHAT'),
        'text' => "🚨 DATABASE DOWN\n\n" .
                  "Time: " . now() . "\n" .
                  "Error: " . $exception->getMessage(),
    ]);
}

The Features That Drove Growth

Technical architecture alone doesn't get you 100K users. Here's what actually drove adoption:

1. Content Monetization That Actually Works

Unlike global platforms where only mega-influencers make money, we enabled monetization from day one:

// User earns when someone reacts to their post
class ReactionController extends Controller
{
    public function react(Post $post)
    {
        $reaction = Reaction::create([
            'post_id' => $post->id,
            'user_id' => auth()->id(),
            'type' => request('type'), // like, love, wow, etc
        ]);

        // Creator earns ₦1 per reaction
        WalletService::credit(
            $post->user_id,
            amount: 1.00,
            type: 'content_earning',
            reference: "reaction:{$reaction->id}"
        );

        // Notify creator
        $post->user->notify(new EarnedFromContent($reaction));

        return response()->json(['success' => true]);
    }
}

This simple feature drove engagement because creators could actually see real money from their content.

2. The Free Internet Initiative

This was my idea: Partner with institutions to provide free Wi-Fi powered by Starlink, branded with Ayema.

The setup:

  • We pay for Starlink hardware and subscription
  • Institution provides physical space
  • Users connect through Ayema-branded portal
  • They discover the platform naturally

Results at Ahmadu Bello University, Zaria:

  • 8,000+ students connected
  • 2,400 signed up for Ayema
  • 30% conversion rate (insane for organic growth)
  • Cost per acquisition: ₦830 (vs ₦2,500 for paid ads)

The technical implementation was straightforward - captive portal redirecting to our registration:

// Captive portal detection and redirect
if (window.location.hostname === 'captive.ayema.ng') {
    // User is on WiFi login page
    const params = new URLSearchParams(window.location.search);
    const mac = params.get('mac');
    const ap = params.get('ap');

    // Track unique device
    fetch('/api/wifi/connect', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ mac, ap, location: 'abu-zaria' })
    });

    // Redirect to signup with pre-filled referral
    window.location.href = '/register?source=wifi-abu';
}

3. Integrated Marketplace with Real Shipping

Nigerian SMEs couldn't easily sell online. Most platforms either:

  • Required technical skills to set up
  • Charged steep listing or commission fees
  • Didn't integrate local logistics

We built it directly into the social feed:

// User posts a product, it appears as a feed item
class ProductPost extends Post
{
    public function toFeedItem()
    {
        return [
            'type' => 'product',
            'id' => $this->id,
            'user' => $this->user->profile(),
            'content' => $this->description,
            'product' => [
                'name' => $this->product->name,
                'price' => $this->product->price,
                'images' => $this->product->images,
                'in_stock' => $this->product->stock > 0,
                'shipping_available' => $this->product->ships_nationwide,
            ],
            'created_at' => $this->created_at,
            'reactions_count' => $this->reactions_count,
            'comments_count' => $this->comments_count,
        ];
    }
}

Sellers could post a product like they post regular content. Buyers could purchase directly in the feed. No context switching.

The Metrics That Matter

After 12 months, here's where we stand:

User Engagement

  • 102,000 active users (users who opened the app in last 30 days)
  • 2 million engagement events (posts, reactions, comments, purchases)
  • 695,000 page views
  • Average session duration: 8.4 minutes
  • Daily active users: 31,000 (30% DAU/MAU ratio)

Platform Performance

  • 99.2% uptime
  • Average API response time: 340ms
  • Image load time: 1.8s average
  • Database query time: 180ms average

Commercial Success

  • ₦25 million in total funding received
  • 500+ active vendors on marketplace
  • 4.8★ rating on Google Play Store (1,000+ downloads)
  • Google AdSense approved (earning USD from international ads)

Infrastructure Efficiency

  • Cost per user: ₦180/month (including hosting, bandwidth, storage)
  • Server costs: ₦25,000/month ($25 USD)
  • CDN costs: ₦8,000/month (Cloudflare free tier + paid features)

What I'd Do Differently

Building with hindsight is easy, but here are genuine mistakes I'd fix:

1. Start with Proper Testing Earlier

We didn't write tests until we had 20K users. Big mistake. When we finally added tests:

class PaymentTest extends TestCase
{
    /** @test */
    public function user_can_withdraw_funds()
    {
        $user = User::factory()->create(['wallet_balance' => 5000]);

        $response = $this->actingAs($user)
            ->post('/api/withdrawals', [
                'amount' => 2000,
                'bank_code' => '058',
                'account_number' => '0123456789',
            ]);

        $response->assertStatus(200);
        $this->assertEquals(3000, $user->fresh()->wallet_balance);
        $this->assertDatabaseHas('wallet_transactions', [
            'user_id' => $user->id,
            'amount' => -2000,
            'type' => 'withdrawal',
        ]);
    }
}

This test would have caught a bug where failed withdrawals still deducted money from user wallets. We only found it after 12 users reported it.

2. Implement Feature Flags from Day One

Rolling back a broken feature meant deploying code. With feature flags, we could toggle features:

// Should've used this pattern from the start
if (Feature::active('marketplace-v2')) {
    return view('marketplace.v2');
} else {
    return view('marketplace.v1');
}

Would have saved us during the "marketplace redesign that broke everything" incident.

3. Think About Mobile Data Costs Earlier

We optimized for data after launching. Should have been core from day one. Every feature should answer: "How much data does this use?"
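One way to make that question concrete is a per-user data budget. Taking the 500MB/month plan mentioned earlier, plus our post-optimization sizes (~22KB API payloads, ~180KB images), and assuming — unrealistically — that the whole budget goes to one app:

```php
<?php
// Rough daily data budget on a 500 MB/month plan.
$monthlyBytes  = 500 * 1_000_000;
$dailyBytes    = intdiv($monthlyBytes, 30);    // ≈ 16.6 MB/day

$apiCallBytes  = 22_000;        // average compressed API payload
$feedLoadBytes = 180_000 * 20;  // one feed page: 20 optimized images

echo intdiv($dailyBytes, $apiCallBytes), " API calls/day\n";    // ~757
echo intdiv($dailyBytes, $feedLoadBytes), " feed loads/day\n";  // ~4
```

Four image-heavy feed loads a day is a tight ceiling — which is exactly why the image pipeline mattered more than any single backend optimization.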

Lessons for Builders in Emerging Markets

If you're building for Nigeria, or similar markets:

1. Internet Assumptions

Don't assume:

  • ✗ Consistent broadband connections
  • ✗ Unlimited data plans
  • ✗ Users on latest devices

Instead, design for:

  • ✓ Intermittent 3G/4G
  • ✓ 500MB - 2GB monthly data budgets
  • ✓ Android devices from 2018-2020

2. Payment Integration

Don't assume:

  • ✗ Credit card prevalence
  • ✗ Simple payment flows
  • ✗ Instant settlements

Instead, design for:

  • ✓ Bank transfers as primary method
  • ✓ Manual reconciliation processes
  • ✓ 24-48 hour settlement delays

3. User Behavior

Don't assume:

  • ✗ Users read instructions
  • ✗ Users understand technical jargon
  • ✗ Users want complex features

Instead, design for:

  • ✓ Intuitive, icon-based interfaces
  • ✓ Familiar patterns from WhatsApp/Facebook
  • ✓ Simple flows with clear outcomes

The Tech Stack Summary

For those who want the TL;DR:

Backend:

  • PHP 8.1 / Laravel 10
  • MySQL 8.0
  • Redis for caching and queues
  • Laravel Sanctum for API authentication

Frontend:

  • ReactJS 18 for interactive components
  • Blade templates for static content
  • Tailwind CSS for styling
  • Axios for API calls

Infrastructure:

  • VPS (AlmaLinux 8, 12GB RAM, 120GB SSD)
  • Cloudflare for CDN and DNS
  • Let's Encrypt for SSL
  • Starlink for the institutional WiFi initiative

DevOps:

  • Git for version control
  • GitHub Actions for CI/CD
  • Laravel Forge for server management
  • Daily automated backups

Monitoring:

  • Custom logging to files + database
  • Telegram for critical alerts
  • Google Analytics for user behavior

Open Sourcing Components

I'm working on open-sourcing parts of our stack:

  1. Image optimization pipeline - The entire flow from upload to CDN
  2. Wallet system - Double-entry bookkeeping for Laravel
  3. Feed algorithm - Our approach to relevance ranking
  4. Offline-first PWA utilities - For handling intermittent connections
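To give a flavor of the feed-algorithm component, here's a common shape for relevance ranking: engagement weighted by interaction type and decayed by age. This is purely an illustrative sketch, not Ayema's actual algorithm; the weights and decay exponent are made up:

```javascript
// Illustrative relevance score: engagement weighted by type, decayed by
// age. NOT the actual Ayema ranker; weights and exponent are made up.
function relevanceScore(post, now = Date.now()) {
  const ageHours = (now - post.createdAt) / 3.6e6; // ms per hour
  const engagement = post.likes + 2 * post.comments + 3 * post.shares;
  // Dividing by a power of age lets fresh posts outrank stale viral ones.
  return engagement / Math.pow(ageHours + 2, 1.5);
}

const HOUR = 3.6e6;
const posts = [
  { id: "fresh", createdAt: Date.now() - 1 * HOUR, likes: 10, comments: 2, shares: 0 },
  { id: "viral-but-old", createdAt: Date.now() - 48 * HOUR, likes: 300, comments: 40, shares: 10 },
];
posts.sort((a, b) => relevanceScore(b) - relevanceScore(a));
console.log(posts.map((p) => p.id));
```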

Follow me here on Dev.to and on GitHub for updates.

What's Next for Ayema

Our roadmap for the next 12 months:

Technical

  • [ ] Migrate to microservices (social, marketplace, payments as separate services)
  • [ ] Implement GraphQL for more efficient mobile data usage
  • [ ] Build native iOS app (currently using web wrapper)
  • [ ] Add offline-first capabilities (service workers + IndexedDB)
  • [ ] Real-time notifications via WebSockets
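The offline-first roadmap item boils down to queueing writes locally while the network is down and flushing them when connectivity returns. A minimal sketch, with an in-memory array standing in for IndexedDB (all names are illustrative):

```javascript
// Sketch of the offline-first idea: queue writes while the network is
// down, flush when it returns. An in-memory array stands in for
// IndexedDB; all names are illustrative.
class OfflineQueue {
  constructor(send) {
    this.send = send; // async function performing the real network call
    this.pending = [];
  }
  async submit(action) {
    try {
      return await this.send(action);
    } catch {
      this.pending.push(action); // offline: persist for later
      return { queued: true };
    }
  }
  async flush() {
    const toSend = this.pending.splice(0);
    for (const action of toSend) await this.send(action);
    return toSend.length;
  }
}

// Usage: a "like" submitted offline is delivered once we're back online.
let online = false;
const delivered = [];
const queue = new OfflineQueue(async (action) => {
  if (!online) throw new Error("offline");
  delivered.push(action);
});

(async () => {
  await queue.submit({ type: "like", postId: 42 });
  online = true; // connectivity restored
  const flushed = await queue.flush();
  console.log(flushed, delivered.length); // prints: 1 1
})();
```

In a real PWA the flush would be triggered by the browser's `online` event or a service worker, and the queue would survive page reloads in IndexedDB.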

Product

  • [ ] Launch Ayema Pay (standalone wallet app)
  • [ ] Ayema Ride (ride-hailing integration)
  • [ ] Digital Library (textbooks and educational content)
  • [ ] Bill payments (airtime, data, electricity)
  • [ ] Expand to Ghana, Kenya, South Africa

Scale

  • [ ] Target: 500,000 active users by Q4 2026
  • [ ] 10,000+ active vendors
  • [ ] ₦100 million annual revenue
  • [ ] Partner with 50+ institutions for WiFi initiative

Final Thoughts

Building Ayema taught me that great architecture isn't about using the latest tech. It's about understanding your users' reality and making technical decisions that serve them.

We didn't use Kubernetes, microservices, or serverless functions. We used boring, proven technology and focused on solving real problems:

  • Slow internet? Optimize everything for data usage
  • Payment complexity? Build simple flows with clear feedback
  • Limited reach? Partner with institutions for physical access points

If you're building something similar, my advice:

  1. Start simple - Monolith > microservices until you know your bottlenecks
  2. Measure everything - You can't optimize what you don't measure
  3. Talk to users - Your assumptions about their needs are probably wrong
  4. Deploy fast - Perfect code that ships in 6 months loses to good code that ships in 2 weeks
  5. Optimize for iteration speed - Ability to fix things quickly > having no bugs

Let's Connect

I'm always happy to talk architecture, scaling challenges, or building for emerging markets.

Have questions about our architecture? Drop them in the comments. I'll answer every single one.


Tags: #webdev #architecture #scaling #laravel #php #reactjs #startup #africa #socialmedia #ecommerce

Top comments (4)

Caramelo da TI

Tks for sharing!

Michael Onoja

Appreciate it, Caramelo! Glad you found it worth sharing.

Rowland

This is both inspiring and educational. Thank you. I'd like to reach out and learn more about your platform. I've sent a follow on X as Rolex.devv

Michael Onoja

Thanks, Rowland. Just followed you back on X as @onoja55.

Would love to connect - always happy to chat about the tech stack, scaling challenges, or the Nigerian market. Feel free to DM or email me.