<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mufthi Ryanda</title>
    <description>The latest articles on DEV Community by Mufthi Ryanda (@mufthi_ryanda_84ea0d65262).</description>
    <link>https://dev.to/mufthi_ryanda_84ea0d65262</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2240166%2F8f36e322-57b8-4854-b90e-c5870253f005.jpg</url>
      <title>DEV Community: Mufthi Ryanda</title>
      <link>https://dev.to/mufthi_ryanda_84ea0d65262</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mufthi_ryanda_84ea0d65262"/>
    <language>en</language>
    <item>
      <title>The Laravel 12 Docker Blueprint I Wish I Had: Nginx + PHP-FPM, Small Images, Clean CI/CD, and DigitalOcean Private Registry</title>
      <dc:creator>Mufthi Ryanda</dc:creator>
      <pubDate>Sat, 20 Dec 2025 18:09:07 +0000</pubDate>
      <link>https://dev.to/mufthi_ryanda_84ea0d65262/the-laravel-12-docker-blueprint-i-wish-i-had-nginx-php-fpm-small-images-clean-cicd-and-4ai5</link>
      <guid>https://dev.to/mufthi_ryanda_84ea0d65262/the-laravel-12-docker-blueprint-i-wish-i-had-nginx-php-fpm-small-images-clean-cicd-and-4ai5</guid>
      <description>&lt;p&gt;This is the setup that turned my Laravel 12 project from “works on my machine” into “ready anytime”. The goal was simple: smaller builds, cleaner releases, and deployments that don’t need babysitting backed by CI/CD and a DigitalOcean private registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you’ll learn
&lt;/h2&gt;

&lt;p&gt;In this post, I’ll walk through the blueprint I used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Laravel 12 on PHP 8.3 in containers (Laravel 12 requires PHP 8.2 or newer).&lt;/li&gt;
&lt;li&gt;Serve the app with Nginx + PHP-FPM (clean separation of concerns).&lt;/li&gt;
&lt;li&gt;Build smaller images and keep releases tidy (so shipping feels repeatable).&lt;/li&gt;
&lt;li&gt;Push/pull images using a DigitalOcean private container registry (DOCR).&lt;/li&gt;
&lt;li&gt;Automate it with clean CI/CD so deploys don’t need babysitting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, let's create the Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Multi-stage build
FROM composer:2.8 AS composer-stage

FROM php:8.3-fpm-alpine

# Install dependencies
RUN apk add --no-cache \
    nginx \
    supervisor \
    libpng-dev \
    libjpeg-turbo-dev \
    freetype-dev \
    oniguruma-dev \
    libxml2-dev \
    libzip-dev \
    zip \
    unzip \
    &amp;amp;&amp;amp; docker-php-ext-configure gd --with-freetype --with-jpeg \
    &amp;amp;&amp;amp; docker-php-ext-install -j$(nproc) pdo_mysql mbstring exif pcntl bcmath gd opcache zip

# Copy Composer
COPY --from=composer-stage /usr/bin/composer /usr/bin/composer

WORKDIR /var/www/html

# Copy composer files
COPY composer.json composer.lock ./

# Install production dependencies WITHOUT running scripts
RUN composer install \
    --no-interaction \
    --no-dev \
    --prefer-dist \
    --no-scripts \
    --no-autoloader

# Copy application
COPY . .

# Finish composer setup
RUN composer dump-autoload --optimize --classmap-authoritative --no-scripts

# Set permissions
RUN mkdir -p storage/logs storage/framework/cache storage/framework/sessions storage/framework/views bootstrap/cache \
    &amp;amp;&amp;amp; touch database/database.sqlite \
    &amp;amp;&amp;amp; chown -R www-data:www-data storage bootstrap/cache database/database.sqlite \
    &amp;amp;&amp;amp; chmod -R 775 storage bootstrap/cache

# Nginx config
RUN echo 'server {' &amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '    listen 3000;' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '    root /var/www/html/public;' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '    index index.php;' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '    location / {' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '        try_files $uri $uri/ /index.php?$query_string;' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '    }' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '    location ~ \.php$ {' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '        fastcgi_pass 127.0.0.1:9000;' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '        fastcgi_index index.php;' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '        include fastcgi_params;' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '    }' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf &amp;amp;&amp;amp; \
    echo '}' &amp;gt;&amp;gt; /etc/nginx/http.d/default.conf

# Supervisor config
RUN echo '[supervisord]' &amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'nodaemon=true' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'user=root' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo '[program:php-fpm]' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'command=php-fpm -F' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'autostart=true' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'autorestart=true' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'stdout_logfile=/dev/stdout' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'stdout_logfile_maxbytes=0' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'stderr_logfile=/dev/stderr' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'stderr_logfile_maxbytes=0' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo '[program:nginx]' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'command=nginx -g "daemon off;"' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'autostart=true' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'autorestart=true' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'stdout_logfile=/dev/stdout' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'stdout_logfile_maxbytes=0' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'stderr_logfile=/dev/stderr' &amp;gt;&amp;gt; /etc/supervisord.conf &amp;amp;&amp;amp; \
    echo 'stderr_logfile_maxbytes=0' &amp;gt;&amp;gt; /etc/supervisord.conf

# Entrypoint script
RUN echo '#!/bin/sh' &amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo 'set -e' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo 'if [ ! -f .env ]; then' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo '  cp .env.example .env' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo 'fi' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo 'php artisan key:generate --force || true' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo 'php artisan config:cache || true' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo 'php artisan route:cache || true' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo 'php artisan view:cache || true' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    echo 'exec /usr/bin/supervisord -c /etc/supervisord.conf' &amp;gt;&amp;gt; /entrypoint.sh &amp;amp;&amp;amp; \
    chmod +x /entrypoint.sh

EXPOSE 3000

ENTRYPOINT ["/entrypoint.sh"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our Dockerfile builds a multi-stage Laravel image: it pulls Composer in from a lightweight builder stage, runs on php-fpm (Alpine), and installs Nginx, Supervisor, and the PHP extensions Laravel commonly needs. It then installs production Composer dependencies (cached via composer.lock), copies the app, optimizes the autoloader, fixes Laravel's storage/cache permissions, generates a basic Nginx vhost that serves /public and forwards PHP requests to FPM, and uses Supervisor to keep Nginx and PHP-FPM running together. Finally, the entrypoint ensures .env exists, warms up the caches, and boots Supervisor, with the container exposing port 3000.&lt;/p&gt;
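&lt;p&gt;One easy win for image size that the &lt;code&gt;COPY . .&lt;/code&gt; step relies on implicitly: add a &lt;code&gt;.dockerignore&lt;/code&gt; next to the Dockerfile so dev artifacts never enter the build context. A minimal sketch (adjust the entries to your project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .dockerignore
.git
node_modules
vendor
storage/logs/*
tests
.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Ignoring &lt;code&gt;vendor&lt;/code&gt; is safe here because composer install runs inside the image, and keeping &lt;code&gt;.env&lt;/code&gt; out prevents local secrets from being baked into a pushed image.&lt;/p&gt;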

&lt;p&gt;Next, let's build it locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t registry.digitalocean.com/hawkinstech/urecheapstore:latest .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you built the image under a different local name, retag it to match the registry path before pushing:&lt;br&gt;
&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag registry.digitalocean.com/hawkinstech/laravel-app:latest registry.digitalocean.com/hawkinstech/urecheapstore:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tag naming rules are strict (allowed characters, max length, etc.). DigitalOcean registry images follow this pattern: &lt;code&gt;registry.digitalocean.com/&amp;lt;registry&amp;gt;/&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For a local test, let's run the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 3000:3000 --name urecheapstore registry.digitalocean.com/hawkinstech/urecheapstore:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it runs cleanly locally, let's set up CI/CD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build and Push to DigitalOcean Registry

on:
  push:
    branches:
      - master

env:
  REGISTRY: registry.digitalocean.com/hawkinstech
  IMAGE_NAME: urecheapstore

jobs:
  build-and-push:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Log in to DigitalOcean Container Registry
        run: doctl registry login --expiry-seconds 600

      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
          cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
          cache-to: type=inline

      - name: Verify image push
        run: |
          echo "Image pushed successfully:"
          echo "${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest"
          echo "${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This workflow turns every push to master into a fresh container release: it checks out your code, boots Docker Buildx (so builds are faster and cache-friendly), installs doctl using your DIGITALOCEAN_ACCESS_TOKEN, then logs in to DigitalOcean Container Registry (DOCR) with short-lived credentials. After that, it builds the image and pushes two tags: latest for the most recent deploy, and ${{ github.sha }} for an immutable, traceable version. It also reuses the previously pushed latest layer cache to speed up future builds.&lt;/p&gt;

&lt;p&gt;Save this as &lt;code&gt;.github/workflows/build-push.yml&lt;/code&gt;. On every push to master, it builds the Docker image and pushes it to registry.digitalocean.com/hawkinstech/urecheapstore with both latest and the commit SHA tag.&lt;/p&gt;
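&lt;p&gt;To confirm a push landed, you can list the repository's tags from your machine (assuming doctl is installed and authenticated locally):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;doctl registry login
doctl registry repository list-tags urecheapstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;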

&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;At this point, you’ve got a repeatable flow: build the same Laravel image every time, ship it to &lt;strong&gt;DigitalOcean Container Registry&lt;/strong&gt;, and let CI/CD do the boring work for you, so releases become predictable instead of stressful. The only “manual” part left is setting up the credentials once, and then you can forget about it and just push code.&lt;/p&gt;

&lt;p&gt;If you want the missing setup steps, I split them into two short follow-ups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Article 1:&lt;/strong&gt; how to add &lt;code&gt;DIGITALOCEAN_ACCESS_TOKEN&lt;/code&gt; as a &lt;strong&gt;GitHub Actions secret&lt;/strong&gt; (Repo → Settings → &lt;em&gt;Secrets and variables&lt;/em&gt; → Actions → &lt;em&gt;New repository secret&lt;/em&gt;). (&lt;a href="https://docs.github.com/actions/security-guides/using-secrets-in-github-actions" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Article 2:&lt;/strong&gt; how to generate a &lt;strong&gt;DigitalOcean Personal Access Token&lt;/strong&gt; (Control Panel → API → &lt;em&gt;Personal access tokens&lt;/em&gt; → &lt;em&gt;Generate New Token&lt;/em&gt;). (&lt;a href="https://docs.digitalocean.com/reference/api/create-personal-access-token/" rel="noopener noreferrer"&gt;DigitalOcean Docs&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Bonus tip: the &lt;code&gt;doctl registry login --expiry-seconds 600&lt;/code&gt; approach is nice because it uses &lt;strong&gt;short-lived registry credentials&lt;/strong&gt; during the workflow run. (&lt;a href="https://docs.digitalocean.com/reference/doctl/reference/registry/login/" rel="noopener noreferrer"&gt;DigitalOcean Docs&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>devops</category>
      <category>laravel</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Scaling to 300K+ Records Daily: How We Handle High Volume Data Processing with Lumen &amp; MySQL</title>
      <dc:creator>Mufthi Ryanda</dc:creator>
      <pubDate>Sat, 27 Sep 2025 15:30:31 +0000</pubDate>
      <link>https://dev.to/mufthi_ryanda_84ea0d65262/scaling-to-300k-records-daily-how-we-handle-high-volume-data-processing-with-lumen-mysql-1ipo</link>
      <guid>https://dev.to/mufthi_ryanda_84ea0d65262/scaling-to-300k-records-daily-how-we-handle-high-volume-data-processing-with-lumen-mysql-1ipo</guid>
      <description>&lt;p&gt;&lt;em&gt;Building a lean, mean data processing machine that handles 100 I/O operations per second without breaking a sweat&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When your application suddenly needs to process hundreds of thousands of records daily with peak loads hitting 100 I/O operations per second, you quickly learn that standard CRUD operations won't cut it. Here's how we transformed our Lumen application into a high-performance data-processing powerhouse.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;Our monitoring system processes 300,000+ data records daily, generating complex reports and exports while maintaining sub-second response times. The system handles everything from real-time aggregations to massive CSV exports, all while keeping memory usage under control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy 1: Database Schema Optimization
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;JSON Columns with Generated Virtual Columns&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of creating multiple tables with complex joins, we leveraged MySQL's JSON capabilities with a twist:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Migration: Create virtual columns for frequently queried JSON fields
Schema::table('data_xxx', function (Blueprint $table) {
    $table-&amp;gt;string('feedback_extracted')-&amp;gt;virtualAs(
        "JSON_UNQUOTE(JSON_EXTRACT(content_data, '$.feedback'))"
    )-&amp;gt;index();

    $table-&amp;gt;decimal('amount_extracted', 15, 2)-&amp;gt;virtualAs(
        "CAST(JSON_EXTRACT(content_data, '$.amount') AS DECIMAL(15,2))"
    )-&amp;gt;index();
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why this works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Virtual columns are computed on the fly but can be indexed&lt;/li&gt;
&lt;li&gt;Eliminates need for complex joins&lt;/li&gt;
&lt;li&gt;Maintains data flexibility while enabling fast queries&lt;/li&gt;
&lt;/ul&gt;
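&lt;p&gt;For reference, the migration above corresponds roughly to this MySQL DDL (illustrative: the column type and index name here are assumptions based on Laravel's defaults):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Roughly what the migration generates (MySQL 5.7+ supports
-- secondary indexes on VIRTUAL generated columns)
ALTER TABLE data_xxx
  ADD COLUMN feedback_extracted VARCHAR(255)
    GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(content_data, '$.feedback'))) VIRTUAL,
  ADD INDEX data_xxx_feedback_extracted_index (feedback_extracted);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;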

&lt;p&gt;&lt;strong&gt;Strategic Indexing&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Composite indexes for common query patterns
Schema::table('data_xxx', function (Blueprint $table) {
    $table-&amp;gt;index(['branch', 'visit_date', 'visit_type']);
    $table-&amp;gt;index(['personnel_id', 'visit_date', 'status']);
    $table-&amp;gt;index(['visit_type', 'status', 'feedback_extracted']);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Strategy 2: Query Optimization Patterns
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Avoiding N+1 with Smart Aggregation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of loading relations, we aggregate at the database level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function getDataSummary($filters)
{
    return DB::table('data_xxx')
        -&amp;gt;select([
            'branch',
            DB::raw('SUM(CASE WHEN status = "COMPLETED" THEN 1 ELSE 0 END) as completed'),
            DB::raw('SUM(CASE WHEN status = "PLANNED" THEN 1 ELSE 0 END) as planned'),
            DB::raw('AVG(CAST(JSON_EXTRACT(content_data, "$.score") AS DECIMAL)) as avg_score')
        ])
        -&amp;gt;where('visit_date', $filters['date'])
        -&amp;gt;groupBy('branch')
        -&amp;gt;get();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Generator-Powered Data Processing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For large datasets, we use PHP generators to maintain constant memory usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function processLargeDataset($filters): \Generator
{
    $query = DB::table('data_xxx')
        -&amp;gt;where('visit_date', '&amp;gt;=', $filters['start_date'])
        -&amp;gt;where('visit_date', '&amp;lt;=', $filters['end_date'])
        -&amp;gt;orderBy('id');

    foreach ($query-&amp;gt;lazy(2000) as $record) {
        yield $this-&amp;gt;transformRecord($record);
    }
}

// Usage
foreach ($this-&amp;gt;processLargeDataset($filters) as $processedRecord) {
    // Memory stays constant regardless of dataset size
    $this-&amp;gt;handleRecord($processedRecord);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Strategy 3: High Performance Export System
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Memory-Efficient CSV Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our export system handles massive datasets while keeping memory usage under 50MB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function exportToCSV($filters): string
{
    // Create temporary file
    $tempFile = tmpfile();
    $tempPath = stream_get_meta_data($tempFile)['uri'];

    // Write headers
    fputcsv($tempFile, ['Date', 'Branch', 'Personnel', 'Customer', 'Result']);

    // Stream data in chunks
    foreach ($this-&amp;gt;getExportData($filters) as $record) {
        fputcsv($tempFile, [
            $record['visit_date'],
            $record['branch_name'],
            $record['personnel_name'],
            $record['customer_name'],
            $record['visit_result']
        ]);
    }

    // Upload to storage
    $finalPath = "exports/data_" . date('Y-m-d_H-i-s') . ".csv";
    Storage::put($finalPath, fopen($tempPath, 'r'));

    fclose($tempFile);
    return $finalPath;
}

private function getExportData($filters): \Generator
{
    $query = DB::table('data_xxx')
        -&amp;gt;select([
            'visit_date', 'branch_name', 'personnel_name', 
            'customer_name', 'visit_result'
        ])
        -&amp;gt;where('visit_date', $filters['date'])
        -&amp;gt;orderBy('id');

    foreach ($query-&amp;gt;lazy(2000) as $record) {
        yield (array) $record;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Background Processing with Chunked Operations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For time-intensive operations, we use job queues with intelligent chunking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function processInBackground($requestData)
{
    // Create tracking record
    $exportLog = $this-&amp;gt;createExportLog($requestData);

    // Queue the processing job
    Queue::push(new ProcessDataExport($exportLog-&amp;gt;id, $requestData));

    return $exportLog;
}

// In the job class
public function handle()
{
    $startTime = microtime(true);

    foreach ($this-&amp;gt;getDataInChunks() as $chunk) {
        $this-&amp;gt;processChunk($chunk);

        // Prevent memory leaks and timeouts
        if (microtime(true) - $startTime &amp;gt; 300) { // 5 minutes
            Queue::push(new ProcessDataExport($this-&amp;gt;logId, $this-&amp;gt;remainingData));
            return;
        }
    }

    $this-&amp;gt;markAsCompleted();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Strategy 4: Caching and Optimization
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Smart Cache Invalidation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function getCachedSummary($filters)
{
    $cacheKey = 'summary_' . md5(serialize($filters));

    // For today's data, cache for 30 minutes
    // For historical data, cache for 24 hours
    $ttl = $filters['date'] === date('Y-m-d') ? 1800 : 86400;

    return Cache::remember($cacheKey, $ttl, function () use ($filters) {
        return $this-&amp;gt;generateSummary($filters);
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Performance Results
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before optimization:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory usage: 500MB+ for large exports&lt;/li&gt;
&lt;li&gt;Export time: 5+ minutes for 100K records&lt;/li&gt;
&lt;li&gt;Database CPU: 80%+ during peak hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;After optimization:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory usage: &amp;lt;50MB consistently&lt;/li&gt;
&lt;li&gt;Export time: 30 seconds for 100K records&lt;/li&gt;
&lt;li&gt;Database CPU: &amp;lt;30% during peak hours&lt;/li&gt;
&lt;li&gt;Response time: &amp;lt;200ms for most queries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;JSON columns + virtual indexes eliminate complex joins while maintaining query performance&lt;/li&gt;
&lt;li&gt;PHP generators keep memory usage constant regardless of dataset size&lt;/li&gt;
&lt;li&gt;Strategic chunking prevents timeouts and resource exhaustion&lt;/li&gt;
&lt;li&gt;Proper indexing strategy is crucial for high volume operations&lt;/li&gt;
&lt;li&gt;Stream processing beats loading everything into memory&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;The beauty of this approach is its simplicity: no complex technology, no exotic databases, just well-optimized PHP and MySQL doing what they do best.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>php</category>
      <category>mysql</category>
      <category>programming</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Building a Google Docs Package in Go: From API Setup to Document Management</title>
      <dc:creator>Mufthi Ryanda</dc:creator>
      <pubDate>Sun, 15 Jun 2025 10:04:24 +0000</pubDate>
      <link>https://dev.to/mufthi_ryanda_84ea0d65262/building-a-google-docs-package-in-go-from-api-setup-to-document-management-2oka</link>
      <guid>https://dev.to/mufthi_ryanda_84ea0d65262/building-a-google-docs-package-in-go-from-api-setup-to-document-management-2oka</guid>
      <description>&lt;p&gt;&lt;strong&gt;Preface&lt;/strong&gt;&lt;br&gt;
Google's document APIs are powerful but can be intimidating to work with directly. Between authentication flows, service accounts, and API quirks, you end up writing a lot of boilerplate just to read or write documents. I built this package after getting tired of copying the same Google Docs integration code across projects. It wraps the complexity behind simple functions while handling authentication and common operations cleanly. This guide walks through the complete setup process and implementation, including the Google Cloud configuration that trips up most developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Cloud Setup&lt;/strong&gt;&lt;br&gt;
Start by creating a fresh Google Cloud project. Head to the Google Cloud Console and click "Create or select a project", then "New project". Give it a name and hit create.&lt;br&gt;
Next, enable the APIs you'll need. Navigate to "APIs &amp;amp; Services" from the hamburger menu, then click "Enable APIs and services". Search for and enable both "Google Drive API" and "Google Docs API".&lt;br&gt;
Now create a service account for authentication. Go to "IAM &amp;amp; Admin" → "Service Accounts" → "Create Service Account". Fill in a name like "google-docs-service" and add a description. For permissions, select the "Owner" role to keep things simple during development (scope it down before production).&lt;br&gt;
After creating the service account, click on it and go to the "Keys" tab. Click "Add Key" → "Create new key" and choose JSON format. This downloads your credential file; keep it secure, since it can't be downloaded again if lost.&lt;br&gt;
Save this JSON file in your project and you're ready to start coding.&lt;/p&gt;
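&lt;p&gt;Before wrapping anything in the package, it helps to see the raw client calls it builds on. Here's a minimal sketch using the official &lt;code&gt;google.golang.org/api&lt;/code&gt; client, assuming the key was saved as &lt;code&gt;credentials.json&lt;/code&gt; (the filename and document ID are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "context"
    "log"

    "google.golang.org/api/docs/v1"
    "google.golang.org/api/option"
)

func main() {
    ctx := context.Background()

    // Authenticate with the service-account JSON key downloaded above.
    srv, err := docs.NewService(ctx, option.WithCredentialsFile("credentials.json"))
    if err != nil {
        log.Fatalf("unable to create Docs client: %v", err)
    }

    // Fetch a document the service account has access to.
    doc, err := srv.Documents.Get("YOUR_DOCUMENT_ID").Do()
    if err != nil {
        log.Fatalf("unable to read document: %v", err)
    }
    log.Printf("document title: %s", doc.Title)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that a service account can only read documents that have been shared with its email address, so share your test document with the account before running this.&lt;/p&gt;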

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayrczieedaagmynselv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayrczieedaagmynselv5.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ngvgdkiys70lm260tv7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ngvgdkiys70lm260tv7.png" alt="Image description" width="800" height="561"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fweganhmeu76k9sniqc1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fweganhmeu76k9sniqc1e.png" alt="Image description" width="800" height="305"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ye767l98rg1og9xeswi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ye767l98rg1og9xeswi.png" alt="Image description" width="800" height="302"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh2uq2yohggupge254x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh2uq2yohggupge254x7.png" alt="Image description" width="800" height="1193"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wd6fd60u6o6w9bypo6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wd6fd60u6o6w9bypo6b.png" alt="Image description" width="800" height="228"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8r5g97o78x08g9wv2rs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8r5g97o78x08g9wv2rs.png" alt="Image description" width="800" height="252"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2lzmjjc4nzbz3cn7wnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2lzmjjc4nzbz3cn7wnu.png" alt="Image description" width="800" height="1193"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frivw3b5rgrofrdmrjy9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frivw3b5rgrofrdmrjy9n.png" alt="Image description" width="800" height="228"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfhzbvhnukvej60s98fg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfhzbvhnukvej60s98fg.png" alt="Image description" width="800" height="252"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0spwpf04371l3wrqs4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0spwpf04371l3wrqs4v.png" alt="Image description" width="800" height="182"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkb8rf55mury2m6vnzik6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkb8rf55mury2m6vnzik6.png" alt="Image description" width="800" height="226"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetbpbe6p1ptu4j09g40r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetbpbe6p1ptu4j09g40r.png" alt="Image description" width="800" height="227"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj00rlq5lwlplk2cdo25w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj00rlq5lwlplk2cdo25w.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6fippukvgj6n9p706g0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6fippukvgj6n9p706g0.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jk36xw37wu69zexo70i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jk36xw37wu69zexo70i.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F832l904uhxkcjv7ljr28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F832l904uhxkcjv7ljr28.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdi4yma1fh1gygt923p5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdi4yma1fh1gygt923p5e.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Package Implementation&lt;/strong&gt;&lt;br&gt;
Our Google Docs package wraps both the Docs and Drive APIs into a single clean interface. The core struct DocsPkg holds authenticated clients for both services, which we need since document operations span both APIs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package google

import (
    "context"
    "fmt"
    "golang.org/x/oauth2/google"
    "google.golang.org/api/docs/v1"
    "google.golang.org/api/drive/v3"
    "google.golang.org/api/option"
    "io"
    "os"
)

// PlaceholderData contains all placeholder values to be replaced
type PlaceholderData map[string]string

// DocsPkg wraps the Google Docs and Drive services
type DocsPkg struct {
    ctx         context.Context
    docsClient  *docs.Service
    driveClient *drive.Service
}

// NewDocsService creates a new service with Google API authentication
func NewDocsService(ctx context.Context, credentialsFile string) (*DocsPkg, error) {
    // Read credentials file
    credBytes, err := os.ReadFile(credentialsFile)
    if err != nil {
        return nil, fmt.Errorf("failed to read credentials file: %w", err)
    }

    // Create config from credentials
    config, err := google.JWTConfigFromJSON(credBytes,
        docs.DocumentsScope,
        drive.DriveScope)
    if err != nil {
        return nil, fmt.Errorf("failed to create JWT config: %w", err)
    }

    // Create HTTP client with the config
    client := config.Client(ctx)

    // Initialize the Docs service
    docsService, err := docs.NewService(ctx, option.WithHTTPClient(client))
    if err != nil {
        return nil, fmt.Errorf("failed to create Docs service: %w", err)
    }

    // Initialize the Drive service
    driveService, err := drive.NewService(ctx, option.WithHTTPClient(client))
    if err != nil {
        return nil, fmt.Errorf("failed to create Drive service: %w", err)
    }

    return &amp;amp;DocsPkg{
        ctx:         ctx,
        docsClient:  docsService,
        driveClient: driveService,
    }, nil
}

// CopyTemplate creates a copy of the template document
func (s *DocsPkg) CopyTemplate(templateID string, documentName string) (string, error) {
    // Create a copy of the template
    file := &amp;amp;drive.File{
        Name: documentName,
    }

    // Execute the copy operation
    copiedFile, err := s.driveClient.Files.Copy(templateID, file).Do()
    if err != nil {
        return "", fmt.Errorf("failed to copy template: %w", err)
    }

    return copiedFile.Id, nil
}

// ReplacePlaceholders replaces all placeholders in the document
func (s *DocsPkg) ReplacePlaceholders(documentID string, data PlaceholderData) error {
    // Create a batch update request
    var requests []*docs.Request

    // Add a replace text request for each placeholder
    for placeholder, value := range data {
        requests = append(requests, &amp;amp;docs.Request{
            ReplaceAllText: &amp;amp;docs.ReplaceAllTextRequest{
                ContainsText: &amp;amp;docs.SubstringMatchCriteria{
                    Text:      "{{" + placeholder + "}}",
                    MatchCase: true,
                },
                ReplaceText: value,
            },
        })
    }

    // Execute the batch update
    _, err := s.docsClient.Documents.BatchUpdate(documentID, &amp;amp;docs.BatchUpdateDocumentRequest{
        Requests: requests,
    }).Do()

    if err != nil {
        return fmt.Errorf("failed to replace placeholders: %w", err)
    }

    return nil
}

// ExportToPDF exports the document as PDF
func (s *DocsPkg) ExportToPDF(documentID string) ([]byte, error) {
    // Export the file as PDF
    resp, err := s.driveClient.Files.Export(documentID, "application/pdf").Download()
    if err != nil {
        return nil, fmt.Errorf("failed to export as PDF: %w", err)
    }
    defer resp.Body.Close()

    // Read the response
    pdfBytes, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read PDF content: %w", err)
    }

    return pdfBytes, nil
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The NewDocsService function handles the messy authentication setup. It reads your JSON credentials, creates a JWT config with the proper scopes, and initializes both API clients. This gives you everything you need for document operations in one go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing the Package&lt;/strong&gt;&lt;br&gt;
Here's a simple test to verify our Google Docs package works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TestGoogleDocsPackage(t *testing.T) {
    ctx := context.Background()

    // Initialize the service with your credentials
    docsService, err := google.NewDocsService(ctx, "path/to/your/credentials.json")
    if err != nil {
        t.Fatalf("Failed to create docs service: %v", err)
    }

    // Template document ID (create a test document in Google Drive first)
    templateID := "your-template-document-id"

    // Copy the template
    newDocID, err := docsService.CopyTemplate(templateID, "Test Document Copy")
    if err != nil {
        t.Fatalf("Failed to copy template: %v", err)
    }
    t.Logf("Created document: %s", newDocID)

    // Replace placeholders
    placeholders := google.PlaceholderData{
        "NAME":    "John Doe",
        "DATE":    "June 15, 2025",
        "COMPANY": "Test Corp",
    }

    err = docsService.ReplacePlaceholders(newDocID, placeholders)
    if err != nil {
        t.Fatalf("Failed to replace placeholders: %v", err)
    }

    // Export to PDF
    pdfBytes, err := docsService.ExportToPDF(newDocID)
    if err != nil {
        t.Fatalf("Failed to export PDF: %v", err)
    }

    if len(pdfBytes) == 0 {
        t.Fatal("PDF export returned empty data")
    }

    t.Logf("PDF exported successfully, size: %d bytes", len(pdfBytes))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This test demonstrates the complete workflow: copying a template document, replacing placeholder text with actual values, and exporting the result as PDF. Create a test document in Google Drive with placeholders like {{NAME}} and {{DATE}} to see the replacement in action.&lt;/p&gt;
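
&lt;p&gt;Outside of a test, the same workflow can be wired together and the PDF written to disk. Here's a minimal sketch (the credentials path, template ID, and output filename are placeholders, just like in the test above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func GenerateContract() error {
    ctx := context.Background()

    // Authenticate once and reuse the service for all operations
    docsService, err := google.NewDocsService(ctx, "path/to/your/credentials.json")
    if err != nil {
        return err
    }

    // Copy the template, fill it in, export it
    docID, err := docsService.CopyTemplate("your-template-document-id", "Contract - John Doe")
    if err != nil {
        return err
    }

    if err := docsService.ReplacePlaceholders(docID, google.PlaceholderData{
        "NAME": "John Doe",
        "DATE": "June 15, 2025",
    }); err != nil {
        return err
    }

    pdfBytes, err := docsService.ExportToPDF(docID)
    if err != nil {
        return err
    }

    // 0644: owner read/write, everyone else read
    return os.WriteFile("contract.pdf", pdfBytes, 0644)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;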

</description>
    </item>
    <item>
      <title>Implementing Azure Blob Storage Package in Go: A Custom Abstraction Layer</title>
      <dc:creator>Mufthi Ryanda</dc:creator>
      <pubDate>Sun, 15 Jun 2025 09:23:44 +0000</pubDate>
      <link>https://dev.to/mufthi_ryanda_84ea0d65262/implementing-azure-blob-storage-package-in-go-a-custom-abstraction-layer-1k0i</link>
      <guid>https://dev.to/mufthi_ryanda_84ea0d65262/implementing-azure-blob-storage-package-in-go-a-custom-abstraction-layer-1k0i</guid>
      <description>&lt;p&gt;&lt;strong&gt;Preface&lt;/strong&gt;&lt;br&gt;
Working with Azure's blob storage SDK directly can be messy. You're juggling connection strings, handling cryptic errors, and writing the same boilerplate code over and over. After building several Go applications that needed file storage, I got tired of this repetitive dance.&lt;br&gt;
So I built a clean abstraction layer that turns Azure storage operations into simple, readable functions. No more wrestling with SDK quirks or debugging obscure error messages. Just clean code that works.&lt;br&gt;
This isn't another tutorial that copies Microsoft's docs. It's a real-world implementation that I actually use in production, complete with proper error handling, SAS URL generation, and comprehensive testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retrieving Your Azure Credentials&lt;/strong&gt;&lt;br&gt;
Before we dive into code, you'll need your Azure storage credentials. Head to your Azure portal and navigate to your storage account. Look for "Security + networking" in the left sidebar and click on "Access keys".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3lvllkwf0vtn9ek3803.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3lvllkwf0vtn9ek3803.png" alt="Image description" width="560" height="1548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmij1d1tfvqdq33qx8dse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmij1d1tfvqdq33qx8dse.png" alt="Image description" width="554" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you'll find two keys (key1 and key2) along with their connection strings. Grab either key and its corresponding connection string - that's all you need. The connection string contains everything: your account name, access key, and endpoints bundled into one neat package.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmff41sjigrct3x5izbrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmff41sjigrct3x5izbrx.png" alt="Image description" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the connection string and keep it safe. We'll use this to authenticate our custom package with Azure's storage services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting Up the Azure SDK&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, install the Azure SDK for Go:&lt;br&gt;
&lt;code&gt;go get github.com/Azure/azure-sdk-for-go/sdk/storage/azblob&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now let's build our abstraction layer. The core idea is wrapping Azure's client with our own struct that provides cleaner methods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package azure

import (
    "context"
    "errors"
    "fmt"
    "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
    "rje-be-golang/config"
)

// Common errors
var (
    ErrInvalidConnectionString = errors.New("invalid connection string")
    ErrContainerNotFound       = errors.New("container not found")
    ErrBlobNotFound            = errors.New("blob not found")
    ErrInvalidInput            = errors.New("invalid input parameters")
)

type AzureStorageConfig struct {
    Container        string
    AccessKey        string
    ConnectionString string
    AccountName      string
}

// Client represents an Azure Storage client
type Client struct {
    client             *azblob.Client
    azureStorageConfig config.AzureStorageConfig
}

// Config holds configuration for Azure Storage client
type Config struct {
    ConnectionString string
}

// NewClient creates a new Azure Storage client
func NewClient(cfg Config, azureStorageConfig config.AzureStorageConfig) (*Client, error) {
    if cfg.ConnectionString == "" {
        return nil, ErrInvalidConnectionString
    }

    client, err := azblob.NewClientFromConnectionString(cfg.ConnectionString, nil)
    if err != nil {
        return nil, fmt.Errorf("creating azure storage client: %w", err)
    }

    return &amp;amp;Client{
        client:             client,
        azureStorageConfig: azureStorageConfig,
    }, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Core Operations&lt;/strong&gt;&lt;br&gt;
Here are the main functions that make working with Azure storage simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// CreateContainer creates a new container if it doesn't exist
func (c *Client) CreateContainer(ctx context.Context, containerName string) error {
    if containerName == "" {
        return ErrInvalidInput
    }

    _, err := c.client.CreateContainer(ctx, containerName, nil)
    if err != nil {
        return fmt.Errorf("creating container %s: %w", containerName, err)
    }
    return nil
}

// ContainerExists checks if a container exists
func (c *Client) ContainerExists(ctx context.Context, containerName string) (bool, error) {
    if containerName == "" {
        return false, ErrInvalidInput
    }

    containers, err := c.ListContainers(ctx)
    if err != nil {
        return false, err
    }

    for _, name := range containers {
        if name == containerName {
            return true, nil
        }
    }
    return false, nil
}

// UploadBuffer uploads a byte buffer to blob storage
func (c *Client) UploadBuffer(ctx context.Context, containerName, blobPath string, buffer []byte) error {
    if containerName == "" || blobPath == "" || buffer == nil {
        return ErrInvalidInput
    }

    _, err := c.client.UploadBuffer(ctx, containerName, blobPath, buffer, nil)
    if err != nil {
        return fmt.Errorf("uploading blob %s: %w", blobPath, err)
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These methods wrap the Azure SDK calls with proper validation and error handling. Notice how we validate inputs upfront and provide meaningful error messages instead of letting Azure's cryptic errors bubble up.&lt;/p&gt;
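
&lt;p&gt;The test in the next section also calls GetFileContent, DeleteFile, and DeleteContainer, which aren't shown above. They follow the same validate, wrap, return pattern; here's a sketch of how they might look, assuming the azblob client's DownloadStream, DeleteBlob, and DeleteContainer methods (and an io import added to the imports above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// GetFileContent downloads a blob and returns its raw bytes
func (c *Client) GetFileContent(ctx context.Context, containerName, blobPath string) ([]byte, error) {
    if containerName == "" || blobPath == "" {
        return nil, ErrInvalidInput
    }

    resp, err := c.client.DownloadStream(ctx, containerName, blobPath, nil)
    if err != nil {
        return nil, fmt.Errorf("downloading blob %s: %w", blobPath, err)
    }
    defer resp.Body.Close()

    return io.ReadAll(resp.Body)
}

// DeleteFile removes a single blob
func (c *Client) DeleteFile(ctx context.Context, containerName, blobPath string) error {
    if containerName == "" || blobPath == "" {
        return ErrInvalidInput
    }

    _, err := c.client.DeleteBlob(ctx, containerName, blobPath, nil)
    if err != nil {
        return fmt.Errorf("deleting blob %s: %w", blobPath, err)
    }
    return nil
}

// DeleteContainer removes an entire container
func (c *Client) DeleteContainer(ctx context.Context, containerName string) error {
    if containerName == "" {
        return ErrInvalidInput
    }

    _, err := c.client.DeleteContainer(ctx, containerName, nil)
    if err != nil {
        return fmt.Errorf("deleting container %s: %w", containerName, err)
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;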

&lt;p&gt;&lt;strong&gt;Testing the Package&lt;/strong&gt;&lt;br&gt;
Here's how to test our custom Azure package in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TestAzureStorage(t *testing.T) {
    connectionString := "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=your-access-key;EndpointSuffix=core.windows.net"

    // Create Azure storage config
    azureStorageConfig := config.AzureStorageConfig{
        Container:        "mycontainer",
        AccessKey:        "your-access-key",
        ConnectionString: connectionString,
        AccountName:      "youraccount",
    }

    // Create custom Azure client
    azureConfig := azure.Config{
        ConnectionString: connectionString,
    }

    client, err := azure.NewClient(azureConfig, azureStorageConfig)
    if err != nil {
        t.Fatalf("Failed to create client: %v", err)
    }

    ctx := context.TODO()

    // Test container creation - check if exists first
    exists, err := client.ContainerExists(ctx, "mycontainer")
    if err != nil {
        t.Fatalf("Failed to check container existence: %v", err)
    }

    if !exists {
        err = client.CreateContainer(ctx, "mycontainer")
        if err != nil {
            t.Fatalf("Failed to create container: %v", err)
        }
        t.Log("Container created successfully")
    } else {
        t.Log("Container already exists, skipping creation")
    }

    // Test file upload
    data := []byte("Hello Azure!")
    err = client.UploadBuffer(ctx, "mycontainer", "folder1/subfolder/myfile.txt", data)
    if err != nil {
        t.Fatalf("Failed to upload file: %v", err)
    }

    // Test file existence and content retrieval
    content, err := client.GetFileContent(ctx, "mycontainer", "folder1/subfolder/myfile.txt")
    if err != nil {
        t.Fatalf("Failed to get file content: %v", err)
    }

    if string(content) != "Hello Azure!" {
        t.Fatalf("Expected 'Hello Azure!', got '%s'", string(content))
    }

    // Cleanup
    client.DeleteFile(ctx, "mycontainer", "folder1/subfolder/myfile.txt")
    client.DeleteContainer(ctx, "mycontainer")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This test covers the complete lifecycle: container management, file upload, content verification, and cleanup. It demonstrates how much cleaner our abstraction is compared to working directly with the Azure SDK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5n24twaxlcca107v317.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5n24twaxlcca107v317.png" alt="Image description" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrapping Up&lt;/strong&gt;&lt;br&gt;
Building this Azure storage abstraction has saved me countless hours of debugging and code duplication. The package handles error checking, provides consistent interfaces, and makes Azure storage operations readable and maintainable.&lt;br&gt;
You can extend this further by adding retry logic, bulk operations, or streaming uploads for large files. The foundation is solid, and adding new features becomes straightforward when you have clean abstractions in place.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Invisible Battle: When Multiple Users Fight Over Your Last Product 🥊 [Race Condition on Database Level Explained]</title>
      <dc:creator>Mufthi Ryanda</dc:creator>
      <pubDate>Wed, 22 Jan 2025 12:14:53 +0000</pubDate>
      <link>https://dev.to/mufthi_ryanda_84ea0d65262/the-invisible-battle-when-multiple-users-fight-over-your-last-product-race-condition-on-3jae</link>
      <guid>https://dev.to/mufthi_ryanda_84ea0d65262/the-invisible-battle-when-multiple-users-fight-over-your-last-product-race-condition-on-3jae</guid>
      <description>&lt;p&gt;Recently, we discussed concurrency, its usage, and benefits, as well as how it differs from sequential processing. We also covered concurrency implementation using Goroutines. When implementing concurrent processes and Goroutines, we encountered the Race Condition problem, which can cause serious issues in the future if not handled properly. We also discussed solutions for handling Race Conditions in Goroutines. You can read more about it at &lt;a href="https://dev.to/mufthi_ryanda_84ea0d65262/your-go-code-has-a-hidden-time-bomb-race-conditions-explained-27pn"&gt;Dev.to Post&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even after handling race conditions at the Go language level, we can still encounter similar problems at the database level. Why? This happens because when connecting to a database, there's a possibility that our applications access the same database, table, and specific column simultaneously.&lt;/p&gt;

&lt;p&gt;Here's the breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consider a concert ticket booking system where JKT48 will perform in Jakarta with only 5 VIP seats remaining&lt;/li&gt;
&lt;li&gt;The ticketing system shows these 5 seats as available to all users browsing the website&lt;/li&gt;
&lt;li&gt;Twenty excited fans find these seats at exactly 2:00 PM when ticket sales open&lt;/li&gt;
&lt;li&gt;Each fan selects a seat and clicks "Purchase" at nearly the same moment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If these purchase requests are processed concurrently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The system might check seat availability for all requests simultaneously 💫&lt;/li&gt;
&lt;li&gt;Each check would show that seats are still available ✔&lt;/li&gt;
&lt;li&gt;The system would proceed with all purchase attempts 📒&lt;/li&gt;
&lt;li&gt;Multiple fans might end up being assigned the same seat ❤️‍🩹&lt;/li&gt;
&lt;li&gt;The database would try to record more bookings than available 🧯&lt;/li&gt;
&lt;li&gt;This could lead to various issues like double bookings, incorrect seat assignments, or system errors 🤯&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can probably guess what would happen next and what your boss would do to you! 💀&lt;br&gt;
But don't worry - this is where ACID compliance comes to the rescue ✨. Most OLTP databases like PostgreSQL and MySQL implement ACID compliance, and one of its core principles is 'I' (Isolation). The Isolation principle helps us prevent database race condition problems.&lt;/p&gt;

&lt;p&gt;Let's demonstrate this concept. We'll continue with our e-commerce example, previously discussed in this &lt;a href="https://dev.to/mufthi_ryanda_84ea0d65262/your-go-code-has-a-hidden-time-bomb-race-conditions-explained-27pn"&gt;Dev.to post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, let's set up our database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE WRITING;

CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    stock INT NOT NULL CHECK (stock &amp;gt;= 0),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO products (name, stock)
VALUES ('J.Co - Snow White Donuts', 10000);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's set up a simple database connection using Go's standard library. We'll use PostgreSQL in this example, but don't worry - this approach works with MySQL too. You can simply switch the driver while keeping the same methodology:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func PgSQLConnection() *sql.DB {
    connStr := "host=localhost port=5432 user=yourusername password=yourpassword dbname=writing sslmode=disable"

    // Open connection
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        panic(err)
    }

    // Set connection pool settings
    db.SetMaxOpenConns(95)                 // Maximum number of open connections
    db.SetMaxIdleConns(10)                 // Maximum number of idle connections
    db.SetConnMaxLifetime(5 * time.Minute) // Maximum lifetime of a connection
    db.SetConnMaxIdleTime(5 * time.Minute) // Maximum idle time of a connection

    // Test connection
    err = db.Ping()
    if err != nil {
        panic(err)
    }

    return db
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code, you can adjust the configuration based on your database settings. Note that we've implemented Connection Pooling here. We'll discuss the advantages of Connection Pooling later - for now, let's continue with this implementation.&lt;/p&gt;
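
&lt;p&gt;Since the approach carries over to MySQL, here's what the same connection helper might look like with the go-sql-driver/mysql driver (a sketch; the DSN values are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func MySQLConnection() *sql.DB {
    // DSN format: user:password@tcp(host:port)/dbname
    dsn := "yourusername:yourpassword@tcp(localhost:3306)/writing?parseTime=true"

    // Requires importing _ "github.com/go-sql-driver/mysql"
    db, err := sql.Open("mysql", dsn)
    if err != nil {
        panic(err)
    }

    // Same pool settings as the PostgreSQL version
    db.SetMaxOpenConns(95)
    db.SetMaxIdleConns(10)
    db.SetConnMaxLifetime(5 * time.Minute)
    db.SetConnMaxIdleTime(5 * time.Minute)

    // Test connection
    if err = db.Ping(); err != nil {
        panic(err)
    }

    return db
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;One caveat: MySQL uses &lt;code&gt;?&lt;/code&gt; placeholders instead of PostgreSQL's &lt;code&gt;$1&lt;/code&gt;, &lt;code&gt;$2&lt;/code&gt;, so the queries later in this post need that small adjustment.&lt;/p&gt;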

&lt;p&gt;Let's create a simple flow to process customer orders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (p *Product) ProcessOrderWithNormalQuery(db *sql.DB, orderQuantity int32) error {
    //Open TX
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback()

    //Let Say We have Other Process that take 500ms (We assume it as 500ms)
    time.Sleep(500 * time.Millisecond)

    //Query to Find Stock
    var stock int32
    err = tx.QueryRow(`SELECT stock FROM products WHERE id = $1;`, p.ID).Scan(&amp;amp;stock)
    if err != nil {
        return err
    }

    //Validate Stock
    if stock &amp;lt; orderQuantity {
        return errors.New("failed ! Stock not Enough")
    }

    //Calculate Stock
    newStock := stock - orderQuantity
    time.Sleep(100 * time.Millisecond) //Also Assume we have 100ms Business Process when Decrease Stock

    //Query to Update Stock
    _, err = tx.Exec(`
        UPDATE products 
        SET stock = $1,
            updated_at = CURRENT_TIMESTAMP 
        WHERE id = $2`,
        newStock, p.ID)
    if err != nil {
        return err
    }

    return tx.Commit()
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the breakdown of how this code works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, we open a database transaction (essential when working with ACID-compliant databases)&lt;/li&gt;
&lt;li&gt;We add a simulated delay of 500ms to represent other processes in our system&lt;/li&gt;
&lt;li&gt;After this delay, we query the database to find the current product stock&lt;/li&gt;
&lt;li&gt;We then validate if enough stock is available for the order&lt;/li&gt;
&lt;li&gt;If the validation passes, we calculate the new stock by subtracting the order quantity from the current stock&lt;/li&gt;
&lt;li&gt;We add another simulated 100ms delay to represent business logic processing during stock reduction&lt;/li&gt;
&lt;li&gt;Then we update the stock in the database&lt;/li&gt;
&lt;li&gt;Finally, we commit the transaction. If anything fails during this process, the transaction automatically rolls back thanks to our defer tx.Rollback() statement, ensuring our database remains consistent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The defer tx.Rollback() statement provides a safety net - if anything goes wrong, the transaction will be rolled back, and no changes will be made to our database.&lt;/p&gt;

&lt;p&gt;Let's verify our implementation by creating this test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TestUnsafeOrder(t *testing.T) {
    // Open 2 connections to simulate 2 Go apps running
    db1 := PgSQLConnection()
    db2 := PgSQLConnection()
    defer db1.Close()
    defer db2.Close()

    product := &amp;amp;Product{ID: 1}
    wg := &amp;amp;sync.WaitGroup{}

    // Get initial stock
    var initialStock int32
    err := db1.QueryRow("SELECT stock FROM products WHERE id = $1", product.ID).Scan(&amp;amp;initialStock)
    if err != nil {
        t.Fatal(err)
    }
    fmt.Printf("Initial stock: %d\n", initialStock)

    batchSize := 20    // Process in smaller batches
    totalOrders := 100 // Reduce total orders to demonstrate the race

    for i := 0; i &amp;lt; totalOrders; i += batchSize {
        for j := 0; j &amp;lt; batchSize; j++ {
            wg.Add(1)
            go func(orderNum int) {
                defer wg.Done()
                // Alternate between connections to simulate distributed access
                dbConn := db1
                if orderNum%2 == 0 {
                    dbConn = db2
                }
                err := product.ProcessOrderWithNormalQuery(dbConn, 1)
                if err != nil {
                    fmt.Printf("Error: %v\n", err)
                }
            }(i + j)
        }
        wg.Wait()
    }

    var finalStock int32
    err = db1.QueryRow("SELECT stock FROM products WHERE id = $1", product.ID).Scan(&amp;amp;finalStock)
    if err != nil {
        t.Fatal(err)
    }
    fmt.Printf("Final stock: %d\n", finalStock)
    fmt.Printf("Expected stock: %d\n", initialStock-int32(totalOrders))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this test, we simulate having multiple applications accessing the same database by opening two connections. First, we get the initial stock for testing purposes. Then, we simulate 100 users making simultaneous purchases, with each user buying 1 unit.&lt;/p&gt;

&lt;p&gt;The test case looks fine, but when we run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;opt/homebrew/Cellar/go/1.22.0/libexec/bin/go tool test2json -t /private/var/folders/q4/lfxksx5x7knd4xjt3qqhyq4m0000gn/T/GoLand/___TestUnsafeOrder_in_race_condition.test -test.v -test.paniconexit0 -test.run ^\QTestUnsafeOrder\E$
=== RUN   TestUnsafeOrder
Initial stock: 10000
Final stock: 9994
Expected stock: 9900
--- PASS: TestUnsafeOrder (3.55s)
PASS

Process finished with the exit code 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Something's wrong! We only ordered 100 donuts, and with an initial stock of 10000, we expected 9900 donuts to remain. Instead, we got 9994 - meaning only 6 orders were processed successfully. This race condition is causing our inventory management to fail.&lt;/p&gt;

&lt;p&gt;Here's what's happening (the breakdown):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's say two customers (A and B) try to buy donuts at exactly the same time&lt;/li&gt;
&lt;li&gt;Customer A's process checks the stock (sees 10000 available)&lt;/li&gt;
&lt;li&gt;Customer B's process checks the stock (also sees 10000 available)&lt;/li&gt;
&lt;li&gt;Customer A calculates new stock (10000 - 1 = 9999)&lt;/li&gt;
&lt;li&gt;Customer B calculates new stock (10000 - 1 = 9999)&lt;/li&gt;
&lt;li&gt;Customer A updates the stock to 9999&lt;/li&gt;
&lt;li&gt;Customer B updates the stock to 9999&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem: Even though two orders were placed, the stock only decreased by 1! This is happening because both processes read the same initial value before either one completed their update. This is known as the "Lost Update" problem in concurrent database operations.&lt;/p&gt;

&lt;p&gt;Now that we've simulated how database race conditions can cause failures during concurrent operations, let's discuss the solution. As mentioned earlier, the "I" (Isolation) principle in ACID will help us solve this. To prevent database race conditions, we simply need to implement the proper Isolation Level.&lt;/p&gt;

&lt;p&gt;What does this mean? It means we'll lock the rows our queries touch. Here's how it works: when Customer A and Customer B try to check the stock simultaneously, the database locks the row for Customer A's transaction, so Customer B must wait until Customer A finishes their process.&lt;/p&gt;

&lt;p&gt;For implementing locking mechanisms, there are generally two approaches: Optimistic and Pessimistic locking. In this example, we'll implement Pessimistic locking.&lt;/p&gt;

&lt;p&gt;Pessimistic Locking can be implemented using any of these SQL locking clauses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FOR UPDATE
FOR UPDATE NOWAIT
FOR UPDATE SKIP LOCKED
FOR SHARE
FOR SHARE NOWAIT
FOR SHARE SKIP LOCKED
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each of these locking clauses has a specific use case:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;FOR UPDATE&lt;/code&gt;: Locks the selected rows for updating. Other transactions must wait until the current transaction finishes.&lt;br&gt;
&lt;code&gt;FOR UPDATE NOWAIT&lt;/code&gt;: Same as FOR UPDATE, but instead of waiting, it immediately returns an error if the rows are locked.&lt;br&gt;
&lt;code&gt;FOR UPDATE SKIP LOCKED&lt;/code&gt;: Similar to FOR UPDATE, but skips any rows that are already locked by other transactions.&lt;br&gt;
&lt;code&gt;FOR SHARE&lt;/code&gt;: Locks the rows for reading only. Other transactions can also read but cannot modify the locked rows.&lt;br&gt;
&lt;code&gt;FOR SHARE NOWAIT&lt;/code&gt;: Same as FOR SHARE, but immediately returns an error if the rows are locked.&lt;br&gt;
&lt;code&gt;FOR SHARE SKIP LOCKED&lt;/code&gt;: Similar to FOR SHARE, but skips any rows that are already locked.&lt;/p&gt;
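&lt;p&gt;To make the differences concrete, here are sketches of the three &lt;code&gt;FOR UPDATE&lt;/code&gt; variants against the same &lt;code&gt;products&lt;/code&gt; table (the job-queue style of the last query is a common use of &lt;code&gt;SKIP LOCKED&lt;/code&gt;, not something this article implements):&lt;/p&gt;

```sql
-- Waits until any conflicting lock is released:
SELECT stock FROM products WHERE id = 1 FOR UPDATE;

-- Fails immediately with an error instead of waiting:
SELECT stock FROM products WHERE id = 1 FOR UPDATE NOWAIT;

-- Queue-style: grab the next row nobody else has locked:
SELECT id FROM products WHERE stock > 0 LIMIT 1 FOR UPDATE SKIP LOCKED;
```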

&lt;p&gt;You can choose the locking clause that best suits your use case. In this example, we'll use &lt;code&gt;FOR UPDATE&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (p *Product) ProcessOrderWithLock(db *sql.DB, orderQuantity int32) error {
    // Begin the transaction
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op once Commit succeeds

    // Simulate some other business process that takes ~500ms
    time.Sleep(500 * time.Millisecond)

    // Read the stock, locking the row until the transaction ends
    var stock int32
    err = tx.QueryRow(`
        SELECT stock 
        FROM products 
        WHERE id = $1 
        FOR UPDATE;`, p.ID).Scan(&amp;amp;stock)
    if err != nil {
        return err
    }

    //Validate Stock
    if stock &amp;lt; orderQuantity {
        return errors.New("failed: stock not enough")
    }

    //Calculate Stock
    newStock := stock - orderQuantity
    time.Sleep(100 * time.Millisecond) // assume another ~100ms of business logic while decreasing the stock

    //Query to Update Stock
    _, err = tx.Exec(`
        UPDATE products 
        SET stock = $1,
            updated_at = CURRENT_TIMESTAMP 
        WHERE id = $2`,
        newStock, p.ID)
    if err != nil {
        return err
    }

    return tx.Commit()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the test implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TestOrderWithLockSQL(t *testing.T) {
    // Open 2 connections to simulate 2 separate Go apps hitting the same database
    db1 := PgSQLConnection()
    db2 := PgSQLConnection()
    defer db1.Close()
    defer db2.Close()

    product := &amp;amp;Product{ID: 1}
    wg := &amp;amp;sync.WaitGroup{}

    // Get initial stock
    var initialStock int32
    err := db1.QueryRow("SELECT stock FROM products WHERE id = $1", product.ID).Scan(&amp;amp;initialStock)
    if err != nil {
        t.Fatal(err)
    }
    fmt.Printf("Initial stock: %d\n", initialStock)

    batchSize := 20    // Process in smaller batches
    totalOrders := 100 // Same 100-order workload as the unsafe test

    for i := 0; i &amp;lt; totalOrders; i += batchSize {
        for j := 0; j &amp;lt; batchSize; j++ {
            wg.Add(1)
            go func(orderNum int) {
                defer wg.Done()
                // Alternate between connections to simulate distributed access
                dbConn := db1
                if orderNum%2 == 0 {
                    dbConn = db2
                }
                err := product.ProcessOrderWithLock(dbConn, 1)
                if err != nil {
                    fmt.Printf("Error: %v\n", err)
                }
            }(i + j)
        }
        wg.Wait()
    }

    var finalStock int32
    err = db1.QueryRow("SELECT stock FROM products WHERE id = $1", product.ID).Scan(&amp;amp;finalStock)
    if err != nil {
        t.Fatal(err)
    }
    fmt.Printf("Final stock: %d\n", finalStock)
    fmt.Printf("Expected stock: %d\n", initialStock-int32(totalOrders))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After resetting the stock to 10000 for the same demonstration, here's the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/opt/homebrew/Cellar/go/1.22.0/libexec/bin/go tool test2json -t /private/var/folders/q4/lfxksx5x7knd4xjt3qqhyq4m0000gn/T/GoLand/___TestOrderWithLockSQL_in_race_condition.test -test.v -test.paniconexit0 -test.run ^\QTestOrderWithLockSQL\E$
=== RUN   TestOrderWithLockSQL
Initial stock: 10000
Final stock: 9900
Expected stock: 9900
--- PASS: TestOrderWithLockSQL (13.77s)
PASS

Process finished with the exit code 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perfect! After implementing the lock, everything works as expected.&lt;br&gt;
Here's the breakdown of what's happening now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's say Customer A and Customer B try to buy donuts simultaneously&lt;/li&gt;
&lt;li&gt;Customer A's process starts and executes SELECT ... FOR UPDATE
(This locks the row in the products table for Customer A's transaction)&lt;/li&gt;
&lt;li&gt;Customer B's process also tries to execute SELECT ... FOR UPDATE
(Instead of reading the same stock value, it's forced to wait because the row is locked)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Customer A completes their entire process:&lt;br&gt;
Reads stock (10000)&lt;br&gt;
Validates quantity&lt;br&gt;
Updates stock to 9999&lt;br&gt;
Commits transaction&lt;br&gt;
Releases the lock&lt;/p&gt;

&lt;p&gt;Only then can Customer B proceed:&lt;br&gt;
Reads the new stock (9999)&lt;br&gt;
Validates quantity&lt;br&gt;
Updates stock to 9998&lt;br&gt;
Commits transaction&lt;/p&gt;

&lt;p&gt;The key difference from our previous version:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: Both processes could read the same initial value&lt;/li&gt;
&lt;li&gt;Now: The second process must wait for the first one to complete&lt;/li&gt;
&lt;li&gt;This ensures each order correctly decrements the stock once&lt;/li&gt;
&lt;li&gt;Notice the test took longer to run (13.77s vs 3.55s) because of the waiting time&lt;/li&gt;
&lt;li&gt;But we got the correct final stock of 9900, exactly as expected after 100 orders&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This demonstrates how database-level locking prevents race conditions by ensuring each transaction processes completely before the next one can begin.&lt;/p&gt;

</description>
      <category>go</category>
      <category>postgres</category>
      <category>programming</category>
      <category>database</category>
    </item>
    <item>
      <title>Your Go Code Has a Hidden Time Bomb: Race Conditions Explained 💣</title>
      <dc:creator>Mufthi Ryanda</dc:creator>
      <pubDate>Mon, 20 Jan 2025 09:30:01 +0000</pubDate>
      <link>https://dev.to/mufthi_ryanda_84ea0d65262/your-go-code-has-a-hidden-time-bomb-race-conditions-explained-27pn</link>
      <guid>https://dev.to/mufthi_ryanda_84ea0d65262/your-go-code-has-a-hidden-time-bomb-race-conditions-explained-27pn</guid>
      <description>&lt;p&gt;One of the best features in Go is Goroutines. A goroutine is a lightweight thread managed by the Go runtime. Goroutines enable functions to run concurrently.&lt;/p&gt;

&lt;p&gt;Imagine if we receive a thousand orders in one second. Should we process them one by one? Think about this: if one process takes 300-500 milliseconds, and we have 1000 orders, here's the estimation for sequential processing (one by one):&lt;/p&gt;

&lt;p&gt;For 1000 Orders:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum total time = 1000 × 300ms = 300,000ms = 300 seconds = 5 minutes&lt;/li&gt;
&lt;li&gt;Maximum total time = 1000 × 500ms = 500,000ms = 500 seconds = 8.33 minutes&lt;/li&gt;
&lt;li&gt;Average total time = 1000 × 400ms = 400,000ms = 400 seconds = 6.67 minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is terrible, right? 🤓&lt;br&gt;
Taking an average of 6 minutes to process a thousand orders is inefficient. This is where Goroutines play a crucial role 🌟&lt;/p&gt;

&lt;p&gt;When we process those thousand orders using Goroutines, we switch to a concurrent approach instead of sequential processing. Using Goroutines significantly reduces the processing time for a thousand orders from around 6 minutes to approximately 500 milliseconds. See the difference? It's about 5 minutes faster 🚀&lt;/p&gt;

&lt;p&gt;Goroutines make that kind of speedup almost free. However, this power comes with responsibility! 🚀&lt;/p&gt;

&lt;p&gt;When multiple Goroutines access and modify shared resources simultaneously, we can encounter race conditions - a situation where the final result becomes unpredictable. Without proper synchronization mechanisms like Mutex or Atomic operations, our lightning-fast concurrent processing could lead to data corruption or memory leaks.&lt;/p&gt;

&lt;p&gt;For example, if we don't handle Goroutines properly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data inconsistency due to simultaneous access&lt;/li&gt;
&lt;li&gt;Unpredictable results from concurrent operations&lt;/li&gt;
&lt;li&gt;Memory leaks from improper resource management&lt;/li&gt;
&lt;li&gt;System instability from uncontrolled concurrent access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's why we need synchronization tools like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sync.Mutex&lt;/code&gt; for locking access to shared resources&lt;/li&gt;
&lt;li&gt;atomic operations for thread-safe updates&lt;/li&gt;
&lt;li&gt;proper error handling and resource cleanup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it like installing traffic lights at a busy intersection - yes, it might slow things down a tiny bit, but it prevents accidents and ensures everything runs smoothly! 🚦&lt;/p&gt;

&lt;p&gt;Let's use a real-world example from an E-commerce service. Consider a Product struct that contains an ID, product name, and stock quantity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Product struct {
    ID    int
    Name  string
    Stock int32
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We process orders by simply decreasing the stock based on the order quantity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Unsafe version - has race condition
func (p *Product) ProcessOrderUnsafe(orderQuantity int32) bool {
    if p.Stock &amp;gt;= orderQuantity {
        p.Stock -= orderQuantity
        return true
    }
    return false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create a scenario where we have 5 customers, each buying 1 item. We'll group these 5 orders together and process 1000 such groups concurrently.&lt;/p&gt;

&lt;p&gt;Starting with an initial stock of 10,000 items, we process 1000 groups of orders. In each group, 5 users each buy one item:&lt;/p&gt;

&lt;p&gt;1 + 1 + 1 + 1 + 1 = 5 items per group&lt;br&gt;
1000 groups × 5 items = 5000 total items&lt;/p&gt;

&lt;p&gt;Therefore, we expect the stock to decrease by 5000, resulting in a final stock of 5000 (10,000 - 5000).&lt;/p&gt;

&lt;p&gt;Here's the test scenario:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TestOrderUnsafe(t *testing.T) {
    currentProduct := Product{
        ID:    14045,
        Name:  "J.Co - Snow White Donuts",
        Stock: 10000,
    }
    userBought := []int32{1, 1, 1, 1, 1}

    fmt.Printf("Current Stock: %d\n", currentProduct.Stock)

    wg := &amp;amp;sync.WaitGroup{}
    for i := 0; i &amp;lt; 1000; i++ {
        for _, bought := range userBought {
            wg.Add(1)
            go func(orderQTY int32) {
                defer wg.Done()
                currentProduct.ProcessOrderUnsafe(orderQTY)
            }(bought)
        }
    }

    wg.Wait()
    fmt.Printf("Final inventory (unsafe): %d\n", currentProduct.Stock)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's run the test using &lt;code&gt;go test -v&lt;/code&gt;.&lt;br&gt;
Everything seems fine until we see the results 💣&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38dja0yqwl9r4xp7pis1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38dja0yqwl9r4xp7pis1.png" alt="Image description" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaqrdzuiu04ugaevna0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaqrdzuiu04ugaevna0r.png" alt="Image description" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2ms9edfhn0ng62dv2vc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2ms9edfhn0ng62dv2vc.png" alt="Image description" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After running 3 tests, we notice something strange - our final results are different each time, and they keep changing with every test run:&lt;br&gt;
First Test: 5450&lt;br&gt;
Second Test: 5222&lt;br&gt;
Third Test: 5205&lt;/p&gt;

&lt;p&gt;Something is seriously wrong here. The final stock should be exactly 5000 (10000 - 5000), but we're getting inconsistent and incorrect results (5450, 5222, 5205). None of these results are correct!&lt;/p&gt;

&lt;p&gt;This inconsistency occurs because we've encountered a race condition. When multiple Goroutines try to access and modify the stock value simultaneously without proper synchronization, they interfere with each other's operations. It's like multiple cashiers trying to update the same inventory record at the same time - they might miss some updates or count the same transaction twice.&lt;/p&gt;

&lt;p&gt;The race condition happens in these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Multiple Goroutines read the stock value at the same time&lt;/li&gt;
&lt;li&gt;Each Goroutine thinks it has the correct current value&lt;/li&gt;
&lt;li&gt;They all try to decrease the stock simultaneously&lt;/li&gt;
&lt;li&gt;Some updates are lost or overwritten in the process&lt;/li&gt;
&lt;li&gt;The final result becomes unpredictable and incorrect&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is why we need proper synchronization mechanisms to handle concurrent access to shared resources! 🔒&lt;/p&gt;
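&lt;p&gt;One tip the article doesn't cover: Go's toolchain ships a built-in race detector that pinpoints exactly this class of bug, instead of leaving you to notice wrong totals after the fact:&lt;/p&gt;

```shell
# Re-run the unsafe test with the race detector enabled;
# it reports the conflicting reads/writes on p.Stock with stack traces.
go test -race -run TestOrderUnsafe -v
```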
&lt;h2&gt;
  
  
  Mutex Solution
&lt;/h2&gt;

&lt;p&gt;Mutex works like a lock-and-key system. When a Goroutine needs to update the stock, it must first acquire a lock. While one Goroutine holds the lock, all others must wait their turn. The &lt;code&gt;defer mu.Unlock()&lt;/code&gt; ensures we never forget to release the lock, preventing deadlocks. It's like having a single key that gets passed around - only the Goroutine holding the key can access the stock value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Solution 1: Using mutex
func (p *Product) ProcessOrderWithMutex(orderQuantity int32, mu *sync.Mutex) bool {
    mu.Lock()
    defer mu.Unlock()

    if p.Stock &amp;gt;= orderQuantity {
        p.Stock -= orderQuantity
        return true
    }
    return false
}

func TestOrderWithMutex(t *testing.T) {
    currentProduct := Product{
        ID:    14045,
        Name:  "J.Co - Snow White Donuts",
        Stock: 10000,
    }
    userBought := []int32{1, 1, 1, 1, 1}

    fmt.Printf("Current Stock: %d\n", currentProduct.Stock)

    wg := &amp;amp;sync.WaitGroup{}
    mu := &amp;amp;sync.Mutex{} // Create single mutex to be shared

    for i := 0; i &amp;lt; 1000; i++ {
        for _, bought := range userBought {
            wg.Add(1)
            go func(orderQTY int32) {
                defer wg.Done()
                currentProduct.ProcessOrderWithMutex(orderQTY, mu) // Pass mutex to method
            }(bought)
        }
    }

    wg.Wait()
    fmt.Printf("Final inventory (mutex): %d\n", currentProduct.Stock)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;mu.Lock()&lt;/code&gt; blocks other Goroutines from entering this section&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;defer mu.Unlock()&lt;/code&gt; ensures the lock is released even if errors occur&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Only one Goroutine can modify the stock at a time&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This creates a queue of Goroutines waiting their turn&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The stock updates happen sequentially within the locked section&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's the final test run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/opt/homebrew/Cellar/go/1.22.0/libexec/bin/go tool test2json -t /private/var/folders/q4/lfxksx5x7knd4xjt3qqhyq4m0000gn/T/GoLand/___TestOrderWithMutex_in_race_condition.test -test.v -test.paniconexit0 -test.run ^\QTestOrderWithMutex\E$
=== RUN   TestOrderWithMutex
Current Stock: 10000
Final inventory (mutex): 5000
--- PASS: TestOrderWithMutex (0.00s)
PASS
Process finished with the exit code 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Atomic Solution
&lt;/h2&gt;

&lt;p&gt;Atomic Operations function like a precise surgical tool. They perform operations on variables in a way that can't be interrupted by other Goroutines. When we use atomic operations, each stock update happens in a single, unbreakable step. If multiple Goroutines try to update the stock simultaneously, the atomic Compare-and-Swap (CAS) ensures only one succeeds while others retry. Think of it as a high-speed traffic intersection with sensors that only let one car pass at a time, but so quickly that traffic still flows smoothly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Solution 2: Using atomic operations
func (p *Product) ProcessOrderAtomic(orderQuantity int32) bool {
    for {
        currentInventory := atomic.LoadInt32(&amp;amp;p.Stock)
        if currentInventory &amp;lt; orderQuantity {
            return false
        }
        if atomic.CompareAndSwapInt32(&amp;amp;p.Stock, currentInventory, currentInventory-orderQuantity) {
            return true
        }
    }
}

func TestOrderWithAtomic(t *testing.T) {
    currentProduct := Product{
        ID:    14045,
        Name:  "J.Co - Snow White Donuts",
        Stock: 10000,
    }
    userBought := []int32{1, 1, 1, 1, 1}

    fmt.Printf("Current Stock: %d\n", currentProduct.Stock)

    wg := &amp;amp;sync.WaitGroup{}
    for i := 0; i &amp;lt; 1000; i++ {
        for _, bought := range userBought {
            wg.Add(1)
            go func(orderQTY int32) {
                defer wg.Done()
                currentProduct.ProcessOrderAtomic(orderQTY)
            }(bought)
        }
    }

    wg.Wait()
    fmt.Printf("Final inventory (atomic): %d\n", currentProduct.Stock)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;atomic.LoadInt32&lt;/code&gt; safely reads the current stock value&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We check if we have enough stock&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;atomic.CompareAndSwapInt32&lt;/code&gt; (CAS) atomically updates the stock only if no other Goroutine has modified it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If another Goroutine changed the value, the CAS fails and we retry the whole operation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This ensures all stock updates are processed accurately&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's the final test run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/opt/homebrew/Cellar/go/1.22.0/libexec/bin/go tool test2json -t /private/var/folders/q4/lfxksx5x7knd4xjt3qqhyq4m0000gn/T/GoLand/___TestOrderWithAtomic_in_race_condition.test -test.v -test.paniconexit0 -test.run ^\QTestOrderWithAtomic\E$
=== RUN   TestOrderWithAtomic
Current Stock: 10000
Final inventory (atomic): 5000
--- PASS: TestOrderWithAtomic (0.00s)
PASS
Process finished with the exit code 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both solutions solve our race condition problem, consistently giving us the correct final stock of 5000. 😄&lt;/p&gt;

&lt;p&gt;The main difference lies in their approach: Atomic operations are generally faster for simple operations as they don't require full locks, while Mutex provides a more straightforward solution that's easier to understand and maintain, especially for complex operations involving multiple steps. The choice between them often depends on your specific use case and performance requirements. 🔒&lt;/p&gt;

</description>
      <category>go</category>
      <category>programming</category>
      <category>debugging</category>
      <category>concurrency</category>
    </item>
  </channel>
</rss>
