<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ango Jeffrey</title>
    <description>The latest articles on DEV Community by Ango Jeffrey (@angojay).</description>
    <link>https://dev.to/angojay</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F564906%2Fecd7ba68-4afd-4c0d-b07f-b70709888d22.jpeg</url>
      <title>DEV Community: Ango Jeffrey</title>
      <link>https://dev.to/angojay</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/angojay"/>
    <language>en</language>
    <item>
      <title>Shipping My First Production Site with Lovable: What I Learned</title>
      <dc:creator>Ango Jeffrey</dc:creator>
      <pubDate>Sun, 19 Apr 2026 15:46:51 +0000</pubDate>
      <link>https://dev.to/angojay/shipping-my-first-production-site-with-lovable-what-i-learned-43ko</link>
      <guid>https://dev.to/angojay/shipping-my-first-production-site-with-lovable-what-i-learned-43ko</guid>
<description>&lt;p&gt;I’ve spent the better part of my career building things the traditional way: hand-coding components, managing state, and meticulously translating Figma files into code. However, for a recent project, we decided to try a different workflow: building and shipping a production-level website with an AI tool.&lt;/p&gt;

&lt;p&gt;We used &lt;strong&gt;Lovable&lt;/strong&gt; to see if we could bridge the gap between design and production faster. It was an eye-opening experience. In this post, I'll break down our process, where the tool truly surprised us, and the specific areas where an engineer’s touch remains non-negotiable.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Workflow: Designer in the Driver's Seat
&lt;/h3&gt;

&lt;p&gt;In a typical sprint, a designer would hand me a file and I’d spend days or weeks recreating it. With Lovable, the roles shifted significantly. The designer was able to handle most of the actual UI implementation directly within the tool.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Faster Iterations:&lt;/strong&gt; Instead of waiting for me to "code it up" to see how a layout flowed, the designer could iterate in real-time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rapid Feedback:&lt;/strong&gt; We could look at a live URL within minutes of an idea being sparked.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This shift allowed us to focus on the &lt;em&gt;user experience&lt;/em&gt; rather than the &lt;em&gt;implementation details&lt;/em&gt; during the early stages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where the Developer Comes In: Guiding the AI
&lt;/h3&gt;

&lt;p&gt;There’s a misconception that AI-driven tools make developers obsolete. In reality, they just change our focus. While Lovable handled the bulk of the UI, I had to step in as the architect to ensure the site was professional, functional, and scalable.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Responsive UI Fixes
&lt;/h4&gt;

&lt;p&gt;AI is powerful, but it doesn't always account for every edge case. I guided Lovable through specific responsive fixes to ensure that complex layouts didn't break on tablets or ultra-wide monitors. This required a solid understanding of &lt;strong&gt;CSS Flexbox and Grid&lt;/strong&gt; to tell the AI exactly how to restructure elements for those tricky breakpoints.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Global State Management
&lt;/h4&gt;

&lt;p&gt;This is where the difference between a "working" site and a "well-engineered" site becomes clear. Initially, the AI wanted to duplicate modal code across multiple pages. I stepped in and instructed it to implement a &lt;strong&gt;Global State Management&lt;/strong&gt; pattern using &lt;strong&gt;React Context&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;I specifically chose React Context because I wanted to keep the codebase lean and simple without the overhead of heavier state management libraries. By managing the modal state at the &lt;code&gt;Layout&lt;/code&gt; level, we prevented code duplication and kept the app performant and easy to maintain.&lt;/p&gt;
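&lt;p&gt;To make that concrete, here is a minimal, framework-agnostic sketch of the pattern (illustrative names only, not our production code). In the real app the state lives in a React Context provider at the &lt;code&gt;Layout&lt;/code&gt; level, but the core idea is the same: one shared piece of modal state that any page can read or update, instead of duplicated per-page modal code:&lt;/p&gt;

```typescript
// Illustrative sketch only: a single source of truth for modal state.
type ModalState = { activeModal: string | null };

function createModalStore() {
  let state: ModalState = { activeModal: null };
  const listeners = new Set<(s: ModalState) => void>();

  return {
    getState: () => state,
    // Any page can call openModal("newsletter") instead of owning its own modal logic
    openModal(name: string) {
      state = { activeModal: name };
      listeners.forEach((l) => l(state));
    },
    closeModal() {
      state = { activeModal: null };
      listeners.forEach((l) => l(state));
    },
    // Components subscribe to re-render on change (Context does this for you in React)
    subscribe(listener: (s: ModalState) => void) {
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
  };
}

export const modalStore = createModalStore();
```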

&lt;h4&gt;
  
  
  3. API Integration
&lt;/h4&gt;

&lt;p&gt;A production site needs to actually function. I handled the heavy lifting of connecting the UI to our backend services, which involved structuring data fetching logic, handling loading states, and ensuring secure communication between the frontend and our APIs.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Hardening SEO for Production
&lt;/h4&gt;

&lt;p&gt;AI tools often provide the "bones" of a site but rarely give you the necessary visibility. To make this site production-ready, I manually guided the implementation of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Meta &amp;amp; Open Graph Tags:&lt;/strong&gt; Ensuring the site looks professional on social media with custom OG images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema Structured Data:&lt;/strong&gt; Helping search engines understand the content hierarchy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical SEO Assets:&lt;/strong&gt; Generating a &lt;code&gt;sitemap.xml&lt;/code&gt;, a &lt;code&gt;robots.txt&lt;/code&gt; file, and a &lt;code&gt;site.webmanifest&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
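&lt;p&gt;As a rough illustration of what the technical-SEO step involves (the domain and routes below are placeholders, not the real site's), generating &lt;code&gt;robots.txt&lt;/code&gt; and &lt;code&gt;sitemap.xml&lt;/code&gt; can be as simple as templating a few strings at build time:&lt;/p&gt;

```typescript
// Placeholder values for illustration; swap in your real domain and routes.
const SITE_URL = "https://example.com";
const ROUTES = ["/", "/about", "/contact"];

// robots.txt: allow crawling and point crawlers at the sitemap.
export function buildRobotsTxt(): string {
  return ["User-agent: *", "Allow: /", `Sitemap: ${SITE_URL}/sitemap.xml`].join("\n");
}

// sitemap.xml: one <url> entry per route, with a last-modified date.
export function buildSitemapXml(lastmod: string): string {
  const urls = ROUTES.map(
    (path) => `  <url><loc>${SITE_URL}${path}</loc><lastmod>${lastmod}</lastmod></url>`
  ).join("\n");
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    urls,
    "</urlset>",
  ].join("\n");
}
```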

&lt;h3&gt;
  
  
  The Professional Bridge: GitHub &amp;amp; CI/CD
&lt;/h3&gt;

&lt;p&gt;The last and most important step was the &lt;strong&gt;GitHub sync&lt;/strong&gt;. For this to be a professional project, it couldn't live in a silo. &lt;/p&gt;

&lt;p&gt;I synced the Lovable project to a GitHub repository, which allowed me to review the generated code for quality, add custom logic, and set up &lt;strong&gt;GitHub Workflows&lt;/strong&gt; for automated deployment. Now, every significant change triggers a proper CI/CD pipeline to keep our production environment up to date.&lt;/p&gt;
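&lt;p&gt;For reference, the workflow itself doesn't need to be fancy. A minimal sketch (branch name and commands are assumptions, and the deploy step depends entirely on your host) looks something like this:&lt;/p&gt;

```yaml
# Sketch of the idea, not our exact pipeline: build on every push to main,
# then hand off to whatever deploy step your hosting provider offers.
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      # Deploy step goes here (Vercel, Netlify, S3, your own server, ...)
```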

&lt;h3&gt;
  
  
  The Verdict: The Tool vs. The Craftsman
&lt;/h3&gt;

&lt;p&gt;Lovable is an absolute game-changer for websites and straightforward web apps. It allowed us to ship at a speed that would have been impossible just a few months ago. It empowered the designer to be more involved in the build and allowed me to focus on high-level engineering.&lt;/p&gt;

&lt;h4&gt;
  
  
  A Note on Stack &amp;amp; Constraints
&lt;/h4&gt;

&lt;p&gt;One caveat I had with Lovable was the stack limitation. Given the choice, I would have preferred to use &lt;strong&gt;Astro&lt;/strong&gt; for this project. Astro’s SEO-friendly architecture, faster deployment build times, and high level of customizability make it an ideal choice for content-heavy sites. However, Lovable doesn't currently support it, so we optimized within the React ecosystem.&lt;/p&gt;

&lt;h4&gt;
  
  
  Looking Ahead
&lt;/h4&gt;

&lt;p&gt;While this worked perfectly for this project, more complex applications still demand intricate knowledge of state management and component architecture. As an engineer, you still need to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Component Strategy:&lt;/strong&gt; Determining what should be a reusable component to avoid duplication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex State:&lt;/strong&gt; Knowing how to structure data flow beyond simple contexts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimization:&lt;/strong&gt; Preventing unnecessary re-renders in data-heavy environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The takeaway is clear: AI builds the house, but the engineer ensures the foundation is solid. You still need to know what you’re doing to achieve the best results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Have you tried building with Lovable yet? I’d love to hear how your workflow changed in the comments!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>nocode</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why We Think in Systems: The Blueprint for Sustainability</title>
      <dc:creator>Ango Jeffrey</dc:creator>
      <pubDate>Thu, 02 Apr 2026 12:10:44 +0000</pubDate>
      <link>https://dev.to/angojay/why-we-think-in-systems-the-blueprint-for-sustainability-431h</link>
      <guid>https://dev.to/angojay/why-we-think-in-systems-the-blueprint-for-sustainability-431h</guid>
      <description>&lt;p&gt;We humans have a natural obsession with order. From the way we map the stars to the way we structure a workday, we are constantly searching for "the system." But thinking in systems isn't just about being organised, it is a fundamental strategy for predictability, scalability, and long-term survival.&lt;/p&gt;

&lt;p&gt;Here is a breakdown of why systems are the essential foundation for any successful endeavour.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. The Power of Predictability
&lt;/h4&gt;

&lt;p&gt;The core premise of a system is that it reveals patterns. When we view a challenge through a systemic lens, we move away from reacting to isolated incidents and toward understanding the underlying mechanics.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Engineering for the Future (Maintainability)
&lt;/h4&gt;

&lt;p&gt;In the world of software engineering, code is rarely a "write once and forget" task. A developer’s goal is to build something that lasts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Hand-off:&lt;/strong&gt; Writing code within a system (using established frameworks and documentation) ensures that the next engineer can understand the logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Legacy:&lt;/strong&gt; Systems prevent "knowledge silos." When the system is the source of truth, the project doesn't collapse just because the original creator moved on.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3. Business Scalability and SOPs
&lt;/h4&gt;

&lt;p&gt;A successful business is rarely the result of a single person’s daily brilliance; it is the result of Standard Operating Procedures (SOPs). Without them, a business is just a series of lucky breaks that cannot be scaled. Here is why successful businesses lean on systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Codifying the "Secret Sauce":&lt;/strong&gt; SOPs take the intuitive knowledge of a founder and turn it into a manual. This ensures that quality remains consistent whether the CEO is in the room or not.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Repeatability Factor:&lt;/strong&gt; Systems ensure that tasks are performed with the same level of excellence every single time. This predictability is what allows a small startup to grow into a global enterprise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Removing "Hero Dependence":&lt;/strong&gt; When a business relies on one "hero" employee to save the day, it is fragile. Systems move the power from the individual to the process, ensuring long-term viability and health.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Key Takeaway:&lt;/strong&gt; The goal of a business system is to make success a repeatable habit rather than a one-time event.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  4. The Relationship Between Systems and Creativity
&lt;/h4&gt;

&lt;p&gt;A common misconception in the dev community is that systems stifle creativity. In reality, systems provide the foundation that makes meaningful innovation possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architect Analogy&lt;/strong&gt;&lt;br&gt;
Think of an architect designing a high-rise. The structural integrity, the plumbing, and the electrical grids are all rigid systems. These non-negotiables don't stop the architect from being creative with the facade or the interior flow; instead, they provide the safety and stability required to explore bold new designs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creative Freedom:&lt;/strong&gt; When the "boring" parts (the foundation) are handled by the system, your brain is free to innovate on the "user experience."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Innovation Loop:&lt;/strong&gt; New, proven innovations eventually become "best practices." Over time, today’s breakthrough becomes tomorrow’s system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  5. The Ultimate Goal: Sustainability
&lt;/h4&gt;

&lt;p&gt;At its heart, the idea of a system is &lt;strong&gt;sustainability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Whether it is a codebase that survives for a decade, a business that thrives across generations, or a creative process that never runs dry, systems allow us to build things that outlast our immediate efforts. We think in systems because we want the things we create to endure.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Experience with Systems
&lt;/h3&gt;

&lt;p&gt;Having spent over 6 years as a software engineer, I’ve seen projects at every stage of their lifecycle. Whether I’m joining a team to kick off a greenfield project or stepping into a decade-old brownfield codebase, my first instinct is always to reach for systems.&lt;/p&gt;

&lt;p&gt;Here is how that looks in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;For New Projects:&lt;/strong&gt; I leverage established systems and best practices honed from previous wins. Starting with a proven structure means I don't have to reinvent the wheel for every new feature.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;For Brownfield Projects:&lt;/strong&gt; I take the time to study the codebase to understand the existing patterns and conventions—assuming the previous engineers didn't leave a plate of spaghetti code behind 😅!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The ROI of Systems&lt;/strong&gt;&lt;br&gt;
Thinking this way isn't just about "neatness." It fundamentally improves the &lt;strong&gt;Developer Experience (DX)&lt;/strong&gt;. When the system is clear, you stop wrestling with the environment and start focusing on what actually matters: achieving business goals as fast as possible in the most scalable and secure way.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>productivity</category>
      <category>softwareengineering</category>
      <category>career</category>
    </item>
    <item>
      <title>From Heavy to Lightweight: Compressing Images in Expo for Better Performance</title>
      <dc:creator>Ango Jeffrey</dc:creator>
      <pubDate>Tue, 10 Feb 2026 09:34:40 +0000</pubDate>
      <link>https://dev.to/angojay/from-heavy-to-lightweight-compressing-images-in-expo-for-better-performance-50fa</link>
      <guid>https://dev.to/angojay/from-heavy-to-lightweight-compressing-images-in-expo-for-better-performance-50fa</guid>
<description>&lt;p&gt;Hey there, mobile dev 👋 Ever noticed how some apps load images instantly, while others leave you staring at a spinner, some other loading animation, or, worse, just a blank screen? Yep, image compression is often the unsung hero.&lt;/p&gt;

&lt;p&gt;Big, unoptimised images can slow down your mobile app, eat up user data, and even make your app feel clunky. But don't worry, in this guide, we'll learn how to shrink those images down to size using Expo's powerful tools, making your app smoother and more professional.&lt;/p&gt;

&lt;p&gt;Let's dive in!&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Bother With Compression in the First Place?
&lt;/h3&gt;

&lt;p&gt;Imagine your app is a fancy restaurant. If every dish is huge and takes ages to serve, customers will get impatient and leave! Similarly, if your app tries to load massive image files, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Slower Loading:&lt;/strong&gt; Your users will stare at blank screens longer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More Data Usage:&lt;/strong&gt; Especially bad for users on limited data plans.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased Storage:&lt;/strong&gt; Bigger app bundles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poor SEO (this one is for web apps):&lt;/strong&gt; Search engines prefer fast-loading websites.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By compressing images, we make them "lighter," so they load faster and provide a better experience for everyone.&lt;/p&gt;
&lt;h3&gt;
  
  
  Our Goal: The 100KB Sweet Spot
&lt;/h3&gt;

&lt;p&gt;For many mobile and web images, you really want to hit that sweet spot of 100–200KB. This is often small enough to load quickly without sacrificing too much visual quality. In this guide, we'll also convert our images to the WebP format, which is like a magic trick for better compression, and it’s supported on most modern devices (Android, and iOS 14 and up).&lt;/p&gt;
&lt;h4&gt;
  
  
  The Tool: &lt;code&gt;expo-image-manipulator&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;Since we put Expo in the title, it should be no surprise that our tool of choice is &lt;code&gt;expo-image-manipulator&lt;/code&gt;. This tool lets us resize, crop, rotate, and compress images right on the user's device. No need to send images to a server just to shrink them!&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Install the Manipulator
&lt;/h3&gt;

&lt;p&gt;First things first, let's get the tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx expo install expo-image-manipulator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: The Core Idea - Iterative Compression
&lt;/h3&gt;

&lt;p&gt;Instead of just trying to compress once and hoping for the best, we'll use an "iterative" approach. Think of it like a sculptor: they don't just whack off a huge chunk of marble; they chip away gradually until they get the perfect shape.&lt;/p&gt;

&lt;p&gt;Our compression will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try to compress the image.&lt;/li&gt;
&lt;li&gt;Check its size.&lt;/li&gt;
&lt;li&gt;If it's still too big, try again with slightly lower quality or smaller dimensions.&lt;/li&gt;
&lt;li&gt;Repeat until it's small enough or we decide it's "good enough."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s a look at the logic we'll use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { ImageManipulator, SaveFormat } from 'expo-image-manipulator';
import * as FileSystem from 'expo-file-system';

/**
 * Compress image for SEO
 * Target: 100KB max size, iterative compression with quality and dimension reduction
 */
export const compressImageWithExpo = async (
  imageUri: string, // The local URI of the image (e.g., from ImagePicker)
  fileName: string   // The original file name (e.g., "my-photo.jpg")
): Promise&amp;lt;string | null&amp;gt; =&amp;gt; {
  try {
    const targetMaxSizeKB = 100 * 1024; // 100KB in bytes
    let currentQuality = 0.85;        // Starting compression quality
    const reductionFactor = 0.9;      // How much to reduce dimensions each time
    const maxIterations = 7;          // Safety limit to prevent endless loops

    // 1. Get original image dimensions
    const imageInfo = await ImageManipulator.manipulate(imageUri, [], {
      compress: 1, // We just want info, not to compress yet
      format: SaveFormat.PNG // Format doesn't matter for info
    });

    // Default to common sizes if info is missing
    let currentWidth = imageInfo.width || 1920;
    let currentHeight = imageInfo.height || 1080;

    // Note: ImageManipulator.manipulate() automatically
 // closes the file handle after running so we dont have to worry 
// about memory leaks 

    // 2. Check original file size to decide if we need aggressive reduction
    let originalSize = 0;
    try {
      const info = await FileSystem.getInfoAsync(imageUri);
      if (info.exists) {
        originalSize = info.size || 0;
      }
    } catch (e) {
      console.warn("Could not get original file info:", e);
    }
    const needsAggressiveReduction = originalSize &amp;gt; 2 * 1024 * 1024; // &amp;gt; 2MB

    // This is our recursive function that keeps compressing
    const compressImageIteratively = async (
      width: number,
      height: number,
      quality: number,
      iteration: number = 0
    ): Promise&amp;lt;string&amp;gt; =&amp;gt; {
      // Safety check: stop if we've tried too many times
      if (iteration &amp;gt;= maxIterations) {
        console.warn('Max compression iterations reached. Returning last best attempt.');
        // Even if over limit, return the last generated URI
        const finalResult = await ImageManipulator.manipulate(imageUri, [
          { resize: { width: width, height: height } }
        ], {
          compress: quality,
          format: SaveFormat.WEBP,
        });
        return finalResult.uri;
      }

      // Try compressing with current settings
      const compressedResult = await ImageManipulator.manipulate(imageUri, [
        { resize: { width: width, height: height } } // Resize first
      ], {
        compress: quality,             // Then compress
        format: SaveFormat.WEBP,       // And convert to WebP
      });

      // Get the size of the newly compressed image
      const fileInfo = await FileSystem.getInfoAsync(compressedResult.uri);
      const fileSize = fileInfo.exists ? (fileInfo.size || 0) : 0;

      // Check if we hit our target size
      if (fileSize &amp;lt;= targetMaxSizeKB) {
        console.log(`Image compressed to ${Math.round(fileSize / 1024)}KB`);
        return compressedResult.uri; // Success!
      } else if (width &amp;gt; 300 &amp;amp;&amp;amp; height &amp;gt; 300 &amp;amp;&amp;amp; quality &amp;gt; 0.4) {
        // Still too big, AND we have room to shrink further
        const widthReduction = needsAggressiveReduction &amp;amp;&amp;amp; iteration === 0 ? 0.5 : reductionFactor;
        const heightReduction = needsAggressiveReduction &amp;amp;&amp;amp; iteration === 0 ? 0.5 : reductionFactor;

        console.log(`Still too big (${Math.round(fileSize / 1024)}KB). Reducing dimensions and quality...`);

        // Call ourselves again with smaller dimensions and lower quality
        return compressImageIteratively(
          Math.floor(width * widthReduction),
          Math.floor(height * heightReduction),
          quality - 0.05, // Reduce quality slightly
          iteration + 1
        );
      } else {
        // We can't shrink it any further without making it tiny or ugly
        console.warn('Could not reach target size, returning best possible compression.');
        return compressedResult.uri;
      }
    };

    // Start the iterative compression process
    return await compressImageIteratively(currentWidth, currentHeight, currentQuality, 0);

  } catch (error) {
    console.error("Unable to compress image : ", error);
    return null; // Something went wrong
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Code Breakdown for Beginners:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;targetMaxSizeKB&lt;/code&gt;: Our desired limit (100KB).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;currentQuality&lt;/code&gt;: Starts high (0.85 means 85% quality) and goes down.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;reductionFactor&lt;/code&gt;: Each time we retry, we multiply the width/height by 0.9 (making it 90% of its previous size).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;maxIterations&lt;/code&gt;: If our loop runs 7 times and the image is still too big, we just stop and use the best result we got. This prevents endless loops!&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;FileSystem.getInfoAsync(uri)&lt;/code&gt;: This is how we check the size of a file on the device. Super important!&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;manipulateAsync(imageUri, actions, options)&lt;/code&gt;: This is the core function.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;imageUri&lt;/code&gt;: The path to the image on the device. This will come from the image picker you're using (e.g. &lt;code&gt;expo-image-picker&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;actions&lt;/code&gt;: An array of things to do (like &lt;code&gt;{ resize: { width: ..., height: ... } }&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;options&lt;/code&gt;: How to save it (like &lt;code&gt;{ compress: ..., format: SaveFormat.WEBP }&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;SaveFormat.WEBP&lt;/code&gt;: This tells &lt;code&gt;expo-image-manipulator&lt;/code&gt; to save the image as a WebP file, which is usually smaller than JPG or PNG for the same quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The recursive function (&lt;code&gt;compressImageIteratively&lt;/code&gt;)&lt;/strong&gt;: This function calls itself if the image is still too big, but with slightly smaller dimensions and lower quality. It's like saying, "Okay, that didn't work. Let's try again, but be a bit more aggressive this time!"&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Use It in Your App
&lt;/h3&gt;

&lt;p&gt;You'd typically use &lt;code&gt;compressImageWithExpo&lt;/code&gt; after a user picks an image using &lt;code&gt;expo-image-picker&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's a simplified example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useState } from 'react';
import { View, Button, Image, ActivityIndicator, Text, Alert } from 'react-native';
import * as ImagePicker from 'expo-image-picker';
import { compressImageWithExpo } from './image-utils'; // Assuming your compression code is in image-utils.ts

export default function App() {
  const [pickedImageUri, setPickedImageUri] = useState&amp;lt;string | null&amp;gt;(null);
  const [compressedImageUri, setCompressedImageUri] = useState&amp;lt;string | null&amp;gt;(null);
  const [loading, setLoading] = useState(false);

  const pickImage = async () =&amp;gt; {
    // Request camera roll permissions
    const { status } = await ImagePicker.requestMediaLibraryPermissionsAsync();
    if (status !== 'granted') {
      Alert.alert('Sorry, we need camera roll permissions to make this work!');
      return;
    }

    let result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.Images,
      allowsEditing: true, // You can allow editing if needed
      aspect: [4, 3],
      quality: 1, // Get the highest quality original
    });

    if (!result.canceled) {
      const uri = result.assets[0].uri;
      setPickedImageUri(uri);
      setCompressedImageUri(null); // Clear previous compressed image
      await handleCompressImage(uri);
    }
  };

  const handleCompressImage = async (uri: string) =&amp;gt; {
    setLoading(true);
    try {
      // Extract a simple filename (e.g., "photo.jpg")
      const filename = uri.split('/').pop() || 'image.jpg'; 
      const compressedUri = await compressImageWithExpo(uri, filename);
      if (compressedUri) {
        setCompressedImageUri(compressedUri);
        Alert.alert('Success!', 'Image compressed to WebP and saved locally.');
      } else {
        Alert.alert('Error', 'Image compression failed.');
      }
    } catch (error) {
      console.error('Compression process failed:', error);
      Alert.alert('Error', 'An error occurred during compression.');
    } finally {
      setLoading(false);
    }
  };

  return (
    &amp;lt;View style={{ flex: 1, justifyContent: 'center', alignItems: 'center', padding: 20 }}&amp;gt;
      &amp;lt;Button title="Pick an image from camera roll" onPress={pickImage} /&amp;gt;

      {pickedImageUri &amp;amp;&amp;amp; (
        &amp;lt;View style={{ marginTop: 20 }}&amp;gt;
          &amp;lt;Text style={{ fontWeight: 'bold' }}&amp;gt;Original Image:&amp;lt;/Text&amp;gt;
          &amp;lt;Image source={{ uri: pickedImageUri }} style={{ width: 200, height: 150, marginTop: 10, borderWidth: 1, borderColor: 'gray' }} /&amp;gt;
          &amp;lt;Text style={{ fontSize: 12, color: 'gray' }}&amp;gt;{pickedImageUri.split('/').pop()}&amp;lt;/Text&amp;gt;
        &amp;lt;/View&amp;gt;
      )}

      {loading &amp;amp;&amp;amp; (
        &amp;lt;View style={{ marginTop: 20 }}&amp;gt;
          &amp;lt;ActivityIndicator size="large" color="#0000ff" /&amp;gt;
          &amp;lt;Text&amp;gt;Compressing image...&amp;lt;/Text&amp;gt;
        &amp;lt;/View&amp;gt;
      )}

      {compressedImageUri &amp;amp;&amp;amp; (
        &amp;lt;View style={{ marginTop: 20 }}&amp;gt;
          &amp;lt;Text style={{ fontWeight: 'bold' }}&amp;gt;Compressed WebP Image:&amp;lt;/Text&amp;gt;
          &amp;lt;Image source={{ uri: compressedImageUri }} style={{ width: 200, height: 150, marginTop: 10, borderWidth: 1, borderColor: 'green' }} /&amp;gt;
          {/* You can add a button here to upload this compressed image */}
          &amp;lt;Text style={{ fontSize: 12, color: 'green' }}&amp;gt;{compressedImageUri.split('/').pop()}&amp;lt;/Text&amp;gt;
        &amp;lt;/View&amp;gt;
      )}
    &amp;lt;/View&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Beyond Compression: Uploading to the Cloud
&lt;/h4&gt;

&lt;p&gt;Once your image is perfectly compressed, you'll usually want to upload it to a cloud storage service like Amazon S3, Google Cloud Storage, or similar.&lt;/p&gt;

&lt;h4&gt;
  
  
  Wrapping Up
&lt;/h4&gt;

&lt;p&gt;And that's it! You've taken your first steps into the world of image optimisation in Expo. By using &lt;code&gt;expo-image-manipulator&lt;/code&gt; and an iterative compression strategy, you can dramatically improve your app's performance and give your users a much snappier experience.&lt;/p&gt;

&lt;p&gt;Keep experimenting, and happy coding!&lt;/p&gt;

</description>
      <category>reactnative</category>
      <category>mobile</category>
      <category>performance</category>
    </item>
    <item>
      <title>Optimizing Next.js Docker Images with Standalone Mode</title>
      <dc:creator>Ango Jeffrey</dc:creator>
      <pubDate>Wed, 30 Jul 2025 10:28:53 +0000</pubDate>
      <link>https://dev.to/angojay/optimizing-nextjs-docker-images-with-standalone-mode-2nnh</link>
      <guid>https://dev.to/angojay/optimizing-nextjs-docker-images-with-standalone-mode-2nnh</guid>
<description>&lt;p&gt;Recently, I was discussing with the DevOps engineer at my company, and he asked me why the Docker image for our Next.js frontend application was so large. We're talking over 2GB large, and I didn't have a good answer. I knew that because we were taking advantage of Next.js features like middleware and redirects, we had to run the app as a server, which meant we couldn't serve the pages and assets as static exports. That would naturally make the image larger than a static export would be, but 2GB still felt excessive.&lt;br&gt;
So, over the weekend I did a deep dive into the possible causes of bloated Docker images and how to reduce the file size. After quite a bit of research and trial and error, I came across a very useful (and surprisingly under-discussed) feature: Next.js Standalone Mode. &lt;/p&gt;
&lt;h4&gt;
  
  
  What Is Standalone Mode?
&lt;/h4&gt;

&lt;p&gt;Standalone mode strips out the parts of your application that aren't needed at runtime, such as unused &lt;code&gt;node_modules&lt;/code&gt; files and unused component files and folders, leaving you only the bare-bones files needed to make your app work.&lt;/p&gt;
&lt;h4&gt;
  
  
  Implementing Standalone Mode Builds
&lt;/h4&gt;

&lt;p&gt;To enable standalone mode, you simply add this to your &lt;code&gt;next.config.ts&lt;/code&gt; (or &lt;code&gt;next.config.js&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const nextConfig: NextConfig = {
output: "standalone",

  /* ...other configurations */
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, go to your &lt;code&gt;Dockerfile&lt;/code&gt; and implement a multi stage build process to keep your image lean and efficient:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage 0: Base image definition (optional, but good for consistency)
FROM node:18-alpine AS base

# Stage 1: Dependencies Installation
FROM base AS deps

RUN apk add --no-cache libc6-compat
WORKDIR /app

# Install dependencies (including devDependencies needed for the build)
COPY package.json yarn.lock* ./
RUN yarn --frozen-lockfile --prefer-offline --no-audit 

# Stage 2: Application Build
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY ./ ./ 

# build app
RUN yarn build

# Stage 3: Production Runner
FROM node:18-alpine AS runner

USER node

WORKDIR /app

COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public


EXPOSE 3000

CMD ["node", "server.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ &lt;strong&gt;Note&lt;/strong&gt;: The &lt;code&gt;server.js&lt;/code&gt; file is generated automatically in the &lt;code&gt;.next/standalone&lt;/code&gt; directory when using standalone mode.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why this works
&lt;/h4&gt;

&lt;p&gt;This Docker image uses a multi-stage build process to package a Next.js app in a way that’s both efficient and production-ready. It starts from a lightweight Node.js Alpine image and installs the app’s dependencies in a separate stage to keep things clean. During the build stage, it compiles the app with &lt;code&gt;yarn build&lt;/code&gt;, taking advantage of Next.js’s standalone mode, which bundles only the files and dependencies the app actually needs to run. Finally, the runner stage creates a clean, production-ready image by copying only the standalone output (&lt;code&gt;.next/standalone&lt;/code&gt;, &lt;code&gt;.next/static&lt;/code&gt;, and &lt;code&gt;public&lt;/code&gt;) into a fresh Node.js Alpine image. It sets the working directory, switches to the non-root &lt;code&gt;node&lt;/code&gt; user for security, and runs the app with &lt;code&gt;node server.js&lt;/code&gt;. By leaving out unnecessary files like the full source code and unused &lt;code&gt;node_modules&lt;/code&gt;, we end up with a much smaller image that’s faster to build, deploy, and run, without sacrificing any of the dynamic features Next.js offers.&lt;/p&gt;

&lt;h4&gt;
  
  
  🚀 The Results
&lt;/h4&gt;

&lt;p&gt;Opting into Next.js standalone mode reduced our Docker image size by over 90%, from over 2GB to less than 200MB! &lt;br&gt;
Crazy, right? Standalone mode is a nifty, underrated feature that deserves way more attention, especially if you're deploying server-rendered web apps with Docker.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>webdev</category>
      <category>docker</category>
    </item>
    <item>
      <title>Managing Async State with TanStack Query</title>
      <dc:creator>Ango Jeffrey</dc:creator>
      <pubDate>Thu, 10 Apr 2025 19:37:50 +0000</pubDate>
      <link>https://dev.to/angojay/managing-async-state-with-tanstack-query-31k8</link>
      <guid>https://dev.to/angojay/managing-async-state-with-tanstack-query-31k8</guid>
      <description>&lt;p&gt;For years, Redux has been my go-to library for managing complex application state, especially when dealing with asynchronous operations. Its predictable state management and centralised store have been invaluable for most of my projects. But I have to admit Redux has its cons: a fair bit of boilerplate (though Redux Toolkit has reduced this somewhat), the need for thunks or sagas when dealing with async data, the manual work of keeping fetched data up to date, and, if we're being honest, it's too complex a tool for simple applications. &lt;br&gt;
Enter TanStack Query (formerly React Query), a powerful and elegant library specifically designed for managing, caching, synchronising, and updating server state in your web applications. When it comes to managing asynchronous data, it presents a strong alternative to Redux, requiring far less code and providing a more developer-friendly experience.   &lt;/p&gt;
&lt;h3&gt;
  
  
  The Pain Points of Redux for Async Operations
&lt;/h3&gt;

&lt;p&gt;Before diving into TanStack Query, let's go over why managing async state with Redux can be challenging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Significant Boilerplate: Fetching data typically involves defining action types, action creators (for request, success, and failure states), reducers to handle these actions, and often middleware like Redux Thunk or Redux Saga to orchestrate the asynchronous logic. This can lead to a lot of repetitive code.   &lt;/li&gt;
&lt;li&gt;Manual State Management: Developers are responsible for manually managing loading states, error states, and cached data within the Redux store. This requires careful implementation and can be prone to errors.&lt;/li&gt;
&lt;li&gt;Complex Data Synchronization: Ensuring data consistency across different components and handling background updates often requires intricate logic within reducers and middleware.&lt;/li&gt;
&lt;li&gt;Potential for Over-Engineering: For applications with primarily server-driven data, using the full power of Redux for every API call can feel like overkill.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Using TanStack Query for server-side state management
&lt;/h3&gt;

&lt;p&gt;TanStack Query takes a different approach. It focuses specifically on simplifying the process of fetching, caching, and updating data from your backend. Here's how it shines as a Redux alternative for async state:   &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declarative Data Fetching: Instead of manually dispatching actions and managing state transitions, TanStack Query lets you declaratively define your data-fetching logic using the &lt;code&gt;useQuery&lt;/code&gt; hook. You provide a unique query key and a function that fetches your data, and TanStack Query handles the rest.&lt;/li&gt;
&lt;li&gt;Automatic Caching and Deduping: TanStack Query intelligently caches fetched data in the background, preventing redundant API calls for the same data. It also automatically deduplicates concurrent requests for the same resource.   &lt;/li&gt;
&lt;li&gt;Background Updates and Refetching: The library provides mechanisms for automatic background updates based on various events (e.g., window focus, network reconnection) and offers easy ways to manually refetch data.   &lt;/li&gt;
&lt;li&gt;Optimistic Updates: TanStack Query facilitates optimistic updates, allowing you to immediately update the UI as if the mutation had succeeded, while handling potential errors in the background.&lt;/li&gt;
&lt;li&gt;Simplified Error Handling: Error states are automatically managed and readily accessible within the &lt;code&gt;useQuery&lt;/code&gt; result.&lt;/li&gt;
&lt;li&gt;Mutations for Data Modification: For POST, PUT, DELETE, and other data modification operations, TanStack Query offers the &lt;code&gt;useMutation&lt;/code&gt; hook, which simplifies handling these asynchronous actions and updating the cache accordingly.&lt;/li&gt;
&lt;/ul&gt;
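&lt;p&gt;To build some intuition for the caching and deduplication behaviour described above, here is a tiny framework-free sketch of the idea. This is purely illustrative; the &lt;code&gt;createQueryCache&lt;/code&gt; helper is made up for this example and is not how TanStack Query is actually implemented:&lt;/p&gt;

```javascript
// Illustrative sketch of query caching + request deduplication.
// NOT TanStack Query's real internals; just the core idea.
function createQueryCache() {
  const cache = new Map();    // queryKey -> resolved data
  const inFlight = new Map(); // queryKey -> pending promise

  return {
    async fetchQuery(queryKey, queryFn) {
      // 1. Serve cached data if we already have it
      if (cache.has(queryKey)) return cache.get(queryKey);
      // 2. Dedupe: concurrent callers share the same in-flight promise
      if (inFlight.has(queryKey)) return inFlight.get(queryKey);
      const promise = Promise.resolve()
        .then(queryFn)
        .then((data) => {
          cache.set(queryKey, data);
          inFlight.delete(queryKey);
          return data;
        });
      inFlight.set(queryKey, promise);
      return promise;
    },
    // Invalidation: drop the cached entry so the next fetch hits the network
    invalidate(queryKey) {
      cache.delete(queryKey);
    },
  };
}
```

&lt;p&gt;TanStack Query layers much more on top of this (stale times, background refetching, garbage collection), but this is the mental model: a cache keyed by query keys, with shared in-flight requests and explicit invalidation.&lt;/p&gt;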
&lt;h3&gt;
  
  
  Working with TanStack Query: a practical example
&lt;/h3&gt;

&lt;p&gt;First things first, create a query client and pass it into a query client provider wrapped around your app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  useQuery,
  QueryClient,
  QueryClientProvider,
} from '@tanstack/react-query'
import { getTodos, postTodo } from '../my-api'

// Create a client
const queryClient = new QueryClient()

function App() {
  return (
    // Provide the client to your App
    &amp;lt;QueryClientProvider client={queryClient}&amp;gt;
      &amp;lt;Main /&amp;gt;
    &amp;lt;/QueryClientProvider&amp;gt;
  )
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create a query to fetch a user's profile data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// queries/user

const useGetUserProfile = () =&amp;gt; {
  const GET_USER = async () =&amp;gt; {
    return await axios.get("/user/profile");
  };

  const query = useQuery({
    queryKey: ["getUserProfile"],
    queryFn: GET_USER,
    staleTime: Infinity,
  });

  return {
    ...query,
    data: query.data?.data,
  };
};


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can import our query anywhere in our app to access our user data. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// components/accountDetails

import Spinner from "../loader-utils/spinner";

const AccountDetails = () =&amp;gt; {
  const { data: user, isFetching } = useGetUserProfile();
  if (isFetching) {
    return &amp;lt;Spinner /&amp;gt;;
  }

  return (
    &amp;lt;div&amp;gt;
      &amp;lt;p&amp;gt;{user.name}&amp;lt;/p&amp;gt;
      &amp;lt;p&amp;gt;{user.role}&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;
  );
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here in our &lt;code&gt;AccountDetails&lt;/code&gt; component we access the successfully fetched user data, and even the loading state, directly from our query hook, without needing to manage any state ourselves. By setting the &lt;code&gt;staleTime&lt;/code&gt; field to &lt;code&gt;Infinity&lt;/code&gt;, we tell TanStack Query that this query can be cached indefinitely, so we don't have to worry about it refetching each time we call it in a component (although we can still configure it to behave that way if needed). If the data changes on the backend, all we need to do is invalidate our user profile query cache and refetch the data. Here is a quick example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// components/updateUser

const UpdateUser = ({ userName }) =&amp;gt; {
  const [name, setName] = useState(userName);
  // Access the client
  const queryClient = useQueryClient();
  // update user mutation
  const { mutateAsync } = useMutation({
    mutationFn: async (data) =&amp;gt; {
      return await axios.post("/user", data);
    },
  });
  const handleUpdateName = async () =&amp;gt; {
    try {
      await mutateAsync({
        userName: name,
      });
      // manually invalidate the cached getUserProfile query
      await queryClient.invalidateQueries({
        queryKey: ["getUserProfile"],
      });
    } catch (error) {
      // handle or report the error appropriately in a real app
    }
  };

  return (
    &amp;lt;div&amp;gt;
      &amp;lt;input value={name} onChange={(e) =&amp;gt; setName(e.target.value)} /&amp;gt;
      &amp;lt;button onClick={handleUpdateName}&amp;gt;Update Name&amp;lt;/button&amp;gt;
    &amp;lt;/div&amp;gt;
  );
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After making our mutation, we immediately invalidate our &lt;code&gt;getUserProfile&lt;/code&gt; query cache; this tells TanStack Query that we want this query to be refetched as the data is now stale.&lt;/p&gt;

&lt;p&gt;As you can see, this is far less code and complexity than the Redux equivalent; TanStack Query handles the loading and error states, caching, and even background refetching automatically.&lt;/p&gt;

&lt;p&gt;TanStack Query and Redux can still coexist peacefully in the same application. You might use TanStack Query for managing server state and Redux for managing global client-side UI state or other application-specific data. &lt;br&gt;
  &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>react</category>
      <category>programming</category>
    </item>
    <item>
      <title>Caching on the frontend</title>
      <dc:creator>Ango Jeffrey</dc:creator>
      <pubDate>Tue, 01 Apr 2025 06:07:21 +0000</pubDate>
      <link>https://dev.to/angojay/caching-on-the-frontend-227m</link>
      <guid>https://dev.to/angojay/caching-on-the-frontend-227m</guid>
      <description>&lt;p&gt;Caching, at its core, is about storing frequently accessed data closer to where it's needed, minimising the time and resources required to retrieve it. Think of it as keeping frequently used tools on your workbench instead of retrieving them from a distant toolbox every time.&lt;/p&gt;

&lt;p&gt;In the world of web development, speed is king. Users expect lightning-fast load times, and even a few extra milliseconds can impact engagement and conversion rates. This is where caching comes in, a powerful technique to optimize your frontend and deliver a smoother, more responsive user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is Caching Important on the Frontend?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;br&gt;
When we cache a resource, we reduce the time taken to access it the next time it’s needed, improving the user experience and reducing the strain on the server (since we only need to fetch it from the server once). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduced Bandwidth Usage&lt;/strong&gt;&lt;br&gt;
Caching static assets and API responses, and serving them the next time the user needs them, minimizes the data transferred, saving bandwidth for both the user and the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced user experience&lt;/strong&gt;&lt;br&gt;
Users experience snappier applications, leading to higher engagement and satisfaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Offline access&lt;/strong&gt;&lt;br&gt;
Mobile apps and progressive web apps (PWA) can leverage caching to provide limited functionality even without an internet connection or areas with poor data connectivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caching strategies on the frontend
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Browser caching&lt;/strong&gt;: Leveraging HTTP headers (e.g., &lt;code&gt;Cache-Control&lt;/code&gt;, &lt;code&gt;Expires&lt;/code&gt;, &lt;code&gt;ETag&lt;/code&gt; and &lt;code&gt;Last-Modified&lt;/code&gt;) to store static assets (images, CSS, JavaScript) in the user's browser. These headers instruct the browser how to cache resources; though similar, they have different meanings, and some take priority over others:&lt;/p&gt;

&lt;p&gt;a. &lt;code&gt;Cache-Control&lt;/code&gt;: Controls caching behavior, specifying how long a resource can be cached, whether it can be cached by intermediaries, and more. Common directives include &lt;code&gt;max-age&lt;/code&gt;, &lt;code&gt;no-cache&lt;/code&gt;, and &lt;code&gt;no-store&lt;/code&gt;. It has a higher priority over the &lt;code&gt;Expires&lt;/code&gt; header.&lt;br&gt;
b. &lt;code&gt;Expires&lt;/code&gt;: Specifies a date and time after which the resource is considered stale.&lt;br&gt;
c. &lt;code&gt;ETag&lt;/code&gt; and &lt;code&gt;Last-Modified&lt;/code&gt;: Used for conditional requests, allowing the browser to check whether a cached resource is still up to date before fetching it from the server.&lt;/p&gt;
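&lt;p&gt;To make the priority rules concrete, here is a simplified sketch of how a client might decide whether a cached response is still fresh. The &lt;code&gt;isFresh&lt;/code&gt; helper is invented for illustration; real browser caching (RFC 9111) handles many more cases:&lt;/p&gt;

```javascript
// Simplified freshness check: Cache-Control takes priority over Expires.
// Illustrative only; real HTTP caching has many more rules.
function isFresh(headers, responseTimeMs, nowMs) {
  const cacheControl = headers["cache-control"] || "";
  if (/\bno-store\b/.test(cacheControl)) return false; // never cached at all
  if (/\bno-cache\b/.test(cacheControl)) return false; // must revalidate before use
  const maxAge = cacheControl.match(/max-age=(\d+)/);
  if (maxAge) {
    // max-age wins over Expires when both are present
    return (nowMs - responseTimeMs) / 1000 < Number(maxAge[1]);
  }
  const expires = headers["expires"];
  if (expires) return nowMs < Date.parse(expires);
  return false; // no explicit freshness info: treat as stale in this sketch
}
```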

&lt;p&gt;&lt;strong&gt;Service workers&lt;/strong&gt;: These are JavaScript files that run in the background, independent of the web page. They act like personal assistants that help reduce the load on the main thread: while the main thread focuses on rendering your web page, service workers handle tasks like caching resources and fetching data on a separate thread. Service workers are non-blocking and fully asynchronous. You can use them to add offline capabilities and advanced caching strategies to your web applications.   &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LocalStorage / SessionStorage / IndexedDB&lt;/strong&gt;: These are storage solutions that let you store application data locally (in the browser) for faster access. Local storage and session storage are similar, except that localStorage data has no expiration time, while sessionStorage data is cleared when the page session ends, that is, when the page is closed. IndexedDB, on the other hand, offers much more capacity than local or session storage and lets you store not only more data but also more complex data, such as images, audio, and video files. IndexedDB is commonly used in web apps that need offline functionality, i.e. Progressive Web Apps (PWAs).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content Delivery Networks (CDNs)&lt;/strong&gt;: Distributing static files across many servers to lower latency. CDNs store a cached version of content from the origin server; when a user requests a resource, the CDN serves it from the nearest server, reducing latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom cache solutions&lt;/strong&gt;: You can implement a simple in-memory cache using a key-value collection such as a &lt;code&gt;Map&lt;/code&gt;. This is very fast, but the cache is lost on page refresh. &lt;/p&gt;
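&lt;p&gt;As a sketch, such an in-memory cache might look like this, optionally with a time-to-live per entry (the &lt;code&gt;createMemoryCache&lt;/code&gt; name and API are made up for this example):&lt;/p&gt;

```javascript
// Tiny in-memory cache backed by a Map, with an optional per-entry TTL.
// Everything here lives in JavaScript memory only, so it disappears on page refresh.
function createMemoryCache() {
  const store = new Map(); // key -> { value, expiresAt }

  return {
    set(key, value, ttlMs = Infinity) {
      store.set(key, { value, expiresAt: Date.now() + ttlMs });
    },
    get(key) {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (Date.now() > entry.expiresAt) {
        store.delete(key); // lazily evict expired entries
        return undefined;
      }
      return entry.value;
    },
    delete(key) {
      store.delete(key);
    },
  };
}
```

&lt;p&gt;Because the &lt;code&gt;Map&lt;/code&gt; lives in memory, this is the fastest option, but everything is gone as soon as the page reloads; pair it with localStorage or IndexedDB if the data needs to survive a refresh.&lt;/p&gt;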

&lt;h2&gt;
  
  
  Best Practices for Frontend Caching
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;Cache-Control&lt;/code&gt; headers effectively&lt;/strong&gt;: Set appropriate &lt;code&gt;max-age&lt;/code&gt; values for your resources.&lt;br&gt;
&lt;strong&gt;Leverage CDNs&lt;/strong&gt;: Distribute your static assets globally for faster delivery.  &lt;br&gt;
&lt;strong&gt;Use content hashing (e.g., file versioning)&lt;/strong&gt;: Append a hash to filenames to force the browser to fetch new versions of resources when they change. With modern applications we usually don't have to worry about this, as bundlers such as webpack take care of it for us.&lt;br&gt;
&lt;strong&gt;Cache API responses&lt;/strong&gt;: Store API data in the browser's cache to reduce server requests.   &lt;br&gt;
&lt;strong&gt;Monitor cache performance&lt;/strong&gt;: Use browser developer tools and performance monitoring tools to track cache hit rates and identify potential issues.   &lt;br&gt;
&lt;strong&gt;Always invalidate stale caches&lt;/strong&gt;: Make sure that when the content of a cached resource changes, the client receives the updated resource. If the underlying data changes on the server, the cached version in the browser becomes stale, potentially leading to inconsistencies and errors. This is where cache invalidation comes into play. There are several cache invalidation strategies to choose from, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time-Based Invalidation (TTL - Time To Live): This is the simplest approach, setting a fixed expiration time for cached data.&lt;/li&gt;
&lt;li&gt;Event-Based Invalidation: This approach invalidates the cache when a specific event occurs, such as a data update on the server. This requires a mechanism for the server to notify the client or cache when data changes, such as web sockets or push notifications.&lt;/li&gt;
&lt;li&gt;Version-Based Invalidation: When the server data changes, the API version number changes, or a version hash is added to the API URL. This forces the client to fetch the new version of the data and disregard the old cached version. One major limitation is that it requires strict version control of the API.&lt;/li&gt;
&lt;/ul&gt;
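&lt;p&gt;As a small sketch of the event-based approach, the client can subscribe to change notifications and drop the matching cache entries. The tiny emitter below is a stand-in for whatever push transport you actually use (WebSockets, server-sent events, etc.):&lt;/p&gt;

```javascript
// Event-based cache invalidation sketch: when the server announces a change,
// the client drops the stale cache entry so the next read refetches fresh data.
// The tiny emitter below stands in for a real push channel (WebSocket, SSE, ...).
function createEmitter() {
  const listeners = [];
  return {
    on(fn) { listeners.push(fn); },
    emit(payload) { listeners.forEach((fn) => fn(payload)); },
  };
}

const cache = new Map();
const serverEvents = createEmitter(); // pretend these events arrive over a WebSocket

// Invalidate whichever key the server says changed
serverEvents.on((key) => cache.delete(key));

cache.set("user:42", { name: "Ada" });
serverEvents.emit("user:42"); // server pushed: user 42 changed
// cache.get("user:42") is now undefined; the next read should refetch
```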

&lt;p&gt;In conclusion, caching is a powerful tool that can significantly improve the performance, scalability, and user experience of any application. By understanding the various caching strategies and techniques, and by carefully weighing the key considerations, you can effectively leverage caching to build faster, more efficient, and more reliable systems.&lt;/p&gt;


</description>
      <category>webdev</category>
      <category>programming</category>
      <category>performance</category>
    </item>
    <item>
      <title>Code Grooming: Principles for Long-Term Software Health</title>
      <dc:creator>Ango Jeffrey</dc:creator>
      <pubDate>Mon, 24 Mar 2025 09:09:53 +0000</pubDate>
      <link>https://dev.to/angojay/code-grooming-principles-for-long-term-software-health-jh5</link>
      <guid>https://dev.to/angojay/code-grooming-principles-for-long-term-software-health-jh5</guid>
      <description>&lt;p&gt;We've all been there: whether you're working on a fresh project you built from scratch or maintaining a brownfield one, at some point you realise the code has become a tangled mess. Over time, even code written with good intentions can become unmanageable and complex if not properly maintained. Like an overgrown garden, these codebases become difficult to navigate, update, and maintain.&lt;/p&gt;

&lt;p&gt;The truth is, software development is a journey of continuous learning. What seemed like the perfect solution a year ago might now appear clunky and inefficient. Let's face it: your future self, armed with more knowledge and experience, will inevitably find improvements to the code your past self wrote.&lt;/p&gt;

&lt;p&gt;Code grooming is essential. As software engineers, adding new features is nice and important too, but we also need to ensure that our existing code is clean, efficient, and follows current standards. Sure, it works, but why leave it at that when you can make it better? Obviously I'm not asking you to update your codebase weekly, or even biweekly; that would be impractical, and as professional software engineers we have to put the business's needs first. Nevertheless, we need to set aside time to groom our codebases, review past decisions, and improve the ones that aren't up to par.&lt;/p&gt;

&lt;p&gt;There are principles you can follow that make the process of updating a codebase easier and less daunting. After all, you want to make life a bit easier for the next person who has to work on the project. Here are a few of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;h4&gt;
  
  
  Minimize Interdependence (Low Coupling, High Cohesion):
&lt;/h4&gt;

&lt;p&gt;Instead of tightly coupled components that trigger cascading changes, aim for independent modules. Changes in one module should minimally impact others.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;
  
  
  Stick to a Style (Coding Standards):
&lt;/h4&gt;

&lt;p&gt;Maintaining a consistent and readable codebase is paramount for long-term health and collaborative efficiency. To achieve this, establish and enforce coding standards using tools like linters (e.g., ESLint), formatters (e.g., Prettier), and style guides. Consistent naming conventions, formatting, and overall structure greatly enhance readability, simplifying collaboration among developers. To make sure these standards are consistently applied, integrate these tools into your CI/CD pipeline, automating style checks and preventing deviations from the established guidelines.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;
  
  
  Explain the 'Why' (Helpful Comments):
&lt;/h4&gt;

&lt;p&gt;Comments should explain the "why" behind the code, not just the "what." Document architectural decisions, complex logic, and edge cases. Skip comments that merely restate what the code does; we can already see that.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;
  
  
  Write tests:
&lt;/h4&gt;

&lt;p&gt;No matter how you slice it, the best guarantee that a piece of code works is that the tests written for it pass. Tests matter to you as a developer because, without them, how would you know that your updates didn't break the codebase?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4&gt;
  
  
  Stay Secure:
&lt;/h4&gt;

&lt;p&gt;Don't forget to check for security issues when you're cleaning up your code. That means adding security checks to your process and keeping your dependencies up to date. Fix any security problems as soon as you find them. Doing this keeps your code safe and prevents issues down the line.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are just a few ways to keep your codebase healthy and make future updates easier. Remember, maintaining existing code is just as important as writing new features. It's about building sustainable software.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
