Aarav Joshi

Micro-Frontend Architecture Patterns: Essential Implementation Strategies for Scalable Web Applications

As organizations grow, so does the complexity of their web applications. I've seen firsthand how a single, monolithic frontend can become a bottleneck for development speed and innovation. This is where micro-frontend architecture comes into play. It’s a design approach that structures an application as a collection of smaller, semi-independent units. Each unit is owned by a different team, allowing for parallel work and specialized expertise.

The core idea is to extend the principles of microservices to the frontend. Instead of one large codebase, you have multiple smaller ones. This shift can dramatically improve developer autonomy and deployment frequency. I find it particularly useful for large companies where different teams work on distinct parts of a user experience, like a product page, a checkout flow, or a user dashboard.

Let's explore some of the most effective patterns for implementing this architecture.

One common method is build-time integration. In this pattern, separate micro-frontends are developed and built independently, but then combined into a single bundle during the compilation process. This is often achieved using advanced module bundling techniques.

A powerful tool here is Webpack Module Federation. Strictly speaking, it straddles build time and run time: a host application loads code from a remote application at runtime, while the contract for shared dependencies is negotiated during the build. Sharing libraries like React or Vue this way prevents each micro-frontend from shipping its own copy.

Here’s a practical example of how you might configure the host application, often called the shell or container.

// webpack.config.js for the host application
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');

module.exports = {
  mode: 'development',
  devServer: {
    port: 3000,
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        mfNavigation: 'navigation@http://localhost:3001/remoteEntry.js',
        mfDashboard: 'dashboard@http://localhost:3002/remoteEntry.js',
      },
      shared: ['react', 'react-dom'],
    }),
  ],
};

The remote application, say the navigation component, would have its own configuration exposing its entry point.

// webpack.config.js for the remote navigation application
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');

module.exports = {
  mode: 'development',
  devServer: {
    port: 3001,
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'navigation',
      filename: 'remoteEntry.js',
      exposes: {
        './Navigation': './src/components/Navigation',
      },
      shared: ['react', 'react-dom'],
    }),
  ],
};

In the host application, you can then use the remote module as if it were a local dependency.

// In the host's App.jsx
import React from 'react';

const RemoteNavigation = React.lazy(() => import('mfNavigation/Navigation'));

function App() {
  return (
    <div>
      <React.Suspense fallback="Loading Navigation...">
        <RemoteNavigation />
      </React.Suspense>
      <h1>Main Host Application Content</h1>
    </div>
  );
}

export default App;

This approach offers a good balance of independence and optimization. Because the remote entry file is fetched at runtime, a remote team can redeploy its bundle without the host rebuilding. The coupling shows up in the shared dependencies: upgrading a shared library such as React typically requires coordinated rebuilds across the host and all remotes, which can slow down larger changes.

A more dynamic alternative is run-time composition. This pattern assembles the application directly in the user's browser. A lightweight shell application is responsible for fetching the correct micro-frontends and rendering them into the DOM. The key advantage here is true independent deployment; you can update a micro-frontend without touching the shell or other frontends.

A simple way to implement this is by dynamically injecting script tags.

// shell application's loader function
function loadMicroFrontend(containerId, scriptUrl, globalVar) {
  return new Promise((resolve, reject) => {
    const scriptId = `micro-frontend-script-${containerId}`;

    // Check if the script is already loaded
    if (window[globalVar]) {
      renderMicroFrontend(containerId, globalVar);
      resolve();
      return;
    }

    const script = document.createElement('script');
    script.id = scriptId;
    script.src = scriptUrl;
    script.onload = () => {
      renderMicroFrontend(containerId, globalVar);
      resolve();
    };
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

function renderMicroFrontend(containerId, globalVar) {
  const container = document.getElementById(containerId);
  const renderFunction = window[globalVar];
  if (renderFunction && typeof renderFunction === 'function') {
    container.innerHTML = ''; // Clear container
    renderFunction(container);
  }
}

// Usage: Loading a dashboard component
loadMicroFrontend('dashboard-container', 'https://dashboard.example.com/app.js', 'renderDashboard');

The micro-frontend itself would be built as a self-contained bundle that exposes a global function.

// dashboard micro-frontend's bundle
function renderDashboard(container) {
  const root = ReactDOM.createRoot(container);
  root.render(<DashboardApp />);
}

// Assign to a global variable so the shell can find it
window.renderDashboard = renderDashboard;

This method provides maximum flexibility but introduces complexity around dependency management and version conflicts. You need a robust strategy for handling different versions of frameworks if teams are allowed to choose their own technology stacks.
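One pragmatic safeguard is for the shell to check, before sharing a library with a remote, whether the remote's declared requirement is compatible with the version the shell already loaded. The sketch below is illustrative: the manifest shape and the caret-style version check are assumptions of this example, not a standard.

```javascript
// Minimal compatibility check between the libraries a shell provides and a
// micro-frontend's declared requirements. The formats here are hypothetical.
function parseVersion(v) {
  const [major, minor, patch] = v.split('.').map(Number);
  return { major, minor, patch };
}

// Treats a requirement like "^18.2.0" as "same major, at least this minor.patch".
function isCompatible(required, provided) {
  const req = parseVersion(required.replace(/^\^/, ''));
  const prov = parseVersion(provided);
  if (prov.major !== req.major) return false;
  if (prov.minor !== req.minor) return prov.minor > req.minor;
  return prov.patch >= req.patch;
}

// If the shell's copy is incompatible, let the remote load its own copy
// in isolation instead of breaking it with the wrong version.
function resolveSharedDependency(name, required, shellProvides) {
  const provided = shellProvides[name];
  if (provided && isCompatible(required, provided)) {
    return { strategy: 'shared', version: provided };
  }
  return { strategy: 'isolated', version: required };
}
```

In practice you would reach for a real semver library, but the decision itself — share when compatible, isolate when not — is the core of the strategy.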

For teams seeking a framework-agnostic solution, Web Components offer a compelling pattern. Web Components are a set of web platform APIs that allow you to create new custom, reusable, and encapsulated HTML tags. They work across all modern browsers and with any JavaScript framework.

The beauty of using Web Components for micro-frontends lies in their native isolation. The Shadow DOM scopes styles and markup, preventing CSS conflicts.

Here's how you might build a user profile micro-frontend as a Web Component.

<!-- In your shell application's HTML -->
<user-profile user-id="456"></user-profile>

The implementation of the custom element would handle its own data fetching and rendering.

// user-profile micro-frontend code
class UserProfile extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' }); // Create a shadow root for isolation
  }

  static get observedAttributes() {
    return ['user-id'];
  }

  connectedCallback() {
    // Show a placeholder until the data arrives
    this.shadowRoot.innerHTML = '<div>Loading profile…</div>';
    const userId = this.getAttribute('user-id');
    if (userId) {
      this.loadUserData(userId);
    }
  }

  attributeChangedCallback(name, oldValue, newValue) {
    // This also fires for the initial attribute before connection,
    // so guard with isConnected to avoid a duplicate fetch
    if (name === 'user-id' && oldValue !== newValue && this.isConnected) {
      this.loadUserData(newValue);
    }
  }

  async loadUserData(userId) {
    try {
      const response = await fetch(`/api/users/${userId}`);
      const userData = await response.json();
      this.updateProfile(userData);
    } catch (error) {
      this.showError('Failed to load user data.');
    }
  }

  updateProfile(user) {
    this.shadowRoot.innerHTML = `
      <style>
        .profile { border: 1px solid #ccc; padding: 1rem; font-family: sans-serif; }
        .name { font-weight: bold; }
        .email { color: #666; }
      </style>
      <div class="profile">
        <div class="name">${user.name}</div>
        <div class="email">${user.email}</div>
      </div>
    `;
  }

  showError(message) {
    this.shadowRoot.innerHTML = `<div style="color: red;">${message}</div>`;
  }
}

// Define the custom element
customElements.define('user-profile', UserProfile);

This approach is excellent for integration but may require polyfills for older browsers. It also means each team must work within the constraints of the Web Components standard, which can be less feature-rich than a full framework like React or Angular.

Sometimes, the best composition happens on the server. Server-side composition involves aggregating HTML fragments from multiple backend services before sending the final page to the client. This can lead to faster initial page loads and is better for SEO, as the complete content is available in the initial HTML response.

Techniques like Edge Side Includes (ESI) are designed for this. A reverse proxy or CDN edge server assembles the page from different origins.

Here is a conceptual example using Nginx with SSI (Server Side Includes), which is similar to ESI.

# Nginx configuration for server-side composition
server {
    listen 80;
    server_name example.com;

    # Enable SSI
    ssi on;

    location /product-page {
        # This location fetches the main product content
        proxy_pass http://product-service:8000;
    }

    location /header {
        # This location fetches the global header
        proxy_pass http://ui-components-service:8001/header;
    }

    location /recommendations {
        # This location fetches product recommendations
        proxy_pass http://recommendation-service:8002;
    }

    # The main page template that includes the fragments.
    # Note: depending on the nginx version, the SSI filter may not run
    # over a `return` body; in production you would serve this template
    # as a static file (root + index) instead. `default_type` is used
    # because the SSI filter keys off the response's content type.
    location / {
        default_type text/html;
        return 200 '
            <!DOCTYPE html>
            <html>
            <head><title>Product Page</title></head>
            <body>
                <!--# include virtual="/header" -->
                <main>
                    <!--# include virtual="/product-page" -->
                </main>
                <aside>
                    <!--# include virtual="/recommendations" -->
                </aside>
            </body>
            </html>
        ';
    }
}

Each service behind the proxy (product-service, ui-components-service, etc.) is responsible for generating a fragment of HTML. The Nginx server stitches them together. The benefit is performance and resilience; if the recommendations service is slow or fails, the rest of the page can still be displayed. The downside is that it introduces complexity on the server side and requires a sophisticated infrastructure.
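Where a CDN or proxy supports Edge Side Includes natively, the same composition can be written declaratively in the page template itself. This is a conceptual fragment following the ESI 1.0 language; the fragment URLs are illustrative, and `onerror="continue"` tells the edge server to skip a failed fragment rather than fail the whole page.

```html
<!-- Conceptual ESI template, processed at the edge by an ESI 1.0 processor -->
<html>
  <body>
    <esi:include src="/header" onerror="continue"/>
    <main>
      <esi:include src="/product-page" onerror="continue"/>
    </main>
    <aside>
      <esi:include src="/recommendations" onerror="continue"/>
    </aside>
  </body>
</html>
```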

For many large applications, the most logical split is by route. In this pattern, the shell application acts as a router. It examines the current URL and decides which micro-frontend is responsible for that section of the site. For example, everything under /admin is handled by an admin team's application, while everything under /shop is handled by an e-commerce team's application.

This is often implemented with a simple JavaScript router in the shell.

// Route-based micro-frontend loader in the shell
const applications = {
  '/shop': {
    name: 'shop',
    activeWhen: (pathname) => pathname.startsWith('/shop'),
    loader: () => import('http://shop.example.com/shop-app.js'),
    container: '#app-container'
  },
  '/admin': {
    name: 'admin',
    activeWhen: (pathname) => pathname.startsWith('/admin'),
    loader: () => import('http://admin.example.com/admin-app.js'),
    container: '#app-container'
  },
  '/': {
    name: 'home',
    activeWhen: (pathname) => pathname === '/',
    loader: () => import('http://home.example.com/home-app.js'),
    container: '#app-container'
  }
};

let loadedApp = null; // { name, module } of the currently mounted app

async function loadApp() {
  const pathname = window.location.pathname;
  const appConfig = Object.values(applications).find(app => app.activeWhen(pathname));

  if (!appConfig) {
    console.error('No application configured for this route:', pathname);
    return;
  }

  const container = document.querySelector(appConfig.container);

  // Unmount the previous app if a different one is becoming active
  if (loadedApp && loadedApp.name !== appConfig.name) {
    loadedApp.module.unmount?.(container);
    loadedApp = null;
  }

  try {
    const appModule = await appConfig.loader();
    // The micro-frontend must expose mount/unmount lifecycle functions
    appModule.mount(container);
    loadedApp = { name: appConfig.name, module: appModule };
  } catch (error) {
    console.error(`Failed to load application ${appConfig.name}:`, error);
  }
}

// Listen for navigation events
window.addEventListener('popstate', loadApp);
window.addEventListener('pushstate', loadApp); // You'd need to intercept pushState calls

// Initial load
loadApp();
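One detail worth spelling out: the browser fires no `pushstate` event, so single-page navigations have to be intercepted by wrapping `history.pushState` yourself. A minimal sketch, written against a history-like object so the idea is testable outside the browser:

```javascript
// Wrap a history-like object so programmatic navigations trigger a callback.
// In the browser you would pass window.history; a stub works for testing.
function interceptPushState(history, onNavigate) {
  const original = history.pushState.bind(history);
  history.pushState = (state, title, url) => {
    original(state, title, url); // perform the real navigation first
    onNavigate(url);             // then re-run the route matcher
  };
}

// In the shell: interceptPushState(window.history, loadApp);
```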

Each micro-frontend would need to adhere to a lifecycle contract, exposing mount and unmount functions.

// shop-app.js micro-frontend
let root = null;

export function mount(container) {
  root = ReactDOM.createRoot(container);
  root.render(<ShopApp />);
}

export function unmount(container) {
  if (root) {
    root.unmount();
    root = null;
  }
}

This pattern creates clear boundaries and ownership. The challenge is managing global state and ensuring a consistent look and feel across the different sections.

Speaking of state, managing it across independent micro-frontends is a critical challenge. A shared state management pattern helps coordinate data without creating tight dependencies. A common solution is a global event bus or a lightweight state container.

An event bus allows micro-frontends to communicate in a decoupled way. One frontend can emit an event, and others can listen and react.

// A simple event bus implementation
class EventBus {
  constructor() {
    this.events = {};
  }

  on(event, callback) {
    if (!this.events[event]) {
      this.events[event] = [];
    }
    this.events[event].push(callback);
  }

  off(event, callback) {
    if (!this.events[event]) return;
    this.events[event] = this.events[event].filter(cb => cb !== callback);
  }

  emit(event, data) {
    if (this.events[event]) {
      this.events[event].forEach(callback => {
        try {
          callback(data);
        } catch (error) {
          console.error(`Error in event listener for ${event}:`, error);
        }
      });
    }
  }
}

// Instantiate a global event bus
window.microFrontendBus = new EventBus();

A micro-frontend, like a shopping cart icon, can listen for events.

// In the header micro-frontend
window.microFrontendBus.on('cartUpdated', (cartItems) => {
  updateCartIcon(cartItems.length);
});

The shopping cart micro-frontend would emit the event when an item is added.

// In the cart micro-frontend
function addToCart(productId) {
  // ... logic to add product to cart
  window.microFrontendBus.emit('cartUpdated', updatedCartItems);
}

For more complex state, you might use a shared state container like a Redux store, but it must be carefully versioned and managed to avoid breaking changes.
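If events alone aren't enough, a tiny framework-agnostic store can live in the shell and be consumed by every micro-frontend. This is a minimal sketch, not Redux itself; the API names are illustrative assumptions:

```javascript
// A minimal shared store: one state object plus subscriptions.
// The shell attaches a single instance to window so every
// micro-frontend observes the same state.
function createSharedStore(initialState) {
  let state = initialState;
  const listeners = new Set();

  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };           // shallow merge
      listeners.forEach((listener) => listener(state));
    },
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener);    // unsubscribe handle
    },
  };
}

// Shell: window.sharedStore = createSharedStore({ user: null, cartCount: 0 });
```

Keeping the surface this small is deliberate: the narrower the shared contract, the less likely one team's change breaks another team's frontend.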

Finally, the true power of micro-frontends is realized through independent deployment pipelines. Each micro-frontend should have its own CI/CD process, allowing teams to test, version, and release their work without being blocked by others.

A typical pipeline for a micro-frontend might look like this in a GitHub Actions workflow.

# .github/workflows/deploy.yml for a 'navigation' micro-frontend
name: Deploy Navigation

on:
  push:
    branches: [ main ]
    paths: [ 'packages/navigation/**' ]  # Only trigger on changes in this folder

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
          cache-dependency-path: 'packages/navigation/package-lock.json'
      - run: npm ci
        working-directory: ./packages/navigation
      - run: npm test
        working-directory: ./packages/navigation

  build-and-deploy:
    runs-on: ubuntu-latest
    needs: test
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
          cache-dependency-path: 'packages/navigation/package-lock.json'
      - run: npm ci
        working-directory: ./packages/navigation
      - run: npm run build
        working-directory: ./packages/navigation
      - name: Deploy to S3
        uses: jakejarvis/s3-sync-action@v0.5.1
        with:
          args: --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_NAVIGATION_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          SOURCE_DIR: 'packages/navigation/dist'

This pipeline ensures that only the navigation component is deployed when its code changes. Other teams can deploy their components on their own schedules.
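Independent deployments raise one practical question: how does the shell discover the latest bundle URL for each micro-frontend without being rebuilt? A common answer is a small deployment manifest that each pipeline updates on release and the shell fetches at startup. The manifest shape below is a hypothetical convention, not a standard:

```javascript
// Resolve a micro-frontend's current bundle URL from a deployment manifest.
// In the shell, the manifest would be fetched from a well-known URL at
// startup; its shape here is an illustrative assumption.
function resolveBundleUrl(manifest, appName) {
  const entry = manifest.apps[appName];
  if (!entry) {
    throw new Error(`No deployment manifest entry for "${appName}"`);
  }
  return `${entry.baseUrl}/${entry.version}/${entry.entrypoint}`;
}

// Example manifest the navigation pipeline would rewrite on each release:
const manifest = {
  apps: {
    navigation: {
      baseUrl: 'https://cdn.example.com/navigation',
      version: '1.4.2',
      entrypoint: 'remoteEntry.js',
    },
  },
};
```

Because only the manifest changes on release, the shell always loads the freshest bundle while old versions stay addressable for rollback.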

Choosing the right pattern depends on your team's structure, technology preferences, and performance requirements. Build-time integration offers simplicity and optimized bundles. Run-time composition provides maximum deployment independence. Web Components ensure long-term compatibility. Server-side composition improves perceived performance. Route-based splitting aligns with organizational boundaries. Shared state management maintains consistency, and independent deployments enable team velocity.

The goal is not to choose one pattern exclusively but to find the right combination that supports your application's growth. I've found that starting with a route-based split and run-time composition is a practical first step for many organizations. It establishes clear ownership and allows teams to experiment and learn before adopting more complex patterns. The key is to prioritize clear contracts and communication between teams, ensuring that the sum of the parts creates a cohesive and reliable user experience.
