Nithin Bharadwaj

**Serverless Architecture Explained: Build Scalable Apps with Event-Driven Functions and Zero Server Management**

As a best-selling author, I invite you to explore my books on Amazon. Don't forget to follow me on Medium and show your support. Thank you! Your support means the world!

Let me explain serverless architecture as if we're building with Lego bricks, except the cloud provides the table and finds all the pieces for you. You just focus on snapping the bricks together.

In the simplest terms, serverless means you write code, and a cloud provider runs it. You don't rent a virtual computer (a server) that's on all the time. Instead, you provide a function—a block of logic—and the cloud executes it only when needed. When no one is using your app, the cost drops to nearly zero. When a thousand people show up at once, the system handles it automatically.

It's a shift in thinking. You move from asking, "How many servers do I need?" to asking, "What event should trigger my code?"

The Function as a Building Block

The most basic pattern is the Function as a Service, or FaaS. Think of it as a single-purpose piece of code that wakes up, does its job, and goes back to sleep. AWS Lambda, Google Cloud Functions, and Azure Functions are examples.

Here’s a real scenario: a user uploads a profile picture. We need to resize it. Instead of having a server running 24/7 waiting for uploads, we create a function that springs to life only when a new image lands in cloud storage.

// This function lives in AWS Lambda.
// It is triggered automatically when a new file is added to an S3 bucket named 'user-uploads'.
const AWS = require('aws-sdk'); // v2 SDK; on Node 18+ runtimes only SDK v3 is preinstalled, so bundle this yourself
const sharp = require('sharp'); // An image processing library (package it with Linux binaries for Lambda)

const s3 = new AWS.S3();

exports.handler = async (event) => {
  console.log('Event received:', JSON.stringify(event));

  // The 'event' contains details about the new image file.
  const bucketName = event.Records[0].s3.bucket.name;
  const fileKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  console.log(`Processing: ${fileKey} from ${bucketName}`);

  try {
    // 1. Get the original image from storage.
    const originalImage = await s3.getObject({
      Bucket: bucketName,
      Key: fileKey
    }).promise();

    // 2. Process it: create a 150x150 pixel thumbnail.
    const thumbnailBuffer = await sharp(originalImage.Body)
      .resize(150, 150, { fit: 'cover' }) // Crop to fit the square
      .jpeg({ quality: 85 })
      .toBuffer();

    // 3. Save the thumbnail back to storage, in a 'thumbnails' folder.
    const thumbnailKey = `thumbnails/${fileKey.split('/').pop()}`;

    await s3.putObject({
      Bucket: bucketName,
      Key: thumbnailKey,
      Body: thumbnailBuffer,
      ContentType: 'image/jpeg'
    }).promise();

    console.log(`Thumbnail saved: ${thumbnailKey}`);
    return { status: 'success', key: thumbnailKey };

  } catch (error) {
    console.error('Processing failed:', error);
    throw error; // This lets the cloud provider know the function failed.
  }
};

The magic is in the setup, not this code. In the cloud console, I would say, "Hey AWS, run this handler function every time a new .jpg or .png file appears in the user-uploads bucket." Ideally the trigger is scoped to an uploads prefix (or the thumbnails go to a separate bucket), so the function doesn't re-trigger on its own output. I deploy this code and forget about the server. The cloud manages its execution, logging, and scaling.
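That same setup can live in code instead of the console. Here's a minimal sketch using the Serverless Framework; the bucket name, prefix, and suffix rules are assumptions for this example:

# serverless.yml (sketch) — subscribes the thumbnail function to S3 upload events
functions:
  createThumbnail:
    handler: handler.handler          # the exports.handler shown above
    events:
      - s3:
          bucket: user-uploads        # assumed bucket name
          event: s3:ObjectCreated:*
          existing: true              # attach to a bucket that already exists
          rules:
            - prefix: uploads/        # keeps the function from firing on its own thumbnails/ output
            - suffix: .jpg            # add a second event entry for .png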

Connecting Functions with Events

Functions in isolation aren't very useful. Their power comes from being connected. This is the event-driven pattern. One function finishes a task and sends out a message: "I'm done, and here's what I did." Other functions are listening for that message to start their own work.

Imagine an e-commerce system. A placeOrder function runs when a customer clicks "buy." Its job isn't to do everything. It just validates the order and saves it to the database. Then, it shouts into the system, "A new order was placed!" Other functions, which have no direct link to the first, hear this shout and act.

# This is a Serverless Framework configuration file (serverless.yml).
# It defines several functions and what triggers them, all deployed together.
service: online-store

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1

custom:
  ordersTable: online-store-orders   # the table name referenced below via ${self:custom.ordersTable}

functions:
  # 1. HTTP Endpoint: A customer places an order via a web page.
  placeOrder:
    handler: src/orders.placeOrder
    events:
      - httpApi:
          path: /order
          method: POST
    environment:
      ORDERS_TABLE: ${self:custom.ordersTable} # Links to a database table

  # 2. Reactor Function: Listens for the saved order and sends a confirmation email.
  sendOrderEmail:
    handler: src/notifications.sendOrderEmail
    events:
      - stream:
          type: dynamodb
          arn:
            Fn::GetAtt: [OrdersTable, StreamArn] # Listens to changes on the Orders table

  # 3. Reactor Function: Listens for the same event to update the inventory count.
  updateInventory:
    handler: src/inventory.updateInventory
    events:
      - eventBridge: # Uses a central event bus
          pattern:
            source: ["orders.app"]
            detail-type: ["OrderPlaced"]

resources:
  Resources:
    OrdersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.ordersTable}
        BillingMode: PAY_PER_REQUEST # Serverless pricing for the database too
        AttributeDefinitions:
          - AttributeName: orderId
            AttributeType: S
        KeySchema:
          - AttributeName: orderId
            KeyType: HASH
        StreamSpecification:
          StreamViewType: NEW_IMAGE # This creates the stream that sendOrderEmail listens to

Here’s the code for the placeOrder function, showing how it publishes an event.

// src/orders.js
const { DynamoDBClient, PutItemCommand } = require('@aws-sdk/client-dynamodb');
const { EventBridgeClient, PutEventsCommand } = require('@aws-sdk/client-eventbridge');

const dbClient = new DynamoDBClient();
const eventBridgeClient = new EventBridgeClient();

exports.placeOrder = async (event) => {
  const body = JSON.parse(event.body);
  const { userId, items } = body;
  const orderId = `ord_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;

  // 1. Save the core order to the database.
  const dbCommand = new PutItemCommand({
    TableName: process.env.ORDERS_TABLE,
    Item: {
      orderId: { S: orderId },
      userId: { S: userId },
      items: { S: JSON.stringify(items) },
      status: { S: 'PLACED' },
      createdAt: { S: new Date().toISOString() }
    }
  });
  await dbClient.send(dbCommand);

  // 2. Publish a high-level event to the event bus.
  const eventCommand = new PutEventsCommand({
    Entries: [
      {
        Source: 'orders.app',
        DetailType: 'OrderPlaced',
        Detail: JSON.stringify({ orderId, userId, itemCount: items.length }),
        EventBusName: 'default'
      }
    ]
  });
  await eventBridgeClient.send(eventCommand);

  return {
    statusCode: 201,
    body: JSON.stringify({ orderId, message: 'Order received' })
  };
};

The updateInventory function is completely separate. It doesn't know about the placeOrder function. It only knows to listen for events with Source: 'orders.app'.

// src/inventory.js
exports.updateInventory = async (event) => {
  // The event parameter contains the message from EventBridge.
  // EventBridge delivers the Detail payload to Lambda already parsed, so no JSON.parse is needed.
  const detail = event.detail;
  console.log(`Reducing inventory for order ${detail.orderId}`);

  // Here you would have logic to update a separate inventory database.
  // For each item in the order, decrement the stock count.
  // This function's failure DOES NOT roll back the order.
  // They are decoupled. You'd handle failures here with retries or alerts.

  // Simulate a database update.
  await updateStockLevels(detail);

  console.log('Inventory updated.');
};

async function updateStockLevels(orderDetail) {
  // Your database logic here.
  // This is where you'd use DynamoDB, Amazon RDS Proxy, or another data store.
}

This loose coupling is a superpower. You can change how emails are sent without touching the order code. You can add a new function that triggers a chatbot notification just by having it listen to the OrderPlaced event.
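For example, that chatbot notification would be just one more entry under the existing functions: section and a small handler; nothing in placeOrder changes. A sketch, assuming a hypothetical notifyChat handler and a CHAT_WEBHOOK_URL environment variable:

  # serverless.yml — a new, independent listener on the same event
  notifyChat:
    handler: src/notifications.notifyChat
    events:
      - eventBridge:
          pattern:
            source: ["orders.app"]
            detail-type: ["OrderPlaced"]

// src/notifications.js (sketch)
exports.notifyChat = async (event) => {
  const { orderId, itemCount } = event.detail; // already parsed by EventBridge

  // Post a message to a chat webhook; the URL is a placeholder for this example.
  // fetch is available globally on the nodejs18.x runtime.
  await fetch(process.env.CHAT_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: `Order ${orderId} placed with ${itemCount} item(s)` })
  });
};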

Working with Data in a Serverless Way

Traditional databases can be a problem. They often rely on persistent connections. If a thousand functions start at once, they might try to make a thousand new database connections, overwhelming it. We need serverless-friendly data patterns.

One approach is to use purpose-built, serverless databases. Amazon DynamoDB or Google Firestore handle massive scale and connection spikes. Another is to use connection pooling services for traditional databases, like Amazon RDS Proxy.
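If a relational database is unavoidable, the trick is to create a small connection pool outside the handler (so warm invocations reuse it) and point it at the proxy endpoint rather than the database itself. A sketch using the mysql2 package; the endpoint, credentials, and route shape are placeholders:

// orderQueries.js (sketch) — one small pool per execution environment, fronted by RDS Proxy
const mysql = require('mysql2/promise');

// Created once per cold start; warm invocations reuse the same pool.
const pool = mysql.createPool({
  host: process.env.DB_PROXY_ENDPOINT, // the RDS Proxy endpoint, not the database host
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: 'store',
  connectionLimit: 2 // keep per-container connections low; the proxy handles the real pooling
});

exports.handler = async (event) => {
  const userId = event.pathParameters.userId;
  const [rows] = await pool.query('SELECT * FROM orders WHERE user_id = ?', [userId]);
  return { statusCode: 200, body: JSON.stringify(rows) };
};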

Let's look at a DynamoDB pattern called single-table design. It sounds odd, but it's powerful for serverless. You store different types of data (Users, Orders, Products) in one table, distinguished by a prefix in the key.

// src/data/orderRepository.js
const { DynamoDBClient, TransactWriteItemsCommand, QueryCommand } = require('@aws-sdk/client-dynamodb');
const { marshall, unmarshall } = require('@aws-sdk/util-dynamodb');

const client = new DynamoDBClient();

async function createOrderWithUserUpdate(orderId, userId, productSkus) {
  // We use a transaction to ensure both writes succeed or both fail.
  const command = new TransactWriteItemsCommand({
    TransactItems: [
      {
        Put: {
          TableName: 'EcommerceData',
          // Composite primary key: PK (Partition Key) and SK (Sort Key)
          // This stores the Order.
          Item: marshall({
            PK: `USER#${userId}`,    // All data for this user is grouped together.
            SK: `ORDER#${orderId}`,  // The specific order.
            EntityType: 'ORDER',
            OrderId: orderId,
            UserId: userId,
            Skus: productSkus,
            Status: 'CREATED',
            CreatedAt: new Date().toISOString()
          }),
          // Ensure we don't overwrite an existing order with the same ID.
          ConditionExpression: 'attribute_not_exists(PK)'
        }
      },
      {
        Update: {
          TableName: 'EcommerceData',
          // This updates the User's profile record.
          Key: marshall({
            PK: `USER#${userId}`,
            SK: 'PROFILE' // The user's main profile record.
          }),
          UpdateExpression: 'SET #oc = if_not_exists(#oc, :zero) + :inc, #lod = :now',
          ExpressionAttributeNames: {
            '#oc': 'OrderCount',
            '#lod': 'LastOrderDate'
          },
          ExpressionAttributeValues: marshall({
            ':inc': 1,
            ':zero': 0,
            ':now': new Date().toISOString()
          })
        }
      }
    ]
  });

  try {
    await client.send(command);
    console.log(`Order ${orderId} created for user ${userId}.`);
  } catch (error) {
    console.error('Transaction failed:', error);
    // Here you would implement a retry or fallback logic.
    throw new Error('Failed to create order.');
  }
}

// To get all orders for a user, the query is very efficient.
async function getOrdersForUser(userId) {
  // Reuses the client and QueryCommand imported at the top of this module.

  const command = new QueryCommand({
    TableName: 'EcommerceData',
    KeyConditionExpression: 'PK = :pk and begins_with(SK, :prefix)',
    ExpressionAttributeValues: marshall({
      ':pk': `USER#${userId}`,
      ':prefix': 'ORDER#'
    })
  });

  const result = await client.send(command);
  return result.Items ? result.Items.map(item => unmarshall(item)) : [];
}

This pattern keeps related data together for fast queries and minimizes the number of network calls, which is crucial for fast, cheap function execution.
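To show where these helpers plug in, here's a hypothetical order-history function behind an HTTP route; it assumes the repository module exports getOrdersForUser and that the route looks something like GET /users/{userId}/orders:

// src/api/orderHistory.js (sketch)
const { getOrdersForUser } = require('../data/orderRepository'); // assumes the repository exports this helper

exports.handler = async (event) => {
  const userId = event.pathParameters.userId; // from a route like GET /users/{userId}/orders

  const orders = await getOrdersForUser(userId);

  return {
    statusCode: 200,
    body: JSON.stringify({ count: orders.length, orders })
  };
};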

The Cold Start Challenge and How to Manage It

Here's a common question I get: "If the function is asleep, doesn't it take time to wake up?" Yes. This initial delay is a "cold start." The cloud provider must find a machine, load your code and its dependencies, and then run it. Subsequent calls (while the function is still "warm") are much faster.

For a user-facing API, a delay of 500ms to 2 seconds on the first request might be unacceptable. We can mitigate this.

  1. Keep Functions Lean: Use smaller, focused runtimes. A Node.js function will generally start faster than a Java function with a large framework. Bundle only the libraries you need; a packaging sketch follows this list.
  2. Use Provisioned Concurrency: This is like paying a bit extra to keep a few copies of your function "warmed up" and ready to go at all times. You tell the cloud, "Please always have 5 instances of this function ready."
# In your serverless.yml, for a critical API function:
functions:
  userApi:
    handler: src/api.userHandler
    provisionedConcurrency: 5  # 5 instances are always warm.
    memorySize: 1024           # More memory often means faster CPU.
    timeout: 10                # Max time it can run (seconds).
    layers:
      - arn:aws:lambda:us-east-1:123456789012:layer:AWSLambdaPowertoolsTypeScript:1 # Shared utilities layer
    environment:
      NODE_ENV: production
  3. Optimize Your Initialization Code: Move slow setup (like creating database connection pools) outside the main handler function. The cloud may reuse the execution environment, so that setup persists for future warm calls.
// Good pattern: Initialize connections once, reuse them.
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const someHeavyLibrary = require('heavy-library');

// This code runs ONLY during the cold start.
const dbClient = new DynamoDBClient();
const heavyThing = someHeavyLibrary.initialize(); // Do this once.

exports.handler = async (event) => {
  // This code runs on EVERY invocation (hot or cold).
  // Use the pre-initialized `dbClient` and `heavyThing`.
  const result = await dbClient.send(someCommand);
  const processed = heavyThing.process(result);

  return processed;
};
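On the first point, most of the win comes from simply shipping less code. With the Serverless Framework you can package each function individually and exclude anything the handler never loads; bundlers such as esbuild (via the serverless-esbuild plugin) go further by tree-shaking dependencies. A minimal sketch:

# serverless.yml — smaller artifacts mean less code to load during a cold start
package:
  individually: true              # one slim zip per function instead of one big bundle
  patterns:
    - '!tests/**'                 # files the runtime never needs
    - '!**/*.md'
    - '!node_modules/@aws-sdk/**' # the v3 clients are already present in the nodejs18.x runtime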

Letting the Cloud Handle Security

Writing secure authentication is hard. In serverless, you should almost never write it yourself. Use managed services.

For APIs, you can use Amazon Cognito, Auth0, or Firebase Authentication. They handle user sign-up, sign-in, and token generation. Your function just needs to validate the token.

A common pattern is the Lambda Authorizer. It's a function you write that sits in front of your API. Its only job is to say "yes" or "no" to incoming requests.

// authorizer.js - A Lambda Authorizer for API Gateway
const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');

// Create a client to fetch the public keys from your auth provider (like Auth0).
const client = jwksClient({
  jwksUri: `https://${process.env.AUTH0_DOMAIN}/.well-known/jwks.json`
});

// Helper to get the signing key.
function getKey(header, callback) {
  client.getSigningKey(header.kid, function(err, key) {
    if (err) {
      return callback(err); // Surface key-lookup failures instead of crashing on an undefined key.
    }
    const signingKey = key.publicKey || key.rsaPublicKey;
    callback(null, signingKey);
  });
}

exports.handler = (event, context, callback) => {
  console.log('Authorizer event:', event);

  const token = event.authorizationToken?.replace('Bearer ', '');

  if (!token) {
    console.error('No token provided.');
    return callback('Unauthorized'); // Deny the request.
  }

  // Verify the JWT token.
  jwt.verify(token, getKey, { audience: process.env.API_AUDIENCE, issuer: `https://${process.env.AUTH0_DOMAIN}/` }, (err, decoded) => {
    if (err) {
      console.error('Token verification failed:', err.message);
      return callback('Unauthorized'); // Deny the request.
    }

    // The token is valid! Create a policy that allows the user to invoke the API.
    console.log('User authenticated:', decoded.sub);
    const policy = {
      principalId: decoded.sub, // The user's unique identifier from the token.
      policyDocument: {
        Version: '2012-10-17',
        Statement: [
          {
            Action: 'execute-api:Invoke',
            Effect: 'Allow',
            // This grants access to the specific API method the user requested.
            Resource: event.methodArn
          }
        ]
      },
      // Pass user details to the main function.
      context: {
        userId: decoded.sub,
        email: decoded.email
      }
    };

    return callback(null, policy); // Allow the request.
  });
};

In your main API function, you can now access the user ID from the request context, confident that they are authenticated.

// src/api/protectedFunction.js
exports.handler = async (event) => {
  // The authorizer passes the context. It's available in `event.requestContext.authorizer`.
  const userId = event.requestContext.authorizer.userId;
  const userEmail = event.requestContext.authorizer.email;

  console.log(`Processing request for user ${userId} (${userEmail})`);

  // Your business logic here, using the trusted user ID.
  const userData = await getUserData(userId); // getUserData: your own data-access helper (not shown)

  return {
    statusCode: 200,
    body: JSON.stringify({ data: userData })
  };
};

Seeing What's Happening in Your Ephemeral System

When your code runs in milliseconds across hundreds of temporary containers, old-school logging into a server is impossible. You need new observability patterns.

  1. Structured Logging: Every log message should be a machine-readable JSON object, not just a text string. This lets you search and filter easily.
// Using a logger designed for serverless, like 'pino'
const pino = require('pino');

// Create a logger instance. It automatically adds useful context.
const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  messageKey: 'message',
  formatters: {
    bindings() {
      return {
        functionName: process.env.AWS_LAMBDA_FUNCTION_NAME,
        environment: process.env.STAGE
        // The per-invocation request ID is not an environment variable;
        // it comes from the handler's context object and is attached below.
      };
    }
  }
});

exports.handler = async (event, context) => {
  // Create a child logger with a correlation ID for this specific request.
  // This ID should be passed through all functions in a chain.
  const correlationId = event.headers?.['x-correlation-id'] || context.awsRequestId;
  const log = logger.child({ correlationId, awsRequestId: context.awsRequestId });

  log.info({ eventType: 'HTTP_REQUEST', path: event.path }, 'Function started');

  try {
    const result = await processRequest(event); // processRequest: your business logic (not shown)
    log.info({ resultSize: result.length }, 'Function completed successfully');
    return { statusCode: 200, body: JSON.stringify(result) };
  } catch (error) {
    // Log the full error object, not just the message.
    log.error({ error: error.message, stack: error.stack }, 'Function failed');
    return { statusCode: 500, body: 'Internal Server Error' };
  }
};
  2. Distributed Tracing: Tools like AWS X-Ray automatically track a request as it flows through API Gateway, to a Lambda function, then to DynamoDB, and maybe to another function. You get a visual map showing where time is spent and where errors occur.
# Enable X-Ray in your serverless.yml
provider:
  name: aws
  tracing:
    lambda: true  # Enable tracing for all functions
    apiGateway: true

functions:
  myFunction:
    handler: src/handler.hello
    # You can also enable it per function
    tracing: Active
  3. Custom Metrics: You can emit metrics for business events, like "OrderValue" or "ImagesProcessed." CloudWatch Metrics or Datadog can collect these and trigger alarms.
const { Metrics, MetricUnits } = require('@aws-lambda-powertools/metrics');
const metrics = new Metrics({ namespace: 'EcommerceApp', serviceName: 'OrderService' });

exports.handler = async (event) => {
  // ... process order ...
  // (Hypothetical: assume the order details arrive in the HTTP request body.)
  const { orderTotal, itemCount, orderType } = JSON.parse(event.body);

  // Add a custom metric for revenue.
  metrics.addMetric('OrderRevenue', MetricUnits.Count, orderTotal);
  // Add a metric for the number of items.
  metrics.addMetric('ItemsSold', MetricUnits.Count, itemCount);

  // Record a dimension (like order type) to slice the data later.
  metrics.addDimension('OrderType', orderType);

  // Flush the buffered metrics (as CloudWatch EMF log lines) before the function returns.
  metrics.publishStoredMetrics();
};

Deploying Safely and Automatically

You can't log into a server to update code. Deployment must be automated and safe. The common pattern is to use a CI/CD pipeline (like GitHub Actions, GitLab CI, or AWS CodePipeline) with a strategy like blue/green or canary deployments.

The Serverless Framework or AWS SAM help by creating CloudFormation stacks—they define your entire application (functions, databases, permissions) as code. Deploying updates is a single, atomic operation.

# A GitHub Actions workflow to deploy a serverless app.
name: Deploy Application

on:
  push:
    branches: [ main ]

jobs:
  test-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install Dependencies
        run: npm ci  # Clean, consistent install

      - name: Run Unit Tests
        run: npm test

      - name: Deploy to Staging
        uses: serverless/github-action@v3
        with:
          args: deploy --stage staging --verbose
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Run Integration Tests on Staging
        run: |
          # Run tests against the live staging API endpoint
          API_URL=$(sls info --stage staging | grep -o 'https://[^ ]*' | head -1)
          npm run test:integration -- --baseUrl=$API_URL

      - name: Deploy to Production (Canary)
        if: success()
        uses: serverless/github-action@v3
        with:
          # --alias is not a core CLI flag; it assumes an alias/canary deployment plugin is configured in the project
          args: deploy --stage production --alias live
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

A canary deployment for Lambda might use weighted aliases. You deploy the new version, but initially only send 10% of traffic to it. If errors spike, you roll back automatically. If all looks good, you gradually increase to 100%.
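One way to express this in configuration is through AWS CodeDeploy's traffic-shifting policies, for example with the serverless-plugin-canary-deployments plugin. A sketch; the alarm name is a placeholder you would define yourself:

# serverless.yml — shift traffic to the new version gradually, roll back on alarms
functions:
  placeOrder:
    handler: src/orders.placeOrder
    deploymentSettings:
      type: Canary10Percent5Minutes   # 10% of traffic goes to the new version for 5 minutes, then 100%
      alias: Live
      alarms:
        - OrderErrorsAlarm            # if this CloudWatch alarm fires, CodeDeploy rolls back automatically

plugins:
  - serverless-plugin-canary-deployments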

This is the essence of modern serverless patterns. You build with small, event-driven blocks of logic. You let the cloud handle the heavy lifting of servers, security, scaling, and deployment. Your focus stays on the unique value of your application. It requires a different mindset, but the payoff is immense: systems that are inherently scalable, cost-effective, and resilient, freeing you to solve business problems rather than infrastructure puzzles.

📘 Check out my latest ebook for free on my channel!

Be sure to like, share, comment, and subscribe to the channel!


101 Books

101 Books is an AI-driven publishing company co-founded by author Aarav Joshi. By leveraging advanced AI technology, we keep our publishing costs incredibly low—some books are priced as low as $4—making quality knowledge accessible to everyone.

Check out our book Golang Clean Code available on Amazon.

Stay tuned for updates and exciting news. When shopping for books, search for Aarav Joshi to find more of our titles. Use the provided link to enjoy special discounts!

Our Creations

Be sure to check out our creations:

Investor Central | Investor Central Spanish | Investor Central German | Smart Living | Epochs & Echoes | Puzzling Mysteries | Hindutva | Elite Dev | Java Elite Dev | Golang Elite Dev | Python Elite Dev | JS Elite Dev | JS Schools


We are on Medium

Tech Koala Insights | Epochs & Echoes World | Investor Central Medium | Puzzling Mysteries Medium | Science & Epochs Medium | Modern Hindutva
