Kaushikcoderpy

Posted on • Originally published at logicandlegacy.blogspot.com

FastAPI Graceful Shutdown: Handling SIGTERM in Kubernetes

Ensuring Smooth Exits: Implementing Graceful Shutdowns in Kubernetes

Imagine a critical deployment. You push an update, expecting a seamless transition. Instead, your monitoring dashboard lights up: thousands of in-flight operations fail, active user sessions drop, and background tasks vanish mid-processing. The culprit? Your server didn't gracefully step aside; it was abruptly terminated. While container orchestrators like Kubernetes manage pod lifecycles, they don't inherently guarantee a "graceful" exit for your applications. That responsibility falls to you, the application developer.

The Pitfalls of Abrupt Termination

When Kubernetes decides to terminate a pod (during a rolling update, scale-down, or node drain, for example), it sends a SIGTERM signal to the main process in the container. Many developers, especially in Python, rely on framework-specific shutdown hooks such as FastAPI's lifespan events or on_event("shutdown").
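
For reference, a framework-level shutdown hook looks roughly like this (a minimal sketch using FastAPI's lifespan API; on_event("shutdown") is the older equivalent):

from contextlib import asynccontextmanager
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup logic runs before the yield
    yield
    # Shutdown logic runs after the yield, once the server has already
    # begun tearing down -- at that point it may be too late to drain
    print("Framework-level shutdown: cleaning up...")

app = FastAPI(lifespan=lifespan)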

However, these framework-level events often trigger too late in the termination sequence. By the time your application code receives the shutdown notification, the underlying server might have already stopped accepting new connections, or even worse, severed existing ones (like WebSocket connections). Any tasks queued, in progress, or users awaiting a final response are immediately impacted. To prevent this, your application needs to intercept the SIGTERM signal at a lower level, closer to the operating system, and initiate a controlled shutdown before the framework itself begins to unravel.

Why Just Closing the Database Isn't Enough

A truly graceful shutdown isn't just about cleaning up internal resources like database connections or file handles. The paramount concern is traffic draining. If your application doesn't signal its impending termination to the load balancer or service mesh before it stops processing requests, new traffic will continue to be routed to a dying instance for several seconds. This creates a race condition where users encounter errors from a server that's already in its final moments.

The Analogy: A Ship Abandonment Plan

Consider your server as a ship. A SIGTERM is the order to abandon ship. A careless captain might immediately jump overboard, leaving passengers (active tasks and connections) to fend for themselves. A responsible captain would first announce that no new passengers may board (stop accepting new traffic), then ensure everyone on board is safely offloaded into lifeboats (allow existing tasks to complete) before finally leaving the ship. This is the essence of a graceful shutdown.

The Readiness Flag Pattern

A robust approach to graceful shutdowns centers around a simple, global boolean flag. Let's call it SHOULD_ACCEPT_TRAFFIC. Initially, this flag is True, and your application's /healthz or /readiness endpoint returns a 200 OK status.

The moment your application receives the SIGTERM signal, you immediately flip SHOULD_ACCEPT_TRAFFIC to False. Consequently, your /healthz endpoint now returns a 503 Service Unavailable status.

Kubernetes' readinessProbe continuously polls this endpoint. Once it sees the 503 and the configured failureThreshold is reached, Kubernetes marks the pod NotReady and removes it from the Service's endpoints, so no new traffic is routed to it. This begins a "quiet period" in which existing connections and in-progress tasks can finish without being interrupted by new requests.

Implementing the Shutdown Guard in Python (FastAPI)

Here's how you can implement this pattern using Python with FastAPI, intercepting the SIGTERM signal directly:

import signal
import asyncio
from fastapi import FastAPI, Response, status

app = FastAPI()

# Global flag to control traffic acceptance
SHOULD_ACCEPT_TRAFFIC = True
ACTIVE_TASKS = 0 # Optional: for more advanced draining

def handle_termination_signal(*_):
    """
    Callback for the SIGTERM signal.
    Immediately sets the flag to stop accepting new traffic.
    """
    global SHOULD_ACCEPT_TRAFFIC
    SHOULD_ACCEPT_TRAFFIC = False
    print("SIGTERM received. Initiating traffic draining...")

# Register the OS-level signal handler at startup rather than at import
# time: servers such as uvicorn install their own SIGTERM handler while
# starting up and would silently overwrite one registered at import time.
@app.on_event("startup")
async def install_signal_handler():
    signal.signal(signal.SIGTERM, handle_termination_signal)

@app.get("/healthz", status_code=status.HTTP_200_OK)
async def readiness_probe():
    """
    Kubernetes readiness probe endpoint.
    Returns 503 if the application is shutting down.
    """
    if not SHOULD_ACCEPT_TRAFFIC:
        return Response(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, content="Server is shutting down.")
    return {"status": "ok"}

@app.post("/process-data")
async def process_data_endpoint():
    """
    Example endpoint for processing tasks.
    Checks the traffic flag to reject new requests during shutdown.
    """
    global ACTIVE_TASKS
    if not SHOULD_ACCEPT_TRAFFIC:
        # Reject new requests if the server is draining
        return Response(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, content="Server terminating. No new tasks accepted.")

    # Increment active tasks counter (for more advanced draining)
    ACTIVE_TASKS += 1
    try:
        # Simulate some asynchronous work
        await asyncio.sleep(5) 
        print("Task processed.")
        return {"message": "Data processed successfully."}
    finally:
        # Decrement active tasks counter
        ACTIVE_TASKS -= 1

# Optional: a more robust shutdown hook using FastAPI's shutdown event
# (the lifespan context manager is the modern equivalent).
# This runs AFTER the readiness probe starts returning 503.
@app.on_event("shutdown")
async def app_shutdown():
    print("FastAPI shutdown event triggered.")
    # Wait for active tasks to complete before truly exiting
    while ACTIVE_TASKS > 0:
        print(f"Waiting for {ACTIVE_TASKS} active tasks to finish...")
        await asyncio.sleep(1)
    print("All active tasks completed. Application shutting down.")

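To run the example locally, save it as, say, main.py (a filename assumed here for illustration) and start it with an ASGI server, e.g. uvicorn main:app --host 0.0.0.0 --port 8000. The port matters: the readiness probe configured below targets port 8000.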

Essential Kubernetes Configuration

Implementing the code is only half the solution. Your Kubernetes deployment must be configured to leverage this pattern effectively:

  1. terminationGracePeriodSeconds: Set a sufficient grace period in your deployment.yaml. This value dictates how long Kubernetes will wait after sending SIGTERM before forcibly killing the pod. A common value is 30 or 60 seconds, allowing ample time for tasks to drain.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      template:
        spec:
          terminationGracePeriodSeconds: 60 # Give the app 60 seconds to shut down
          containers:
          - name: my-app-container
            image: my-app-image:latest
            # ... other container settings
    
  2. readinessProbe: Configure a readinessProbe that points to your /healthz endpoint. Adjust periodSeconds and failureThreshold to control how quickly Kubernetes detects the 503 status and stops sending traffic.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      template:
        spec:
          containers:
          - name: my-app-container
            image: my-app-image:latest
            readinessProbe:
              httpGet:
                path: /healthz
                port: 8000
              initialDelaySeconds: 5 # Wait 5s before first probe
              periodSeconds: 5     # Check every 5 seconds
              failureThreshold: 3  # After 3 consecutive failures (503s), mark as NotReady
            # ... other container settings
    
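Note how these settings interact: with periodSeconds: 5 and failureThreshold: 3, a draining pod can keep receiving new traffic for up to roughly 3 × 5 = 15 seconds after the flag flips (plus up to one probe interval before the first failing check), which still leaves most of the 60-second grace period for in-flight work to finish.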

By combining the in-application readiness flag with appropriate Kubernetes probe and termination settings, you build a robust mechanism for controlled, graceful server shutdowns. This ensures that your application can politely decline new work, finish existing tasks, and exit without causing service disruptions or data loss.

Top comments (1)

Vadym Arnaut

The OS-level signal handler vs FastAPI lifespan distinction is the part most posts miss, glad to see it called out. One pattern worth adding: even with the readiness probe flipping to 503, kube-proxy's iptables update isn't instant, so you still get a window of 1-3 seconds where new connections arrive at a draining pod. A preStop: sleep 5 before SIGTERM closes that gap. Costs you 5s of grace period on rolling deploys but eliminates the drop-connection class of bugs in load-balanced setups. terminationGracePeriodSeconds: 60 gives plenty of headroom for it.