DEV Community

Akash for MechCloud Academy


Level Up Your NGINX Skills with These 6 Configurations

We've all been there. You're staring at an NGINX configuration file, you've read the official documentation three times, but you're still not sure how to combine all the pieces to solve your actual problem.

Documentation is great for explaining what auth_request or proxy_buffering do. But how do you assemble them to build a secure authentication gateway that has exceptions for public routes? How do you really handle CORS pre-flight requests without making a mess?

That gap between syntax and solution is what this playbook is all about.

Your Visual Guide: The NGINX Playbook Video

For those who prefer to watch and learn, we've created a complete 10-minute video guide that walks through every pattern explained below. We cover the "why" behind each decision and trace the request flow step-by-step.

The Core Philosophy: Centralize at the Edge

Before we dive in, let's establish the "why." The goal of these patterns is to let NGINX, your reverse proxy, handle infrastructure concerns. This keeps your backend application code cleaner, simpler, and more focused on business logic. By centralizing logic for security, routing, and protocol handling at the edge, you create a more maintainable and scalable system.

Pattern 1: The Basic Authentication Gateway

The Problem: You need to protect your services without bloating every single app with authentication logic.

The NGINX Solution: Delegate the check to an external microservice using auth_request. NGINX will send a sub-request to your auth service; if it gets a 2xx response, the original request proceeds. If it gets a 401 or 403, NGINX blocks the request instantly with that status (any other status is treated as an internal error).

# Protects all routes by default
location / {
    auth_request /_internal_auth_check;
    proxy_pass http://user_service;
}

location = /_internal_auth_check {
    internal;
    proxy_pass http://auth_service/verify;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";  # required when the request body is dropped
}
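These snippets assume user_service and auth_service are defined as upstreams in the http block. A minimal sketch, with placeholder addresses you'd replace with your real service endpoints:

```nginx
# Hypothetical upstream definitions the location blocks above rely on.
upstream user_service {
    server 127.0.0.1:8080;
}

upstream auth_service {
    server 127.0.0.1:9000;
}
```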

Pattern 2: Passing User Data to the Backend

The Problem: Your protected service needs to know who the authenticated user is.

The NGINX Solution: Use auth_request_set to capture response headers (like X-User-ID) from your auth service after a successful check. Then, use proxy_set_header to pass that trusted information along to your backend.

location / {
    auth_request /_internal_auth_check;

    # Capture identity from the auth service's response
    auth_request_set $auth_user_id $upstream_http_x_user_id;
    auth_request_set $auth_user_email $upstream_http_x_user_email;

    # Pass it to the backend
    proxy_set_header X-User-ID $auth_user_id;
    proxy_set_header X-User-Email $auth_user_email;

    proxy_pass http://user_service;
}
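Identity flows both ways: the auth service often needs to know which URL and method are being requested, not just who is asking. A common companion to this pattern is to forward the original request context from the internal check location; the X-Original-* header names are conventions, not requirements:

```nginx
location = /_internal_auth_check {
    internal;
    proxy_pass http://auth_service/verify;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";

    # Give the auth service the original request context
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header X-Original-Method $request_method;
}
```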

Pattern 3: Disabling Authentication for Public Endpoints

The Problem: Your login page, health check, or public assets shouldn't be protected by the gateway.

The NGINX Solution: Create more specific, exact-match location blocks for your public routes. Inside these blocks, the auth_request off; directive creates a "pinhole" in your global security policy (this matters when auth_request is declared at the server level, where every location would otherwise inherit it).

# Exception for a public health check
location = /healthz {
    auth_request off;
    return 200 "OK";
}

# Exception for the login route
location = /auth/login {
    auth_request off;
    proxy_pass http://auth_service/login;
}
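Exact matches work for single routes; for a whole public directory, a ^~ prefix match (which takes priority over regex locations) does the same job. A sketch, where the /public/ path and root directory are assumptions:

```nginx
# Everything under /public/ is served without an auth check
location ^~ /public/ {
    auth_request off;
    root /var/www/static;
}
```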

Pattern 4: Managing Cross-Origin Resource Sharing (CORS)

The Problem: The browser is blocking your frontend's API calls due to CORS policy.

The NGINX Solution: Handle the browser's pre-flight OPTIONS request directly in NGINX. Intercept it with an if block, add the necessary Access-Control-* headers, and return an empty 204 response. This satisfies the browser without ever hitting your backend for pre-flight checks.

location /api/ {
    # Handle the pre-flight request
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' 'https://ui.example.com';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type';
        # Let browsers cache the pre-flight result for a day
        add_header 'Access-Control-Max-Age' 86400;
        return 204;
    }

    # Add headers to actual responses
    # ("always" keeps the header on error responses too)
    add_header 'Access-Control-Allow-Origin' 'https://ui.example.com' always;
    proxy_pass http://my_backend_services;
}
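Hard-coding a single origin breaks as soon as you have a second frontend. A map block (placed in the http context) can echo back only the origins you trust; the origin list here is illustrative:

```nginx
# In the http {} context: resolve the request's Origin header
# to itself if trusted, or to an empty string otherwise.
map $http_origin $cors_origin {
    default                   "";
    https://ui.example.com    $http_origin;
    https://admin.example.com $http_origin;
}

# Then, inside the location block, use the variable:
# add_header 'Access-Control-Allow-Origin' $cors_origin always;
```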

Pattern 5: Enabling Server-Sent Events (SSE)

The Problem: Your real-time updates are getting delayed because NGINX is buffering them.

The NGINX Solution: Turn NGINX into a transparent stream tunnel. The key is proxy_buffering off;. This forces NGINX to forward data packets the instant they arrive from the backend, creating a low-latency connection perfect for SSE.

location /api/events {
    proxy_pass http://event_stream_service;

    # Turn off all buffering so events are forwarded immediately
    proxy_buffering off;
    proxy_cache off;

    # Keep long-lived streams open; the default proxy_read_timeout
    # of 60s would otherwise close an idle connection
    proxy_read_timeout 1h;

    # Headers for persistent streaming
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
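As a bonus, the proxy_http_version 1.1 plus empty Connection header pair is also what enables connection reuse to the backend, once the upstream declares a keepalive pool. A sketch with a placeholder address:

```nginx
upstream event_stream_service {
    server 127.0.0.1:5000;
    # Idle connections kept open for reuse; only takes effect with
    # proxy_http_version 1.1 and Connection "" in the location block
    keepalive 16;
}
```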

Pattern 6: Conditionally Forwarding User Identity

The Problem: You want to show different content to logged-in vs. anonymous users, and the backend needs to make that decision.

The NGINX Solution: Use error_page to intercept auth failures and redirect them internally instead of blocking. This creates two paths. On success, auth_request_set captures the user's email. On failure, the fallback location sets the email header to an empty string. The backend always gets the request and can decide what to do based on whether the email header is present.

location / {
    auth_request /_internal_auth_check;
    error_page 401 403 = @fallback_auth;

    # Success path: capture the email
    auth_request_set $auth_user_email $upstream_http_x_user_email;
    proxy_set_header X-User-Email $auth_user_email;

    proxy_pass http://user_service;
}

# Failure path: set an empty header
location @fallback_auth {
    proxy_set_header X-User-Email "";
    proxy_pass http://user_service;
}

Conclusion

These six patterns demonstrate a key architectural principle: NGINX is more than just a web server; it's a powerful control plane for your application network. By combining these simple directives, you can build clean, secure, and scalable systems.

What are your go-to NGINX patterns? Share them in the comments below!
