<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: martinfernandezcx</title>
    <description>The latest articles on DEV Community by martinfernandezcx (@martinfernandezcx).</description>
    <link>https://dev.to/martinfernandezcx</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F574211%2F5be6b729-7050-4766-b75c-3e5c1e6d0cd8.png</url>
      <title>DEV Community: martinfernandezcx</title>
      <link>https://dev.to/martinfernandezcx</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/martinfernandezcx"/>
    <language>en</language>
    <item>
      <title>Securing Microservices with JWT Validation at the Nginx Proxy Layer</title>
      <dc:creator>martinfernandezcx</dc:creator>
      <pubDate>Wed, 28 May 2025 20:16:43 +0000</pubDate>
      <link>https://dev.to/cloudx/securing-microservices-with-jwt-validation-at-the-nginx-proxy-layer-3dn</link>
      <guid>https://dev.to/cloudx/securing-microservices-with-jwt-validation-at-the-nginx-proxy-layer-3dn</guid>
      <description>&lt;p&gt;In a microservices architecture, separating concerns is critical for maintainability, scalability, and security. One key decision when building APIs is how and where to handle authentication. A common pattern is to delegate authentication to a dedicated &lt;strong&gt;authentication microservice&lt;/strong&gt;, which issues tokens (e.g., JWTs), and use those tokens to access protected resources on &lt;strong&gt;independent backend APIs&lt;/strong&gt;. When working on an infrastructure change, we faced the challenge of either integrating authentication in the Node.js backend (without the proper libraries) or maintaining a single backend solely for authorization.&lt;/p&gt;

&lt;p&gt;The options we considered were:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Having the Go backend validate the token and proxy to the Node.js backend over authenticated routes. (We tried this, but the Go proxy became messy and difficult to maintain.)&lt;/li&gt;
&lt;li&gt;Performing authentication in Node.js. (Infrastructure restrictions led us to abandon this approach.)&lt;/li&gt;
&lt;li&gt;Implementing a different authentication method using the existing infrastructure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The third option is the one we settled on after investigating.&lt;/p&gt;

&lt;p&gt;This post demonstrates how to validate JWT tokens directly in &lt;strong&gt;Nginx&lt;/strong&gt; before routing requests to your protected &lt;strong&gt;Node.js API&lt;/strong&gt;, centralizing authorization enforcement at the gateway layer.&lt;br&gt;
This keeps the authentication within the infrastructure boundaries and allows us to simplify both the Go backend and the Node.js backend by relying on the NGINX layer.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why JWT at the Proxy?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decouples concerns&lt;/strong&gt;: Authentication logic doesn't pollute your API code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistent enforcement&lt;/strong&gt;: All routes must pass the same token checks before hitting backend services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Nginx (especially via OpenResty) is efficient and fast at handling token validation.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Options for JWT Validation
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Validate JWT in each backend service&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Pros: Full control per service.&lt;/li&gt;
&lt;li&gt;Cons: Repeated logic, potential for inconsistency.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Nginx with a third-party JWT module&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Commercial option with NGINX Plus.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use OpenResty (Nginx + Lua) with &lt;code&gt;lua-resty-jwt&lt;/code&gt;&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Open-source, flexible, and efficient.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  OpenResty + Lua
&lt;/h2&gt;

&lt;p&gt;We use OpenResty and the &lt;code&gt;lua-resty-jwt&lt;/code&gt; library to inspect JWTs in the Nginx layer. If valid, we forward requests to the backend. Otherwise, Nginx returns a 401 response.&lt;/p&gt;
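&lt;p&gt;A minimal sketch of what this looks like in the Nginx config (the &lt;code&gt;JWT_SECRET&lt;/code&gt; environment variable and the &lt;code&gt;backend&lt;/code&gt; upstream are illustrative assumptions, not the repo's exact names):&lt;/p&gt;

```nginx
# Sketch only: guard a location with lua-resty-jwt before proxying.
location /api/ {
    access_by_lua_block {
        local jwt = require "resty.jwt"

        -- Expect an "Authorization: Bearer ..." header
        local auth = ngx.var.http_authorization
        local token = auth and auth:match("Bearer%s+(.+)")
        if not token then
            return ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end

        -- Verify signature and claims; sourcing the secret from an
        -- environment variable is an assumption of this sketch.
        local verified = jwt:verify(os.getenv("JWT_SECRET"), token)
        if not verified.verified then
            return ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end

        -- Attach the caller's identity for the upstream service.
        ngx.req.set_header("X-User-Id", verified.payload.sub)
    }
    proxy_pass http://backend;
}
```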
&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;auth-api&lt;/code&gt;: issues JWTs via login endpoint.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;node-api&lt;/code&gt;: protected and public routes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nginx&lt;/code&gt;: gateway with Lua-based JWT validation.&lt;/li&gt;
&lt;/ul&gt;
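&lt;p&gt;As a rough orientation, the three services can be wired together with Docker Compose along these lines (service names and the published port are illustrative assumptions, not necessarily the repo's exact file):&lt;/p&gt;

```yaml
# Sketch: nginx is the only service exposed to the outside world.
services:
  auth-api:
    build: ./auth-api      # issues JWTs on login
  node-api:
    build: ./node-api      # business API, only reachable through nginx
  nginx:
    build: ./nginx         # OpenResty image with the Lua validation script
    ports:
      - "8080:80"
    depends_on:
      - auth-api
      - node-api
```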
&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;Some of these concerns were left out of this POC, but we would like to mention them for a proper production implementation. Please read through and evaluate whether each fits your scenario.&lt;/p&gt;
&lt;h3&gt;
  
  
  Protection Against Common Attacks
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Replay Attacks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement token expiration (exp claim)&lt;/li&gt;
&lt;li&gt;Use short-lived tokens (15-60 minutes)&lt;/li&gt;
&lt;li&gt;Consider implementing a token blacklist for revoked tokens&lt;/li&gt;
&lt;li&gt;Use nonce values in token claims&lt;/li&gt;
&lt;li&gt;Implement request timestamp validation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Token Theft Prevention&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always use HTTPS for token transmission&lt;/li&gt;
&lt;li&gt;Implement secure cookie attributes (HttpOnly, Secure, SameSite)&lt;/li&gt;
&lt;li&gt;Use token binding to prevent token reuse&lt;/li&gt;
&lt;li&gt;Implement rate limiting on authentication endpoints&lt;/li&gt;
&lt;li&gt;Monitor for suspicious patterns (multiple failed validations)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Token Expiration Best Practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Short-lived Access Tokens&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set expiration time between 15-60 minutes&lt;/li&gt;
&lt;li&gt;Use refresh tokens for longer sessions&lt;/li&gt;
&lt;li&gt;Implement sliding expiration for active users&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Refresh Token Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Longer expiration (days/weeks)&lt;/li&gt;
&lt;li&gt;Store refresh tokens securely&lt;/li&gt;
&lt;li&gt;Implement refresh token rotation&lt;/li&gt;
&lt;li&gt;Maintain a refresh token family tree&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Expiration Implementation&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;   &lt;span class="c1"&gt;-- Example of expiration check in Lua&lt;/span&gt;
   &lt;span class="kd"&gt;local&lt;/span&gt; &lt;span class="n"&gt;jwt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s2"&gt;"resty.jwt"&lt;/span&gt;
   &lt;span class="kd"&gt;local&lt;/span&gt; &lt;span class="n"&gt;validators&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s2"&gt;"resty.jwt-validators"&lt;/span&gt;

   &lt;span class="n"&gt;validators&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set_system_leeway&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;-- Strict time validation&lt;/span&gt;
   &lt;span class="n"&gt;validators&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;register_validator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"exp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;validators&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;opt_is_not_expired&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Grace Period Considerations&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Implement a small grace period (30 seconds) for clock skew&lt;/li&gt;
&lt;li&gt;Handle token expiration gracefully&lt;/li&gt;
&lt;li&gt;Provide clear error messages for expired tokens&lt;/li&gt;
&lt;li&gt;Implement automatic token refresh when possible&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Project Layout
&lt;/h2&gt;

&lt;p&gt;You can find the full source here:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;GitHub Repo: &lt;a href="https://github.com/martinfernandezcx/NGINXAUTH" rel="noopener noreferrer"&gt;martinfernandezcx/NGINXAUTH&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Client logs in via &lt;code&gt;/api/auth/login&lt;/code&gt;, receives JWT.&lt;/li&gt;
&lt;li&gt;Client sends &lt;code&gt;Authorization: Bearer &amp;lt;token&amp;gt;&lt;/code&gt; on protected requests.&lt;/li&gt;
&lt;li&gt;Nginx runs a Lua script to:

&lt;ul&gt;
&lt;li&gt;Check token structure.&lt;/li&gt;
&lt;li&gt;Validate signature and expiration.&lt;/li&gt;
&lt;li&gt;Inject user ID into a request header.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Validated requests reach the Node.js service with identity attached.&lt;/li&gt;
&lt;/ol&gt;
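&lt;p&gt;Assuming the stack is running locally with Nginx published on port 8080 (the port and the exact route paths are illustrative assumptions), the flow above can be exercised with &lt;code&gt;curl&lt;/code&gt;:&lt;/p&gt;

```shell
# 1. Log in and capture the token (jq is used here for brevity).
TOKEN=$(curl -s -X POST http://localhost:8080/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username": "demo", "password": "demo"}' | jq -r .token)

# 2. Call a protected route with the Bearer token.
curl -s http://localhost:8080/api/user -H "Authorization: Bearer $TOKEN"

# 3. Without the header, Nginx short-circuits the request with a 401.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/api/user
```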
&lt;h2&gt;
  
  
  Testing with Postman
&lt;/h2&gt;

&lt;p&gt;The project includes a comprehensive Postman test suite to verify the JWT authentication flow and API endpoints. The test suite covers authentication, public routes, and protected routes with various scenarios.&lt;/p&gt;
&lt;h3&gt;
  
  
  Test Suite Structure
&lt;/h3&gt;

&lt;p&gt;The Postman collection (&lt;code&gt;postman/jwt-nginx-auth-tests.json&lt;/code&gt;) includes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Authentication Tests&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Login endpoint validation&lt;/li&gt;
&lt;li&gt;Token format verification&lt;/li&gt;
&lt;li&gt;Automatic token storage for subsequent requests&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Public Endpoint Tests&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access to public routes&lt;/li&gt;
&lt;li&gt;Response format validation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Protected Endpoint Tests&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access without token (401)&lt;/li&gt;
&lt;li&gt;Access with invalid token (401)&lt;/li&gt;
&lt;li&gt;Access with valid token (200)&lt;/li&gt;
&lt;li&gt;Response payload validation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Running the Tests
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install &lt;a href="https://www.postman.com/downloads/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Start the application:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker-compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Import the Collection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Postman&lt;/li&gt;
&lt;li&gt;Click "Import" button&lt;/li&gt;
&lt;li&gt;Select the &lt;code&gt;postman/jwt-nginx-auth-tests.json&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Select the &lt;code&gt;postman/environment.json&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;The collection will be imported with all test cases&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Run the Tests&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the "JWT Nginx Auth Tests" collection&lt;/li&gt;
&lt;li&gt;Click the "Run" button&lt;/li&gt;
&lt;li&gt;Postman will execute all tests in sequence&lt;/li&gt;
&lt;li&gt;View test results in the Postman console&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Test Flow&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tests run in a specific order to ensure proper token handling&lt;/li&gt;
&lt;li&gt;Login test stores the token for subsequent requests&lt;/li&gt;
&lt;li&gt;Protected route tests verify token validation&lt;/li&gt;
&lt;li&gt;Each test includes assertions for status codes and response formats&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Test Case Breakdown
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Login Test&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Status code is 200&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;have&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
   &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Response has token&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;jsonData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
       &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jsonData&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;have&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;property&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;token&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Protected Route Test&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Status code is 200&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;have&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
   &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Response contains protected data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;function &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;jsonData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
       &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jsonData&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;have&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;property&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Environment Variables
&lt;/h3&gt;

&lt;p&gt;The test suite uses Postman environment variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;auth_token&lt;/code&gt;: Automatically set after a successful login and used in subsequent requests to protected routes&lt;/li&gt;
&lt;/ul&gt;
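&lt;p&gt;For reference, the login request presumably stores this variable from a test script along these lines (a sketch in Postman's scripting style, which only runs inside Postman's sandbox; it is not the collection's exact code):&lt;/p&gt;

```javascript
// Runs after the login response; stores the token for later requests.
pm.test("Token stored in environment", function () {
    var jsonData = pm.response.json();
    pm.environment.set("auth_token", jsonData.token);
    pm.expect(pm.environment.get("auth_token")).to.eql(jsonData.token);
});
```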
&lt;h3&gt;
  
  
  Continuous Integration
&lt;/h3&gt;

&lt;p&gt;The Postman collection can be integrated into CI/CD pipelines using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/postmanlabs/newman" rel="noopener noreferrer"&gt;Newman&lt;/a&gt; CLI tool&lt;/li&gt;
&lt;li&gt;Postman's CI/CD integrations&lt;/li&gt;
&lt;li&gt;Custom test runners&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example Newman command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;newman run postman/jwt-nginx-auth-tests.json &lt;span class="nt"&gt;-e&lt;/span&gt; postman/environment.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Running the Tests from the CLI
&lt;/h3&gt;

&lt;p&gt;To run the tests, you can use &lt;code&gt;npm run test:postman:cli&lt;/code&gt;, or import both files into Postman and run them there as described above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Centralizing JWT validation in the proxy simplifies backend services, enforces uniform security, and keeps authentication logic out of each microservice. This pattern is ideal for architectures using distinct auth and business logic APIs.&lt;/p&gt;

&lt;p&gt;In contrast, validating tokens in the Node.js API itself might allow greater control over roles or context-based access logic but at the cost of duplication and potential inconsistency.&lt;/p&gt;

&lt;p&gt;OpenResty strikes a solid balance between &lt;strong&gt;performance&lt;/strong&gt;, &lt;strong&gt;flexibility&lt;/strong&gt;, and &lt;strong&gt;maintainability&lt;/strong&gt; in JWT-based authentication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Appendix A: Problems Found and Solutions
&lt;/h2&gt;

&lt;p&gt;During the implementation of this JWT authentication system, we encountered several issues that required specific solutions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OpenResty Dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Problem: Missing Perl and curl in the OpenResty Alpine image&lt;/li&gt;
&lt;li&gt;Solution: Added required packages in Dockerfile:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt; RUN apk add --no-cache perl curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Nginx User Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Problem: Missing nginx user in the container&lt;/li&gt;
&lt;li&gt;Solution: Created nginx user and group:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt; RUN addgroup -S nginx &amp;amp;&amp;amp; adduser -S -G nginx nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;MIME Types Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Problem: Missing mime.types file in OpenResty Alpine image&lt;/li&gt;
&lt;li&gt;Solution: Created custom mime.types file and copied it to the correct location:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt; COPY mime.types /etc/nginx/mime.types
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Lua Package Path&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Problem: Lua package path directive in wrong context&lt;/li&gt;
&lt;li&gt;Solution: Moved lua_package_path to http context in nginx.conf:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt; &lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kn"&gt;lua_package_path&lt;/span&gt; &lt;span class="s"&gt;"/usr/local/openresty/lualib/?.lua&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;;&lt;span class="kn"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="kn"&gt;lua_package_cpath&lt;/span&gt; &lt;span class="s"&gt;"/usr/local/openresty/lualib/?.so&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;;&lt;span class="kn"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Log Directory Permissions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Problem: Nginx couldn't write to log directory&lt;/li&gt;
&lt;li&gt;Solution: Created log directory and set proper permissions:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt; RUN mkdir -p /var/log/nginx &amp;amp;&amp;amp; \
     chown -R nginx:nginx /var/log/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Unit tests and routes issues&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Problem: Postman tests were failing with 404 on /protected&lt;/li&gt;
&lt;li&gt;Solution: Updated the &lt;code&gt;/login&lt;/code&gt; route in &lt;code&gt;auth-api/index.js&lt;/code&gt; and renamed &lt;code&gt;/protected&lt;/code&gt; to &lt;code&gt;/user&lt;/code&gt; in &lt;code&gt;node-api/index.js&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
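&lt;p&gt;Put together, the container-level fixes above amount to a Dockerfile along these lines (a sketch combining the snippets; the base image tag and file paths are assumptions, not the repo's exact file):&lt;/p&gt;

```docker
FROM openresty/openresty:alpine

# Missing runtime dependencies in the Alpine image
RUN apk add --no-cache perl curl

# The image ships without an nginx user, group, or writable log directory
RUN addgroup -S nginx &amp;&amp; adduser -S -G nginx nginx &amp;&amp; \
    mkdir -p /var/log/nginx &amp;&amp; chown -R nginx:nginx /var/log/nginx

# mime.types is also absent and must be provided explicitly
COPY mime.types /etc/nginx/mime.types
```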

&lt;p&gt;These solutions ensure proper functionality of the JWT authentication system while maintaining security and following best practices for containerized applications.&lt;/p&gt;

</description>
      <category>authentication</category>
      <category>nginx</category>
      <category>microservices</category>
      <category>security</category>
    </item>
    <item>
      <title>Performance Testing: Why, When, and What</title>
      <dc:creator>martinfernandezcx</dc:creator>
      <pubDate>Tue, 03 Oct 2023 19:43:32 +0000</pubDate>
      <link>https://dev.to/cloudx/performance-testing-why-when-and-what-45k3</link>
      <guid>https://dev.to/cloudx/performance-testing-why-when-and-what-45k3</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;When you have an application, regardless of whether it's in production or not, there comes a point when you need to perform performance testing. You might start by searching for various paid services, free tools that are rather difficult to understand, and concepts like load testing, spike testing, stress testing, and more.&lt;br&gt;
Let's try to clarify this matter and provide a useful walkthrough with a live example.&lt;br&gt;
If you have an API, performance can be divided into two sections: internal and external. Internal performance should evaluate memory allocation, the number of threads, disposed objects, etc. External performance is related, but it measures response time, the number of accepted requests, etc.&lt;br&gt;
Let's focus on the external performance for now.&lt;/p&gt;

&lt;h2&gt;
  
  
  The WHY
&lt;/h2&gt;

&lt;p&gt;We perform performance testing for two main reasons, among others: verification and detection. For verification, we intend to check whether the application fulfills the expected demand, and for detection, we test to verify if the application breaks, and at what load. This latter test can be executed when the boundaries are not clear enough or to test the app's behavior outside the usage boundaries, usually above the maximum level. This is really useful when we are trying to prepare for eventual spikes or future increases in demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  The WHEN
&lt;/h2&gt;

&lt;p&gt;We perform performance testing once we know the expected usage of the app at the point of going into production. We run it before going into production and during production.&lt;br&gt;
However, we do not run it with every release, do we? This decision depends on various factors: the environment, your needs, the team you have to perform these tests, the criticality of the affected areas of that release, and the application itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The WHAT
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IzAx67sA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/wtf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IzAx67sA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/wtf.jpg" alt="What" width="599" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the most crucial part. Location, location, location.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before you start
&lt;/h3&gt;

&lt;p&gt;There are things you need to know BEFORE you start thinking about performance, and the first one is boundaries.&lt;br&gt;
What do we mean by boundaries? Boundaries are the limits your application will hit when running.&lt;/p&gt;

&lt;p&gt;Going back to boundaries and regarding external testing of an API, what you need to know in advance is:&lt;br&gt;
    - How many requests per second are you expecting, both maximum and minimum?&lt;br&gt;
    - Are you expecting load peaks? How many? At what times? For how long?&lt;br&gt;
Without this information, running performance tests makes less sense, as you don't have a comparison point.&lt;/p&gt;
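&lt;p&gt;Answering the requests-per-second question often comes down to simple arithmetic over your expected traffic. A quick sketch (the numbers and the pacing model are illustrative):&lt;/p&gt;

```javascript
// Translate a virtual-user target into an approximate request rate,
// assuming each virtual user issues one request every pacingSeconds.
function expectedRps(virtualUsers, pacingSeconds) {
  return virtualUsers / pacingSeconds;
}

// 1000 virtual users, each pacing one request every 5 seconds
console.log(expectedRps(1000, 5)); // 200 requests per second at full load
```

&lt;p&gt;Comparing that figure against your measured throughput gives you the comparison point.&lt;/p&gt;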

&lt;p&gt;Nice, you got the info? Then it's time to start doing setups.&lt;br&gt;
Well, not there yet. First, you have to draft a plan.&lt;/p&gt;

&lt;h4&gt;
  
  
  The plan
&lt;/h4&gt;

&lt;p&gt;To draft a plan, you need to consider the types of performance testing and decide on your objectives.&lt;br&gt;
Most commonly, you can:&lt;br&gt;
    - Load testing&lt;br&gt;
    - Spike testing&lt;br&gt;
    - Stress testing&lt;br&gt;
    - Mixed&lt;br&gt;
Each strategy has its benefits and drawbacks, and this is tightly bound to what you want to test.&lt;/p&gt;

&lt;p&gt;All of these are usually composed of the &lt;strong&gt;start-up&lt;/strong&gt;, &lt;strong&gt;the load&lt;/strong&gt;, and the &lt;strong&gt;tear-down&lt;/strong&gt; phases.&lt;/p&gt;

&lt;h3&gt;
  
  
  The start-up
&lt;/h3&gt;

&lt;p&gt;There are several ways to do this. You can use a step-up/down pattern, linear increase (even varying the steepness), or you could use exponential increase/decrease.&lt;/p&gt;

&lt;p&gt;Step up in 4 phases&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K3DytGz8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/00-stepupplan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K3DytGz8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/00-stepupplan.png" alt="The Plan" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Important things to note here:&lt;br&gt;
You have a target of virtual users; each one of them will run one or more requests based on your configuration and your needs. As you can see here, we are targeting 1000 virtual users, from 0 to 1000 in 16 minutes, and then holding for 30 minutes and tearing down again in 15 minutes.&lt;/p&gt;

&lt;p&gt;In this case, we were testing for a breaking point, which we found at 500 virtual users. This is also called a load test pattern: the heaviest load occurs during the sustained period, so we test how the servers behave while load is held. It also has the advantage of working when you don't know your boundaries.&lt;/p&gt;

&lt;p&gt;If you already know your limits, you can plan differently, for example:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FS9ThIoT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/01-constantsteep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FS9ThIoT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/01-constantsteep.png" alt="The Steep" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the pattern is different, and it's targeting different results.&lt;br&gt;
The former one is increasing by steps so we can measure at different moments in time and also check for infrastructure changes and responses (pods warming up, memory allocation, processor, etc.)&lt;br&gt;
The latter one increases directly to the top, holds, and then decreases. Here, the target could be evaluating if we have memory leaks or if the processor can spawn multiple threads and so on. Numerous metrics can be evaluated, and it will depend on what you are looking for. I'm just mentioning the common ones.&lt;/p&gt;

&lt;h3&gt;
  
  
  The hold
&lt;/h3&gt;

&lt;p&gt;This stage is where you decide whether to hold, for how long, and whether you will have spikes: how many, how long, and how steep?&lt;/p&gt;

&lt;p&gt;For instance, the next image shows a holding pattern interrupted by a 4x peak.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RS5YWeAo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/02-spike.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RS5YWeAo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/02-spike.png" alt="Spike" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, we may want to check container warm-up, horizontal and vertical scaling, failure recovery, and so on. Given that the spike increase is really fast, these are the common breaking points (network and scaling).&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tear down
&lt;/h3&gt;

&lt;p&gt;Once you have verified the behavior under load or spikes, it's time to let the services cool down. For the same reason we stepped up gradually, we want to tear down gradually and check how the infrastructure reacts: how much memory got cleared, and how fast? How quickly did instances stop? And so on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-life config
&lt;/h2&gt;

&lt;p&gt;Now, all of this is fine, but WHAT are we testing? That, of course, varies according to your project. Let's begin by testing a specific endpoint, shall we?&lt;br&gt;
Testing an endpoint basically means configuring the chosen tool to perform a GET/POST/PUT. In this case, I'm going to show how to run a GET against a specific API endpoint.&lt;/p&gt;
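&lt;p&gt;Before reaching for a load tool, it helps to see what a single timed GET looks like. This standard-library Python sketch is my own stand-in, not part of the JMeter setup, and the token handling is a placeholder:&lt;/p&gt;

```python
import time
import urllib.request

def timed_get(url, token=None):
    """Issue one GET, optionally with a bearer token, and return
    (status_code, elapsed_seconds)."""
    req = urllib.request.Request(url)
    if token:
        req.add_header("Authorization", "Bearer " + token)
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()   # drain the body, like a real client would
        return resp.status, time.perf_counter() - start
```

&lt;p&gt;A load test is essentially this call repeated concurrently on a schedule, which is what JMeter orchestrates for us.&lt;/p&gt;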

&lt;p&gt;Let's use JMeter for this purpose.&lt;br&gt;
Out of the box, JMeter provides building blocks for assembling a request, such as thread groups, but there are also some handy plugins that we will be using:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Th2uLej7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/03-plugins.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Th2uLej7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/03-plugins.png" alt="Plugins" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start by installing the Custom Thread Groups plugin, which contains the Ultimate Thread Group. This plugin lets you configure the ramp-up steps you want, among many other options.&lt;br&gt;
Second, add some graph generators of your choice.&lt;/p&gt;

&lt;p&gt;Once you have done this, create a test plan and add an Ultimate Thread Group.&lt;br&gt;
The thread group should consist of the following components at a minimum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The HTTP Request

&lt;ul&gt;
&lt;li&gt;Under it, I'm adding input data from a CSV to send query parameters, an HTTP Header Manager to send a basic auth token, and a throughput timer.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Let's break it into pieces:

&lt;ul&gt;
&lt;li&gt;The HTTP Request itself is pretty self-explanatory. It takes:&lt;/li&gt;
&lt;li&gt;The name&lt;/li&gt;
&lt;li&gt;The protocol&lt;/li&gt;
&lt;li&gt;The server&lt;/li&gt;
&lt;li&gt;The path&lt;/li&gt;
&lt;li&gt;The verb&lt;/li&gt;
&lt;li&gt;Other parameters, such as the port&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Next come the config elements; the first one is the CSV file input.&lt;/li&gt;
&lt;li&gt;Then the HTTP Header Manager

&lt;ul&gt;
&lt;li&gt;This one is optional and supports basic auth, JWT, and any other headers you need to add.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Third is the throughput timer. This is also where you shape the load: 1000 VUs sending 1000 requests per minute each is very different from 1 request per second per VU.&lt;/li&gt;
&lt;/ul&gt;
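&lt;p&gt;To make the roles of those pieces concrete, here is a standard-library Python sketch that mirrors them: the CSV feeds query parameters, a header carries the token, and a sleep plays the part of the throughput timer. The file name, column layout, and token are placeholders I made up, not values from the original test plan:&lt;/p&gt;

```python
import csv
import time
import urllib.parse
import urllib.request

def run_paced_gets(base_url, csv_path, token, requests_per_second=1.0):
    """Send one GET per CSV row, pacing requests to the target throughput."""
    interval = 1.0 / requests_per_second              # throughput timer
    statuses = []
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):                # CSV data set config
            qs = urllib.parse.urlencode(row)          # row becomes query params
            req = urllib.request.Request(base_url + "?" + qs)
            req.add_header("Authorization", "Basic " + token)  # header manager
            start = time.perf_counter()
            with urllib.request.urlopen(req) as resp:
                statuses.append(resp.status)
            # Sleep off whatever remains of this request's time slot
            time.sleep(max(0.0, interval - (time.perf_counter() - start)))
    return statuses
```

&lt;p&gt;JMeter does the same things declaratively, and adds the part this sketch lacks: running many such loops as concurrent virtual users.&lt;/p&gt;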

&lt;p&gt;As you can see in the image:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OumWQTI0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/04-threadGroup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OumWQTI0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/04-threadGroup.png" alt="ThreadGroup" width="380" height="98"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are sending a maximum of 1 request per second per VU. The options here allow you to set an amount shared across all VUs or per VU, as seen in the image.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dnw5XgT9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/05-thoughput.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dnw5XgT9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/05-thoughput.png" alt="Throughput" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;
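&lt;p&gt;The arithmetic behind that setting is worth spelling out, using the two configurations mentioned earlier (both with 1000 VUs):&lt;/p&gt;

```python
vus = 1000

# Configuration A: 1000 requests per minute, per user
per_user_rpm = 1000
total_a = vus * per_user_rpm / 60    # aggregate requests per second

# Configuration B: 1 request per second, per user
per_user_rps = 1
total_b = vus * per_user_rps         # aggregate requests per second

assert round(total_a) == 16667       # roughly 16,667 req/s overall
assert total_b == 1000               # 1,000 req/s overall
```

&lt;p&gt;Same VU count, more than a 16x difference in the load hitting the API, which is why the per-VU vs. shared setting matters so much.&lt;/p&gt;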

&lt;h2&gt;
  
  
  Wrapping up and information analysis
&lt;/h2&gt;

&lt;p&gt;Once the run has completed, it's time to look at the graphs, see what information they present, and identify our pain points and the items that need improvement. Here are some results from the previous configuration:&lt;/p&gt;

&lt;h3&gt;
  
  
  Response time and threads
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h3E5E7cX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/06-ResponseTimevsThreads.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h3E5E7cX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/06-ResponseTimevsThreads.png" alt="Results 1" width="800" height="345"&gt;&lt;/a&gt;&lt;br&gt;
This graph plots the average response time against the number of VUs. As you can see in the image, it averaged 22 seconds for a simple GET.&lt;/p&gt;

&lt;h3&gt;
  
  
  200 responses and threads
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WnpUl_Se--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/07-200AndThreads.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WnpUl_Se--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/07-200AndThreads.png" alt="Results 2" width="800" height="366"&gt;&lt;/a&gt;&lt;br&gt;
Here, you'll notice two key points: the performance under constant throughput and the variance in 200 responses.&lt;br&gt;
If you combine this with the following image:&lt;/p&gt;

&lt;h3&gt;
  
  
  Errors and threads
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PzkVh-pw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/08-ErrorsAndThreads.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PzkVh-pw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/08-ErrorsAndThreads.png" alt="Results 3" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that at around 500 VUs, the number of 502 and 504 errors started increasing.&lt;br&gt;
Further analysis revealed that our instances could not handle the load. We determined that horizontal scaling, combined with a small increase in resources on the pods, would solve the problem; as the following image shows, under the same load pattern the number of errors dropped drastically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A2tu9Wnw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/09-afterFix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A2tu9Wnw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://raw.githubusercontent.com/cloudx-labs/posts/main/posts/martinfernandezcx/assets/ART00-Performance/09-afterFix.png" alt="Results 4" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When running a performance analysis, it's really important to know what you will be looking for. Second, of course, is K.I.S.S.: you don't need fancy graphics or hard-to-configure tools; just keep it simple and straightforward.&lt;br&gt;
Then, make sure you capture everything that matters to you and compare it against the hardware metrics (not covered in this demo); combining graphs, logs, and hardware metrics gives you a great picture of what needs improvement.&lt;br&gt;
And last but not least, double- and triple-check your findings. Performance tests should ideally run in isolated environments, but if that's not possible and you must target staging/UAT, do two or three runs at different times of day so you can compare and evaluate your results.&lt;br&gt;
Hopefully, this post will help you run your first set of performance tests.&lt;/p&gt;

&lt;p&gt;Good luck!&lt;br&gt;
&lt;strong&gt;This article was fully written by a person, based on a live example and real experience, not by an AI.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>performance</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
