<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Hajek</title>
    <description>The latest articles on DEV Community by Daniel Hajek (@daniel_hajek_f3e950f9157e).</description>
    <link>https://dev.to/daniel_hajek_f3e950f9157e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2067386%2Fb2a6da16-324c-4673-a951-4e145d26def3.jpg</url>
      <title>DEV Community: Daniel Hajek</title>
      <link>https://dev.to/daniel_hajek_f3e950f9157e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/daniel_hajek_f3e950f9157e"/>
    <language>en</language>
    <item>
      <title>#3 From dotnet run to docker-compose up: Taming the FinWiseNest Dev Beast</title>
      <dc:creator>Daniel Hajek</dc:creator>
      <pubDate>Wed, 23 Jul 2025 12:44:57 +0000</pubDate>
      <link>https://dev.to/daniel_hajek_f3e950f9157e/-3-from-dotnet-run-to-docker-compose-up-taming-the-finwisenest-dev-beast-172e</link>
      <guid>https://dev.to/daniel_hajek_f3e950f9157e/-3-from-dotnet-run-to-docker-compose-up-taming-the-finwisenest-dev-beast-172e</guid>
      <description>&lt;p&gt;In our last post, we unpacked how FinWiseNest embraced a real-time, event-driven architecture to power its growing suite of microservices—from portfolio handling to transaction tracking and live market data. But as our backend matured, our local dev environment started to feel… less elegant.&lt;/p&gt;

&lt;p&gt;Imagine opening five terminals, running &lt;code&gt;dotnet run&lt;/code&gt; in each one, then spinning up RabbitMQ and SQL Server separately—all before you even wrote a line of code. That’s not development. That’s sysadmin cosplay.&lt;/p&gt;

&lt;p&gt;So we took a pause from cranking out features and tackled a new challenge: making local development a joy again.&lt;/p&gt;

&lt;p&gt;GitHub &lt;a href="https://github.com/DCodeWorks/FinWiseNest/commit/78f288fbc90c62076a9591b2580dddf1144d02fc" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The Problem: An Orchestra with No Conductor
&lt;/h3&gt;

&lt;p&gt;As our ecosystem grew to &lt;strong&gt;five .NET microservices&lt;/strong&gt;, &lt;strong&gt;RabbitMQ&lt;/strong&gt;, and &lt;strong&gt;SQL Server&lt;/strong&gt;, starting a dev session became a ritual:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fire up RabbitMQ.&lt;/li&gt;
&lt;li&gt;Launch SQL Server.&lt;/li&gt;
&lt;li&gt;Open five terminal windows.&lt;/li&gt;
&lt;li&gt;Navigate into each microservice.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;dotnet run&lt;/code&gt; in every single one.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It worked—but it was messy. It was fragile. And let’s be honest: it didn’t spark joy.&lt;/p&gt;

&lt;p&gt;We needed a conductor for our orchestra.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Fix: Compose Yourself
&lt;/h3&gt;

&lt;p&gt;Enter &lt;strong&gt;Docker Compose&lt;/strong&gt;—our backstage pass to a cleaner, one-command local dev experience.&lt;/p&gt;

&lt;p&gt;Our goal was simple but ambitious:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“Start the entire FinWiseNest backend with one command—no manual setup, no terminal gymnastics.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We achieved it in two key steps:&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Containerize All the Things
&lt;/h4&gt;

&lt;p&gt;Each .NET microservice got its own &lt;strong&gt;multi-stage Dockerfile&lt;/strong&gt;, creating lightweight and production-like containers. These images are optimized, portable, and consistent across machines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image-3.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldl82v3mehtdwqehn3f7.png" width="553" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image-4.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e3cm6gzyzyz5yz3gldq.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: One File to Rule Them All
&lt;/h4&gt;

&lt;p&gt;We leveled up our &lt;code&gt;docker-compose.yml&lt;/code&gt; to define the entire stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2022-latest
    container_name: mssql_dev
    ports:
      - "1433:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=YourStrong!Passw0rd
      - MSSQL_PID=Developer
    volumes:
      - mssql_data:/var/opt/mssql
    restart: unless-stopped
    networks:
      - backend_network

  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: rabbitmq_dev
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=password
    networks:
      - backend_network

  portfolioservice:
    build:
      context: .
      dockerfile: PortfolioService/Dockerfile
    ports:
      - "8081:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
      - ConnectionStrings__DefaultConnection=Server=mssql_dev;Database=FinWiseDb;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;
      - RabbitMQ__ConnectionString=amqp://user:password@rabbitmq_dev
      - ASPNETCORE_ENVIRONMENT=Development
      - MarketDataService__BaseUrl=http://marketdataservice:8080
    depends_on:
      - rabbitmq
      - sqlserver
      - marketdataservice
    restart: unless-stopped
    networks:
      - backend_network

  transactionservice:
    build:
      context: .
      dockerfile: TransactionService/Dockerfile
    ports:
      - "8082:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
      - ConnectionStrings__DefaultConnection=Server=mssql_dev;Database=FinWiseDb;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;
      - RabbitMQ__ConnectionString=amqp://user:password@rabbitmq_dev
      - ASPNETCORE_ENVIRONMENT=Development
    depends_on:
      - rabbitmq
      - sqlserver
    restart: unless-stopped
    networks:
      - backend_network

  marketdataservice:
    build:
      context: .
      dockerfile: MarketDataService/Dockerfile
    ports:
      - "8083:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
    restart: unless-stopped
    networks:
      - backend_network

  swisshubservice:
    build:
      context: .
      dockerfile: SwissHubService/Dockerfile
    ports:
      - "8084:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
    restart: unless-stopped
    networks:
      - backend_network

  taxservice:
    build:
      context: .
      dockerfile: TaxService/Dockerfile
    ports:
      - "8085:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
      - ConnectionStrings__DefaultConnection=Server=mssql_dev;Database=FinWiseDb;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;
    depends_on:
      - sqlserver
    restart: unless-stopped
    networks:
      - backend_network

volumes:
  mssql_data:

networks:
  frontend_network:
    driver: bridge
  backend_network:
    driver: bridge  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that one file, we now declare:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The five .NET services&lt;/li&gt;
&lt;li&gt;RabbitMQ&lt;/li&gt;
&lt;li&gt;SQL Server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The file handles image builds, port mappings, inter-service networking—everything needed to orchestrate the system locally.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Wins: Why This Matters
&lt;/h3&gt;

&lt;p&gt;This wasn’t just a nice-to-have. The shift brought &lt;strong&gt;real impact&lt;/strong&gt; to our project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One Command to Start It All&lt;/strong&gt;
Just run &lt;code&gt;docker-compose up&lt;/code&gt; and you’re in business. No manual steps. No forgotten dependencies. After you run the command, you should see something like this:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image-5.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkanix5s3s86fpaq6rb01.png" width="700" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero “Works on My Machine” Moments&lt;/strong&gt;
Everyone gets the same setup, every time. It’s like giving your team matching superhero suits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean Isolation&lt;/strong&gt;
No more stepping on each other’s toes. Each service runs in its own neatly contained box, dependency conflicts be gone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smooth On-Ramp to Production&lt;/strong&gt;
These same container images will power our future &lt;strong&gt;Azure Kubernetes Service (AKS)&lt;/strong&gt; deployment. We’re building dev environments that scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In your Docker Desktop, the running stack now looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image-6.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe5gd6gevjre8cknizyc.png" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vylstnen7s257ki8q93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vylstnen7s257ki8q93.png" alt="💡" width="72" height="72"&gt;&lt;/a&gt; Tip:&lt;/strong&gt; I strongly recommend designating &lt;strong&gt;one service as the owner&lt;/strong&gt; responsible for initializing the database. Since our SQL Server now runs in a Docker container with its own volume on the local machine, situations like removing the volume, stopping containers, or switching machines can lead to a fresh database state.&lt;br&gt;&lt;br&gt;
To handle this gracefully, the owning service should automatically &lt;strong&gt;create the database, apply migrations&lt;/strong&gt;, and optionally &lt;strong&gt;seed it with sample data&lt;/strong&gt; at startup. &lt;em&gt;(This is ideal for development only; it is not recommended in production. We will cover this in upcoming articles.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here’s the code we use to apply migrations when the service starts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image-7.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfvmgp8q452y8cd2agev.png" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Which creates this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image-8.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpltqsbv7107d8b7yuwa0.png" width="477" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping Up
&lt;/h3&gt;

&lt;p&gt;We pressed pause on building features to fix the way we build them—and it was worth it. With Docker and Docker Compose in place, our team now enjoys a &lt;strong&gt;repeatable&lt;/strong&gt;, &lt;strong&gt;scalable&lt;/strong&gt;, and, dare we say it, &lt;strong&gt;pleasant&lt;/strong&gt; development experience.&lt;/p&gt;

&lt;p&gt;We’ve swapped chaos for clarity.&lt;br&gt;&lt;br&gt;
And with this solid foundation under us, it’s full steam ahead—onto building features that truly move the needle.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>microservices</category>
      <category>opensource</category>
    </item>
    <item>
      <title>#2 FinWiseNest: Building a Real-Time User Experience with SignalR and Event-Driven Architecture</title>
      <dc:creator>Daniel Hajek</dc:creator>
      <pubDate>Tue, 08 Jul 2025 15:08:42 +0000</pubDate>
      <link>https://dev.to/daniel_hajek_f3e950f9157e/2-finwisenest-building-a-real-time-user-experience-with-signalr-and-event-driven-architecture-54n6</link>
      <guid>https://dev.to/daniel_hajek_f3e950f9157e/2-finwisenest-building-a-real-time-user-experience-with-signalr-and-event-driven-architecture-54n6</guid>
      <description>&lt;p&gt;GitHub: &lt;a href="https://github.com/DCodeWorks/FinWiseNest" rel="noopener noreferrer"&gt;https://github.com/DCodeWorks/FinWiseNest&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our last &lt;a href="https://dev.to/daniel_hajek_f3e950f9157e/forging-finwise-from-architectural-blueprint-to-a-live-full-stack-foundation-56hm-temp-slug-8373890"&gt;post&lt;/a&gt;, we detailed the creation of our application’s core structure. We built a system where the frontend could read data from the backend and create new data. However, the user experience was still based on a simple request-and-response model.&lt;/p&gt;

&lt;p&gt;Today, I want to talk about the next evolution: making our application feel alive and responsive. We’ll explore how we implemented a real-time feature that automatically updates a user’s portfolio moments after they add a new transaction, without requiring a manual page refresh.&lt;/p&gt;

&lt;p&gt;In this article we will focus on the components highlighted in &lt;strong&gt;GREEN&lt;/strong&gt; below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image-2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhp4vzrcce8rty5tzg4av.png" width="800" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding Message Bus with The Strategy Pattern
&lt;/h3&gt;

&lt;p&gt;A key decision was choosing our message bus technology. Our production architecture calls for &lt;strong&gt;Azure Service Bus&lt;/strong&gt;, but using it during development can be slow and adds unnecessary costs. For local development, &lt;strong&gt;RabbitMQ&lt;/strong&gt; running in a Docker container is a perfect alternative: free, fast, and powerful.&lt;/p&gt;

&lt;p&gt;To support both, we implemented the &lt;strong&gt;Strategy Pattern&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of writing code that directly depended on RabbitMQ or Azure, we defined a simple &lt;code&gt;IMessagingService&lt;/code&gt; interface. This interface has one job: publish a message. Then, we created two separate classes that implement this interface:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;RabbitMQMessagingService&lt;/code&gt;: Contains the logic to publish messages to our local RabbitMQ instance.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AzureServiceBusMessagingService&lt;/code&gt;: Contains the logic to publish messages to Azure Service Bus.&lt;/li&gt;
&lt;/ol&gt;
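&lt;p&gt;A minimal sketch of that contract and its two strategies (the method signature is illustrative; the real publishing logic is omitted):&lt;/p&gt;

```csharp
// The strategy contract: one job, publish a message. No broker-specific
// types leak into the rest of the application.
public interface IMessagingService
{
    Task PublishAsync(string queueName, string messageBody);
}

// Strategy 1: local development, publishes to RabbitMQ.
public class RabbitMQMessagingService : IMessagingService
{
    public Task PublishAsync(string queueName, string messageBody)
    {
        // ...publish to the local RabbitMQ instance...
        return Task.CompletedTask;
    }
}

// Strategy 2: production, publishes to Azure Service Bus.
public class AzureServiceBusMessagingService : IMessagingService
{
    public Task PublishAsync(string queueName, string messageBody)
    {
        // ...publish to Azure Service Bus...
        return Task.CompletedTask;
    }
}
```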

&lt;p&gt;In our application’s startup code, we use Dependency Injection to decide which strategy to use based on the environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (builder.Environment.IsDevelopment())
{
    // For local development, use the RabbitMQ implementation
    builder.Services.AddSingleton&amp;lt;IMessagingService, RabbitMQMessagingService&amp;gt;();
}
else
{
    // For production, use the Azure Service Bus version
    builder.Services.AddSingleton&amp;lt;IMessagingService, AzureServiceBusMessagingService&amp;gt;();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This powerful pattern means we can switch between our local message broker and a cloud-native one by changing a single line of code, without altering any of our core business logic.&lt;/p&gt;

&lt;p&gt;Before adding a message bus:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/azure-service-step-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea49lz0x6sidwx9yrsud.png" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and after:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/azure-service-step-2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl41mqod3t301r8avthz3.png" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem: A Disconnected Experience
&lt;/h3&gt;

&lt;p&gt;Our initial setup worked, but it had a limitation. A user would add a new transaction through a form, the data would be sent to our backend &lt;code&gt;TransactionService&lt;/code&gt;, and saved to the database. But the portfolio view on the screen would not change. The user had to manually refresh the page to see their updated holdings.&lt;/p&gt;

&lt;p&gt;This delay creates a disconnected experience. In a modern financial application, users expect to see the results of their actions immediately. Our goal was to bridge this gap and make the UI react instantly to changes happening on the backend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image-1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse7ddwy9a3k5dvggmnd5.png" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/image.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6iaeqfv5462hiylxir2.png" width="652" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Architecture of a Real-Time Flow
&lt;/h3&gt;

&lt;p&gt;To solve this, we needed a way for our server to send a notification directly to the user’s browser. A standard HTTP connection doesn’t allow this. The solution was to create a full, end-to-end asynchronous data flow using two key technologies: &lt;strong&gt;RabbitMQ&lt;/strong&gt; and &lt;strong&gt;SignalR&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here is the high-level plan, which you can visualize in the accompanying diagram:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User Action:&lt;/strong&gt; A user submits a new transaction from the UI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Publishing:&lt;/strong&gt; The &lt;code&gt;TransactionService&lt;/code&gt; saves the transaction and publishes a &lt;code&gt;TransactionCreated&lt;/code&gt; event to a &lt;strong&gt;RabbitMQ&lt;/strong&gt; message queue. This service’s job is now done.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend Processing:&lt;/strong&gt; The &lt;code&gt;PortfolioService&lt;/code&gt;, which is constantly listening to the queue, receives the event. It processes the information and updates the &lt;code&gt;Holdings&lt;/code&gt; table in the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Push Notification:&lt;/strong&gt; This is the new, crucial part. After the database is updated, the &lt;code&gt;PortfolioService&lt;/code&gt; uses a &lt;strong&gt;SignalR Hub&lt;/strong&gt; to broadcast a simple notification, like “PortfolioUpdated,” to all connected web clients.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UI Refresh:&lt;/strong&gt; The React frontend, which has an open connection to the SignalR Hub, receives this notification. This triggers a function that tells Next.js to refresh its data, updating the portfolio display automatically.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/signalr-comunication.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmowr50o2b4facf08ifuz.png" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Architecture is Powerful
&lt;/h3&gt;

&lt;p&gt;This design provides several key advantages that are important for a professional application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decoupling and Scalability:&lt;/strong&gt; The &lt;code&gt;TransactionService&lt;/code&gt; has no knowledge of the user interface or who needs to be updated. It simply announces that an event happened. This allows us to add more services in the future that can react to the same event without changing the original service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responsiveness:&lt;/strong&gt; The user’s initial action of submitting the form is very fast because the backend simply accepts the request and publishes a message. The heavy work of updating the portfolio happens in the background without making the user wait.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Excellent User Experience:&lt;/strong&gt; The final result is a modern, dynamic interface. The application feels interactive and “live,” which builds user trust and satisfaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;By combining our event-driven backend with SignalR for real-time communication, we have moved beyond a simple CRUD (Create, Read, Update, Delete) application. We now have a sophisticated system that provides immediate feedback to the user, reflecting the power and flexibility of our initial architectural decisions.&lt;/p&gt;

&lt;p&gt;With this real-time foundation in place, our next focus will be on securing this entire flow by implementing user authentication and authorization, ensuring that each user’s data and notifications are kept private and secure.&lt;/p&gt;

</description>
      <category>netcore</category>
      <category>azureservices</category>
      <category>eventdrivenarchitect</category>
      <category>financialapplication</category>
    </item>
    <item>
      <title>#1 Forging FinWise: From Architectural Blueprint to a Live Full-Stack Foundation</title>
      <dc:creator>Daniel Hajek</dc:creator>
      <pubDate>Thu, 03 Jul 2025 15:09:41 +0000</pubDate>
      <link>https://dev.to/daniel_hajek_f3e950f9157e/1-forging-finwise-from-architectural-blueprint-to-a-live-full-stack-foundation-4mom</link>
      <guid>https://dev.to/daniel_hajek_f3e950f9157e/1-forging-finwise-from-architectural-blueprint-to-a-live-full-stack-foundation-4mom</guid>
      <description>&lt;p&gt;GitHub: &lt;a href="https://github.com/DCodeWorks/FinWiseNest" rel="noopener noreferrer"&gt;https://github.com/DCodeWorks/FinWiseNest&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every solid software product starts with a strong foundation—not with dozens of features, but with smart decisions about structure and scalability. Over the past few weeks, I’ve been working on exactly that for &lt;strong&gt;FinWise&lt;/strong&gt;, my new wealth management platform for the Swiss market.&lt;/p&gt;

&lt;p&gt;This is a quick look at what I’ve built so far: from shaping the architecture to creating a real, working “vertical slice” of the application.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl32whbqq2nf9u4ixb668.png" alt="🧱" width="72" height="72"&gt; &lt;strong&gt;Starting with Architecture First&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before writing any production code, I focused on designing a scalable, reliable, and flexible architecture. I chose a &lt;strong&gt;decoupled microservices approach&lt;/strong&gt; hosted on &lt;strong&gt;Azure Kubernetes Service (AKS)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This wasn’t just a technical preference—it was a strategic choice. I wanted to ensure that each service, like &lt;code&gt;PortfolioService&lt;/code&gt; or &lt;code&gt;TransactionService&lt;/code&gt;, could be built, updated, and scaled on its own, without affecting the others.&lt;/p&gt;

&lt;p&gt;For the backend, I’m using &lt;strong&gt;.NET 9&lt;/strong&gt;, built around an &lt;strong&gt;event-driven design&lt;/strong&gt; with &lt;strong&gt;Azure Service Bus&lt;/strong&gt;. This means that when something like a transaction happens, the &lt;code&gt;TransactionService&lt;/code&gt; just sends an event. Any other service that cares (for example, &lt;code&gt;PortfolioService&lt;/code&gt;) can respond to it. This keeps the system loosely coupled and easier to maintain.&lt;/p&gt;

&lt;p&gt;For data storage, I’ve gone with a &lt;strong&gt;polyglot persistence&lt;/strong&gt; strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Azure SQL&lt;/strong&gt; for structured and transactional data, like portfolios and user transactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure Cosmos DB&lt;/strong&gt; for high-volume market data and time-series events&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure Cache for Redis&lt;/strong&gt; for caching, to keep things fast and responsive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The whole infrastructure is managed with &lt;strong&gt;Terraform&lt;/strong&gt;, so I can recreate or update cloud resources with code. It’s clean, consistent, and future-proof.&lt;/p&gt;

&lt;p&gt;Below is the overall architecture at this phase of the project; it will keep evolving as the project grows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/07/finwisenest_architecture1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3u4h7juwh48qylzpvqz.png" width="800" height="899"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6npxlgkg7tflsm2otu5.png" alt="🚀" width="72" height="72"&gt; &lt;strong&gt;The Vertical Slice: Making It Real&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once the architecture was in place, I wanted to test the full flow with a real feature. I decided to build a &lt;strong&gt;“vertical slice”&lt;/strong&gt; — one complete feature that connects the frontend, backend, and database.&lt;/p&gt;

&lt;p&gt;I chose the &lt;strong&gt;Portfolio View&lt;/strong&gt; as the first slice. Here’s how I put it together:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Frontend First (Next.js 14 + Tailwind CSS):&lt;/strong&gt;
I started by designing the UI using mock data. This helped me focus on layout, logic, and design, without worrying about backend code yet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Scaffolding (&lt;code&gt;PortfolioService&lt;/code&gt; in .NET 9):&lt;/strong&gt;
Next, I created a microservice with a basic API endpoint that returned the same mock data. This gave the frontend a real URL to call, which made things feel much more “alive.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full-Stack Integration:&lt;/strong&gt;
Then came the exciting part—connecting the Next.js frontend to the backend service. After setting up CORS, the frontend was able to fetch live data from the backend running locally. It was the first moment everything came together.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adding a Real Database (Entity Framework Core):&lt;/strong&gt;
Finally, I replaced the mock data with real data. I used &lt;strong&gt;Entity Framework Core&lt;/strong&gt; to define the data models and generate the SQL schema. I updated the &lt;code&gt;PortfolioController&lt;/code&gt; to query the database using a &lt;code&gt;DbContext&lt;/code&gt;, and now the portfolio data is fully dynamic.&lt;/li&gt;
&lt;/ol&gt;
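&lt;p&gt;Step 4 boils down to a controller along these lines (a sketch; &lt;code&gt;PortfolioDbContext&lt;/code&gt;, the route, and the property names are illustrative, not the exact code from the repo):&lt;/p&gt;

```csharp
[ApiController]
[Route("api/portfolio")]
public class PortfolioController : ControllerBase
{
    private readonly PortfolioDbContext _db;

    public PortfolioController(PortfolioDbContext db)
    {
        _db = db;
    }

    // GET api/portfolio/holdings: return the user's current holdings
    [HttpGet("holdings")]
    public async Task&amp;lt;IActionResult&amp;gt; GetHoldings()
    {
        var holdings = await _db.Holdings.AsNoTracking().ToListAsync();
        return Ok(holdings);
    }
}
```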




&lt;h3&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j4n1t5ohpm4qmn11ly9.png" alt="✅" width="72" height="72"&gt; &lt;strong&gt;Where Things Stand Now&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As of today, FinWiseNest has a working full-stack base: a user can load the app, which calls a .NET microservice, which then fetches data from a SQL database and shows the user’s portfolio. It’s a simple view, but it proves that the architecture and tech stack work well together.&lt;/p&gt;

&lt;p&gt;This vertical slice validated the main design choices and gives me a solid starting point for everything that’s coming next.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9h4a8f3xx14t0j623nh6.png" alt="🔜" width="72" height="72"&gt; &lt;strong&gt;What’s Next&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;With the foundation in place, I’m ready to take the next steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build and connect the &lt;code&gt;TransactionService&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Complete the event-driven flow&lt;/li&gt;
&lt;li&gt;Add charts and data visualization to the frontend&lt;/li&gt;
&lt;li&gt;Set up proper CI/CD pipelines for faster and safer deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The technical base is done. Now comes the fun part—expanding the platform and building real value for users.&lt;/p&gt;

&lt;p&gt;Thanks for reading! I’ll keep sharing updates as FinWiseNest evolves.&lt;/p&gt;

</description>
      <category>uncategorized</category>
    </item>
    <item>
      <title>Optimize .NET API Performance with SQL Server</title>
      <dc:creator>Daniel Hajek</dc:creator>
      <pubDate>Wed, 07 May 2025 10:37:37 +0000</pubDate>
      <link>https://dev.to/daniel_hajek_f3e950f9157e/optimize-net-api-performance-with-sql-server-1lhd</link>
      <guid>https://dev.to/daniel_hajek_f3e950f9157e/optimize-net-api-performance-with-sql-server-1lhd</guid>
      <description>&lt;p&gt;In this article we’re going to improve performance of an .Net API application using sql server database. We’re going to use milions of rows to simulate real huge catalog of products, say around 5 millions of products. To make it even more interesting, we’re going to simulate between 10-20 concurrent users browsing our website and looking for the products.&lt;/p&gt;

&lt;p&gt;When dealing with products in retail and e-commerce, customers need to filter by title, description, price, or category (say, shoes). Our goal is to ensure that even with a huge amount of data, our users get very good performance and a pleasant experience. These techniques also matter from a cost perspective: whenever we are careless about queries, our system’s resource usage can skyrocket, and cloud providers like Azure or AWS tie pricing directly to component usage.&lt;/p&gt;

&lt;p&gt;First, I’ll explain how you can set up this project on your local computer. Then I’ll start with a very basic baseline project, without any optimizations, and we will gather insights using the &lt;a href="https://en.wikipedia.org/wiki/K6_(software)" rel="noopener noreferrer"&gt;k6&lt;/a&gt; testing tool made by Grafana. Our test scenarios will replicate customer actions like browsing the products and filtering by name, description, or price, but we will also test admin actions like removing a product from the catalog or creating and updating existing products.&lt;/p&gt;

&lt;p&gt;Then, step by step, we will optimize our SQL database and C# logic to make sure we stay within our thresholds and limits. At each step we will spend some time testing the outcome and analyzing the numbers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech stack
&lt;/h3&gt;

&lt;p&gt;Below is the tech stack and the tools we’re going to use in this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;.NET 9&lt;/li&gt;
&lt;li&gt;SQL Server Express / Developer Edition&lt;/li&gt;
&lt;li&gt;Entity Framework Core 9&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Redis" rel="noopener noreferrer"&gt;Redis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;K6 (Grafana) testing tool&lt;/li&gt;
&lt;li&gt;Visual Studio 2022&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Development
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;You can clone my GitHub repository at any time and check out the final project &lt;a href="https://github.com/DCodeWorks/CatalogX" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, let’s create a clean architecture, splitting the whole project into layers like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxlks1mufl6e8hkucbb6.png" width="570" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Domain&lt;/strong&gt; project contains only the Products.cs class, which represents our SQL table and is used mainly by EF Core for CRUD operations. I think it is always a good idea to use a domain layer and keep our core logic separate from the domain.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Infrastructure&lt;/strong&gt; project contains the SQL database context. Having a dedicated project for infrastructure-like concerns is important: it lets us easily swap, for example, one database for another, and it gives us clear visibility into where the infrastructure lives.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Seeder&lt;/strong&gt; application is used to seed, or import, data into our database. This is key for this project because we want to test a realistic scenario with a huge amount of data. The Seeder uses the database context defined in the Infrastructure layer to communicate with the database; that is this project’s only responsibility.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;API&lt;/strong&gt; project contains the core logic and the endpoints. It is worth spending more time on this project. The endpoints we’re going to use are very basic CRUD operations that create, update, delete, and read an entity such as a product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-4.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkv3io5dhiz5g1b6bgdx0.png" width="330" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SQL Server will be used to store our entities permanently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial Setup
&lt;/h3&gt;

&lt;p&gt;If you clone my &lt;a href="https://github.com/DCodeWorks/CatalogX" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; project, you first need to run the EF Core migrations so that the Products table is created in your SQL database.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: please do not use a Dockerized SQL Server database. Later in the article we will implement replication, and most Docker versions of SQL Server don’t provide this functionality. It is strongly recommended to use SQL Server Express or Developer Edition for this project.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you have installed a SQL Server instance, your &lt;strong&gt;connection string&lt;/strong&gt; should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Server=localhost;Database=dbName;UserId=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;MultipleActiveResultSets=true;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure you have all the &lt;strong&gt;EF Core packages&lt;/strong&gt; installed in the Infrastructure and API projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft.EntityFrameworkCore&lt;/li&gt;
&lt;li&gt;Microsoft.EntityFrameworkCore.Design&lt;/li&gt;
&lt;li&gt;Microsoft.EntityFrameworkCore.SqlServer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, run the migration commands from your Package Manager Console. Make sure the default project dropdown is set to Infrastructure, then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Add-Migration InitialCreate -StartupProject CatalogX.API

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Update-Database -StartupProject CatalogX.API

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If there are no errors, your database should be created, and you can log in to the SQL Server instance using &lt;a href="https://learn.microsoft.com/en-us/ssms/download-sql-server-management-studio-ssms" rel="noopener noreferrer"&gt;SQL Server Management Studio&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-6.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febjf0atv3k3bgzuipis7.png" width="475" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to seed our 5 million products by running the Seeder application. The logic is very basic: we loop, creating a batch of products on each iteration and generating random data for each product (price, title, description, and category). Run the application and wait for it to complete; this could take 5 or 10 minutes depending on your PC’s performance.&lt;/p&gt;
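&lt;p&gt;The batching idea can be sketched like this (in JavaScript for brevity; the real Seeder is a C# console app using EF Core, and all names and values here are illustrative):&lt;/p&gt;

```javascript
// Sketch of the Seeder's batching idea. The real Seeder is C# + EF Core;
// makeProduct and the batch size below are illustrative, not the repo's code.
function makeProduct(i) {
  return {
    name: `Product ${i}`,
    description: `Description for product ${i}`,
    price: Math.round(Math.random() * 99900 + 100) / 100, // 1.00 - 1000.00
    category: `Category ${(i % 500) + 1}`,                // 500 fake categories
  };
}

// Yield products in batches so each save round-trip stays small.
function* batches(total, batchSize) {
  for (let start = 0; start < total; start += batchSize) {
    const batch = [];
    for (let i = start; i < Math.min(start + batchSize, total); i++) {
      batch.push(makeProduct(i));
    }
    yield batch; // in the real Seeder: AddRange(batch); SaveChanges();
  }
}

let count = 0;
for (const b of batches(10000, 1000)) count += b.length; // demo with 10k, not 5M
```

&lt;p&gt;Batching matters because inserting 5 million rows one at a time would mean 5 million round trips to the database; with batches, each save sends a whole chunk at once.&lt;/p&gt;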

&lt;p&gt;Done? Alright, now you should have 5 million fake products in your database, like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-7.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonwmqxzkyx4v8h2wjp5a.png" width="541" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Optional: InfluxDB and Grafana setup&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you want to see visual graphs instead of plain numbers when testing, you need to set up &lt;a href="https://www.influxdata.com/" rel="noopener noreferrer"&gt;InfluxDB&lt;/a&gt;, a time-series database used by Grafana to visualize the precious data collected while running the tests. Basically, each time an endpoint is hit and something happens, like a check or an eventual failure, we capture it in InfluxDB. Grafana then reads from this database and presents all the insights in a nice way.&lt;/p&gt;

&lt;p&gt;To set up InfluxDB and Grafana, the easiest way is to use Docker images. You can use the docker-compose yml file from my GitHub page here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/DCodeWorks/CatalogX/blob/main/CatalogX/docker-compose.yml" rel="noopener noreferrer"&gt;compose-docker.yml&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the yml file also enables Redis in your Docker environment. We will talk about Redis later in this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For testing we will use the k6 tool, so please install it from &lt;a href="https://grafana.com/docs/k6/latest/set-up/install-k6/" rel="noopener noreferrer"&gt;this page&lt;/a&gt; for your operating system.&lt;/p&gt;

&lt;p&gt;Now we’re ready to start phase 1!&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1 (Baseline – No Optimizations)
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Before you start testing with k6, I would suggest first becoming comfortable with the API calls using Postman. This will give you context for the k6 JS script we will see in this chapter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now we can start deep diving into the actual testing using the k6 tool. k6 scripts are written in JavaScript, and the tool is very flexible and powerful when you need to simulate real-world scenarios and high traffic on your local computer. You can think of a k6 JS script as automation of your Postman calls: instead of a nice UI and manual clicks, you automate everything with JavaScript code.&lt;/p&gt;

&lt;p&gt;You can find the final JavaScript code on my &lt;a href="https://github.com/DCodeWorks/CatalogX/blob/main/tests/random-test.js" rel="noopener noreferrer"&gt;GitHub page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What this script does is simulate six different user-like scenarios (actually seven; I added another one later in this article), with multiple virtual users calling the product API endpoints with query params or JSON payloads in random order. For example, the scenario that fetches products using pagination is defined like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
paginationTest: {
    executor: 'constant-vus',
    exec: 'pagination',
    vus: 10,
    duration: '1m',
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this scenario we want to measure performance numerically, and the best way is to use percentiles. In these tests I focus on the p95 percentile, which tells us how well our system is doing for 95% of the users, while the remaining 5% may exceed our set limit. For example, for the pagination test I expect this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
'http_req_duration{scenario:paginationTest}': ['p(95)&amp;lt;300']

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means that 95% of the users should receive the paginated products in less than 300ms. I can tolerate that 5% of the users will get a response slower than 300ms.&lt;/p&gt;
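&lt;p&gt;To make the p95 idea concrete, here is how a nearest-rank 95th percentile could be computed from a list of request durations. k6 computes this for you, so this sketch is purely illustrative:&lt;/p&gt;

```javascript
// Nearest-rank percentile: sort the samples, then take the value at
// ceil(p/100 * n) - 1. k6 reports this automatically; this only shows
// what a threshold like "p(95) < 300" actually measures.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// 20 request durations in ms: with p95, only the single worst sample
// is allowed to exceed the threshold.
const durations = [
  120, 135, 140, 150, 160, 170, 180, 190, 200, 210,
  220, 230, 240, 250, 260, 270, 280, 290, 295, 900,
];
const p95 = percentile(durations, 95); // 295, so p(95)<300 passes
```

&lt;p&gt;Note how the one 900ms outlier doesn’t fail the threshold: that is exactly the 5% we agreed to tolerate.&lt;/p&gt;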

&lt;p&gt;Each exported function defines an API call and performs some basic logic: random IDs, random page access, a body payload for product creation, and so on. For example, for pagination I decided to generate a page number between 1 and 500 with a page size of 20, meaning I want at most 20 products per page.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
const page = Math.floor(Math.random() * 500) + 1;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each call we want to check the outcome and capture the result. This is where the check() function provided by k6 comes in handy. In our pagination test we want to check that the response has status 200 and that we returned some products.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
check(res, {
    'pagination 200': (r) =&amp;gt; r.status === 200,
    'has items': (r) =&amp;gt; {
        if (r.status !== 200) return false;
        const body = r.json();
        return Array.isArray(body.data) &amp;amp;&amp;amp; body.data.length &amp;gt; 0;
    },
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The other tests follow similar logic, but each focuses on a specific endpoint.&lt;/p&gt;

&lt;p&gt;So, let’s recap where we’ve got to so far. We have an API with the endpoints ready, we have a SQL Server instance with the Products table, and you should have Docker containers running InfluxDB and Grafana. I hope you have also tried some API calls with Postman by now. We’re ready to execute our script with k6, using the command below from your Visual Studio 2022 Developer PowerShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
k6 run --out influxdb=http://admin:adminpass@localhost:8086/k6 random-test.js

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command tells k6 to execute our script &lt;strong&gt;&lt;em&gt;random-test.js&lt;/em&gt;&lt;/strong&gt; and write all the results into the InfluxDB database.&lt;/p&gt;

&lt;p&gt;As we can see, the threshold results look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-9.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cy5ljyk47s1eq3o9iwd.png" width="520" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What these numbers tell us is that most of the GET requests are extremely slow. Not only have we missed the range we consider acceptable, we are far away from it. The worst is SearchTest, so I would like to focus on that one first.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2 – SQL Server Full-Text Search
&lt;/h3&gt;

&lt;p&gt;SQL Server has a very powerful feature called &lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/search/full-text-search?view=sql-server-ver16" rel="noopener noreferrer"&gt;Full-Text Search&lt;/a&gt;. You can read the official Microsoft documentation, but let me briefly explain what it does.&lt;/p&gt;

&lt;p&gt;SQL Server &lt;strong&gt;Full-Text Search&lt;/strong&gt; is a tool that helps you &lt;strong&gt;quickly find words or phrases inside large text fields&lt;/strong&gt; , like product descriptions, articles, or documents.&lt;/p&gt;

&lt;p&gt;It would deserve a whole post to explain in detail how Full-Text Search works behind the scenes. If you want to deep dive into this, check this &lt;a href="https://www.sqlshack.com/hands-full-text-search-sql-server/" rel="noopener noreferrer"&gt;article&lt;/a&gt; on sqlshack.com.&lt;/p&gt;

&lt;p&gt;As it stands now, we do the following in the API’s GET products endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
if (!string.IsNullOrEmpty(queryParams.Search))
{
    query = query.Where(p =&amp;gt; p.Name.Contains(queryParams.Search) ||
    p.Description.Contains(queryParams.Search));
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This translates into SQL syntax like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
WHERE Name LIKE '%searchTerm%'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This results in a very slow query, because the leading wildcard forces SQL Server to scan every row in our table looking for searchTerm inside the column.&lt;/p&gt;

&lt;p&gt;Now let’s change this to use the &lt;strong&gt;full-text search&lt;/strong&gt; feature. We need to create a full-text catalog in our database with a script like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
-- Create a full-text catalog if it doesn’t exist
IF NOT EXISTS (SELECT * FROM sys.fulltext_catalogs WHERE name = 'ProductCatalog')
BEGIN
    CREATE FULLTEXT CATALOG ProductCatalog AS DEFAULT;
END
GO

-- Create a full-text index on the Products table
IF NOT EXISTS (SELECT * FROM sys.fulltext_indexes WHERE object_id = OBJECT_ID('dbo.Products'))
BEGIN
    CREATE FULLTEXT INDEX ON dbo.Products(Name, Description)
    KEY INDEX PK_Products -- Replace with the actual name of your primary key index
    ON ProductCatalog;
END
GO

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can use this feature and change our C# code from this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 if (!string.IsNullOrEmpty(queryParams.Search))
 {
     query = query.Where(p =&amp;gt; p.Name.Contains(queryParams.Search) ||
     p.Description.Contains(queryParams.Search));
 }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
if (!string.IsNullOrEmpty(queryParams.Search))
{
    var quotedSearch = $"\"{queryParams.Search}\"";
    // This leverages SQL Server's CONTAINS function behind the scenes
    query = query.Where(p =&amp;gt; EF.Functions.Contains(p.Name, quotedSearch)
                             || EF.Functions.Contains(p.Description, quotedSearch));
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;EF.Functions.Contains&lt;/strong&gt; changes the SQL query to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
SELECT * FROM Products WHERE CONTAINS(Description, 'some description');

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s run the tests again and check the SearchTest thresholds:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-10.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy4pex2mqo96i767mqr6.png" width="472" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We went from 40s to 1.3s. Not too bad! Our threshold still isn’t met, as I set it to less than 500ms. But considering 5 million products, 95% of the users now get their products in less than 1.3s. Quite impressive!&lt;/p&gt;

&lt;p&gt;If you remember our first tests, the second-worst p95 timing came from the category filtering test. Let’s focus on that now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2 – Non Clustered Indexing
&lt;/h3&gt;

&lt;p&gt;Imagine a user who wants to see the products in a specific category. In our C# code, the query happens here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
if (!string.IsNullOrEmpty(queryParams.Category))
    query = query.Where(p =&amp;gt; p.Category == queryParams.Category);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This translates into a database query like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
SELECT * 
FROM Products 
WHERE Category = @Category;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means the database engine will scan the whole table looking for all products whose category equals @Category. This is usually quite bad from a performance point of view, especially when the number of rows is huge. What we want instead is to apply a non-clustered index for the queries we know in advance will be used by end users.&lt;/p&gt;

&lt;p&gt;What is a &lt;strong&gt;non-clustered index&lt;/strong&gt;? You can think of it like the table of contents of a book: pointers are stored and used to quickly access data in the database. A &lt;a href="https://www.geeksforgeeks.org/introduction-of-b-tree-2/" rel="noopener noreferrer"&gt;B-Tree&lt;/a&gt; is used in most cases. B-trees are very popular in relational databases, and the algorithm is worth reading about in more detail.&lt;/p&gt;
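&lt;p&gt;The difference between a table scan and an index lookup can be sketched with a toy in-memory model (a Map rather than a real B-tree, so this is not how SQL Server works internally, but the access pattern is the same idea):&lt;/p&gt;

```javascript
// Toy model: the "table" is an array of rows; the "index" maps a key to
// its matching rows so we can jump straight to them instead of scanning.
const products = [];
for (let i = 0; i < 100000; i++) {
  products.push({ id: i, name: `Product ${i}`, category: `Category ${i % 500}` });
}

// Table scan: touches every row, which is what WHERE Category = @Category
// does without an index.
function scanByCategory(rows, category) {
  return rows.filter((r) => r.category === category);
}

// "Non-clustered index": built once, then each lookup is a single probe
// plus only the matching rows.
function buildCategoryIndex(rows) {
  const index = new Map();
  for (const r of rows) {
    if (!index.has(r.category)) index.set(r.category, []);
    index.get(r.category).push(r);
  }
  return index;
}

const categoryIndex = buildCategoryIndex(products);
const viaScan = scanByCategory(products, 'Category 201');
const viaIndex = categoryIndex.get('Category 201') ?? [];
// Both return the same 200 rows; the index avoids visiting all 100000.
```

&lt;p&gt;This also makes the trade-off visible: the index costs extra memory and has to be maintained on every write, which is exactly the caveat below.&lt;/p&gt;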

&lt;p&gt;So let’s create a non-clustered index on the Category and Price columns, including Name to cover the case where we want to show the user the product’s name faster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CREATE NONCLUSTERED INDEX IX_Products_Category_Price ON dbo.Products(Category, Price) INCLUDE (Name);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s run the test again and compare.&lt;/p&gt;

&lt;p&gt;Before the non-clustered index creation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-11.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78vogsk9w0jzz6c0p31j.png" width="520" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and after&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-12.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf4t9av2sn81n8hk3wh0.png" width="484" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have improved performance tenfold for 95% of the users. That is quite impressive for a single non-clustered index.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Indexes can improve read performance, but they consume additional memory and storage, and they can slow down writes to the database because every index has to be updated on each insert or update. So be mindful when using them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Phase 3 – Redis caching
&lt;/h3&gt;

&lt;p&gt;Our endpoint that filters products and queries the database implements pagination. Without pagination it would be impossible to run this kind of query, because the data, in our case the products, would not come back in time. The reason is that .NET applies timeouts at the connection level, which results in timeout errors (the default timeout is 30 seconds).&lt;/p&gt;

&lt;p&gt;With pagination, however, the performance is still not great. Imagine you have an e-commerce website where some products are very popular, or even a category of top sellers so popular that every customer reaches that page every time. Does it make sense to query your database every time and serialize the result to JSON? Of course not. A much better solution is to use some kind of caching system.&lt;/p&gt;

&lt;p&gt;A caching system is a fast, &lt;strong&gt;in-memory&lt;/strong&gt; (RAM) store. It is &lt;strong&gt;key-value&lt;/strong&gt; based, which means you can quickly access content if you know the exact key. If you are familiar with the C# dictionary, it is the same principle.&lt;/p&gt;

&lt;p&gt;There are several benefits to using a caching system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;improved performance&lt;/li&gt;
&lt;li&gt;reduced database load&lt;/li&gt;
&lt;li&gt;lower latency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although there are many tools we could use, the most famous and popular is the &lt;a href="https://en.wikipedia.org/wiki/Redis" rel="noopener noreferrer"&gt;Redis&lt;/a&gt; database. It is an in-memory key-value database and is very well integrated with the .NET ecosystem. In the setup section we mentioned that Redis is enabled in Docker through the docker-compose yml file, so you should already have it up and running.&lt;/p&gt;

&lt;p&gt;I won’t explain how to inject Redis into .NET and set up the connection string, because you can simply clone my GitHub project or follow the &lt;a href="https://redis.io/docs/latest/develop/clients/dotnet/" rel="noopener noreferrer"&gt;Redis documentation for .NET&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So how are we going to use Redis in our scenario?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We assume that some categories and pages are the most popular. Say our most popular products sit on page 1 and are linked to Category 201. This means we know exactly what our cache key should look like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
var cacheKey = $"products:adv:pg={queryParams.PageNumber}:sz={queryParams.PageSize}"
         + (string.IsNullOrEmpty(queryParams.Category) ? "" : $":cat={queryParams.Category}")
         + (string.IsNullOrEmpty(queryParams.Search) ? "" : $":q={queryParams.Search}");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even if the system changes and other products become the most popular, this scheme still works well. Each time a customer requests a given page and category, the key and its value are saved in the cache to be used later. If the request is completely new, the key doesn’t exist yet, we miss the cache, and we need to read from the SQL database (this is called a &lt;strong&gt;Cold Cache&lt;/strong&gt;).&lt;/p&gt;
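&lt;p&gt;This miss-then-populate flow is the classic cache-aside pattern. It can be sketched with a plain Map standing in for Redis (the real code is C#; the function names here are illustrative):&lt;/p&gt;

```javascript
// Cache-aside with a Map standing in for Redis. On a miss we "query the
// database", store the result under the key, and return it; on a hit we
// skip the database entirely.
const cache = new Map();
let dbQueries = 0;

function queryDatabase(pageNumber, pageSize, category) {
  dbQueries++; // stands in for the slow EF Core query
  return { items: [`${category} page ${pageNumber}`], pageSize };
}

function getProducts(pageNumber, pageSize, category) {
  const cacheKey = `products:adv:pg=${pageNumber}:sz=${pageSize}:cat=${category}`;
  if (cache.has(cacheKey)) return cache.get(cacheKey);          // warm cache
  const result = queryDatabase(pageNumber, pageSize, category); // cold cache
  cache.set(cacheKey, result);
  return result;
}

getProducts(1, 20, 'Category 201'); // miss: hits the database
getProducts(1, 20, 'Category 201'); // hit: served from the cache
// dbQueries is 1 here: the second call never touched the database.
```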

&lt;p&gt;If the request was already performed before, we read from the fast in-memory Redis database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
var cached = await db.StringGetAsync(cacheKey);
if (cached.HasValue)
{
    _logger.LogInformation("Cache hit for {Key}", cacheKey);
    return Content(cached, "application/json");
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is our current implementation with Redis in a sequence diagram:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cold Cache&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-13.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrpzozzns7ngm5v9ub55.png" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Warm Cache&lt;/strong&gt; &lt;strong&gt;(Hot Cache)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-15.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fas4z2ai97exri95432.png" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But there is a gap in our flow. What happens if we add, update, or delete a product in our database? This would lead to an inconsistency, and the data wouldn’t be updated: customers would keep receiving the old products, because the key doesn’t change and the products would still be read from Redis even after the database updates.&lt;/p&gt;

&lt;p&gt;To avoid this, we can invalidate the cache every time there is an update to the catalog. Say we insert a new product into the Products table using the POST endpoint in our product controller. In this case, all the keys linked to products are deleted from Redis, and the next request for paginated results will add a fresh key and value with the most up-to-date content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
var server = _redis.GetServer(_redis.GetEndPoints()[0]);
foreach (var key in server.Keys(pattern: "products:adv:*"))
{
    await _redis.GetDatabase().KeyDeleteAsync(key);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same logic needs to be added to UPDATE and DELETE endpoints.&lt;/p&gt;
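&lt;p&gt;The invalidation step can be sketched the same way: after any write, drop every cached entry whose key matches the products prefix, so the next read repopulates the cache with fresh data (again using a Map in place of Redis):&lt;/p&gt;

```javascript
// Pattern-based invalidation: on any write, delete every cached key that
// starts with the products prefix, mirroring what the C# code does with
// server.Keys(pattern: "products:adv:*") and KeyDeleteAsync.
const cache = new Map([
  ['products:adv:pg=1:sz=20:cat=Category 201', { items: ['stale'] }],
  ['products:adv:pg=2:sz=20', { items: ['stale'] }],
  ['other:key', { items: ['unrelated'] }],
]);

function invalidateProductCache(cache) {
  for (const key of [...cache.keys()]) {
    if (key.startsWith('products:adv:')) cache.delete(key);
  }
}

invalidateProductCache(cache); // called from the POST/PUT/DELETE handlers
// Only the unrelated key survives; product pages will be re-cached on demand.
```

&lt;p&gt;Deleting by prefix is coarse (one write flushes every cached page), but it guarantees consistency, which is the right trade-off for a catalog where writes are far rarer than reads.&lt;/p&gt;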

&lt;p&gt;OK, so now we have an in-memory system, consistent data when the product catalog changes, and reduced database load. This last point is very important because it allows companies to &lt;strong&gt;reduce infrastructure costs&lt;/strong&gt;: database load is usually tied to pricing on Azure or AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lets do some tests.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To test the caching system, we need to tweak our k6 JavaScript scenarios a little. First, let’s add this scenario:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
cacheTest: {
    executor: 'constant-vus',
    exec: 'cacheTest',
    vus: 5,
    duration: '50s',
    startTime: '0s',
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, &lt;strong&gt;duration&lt;/strong&gt; and &lt;strong&gt;startTime&lt;/strong&gt; are 50s and 0s respectively. What I’m doing here is scheduling cacheTest to run before the CREATE/UPDATE/DELETE operations, so we don’t invalidate the cache in the middle of the test.&lt;/p&gt;
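&lt;p&gt;The staggering can be seen by comparing startTime and duration across scenarios. A minimal sketch (the write scenario’s values here are illustrative, not the exact ones from random-test.js):&lt;/p&gt;

```javascript
// Staggering k6 scenarios so cacheTest finishes before any write scenario
// can invalidate the cache. createTest's timing is an assumed example.
const scenarios = {
  cacheTest:  { executor: 'constant-vus', exec: 'cacheTest',  vus: 5, duration: '50s', startTime: '0s' },
  createTest: { executor: 'constant-vus', exec: 'createTest', vus: 2, duration: '30s', startTime: '60s' },
};

const seconds = (s) => Number(s.replace(/s$/, ''));
const cacheEnds   = seconds(scenarios.cacheTest.startTime) + seconds(scenarios.cacheTest.duration);
const writesBegin = seconds(scenarios.createTest.startTime);
// cacheEnds (50) < writesBegin (60): no write runs while cacheTest measures hits.
```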

&lt;p&gt;Let’s implement the function that tests the paginated endpoint, always hitting it with the same pageNumber, pageSize, and Category.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
export function cacheTest() {
    const url = `${API_BASE}`
        + '?PageNumber=1'
        + '&amp;amp;PageSize=20'
        + '&amp;amp;Category=Category%201';

    const res1 = http.get(url, { tags: { scenario: 'cacheTest' } });

    check(res1, {
        'cacheTest first fetch status 200': (r) =&amp;gt; r.status === 200
    });

    sleep(0.2);

    const res2 = http.get(url, { tags: { scenario: 'cacheTest' } });
    check(res2, {
        'cacheTest second fetch status 200': (r) =&amp;gt; r.status === 200,
        'cacheTest fast response &amp;lt;50ms': (r) =&amp;gt; r.timings.duration &amp;lt; 50
    });

    sleep(0.2);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function always hits the same endpoint with parameters like:&lt;/p&gt;

&lt;p&gt;PageNumber=1&amp;amp;PageSize=20&amp;amp;Category=Category%201&lt;/p&gt;

&lt;p&gt;and we check two scenarios:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;High latency&lt;/strong&gt; on the first request, when we hit the database and the response time should be quite high (around a second)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis-served content&lt;/strong&gt; on the second request, which should have very &lt;strong&gt;low latency&lt;/strong&gt;, around 50-100ms&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s run the tests and check the results together:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://danielhajek.wordpress.com/wp-content/uploads/2025/05/image-14.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgpgfob59oss8uloa0jq.png" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s compare paginationTest and cacheTest. These two tests give us an idea of what is happening after the caching implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pagination&lt;/strong&gt; latency remains quite high. That’s because the pagination test always hits different pages and filter parameters, so we only ever see a &lt;strong&gt;cold cache&lt;/strong&gt;. This means that users accessing random, unpopular products will still wait a while for their products to show.&lt;/p&gt;

&lt;p&gt;But for &lt;strong&gt;cacheTest&lt;/strong&gt; the situation improved considerably. An important indicator of the improvement is that the average request time dropped under 100ms. The max is still high, at 1.08s; this is due to the cold cache, since the first request hits the database. Over the one-minute test, &lt;strong&gt;&lt;em&gt;the p95 percentile is under 300ms&lt;/em&gt;&lt;/strong&gt;, meaning 95% of the users would receive the products in under 300ms, which is quite good and acceptable. With pagination alone, 95% of customers would need to wait almost 2 seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this post we introduced important concepts, techniques and tools that can help you if you are concerned about the performance of your system, or if your traffic is growing and evolving.&lt;/p&gt;

&lt;p&gt;These topics matter most in distributed systems, where performance, latency, throughput and data consistency are crucial to ensure the system can scale and maintain acceptable performance even as traffic and data load increase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What next?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even though we have improved performance, there is still room to make our system even better. &lt;strong&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/replication/sql-server-replication?view=sql-server-ver16" rel="noopener noreferrer"&gt;Replication&lt;/a&gt;&lt;/strong&gt;, for example, can be very beneficial if your system performs many writes, or if you want to distribute your data across different nodes and protect it against failures. This keeps your system reliable even when errors occur. Replication also improves performance, because we can split reads and writes across two different databases: one used only for reading, the other only for writing.&lt;/p&gt;
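&lt;p&gt;One way to picture the read/write split that replication enables: route SELECT-style queries to a read-only replica and everything else to the primary. A hypothetical sketch, where &lt;code&gt;'primary'&lt;/code&gt; and &lt;code&gt;'replica'&lt;/code&gt; are placeholder connection labels:&lt;/p&gt;

```javascript
// Read/write splitting sketch: reads go to a replica, writes to the primary.
// 'primary' and 'replica' are placeholder connection labels, not real config.
function routeQuery(sql) {
  // Treat anything that starts with SELECT as a read; everything else
  // (INSERT/UPDATE/DELETE/DDL) must go to the primary for consistency.
  const isRead = /^\s*select\b/i.test(sql);
  return isRead ? 'replica' : 'primary';
}

console.log(routeQuery('SELECT * FROM Products WHERE Category = @c')); // replica
console.log(routeQuery('UPDATE Products SET Price = @p WHERE Id = @id')); // primary
```

&lt;p&gt;In a real system this routing lives in the data-access layer or a proxy, and you also have to account for replication lag when a user reads data they just wrote.&lt;/p&gt;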

&lt;p&gt;SQL Server also supports &lt;strong&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/relational-databases/partitions/partitioned-tables-and-indexes?view=sql-server-ver16" rel="noopener noreferrer"&gt;partitioning&lt;/a&gt;&lt;/strong&gt;, which lets us physically divide a table into smaller parts. In our case, we could physically split the products table by ProductID while it logically remains a single table. This speeds up our queries even further, because a query that filters on the partitioning column runs only against the relevant partitions.&lt;/p&gt;
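&lt;p&gt;Conceptually, a range partition function maps each ProductID to one physical partition, so a query that filters on ProductID only touches the partitions its range covers. A toy sketch of that mapping, with made-up boundary values rather than the real schema:&lt;/p&gt;

```javascript
// Range-partitioning sketch: map a ProductID to a partition index based on
// boundary values, mirroring what a range partition function does.
// The boundaries below are illustrative, not taken from the real schema.
const boundaries = [100000, 200000, 300000];

function partitionFor(productId) {
  // Return the index of the first boundary the id falls at or below;
  // ids above every boundary land in the last partition.
  for (let i = 0; i < boundaries.length; i++) {
    if (productId <= boundaries[i]) return i;
  }
  return boundaries.length;
}

console.log(partitionFor(42));      // 0
console.log(partitionFor(250000));  // 2
console.log(partitionFor(999999));  // 3
```

&lt;p&gt;The key property is that the query planner can skip every partition whose range cannot match the filter, which is where the speed-up comes from.&lt;/p&gt;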

&lt;p&gt;Happy coding! :)&lt;/p&gt;

</description>
      <category>netcore</category>
      <category>sqlserver</category>
      <category>performance</category>
      <category>k6</category>
    </item>
  </channel>
</rss>
