<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oussama Belhadi</title>
    <description>The latest articles on DEV Community by Oussama Belhadi (@zorous).</description>
    <link>https://dev.to/zorous</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2808747%2Fe6041948-0681-4694-b46e-6a8865882e5f.jpeg</url>
      <title>DEV Community: Oussama Belhadi</title>
      <link>https://dev.to/zorous</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zorous"/>
    <language>en</language>
    <item>
      <title>Kafka | everything you need to get started</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Mon, 13 Oct 2025 13:48:06 +0000</pubDate>
      <link>https://dev.to/zorous/kafka-everything-you-need-to-get-started-19c4</link>
      <guid>https://dev.to/zorous/kafka-everything-you-need-to-get-started-19c4</guid>
      <description>&lt;h1&gt;
  
  
  Decoding Kafka: The Architectural Shift to Scalable, Decoupled Systems
&lt;/h1&gt;

&lt;p&gt;Apache Kafka is one of the most transformative technologies in modern system design, moving applications from sluggish, tightly coupled messes to real-time, scalable data streams. But what is it, and why is everyone so excited about it?&lt;/p&gt;

&lt;p&gt;Let's dive into the problem Kafka solves and its fundamental concepts, using the real-life example of an e-commerce platform.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. The Tightly Coupled Trap
&lt;/h2&gt;

&lt;p&gt;When a startup builds an e-commerce application, they often start with the simplest microservices architecture where services call each other directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngti7dfugy7ryyrbpuvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngti7dfugy7ryyrbpuvo.png" alt="tightly-coupled-example" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Imagine an &lt;strong&gt;Order Service&lt;/strong&gt; that needs to tell the &lt;strong&gt;Payment&lt;/strong&gt;, &lt;strong&gt;Inventory&lt;/strong&gt;, &lt;strong&gt;Analytics&lt;/strong&gt;, and &lt;strong&gt;Notification&lt;/strong&gt; services about a new order. The communication is synchronous:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Hey &lt;strong&gt;Inventory&lt;/strong&gt;, update your stock. Wait for the confirmation. Hey &lt;strong&gt;Payment&lt;/strong&gt;, process this. Wait again..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In code, the synchronous version looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdm6w1m1ekptanp41qwtd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdm6w1m1ekptanp41qwtd.png" alt="programming-example" width="541" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While straightforward at first, this approach leads to a nightmare when traffic spikes, such as during a busy holiday season:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tight Coupling:&lt;/strong&gt; If the &lt;strong&gt;Payment Service&lt;/strong&gt; or the &lt;strong&gt;Inventory Service&lt;/strong&gt; goes down, the entire order process freezes, blocking the &lt;strong&gt;Order Service&lt;/strong&gt; from completing its task.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synchronous Communication:&lt;/strong&gt; Each order becomes a game of dominoes. One slow service delays the entire chain reaction, causing customers to stare at loading screens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single Points of Failure:&lt;/strong&gt; An outage for one essential service can mean hours of order backlogs and lost sales.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The solution is to &lt;strong&gt;decouple&lt;/strong&gt; these services by introducing a highly reliable middleman.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Introducing Kafka: The Post Office Analogy
&lt;/h2&gt;

&lt;p&gt;Think of Kafka as a central &lt;strong&gt;Post Office&lt;/strong&gt; or mail delivery service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoakt60ba72szo6zjdiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoakt60ba72szo6zjdiu.png" alt="kafka-central-post_office-example" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you ship a package, you don't personally drive to every person who needs to know about that shipment. You hand it over to the post office and go home. You trust the post office (the middleman) to handle the delivery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kafka is that middleman.&lt;/strong&gt; Instead of services calling each other directly, they hand their information to Kafka and immediately get back to work.&lt;/p&gt;

&lt;p&gt;This simple change moves the architecture from a synchronous "hot potato" game to an &lt;strong&gt;asynchronous, scalable conveyor belt&lt;/strong&gt; of items.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Kafka's Core Concepts
&lt;/h2&gt;

&lt;p&gt;To understand how Kafka works, we need to know its main components:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Concept&lt;/th&gt;
&lt;th&gt;E-commerce Example&lt;/th&gt;
&lt;th&gt;Post Office Analogy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Event&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;An order placed, a payment failed, stock updated.&lt;/td&gt;
&lt;td&gt;The letter or package being sent.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Producer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The &lt;strong&gt;Order Service&lt;/strong&gt; writes a new "Order Placed" event to Kafka.&lt;/td&gt;
&lt;td&gt;The sender dropping off a package at the counter.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Broker&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A Kafka server that stores and manages the events.&lt;/td&gt;
&lt;td&gt;A post office branch or the entire mail delivery infrastructure.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Topic&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A category or feed for a stream of related events, e.g., &lt;code&gt;orders&lt;/code&gt;, &lt;code&gt;payments&lt;/code&gt;, &lt;code&gt;inventory&lt;/code&gt;.&lt;/td&gt;
&lt;td&gt;A dedicated section for a type of mail (letters, large packages, international mail).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Consumer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Services subscribed to a Topic, e.g., &lt;strong&gt;Notification Service&lt;/strong&gt;, &lt;strong&gt;Inventory Service&lt;/strong&gt;.&lt;/td&gt;
&lt;td&gt;The recipient who is notified and picks up their mail.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  The Flow of an Event
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz0fvtj4ap0cl84vb0js.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz0fvtj4ap0cl84vb0js.png" alt="chain-of-event" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The &lt;strong&gt;Order Service (Producer)&lt;/strong&gt; creates an &lt;strong&gt;Event&lt;/strong&gt; (an order payload with all details) and writes it to the &lt;code&gt;orders&lt;/code&gt; &lt;strong&gt;Topic&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; The Order Service immediately moves on to the next task—it doesn't wait.&lt;/li&gt;
&lt;li&gt; The &lt;strong&gt;Notification Service (Consumer)&lt;/strong&gt;, &lt;strong&gt;Inventory Service (Consumer)&lt;/strong&gt;, and &lt;strong&gt;Analytics Dashboard (Consumer)&lt;/strong&gt; are all subscribed to the &lt;code&gt;orders&lt;/code&gt; Topic.&lt;/li&gt;
&lt;li&gt; Kafka notifies all subscribed consumers about the new event.&lt;/li&gt;
&lt;li&gt; Each consumer performs its own action independently and in parallel:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Notification:&lt;/strong&gt; Sends a confirmation email to the customer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inventory:&lt;/strong&gt; Updates the stock level in the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Payment:&lt;/strong&gt; Generates the invoice.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
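&lt;p&gt;The flow above can be sketched with a toy in-memory stand-in for Kafka (this is not a real client API; &lt;code&gt;MiniBroker&lt;/code&gt; and its methods are invented for illustration):&lt;/p&gt;

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-memory stand-in for a Kafka broker (illustration only)."""
    def __init__(self):
        self.topics = defaultdict(list)       # topic -> list of events (the log)
        self.subscribers = defaultdict(list)  # topic -> consumer callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        self.topics[topic].append(event)      # persist the event
        for callback in self.subscribers[topic]:
            callback(event)                   # each consumer reacts independently

broker = MiniBroker()
actions = []
# Notification and Inventory services both subscribe to the "orders" topic.
broker.subscribe("orders", lambda e: actions.append(f"email for order {e['id']}"))
broker.subscribe("orders", lambda e: actions.append(f"stock -1 for order {e['id']}"))

# The Order Service produces the event and moves on immediately.
broker.publish("orders", {"id": 42, "item": "book"})
print(actions)
```

The Order Service never calls the other services directly; it only knows about the topic.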

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0rk1w5lguub3tdu1myf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0rk1w5lguub3tdu1myf.png" alt="chain-of-events" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because they are decoupled, the failure of the &lt;strong&gt;Inventory Service&lt;/strong&gt; will not stop the &lt;strong&gt;Notification Service&lt;/strong&gt; from sending the email or the Order Service from accepting new orders.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Achieving Scalability: Partitions and Consumer Groups
&lt;/h2&gt;

&lt;p&gt;Kafka's true power lies in its ability to handle millions of events per second. It achieves this primarily through two concepts: &lt;strong&gt;Partitions&lt;/strong&gt; and &lt;strong&gt;Consumer Groups&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Partitions: Scaling Writes and Reads
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Topic&lt;/strong&gt; is divided into ordered, immutable sequences called &lt;strong&gt;Partitions&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Analogy:&lt;/strong&gt; If the "Letters" section of the Post Office gets overloaded, you add more workers. But instead of assigning mail randomly, you distribute work based on a criterion: "Ann processes letters for Europe, Steve handles the US."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;In Kafka:&lt;/strong&gt; The &lt;code&gt;orders&lt;/code&gt; Topic might have separate partitions for EU, US, and Asia orders (for example, by keying each event on its region). This allows Producers to write data to different partitions &lt;em&gt;in parallel&lt;/em&gt;, significantly increasing throughput. Partitions also allow the storage load to be distributed across multiple &lt;strong&gt;Brokers&lt;/strong&gt; (Kafka servers).&lt;/li&gt;
&lt;/ul&gt;
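&lt;p&gt;A producer-side partitioner can be sketched like this (the byte-sum hash below is an illustrative stand-in; Kafka's real default partitioner hashes the event key with murmur2):&lt;/p&gt;

```python
NUM_PARTITIONS = 3

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Deterministic toy hash: same key always lands on the same partition,
    # which is what preserves per-key ordering in Kafka.
    return sum(key.encode()) % num_partitions

# All events for one region go to one partition, so their order is preserved.
assert partition_for("EU") == partition_for("EU")
print(partition_for("EU"), partition_for("US"), partition_for("Asia"))
```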

&lt;h3&gt;
  
  
  Consumer Groups: Scaling Processing
&lt;/h3&gt;

&lt;p&gt;When orders come in too fast, a single consumer service (like &lt;strong&gt;Inventory&lt;/strong&gt;) can get overwhelmed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Analogy:&lt;/strong&gt; The letters are arriving, but the single recipient is getting buried under the pile. You need helpers to sort through the mail.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;In Kafka:&lt;/strong&gt; You can start additional instances (replicas) of the same service (e.g., three instances of the &lt;strong&gt;Inventory Service&lt;/strong&gt;). These replicas form a &lt;strong&gt;Consumer Group&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Kafka automatically distributes the partitions among the consumers in the group. If the &lt;code&gt;orders&lt;/code&gt; Topic has three partitions, each of the three Inventory Service instances will be assigned one partition, allowing them to process the data &lt;strong&gt;in parallel&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
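&lt;p&gt;The assignment of partitions to a consumer group can be sketched as a simple round-robin (real Kafka assignors such as range and round-robin handle rebalancing and are more involved; this only shows the idea):&lt;/p&gt;

```python
def assign_partitions(partitions, consumers):
    # Spread partitions over the group's consumers round-robin style.
    assignment = {c: [] for c in consumers}
    for i, partition in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(partition)
    return assignment

# Three partitions, three Inventory Service replicas: one partition each.
print(assign_partitions([0, 1, 2], ["inventory-1", "inventory-2", "inventory-3"]))
```

With fewer consumers than partitions, some consumers simply take more than one partition; with more consumers than partitions, the extras sit idle.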




&lt;h2&gt;
  
  
  5. Beyond Messaging: The Power of Streams
&lt;/h2&gt;

&lt;p&gt;Unlike traditional message queues, Kafka &lt;strong&gt;persists&lt;/strong&gt; every event/message as long as you need (based on a configurable retention policy). This enables powerful use cases like &lt;strong&gt;Stream Processing&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Traditional Message Queue:&lt;/strong&gt; Once a message is read, it's deleted (like watching live TV—if you miss it, it's gone).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kafka (Stream Platform):&lt;/strong&gt; Events are saved long-term, and Consumers can read them anytime, even multiple times (like watching a streaming service—you can pause, replay, or start from the beginning).&lt;/li&gt;
&lt;/ul&gt;
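&lt;p&gt;This replay capability comes from Kafka's log-with-offsets model, sketched here as a toy (the function names are illustrative, not a real client API):&lt;/p&gt;

```python
log = []  # the retained, append-only topic log

def produce(event):
    log.append(event)
    return len(log) - 1  # offset at which the event was stored

def consume_from(offset):
    # Events are not deleted on read; any consumer can replay from any offset.
    return log[offset:]

produce({"type": "order_placed"})
produce({"type": "payment_ok"})
produce({"type": "order_shipped"})

# A consumer that joins late can still read from the beginning.
print(consume_from(0))
print(consume_from(2))
```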

&lt;p&gt;This capability is essential for &lt;strong&gt;real-time analytics&lt;/strong&gt; and complex event processing, like this driver live-location example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rd9tc2itshst1pdnmd4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rd9tc2itshst1pdnmd4.png" alt="live-strean-example" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Analytics:&lt;/strong&gt; The &lt;strong&gt;Analytics Service&lt;/strong&gt; can continuously stream all &lt;strong&gt;Order&lt;/strong&gt; and &lt;strong&gt;Payment&lt;/strong&gt; events to update sales dashboards and revenue numbers in real time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stateful Processing:&lt;/strong&gt; An application can constantly process incoming &lt;strong&gt;Inventory&lt;/strong&gt; events, check if the stock dropped below a certain threshold, and immediately trigger a low-inventory alert, initiating an automatic restock order.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By decoupling services, enabling massive scale through partitioning, and providing stream-processing capabilities, Apache Kafka becomes the circulatory system for a modern, data-driven application.&lt;/p&gt;

&lt;p&gt;I found &lt;a href="https://www.youtube.com/watch?v=QkdkLdMBuL0" rel="noopener noreferrer"&gt;this video&lt;/a&gt; very helpful; most of the screenshots and points discussed here come from it.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>architecture</category>
      <category>microservices</category>
      <category>programming</category>
    </item>
    <item>
      <title>Software Engineering/Architecture in a Nutshell: A Four-Year Journey</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Tue, 02 Sep 2025 13:10:51 +0000</pubDate>
      <link>https://dev.to/zorous/literally-everything-you-need-to-know-about-system-design-3a8</link>
      <guid>https://dev.to/zorous/literally-everything-you-need-to-know-about-system-design-3a8</guid>
      <description>&lt;p&gt;Whether you're a software engineer, system architect, or a student, system design is a crucial skill for building robust, scalable, and reliable applications. It's the art of creating a blueprint for a software system that meets specific requirements. This guide will walk you through the fundamental principles, common architectures, and essential concepts you'll need to master to design and build systems that can stand the test of time.&lt;/p&gt;

&lt;p&gt;Here are the concepts I will cover in this post:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Diagram Types in System Design
&lt;/li&gt;
&lt;li&gt; Production App Architecture
&lt;/li&gt;
&lt;li&gt; The Pillars of System Design
&lt;/li&gt;
&lt;li&gt; The Heart of System Design
&lt;/li&gt;
&lt;li&gt; CAP Theorem (Brewer’s Theorem)
&lt;/li&gt;
&lt;li&gt; Building Resilience into a System
&lt;/li&gt;
&lt;li&gt; Measuring Speed
&lt;/li&gt;
&lt;li&gt; Network Basics
&lt;/li&gt;
&lt;li&gt; API Design &amp;amp; Best Practices
&lt;/li&gt;
&lt;li&gt;Scaling &amp;amp; Performance Strategies&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This document is your one-stop resource for understanding everything from the core pillars of a well-designed system to practical strategies like caching, load balancing, and network protocols. Let's dive in.&lt;/p&gt;

&lt;p&gt;I will start with the part I find most interesting: diagrams.&lt;br&gt;
Diagrams give you a zoomed-out picture of a system and help you visualize the foggy parts when everything is tightly interconnected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diagram Types in System Design
&lt;/h2&gt;

&lt;p&gt;&lt;a id="diagram-types"&gt;&lt;/a&gt;&lt;br&gt;
When building any system, from a simple app to a large-scale enterprise platform, it's essential to have a clear blueprint. This blueprint is created using various types of diagrams that help visualize the architecture at different levels of detail. Diagrams range from a broad overview to a granular, component-specific view.&lt;/p&gt;

&lt;h3&gt;
  
  
  High-Level Diagrams
&lt;/h3&gt;

&lt;p&gt;These diagrams provide a &lt;strong&gt;zoomed-out view&lt;/strong&gt; of the system, focusing on its main components and how they interact with each other and the outside world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l8l8eh00emgr4pjms0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l8l8eh00emgr4pjms0c.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Context Diagram&lt;/strong&gt;: This is a very high-level diagram that shows the system as a single black box. Its purpose is to show how the system interacts with external entities (users, other systems, etc.). It helps define the scope of the system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container Diagram&lt;/strong&gt;: A step down in detail from a context diagram, this view shows the major containers or applications within the system (e.g., a web application, a database, a mobile app). It focuses on how these applications work together to deliver the system's functionality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Low-Level Diagrams
&lt;/h3&gt;

&lt;p&gt;These diagrams provide a &lt;strong&gt;detailed, zoomed-in view&lt;/strong&gt; of the system's internal workings, showing how specific parts are structured and how they behave.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqgs3vqptjbt2q5sudh1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqgs3vqptjbt2q5sudh1.png" alt=" " width="698" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Component Diagram&lt;/strong&gt;: This diagram shows the smaller, modular parts within a container and their dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Class Diagram&lt;/strong&gt;: This type of diagram illustrates the static structure of a system by showing the different classes, their attributes, methods, and relationships. It's used to define the system's blueprint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr0r73xbpqhlbvzrd5fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr0r73xbpqhlbvzrd5fw.png" alt=" " width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sequence Diagram&lt;/strong&gt;: A sequence diagram shows the dynamic behavior of a system by illustrating the order of events and the interactions between objects or components over time. It's great for visualizing a specific use case or flow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62m9bzigb8v4gqo4p14h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62m9bzigb8v4gqo4p14h.png" alt=" " width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ERD (Entity-Relationship Diagram)&lt;/strong&gt;: This diagram is a blueprint of the database. It shows the data entities (tables), their attributes (columns), and the relationships between them. It is crucial for understanding how data is stored and connected.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The C4 Model: A Framework for Diagrams
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;C4 model&lt;/strong&gt; is a popular framework for visualizing software architecture that organizes these different diagrams into a logical hierarchy. It stands for &lt;strong&gt;Context, Containers, Components, and Code&lt;/strong&gt;. Each "C" represents a specific level of abstraction, or "zoom level," designed for a different audience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Context (Level 1)&lt;/strong&gt;: The highest level, matching the Context Diagram.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containers (Level 2)&lt;/strong&gt;: Zooms in to show the major applications and services, matching the Container Diagram.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Components (Level 3)&lt;/strong&gt;: Breaks down a single container into its internal parts, matching the Component Diagram above.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code (Level 4)&lt;/strong&gt;: The lowest level, showing the actual code structure (e.g., classes and their relationships), which aligns with the Class and Sequence Diagrams above.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Production App Architecture example
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1ov4f65q6xcfso9pnjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1ov4f65q6xcfso9pnjk.png" alt=" " width="800" height="448"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a id="production-app-architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will skip the parts most people already know, namely CI/CD with &lt;strong&gt;GitHub Actions&lt;/strong&gt; or &lt;strong&gt;Jenkins&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;and move on to implementing a load balancer like Nginx to handle user requests, ensuring reliable storage, and using external logging and monitoring services like Sentry or PM2. These feed an alerting service that surfaces alerts to connected users and sends immediate messages to the developers over a communication channel like Slack, so action can be taken right away to fix the issue.&lt;br&gt;
The first thing developers look at is the logs, so they go searching for the root of the problem:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gg5dljgzbhcaeahq33a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gg5dljgzbhcaeahq33a.png" alt=" " width="658" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The golden rule is to never debug in a production environment; instead, create a staging/testing area so that users won't be affected by the debugging process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6kidbojvp1l11e5l7is.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6kidbojvp1l11e5l7is.png" alt=" " width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  The Pillars of System Design
&lt;/h1&gt;

&lt;p&gt;&lt;a id="the-pillars-of-system-design"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability:&lt;/strong&gt; handling system growth.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability:&lt;/strong&gt; ensuring future developers can understand and build on top of the system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency:&lt;/strong&gt; making the best use of existing resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reliability:&lt;/strong&gt; planning for failure and ensuring it's handled gracefully rather than designing only for best-case scenarios (maintaining composure when things go wrong).&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  The Heart of System Design
&lt;/h1&gt;

&lt;p&gt;&lt;a id="the-heart-of-system-design"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Moving Data:&lt;/strong&gt; ensuring that data can flow seamlessly from one part of the system to another, whether user requests reaching the servers or transfers between databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storing Data:&lt;/strong&gt; choosing between relational and non-relational databases and having a deep understanding of the following concepts:

&lt;ul&gt;
&lt;li&gt;access patterns&lt;/li&gt;
&lt;li&gt;indexing strategies&lt;/li&gt;
&lt;li&gt;backup solutions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Ensuring that the data is not just stored securely but also readily available when needed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transforming Data:&lt;/strong&gt; taking raw data and turning it into meaningful information, for example aggregating log files for analysis or converting user input into a structured format like JSON.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  CAP Theorem or Brewer’s Theorem
&lt;/h1&gt;

&lt;p&gt;&lt;a id="cap-theorem"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CAP theorem is a set of principles that guides us in making informed trade-offs between three key properties of a distributed system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consistency:&lt;/strong&gt; ensures that all nodes of the system see the same data at the same time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zxxfwvarfyqwxah7vva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zxxfwvarfyqwxah7vva.png" alt=" " width="519" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Availability:&lt;/strong&gt; the system should be operational and available for requests 24/7, regardless of what happens behind the scenes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqehevppinzyrdaqbdqgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqehevppinzyrdaqbdqgb.png" alt=" " width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The SLO (Service Level Objective) is the internal target for the service, what's expected of it; the SLA (Service Level Agreement) is the commitment made to users about uptime, what we promise to provide. For example, if we commit to 99.9% availability and drop below that, we might have to provide refunds or other compensation.&lt;/p&gt;
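&lt;p&gt;It helps to see what an availability number actually permits. A quick worked calculation of the downtime budget implied by a commitment:&lt;/p&gt;

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def max_downtime_minutes_per_year(availability: float) -> float:
    # The unavailability fraction times the total minutes in a year.
    return (1 - availability) * MINUTES_PER_YEAR

# "Three nines" (99.9%) allows roughly 8.8 hours of downtime per year;
# "four nines" (99.99%) allows under an hour.
print(round(max_downtime_minutes_per_year(0.999)))   # ~526 minutes
print(round(max_downtime_minutes_per_year(0.9999)))  # ~53 minutes
```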

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Partition Tolerance:&lt;/strong&gt; the system's ability to continue functioning even when a network partition occurs, meaning that even if communication between nodes is disrupted, the system still works.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrgmau8gj30aqs2v6pb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrgmau8gj30aqs2v6pb6.png" alt=" " width="592" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;According to the CAP theorem, a distributed system can only guarantee two of these three properties at the same time. If we prioritize Consistency and Partition Tolerance, we may have to compromise on Availability, and vice versa.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oh20fj2himpfoh2iqnw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oh20fj2himpfoh2iqnw.png" alt=" " width="718" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Building Resilience into a System
&lt;/h1&gt;

&lt;p&gt;&lt;a id="building-resilience-into-a-system"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building resilience into a system means expecting the unexpected: implementing redundancy so that there is always a backup ready to take over in case of failure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0b6a9lxj1vax4ja1m47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0b6a9lxj1vax4ja1m47.png" alt=" " width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Measuring Speed: Throughput &amp;amp; Latency
&lt;/h1&gt;

&lt;p&gt;&lt;a id="measuring-speed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w1vdy0ktru3eu1alc5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w1vdy0ktru3eu1alc5f.png" alt=" " width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Speed is measured along two dimensions: &lt;strong&gt;Throughput&lt;/strong&gt; and &lt;strong&gt;Latency&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Throughput: how much data a system can handle in a given time frame&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Server throughput - measured in RPS (Requests Per Second): how many requests our system can handle per second; the higher, the better.&lt;/li&gt;
&lt;li&gt;DB throughput - measured in QPS (Queries Per Second): how many database queries the system can handle per second.&lt;/li&gt;
&lt;li&gt;Data throughput - measured in bits per second: the amount of data transferred over the network or processed by the system in a given period of time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Latency: the amount of time taken to process a single request
&lt;/h3&gt;

&lt;p&gt;Optimizing for one can often lead to sacrifices in the other. &lt;/p&gt;
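&lt;p&gt;As a rough sketch (using a trivial in-process function as a stand-in for a real request handler), both metrics can be measured like this:&lt;/p&gt;

```python
import time

def measure(handler, n_requests=10_000):
    """Time n_requests calls and report throughput (RPS) and mean latency."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handler()
    elapsed = time.perf_counter() - start
    throughput = n_requests / elapsed   # requests per second (higher is better)
    latency = elapsed / n_requests      # seconds per request (lower is better)
    return throughput, latency

# A trivial stand-in for a request handler.
rps, lat = measure(lambda: sum(range(100)))
```

Note that the two are inverses here only because requests run one at a time; a real server handles requests concurrently, which is exactly how high throughput can coexist with high per-request latency.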

&lt;p&gt;&lt;em&gt;Designing a system poorly can lead to issues ranging from performance bottlenecks to security vulnerabilities. If you think refactoring code is hard, redesigning a system is an order of magnitude harder (a monumental task), so getting the design right early lays the foundation to support future features and user growth.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Network Basics
&lt;/h1&gt;

&lt;p&gt;We are basically discussing how computers communicate with each other.&lt;/p&gt;

&lt;h3&gt;
  
  
  IP Layer :
&lt;/h3&gt;

&lt;p&gt;&lt;a id="network-basics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each device on the internet is represented by an IP address. Today most addresses are IPv4, which is 32 bits long and offers about 4 billion unique addresses, but with the growth in the number of connected devices, migration to IPv6 is becoming necessary: it uses 128-bit addresses, expanding the space to roughly 3.4 x 10^38 unique addresses.&lt;/p&gt;
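&lt;p&gt;The difference in address space is easy to see with Python's standard ipaddress module (both example addresses come from reserved documentation ranges, not real hosts):&lt;/p&gt;

```python
import ipaddress

ipv4_space = 2 ** 32    # 32-bit: about 4.3 billion addresses
ipv6_space = 2 ** 128   # 128-bit: about 3.4 x 10**38 addresses

v4 = ipaddress.ip_address("192.0.2.1")    # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # documentation-range IPv6 address
```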

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tlyeb79ydu94r8mt2dz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tlyeb79ydu94r8mt2dz.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When devices communicate, they send packets of data. Each packet contains an IP header with metadata such as the sender's and receiver's IP addresses, ensuring the data reaches the right destination. This process is governed by the &lt;strong&gt;Internet Protocol (IP)&lt;/strong&gt;, a set of rules that defines how data is addressed, sent, and received.&lt;/p&gt;

&lt;h3&gt;
  
  
  Application Layer :
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F590xemudw98z2e95he4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F590xemudw98z2e95he4c.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where we store the data related to the application protocol. The payload in this layer is formatted according to a specific application protocol, like HTTP for web browsing, so that it is interpreted correctly by the receiving device.&lt;/p&gt;

&lt;h3&gt;
  
  
  Transport Layer :
&lt;/h3&gt;

&lt;p&gt;This is where TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) come into play.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpn6fuqhl0alctd4bz6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjpn6fuqhl0alctd4bz6t.png" alt=" " width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TCP&lt;/strong&gt; ensures reliable communication, like a delivery person who makes sure the package reaches the right destination AND that nothing is missing. Each data packet also includes a TCP header with important information such as the source and destination ports and the control flags needed to manage the connection and data flow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpv0egg33z2et30or9ndc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpv0egg33z2et30or9ndc.png" alt=" " width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TCP is known for reliability and its famous 3-way handshake (SYN, SYN-ACK, ACK), which establishes a connection between two devices before any data is exchanged.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjnysu0byx8tbu8z9232.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjnysu0byx8tbu8z9232.png" alt=" " width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;UDP&lt;/strong&gt; is faster but not as reliable as TCP: it doesn't establish a connection before sending data, nor does it guarantee delivery or ordering of packets. That makes it preferable for time-sensitive communication like video calls or live streaming, where speed is crucial and some data loss is acceptable&lt;/li&gt;
&lt;/ul&gt;
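&lt;p&gt;A minimal sketch of UDP's fire-and-forget style: one datagram sent to ourselves over loopback, with no handshake and no delivery guarantee (loopback just happens to be reliable in practice):&lt;/p&gt;

```python
import socket

# Minimal UDP round trip over loopback.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))   # fire-and-forget: no connection setup

data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
```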

&lt;p&gt;To tie all these concepts together we have DNS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbws953ny5xijx4a30pt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbws953ny5xijx4a30pt.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DNS (Domain Name System)&lt;/strong&gt; is basically the internet's phone book, translating human-friendly domain names into IP addresses. When you enter a domain in the browser, the browser sends a DNS query to look up the corresponding IP address, allowing it to establish the connection and retrieve the web page.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The functioning of DNS is overseen by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ICANN (Internet Corporation for Assigned Names and Numbers),&lt;/strong&gt; which coordinates the global IP address space and the domain name system. Domain name registrars like Namecheap or GoDaddy are accredited by ICANN to sell domain names to the public.&lt;/p&gt;

&lt;p&gt;DNS uses different types of records, such as:&lt;br&gt;
&lt;strong&gt;A Records:&lt;/strong&gt; map a domain to its corresponding IPv4 address, ensuring that requests reach the correct destination&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AAAA Records:&lt;/strong&gt; do the same thing but map the domain to an IPv6 address.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
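&lt;p&gt;A small sketch of such a lookup using Python's standard socket module; "localhost" is used so the example works without network access, but the same call resolves public domains through their A (IPv4) and AAAA (IPv6) records:&lt;/p&gt;

```python
import socket

# Resolve a hostname to its addresses, the same lookup a browser performs.
infos = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
```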

&lt;h3&gt;
  
  
  Networking Basic Infrastructure :
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Public IP Addresses :&lt;/strong&gt; Unique across the internet&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private IP Addresses :&lt;/strong&gt; Unique only within the network&lt;/p&gt;

&lt;p&gt;An IP address can be static (permanently assigned to a device) or dynamic (changing over time, as is common for residential connections, cafes, and public spaces).&lt;/p&gt;
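&lt;p&gt;Python's ipaddress module can classify addresses along these lines (the example addresses below are arbitrary illustrations; 8.8.8.8 is Google's public DNS resolver):&lt;/p&gt;

```python
import ipaddress

# Private (RFC 1918) ranges vs. a public address.
home = ipaddress.ip_address("192.168.0.5")    # private: unique only within the LAN
office = ipaddress.ip_address("10.1.2.3")     # private as well
public_dns = ipaddress.ip_address("8.8.8.8")  # public: unique across the internet
```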

&lt;p&gt;Devices within a Local Area Network can communicate with each other directly, and usually a firewall is used to monitor and filter the incoming and outgoing packets of that local network.&lt;/p&gt;

&lt;p&gt;Services on a device are identified by ports, which, combined with an IP address, create a unique identifier for a network service.&lt;/p&gt;

&lt;p&gt;Some ports are reserved for specific services, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP - 80&lt;/li&gt;
&lt;li&gt;HTTPS - 443&lt;/li&gt;
&lt;li&gt;MySQL - 3306&lt;/li&gt;
&lt;li&gt;SSH - 22&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r9lgts2nymtm4b9m6tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r9lgts2nymtm4b9m6tm.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP&lt;/strong&gt; is a stateless request-response protocol: the server doesn't have to store any context between requests.&lt;br&gt;
Each GET/POST request carries all the necessary information, such as the request URL, the request method, the status, and headers.&lt;/p&gt;

&lt;p&gt;each status code represents a specific state of the request&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ltfup69l82dikbrs6sw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ltfup69l82dikbrs6sw.png" alt=" " width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each request has a method: GET, POST, PUT, DELETE (mapping to the CRUD operations).&lt;/p&gt;
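&lt;p&gt;As a quick sketch, the standard status codes are available in Python's standard library for experimentation:&lt;/p&gt;

```python
from http import HTTPStatus

# Status codes group into categories by their leading digit.
ok = HTTPStatus.OK                        # 200: success
missing = HTTPStatus.NOT_FOUND            # 404: client error
broken = HTTPStatus.INTERNAL_SERVER_ERROR # 500: server error
```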

&lt;h3&gt;
  
  
  &lt;strong&gt;Web Protocols&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP&lt;/strong&gt; is a one-way, request-driven connection; if we want a real-time, two-way connection (live updates), we use WebSockets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WebSockets&lt;/strong&gt; provide a two-way communication channel over a single long-lived connection, allowing servers to push real-time updates to clients (as in chat applications) without the overhead of repeated HTTP request-response cycles&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Email Related Protocols :&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SMTP; the standard protocol for sending emails&lt;/li&gt;
&lt;li&gt;IMAP (Internet Message Access Protocol); used to retrieve and manage emails directly on the server&lt;/li&gt;
&lt;li&gt;POP3; used for downloading emails from a server to a local client.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;File Transfer Protocols :&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;FTP : for transferring files over the internet&lt;/li&gt;
&lt;li&gt;SSH : for secure command-line login and file transfer; used to operate network services securely over an unsecured network, e.g. logging into a remote machine to execute commands or transfer files&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Real-time Communication Protocols :&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;WebRTC :&lt;/strong&gt; enables browser to browser applications for voice calling, video chat and file sharing without internal or external plugins, essential for live streaming or video conferencing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MQTT :&lt;/strong&gt;  Lightweight messaging protocol (message queuing Telemetry Transport); ideal for devices with limited processing power (IoT devices)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RPC (Remote Procedure Call);&lt;/strong&gt; a protocol that allows a program on one computer to execute code on a server or another computer. It's a method of invoking a function as if it were a local call, while in reality the code executes on a remote machine.&lt;/p&gt;

&lt;p&gt;In web services, an HTTP request can trigger an RPC call that executes code on behalf of a client to perform a specific function.&lt;/p&gt;

&lt;h1&gt;
  
  
  API Design
&lt;/h1&gt;

&lt;p&gt;&lt;a id="api-design-best-practices"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In API design we are concerned with defining :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the &lt;strong&gt;inputs&lt;/strong&gt; (what the user enters), like the product details a seller provides for a new product.&lt;/li&gt;
&lt;li&gt;the &lt;strong&gt;outputs&lt;/strong&gt;: the information returned when someone queries the API for a product.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgxo59sez7ld7bh0wmge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgxo59sez7ld7bh0wmge.png" alt=" " width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The focus is mainly on how the CRUD operations are exposed to the user interface&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2wiet3i8n6wsmhhew3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2wiet3i8n6wsmhhew3w.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are different types of APIs, like GraphQL, REST, and gRPC, each with its own set of principles, protocols, and standards. The most widely used is &lt;strong&gt;REST&lt;/strong&gt;, which stands for &lt;strong&gt;REpresentational State Transfer.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;gRPC&lt;/strong&gt; stands for &lt;strong&gt;Google Remote Procedure Call;&lt;/strong&gt; the image shows the pros and cons of each type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqal4win5bj89zdkz4yni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqal4win5bj89zdkz4yni.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The way an API is structured can vary widely from use case to use case, but this is the ideal format to follow. Here's an example of fetching the orders of a specific user in an e-commerce application :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi335ds80y5w7fmfexhjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi335ds80y5w7fmfexhjf.png" alt=" " width="744" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following best practices, common queries can also include limits and pagination, like so :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi52qdm0lpe6inz52edyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi52qdm0lpe6inz52edyp.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;
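&lt;p&gt;A minimal sketch of limit/offset pagination (the order data here is made up for illustration):&lt;/p&gt;

```python
def paginate(items, page=1, limit=10):
    """Return one page of results plus simple pagination metadata."""
    start = (page - 1) * limit
    return {
        "data": items[start:start + limit],
        "page": page,
        "limit": limit,
        "total": len(items),
    }

orders = [f"order-{i}" for i in range(1, 26)]   # hypothetical list of 25 orders
second_page = paginate(orders, page=2, limit=10)
```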

&lt;p&gt;A GET request must be &lt;strong&gt;idempotent:&lt;/strong&gt; querying it repeatedly yields the same result and never changes server state. A GET request must never cause side effects; we have POST requests for that.&lt;/p&gt;
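&lt;p&gt;The distinction can be sketched with two hypothetical handlers: the GET-like read can be repeated freely, while the POST-like write mutates state on every call:&lt;/p&gt;

```python
state = {"count": 0}

def get_count():
    """GET-like handler: safe to repeat, never mutates server state."""
    return state["count"]

def post_increment():
    """POST-like handler: each call changes server state."""
    state["count"] += 1
    return state["count"]

a = get_count()
b = get_count()        # repeated GETs: same result, no side effects
post_increment()
post_increment()       # repeated POSTs: state keeps changing
```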

&lt;p&gt;When modifying endpoints it's important to maintain &lt;strong&gt;Backward Compatibility and Versioning:&lt;/strong&gt; we need to ensure that changes don't break existing clients. A common practice is to introduce new versions of an endpoint, so the version 1 API can still serve old clients while the version 2 API serves new ones&lt;/p&gt;
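&lt;p&gt;A toy sketch of versioned routing (the endpoint paths and response fields are hypothetical):&lt;/p&gt;

```python
# v1 keeps serving old clients unchanged; v2 adds a new field for new clients.
def get_product_v1(product_id):
    return {"id": product_id, "name": "Widget"}

def get_product_v2(product_id):
    return {"id": product_id, "name": "Widget", "currency": "USD"}

routes = {
    "/v1/products": get_product_v1,
    "/v2/products": get_product_v2,
}

old_client_response = routes["/v1/products"](1)
new_client_response = routes["/v2/products"](1)
```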

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthw54rfe3oimmygx39q1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthw54rfe3oimmygx39q1.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another best practice is to integrate a rate limiter to help mitigate DDoS attacks: it limits the number of requests a user can make within a certain time frame. It's also common to set CORS (Cross-Origin Resource Sharing) rules to only accept requests from specific domains, preventing unwanted cross-site interactions.&lt;/p&gt;
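&lt;p&gt;As a rough illustration, a sliding-window rate limiter can be sketched in a few lines (the limit and window values are arbitrary; real deployments usually use a shared store like Redis instead of in-process memory):&lt;/p&gt;

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most `limit` requests per `window` seconds."""

    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)   # client id -> timestamps of recent requests

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        while q and now - q[0] >= self.window:   # drop requests outside the window
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

rl = RateLimiter(limit=3, window=60.0)
results = [rl.allow("client-a", now=0.0) for _ in range(4)]  # 4th call is rejected
```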

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn08pd0t9qua6v2myp81a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn08pd0t9qua6v2myp81a.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now imagine a company hosting a website on a server in a Google Cloud data center in Finland. The response time for users across Europe might be around 100ms, but for users in the US, Africa, or Mexico the response time can stretch to 4000ms or more. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvtz6nkx2832sb8xjckd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvtz6nkx2832sb8xjckd.png" alt=" " width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fortunately, there are strategies to minimize this request latency for users who are far away.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsot3lxu94jqako4ahshb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsot3lxu94jqako4ahshb.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;these strategies are :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nv7pagakk4v2lt7f4lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3nv7pagakk4v2lt7f4lh.png" alt=" " width="794" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Caching and CDNs
&lt;/h1&gt;

&lt;p&gt;&lt;a id="scaling-and-performance-strategies"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Caching is a technique used to improve the performance and efficiency of a system. It involves storing a copy of data in temporary storage so that future requests for that data can be served faster. There are 4 common places where a cache can live :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Browser Caching&lt;/strong&gt;;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The browser stores website resources on the user's local machine (usually the bundled HTML, CSS, and JS files), typically in a dedicated cache directory managed by the browser. When the user revisits the site, the browser can load from the cache rather than fetching everything from the server again. A Cache-Control header tells the browser how long this content should be cached; we can inspect it in the developer tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fil9kwjcedjfxm36wm5gi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fil9kwjcedjfxm36wm5gi.png" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;cache hit&lt;/strong&gt; means the requested data was searched for in the cache and found.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;cache miss&lt;/strong&gt; means it was searched for and not found.&lt;/p&gt;

&lt;p&gt;The higher the cache hit ratio, the better (the more effective the cache).&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Server Caching;&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Server-side caching involves storing frequently used data on the server side, reducing expensive operations like querying the database. The cache usually lives on the application server or on a separate cache server, either in memory (like Redis) or on disk.&lt;/p&gt;

&lt;p&gt;There are several ways to deal with the caching on the server side :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjuivumioxz8jqj1pzsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjuivumioxz8jqj1pzsq.png" alt=" " width="595" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cache-Aside (Write-Around);&lt;/strong&gt; on a read, the server checks whether the data exists in the cache first; if not, it queries the database and stores the result in the cache. Writes go directly to the database, bypassing the cache.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Write-Through Cache;&lt;/strong&gt; data is written to the cache and the database simultaneously to keep them consistent, but this makes writes slower than write-around caching.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
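&lt;p&gt;The cache-aside read path described above can be sketched with plain dictionaries standing in for the cache and the database (keys and values are made up for illustration):&lt;/p&gt;

```python
db = {"product:1": {"name": "Widget", "price": 9.99}}   # stand-in for the database
cache = {}                                              # stand-in for Redis etc.

def get_product(key):
    """Cache-aside read: serve from cache, fall back to the database on a miss."""
    if key in cache:
        return cache[key], "hit"
    value = db[key]          # the expensive query in a real system
    cache[key] = value       # populate the cache for the next reader
    return value, "miss"

first = get_product("product:1")    # miss: fetched from db, then cached
second = get_product("product:1")   # hit: served straight from the cache
```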

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wy4kk79kforj4ieflwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wy4kk79kforj4ieflwx.png" alt=" " width="610" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Write-Back Cache;&lt;/strong&gt; data is first written to the cache and flushed to permanent storage later. The cache can fill up quickly this way, which is why we have&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Eviction Policies :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1m8otvc7d1qndwhe5ws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1m8otvc7d1qndwhe5ws.png" alt=" " width="766" height="549"&gt;&lt;/a&gt;&lt;/p&gt;
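&lt;p&gt;As one concrete example, the popular LRU (Least Recently Used) policy can be sketched with an ordered dictionary:&lt;/p&gt;

```python
from collections import OrderedDict

class LRUCache:
    """LRU eviction: when full, discard the entry that was used least recently."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used entry

c = LRUCache(2)
c.put("a", 1)
c.put("b", 2)
c.get("a")        # touch "a", so "b" becomes the least recently used
c.put("c", 3)     # over capacity: evicts "b"
```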

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Database Caching;&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqva5an4kuw29kxbc3jko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqva5an4kuw29kxbc3jko.png" alt=" " width="787" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's another crucial concept: it involves storing query results to improve the performance of database-driven applications. It's often done within the database system itself or via an external caching layer like Redis or Memcached. When a query is made, we check the cache first to see whether the result of that query has already been stored; if it has, we return the cached data, and if not, we query the database and store the result in the cache for future use. This is especially beneficial for read-heavy applications, and the same eviction policies used for server-side caching apply here too&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;CDNs;&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0epled87n971d5q1g0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0epled87n971d5q1g0i.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CDNs are basically a network of servers distributed geographically, usually used to serve static content like HTML, CSS, and JavaScript files, images, and other assets.&lt;/p&gt;

&lt;p&gt;They cache the content from the origin server and deliver it to the end user from the nearest CDN server. There are 2 types of CDNs: &lt;strong&gt;Pull-based&lt;/strong&gt; and &lt;strong&gt;Push-based.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="" class="article-body-image-wrapper"&gt;&lt;img alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a pull-based CDN, the nearest edge server first checks whether it already has the assets; if not, it pulls them from the origin server.&lt;/p&gt;
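&lt;p&gt;The pull-based flow works like any lazy cache; here's a sketch with dictionaries standing in for the origin server and an edge node (the path and content are made up):&lt;/p&gt;

```python
origin = {"/logo.png": b"...png bytes..."}   # stand-in for the origin server
edge_cache = {}                              # stand-in for one CDN edge node

def edge_get(path):
    """Pull-based edge: serve the cached asset, otherwise pull it from origin once."""
    if path in edge_cache:
        return edge_cache[path], "edge"
    asset = origin[path]        # a network fetch to the origin in a real CDN
    edge_cache[path] = asset
    return asset, "origin"

first = edge_get("/logo.png")    # pulled from origin, then cached at the edge
second = edge_get("/logo.png")   # served from the edge, close to the user
```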

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fns09awz7oyan88sfhbv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fns09awz7oyan88sfhbv6.png" alt=" " width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a push-based CDN, we upload the assets to the main server and it automatically pushes them out to the CDN servers; this requires more management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71qf3t3wzb0v7fb551vr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71qf3t3wzb0v7fb551vr.png" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy63v4zsj70idiz1bhi6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy63v4zsj70idiz1bhi6y.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Proxy Servers
&lt;/h1&gt;

&lt;p&gt;Proxy servers act as an intermediary between the client asking for resources and the server providing those resources, and they can serve many other purposes as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovy461d2k3gv5pd32mc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovy461d2k3gv5pd32mc2.png" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are different types of proxies :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf77b527jwn8bfarx7l5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf77b527jwn8bfarx7l5.png" alt=" " width="694" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most commonly used proxies are &lt;strong&gt;Forward Proxy&lt;/strong&gt; and &lt;strong&gt;Reverse Proxy :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;forward proxy&lt;/strong&gt; acts as a middleman for a &lt;strong&gt;client&lt;/strong&gt; to access the internet. It protects the client's identity by making requests on their behalf, hiding their IP address and providing a layer of security and privacy. Forward proxies are often used in corporate or school networks for content filtering and enforcing usage policies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r365temcar53ub3bjax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r365temcar53ub3bjax.png" alt=" " width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;reverse proxy&lt;/strong&gt; acts as a middleman for a &lt;strong&gt;server&lt;/strong&gt; (or group of servers), accepting requests from the internet on its behalf. It receives all incoming traffic and forwards each request to the appropriate internal server, hiding the servers' identities. Reverse proxies are commonly used for load balancing, SSL/TLS termination, and protection against DDoS attacks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffng3oig89a61qvz5uap8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffng3oig89a61qvz5uap8.png" alt=" " width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;
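&lt;p&gt;As a toy illustration (not production code), a reverse proxy can be sketched in a few lines of Python: the client talks only to the proxy, which fetches the resource from a hidden backend on the client's behalf (the addresses are hypothetical):&lt;/p&gt;

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical internal backend; in a real deployment this is the
# private address of the application server the proxy shields.
BACKEND = "http://127.0.0.1:9001"

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the resource from the hidden backend on the client's behalf...
        with urllib.request.urlopen(BACKEND + self.path) as upstream:
            body = upstream.read()
        # ...and return it as if the proxy itself produced it.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence request logging in this demo
```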

&lt;h1&gt;
  
  
  Load Balancers
&lt;/h1&gt;

&lt;p&gt;A &lt;strong&gt;load balancer&lt;/strong&gt; acts as a traffic controller, distributing incoming network traffic across multiple servers to prevent any single server from becoming overwhelmed. It ensures high availability, improves application performance, and allows for seamless scaling.&lt;/p&gt;

&lt;h3&gt;
  
  
  Types of Load Balancers
&lt;/h3&gt;

&lt;p&gt;There are three main types of load balancers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hardware Load Balancers&lt;/strong&gt;: These are physical appliances with dedicated hardware and software, offering high performance and security for large-scale, on-premise deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Software Load Balancers&lt;/strong&gt;: These are applications that run on standard servers, offering more flexibility and lower cost. They can be installed on-premise or in the cloud.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Load Balancers&lt;/strong&gt;: These are fully managed services provided by cloud providers like AWS or Google Cloud. They are scalable, easy to configure, and integrate seamlessly with other cloud services.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Used Algorithms
&lt;/h3&gt;

&lt;p&gt;Load balancing algorithms determine how traffic is distributed. They can be broadly categorized into two types:&lt;/p&gt;

&lt;h3&gt;
  
  
  Static Algorithms
&lt;/h3&gt;

&lt;p&gt;These algorithms don't consider the current state of the servers (e.g., their load or health).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Round Robin&lt;/strong&gt;: Distributes requests sequentially to each server in a rotating fashion. Simple and effective for servers with equal capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weighted Round Robin&lt;/strong&gt;: Assigns a "weight" to each server based on its capacity, sending more requests to more powerful servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IP Hash&lt;/strong&gt;: Uses a hash of the client's IP address to ensure requests from the same client are always sent to the same server, which is useful for maintaining session persistence.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Dynamic Algorithms
&lt;/h3&gt;

&lt;p&gt;These algorithms actively monitor server health and load to make more intelligent routing decisions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Least Connections&lt;/strong&gt;: Directs new requests to the server with the fewest active connections. This is good for environments where connection times vary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Least Response Time&lt;/strong&gt;: Sends requests to the server with the lowest response time, ensuring the fastest service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource-Based&lt;/strong&gt;: Uses an agent on each server to report real-time metrics (like CPU or memory usage) and directs traffic to the server with the most available resources.&lt;/li&gt;
&lt;/ul&gt;
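&lt;p&gt;To make these strategies concrete, here is a minimal Python sketch of Round Robin, IP Hash, and Least Connections (the server names and connection counters are made up for illustration):&lt;/p&gt;

```python
import hashlib
from itertools import cycle

servers = ["app-1", "app-2", "app-3"]  # hypothetical server pool

# Round Robin: hand out servers in a fixed rotation.
rr = cycle(servers)

def round_robin():
    return next(rr)

# IP Hash: the same client IP always maps to the same server.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Least Connections: pick the server with the fewest active connections.
active = {s: 0 for s in servers}

def least_connections():
    target = min(active, key=active.get)
    active[target] += 1  # caller should decrement when the connection closes
    return target
```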

&lt;h1&gt;
  
  
  Databases (Sharding, Replication, ACID, Vertical &amp;amp; Horizontal Scaling)
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Databases
&lt;/h3&gt;

&lt;p&gt;Databases are organized collections of data, typically stored electronically. To handle increasing amounts of data and traffic, different strategies are used to manage and scale them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sharding
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Sharding&lt;/strong&gt; is a technique that breaks up a large database into smaller, more manageable parts called &lt;strong&gt;shards&lt;/strong&gt;. Each shard is a separate database that contains a subset of the total data. The main goal of sharding is to improve performance by distributing the load across multiple servers. Instead of one server handling all the queries, multiple servers handle a portion of the workload, reducing the strain on any single machine. This is a form of &lt;strong&gt;horizontal scaling&lt;/strong&gt;.&lt;/p&gt;
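&lt;p&gt;At its core, routing a row to a shard is often just a hash of the shard key. A tiny Python sketch (the shard count is hypothetical):&lt;/p&gt;

```python
import hashlib

NUM_SHARDS = 4  # hypothetical number of shards

def shard_for(key):
    # Hash the shard key so rows distribute evenly and deterministically:
    # the same key always lands on the same shard.
    digest = hashlib.sha256(str(key).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS
```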

&lt;h3&gt;
  
  
  Replication
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Replication&lt;/strong&gt; is the process of creating and maintaining multiple copies of a database. It's used for two primary reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High Availability&lt;/strong&gt;: If one server fails, another copy (or &lt;strong&gt;replica&lt;/strong&gt;) can take over, ensuring the database remains accessible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Balancing&lt;/strong&gt;: Read traffic can be distributed across multiple replicas, reducing the load on the primary server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37xy4vlzt9jlmbwnyyny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37xy4vlzt9jlmbwnyyny.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are different types of replication, but the most common model is &lt;strong&gt;Primary-Replica&lt;/strong&gt; (also known as Master-Slave), where one database is the primary (writes are allowed), and others are replicas (reads are allowed).&lt;/p&gt;
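&lt;p&gt;A minimal Python sketch of this routing idea (the names and the naive SELECT check are illustrative only):&lt;/p&gt;

```python
import random

class ReplicatedDB:
    """Primary-Replica routing: writes go to the primary, reads to a replica."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas

    def route(self, query):
        # Naive classification: only plain SELECTs may be served by a replica;
        # anything that might write must go to the primary.
        if query.lstrip().upper().startswith("SELECT"):
            return random.choice(self.replicas)
        return self.primary

db = ReplicatedDB("db-primary", ["db-replica-1", "db-replica-2"])
```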

&lt;h3&gt;
  
  
  ACID Properties
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfwcj28lqlx9hu21dnbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfwcj28lqlx9hu21dnbr.png" alt=" " width="672" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Relational databases are &lt;strong&gt;ACID&lt;/strong&gt; compliant. ACID is a set of properties that guarantees a database transaction is processed reliably; these properties are fundamental to ensuring data integrity.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Atomicity&lt;/strong&gt;: A transaction is treated as a single, indivisible unit. It either completes entirely or doesn't happen at all. There are no partial transactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: A transaction must bring the database from one valid state to another, ensuring all data integrity rules are maintained.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolation&lt;/strong&gt;: Concurrent transactions are isolated from each other. The result of a transaction is the same as if it were the only transaction running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durability&lt;/strong&gt;: Once a transaction is committed, it will remain so even in the event of a system failure (like a power outage).&lt;/li&gt;
&lt;/ul&gt;
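&lt;p&gt;Atomicity is easy to see in practice with SQLite from the Python standard library: if a transaction fails halfway, the partial work is rolled back (the table and the simulated failure are made up for the demo):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("crash before the matching credit")  # simulate a failure
except RuntimeError:
    pass

# Atomicity: the partial debit was rolled back, so alice still has 100.
balance = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
```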

&lt;h3&gt;
  
  
  Scaling
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojggdjdrw1kircjn26pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojggdjdrw1kircjn26pl.png" alt=" " width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling&lt;/strong&gt; is the ability to handle increased demand. There are two main ways to scale a database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vertical Scaling&lt;/strong&gt; (Scaling Up): This involves increasing the resources of a single server, such as adding more CPU, RAM, or storage. It's simpler to implement but has a physical limit and can be more expensive. Think of it as upgrading a single computer with better parts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Horizontal Scaling&lt;/strong&gt; (Scaling Out): This involves adding more servers to the system. It's more complex to manage but provides greater flexibility and is often more cost-effective for large-scale systems. Sharding and replication are examples of horizontal scaling. Think of it as adding more computers to the network.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the record, the images and diagrams used in this guide are from FreeCodeCamp.&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>webdev</category>
      <category>architecture</category>
      <category>networking</category>
    </item>
    <item>
      <title>n8n self-hosting with persistent workflows</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Mon, 25 Aug 2025 08:10:39 +0000</pubDate>
      <link>https://dev.to/zorous/n8n-self-hosting-with-persistent-workflows-4pbk</link>
      <guid>https://dev.to/zorous/n8n-self-hosting-with-persistent-workflows-4pbk</guid>
      <description>&lt;h2&gt;
  
  
  🚀 How I Self-Host n8n with Docker + Git for Persistent Workflows
&lt;/h2&gt;

&lt;p&gt;If you’re like me and love automating things with &lt;a href="https://n8n.io/" rel="noopener noreferrer"&gt;n8n&lt;/a&gt;, self-hosting is the best way to have &lt;strong&gt;full control&lt;/strong&gt; over your workflows.&lt;br&gt;&lt;br&gt;
But there’s a catch:  &lt;/p&gt;

&lt;p&gt;👉 By default, workflows are stored inside the container’s DB. If something breaks, you risk losing them.&lt;br&gt;&lt;br&gt;
👉 And moving workflows between devices isn’t exactly plug &amp;amp; play.  &lt;/p&gt;

&lt;p&gt;So, I built a simple setup that makes self-hosting &lt;strong&gt;persistent, Git-friendly, and portable&lt;/strong&gt;, using &lt;strong&gt;Docker + Git hooks&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;You can find the repo &lt;a href="https://github.com/Zorous/n8n-docker.git" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  📦 Project Structure
&lt;/h2&gt;

&lt;p&gt;Here’s the layout of my &lt;code&gt;n8n-docker&lt;/code&gt; repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;n8n-docker/
├── docker-compose.yml      # Docker Compose setup for n8n
├── export-workflows.sh     # Script to export all workflows as JSON
├── workflows/              # Folder where exported workflows are stored (Git-tracked)
├── n8n_data/               # Docker volume for SQLite DB, credentials, and user account
└── .git/hooks/
    └── pre-commit          # Git hook to auto-export workflows before commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  ⚙️ How it Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Docker Compose Setup&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker-compose.yml&lt;/code&gt; runs n8n in a container.
&lt;/li&gt;
&lt;li&gt;Data is persisted in &lt;code&gt;n8n_data/&lt;/code&gt; so you don’t lose your account or workflows after restart.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automatic Workflow Export&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A helper script &lt;code&gt;export-workflows.sh&lt;/code&gt; uses &lt;code&gt;docker exec&lt;/code&gt; to dump all workflows as JSON into the &lt;code&gt;workflows/&lt;/code&gt; folder.
&lt;/li&gt;
&lt;li&gt;This makes it easy to track workflows in Git.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Git Pre-Commit Hook&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before every commit, &lt;code&gt;.git/hooks/pre-commit&lt;/code&gt; automatically runs the export script and stages updated JSONs.
&lt;/li&gt;
&lt;li&gt;Result: your Git history always has the latest workflows.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No more “oh crap I forgot to export that workflow.”  &lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Usage
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start n8n with Docker&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Commit changes to Git&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Create or edit workflows in n8n.&lt;/li&gt;
&lt;li&gt;Run a commit:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Update workflows"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The hook auto-exports workflows → stages them → includes them in your commit.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Move to another device&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Clone the repo.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;docker compose up -d&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Import workflows from the &lt;code&gt;workflows/&lt;/code&gt; folder.&lt;/li&gt;
&lt;li&gt;(Note: credentials are not included — you’ll need to recreate those manually).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ⚠️ Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The container must be &lt;strong&gt;running&lt;/strong&gt; for workflow export to work.&lt;/li&gt;
&lt;li&gt;Credentials are stored encrypted in &lt;code&gt;n8n_data/&lt;/code&gt;, not exported to Git.&lt;/li&gt;
&lt;li&gt;Add a &lt;code&gt;.gitignore&lt;/code&gt; to keep secrets and &lt;code&gt;.env&lt;/code&gt; files safe.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔗 References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.n8n.io/" rel="noopener noreferrer"&gt;n8n Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.n8n.io/getting-started/installation/docker/" rel="noopener noreferrer"&gt;n8n Docker Setup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Zorous/n8n-docker.git" rel="noopener noreferrer"&gt;The Github Repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;This setup has been a &lt;strong&gt;game-changer&lt;/strong&gt; for me — it feels like using n8n with built-in Git support.&lt;br&gt;
Now, even if I nuke my server, I can recover my workflows in minutes.&lt;/p&gt;

</description>
      <category>n8n</category>
      <category>tooling</category>
      <category>automation</category>
    </item>
    <item>
      <title>Predicting Customer Churn with TensorFlow – A Beginner-Friendly Guide</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Mon, 18 Aug 2025 10:21:12 +0000</pubDate>
      <link>https://dev.to/zorous/predicting-customer-churn-with-tensorflow-a-beginner-friendly-guide-3l3a</link>
      <guid>https://dev.to/zorous/predicting-customer-churn-with-tensorflow-a-beginner-friendly-guide-3l3a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Customer churn is when customers leave a company. Predicting churn helps businesses retain valuable customers and increase revenue.&lt;/p&gt;

&lt;p&gt;In this tutorial, I’ll show you how to use TensorFlow, pandas, and scikit-learn to build a neural network that predicts churn based on a real dataset.&lt;/p&gt;

&lt;p&gt;You can find a working ready to test/use example in my &lt;a href="https://github.com/Zorous/Tensorflow-in-nutshell" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No heavy theory — just step-by-step coding, explanations, and visuals.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have a .csv file that holds the customer data; we’ll use it as the dataset to train our model. It’s also available in the &lt;a href="https://github.com/Zorous/Tensorflow-in-nutshell/blob/master/telco_customer_churn.csv" rel="noopener noreferrer"&gt;Github Repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Setting Up the Environment
&lt;/h2&gt;

&lt;p&gt;We need these libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pandas numpy scikit-learn tensorflow matplotlib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;pandas → for data manipulation&lt;/li&gt;
&lt;li&gt;numpy → for numeric computations&lt;/li&gt;
&lt;li&gt;scikit-learn → preprocessing, scaling, train/test splitting&lt;/li&gt;
&lt;li&gt;tensorflow → building neural networks&lt;/li&gt;
&lt;li&gt;matplotlib → plotting results&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 2: Load and Inspect the Dataset
&lt;/h2&gt;

&lt;p&gt;Load the dataset with pandas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd

df = pd.read_csv("customer_churn.csv")
df.head()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tip: ⚠️ Always check your column names. Spaces or extra characters can break code later:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df.columns = df.columns.str.strip().str.replace(" ", "_")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Clean the Data
&lt;/h2&gt;

&lt;p&gt;Convert numeric columns with potential issues:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df['Total_Charges'] = pd.to_numeric(df['Total_Charges'], errors='coerce')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Drop missing rows and irrelevant columns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df = df.dropna()
df.drop('Customer_ID', axis=1, inplace=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Encode Categorical Variables
&lt;/h2&gt;

&lt;p&gt;Neural networks cannot process text. Convert categories to numbers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.preprocessing import LabelEncoder

df['Churn'] = df['Churn'].map({'Yes': 1, 'No': 0})

cat_cols = df.select_dtypes(include='object').columns
le = LabelEncoder()
for col in cat_cols:
    df[col] = le.fit_transform(df[col])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; Male → 1, Female → 0. Similarly for other categories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Split Features and Target
&lt;/h2&gt;

&lt;p&gt;Separate the input features (X) from the target (y).&lt;br&gt;
Before scaling, each feature (column) has its own mean and standard deviation. Neural networks learn better when features are roughly in the same range.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mean:&lt;/strong&gt; average value of the feature&lt;br&gt;
&lt;strong&gt;Standard Deviation:&lt;/strong&gt; measures how spread out the values are&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The formula for the mean (μ) of a dataset with N values is:&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgvsjekie4yvqqk3dido.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgvsjekie4yvqqk3dido.png" alt="mean calculation formula" width="229" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Standard Scaler subtracts the mean and divides by the standard deviation,&lt;br&gt;
&lt;strong&gt;The formula for the standard deviation is&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2mo5pjkspasj9h0w0p1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2mo5pjkspasj9h0w0p1.png" alt="standard deviation calculation formula" width="298" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After scaling, each feature has mean ~0 and std ~1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4e8awjkhmrbfps1n0of.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4e8awjkhmrbfps1n0of.png" alt="normal distribution example" width="800" height="612"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X = df.drop('Churn', axis=1)
y = df['Churn']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Scale features (important for neural networks):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
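&lt;p&gt;Under the hood, StandardScaler simply applies the two formulas above to every column. A quick sanity check with only the Python standard library (the toy values are made up):&lt;/p&gt;

```python
from statistics import mean, pstdev

tenure = [1, 12, 24, 48, 60]            # toy values for one feature column

mu = mean(tenure)                       # the mean from the first formula
sigma = pstdev(tenure)                  # population standard deviation, as StandardScaler uses
scaled = [(x - mu) / sigma for x in tenure]
# After scaling, the column is centered at ~0 with a spread of ~1.
```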



&lt;p&gt;Split into train/test sets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Build and Train the Neural Network
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = Sequential([
    Dense(32, activation='relu', input_shape=(X_train.shape[1],)),
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid')
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why these layers and activations?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxyij0z286nzuhrw93t7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxyij0z286nzuhrw93t7.png" alt="neural network layers" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dense(32) and Dense(16) → number of neurons in each hidden layer. Experiment to see what works best.&lt;/li&gt;
&lt;li&gt;ReLU activation → introduces non-linearity, helps the network learn complex patterns.&lt;/li&gt;
&lt;li&gt;Sigmoid in output → outputs a probability between 0 and 1, perfect for binary classification.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Optimizer: Adam
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy']
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why Adam?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adaptive optimizer: adjusts the learning rate automatically&lt;/li&gt;
&lt;li&gt;Combines the advantages of Momentum and RMSProp&lt;/li&gt;
&lt;li&gt;Works well out of the box for most problems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Loss function:&lt;/strong&gt; binary_crossentropy → suitable for predicting 0/1 outcomes.&lt;br&gt;
&lt;strong&gt;Metric:&lt;/strong&gt; accuracy → how often the model predicts correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;history = model.fit(
    X_train, y_train,
    validation_data=(X_test, y_test),
    epochs=20,
    batch_size=32
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Epochs = 20 → model sees the dataset 20 times.&lt;/p&gt;

&lt;p&gt;Batch size = 32 → updates weights every 32 samples.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: Evaluate and Visualize
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import matplotlib.pyplot as plt

plt.plot(history.history['accuracy'], label='Train')
plt.plot(history.history['val_accuracy'], label='Validation')
plt.title('Accuracy over Epochs')
plt.legend()
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Output Example:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Folxrcs1vefau6co1xoif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Folxrcs1vefau6co1xoif.png" alt="output chart example" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Train vs Validation curves → check for overfitting/underfitting.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8: Predict Churn for a New Customer
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
import pandas as pd

new_customer = pd.DataFrame([{
    'Gender': 0, 'Senior_Citizen': 0, 'Partner': 1, 'Dependents': 0,
    'tenure': 12, 'Phone_Service': 1, 'Multiple_Lines': 0, 'Internet_Service': 0,
    'Online_Security': 2, 'Online_Backup': 0, 'Device_Protection': 1,
    'Tech_Support': 0, 'Streaming_TV': 0, 'Streaming_Movies': 1, 'Contract': 0,
    'Paperless_Billing': 1, 'Payment_Method': 2, 'Monthly_Charges': 50.0, 'Total_Charges': 500.0
}])

new_customer_scaled = scaler.transform(new_customer)
churn_prob = model.predict(new_customer_scaled)[0][0]
churn_label = int(churn_prob &amp;gt; 0.5)

print(f"Churn Probability: {churn_prob:.2f}")
print(f"Churn Prediction: {churn_label} ({'Yes' if churn_label==1 else 'No'})")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You now have a complete pipeline to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean and preprocess data&lt;/li&gt;
&lt;li&gt;Train a neural network in TensorFlow&lt;/li&gt;
&lt;li&gt;Evaluate model performance&lt;/li&gt;
&lt;li&gt;Predict churn for new customers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This workflow is reusable for other tabular datasets and binary classification problems.&lt;/p&gt;

</description>
      <category>tensorflow</category>
      <category>data</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Automating Script Execution and Building a Production-Ready Data Pipeline with GitHub Actions</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Mon, 21 Jul 2025 12:24:37 +0000</pubDate>
      <link>https://dev.to/zorous/automating-script-execution-and-building-a-production-ready-data-pipeline-with-github-actions-4jfc</link>
      <guid>https://dev.to/zorous/automating-script-execution-and-building-a-production-ready-data-pipeline-with-github-actions-4jfc</guid>
      <description>&lt;p&gt;Learn how to set up a fully automated workflow to fetch external data, update your web app’s content, and trigger redeployment using GitHub Actions. In this guide, I’ll use news fetching as a practical example, but the approach applies to any data pipeline. You’ll see real-world CI/CD, automation tips, and how to keep your site’s data up-to-date—no manual intervention required!&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 The Goal
&lt;/h2&gt;

&lt;p&gt;I wanted my Next.js app to always display the latest news from a third-party site, without me having to manually update a JSON file or trigger a redeploy. The solution? Automate everything: scraping, data update, and deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ The Stack
&lt;/h2&gt;

&lt;p&gt;Next.js (App Router)&lt;br&gt;
Tailwind CSS + DaisyUI&lt;br&gt;
Puppeteer for scraping&lt;br&gt;
GitHub Actions for CI/CD&lt;br&gt;
Vercel for hosting&lt;/p&gt;
&lt;h2&gt;
  
  
  🧩 The Workflow
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Scraping News with Puppeteer&lt;/strong&gt;&lt;br&gt;
I wrote a Node.js script using Puppeteer to scrape news headlines and details, then save them to src/data/news.json in my repo.&lt;br&gt;
Key code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const puppeteer = require('puppeteer');
const fs = require('fs');

async function scrapeNews() {
  const browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox', '--disable-setuid-sandbox'] // Required for CI!
  });
  const page = await browser.newPage();
  await page.goto('https://www.news-website.com/', { waitUntil: 'networkidle2' });
  // ...scraping logic...
  fs.writeFileSync('src/data/news.json', JSON.stringify(news, null, 2));
  await browser.close();
}
scrapeNews();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt;&lt;br&gt;
If you run Puppeteer in CI (like GitHub Actions), you must use the --no-sandbox and --disable-setuid-sandbox flags, or Chromium will fail to launch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Automating with GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I created a workflow (.github/workflows/scrape.yml) to:&lt;br&gt;
Run on a schedule (every 12 hours) or manually&lt;br&gt;
Install dependencies&lt;br&gt;
Run the scraper&lt;br&gt;
Commit and push the updated news.json back to GitHub&lt;br&gt;
&lt;strong&gt;Key workflow steps:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;permissions:
  contents: write

jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci
      - run: node utils/News/scrape-news.js
      - run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
      - run: |
          git add src/data/news.json
          git diff --cached --quiet || git commit -m "chore: update news.json [auto]"
          git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Don’t forget:&lt;/strong&gt;&lt;br&gt;
Add permissions: contents: write at the top level of your workflow, or you’ll get a 403 error when trying to push.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Automatic Vercel Redeploys&lt;/strong&gt;&lt;br&gt;
Vercel is connected to my GitHub repo. When the workflow pushes a new commit (with updated news), Vercel automatically rebuilds and redeploys my app. The site always shows the latest scraped news—no manual intervention!&lt;/p&gt;

&lt;h2&gt;
  
  
  🐛 Common Pitfalls &amp;amp; Fixes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Puppeteer fails to launch in CI:&lt;/strong&gt;&lt;br&gt;
Add args: ['--no-sandbox', '--disable-setuid-sandbox'] to your launch options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow can’t push to repo:&lt;/strong&gt;&lt;br&gt;
Add permissions: contents: write to your workflow YAML.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case-sensitive paths:&lt;/strong&gt;&lt;br&gt;
Linux (CI) is case-sensitive! Double-check your file and folder names.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Result&lt;/strong&gt;&lt;br&gt;
News is scraped and updated automatically, every 12 hours.&lt;br&gt;
My Next.js app always displays fresh content.&lt;br&gt;
No manual data updates or redeploys needed.&lt;br&gt;
The workflow is robust, production-ready, and easy to maintain.&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>webdev</category>
      <category>automation</category>
      <category>news</category>
    </item>
    <item>
      <title>Learning COBOL as a Beginner — A Simple Step-by-Step Guide</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Wed, 16 Jul 2025 12:21:36 +0000</pubDate>
      <link>https://dev.to/zorous/learning-cobol-as-a-beginner-a-simple-step-by-step-guide-4p0k</link>
      <guid>https://dev.to/zorous/learning-cobol-as-a-beginner-a-simple-step-by-step-guide-4p0k</guid>
      <description>&lt;p&gt;Hey there! 👋&lt;br&gt;
I'm currently learning COBOL — yes, that old programming language from the 1960s! I know it looks outdated, but it’s still used in critical systems like banks, airlines, and government software.&lt;/p&gt;

&lt;p&gt;I decided to document my learning journey here, in the simplest and shortest way possible, in case it helps someone else out there who's just getting started like me.&lt;/p&gt;
&lt;h2&gt;
  
  
  📚 1. What is COBOL?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;COBOL stands for Common Business-Oriented Language&lt;/li&gt;
&lt;li&gt;It was created in 1959 to process business data (e.g., invoices, reports)&lt;/li&gt;
&lt;li&gt;Still used today in huge legacy systems&lt;/li&gt;
&lt;li&gt;Super readable (like plain English), but very strict in structure&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  🧱 2. Structure of a COBOL Program
&lt;/h2&gt;

&lt;p&gt;COBOL programs are divided into four main divisions, always in this order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IDENTIFICATION&lt;/strong&gt;  : Metadata like name, author, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ENVIRONMENT&lt;/strong&gt; : System configuration (often skipped by beginners)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DATA&lt;/strong&gt;     : Variable declarations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PROCEDURE&lt;/strong&gt;: The actual logic — your code goes here!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧾 Example Program&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      IDENTIFICATION DIVISION.
       PROGRAM-ID. HelloWorld.

       ENVIRONMENT DIVISION.

       DATA DIVISION.

       PROCEDURE DIVISION.
           DISPLAY "Hello, World!".
           STOP RUN.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  📏 3. COBOL Columns (Important!)
&lt;/h2&gt;

&lt;p&gt;COBOL was made for punch cards, so it has strict rules about where you write your code:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1–6:&lt;/strong&gt; Sequence numbers (optional, can ignore)&lt;br&gt;
&lt;strong&gt;7:&lt;/strong&gt; * for comments, - for line continuation&lt;br&gt;
&lt;strong&gt;8–72:&lt;/strong&gt; Actual code goes here (indentation often starts at column 8)&lt;br&gt;
&lt;strong&gt;73–80:&lt;/strong&gt; Ignored by the compiler&lt;/p&gt;
&lt;h2&gt;
  
  
  📝 4. Declaring Variables (DATA DIVISION)
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 Name     PIC A(20).
       01 Age      PIC 99.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Explanation:&lt;/p&gt;

&lt;p&gt;PIC = Picture Clause, defines the variable type&lt;/p&gt;

&lt;p&gt;A(20) = 20 alphabetic characters&lt;/p&gt;

&lt;p&gt;99 = 2-digit number&lt;/p&gt;
&lt;h2&gt;
  
  
  🧾 5. Input and Output
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;       DISPLAY "Enter your name: ".
       ACCEPT Name.
       DISPLAY "Hello, " Name.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;DISPLAY shows a message, and ACCEPT takes input from the user&lt;/p&gt;
&lt;h2&gt;
  
  
  🔁 6. IF Conditions
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  IF Age &amp;gt; 18
           DISPLAY "You are an adult."
       ELSE
           DISPLAY "You are a minor."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Pretty readable, right?&lt;/p&gt;
&lt;h2&gt;
  
  
  🔄 7. Loops with PERFORM
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  WORKING-STORAGE SECTION.
       01 I      PIC 9 VALUE 1.

       PROCEDURE DIVISION.
           PERFORM VARYING I FROM 1 BY 1 UNTIL I &amp;gt; 5
               DISPLAY "Count: " I
           END-PERFORM.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Prints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Count: 1
Count: 2
Count: 3
Count: 4
Count: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s what I’ve learned so far!&lt;br&gt;
This post is meant to be a beginner-to-beginner guide, I’ll keep sharing more as I go deeper into COBOL.&lt;br&gt;
If you’re also learning COBOL or just curious about it, feel free to follow along! Let’s bring a bit of life into this powerful (but old-school) language.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>cobol</category>
      <category>ibm</category>
    </item>
    <item>
      <title>How to Work With Different People on Your Team</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Tue, 10 Jun 2025 10:16:54 +0000</pubDate>
      <link>https://dev.to/zorous/how-to-work-with-different-people-on-your-team-445o</link>
      <guid>https://dev.to/zorous/how-to-work-with-different-people-on-your-team-445o</guid>
      <description>&lt;h2&gt;
  
  
  Teamwork: Mastering the Art of Working with Anyone
&lt;/h2&gt;

&lt;p&gt;Learning to navigate differences and finding ways to work effectively with different kinds of people isn’t just smart, it’s a practical strategy to get things done without losing your mind.&lt;/p&gt;

&lt;p&gt;If you take the time to notice people around you, you'll find that they all have unique styles in the way they talk, how they work, and what motivates or drives them. Each person has a distinct personality that may appeal to you because it’s relatable, or bother you because you can’t comprehend the way they think, feel, and act.&lt;/p&gt;

&lt;p&gt;But we hardly pay attention to others. We rarely take the time to understand what makes them act a certain way. Not understanding others not only leads to misunderstandings, frustration, and conflict; it also makes us miserable because we simply can’t stand the people we need to work with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’re a fast-paced go-getter, you may feel slowed down by someone who’s more detail-oriented, while they may feel rushed and pushed into a decision.&lt;/p&gt;

&lt;p&gt;You prefer collaborating closely, while someone else likes to work independently with minimal interference. You may see them as cold and unresponsive, while they may think of you as annoying.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unlocking the "Idiots" Code
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqhmqp696qlnug9pgvyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqhmqp696qlnug9pgvyy.png" alt="Cover Image of The Surrounded by Idiots Book" width="756" height="1008"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently read a book that shed a lot of light on these dynamics: "Surrounded by Idiots" by Thomas Erikson. The title is quite catchy, but the insights are profound. Erikson introduces a simple yet powerful framework, categorizing different behavioral types into four colors: Red, Yellow, Green, and Blue. It’s like having a guide to truly understanding why people do what they do.&lt;/p&gt;

&lt;p&gt;This book really resonated with me because it offers a practical lens for viewing team interactions. Instead of getting frustrated by what seems like inexplicable behavior, you can start to identify different communication styles and, more importantly, adapt your own approach to connect better with colleagues.&lt;/p&gt;

&lt;p&gt;So, how can we leverage this understanding to work more effectively with everyone on the team?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with You: Know Your Own Style&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you try to understand others, take a moment to understand yourself. What’s your dominant communication style? Are you direct and results-focused, or do you prefer to brainstorm and socialize? Do you prioritize harmony and stability, or are you driven by facts and logic? Recognizing your own tendencies is the first step to adapting how you interact with others. Knowing your style helps you predict how you might come across to someone with a different approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Embrace the Blend&lt;/strong&gt;&lt;br&gt;
Imagine a team where everyone thought and acted exactly the same way. It would be pretty uninspired, wouldn't it? Diversity in personalities is a strength, not a hindrance. Different perspectives lead to more robust solutions and a richer work environment. Embrace the unique mix of people around you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decode Their Communication&lt;/strong&gt;&lt;br&gt;
While you don't need to formally "color-code" your colleagues, noticing how people prefer to communicate is incredibly helpful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "Reds" (Direct &amp;amp; Driven):&lt;/strong&gt; They want action and results. Give them the bottom line.&lt;br&gt;
&lt;strong&gt;The "Yellows" (Social &amp;amp; Enthusiastic):&lt;/strong&gt; They love to talk and brainstorm. Engage them with ideas and collaboration.&lt;br&gt;
&lt;strong&gt;The "Greens" (Stable &amp;amp; Harmonious):&lt;/strong&gt; They value peace and teamwork. Be patient and emphasize shared support.&lt;br&gt;
&lt;strong&gt;The "Blues" (Analytical &amp;amp; Precise):&lt;/strong&gt; They need facts and details. Present information logically and with data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adjust Your Approach, Don’t Force Theirs&lt;/strong&gt;&lt;br&gt;
This is perhaps the most crucial takeaway. It’s not about trying to force your colleagues to adopt your communication style. Instead, the responsibility is on you to adapt.&lt;/p&gt;

&lt;p&gt;For the direct types (Red): Be concise, focus on outcomes.&lt;br&gt;
For the social types (Yellow): Be engaging, open to new ideas.&lt;br&gt;
For the harmonious types (Green): Be patient, empathetic, and emphasize team well-being.&lt;br&gt;
For the analytical types (Blue): Be factual, provide details, and present information logically.&lt;/p&gt;

&lt;p&gt;This isn't about compromising your authenticity, but rather being flexible in your delivery to ensure your message lands effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observe and Listen Deeply&lt;/strong&gt;&lt;br&gt;
Beyond general communication styles, pay close attention to individual cues. What truly motivates your colleagues? What are their preferred ways of working? Do they respond better to email, instant messages, or in-person discussions for certain topics? Observe their reactions, listen to their concerns, and ask open-ended questions to gain a deeper understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build a Safe Space&lt;/strong&gt;&lt;br&gt;
Regardless of personality type, everyone thrives in an environment where they feel safe to express ideas, ask questions, and even make mistakes without fear of judgment. Encourage open dialogue, be approachable, and create a culture of respect where differences are celebrated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Focus on the Finish Line&lt;/strong&gt;&lt;br&gt;
Ultimately, despite our individual differences, we are united by common objectives. Regularly remind your team of the overarching goals and how each person's unique contribution is vital to achieving them. This shared purpose can help bridge communication gaps and foster a strong sense of collective responsibility.&lt;/p&gt;

&lt;p&gt;Working effectively with different people on your team doesn't have to be a constant uphill battle. By understanding that seemingly "difficult" behaviors are often just different ways of navigating the world, and by actively adapting your communication and approach, you can transform potential friction into powerful synergy. So, go forth, embrace the incredible diversity of your team, and watch your collective success soar!&lt;/p&gt;

</description>
      <category>teamwork</category>
      <category>workflow</category>
      <category>understanding</category>
      <category>productivity</category>
    </item>
    <item>
      <title>EU Tech Law Update: What Devs Need to Know</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Tue, 10 Jun 2025 08:21:49 +0000</pubDate>
      <link>https://dev.to/zorous/eu-tech-law-update-what-devs-need-to-know-oe7</link>
      <guid>https://dev.to/zorous/eu-tech-law-update-what-devs-need-to-know-oe7</guid>
      <description>&lt;p&gt;The EU has been busy making some changes to its tech laws, and as developers and startup founders, it's super important to know how these updates impact our work. &lt;br&gt;
&lt;strong&gt;Good news:&lt;/strong&gt; some areas are getting a bit easier! &lt;br&gt;
&lt;strong&gt;Bad news:&lt;/strong&gt; new tech means new responsibilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgefkfgn8zhe8ube1mpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgefkfgn8zhe8ube1mpi.png" alt="Image description" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. GDPR (General Data Protection Regulation) for Small Businesses is Getting Easier (Finally!)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Less Paperwork: If you're a small or mid-sized startup (especially under 750 employees), you might see a break on some of the heavy paperwork that GDPR usually demands. The EU wants to cut down on "red tape."&lt;br&gt;
Focus on High-Risk Data: This doesn't mean you can ignore GDPR! You'll still need to be super careful with "high-risk" personal data (like health info or financial data). But for simpler data processing, the burden might be less.&lt;br&gt;
What to do: Keep making privacy a core part of your design. But know that some of the stricter documentation rules might not apply to you if your data handling is low-risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. AI Liability: Software is a "Product" Now!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No Dedicated AI Liability Law (for now): The EU scrapped its specific AI liability proposal. This means there isn't a new, separate law just for AI accidents.&lt;br&gt;
Existing Product Rules Apply: Instead, our existing Product Liability Directive has been updated. The BIG change? Software, including AI systems, is now officially considered a "product."&lt;br&gt;
What this means: If your AI or software causes harm because it's "defective," you could be held responsible – even if it wasn't your fault directly. This is called "strict liability." So, make sure your AI is robust, well-tested, and secure. Document everything!&lt;br&gt;
AI Act Still Important: Don't forget the separate EU AI Act! If your AI is "high-risk" (e.g., in healthcare, hiring), you still have to follow strict rules for safety, transparency, and human oversight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. ePrivacy (The "Cookie Law"): Still Around!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6874h32977tq0iqi0k7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6874h32977tq0iqi0k7s.png" alt="Image description" width="612" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No New Cookie Regulation: After years of talk, the EU has actually withdrawn its proposal for a new ePrivacy Regulation.&lt;br&gt;
Old Rules Still Apply: This means the original ePrivacy Directive (the "Cookie Law") is still in force.&lt;br&gt;
What this means: You still need proper cookie consent on your websites and apps. That means:&lt;br&gt;
Clear banners: Tell users what cookies you use.&lt;br&gt;
Real choice: Let them accept, reject, or customize. No pre-ticked boxes!&lt;br&gt;
Easy opt-out: Make it simple to change their mind later.&lt;br&gt;
Direct Marketing: If you send marketing emails or messages, you generally need consent for those too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom Line for Devs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The message is clear: build safely and with privacy in mind from the start. While some of the administrative burden for smaller businesses is easing, your core responsibility for user data and product safety remains. If you're building with AI, understand its risks and document your development process thoroughly. And yes, those cookie banners aren't going away anytime soon!&lt;/p&gt;

&lt;p&gt;Stay sharp, keep coding!&lt;/p&gt;

</description>
      <category>gdpr</category>
      <category>softwaredevelopment</category>
      <category>news</category>
      <category>data</category>
    </item>
    <item>
      <title>Is PostgreSQL the Swiss Army Knife of Web Development? Rethinking Your Stack</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Thu, 06 Mar 2025 09:45:40 +0000</pubDate>
      <link>https://dev.to/zorous/is-postgresql-the-swiss-army-knife-of-web-development-rethinking-your-stack-568k</link>
      <guid>https://dev.to/zorous/is-postgresql-the-swiss-army-knife-of-web-development-rethinking-your-stack-568k</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjugfrcwbf6v4e5asmuel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjugfrcwbf6v4e5asmuel.png" alt="Image description" width="497" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We all know and (maybe sometimes) love PostgreSQL as that reliable relational database workhorse. But what if I told you it could be so much more?  Recently, I've been exploring the incredible potential of Postgres and it has completely shifted my perspective on what it's capable of.&lt;/p&gt;

&lt;p&gt;It turns out that with its amazing flexibility and a vibrant ecosystem of extensions, PostgreSQL can actually replace a whole bunch of specialized tools in your web development stack.  Intrigued? I was! Let's dive into some of the key areas where Postgres can shine, potentially simplifying your architecture and reducing tool sprawl.&lt;/p&gt;

&lt;h3&gt;
  
  
  Beyond Relational: Postgres as a Multi-Tool Powerhouse
&lt;/h3&gt;

&lt;p&gt;Postgres, armed with its built-in features and extensions, can step in for tools you might be currently relying on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NoSQL Database? JSONB to the Rescue!&lt;/strong&gt;  Need to handle schemaless data or flexible documents?  Postgres' &lt;code&gt;JSONB&lt;/code&gt; data type lets you store and efficiently query JSON data right within your relational database. Forget spinning up a separate NoSQL database for certain use cases – Postgres has you covered.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cron Jobs? Say Hello to &lt;code&gt;pg_cron&lt;/code&gt;!&lt;/strong&gt;  Scheduling tasks directly from your database?  The &lt;code&gt;pg_cron&lt;/code&gt; extension allows you to schedule cron-like jobs within Postgres itself, simplifying your infrastructure and keeping scheduled tasks close to your data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;In-Memory Cache? Unlogged Tables for Speed!&lt;/strong&gt;  Need a fast, temporary cache?  Unlogged tables in Postgres provide in-memory-like performance without the complexity of setting up and managing a dedicated cache like Redis for certain scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Database?  &lt;code&gt;pgvector&lt;/code&gt; and &lt;code&gt;pgai&lt;/code&gt; for AI Magic!&lt;/strong&gt;  Jumping into the world of vector embeddings and AI? Extensions like &lt;code&gt;pgvector&lt;/code&gt; and &lt;code&gt;pgai&lt;/code&gt; turn Postgres into a viable vector database, enabling similarity searches and vector-based operations directly within your database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full-Text Search? Built-in Power!&lt;/strong&gt;  Need robust search capabilities? Postgres has powerful built-in full-text search functionality. You might not need to reach for dedicated search engines for many applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GraphQL API?  &lt;code&gt;pg_graphql&lt;/code&gt; Makes it Easy!&lt;/strong&gt;  Want to expose your data via GraphQL? The &lt;code&gt;pg_graphql&lt;/code&gt; extension lets you build GraphQL APIs directly on top of your Postgres database, streamlining API development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Data Sync? ElectricSQL for the Win!&lt;/strong&gt;  Need real-time data synchronization?  ElectricSQL can bring real-time capabilities to Postgres, opening up possibilities for collaborative applications and live updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication?  &lt;code&gt;pgcrypto&lt;/code&gt; and &lt;code&gt;pgjwt&lt;/code&gt; for Security!&lt;/strong&gt;  Handling authentication? Extensions like &lt;code&gt;pgcrypto&lt;/code&gt; and &lt;code&gt;pgjwt&lt;/code&gt; provide cryptographic functions and JWT support directly within Postgres, enhancing your database's security capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time-Series Analytics?  &lt;code&gt;pg_mooncake&lt;/code&gt; for Insights!&lt;/strong&gt;  Working with time-series data? The &lt;code&gt;pg_mooncake&lt;/code&gt; extension adds time-series analytics capabilities to Postgres, making it suitable for monitoring and trend analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;REST APIs? PostgREST for Instant Endpoints!&lt;/strong&gt;  Need to quickly expose your data via REST APIs? PostgREST can automatically generate RESTful APIs from your Postgres database, significantly speeding up backend development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend Hosting?  Serve Static Files Directly!&lt;/strong&gt;  Believe it or not, you can even serve frontend code directly from Postgres in certain situations! While maybe not for high-traffic production, it's a fascinating capability for simpler setups or internal tools.&lt;/li&gt;
&lt;/ul&gt;
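&lt;p&gt;To make the first point concrete, here is what the JSONB route can look like (a minimal sketch; the events table and its fields are invented for illustration):&lt;/p&gt;

```sql
-- Illustrative only: the table and column names are made up.
-- Store flexible documents next to relational columns.
CREATE TABLE events (
  id      bigserial PRIMARY KEY,
  payload jsonb NOT NULL
);

-- A GIN index makes containment queries fast.
CREATE INDEX events_payload_idx ON events USING GIN (payload);

-- Find events whose payload contains {"type": "signup"}.
SELECT id, payload->>'user' AS "user"
  FROM events
 WHERE payload @> '{"type": "signup"}';
```

The @> containment operator and the GIN index are what let JSONB stand in for a document store in many cases.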

&lt;h3&gt;
  
  
  Neon: Serverless Postgres in the Spotlight
&lt;/h3&gt;

&lt;p&gt;If you're looking to explore Postgres in a serverless environment, &lt;a href="https://neon.tech/" rel="noopener noreferrer"&gt;Neon&lt;/a&gt; is definitely worth checking out.&lt;/p&gt;

&lt;h3&gt;
  
  
  The "But..." - Considerations for Real-World Use
&lt;/h3&gt;

&lt;p&gt;Now, before you go ripping out half your tech stack and replacing it with Postgres, it's crucial to remember: &lt;strong&gt;just because you &lt;em&gt;can&lt;/em&gt; do it, doesn't mean you &lt;em&gt;always&lt;/em&gt; should.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's essential to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt;  While Postgres is powerful, specialized tools are often optimized for their specific tasks. Evaluate if Postgres can handle the performance demands of replacing a dedicated tool in your specific use case.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability:&lt;/strong&gt;  Complexity can sometimes shift.  While you might reduce the number of tools, ensure that managing a more multifaceted Postgres setup doesn't become more complex in the long run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Right Tool for the Job:&lt;/strong&gt;  Sometimes, a dedicated tool &lt;em&gt;is&lt;/em&gt; simply the best choice. Don't force Postgres into a role where another tool might be significantly more efficient or better suited.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final Thoughts: Embrace Postgres' Versatility, But Choose Wisely
&lt;/h3&gt;

&lt;p&gt;This exploration has really opened my eyes to the sheer versatility of PostgreSQL. It's not just a relational database anymore; it's a platform capable of handling a surprising range of web development needs.&lt;/p&gt;

&lt;p&gt;While it's tempting to jump on the "one database to rule them all" bandwagon, the key takeaway is &lt;strong&gt;thoughtful evaluation.&lt;/strong&gt;  Postgres offers incredible potential to simplify your stack, but always consider the specific requirements of your project and choose the &lt;em&gt;right&lt;/em&gt; tool for each job.&lt;/p&gt;

</description>
      <category>tech</category>
      <category>postgressql</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Azure Pipelines</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Tue, 04 Mar 2025 08:32:45 +0000</pubDate>
      <link>https://dev.to/zorous/azure-pipelines-27mc</link>
      <guid>https://dev.to/zorous/azure-pipelines-27mc</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Introduction to Azure Pipelines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf8k9fsqybg2r8w2a78t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf8k9fsqybg2r8w2a78t.png" alt=" " width="700" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Azure Pipelines is a cloud service that automates the building and testing of your code and deploys it to any target. It works with any language, platform, and cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Concepts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Triggers:&lt;/strong&gt; Events that initiate a pipeline run (e.g., code commits, scheduled times).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pools:&lt;/strong&gt; Groups of agents (virtual machines or containers) that execute pipeline jobs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tasks:&lt;/strong&gt; Pre-built or custom actions performed in a pipeline (e.g., compiling code, running tests).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Steps:&lt;/strong&gt; Ordered sequences of tasks within a job.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Variables:&lt;/strong&gt; Values that can be used throughout the pipeline to customize behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artifacts:&lt;/strong&gt; Files or packages produced by a pipeline run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YAML Syntax:&lt;/strong&gt; Pipelines are defined using YAML (YAML Ain't Markup Language) files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;YAML Syntax Basics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Indentation is crucial for defining the structure.&lt;/li&gt;
&lt;li&gt;Key-value pairs define settings.&lt;/li&gt;
&lt;li&gt;Lists are defined with hyphens (&lt;code&gt;-&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
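&lt;p&gt;As a quick illustration of those three rules (this fragment is generic, not tied to the pipeline below):&lt;/p&gt;

```yaml
trigger:              # key-value pair: "trigger" is the key
- main                # list item, marked with a hyphen

steps:
- script: echo "Hi"   # nested key-value pairs under a list item...
  displayName: Greet  # ...whose grouping is defined by indentation
```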

&lt;p&gt;&lt;strong&gt;2. Java (Maven) Pipeline Explained&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This pipeline automates the build, test, and deployment of a Java project using Apache Maven.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YAML File Breakdown:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Maven&lt;/span&gt;
&lt;span class="c1"&gt;# Build your Java project and run tests with Apache Maven.&lt;/span&gt;
&lt;span class="c1"&gt;# Add steps that analyze code, save build artifacts, deploy, and more:&lt;/span&gt;
&lt;span class="c1"&gt;# [https://docs.microsoft.com/azure/devops/pipelines/languages/java](https://docs.microsoft.com/azure/devops/pipelines/languages/java)&lt;/span&gt;
&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Test&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Server Name&lt;/span&gt; 

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Maven@3&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;mavenPomFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;JobAPI/pom.xml'&lt;/span&gt;
    &lt;span class="na"&gt;goals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;clean&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;compile&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;process-resources&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;package'&lt;/span&gt;
    &lt;span class="na"&gt;publishJUnitResults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;testResultsFiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/surefire-reports/TEST-*.xml'&lt;/span&gt;
    &lt;span class="na"&gt;javaHomeOption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;JDKVersion'&lt;/span&gt;
    &lt;span class="na"&gt;jdkVersionOption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1.8'&lt;/span&gt;
    &lt;span class="na"&gt;mavenVersionOption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Default'&lt;/span&gt;
    &lt;span class="na"&gt;mavenOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-Xmx3072m'&lt;/span&gt;
    &lt;span class="na"&gt;mavenAuthenticateFeed&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;effectivePomSkip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;sonarQubeRunAnalysis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CopyFiles@2&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;SourceFolder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(Build.SourcesDirectory)/JobAPI'&lt;/span&gt;
    &lt;span class="na"&gt;Contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;pom.xml&lt;/span&gt;
      &lt;span class="s"&gt;target/*.jar&lt;/span&gt;
    &lt;span class="na"&gt;TargetFolder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(Build.ArtifactStagingDirectory)'&lt;/span&gt;
  &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PublishBuildArtifacts@1&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;PathtoPublish&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(Build.ArtifactStagingDirectory)'&lt;/span&gt;
    &lt;span class="na"&gt;ArtifactName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;drop'&lt;/span&gt;
    &lt;span class="na"&gt;publishLocation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Container'&lt;/span&gt;
  &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Maven@3&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;mavenPomFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;JobAPI/pom.xml'&lt;/span&gt;
    &lt;span class="na"&gt;goals&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;deploy'&lt;/span&gt;
    &lt;span class="na"&gt;publishJUnitResults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;testResultsFiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/surefire-reports/TEST-*.xml'&lt;/span&gt;
    &lt;span class="na"&gt;javaHomeOption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;JDKVersion'&lt;/span&gt;
    &lt;span class="na"&gt;jdkVersionOption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1.8'&lt;/span&gt;
    &lt;span class="na"&gt;mavenVersionOption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Default'&lt;/span&gt;
    &lt;span class="na"&gt;mavenOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-Xmx3072m'&lt;/span&gt;
    &lt;span class="na"&gt;mavenAuthenticateFeed&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;effectivePomSkip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;sonarQubeRunAnalysis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;trigger: - Test&lt;/code&gt;&lt;/strong&gt;: Runs the pipeline when code is pushed to the &lt;code&gt;Test&lt;/code&gt; branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;pool: Server Name&lt;/code&gt;&lt;/strong&gt;: Runs the pipeline on a self-hosted agent pool (the pool name shown here is a placeholder).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;steps:&lt;/code&gt;&lt;/strong&gt;: Defines the sequence of tasks.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Maven@3&lt;/code&gt; (Clean, Compile, Package):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;mavenPomFile&lt;/code&gt;: Specifies the path to the &lt;code&gt;pom.xml&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;goals&lt;/code&gt;: Defines the Maven goals to execute (&lt;code&gt;clean&lt;/code&gt;, &lt;code&gt;compile&lt;/code&gt;, &lt;code&gt;process-resources&lt;/code&gt;, &lt;code&gt;package&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;publishJUnitResults&lt;/code&gt;: Publishes JUnit test results.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;testResultsFiles&lt;/code&gt;: Specifies the location of test result files.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;javaHomeOption&lt;/code&gt;, &lt;code&gt;jdkVersionOption&lt;/code&gt;, &lt;code&gt;mavenVersionOption&lt;/code&gt;, &lt;code&gt;mavenOptions&lt;/code&gt;: Configures the Java and Maven environment.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;CopyFiles@2&lt;/code&gt; (POM and JAR):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Copies the &lt;code&gt;pom.xml&lt;/code&gt; and the generated JAR file to the artifact staging directory (&lt;code&gt;$(Build.ArtifactStagingDirectory)&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;condition&lt;/code&gt;: Prevents these tasks from running during pull requests.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;PublishBuildArtifacts@1&lt;/code&gt;&lt;/strong&gt;: Publishes the staged artifacts as "drop."&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;Maven@3&lt;/code&gt; (Deploy):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Runs the Maven &lt;code&gt;deploy&lt;/code&gt; goal to push the built artifact to a Maven repository.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))&lt;/code&gt;&lt;/strong&gt;: Ensures specific steps only run on successful builds and not on pull requests.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Explanation of Maven Goals and Options:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;clean&lt;/code&gt;: Deletes the &lt;code&gt;target&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;compile&lt;/code&gt;: Compiles the Java source code.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;process-resources&lt;/code&gt;: Copies resources to the &lt;code&gt;target&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;package&lt;/code&gt;: Packages the compiled code into a JAR file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deploy&lt;/code&gt;: Pushes the packaged artifact to a remote repository.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-Xmx3072m&lt;/code&gt;: Sets the maximum JVM heap size for Maven to 3&amp;nbsp;GB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Artifact Handling:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pipeline copies the &lt;code&gt;pom.xml&lt;/code&gt; and JAR file to the artifact staging directory and then publishes them as an artifact named "drop."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The second Maven task runs the &lt;code&gt;deploy&lt;/code&gt; goal, which pushes the resulting JAR file to the Maven repository configured in the &lt;code&gt;pom.xml&lt;/code&gt; file.&lt;/p&gt;
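&lt;p&gt;For reference, the target repository is typically declared in a &lt;code&gt;distributionManagement&lt;/code&gt; section of the &lt;code&gt;pom.xml&lt;/code&gt;. A minimal sketch (the repository &lt;code&gt;id&lt;/code&gt; and URL below are placeholders, not values from this project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&amp;lt;distributionManagement&amp;gt;
  &amp;lt;repository&amp;gt;
    &amp;lt;id&amp;gt;releases&amp;lt;/id&amp;gt;
    &amp;lt;url&amp;gt;https://pkgs.example.com/maven/v1&amp;lt;/url&amp;gt;
  &amp;lt;/repository&amp;gt;
&amp;lt;/distributionManagement&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Maven also needs matching credentials for that repository &lt;code&gt;id&lt;/code&gt; in &lt;code&gt;settings.xml&lt;/code&gt; (or via the task's &lt;code&gt;mavenAuthenticateFeed&lt;/code&gt; input when publishing to an Azure Artifacts feed).&lt;/p&gt;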

&lt;p&gt;&lt;strong&gt;3. ASP.NET Core (.NET Framework) Pipeline Explained&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This pipeline builds and tests an ASP.NET Core project targeting the full .NET Framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YAML File Breakdown:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ASP.NET Core (.NET Framework)&lt;/span&gt;
&lt;span class="c1"&gt;# Build and test ASP.NET Core projects targeting the full .NET Framework.&lt;/span&gt;
&lt;span class="c1"&gt;# Add steps that publish symbols, save build artifacts, and more:&lt;/span&gt;
&lt;span class="c1"&gt;# [https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core](https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core)&lt;/span&gt;

&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Test&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;windows-latest'&lt;/span&gt;

&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;solution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*.sln'&lt;/span&gt;
  &lt;span class="na"&gt;buildPlatform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Any&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;CPU'&lt;/span&gt;
  &lt;span class="na"&gt;buildConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Release'&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NuGetToolInstaller@1&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NuGetCommand@2&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;NuGet&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;restore'&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;restore'&lt;/span&gt;
    &lt;span class="na"&gt;restoreSolution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(solution)'&lt;/span&gt;
    &lt;span class="na"&gt;feedsToUse&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config&lt;/span&gt;
    &lt;span class="na"&gt;nugetConfigPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TTLineIMSBackoffice/NuGet.Config'&lt;/span&gt;
    &lt;span class="na"&gt;externalFeedCredentials&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Telerik&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Nuget'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VSBuild@1&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;solution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(solution)'&lt;/span&gt;
    &lt;span class="na"&gt;msbuildArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/p:DeployOnBuild=true&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/p:WebPublishMethod=Package&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/p:PackageAsSingleFile=true&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/p:SkipInvalidConfigurations=true&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/p:DesktopBuildPackageLocation="$(build.artifactStagingDirectory)/WebApp.zip"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/p:DeployIisAppPath="Default&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Web&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Site"'&lt;/span&gt;
    &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(buildPlatform)'&lt;/span&gt;
    &lt;span class="na"&gt;configuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(buildConfiguration)'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VSTest@2&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(buildPlatform)'&lt;/span&gt;
    &lt;span class="na"&gt;configuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(buildConfiguration)'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PublishBuildArtifacts@1&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;PathtoPublish&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(Build.ArtifactStagingDirectory)'&lt;/span&gt;
    &lt;span class="na"&gt;ArtifactName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;drop'&lt;/span&gt;
    &lt;span class="na"&gt;publishLocation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Container'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;trigger: - Test&lt;/code&gt;&lt;/strong&gt;: Triggers the pipeline on commits to the &lt;code&gt;Test&lt;/code&gt; branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;pool: vmImage: 'windows-latest'&lt;/code&gt;&lt;/strong&gt;: Uses a Microsoft-hosted Windows agent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;variables:&lt;/code&gt;&lt;/strong&gt;: Defines pipeline variables.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;solution&lt;/code&gt;: Path to the solution file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;buildPlatform&lt;/code&gt;: Build platform (&lt;code&gt;Any CPU&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;buildConfiguration&lt;/code&gt;: Build configuration (&lt;code&gt;Release&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;steps:&lt;/code&gt;&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;NuGetToolInstaller@1&lt;/code&gt;&lt;/strong&gt;: Installs the NuGet tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;NuGetCommand@2&lt;/code&gt; (Restore):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Restores NuGet packages using the specified &lt;code&gt;NuGet.Config&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;externalFeedCredentials&lt;/code&gt;: Supplies credentials for the external Telerik NuGet feed.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;VSBuild@1&lt;/code&gt;&lt;/strong&gt;: Builds the solution using MSBuild.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;msbuildArgs&lt;/code&gt;: Configures the build process, including creating a web package (&lt;code&gt;WebApp.zip&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;VSTest@2&lt;/code&gt;&lt;/strong&gt;: Runs unit tests.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;&lt;code&gt;PublishBuildArtifacts@1&lt;/code&gt;&lt;/strong&gt;: Publishes the build artifacts.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Understanding the AS400 and its Connection to DB2</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Mon, 24 Feb 2025 15:12:17 +0000</pubDate>
      <link>https://dev.to/zorous/understanding-the-as400-and-its-connection-to-db2-4dik</link>
      <guid>https://dev.to/zorous/understanding-the-as400-and-its-connection-to-db2-4dik</guid>
      <description>&lt;p&gt;The AS400, now known as the IBM i, is a unique and powerful system that has played a significant role in the world of business computing.  Often misunderstood, it's more than just a legacy system; it's a robust, integrated platform with a rich history and a surprisingly modern architecture.  A key component of its strength is its tight integration with the DB2 database. This article will delve into the AS400, its evolution, and its inseparable link to DB2.   &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncfz5ydsnibi4z7uvnl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncfz5ydsnibi4z7uvnl9.png" alt="Image description" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the AS400 / IBM i?
&lt;/h2&gt;

&lt;p&gt;Originally introduced in 1988 as the Application System/400, the AS400 was designed as a mid-range computer system aimed at businesses of all sizes.&lt;/p&gt;

&lt;p&gt;It quickly gained popularity for its reliability, ease of use, and integrated architecture.  Over the years, it has evolved significantly, undergoing name changes (first to iSeries, then to System i, and now to IBM i) and numerous technological advancements. However, its core principles of integrated hardware and software, object-based architecture, and strong focus on business applications remain.   &lt;/p&gt;

&lt;h2&gt;
  
  
  Key Characteristics of the AS400 / IBM i:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Integrated Hardware and Software:&lt;/strong&gt; The AS400 was designed as a complete, integrated system. The operating system (initially OS/400, now IBM i) was tightly coupled with the hardware, providing a highly optimized environment. This integration simplified system management and contributed to its renowned reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Object-Based Architecture:&lt;/strong&gt; The AS400's operating system is object-based, meaning that everything within the system – programs, data files, even hardware resources – is treated as an object. This approach allows for a high degree of encapsulation and modularity, making development and maintenance easier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robust and Reliable:&lt;/strong&gt; The AS400 has a well-deserved reputation for reliability. Its architecture, combined with features like journaling and commitment control, helps ensure data integrity and system availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business-Oriented:&lt;/strong&gt; From its inception, the AS400 was designed with business applications in mind. It provides a robust platform for running ERP systems, CRM applications, and other critical business software.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modern Capabilities:&lt;/strong&gt; While often perceived as "legacy," the IBM i is a modern platform that supports current technologies like web services, open-source languages (Python, Node.js), and cloud integration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  DB2 and the AS400 / IBM i: A Symbiotic Relationship
&lt;/h2&gt;

&lt;p&gt;DB2 is IBM's family of relational database management systems (RDBMS).  On the AS400/IBM i, DB2 is not just a database; it's the database.  It's deeply integrated into the operating system and is the primary means of storing and managing data.  This tight integration provides several advantages:   &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; The close relationship between DB2 and the operating system allows for significant performance optimizations. Data access is highly efficient, and the system is designed to minimize overhead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Management:&lt;/strong&gt; Because DB2 is integrated, database administration is often simpler than on systems where the database is a separate component. Many database management tasks can be performed using the same tools and interfaces used for other system functions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Integrity:&lt;/strong&gt; The AS400's architecture, combined with DB2's features like journaling and commitment control, helps ensure data integrity. This is crucial for business applications that require accurate and reliable data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless Application Development:&lt;/strong&gt; The tight integration of DB2 makes it easy for developers to work with data within their applications. The system provides tools and APIs that streamline database access.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Evolution of DB2 on IBM i
&lt;/h2&gt;

&lt;p&gt;Over the years, DB2 on IBM i has evolved alongside the operating system.  It has kept pace with industry standards and introduced new features to meet the changing needs of businesses.  This includes support for SQL, advanced query processing, and integration with other database technologies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is this Integration Important?
&lt;/h2&gt;

&lt;p&gt;The tight integration of DB2 with the AS400/IBM i is a key factor in the system's success.  It provides a stable, reliable, and high-performance platform for running business applications.  This integration simplifies system management, improves data integrity, and makes application development easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AS400 / IBM i Today:
&lt;/h2&gt;

&lt;p&gt;While often overlooked, the IBM i remains a relevant and powerful platform for businesses.  Its focus on reliability, security, and integration, combined with its modern capabilities, makes it a strong choice for organizations that need a robust and dependable system. The tight integration with DB2 continues to be a crucial advantage, providing a highly optimized environment for data-intensive applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Conclusion:
&lt;/h2&gt;

&lt;p&gt;The AS400/IBM i and DB2 are inextricably linked.  Their close relationship is a defining characteristic of the platform and a major contributor to its long-standing success.  Understanding this connection is essential for anyone working with or considering this powerful system.  While the name has changed and the technology has evolved, the core principles of integration, reliability, and business focus remain, making the IBM i a compelling option for many organizations.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Is AS400 an Operating System?
&lt;/h2&gt;

&lt;p&gt;The AS400 is not just an operating system; it's more accurate to describe it as a &lt;strong&gt;complete, integrated system&lt;/strong&gt;. This system includes both hardware and software components that are designed to work together seamlessly.&lt;/p&gt;

&lt;p&gt;Here's a breakdown to clarify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AS400 as a Hardware Platform:&lt;/strong&gt; Initially, AS400 referred to a specific line of mid-range computer hardware introduced by IBM in 1988. This hardware was designed with a focus on reliability and business applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AS400 as a Software Platform:&lt;/strong&gt; Alongside the hardware, the AS400 also included its own operating system, initially called OS/400. This operating system was tightly integrated with the hardware and provided a unique object-based architecture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evolution and Renaming:&lt;/strong&gt; Over the years, both the hardware and the operating system evolved significantly. The hardware line went through several name changes (iSeries, System i, and now IBM Power Systems). The operating system also underwent name changes (i5/OS, IBM i) to reflect its modernization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; While the term "AS400" is often used to refer to the system as a whole, it's important to remember that it encompasses both the hardware and the operating system working together.&lt;/p&gt;

&lt;p&gt;Think of it like a car:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The car itself (hardware) is like the AS400 system.&lt;/li&gt;
&lt;li&gt;The engine (operating system) is a crucial part of the car, but it doesn't define the entire car.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the same way, the operating system (IBM i) is a vital component of the AS400 system, but it's not the whole story. The AS400 is a combination of hardware and software designed to provide a robust and integrated platform for business computing.&lt;/p&gt;

</description>
      <category>as400</category>
      <category>db2</category>
      <category>softwaredevelopment</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Monorepos</title>
      <dc:creator>Oussama Belhadi</dc:creator>
      <pubDate>Thu, 20 Feb 2025 13:07:21 +0000</pubDate>
      <link>https://dev.to/zorous/monorepos-3f0o</link>
      <guid>https://dev.to/zorous/monorepos-3f0o</guid>
      <description>&lt;h1&gt;
  
  
  Mastering Monorepos: How to Set Up Next.js &amp;amp; NestJS with Turborepo
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjuuv25g10kjljgxijtmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjuuv25g10kjljgxijtmj.png" alt="Image description" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Monorepos are becoming a popular approach for managing multiple related projects in a single repository. If you're building a &lt;strong&gt;full-stack application&lt;/strong&gt; with &lt;strong&gt;Next.js for the frontend&lt;/strong&gt; and &lt;strong&gt;NestJS for the backend&lt;/strong&gt;, using a &lt;strong&gt;monorepo setup&lt;/strong&gt; can significantly improve code sharing, consistency, and development speed.&lt;/p&gt;

&lt;p&gt;In this article, we'll walk through setting up a &lt;strong&gt;monorepo&lt;/strong&gt; using &lt;strong&gt;Turborepo&lt;/strong&gt;, which optimizes builds and runs tasks efficiently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Use a Monorepo?
&lt;/h2&gt;

&lt;p&gt;A monorepo helps manage multiple applications or services in a single repository, making it ideal for projects with &lt;strong&gt;shared code&lt;/strong&gt; and &lt;strong&gt;multiple services&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyss6njog05lpydxlc0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyss6njog05lpydxlc0q.png" alt="Image description" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;✅ Benefits of a Monorepo&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code Sharing:&lt;/strong&gt; Reuse components, utility functions, and TypeScript types across frontend and backend.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single Dependency Management:&lt;/strong&gt; All dependencies are managed in one place, reducing version conflicts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Development:&lt;/strong&gt; Turborepo caches previous builds, making incremental changes faster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified CI/CD:&lt;/strong&gt; One pipeline for the entire project, making deployment easier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better Collaboration:&lt;/strong&gt; Developers can work across services without managing multiple repositories.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Setting Up a Monorepo with Next.js &amp;amp; NestJS
&lt;/h2&gt;

&lt;p&gt;Let's create a &lt;strong&gt;monorepo&lt;/strong&gt; structure that includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Next.js&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; NestJS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared Code:&lt;/strong&gt; Common UI components, utilities, and TypeScript types&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1️⃣ Initialize the Monorepo&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Turborepo helps manage monorepos efficiently by caching and running only necessary tasks.&lt;/p&gt;

&lt;p&gt;Run the following command to create a new monorepo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-turbo@latest personal-finance-app
&lt;span class="nb"&gt;cd &lt;/span&gt;personal-finance-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets up a &lt;strong&gt;Turborepo workspace&lt;/strong&gt; with &lt;code&gt;apps&lt;/code&gt; and &lt;code&gt;packages&lt;/code&gt; folders.&lt;/p&gt;
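&lt;p&gt;Caching and task orchestration are driven by &lt;code&gt;turbo.json&lt;/code&gt; at the repo root. A minimal sketch (the task names and output globs are illustrative; in Turborepo v1 the top-level key is &lt;code&gt;pipeline&lt;/code&gt;, renamed to &lt;code&gt;tasks&lt;/code&gt; in v2):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    },
    "lint": {}
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;"dependsOn": ["^build"]&lt;/code&gt; means each app builds only after the packages it depends on have built, and &lt;code&gt;outputs&lt;/code&gt; tells Turborepo which folders to cache for reuse on unchanged inputs.&lt;/p&gt;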

&lt;h3&gt;
  
  
  &lt;strong&gt;2️⃣ Create Folder Structure&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Organize your project like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/personal-finance-app
 ├── /apps
 │   ├── /frontend  (Next.js)
 │   ├── /backend   (NestJS)
 ├── /packages
 │   ├── /ui        (Shared UI components)
 │   ├── /utils     (Shared utility functions)
 │   ├── /types     (Shared TypeScript types)
 ├── package.json
 ├── turbo.json
 ├── tsconfig.json
 ├── .gitignore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
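&lt;p&gt;As an example of what lives in &lt;code&gt;/packages/utils&lt;/code&gt;, here is a small helper both the Next.js and NestJS apps could import. The file path and function name are hypothetical, purely to illustrate the sharing pattern:&lt;/p&gt;

```typescript
// packages/utils/src/currency.ts (hypothetical path, for illustration)
// A shared formatting helper usable from both the frontend and the backend.
export function formatCurrency(amount: number, currency: string = "USD"): string {
  // Intl is available in both Node.js and the browser, so this code
  // runs unchanged in the Next.js frontend and the NestJS backend.
  return new Intl.NumberFormat("en-US", { style: "currency", currency }).format(amount);
}
```

&lt;p&gt;With workspaces configured, either app can then import it under whatever package name you give &lt;code&gt;/packages/utils&lt;/code&gt;, instead of duplicating the logic in both codebases.&lt;/p&gt;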



&lt;h3&gt;
  
  
  &lt;strong&gt;3️⃣ Install Next.js for the Frontend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Navigate to the &lt;code&gt;apps/frontend&lt;/code&gt; folder and create a Next.js app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;apps/frontend
npx create-next-app &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that Yarn only reads the &lt;code&gt;workspaces&lt;/code&gt; field from the &lt;em&gt;root&lt;/em&gt; &lt;code&gt;package.json&lt;/code&gt; (configured in step 5), so don't declare it inside the app. Instead, reference the shared packages the app needs as regular dependencies, using each package's &lt;code&gt;name&lt;/code&gt; field (assumed here to match its folder name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;"dependencies": {
  "ui": "*",
  "utils": "*",
  "types": "*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;4️⃣ Install NestJS for the Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Navigate to the &lt;code&gt;apps/backend&lt;/code&gt; folder and set up NestJS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ../backend
npx nest new &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, Yarn ignores a &lt;code&gt;workspaces&lt;/code&gt; field outside the root &lt;code&gt;package.json&lt;/code&gt;, so here too just list the shared packages as dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;"dependencies": {
  "utils": "*",
  "types": "*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;5️⃣ Configure the Root &lt;code&gt;package.json&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Modify the root &lt;code&gt;package.json&lt;/code&gt; to enable workspaces and Turborepo commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"personal-finance-app"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"private"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"workspaces"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"apps/*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"packages/*"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dev"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"turbo run dev"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"build"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"turbo run build"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"devDependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"turbo"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"latest"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
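
&lt;p&gt;For &lt;code&gt;turbo run dev&lt;/code&gt; and &lt;code&gt;turbo run build&lt;/code&gt; to do anything, &lt;code&gt;turbo.json&lt;/code&gt; must define those tasks. &lt;code&gt;create-turbo&lt;/code&gt; generates one for you; a minimal sketch looks like this (the top-level key is &lt;code&gt;pipeline&lt;/code&gt; in Turborepo 1.x and was renamed &lt;code&gt;tasks&lt;/code&gt; in 2.x):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;outputs&lt;/code&gt; tells Turborepo which folders to cache and restore on a cache hit, while &lt;code&gt;dev&lt;/code&gt; is uncached because dev servers are long-running processes.&lt;/p&gt;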



&lt;h3&gt;
  
  
  &lt;strong&gt;6️⃣ Run Everything!&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Install dependencies for all apps and packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, start both frontend and backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, Turborepo runs &lt;strong&gt;Next.js and NestJS in parallel&lt;/strong&gt;! 🎉&lt;/p&gt;
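
&lt;p&gt;To start a single app instead, scope the run with &lt;code&gt;--filter&lt;/code&gt; (this assumes the &lt;code&gt;name&lt;/code&gt; fields in the app &lt;code&gt;package.json&lt;/code&gt; files are &lt;code&gt;frontend&lt;/code&gt; and &lt;code&gt;backend&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx turbo run dev --filter=frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;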




&lt;h2&gt;
  
  
  How Monorepos Work Under the Hood
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Workspaces for Shared Code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You can create reusable packages under &lt;code&gt;packages/&lt;/code&gt;, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;packages/ui&lt;/code&gt;&lt;/strong&gt; → Shared React components for frontend.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;packages/utils&lt;/code&gt;&lt;/strong&gt; → Utility functions used in both frontend &amp;amp; backend.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;packages/types&lt;/code&gt;&lt;/strong&gt; → Shared TypeScript interfaces.&lt;/li&gt;
&lt;/ul&gt;
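
&lt;p&gt;For example, &lt;code&gt;packages/types&lt;/code&gt; and &lt;code&gt;packages/utils&lt;/code&gt; might look like this — a hypothetical sketch for the personal-finance app; the &lt;code&gt;Transaction&lt;/code&gt; interface and &lt;code&gt;formatCents&lt;/code&gt; helper are illustrative, not part of the generated template:&lt;/p&gt;

```typescript
// Hypothetical shared code — two package entry points shown together.

// packages/types/src/index.ts: shapes shared by frontend & backend
export interface Transaction {
  id: string;
  amountCents: number; // store money as integer cents to avoid float drift
  category: string;
  date: string; // ISO-8601
}

// packages/utils/src/index.ts: formatting shared by frontend & backend
export function formatCents(cents: number): string {
  const sign = cents < 0 ? "-" : "";
  const abs = Math.abs(cents);
  return `${sign}$${Math.floor(abs / 100)}.${String(abs % 100).padStart(2, "0")}`;
}
```

&lt;p&gt;Because both apps import the same &lt;code&gt;Transaction&lt;/code&gt; type and the same formatter, the frontend and backend can never silently disagree about the shape of the data or how it is displayed.&lt;/p&gt;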

&lt;p&gt;To use these packages, add them to an app by their package &lt;code&gt;name&lt;/code&gt; — when the version range matches, Yarn resolves it to the local workspace and symlinks it rather than hitting the registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn workspace frontend add types@"*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Turborepo’s Smart Caching&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Turborepo &lt;strong&gt;caches task outputs&lt;/strong&gt;, keyed by a hash of each package's source files and dependencies, so a change that only touches the frontend won't trigger a rebuild of the backend.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Deploying a Monorepo
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Frontend (Next.js) Deployment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Deploy easily to &lt;strong&gt;Vercel&lt;/strong&gt; — for a monorepo, set the project's root directory to &lt;code&gt;apps/frontend&lt;/code&gt; in the Vercel dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn build
vercel deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Backend (NestJS) Deployment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For a backend, &lt;strong&gt;Railway or Render&lt;/strong&gt; works well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn build
railway up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
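
&lt;p&gt;Both platforms can also build from a Dockerfile. A minimal sketch for the NestJS app — it assumes the structure above, a workspace named &lt;code&gt;backend&lt;/code&gt;, and NestJS's default &lt;code&gt;dist/main.js&lt;/code&gt; build output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dockerfile"&gt;&lt;code&gt;# Build from the repo root so shared packages are available
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN yarn install --frozen-lockfile &amp;amp;&amp;amp; yarn workspace backend build
CMD ["node", "apps/backend/dist/main.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;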






&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;monorepo&lt;/strong&gt; with &lt;strong&gt;Next.js and NestJS&lt;/strong&gt; using &lt;strong&gt;Turborepo&lt;/strong&gt; makes full-stack development faster, more efficient, and easier to manage. 🚀&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Key Takeaways:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ &lt;strong&gt;Shared code&lt;/strong&gt; reduces duplication.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Faster builds&lt;/strong&gt; with caching.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;One command&lt;/strong&gt; to run both frontend &amp;amp; backend.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Simplified deployments.&lt;/strong&gt;  &lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>monorepos</category>
      <category>webdev</category>
      <category>nestjs</category>
    </item>
  </channel>
</rss>
