<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: STEVE</title>
    <description>The latest articles on DEV Community by STEVE (@realsteveig).</description>
    <link>https://dev.to/realsteveig</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F813020%2Fd8826596-3052-42b8-9bf9-fe95895930f1.jpg</url>
      <title>DEV Community: STEVE</title>
      <link>https://dev.to/realsteveig</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/realsteveig"/>
    <language>en</language>
    <item>
      <title>System Design Series - Scalability</title>
      <dc:creator>STEVE</dc:creator>
      <pubDate>Sat, 29 Jun 2024 22:07:52 +0000</pubDate>
      <link>https://dev.to/realsteveig/system-design-series-scalability-1ln8</link>
      <guid>https://dev.to/realsteveig/system-design-series-scalability-1ln8</guid>
      <description>&lt;h1&gt;
  
  
  System Design Series - Scalability
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this section, we are going to discuss scalability, a critical aspect of system design that ensures your application can handle increased load gracefully. Understanding scalability is essential for building robust, high-performance systems that can grow with user demand and business needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Scalability?
&lt;/h2&gt;

&lt;p&gt;Scalability is the ability of a system to handle increased workload by adding resources. It ensures that as demand grows, the system can continue to function efficiently. Scalability can be thought of in three dimensions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Vertical Scalability (Scaling Up)&lt;/strong&gt;: Adding more power (CPU, RAM, etc.) to an existing machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Horizontal Scalability (Scaling Out)&lt;/strong&gt;: Adding more machines to a system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Diagonal Scalability&lt;/strong&gt;: Combining both vertical and horizontal scaling.&lt;/li&gt;
&lt;/ol&gt;
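
&lt;p&gt;To make the distinction concrete, here is a rough capacity model in JavaScript. This is an illustrative sketch only; the function names and numbers are made up, not any platform's API:&lt;/p&gt;

```javascript
// Rough capacity model: vertical scaling raises the power of one machine,
// horizontal scaling multiplies machines of fixed power.
function verticalCapacity(basePower, upgradeFactor) {
  // One machine, made more powerful.
  return basePower * upgradeFactor;
}

function horizontalCapacity(basePower, machineCount) {
  // Many machines of the original power behind a load balancer.
  return basePower * machineCount;
}

// Diagonal scaling: upgrade first, then add machines of the upgraded size.
function diagonalCapacity(basePower, upgradeFactor, machineCount) {
  return basePower * upgradeFactor * machineCount;
}
```

&lt;p&gt;The vertical path hits a ceiling once no bigger machine exists, while the horizontal and diagonal paths keep growing with &lt;code&gt;machineCount&lt;/code&gt;, which is why the sections below treat them differently.&lt;/p&gt;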

&lt;h2&gt;
  
  
  Types of Scaling
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Vertical Scaling (Scaling Up)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Increasing the capacity of a single machine by adding more resources (CPU, RAM, storage).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pros&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Easier to implement since it involves upgrading existing machines.&lt;/li&gt;
&lt;li&gt;No need to modify the application architecture.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cons&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Limited by the maximum capacity of a single machine.&lt;/li&gt;
&lt;li&gt;Single point of failure: if the machine goes down, the application becomes unavailable.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Use Cases&lt;/strong&gt;: Initial stages of a project, applications with low to moderate growth.&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Horizontal Scaling (Scaling Out)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Adding more machines to handle increased load.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pros&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Virtually limitless scalability by adding more machines.&lt;/li&gt;
&lt;li&gt;Increases redundancy, reducing the risk of a single point of failure.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cons&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;More complex to implement due to the need for distributed systems design.&lt;/li&gt;
&lt;li&gt;Requires load balancing and data distribution strategies.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Use Cases&lt;/strong&gt;: High-growth applications, distributed systems, applications requiring high availability.&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Diagonal Scaling
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt;: A combination of vertical and horizontal scaling. Start with vertical scaling and switch to horizontal scaling when the vertical limit is reached.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pros&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Flexibility to adapt to different stages of growth.&lt;/li&gt;
&lt;li&gt;Optimizes resource utilization.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cons&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Requires careful planning and monitoring to switch between scaling strategies effectively.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Use Cases&lt;/strong&gt;: Applications with varying load patterns, systems with mixed workloads.&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Auto Scaling
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Automatically adjusting the number of running instances based on current load.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pros&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Dynamic scaling based on real-time demand.&lt;/li&gt;
&lt;li&gt;Cost-efficient as resources are used only when needed.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Cons&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Requires accurate load prediction and monitoring.&lt;/li&gt;
&lt;li&gt;Potential for delays in scaling actions, leading to temporary performance issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Use Cases&lt;/strong&gt;: Cloud-based applications, unpredictable traffic patterns, cost-sensitive applications.&lt;/li&gt;

&lt;/ul&gt;
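
&lt;p&gt;The core of an auto-scaling policy can be sketched in a few lines. This is a hypothetical decision function, not any cloud provider's API; real services layer cooldowns and metrics pipelines on top of this idea:&lt;/p&gt;

```javascript
// Sketch of an auto-scaling decision: pick an instance count from current load.
// Thresholds and names here are illustrative assumptions.
function desiredInstances(currentLoad, capacityPerInstance, minInstances, maxInstances) {
  // How many instances would be needed to serve the load, rounded up.
  const needed = Math.ceil(currentLoad / capacityPerInstance);
  // Clamp to the configured floor and ceiling.
  return Math.max(minInstances, Math.min(maxInstances, needed));
}
```

&lt;p&gt;The floor keeps a baseline of instances running for sudden spikes, and the ceiling caps cost, which is exactly the trade-off the pros and cons above describe.&lt;/p&gt;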

&lt;h2&gt;
  
  
  Key Considerations for Scalability
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Load Balancing
&lt;/h3&gt;

&lt;p&gt;Load balancing distributes incoming network traffic across multiple servers, ensuring no single server becomes a bottleneck. Common load-balancing options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hardware Load Balancers&lt;/strong&gt;: Physical devices designed to distribute traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Software Load Balancers&lt;/strong&gt;: Tools like Nginx, HAProxy, and cloud-based solutions like AWS Elastic Load Balancing.&lt;/li&gt;
&lt;/ul&gt;
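
&lt;p&gt;The simplest strategy these tools implement is round-robin, which can be sketched in JavaScript. This is illustrative only; production balancers like Nginx and HAProxy add health checks, weighting, and failover on top:&lt;/p&gt;

```javascript
// Minimal round-robin load balancer sketch: hand out servers in rotation.
function createRoundRobinBalancer(servers) {
  let next = 0;
  return function pick() {
    const server = servers[next];
    // Advance the cursor, wrapping back to the first server at the end.
    next = (next + 1) % servers.length;
    return server;
  };
}
```

&lt;p&gt;Calling the returned function repeatedly cycles through the servers in order, so no single server receives all the traffic.&lt;/p&gt;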

&lt;h3&gt;
  
  
  Caching
&lt;/h3&gt;

&lt;p&gt;Caching reduces the load on your servers by storing frequently accessed data in memory. Types of caching include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client-Side Caching&lt;/strong&gt;: Caching data on the user's device.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server-Side Caching&lt;/strong&gt;: Caching data on the server side using tools like Redis or Memcached.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Delivery Networks (CDNs)&lt;/strong&gt;: Caching static assets (images, videos, etc.) on servers closer to the user.&lt;/li&gt;
&lt;/ul&gt;
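
&lt;p&gt;The idea behind server-side caching can be shown with a tiny in-memory cache with a time-to-live. This is a sketch of the concept, not a substitute for Redis or Memcached; the injectable &lt;code&gt;now&lt;/code&gt; clock is an assumption added here to keep the example testable:&lt;/p&gt;

```javascript
// Tiny TTL cache sketch: entries expire ttlMs after being stored.
function createTtlCache(ttlMs, now = Date.now) {
  const store = new Map();
  return {
    set(key, value) {
      store.set(key, { value, expiresAt: now() + ttlMs });
    },
    get(key) {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (now() > entry.expiresAt) {
        store.delete(key);
        return undefined; // expired: caller should refetch from the origin
      }
      return entry.value;
    },
  };
}
```

&lt;p&gt;A cache hit skips the database entirely; a miss or an expired entry falls through to the origin, which is the load reduction the section above describes.&lt;/p&gt;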

&lt;h3&gt;
  
  
  Database Scaling
&lt;/h3&gt;

&lt;p&gt;Databases can be a significant bottleneck in scalable systems. Techniques for scaling databases include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read Replicas&lt;/strong&gt;: Distributing read requests to multiple read-only copies of the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sharding&lt;/strong&gt;: Partitioning the database into smaller, more manageable pieces called shards.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NoSQL Databases&lt;/strong&gt;: Databases like MongoDB, Cassandra, and DynamoDB are designed for horizontal scaling.&lt;/li&gt;
&lt;/ul&gt;
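
&lt;p&gt;Sharding hinges on a routing function that maps each key to a shard deterministically. Here is a minimal modulo-hash sketch; real systems often use consistent hashing instead, so that changing the shard count does not reshuffle most keys:&lt;/p&gt;

```javascript
// Hash-based shard routing sketch: map a key to one of shardCount shards.
// The hash below is a simple illustrative polynomial hash, not a standard one.
function shardFor(key, shardCount) {
  let hash = 0;
  for (const ch of String(key)) {
    hash = (hash * 31 + ch.charCodeAt(0)) % 1000000007;
  }
  return hash % shardCount;
}
```

&lt;p&gt;Because the same key always lands on the same shard, reads and writes for one user stay on one partition, which is what keeps each shard "smaller and more manageable".&lt;/p&gt;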

&lt;h3&gt;
  
  
  Microservices Architecture
&lt;/h3&gt;

&lt;p&gt;Microservices architecture breaks down a monolithic application into smaller, independent services that can be developed, deployed, and scaled independently. This approach enhances scalability by allowing individual services to be scaled based on their specific demands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Auto Scaling
&lt;/h3&gt;

&lt;p&gt;Auto scaling automatically adjusts the number of running instances based on current load. Cloud providers like AWS, Google Cloud, and Azure offer auto scaling features, ensuring your application scales dynamically in response to traffic changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-World Examples
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vertical Scaling Example&lt;/strong&gt;: A startup begins with a single server for their web application. As they gain more users, they upgrade the server’s RAM and CPU to handle the increased load. This works well initially but eventually reaches a hardware limit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Horizontal Scaling Example&lt;/strong&gt;: A popular e-commerce site handles millions of users during peak seasons. They use multiple servers behind a load balancer to distribute incoming traffic. If one server fails, the load balancer redirects traffic to the remaining servers, ensuring uninterrupted service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Diagonal Scaling Example&lt;/strong&gt;: A SaaS company starts with vertical scaling by upgrading their servers as their user base grows. When they reach the limit of vertical scaling, they transition to horizontal scaling by adding more servers and implementing load balancing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto Scaling Example&lt;/strong&gt;: A news website experiences fluctuating traffic with sudden spikes during breaking news. Using auto scaling, the website dynamically adjusts the number of servers to handle the traffic spikes, ensuring consistent performance and cost efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Scalability is a fundamental aspect of system design that ensures your application can handle growth efficiently. By understanding and implementing various scaling techniques, from vertical, horizontal, and diagonal scaling to auto scaling, load balancing, caching, and database strategies, you can build systems that perform well under increased load.&lt;/p&gt;

&lt;p&gt;Remember, the right scaling strategy depends on your specific use case, and often, a combination of methods will yield the best results. In the next section of our System Design Series, we will delve deeper into load balancing techniques and their importance in building scalable systems.&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>softwareengineering</category>
      <category>webdev</category>
      <category>node</category>
    </item>
    <item>
      <title>Node.js and GraphQL Tutorial: How to build a GraphQL API with an Apollo server</title>
      <dc:creator>STEVE</dc:creator>
      <pubDate>Sat, 26 Aug 2023 16:04:42 +0000</pubDate>
      <link>https://dev.to/realsteveig/nodejs-and-graphql-tutorial-how-to-build-a-graphql-api-with-an-apollo-server-2733</link>
      <guid>https://dev.to/realsteveig/nodejs-and-graphql-tutorial-how-to-build-a-graphql-api-with-an-apollo-server-2733</guid>
      <description>&lt;p&gt;&lt;strong&gt;INTRODUCTION&lt;/strong&gt;&lt;br&gt;
Welcome to my tutorial on building a GraphQL API with an Apollo Server using Node.js! In this guide, I will show you how to harness the power of GraphQL to create efficient and flexible APIs for your applications.&lt;/p&gt;

&lt;p&gt;So, what exactly is GraphQL? Imagine a world where you can request exactly the data you need from your server, no more and no less. GraphQL is a query language that allows you to do just that. Unlike traditional REST APIs where you often receive a fixed set of data in predefined endpoints, GraphQL empowers you to shape your queries to match your specific requirements. It's like having a tailor-made API at your fingertips.&lt;/p&gt;

&lt;p&gt;Comparing GraphQL to REST is like comparing a custom-made suit to off-the-rack clothing. With REST, you might end up over-fetching or under-fetching data, causing inefficiencies. But with GraphQL, you have the freedom to ask for only the fields you need, eliminating unnecessary data transfer and optimizing your app's performance.&lt;/p&gt;

&lt;p&gt;But that's not all! GraphQL also excels in its ability to consolidate multiple data sources into a single query. No more juggling between different endpoints to assemble the data you need. GraphQL brings it all together in one elegant request.&lt;/p&gt;

&lt;p&gt;Whether you're building a simple to-do app or a complex e-commerce platform, GraphQL's flexibility and efficiency can revolutionize the way you interact with APIs. Throughout this tutorial, I'll guide you step by step in creating a GraphQL API using an Apollo Server with Node.js. You'll learn how to define your data schema and fetch data from MongoDB using resolvers for a seamless experience.&lt;/p&gt;

&lt;p&gt;So, if you're ready to dive into the world of GraphQL and unlock its potential, let's get started!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FOLDER STRUCTURE&lt;/strong&gt;&lt;br&gt;
When diving into building a GraphQL API with an Apollo Server, having a clear and organized folder structure is key. Here's a suggested layout for your project's folder structure, drawing parallels to how things might be organized in a traditional RESTful API project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project-root/
|-- src/
|   |-- schema/           # GraphQL schema definitions
|   |-- resolvers/        # Resolver functions for handling queries and mutations
|   |-- models/           # Data models or database schemas
|   |-- app.js            # GraphQL server setup
|-- package.json          # Project dependencies and scripts
|-- .gitignore            # Git ignore configurations
|-- .env                  # Environment variables (optional)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what each folder represents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;schema: Think of the "schema" folder as similar to the routes or endpoints in a RESTful API. Here, you define your GraphQL schema using a Schema Definition Language (SDL). This schema outlines the types, queries, mutations, and even subscriptions that your API will support. It's the heart of your API's structure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;resolvers: Just as controllers handle the logic for different routes in a RESTful API, resolvers in the "resolvers" folder handle the logic for various fields in your GraphQL schema. Each resolver function corresponds to a specific field and contains the actual code that fetches data, interacts with databases, and performs the required operations. Resolvers are where the "magic" happens, similar to how controllers execute the actions in a RESTful API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;models: The "models" folder houses your data models, which are equivalent to the database schema in a RESTful API context. These models define the structure and relationships of your data. In resolvers, you utilize these models to interact with your data sources, just as you would in a RESTful API's database layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;app.js: The "app.js" file takes on the role of setting up and configuring your GraphQL server. This is equivalent to the server configuration and middleware setup you might do in the main file of a RESTful API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;package.json: This file remains the same regardless of whether you're working with REST or GraphQL. It lists your project's dependencies and scripts for managing your API, much like in a RESTful API project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;.gitignore: Similar to a RESTful API project, the ".gitignore" file helps you specify files and directories that should be ignored by version control (Git).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;.env: Optionally, you can use the ".env" file to store environment variables for your application, just like in a RESTful API.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember, while this suggested folder structure draws parallels to RESTful design, it's adaptable to your project's specific needs and your development preferences. It provides a roadmap for organizing your GraphQL API project effectively, leveraging concepts you might already be familiar with from working with RESTful APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PACKAGES&lt;/strong&gt;&lt;br&gt;
For this project, you will need to install the following packages:&lt;br&gt;
&lt;code&gt;mongoose&lt;/code&gt;, &lt;code&gt;bcryptjs&lt;/code&gt;, &lt;code&gt;dotenv&lt;/code&gt;, &lt;code&gt;@apollo/server&lt;/code&gt;, and &lt;code&gt;@graphql-tools/merge&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let's briefly talk about these packages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;mongoose&lt;/code&gt;&lt;/strong&gt;: ODM library for MongoDB and Node.js, aiding data modeling and interaction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;bcryptjs&lt;/code&gt;&lt;/strong&gt;: Library for secure password hashing and comparison.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;dotenv&lt;/code&gt;&lt;/strong&gt;: Loads environment variables from &lt;code&gt;.env&lt;/code&gt; files, ensuring secure configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;@apollo/server&lt;/code&gt;&lt;/strong&gt;: GraphQL server implementation for streamlined schema execution and validation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;@graphql-tools/merge&lt;/code&gt;&lt;/strong&gt;: Utility for combining multiple GraphQL schemas into one cohesive schema.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because I am using TypeScript, my folder structure and &lt;code&gt;package.json&lt;/code&gt; file now look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s7qtyfo66xeh0g5ke5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s7qtyfo66xeh0g5ke5i.png" alt="folder structure" width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now update the &lt;code&gt;./src/db/connect.ts&lt;/code&gt; file with the code below to establish a connection to your Mongo database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import mongoose from "mongoose";

export const connectDB = (url : string) =&amp;gt; {
    return mongoose.connect(url)
    .then(() =&amp;gt; console.log("Connected to database"))
    .catch((err) =&amp;gt; console.log(err));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this project, I created two models: &lt;code&gt;User&lt;/code&gt; and &lt;code&gt;Product&lt;/code&gt;. This is what they look like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./src/model/user.ts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {Schema, Model, model, Document} from 'mongoose';
import bcrypt from 'bcryptjs';

export interface IUser extends Document {
    username: string;
    email: string;
    password: string;
    isValidPassword: (password: string) =&amp;gt; Promise&amp;lt;boolean&amp;gt;;
}

const UserSchema: Schema = new Schema({
    username: {type: String, required: true, unique: true},
    email: {type: String, required: true},
    password: { type: String, required: true}
})

// Hash the password only when it is new or has changed, so that
// saving an unrelated field does not re-hash an already hashed value.
UserSchema.pre('save', async function() {
    if (!this.isModified('password')) return;
    const salt = await bcrypt.genSalt(10);
    this.password = await bcrypt.hash(this.password, salt);
})

UserSchema.methods.isValidPassword = async function(password: string) {
    const compare = await bcrypt.compare(password, this.password);
    return compare;
}

export const User: Model&amp;lt;IUser&amp;gt; = model&amp;lt;IUser&amp;gt;('User', UserSchema);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;./src/model/product.ts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {Schema, Model, model, Document} from 'mongoose'

export interface IProduct extends Document {
    name: string;
    price: number;
}

const ProductSchema: Schema = new Schema({
    name: {type: String, required: true},
    price: {type: Number, required: true}
})

export const Product: Model&amp;lt;IProduct&amp;gt; = model&amp;lt;IProduct&amp;gt;('Product', ProductSchema);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have successfully created the database schemas for our API, let us move straight on to creating the schemas for our GraphQL queries and mutations.&lt;/p&gt;

&lt;p&gt;For the users schema, update &lt;code&gt;./src/schema/user.ts&lt;/code&gt; to look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { buildSchema } from "graphql";

export const usersGQLSchema = buildSchema(`
    type User {
        id: String!
        username: String!
        email: String!
        password: String!
    }

    type Query {
        users: usersInfoResponse!
        user(id: String!): User!
    }

    type usersInfoResponse {
        success: Boolean!
        total: Int!
        users: [User!]!
    }

    type Mutation {
        regUser(username: String!, email: String!, password: String!): User!
        loginUser(email: String!, password: String!): User!
        updateUser(id: String!, username: String, email: String, password: String): User!
        deleteUser(id: String!): deleteResponse!
    }

    type deleteResponse {
        success: Boolean!
        message: String!
        id: String!
    }

`)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me break down everything this code presents:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Types - Like Data Structures&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Type User&lt;/code&gt;: Just as in RESTful APIs, a "type" in GraphQL defines what data looks like. For instance, "User" is like a blueprint for a user's data, including properties such as ID, username, email, and password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Queries - Retrieving Data&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Query users&lt;/code&gt;: Think of this as a way to request a list of users. Similar to a RESTful API endpoint, you're asking for user information. The exclamation point (!) after &lt;code&gt;usersInfoResponse&lt;/code&gt; marks the field as non-null: the query is guaranteed to return a response, never &lt;code&gt;null&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Query user(id)&lt;/code&gt;: This is like getting details about one user, just as you would in a RESTful API by providing an ID. The id parameter is marked with an exclamation point (!) to show that it's required for the query to work. The exclamation point after User indicates that this query always returns user information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Mutations - Modifying Data&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Mutation regUser/loginUser&lt;/code&gt;: Similar to creating a resource in REST, these mutations let you sign up or sign in a user. The exclamation points after username, email, and password indicate that those fields are required. The exclamation point after User indicates that each mutation always returns the user's information, never null.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Mutation updateUser(id)&lt;/code&gt;: This is like updating a user's information, comparable to editing a resource in REST. The id is required, and you can modify username, email, or password. If you don't provide an exclamation point, it means the field is optional.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Mutation deleteUser(id)&lt;/code&gt;: Just as you might delete a resource in REST, this mutation removes a user. The id is required, and the exclamation point after deleteResponse indicates that it always returns a response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Custom Response Types - Structured Responses&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Type usersInfoResponse:&lt;/code&gt; This is like the response you might get when requesting a list of users in REST. The exclamation point after success, total, and users means that these fields are always included in the response.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Type deleteResponse&lt;/code&gt;: Comparable to a response when deleting a resource in REST, this type always includes success, message, and id.&lt;/p&gt;

&lt;p&gt;In essence, GraphQL's exclamation points mark non-null fields and required arguments, giving you a guarantee in the schema itself that REST endpoints can only document informally. When you make a query or mutation, you know with certainty which data will come back.&lt;/p&gt;
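
&lt;p&gt;As a rough plain-JavaScript analogy (a hypothetical helper, not part of GraphQL itself), the non-null marker behaves like a validator that rejects missing required arguments before any resolver logic runs:&lt;/p&gt;

```javascript
// Mirrors what GraphQL's "!" enforces on arguments: every field listed
// as required must be present and non-null, or the request is rejected.
function checkRequiredArgs(args, requiredFields) {
  for (const field of requiredFields) {
    if (args[field] === undefined || args[field] === null) {
      throw new Error('Missing required argument: ' + field);
    }
  }
  return true;
}
```

&lt;p&gt;The difference is that GraphQL performs this check for you, from the schema, before your resolver is ever called.&lt;/p&gt;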

&lt;p&gt;After creating our User schema, let us create the resolvers (the GraphQL equivalent of controllers) to implement the logic.&lt;/p&gt;

&lt;p&gt;Update &lt;code&gt;./src/resolver/user.ts&lt;/code&gt; with this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {User} from '../model/user';

interface Args {
    id: string;
    username: string;
    email: string;
    password: string;
}

export const UsersResolver = {
    Query : {
        users: async () =&amp;gt; {
            try {
                const users = await User.find({});
                if (users.length === 0) throw new Error('No users found'); // find() returns an array, which is never falsy
                return {
                    success: true,
                    total: users.length,
                    users
                };
            } catch (error) {
                throw error;
            }
        },    

        user: async (_ : any, args : Args) =&amp;gt; {
            try {
                if (!args.id) throw new Error('No id provided');
                const user = await User.findById(args.id);
                if (!user) throw new Error('No user found');
                return user;
            } catch (error) {
                throw error;
            }
        }
    },

    Mutation : {
        regUser: async (_ : any, args : Args) =&amp;gt; {
            try {
                const user = await User.findOne({email: args.email});
                if (user) throw new Error('User already exists');
                const newUser = await User.create({
                    username: args.username,
                    email: args.email,
                    password: args.password
                })
                return newUser;
            } catch (error) {
                throw error;
            }
        },

        loginUser: async (_ : any, args : Args) =&amp;gt; {
            try {
                const user = await User.findOne({email: args.email});
                if (!user) throw new Error('User not found');
                const isValid = await user.isValidPassword(args.password);
                if (!isValid) throw new Error('Invalid password');
                return user;
            } catch (error) {
                throw error;
            }
        },

        updateUser: async (_ : any, args : Args) =&amp;gt; {
            try {
                const id = args.id;
                if (!id) throw new Error('No id provided');
                const user = await User.findById(args.id);
                if (!user) throw new Error('User not found');
                const updateUser = await User.findByIdAndUpdate(id, {...args}, {new: true, runValidators: true});
                return updateUser;
            } catch (error) {
                throw error;
            }
        },

        deleteUser: async (_ : any, args : Args) =&amp;gt; {
            try {
                const id = args.id;
                if (!id) throw new Error('No id provided');
                const user = await User.findById(args.id);
                if (!user) throw new Error('User not found');
                const deleteUser = await User.findByIdAndDelete(id);
                return {
                    success: true,
                    message: 'User deleted successfully',
                    id: deleteUser?._id
                };
            } catch (error) {
                throw error;
            }
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above:&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;UsersResolver&lt;/code&gt; object is created, containing resolvers for both queries and mutations.&lt;/p&gt;

&lt;p&gt;The Query section contains two resolvers:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;users&lt;/code&gt;: Fetches all users from the database and returns a response containing information about users.&lt;br&gt;
&lt;code&gt;user&lt;/code&gt;: Fetches a user by their provided ID from the database.&lt;br&gt;
The Mutation section contains several resolvers for various operations, each performing a specific action:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;regUser&lt;/code&gt;: Registers a new user if they don't already exist in the database.&lt;br&gt;
&lt;code&gt;loginUser&lt;/code&gt;: Validates user credentials and returns the user if login is successful.&lt;br&gt;
&lt;code&gt;updateUser&lt;/code&gt;: Updates a user's information based on their ID.&lt;br&gt;
&lt;code&gt;deleteUser&lt;/code&gt;: Deletes a user by their ID.&lt;br&gt;
Each resolver uses asynchronous code (async/await) to interact with the database and handle potential errors.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Args&lt;/code&gt; interface defines the arguments the resolver functions may receive: &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;username&lt;/code&gt;, &lt;code&gt;email&lt;/code&gt;, and &lt;code&gt;password&lt;/code&gt;. Each resolver reads only the fields it actually needs.&lt;/p&gt;

&lt;p&gt;This code demonstrates how GraphQL resolvers work to fetch, create, update, and delete data while interacting with a User model from an external module. It's similar to the logic you might use in controllers for RESTful APIs, where each resolver corresponds to a specific API operation.&lt;/p&gt;

&lt;p&gt;Great work, now that we have successfully implemented the logic for &lt;code&gt;Users&lt;/code&gt; let us repeat the same process by creating a schema and a resolver for &lt;code&gt;Products&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./src/schema/products.ts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {buildSchema} from "graphql"

export const productsGQLSchema = buildSchema(`
    type Product {
        id: String!
        name: String!
        price: Int!
    }

    type Query {
        products: productsInfoResponse!
        product(id: String!): Product!
    }

    type productsInfoResponse {
        success: Boolean!
        total: Int!
        products: [Product!]!
    }

    type Mutation {
        addProduct(name: String!, price: Int!): Product!
        updateProduct(id: String!, name: String, price: Int): Product!
        deleteProduct(id: String!): deleteResponse!
    }

    type deleteResponse {
        success: Boolean!
        message: String!
        id: String!
    }
`)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;./src/resolvers/products.ts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Product } from "../model/products";

interface Args {
    id: string;
    name: string;
    price: number;
}

export const ProductsResolver = {
    Query : {
        products: async () =&amp;gt; {
            try {
                const products = await Product.find({});
                if (products.length === 0) throw new Error('No products found'); // find() returns an array, which is never falsy
                return {
                    success: true,
                    total: products.length,
                    products
                };
            } catch (error) {
                throw error;
            }
        },

        product: async (_ : any, args : Args) =&amp;gt; {
            try {
                if (!args.id) throw new Error('No id provided');
                const product = await Product.findById(args.id);
                if (!product) throw new Error('No product found');
                return product;
            } catch (error) {
                throw error;
            }
        }
    },

    Mutation : {
        addProduct: async (_ : any, args : Args) =&amp;gt; {
            try {
                const product = await Product.findOne({name: args.name});
                if (product) throw new Error('Product already exists');
                const newProduct = await Product.create({
                    name: args.name,
                    price: args.price
                })
                return newProduct;
            } catch (error) {
                throw error;
            }
        },

        updateProduct: async (_ : any, args : Args) =&amp;gt; {
            try {
                const id = args.id;
                if (!id) throw new Error('No id provided');
                const product = await Product.findById(args.id);
                if (!product) throw new Error('No product found');
                const updateProduct = await Product.findByIdAndUpdate(id, {...args}, {new: true, runValidators : true});
                return updateProduct;
            } catch (error) {
                throw error;
            }
        },

        deleteProduct: async (_ : any, args : Args) =&amp;gt; {
            try {
                const id = args.id;
                if (!id) throw new Error('No id provided');
                const product = await Product.findById(args.id);
                if (!product) throw new Error('No product found');
                const deleteProduct = await Product.findByIdAndDelete(id);
                return {
                    success: true,
                    message: 'Product deleted successfully',
                    id: deleteProduct?._id
                };
            } catch (error) {
                throw error;
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to the implementation for &lt;code&gt;User&lt;/code&gt; entities, the schema and resolver for &lt;code&gt;Product&lt;/code&gt; entities collaborate seamlessly to offer a comprehensive and structured approach to managing product-related operations. This cohesive interaction ensures that querying, creating, updating, and deleting products within your GraphQL API are handled efficiently and logically.&lt;/p&gt;

&lt;p&gt;The product schema serves as a blueprint, defining the structure of a &lt;code&gt;Product&lt;/code&gt; type. Just as with the &lt;code&gt;User&lt;/code&gt; type, this schema outlines the fields that compose a product, such as ID, name, and price. It specifies not only the fields themselves but also their data types and whether they're required or optional.&lt;/p&gt;

&lt;p&gt;On the other hand, the product resolver takes care of the functional aspects. In a manner akin to the user resolver, it encapsulates the actual logic behind queries and mutations involving products. For instance, when querying for a list of products, the resolver fetches the products from a data source (in our case, a MongoDB database) and constructs a well-formed response with details about each product. Similarly, when creating, updating, or deleting a product, the resolver handles the necessary data manipulation, validation, and interaction with the data source.&lt;/p&gt;

&lt;p&gt;In tandem, the schema and resolver form a cohesive unit that makes it straightforward to understand, implement, and maintain product-related operations in your GraphQL API. This separation of concerns between defining the structure (schema) and implementing the functionality (resolver) contributes to a clean and organized codebase, making your API development experience smoother and more structured.&lt;/p&gt;

&lt;p&gt;Now it's time for us to combine all the schemas and resolvers so that we can import a merged type definition and a combined resolver array into the &lt;code&gt;app.ts&lt;/code&gt; file. &lt;/p&gt;

&lt;p&gt;Update &lt;code&gt;./src/schema/index.ts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {mergeTypeDefs} from "@graphql-tools/merge"

import { usersGQLSchema } from "./user"
import { productsGQLSchema } from "./products"

export const mergedGQLSchema = mergeTypeDefs([usersGQLSchema, productsGQLSchema])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By doing this, you're creating a unified GraphQL schema that incorporates all the types and operations defined in the user and product schemas. This merged schema can then be imported into your &lt;code&gt;app.ts&lt;/code&gt; file to create a cohesive GraphQL API that supports both user and product-related functionalities.&lt;/p&gt;

&lt;p&gt;Update &lt;code&gt;./src/resolvers/index.ts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { UsersResolver } from "./user";
import { ProductsResolver } from "./products";

export const resolvers = [UsersResolver, ProductsResolver]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you set up your GraphQL server, you'll use this combined array of resolvers along with the merged schema to create a fully functional GraphQL API.&lt;/p&gt;

&lt;p&gt;We're nearing completion, and the final step is to construct our &lt;code&gt;app.ts&lt;/code&gt; file, where we'll consolidate the various components we've developed so far. This file will serve as the backbone of our GraphQL application.&lt;/p&gt;

&lt;p&gt;To begin, we load environment variables from a &lt;code&gt;.env&lt;/code&gt; file using the &lt;code&gt;dotenv&lt;/code&gt; library, ensuring the secure configuration of sensitive data like database connection details and port numbers.&lt;/p&gt;

&lt;p&gt;Importing essential dependencies follows suit. These include the function responsible for connecting to the database (&lt;code&gt;connectDB&lt;/code&gt;), as well as the &lt;code&gt;ApolloServer&lt;/code&gt; and &lt;code&gt;startStandaloneServer&lt;/code&gt; modules that facilitate GraphQL server creation.&lt;/p&gt;

&lt;p&gt;Within this context, we define a constant named &lt;code&gt;PORT&lt;/code&gt; to encapsulate the port number for our server. This value is either extracted from environment variables or defaults to &lt;code&gt;3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Our &lt;code&gt;ApolloServer&lt;/code&gt; instance takes center stage. We configure it with the merged GraphQL schema (&lt;code&gt;mergedGQLSchema&lt;/code&gt;) and the combined resolvers (&lt;code&gt;resolvers&lt;/code&gt;). We also enable introspection, which allows tools like the GraphQL Playground to explore the schema.&lt;/p&gt;

&lt;p&gt;To bring it all to life, the asynchronous &lt;code&gt;start&lt;/code&gt; function orchestrates the setup process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It employs the &lt;code&gt;connectDB&lt;/code&gt; function to establish a connection to the MongoDB database, using the URI from environment variables.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;startStandaloneServer&lt;/code&gt; function is then invoked, starting the Apollo server, which listens on the specified port (&lt;code&gt;PORT&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Finally, a console message announces that the server is up and running.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these steps in place, we finish by invoking the &lt;code&gt;start&lt;/code&gt; function, which connects to the database and brings the GraphQL server online.&lt;/p&gt;

&lt;p&gt;This is what our &lt;code&gt;app.ts&lt;/code&gt; file looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require("dotenv").config()

import { connectDB } from "./db/connect";

import { ApolloServer } from '@apollo/server';

import { startStandaloneServer } from '@apollo/server/standalone';

import { mergedGQLSchema } from "./schema";
import { resolvers } from "./resolvers";

const PORT = parseInt(process.env.PORT as string) || 3000

const server = new ApolloServer({
    typeDefs : mergedGQLSchema,
    resolvers : resolvers,
    introspection : true
  });

const start = async () =&amp;gt; {
    try {
        await connectDB(process.env.MONGO_URI as string)
        await startStandaloneServer(server, { listen: { port: PORT } });
        console.log(`Server is listening on port ${PORT}`)
    } catch (error) {
        console.log(error)
    }
}

start()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And with that, we're all set. To start the server, run &lt;code&gt;npm run dev&lt;/code&gt; or &lt;code&gt;npm start&lt;/code&gt;, then open &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; in your browser. If everything is wired up correctly, you'll land in the interactive Apollo GraphQL Playground. &lt;/p&gt;

&lt;p&gt;This is an excellent feature: GraphQL is inherently self-documenting. The GraphQL Playground, an integrated query tool, lets you construct and test queries with ease. Much like Postman, it allows you to formulate queries, explore the schema, and see your API's capabilities firsthand.&lt;/p&gt;

&lt;p&gt;It would look like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes1z7l609ifvz9po7vk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes1z7l609ifvz9po7vk4.png" alt="Apollo graphql playground" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Query to get all users:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5bndop5otyi1vtwjrie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5bndop5otyi1vtwjrie.png" alt="get all users" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mutation to register a user:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxe5d4cnew3hlummmarq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxe5d4cnew3hlummmarq.png" alt="register user" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mutation to update a product&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femr1ce9xkp87hp734a74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femr1ce9xkp87hp734a74.png" alt="Update product" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notice that all endpoints are accessible via &lt;a href="http://localhost:3000/" rel="noopener noreferrer"&gt;http://localhost:3000/&lt;/a&gt;, unlike a RESTful design where you would need to define different endpoints for each route. This is because of GraphQL's single endpoint architecture and its ability to handle complex data retrieval in a more dynamic manner.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a traditional RESTful API design, each endpoint typically corresponds to a specific resource or route. If you wanted to retrieve different types of data, you would need to create distinct endpoints for each resource. For example, to fetch user information, you might have an endpoint like &lt;code&gt;GET /users&lt;/code&gt;, and for products, another endpoint like &lt;code&gt;GET /products&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;However, GraphQL takes a different approach. With GraphQL, there's a single endpoint that serves as the entry point for all data operations. This endpoint is usually accessed via an HTTP POST request. Instead of defining multiple endpoints for different resources, GraphQL employs a flexible querying system that allows you to request exactly the data you need, and nothing more. &lt;/p&gt;

&lt;p&gt;This is where the power of the GraphQL query language shines. The client can specify the shape and structure of the data it requires by creating queries that match the types and fields defined in the GraphQL schema. It's like asking for a custom-made data response tailored to your application's needs.&lt;/p&gt;

&lt;p&gt;Behind the scenes, the GraphQL server processes the query and retrieves only the requested data. This eliminates the need to create and manage numerous endpoints for different use cases. The single endpoint approach simplifies the API structure, reduces redundancy, and provides a more efficient way to interact with data.&lt;/p&gt;
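&lt;p&gt;To make the single-endpoint idea concrete, here is a small sketch (not part of this tutorial's codebase) of how a client would package the &lt;code&gt;products&lt;/code&gt; query from our schema into the JSON body that gets POSTed to the one GraphQL endpoint:&lt;/p&gt;

```typescript
// Every GraphQL operation travels as a JSON body with a "query" field.
// The helper and query below are illustrative; the field names match our product schema.
function buildGraphQLRequest(query: string): string {
  return JSON.stringify({ query });
}

const productsQuery = `
  query {
    products {
      success
      total
      products { id name price }
    }
  }
`;

// A client would POST this body to the single endpoint, e.g.:
// fetch("http://localhost:3000/", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildGraphQLRequest(productsQuery),
// });
```

Whether the operation is a query or a mutation, the request shape stays the same; only the string in the body changes.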

&lt;p&gt;In essence, GraphQL's single endpoint design, coupled with its dynamic querying capabilities, offers a more streamlined and adaptable approach to handling complex data retrieval compared to the more rigid endpoint structure of traditional RESTful APIs. This contributes to the efficiency and flexibility that GraphQL brings to modern API development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CONCLUSION&lt;/strong&gt;&lt;br&gt;
In conclusion, GraphQL presents a host of advantages that make it a compelling choice for modern API development. Its flexible querying system empowers clients to precisely request only the data they need, eliminating over-fetching and under-fetching of data commonly associated with RESTful APIs. This optimization in data transfer enhances performance, reduces unnecessary network traffic, and results in faster, more efficient interactions.&lt;/p&gt;

&lt;p&gt;Unlike RESTful APIs that often require multiple endpoints for distinct resources, GraphQL's single endpoint architecture simplifies API management and reduces the need for versioning. With GraphQL, you have the freedom to evolve your API without causing disruption to existing clients, as fields can be added or deprecated without changing the endpoint structure.&lt;/p&gt;

&lt;p&gt;Furthermore, GraphQL's introspection capabilities grant developers access to in-depth schema documentation, making it a self-documenting API. This, coupled with the integrated query tools like the GraphQL Playground, streamlines development, debugging, and testing.&lt;/p&gt;

&lt;p&gt;However, it's essential to acknowledge that GraphQL might not be the optimal solution for every scenario. Its flexibility might lead to complex queries, potentially putting a heavier load on servers. RESTful APIs, on the other hand, can offer a clearer mapping to underlying data models and cache management due to their predictable nature.&lt;/p&gt;

&lt;p&gt;In comparison, GraphQL and RESTful APIs each have their strengths and weaknesses, catering to different project requirements. While GraphQL excels in scenarios that prioritize flexibility, efficient data retrieval, and a unified endpoint, RESTful APIs can be more suitable for situations where a clear, resource-oriented structure and caching mechanisms are vital.&lt;/p&gt;

&lt;p&gt;In building this GraphQL API, we've walked through creating schemas, defining types, crafting resolvers, and setting up the server. The completed code for this tutorial is available on GitHub at &lt;a href="https://github.com/REALSTEVEIG/GRAPH-QL-API-DESIGN" rel="noopener noreferrer"&gt;GRAPHQL-API&lt;/a&gt;, offering a practical reference for your own projects.&lt;/p&gt;

&lt;p&gt;To all the readers embarking on this exploration, I extend my appreciation for joining this tutorial. Whether you choose GraphQL or RESTful APIs, may your coding journeys be filled with innovation, efficiency, and transformative experiences. Happy coding!&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>api</category>
      <category>node</category>
      <category>webdev</category>
    </item>
    <item>
      <title>HOW TO AUTOMATE CI/CD ON YOUR AZURE KUBERNETES CLUSTER</title>
      <dc:creator>STEVE</dc:creator>
      <pubDate>Tue, 15 Aug 2023 20:25:39 +0000</pubDate>
      <link>https://dev.to/realsteveig/how-to-automate-cicd-on-your-azure-kubernetes-cluster-52c9</link>
      <guid>https://dev.to/realsteveig/how-to-automate-cicd-on-your-azure-kubernetes-cluster-52c9</guid>
      <description>&lt;p&gt;&lt;strong&gt;INTRODUCTION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the world of modern software development, delivering applications rapidly and reliably is paramount. Continuous Integration and Continuous Deployment (CI/CD) practices streamline the development lifecycle, enabling teams to automate building, testing, and deploying applications. When coupled with the power of Kubernetes, a robust container orchestration platform, the efficiency and scalability of your applications reach new heights.&lt;/p&gt;

&lt;p&gt;In this comprehensive tutorial, I will walk you through the process of automating CI/CD for your applications on an Azure Kubernetes Service (AKS) cluster. You'll learn how to set up a complete pipeline that connects your GitHub repository to your AKS cluster, enabling automatic building, testing, and deployment of your containerized applications. Whether you're new to Kubernetes and CI/CD or looking to refine your skills, this guide has you covered.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;br&gt;
Before diving into the tutorial, ensure you have the following prerequisites in place:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;GitHub Account: You'll need an active GitHub account to host your application's source code and set up the pipeline for CI/CD.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure Account: You'll require an Azure account with either a free subscription or a pay-as-you-go subscription. If you're new to Azure, you can take advantage of the $200 free trial credit for the first month to explore and experiment with AKS and other Azure services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure DevOps Account: To seamlessly integrate your CI/CD pipeline, an Azure DevOps account is necessary. This account will allow you to configure the automation process and manage the flow of changes from source code to the AKS cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Outline:&lt;br&gt;
Throughout this tutorial, we'll cover the following key topics:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Creating an Azure Kubernetes Cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand the benefits of using AKS for container orchestration.&lt;/li&gt;
&lt;li&gt;Step-by-step guide to creating an AKS cluster in your Azure account.&lt;/li&gt;
&lt;li&gt;Exploring AKS features and configurations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Setting Up a GitHub Pipeline for Docker and Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduction to CI/CD and its importance in modern development.&lt;/li&gt;
&lt;li&gt;Configuring your GitHub repository for seamless integration with Azure DevOps.&lt;/li&gt;
&lt;li&gt;Creating a CI/CD pipeline that automates Docker image builds and Kubernetes deployments to your AKS cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By the end of this tutorial, you'll have gained practical insights into the world of CI/CD automation on Azure Kubernetes Service, empowering you to accelerate your software delivery process while maintaining high standards of reliability and efficiency.&lt;/p&gt;

&lt;p&gt;So, let's embark on this journey to unlock the potential of automating CI/CD on your Azure Kubernetes Cluster. Ready to get started? Let's dive in!&lt;/p&gt;

&lt;p&gt;First things first, we have to build our Docker image locally. I have set up a simple TypeScript/Node.js server with a few routes: &lt;code&gt;home&lt;/code&gt;, &lt;code&gt;about&lt;/code&gt;, &lt;code&gt;contact&lt;/code&gt;, and a universal &lt;code&gt;404&lt;/code&gt; (this can be set up using any framework). Here is a link to my code on GitHub - &lt;a href="https://github.com/REALSTEVEIG/kubernetes-pipeline" rel="noopener noreferrer"&gt;CODE&lt;/a&gt;.&lt;br&gt;
Here is what the routes look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1teina7iu0v8bb5zmvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1teina7iu0v8bb5zmvg.png" alt="routes" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have also set up a basic Dockerfile. Here is what it looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwuhvmcdm87ul3kw6atl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwuhvmcdm87ul3kw6atl.png" alt="dockerfile" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I will build a new image using this command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t kubernetes-pipeline .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then start a container from the image using this command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name kubernetes-pipeline -p 3000:3000 kubernetes-pipeline&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This should start the Docker container on port 3000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CREATING A REGISTRY&lt;/strong&gt;&lt;br&gt;
Next, we need to push this image to a registry on Microsoft Azure. This will be a good time to create an Azure account if you don't already have one. You can create an Azure account here: &lt;a href="https://azure.microsoft.com/en-us/free" rel="noopener noreferrer"&gt;Azure&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once your account has been created successfully, in the search bar that appears on the Azure dashboard, search for registries and select &lt;code&gt;Create container registry&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24bu4crpijetjlkrw53a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24bu4crpijetjlkrw53a.png" alt="registry" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have successfully created your registry, we will proceed to push our image to it using the following commands:&lt;/p&gt;

&lt;p&gt;Login to the registry: &lt;code&gt;az acr login --name onlyregistryhere&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Tag image to the repository: &lt;code&gt;docker tag kubernetes-pipeline onlyregistryhere.azurecr.io/kubernetes-pipeline:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Push the image to Azure registry: &lt;code&gt;docker push onlyregistryhere.azurecr.io/kubernetes-pipeline:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Ensure to replace &lt;code&gt;onlyregistryhere&lt;/code&gt; with your registry name and &lt;code&gt;kubernetes-pipeline:latest&lt;/code&gt; with your docker image name and tag.&lt;/p&gt;

&lt;p&gt;If everything works as expected, you should see your image name in the list of repositories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyotd3k289vempaox904.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyotd3k289vempaox904.png" alt="all repositories" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CREATING AN AZURE KUBERNETES CLUSTER&lt;/strong&gt;&lt;br&gt;
Now let us create our Kubernetes cluster. From the Azure dashboard, simply search &lt;code&gt;Kubernetes services&lt;/code&gt;, follow the prompt, and create a cluster. If everything has been set up correctly, you should see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fseecmwe9ak533y7xtu0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fseecmwe9ak533y7xtu0h.png" alt="kubernestes cluster" width="800" height="378"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;To interact with our cluster and manage services and deployments, I recommend using the Cloud Shell. Locate the Cloud Shell in the &lt;code&gt;get started&lt;/code&gt; menu and click on &lt;code&gt;connect&lt;/code&gt;.&lt;br&gt;
Once the Cloud Shell is open, you can interact with all the pods we will deploy. Save this tab, and let us head over to Azure DevOps.&lt;/p&gt;

&lt;p&gt;Our cloud shell should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3egruc4mnjzbeb9e2tc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3egruc4mnjzbeb9e2tc.png" alt="Cloud shell" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI WITH AZURE DEVOPS&lt;/strong&gt;&lt;br&gt;
Now, to automate the CI/CD process, you need to have an Azure DevOps account. If you don't, simply head over to &lt;a href="https://dev.azure.com/" rel="noopener noreferrer"&gt;Azure DevOps&lt;/a&gt; to create a free account.&lt;/p&gt;

&lt;p&gt;Next, click on &lt;code&gt;create a new project&lt;/code&gt;. Once the project is created, locate &lt;code&gt;pipeline&lt;/code&gt; and select GitHub/GitLab to create a pipeline with your chosen host. This may prompt you to authorize the action from your GitHub/GitLab account. After authorizing, select the repository you would like to create a CI pipeline for and select &lt;code&gt;okay&lt;/code&gt;. Next, configure your pipeline from the list of available options. In our case, we will select &lt;code&gt;Docker&lt;/code&gt; to build and push the image to Azure Container Registry; later, we will run a separate pipeline - &lt;code&gt;Deploy to Azure Kubernetes Service&lt;/code&gt; - to deploy to our AKS cluster. Follow the default prompts and grant the necessary permissions to configure a successful CI pipeline.&lt;/p&gt;
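&lt;p&gt;For reference, the generated &lt;code&gt;azure-pipelines.yml&lt;/code&gt; for the Docker option looks roughly like this (a sketch only - the trigger branch, service connection, and repository names are specific to your project):&lt;/p&gt;

```yaml
# Sketch of the pipeline the "Docker" template generates.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build and push image to ACR
    inputs:
      command: buildAndPush
      containerRegistry: 'onlyregistryhere'   # your ACR service connection
      repository: 'kubernetes-pipeline'
      Dockerfile: '**/Dockerfile'
      tags: '$(Build.BuildId)'
```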

&lt;p&gt;If everything checks out, you should see this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxwlxnj2dbwshhlpdnqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxwlxnj2dbwshhlpdnqu.png" alt="Successful deployment" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now to confirm that our deployment works fine, let us head back to our cloud shell and query for all services and deployments using the following commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get deployments&lt;/code&gt;: get all deployments.&lt;br&gt;
&lt;code&gt;kubectl get services&lt;/code&gt;: get all services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5drsj5uyh1qcnltfr3i2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5drsj5uyh1qcnltfr3i2.png" alt="deployed" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me explain what is happening in the cloud shell.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kubectl get deployments:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployment name &lt;code&gt;kubernetespipeline&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;1 pod is ready and available out of 1.&lt;/li&gt;
&lt;li&gt;Deployment is up-to-date.&lt;/li&gt;
&lt;li&gt;Age: 56 seconds.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kubectl get services:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;kubernetes&lt;/code&gt; service (core Kubernetes service):

&lt;ul&gt;
&lt;li&gt;ClusterIP: 10.0.0.1.&lt;/li&gt;
&lt;li&gt;No external IP.&lt;/li&gt;
&lt;li&gt;Type: ClusterIP.&lt;/li&gt;
&lt;li&gt;Age: 129 minutes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubernetespipeline&lt;/code&gt; service (created service):

&lt;ul&gt;
&lt;li&gt;ClusterIP: 10.0.25.184.&lt;/li&gt;
&lt;li&gt;External IP: 51.142.173.251.&lt;/li&gt;
&lt;li&gt;Type: LoadBalancer.&lt;/li&gt;
&lt;li&gt;Port Mapping: 3000:31553.&lt;/li&gt;
&lt;li&gt;Age: 60 seconds.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kubectl get pods:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pod name: &lt;code&gt;kubernetespipeline-5d677f89c8-qvcsp&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;1/1 containers in the pod are ready.&lt;/li&gt;
&lt;li&gt;Pod status: Running.&lt;/li&gt;
&lt;li&gt;No restarts.&lt;/li&gt;
&lt;li&gt;Age: 2 minutes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;kubectl logs -f kubernetespipeline-5d677f89c8-qvcsp:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Following logs for pod &lt;code&gt;kubernetespipeline-5d677f89c8-qvcsp&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Application: &lt;code&gt;kubernetes-pipeline@1.0.0&lt;/code&gt;, started with &lt;code&gt;node ./dist/app.js&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Server running on port 3000.&lt;/li&gt;
&lt;li&gt;Logs show requests to "Home page!", "Contact page!", and "About page!".&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The output provides a snapshot of the Kubernetes deployment, services, pod, and application logs in my cluster. A deployment named &lt;code&gt;kubernetespipeline&lt;/code&gt; has a ready pod, and a service named &lt;code&gt;kubernetespipeline&lt;/code&gt; is externally accessible. The pod, &lt;code&gt;kubernetespipeline-5d677f89c8-qvcsp&lt;/code&gt;, is running an application serving requests on port 3000, with logs indicating various page accesses. The core &lt;code&gt;kubernetes&lt;/code&gt; service and its details are also displayed.&lt;/p&gt;
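&lt;p&gt;For reference, the inspection steps above boil down to these commands (the pod name is specific to my cluster; substitute your own from the &lt;code&gt;kubectl get pods&lt;/code&gt; output):&lt;/p&gt;

```shell
# list deployments and confirm the ready/up-to-date replica counts
kubectl get deployments

# list services; the LoadBalancer service exposes an external IP
kubectl get services

# list pods and check their STATUS and RESTARTS columns
kubectl get pods

# stream logs from the running pod (replace with your pod name)
kubectl logs -f kubernetespipeline-5d677f89c8-qvcsp
```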

&lt;p&gt;Now let me access this endpoint on my local browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncu3b3rhr2pimt738ahi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncu3b3rhr2pimt738ahi.png" alt="service" width="800" height="160"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
See? You've successfully accessed your containerized API using the external IP of your service within your Kubernetes cluster. By following the steps outlined in this tutorial, you've not only set up a seamless integration between your GitHub repository and your Kubernetes cluster but also automated the process of updating your application. This streamlined approach gives you more time to concentrate on what truly matters – the development of your application itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's Next?&lt;/strong&gt;&lt;br&gt;
The journey doesn't stop here. You've established a robust foundation for your CI/CD pipeline, but there are more enhancements and optimizations you can explore:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Domain Setup:&lt;/strong&gt;&lt;br&gt;
Take your application to the next level by providing a custom domain for your deployment. This way, users can access your API using a memorable and branded URL. You can achieve this by setting up an Ingress controller in Kubernetes and configuring it to route traffic to your service. This enhances user experience and aligns with professional standards.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scale and Load Balancing:&lt;/strong&gt;&lt;br&gt;
As your application gains popularity and user traffic increases, you can further optimize performance by exploring Kubernetes' scaling and load balancing capabilities. Configure Horizontal Pod Autoscaling to dynamically adjust the number of pods based on traffic load, ensuring smooth user experiences during traffic spikes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security and Authentication:&lt;/strong&gt;&lt;br&gt;
Protect your API and user data by implementing security measures. Explore Kubernetes' built-in security features, like Network Policies, to control communication between pods. Additionally, consider integrating authentication and authorization mechanisms to ensure that only authorized users can access your API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring and Logging:&lt;/strong&gt;&lt;br&gt;
Gain insights into your application's behavior and performance by setting up monitoring and logging solutions. Tools like Prometheus and Grafana can help you monitor resource usage and visualize metrics, enabling you to proactively address any issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
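&lt;p&gt;As a taste of point 2, here is a minimal sketch of a HorizontalPodAutoscaler manifest targeting the &lt;code&gt;kubernetespipeline&lt;/code&gt; deployment from this tutorial. The replica bounds and CPU target are illustrative assumptions, not values used anywhere in this guide, and the cluster must have metrics-server installed for it to work:&lt;/p&gt;

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kubernetespipeline-hpa    # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kubernetespipeline      # the deployment created earlier
  minReplicas: 1
  maxReplicas: 5                  # illustrative upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out above ~70% average CPU
```

&lt;p&gt;You would apply it with &lt;code&gt;kubectl apply -f hpa.yml&lt;/code&gt; and watch it with &lt;code&gt;kubectl get hpa&lt;/code&gt;.&lt;/p&gt;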

&lt;p&gt;As you venture further into the realm of Kubernetes, CI/CD, and application development, remember that your learning journey is ongoing. Embrace new challenges and keep exploring advanced techniques to create more efficient, reliable, and user-friendly applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Farewell:&lt;/strong&gt;&lt;br&gt;
With that, I bid farewell to this tutorial. I hope that this guide has provided you with a solid foundation to automate your CI/CD pipeline on an Azure Kubernetes Service cluster. Remember, technology evolves, and so does your expertise. Keep experimenting, learning, and innovating, and you'll continue to build amazing solutions that make a real impact.&lt;/p&gt;

&lt;p&gt;Thank you for joining me on this journey, and best of luck with your future endeavors in the exciting world of DevOps!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>azure</category>
    </item>
    <item>
      <title>DOCKER FOR EVERYONE - (Learn about Caching, Load-Balancing, and Virtual Machines).</title>
      <dc:creator>STEVE</dc:creator>
      <pubDate>Mon, 24 Jul 2023 13:45:01 +0000</pubDate>
      <link>https://dev.to/realsteveig/docker-for-everyone-learn-about-caching-load-balancing-and-virtual-machines-1ah9</link>
      <guid>https://dev.to/realsteveig/docker-for-everyone-learn-about-caching-load-balancing-and-virtual-machines-1ah9</guid>
      <description>&lt;p&gt;&lt;strong&gt;INTRODUCTION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hello there and welcome to this comprehensive tutorial on Docker, where I will be guiding you through the exciting world of load-balancing, caching, and deploying Docker containers to cloud services. Whether you're a beginner or an experienced developer, this tutorial is designed to be accessible and beneficial for everyone.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will cover a range of fundamental concepts and practical techniques to enhance your Docker skills. First and foremost, we'll delve into the basics of Docker and containerization, helping you understand the core principles and advantages of this powerful technology.&lt;/p&gt;

&lt;p&gt;One of the key topics we'll explore is caching user sessions using Redis. Redis is an open-source, in-memory data structure store that allows for lightning-fast data retrieval, making it an ideal tool for caching frequently accessed data, like user sessions. I will guide you through the process of integrating Redis into your Docker workflow to optimize the performance of your applications.&lt;/p&gt;

&lt;p&gt;Another critical aspect we'll address is load balancing using Nginx. Nginx is a high-performance web server that excels at distributing incoming network traffic across multiple endpoints. By incorporating Nginx into your Docker environment, you can effectively distribute the workload, ensuring smooth and efficient handling of incoming API requests.&lt;/p&gt;
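&lt;p&gt;To build some intuition for what Nginx does with its default strategy, here is a tiny round-robin selector in plain JavaScript. This is a toy illustration of the idea, not Nginx's actual implementation:&lt;/p&gt;

```javascript
// Toy round-robin balancer: cycles through upstream endpoints in order,
// which is what Nginx does by default for the servers in an upstream block.
function createRoundRobin(endpoints) {
  let index = 0;
  return function next() {
    const endpoint = endpoints[index];
    index = (index + 1) % endpoints.length; // wrap around after the last one
    return endpoint;
  };
}

// Hypothetical endpoints for three replicas of the same API:
const pick = createRoundRobin(["app-1:3000", "app-2:3000", "app-3:3000"]);
console.log(pick()); // app-1:3000
console.log(pick()); // app-2:3000
console.log(pick()); // app-3:3000
console.log(pick()); // back to app-1:3000
```

&lt;p&gt;Each incoming request is handed to the next endpoint in the cycle, spreading the load evenly across replicas.&lt;/p&gt;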

&lt;p&gt;Finally, we'll cover deploying your Docker containers to Microsoft Azure or your preferred cloud service. The ability to deploy applications to the cloud offers numerous benefits, including scalability, reliability, and easy access from anywhere. I'll provide step-by-step instructions to facilitate a seamless deployment process.&lt;/p&gt;

&lt;p&gt;Before we begin, the only prerequisite for this tutorial is having a working API to follow along. If you don't have your own API, don't worry! You can simply clone my repository on Github by following this link : &lt;a href="https://github.com/REALSTEVEIG/USING_DOCKER" rel="noopener noreferrer"&gt;DOCKER&lt;/a&gt;, which we'll use throughout the tutorial.&lt;/p&gt;

&lt;p&gt;I am committed to making this tutorial as comprehensive and informative as possible, hence the title "Docker for everyone." However, if you encounter any challenges along the way, feel free to reach out to me via the comment section. Additionally, don't hesitate to use online resources to overcome any roadblocks you may encounter during your learning journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Docker?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker is a powerful tool that provides a standardized and efficient way to package, distribute, and run applications. It addresses several challenges faced in traditional software development and deployment processes, including but not limited to the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Compatibility Issues: Docker ensures consistent behavior across different environments by encapsulating applications and their dependencies within containers. This eliminates compatibility issues that arise due to differences in operating systems, libraries, and configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dependency Management: With Docker, developers define application dependencies in a Dockerfile, and Docker takes care of including all required libraries and frameworks in the container image. This simplifies dependency management and ensures reproducible deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment Complexities: Docker's containerization simplifies application deployment, especially in complex setups with multiple microservices. It allows each service to run in its own container, making scaling, deployment, and management easier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability and Resource Utilization: Docker enables seamless application scaling through container orchestration platforms like Kubernetes or Docker Swarm. These platforms automatically adjust the number of containers based on demand, optimizing resource utilization and ensuring smooth user experiences.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, Docker provides an efficient solution to challenges such as compatibility, dependency management, deployment complexities, and scalability, making it an essential tool for modern software development and deployment workflows.&lt;/p&gt;

&lt;p&gt;Now that we've set the stage, let's dive into the fascinating world of Docker, load-balancing, caching, and virtual machines. Get ready to unlock the true potential of your applications with Docker's powerful capabilities.&lt;/p&gt;

&lt;p&gt;As mentioned earlier, we will use a preexisting API. You can clone the repository from GitHub via this link: &lt;a href="https://github.com/REALSTEVEIG/USING_DOCKER" rel="noopener noreferrer"&gt;DOCKER&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After cloning, you will need to supply the following environment variables:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnkl0qo1wm0xc31i1hrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnkl0qo1wm0xc31i1hrg.png" alt="file structure" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The environment variables above include some credentials related to Redis. As mentioned earlier, we will use Redis to cache user sessions, so let me show you how to include that in a typical Node.js application. First, create a &lt;code&gt;redis.js&lt;/code&gt; file in the config folder and populate it with the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { createClient } = require("redis");

const client = createClient({
    password: process.env.REDIS_PASSWORD,
    socket: {
        host: process.env.REDIS_HOST,
        port: process.env.REDIS_PORT,
    }
});

client.on("connect", () =&amp;gt; {
    console.log("Connected to redis...")
})

client.on("error", (error) =&amp;gt; {
    console.log("Error connecting to redis...", error)
})

module.exports = client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please ensure that you have a Redis instance running so that you can easily connect it to your Node.js application. Visit &lt;a href="https://app.redislabs.com" rel="noopener noreferrer"&gt;Redis Cloud&lt;/a&gt; to create a new Redis instance.&lt;/p&gt;

&lt;p&gt;Now, in &lt;code&gt;app.js&lt;/code&gt;, we will connect Redis to our API and use the express-session module to create sessions in our Redis database for our users.&lt;/p&gt;

&lt;p&gt;This is the relevant code that achieves that purpose.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const redisClient = require("./config/redis");
const RedisStore = require('connect-redis').default;
const session = require('express-session');

// Initialize session storage.
app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  name: 'express-session',
  cookie: {
    secure: false,
    httpOnly: true,
    maxAge: 60000, // 1 minute; extend the maxAge value to suit your needs.
    // You can also set other cookie options if needed.
  },
  resave: false, // Set this to false to prevent session being saved on every request.
  saveUninitialized: true, // Set this to true to save new sessions that are not modified.
}));

const start = async () =&amp;gt; {
  try {
    await redisClient.connect() //connect API to redis
    await connectDB(mongoUrl || 'mongodb://localhost:27017/express-mongo');
    app.listen(PORT, () =&amp;gt; {
      console.log(`Server is running on port ${PORT}`);
    });
  } catch (error) {
    console.log(error);
  }
};

start();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything works fine, our terminal should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fespgor2rmqj5975kz8im.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fespgor2rmqj5975kz8im.png" alt="connected to redis terminal" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if we try to log in, we should see our cookie &lt;code&gt;express-session&lt;/code&gt; in the cookie section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdmwl4nuwa5rc7k8ahez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdmwl4nuwa5rc7k8ahez.png" alt="cookies" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After 1 minute, the session should expire, and you will get this error when you hit the &lt;code&gt;get all users&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnm8gml0acq8irbfvgimq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnm8gml0acq8irbfvgimq.png" alt="Session Expired" width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great! Now that our cache works as expected, let us containerize our application. But before we dive into that, let's take a moment to familiarize ourselves with some essential keywords related to Docker:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container:&lt;/strong&gt; A container is a lightweight, isolated execution environment that contains an application and all its dependencies. It encapsulates the application, libraries, and configurations required to run the software. Containers provide consistency and portability, ensuring that the application runs consistently across different environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image&lt;/strong&gt;: An image is a read-only template used to create containers. It includes the application code, runtime, libraries, environment variables, and any other files required for the application to run. Docker images are the building blocks for containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Volume&lt;/strong&gt;: A volume in Docker is a persistent data storage mechanism that allows data to be shared between the host machine and the container. Volumes enable data to persist even after the container is stopped or deleted, making it ideal for managing databases and other persistent data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;: A Dockerfile is a text file that contains instructions for building a Docker image. It specifies the base image, adds application code, sets environment variables, and defines other configurations needed for the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dockerignore&lt;/strong&gt;: The .dockerignore file is used to specify which files and directories should be excluded from the Docker image build process. This is useful to prevent unnecessary files from being included in the image and reduces the image size.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt;: Docker Compose is a tool for defining and managing multi-container Docker applications. It uses a YAML file to define the services, networks, and volumes required for the application to run. Compose simplifies the process of managing complex applications with multiple containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Services&lt;/strong&gt;: In the context of Docker Compose, services refer to the individual components of a multi-container application. Each service represents a separate container running a specific part of the application, such as a web server, a database, or a cache.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Understanding these keywords will help you confidently move forward with containerizing your application using Docker. Let's explore how to utilize Docker to package our application into containers for seamless deployment and scalability.&lt;/p&gt;

&lt;p&gt;First, as described above, create a &lt;code&gt;Dockerfile&lt;/code&gt; in the root directory and populate it with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# specify the node base image with your desired version node:&amp;lt;version&amp;gt;
FROM node:16

WORKDIR /app

# copy the package.json to install dependencies
COPY package.json .

# declare the build-time environment switch
ARG NODE_ENV

# install all dependencies for development builds, production-only otherwise
RUN if [ "$NODE_ENV" = "development" ]; \
    then npm install; \
    else npm install --only=production; \
    fi

# copy the rest of the files
COPY . ./

# replace this with your application's default port
EXPOSE 3000

# start the app
CMD ["node", "app.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the configuration in the &lt;code&gt;Dockerfile&lt;/code&gt; step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;FROM node:16&lt;/code&gt;: This line sets the base image for our Docker container. In this case, we are using the official Node.js Docker image with version 16 as our starting point. This base image includes the Node.js runtime and package manager, which we need to run our application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;WORKDIR /app&lt;/code&gt;: This line sets the working directory inside the container to &lt;code&gt;/app&lt;/code&gt;. This is the directory where our application code will be copied and where we'll execute commands.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;COPY package.json .&lt;/code&gt;: This line copies the &lt;code&gt;package.json&lt;/code&gt; file from our local directory (the same directory as the &lt;code&gt;Dockerfile&lt;/code&gt;) into the container's working directory. We do this first to take advantage of Docker's layer caching mechanism. It allows Docker to cache the dependencies installation step if the &lt;code&gt;package.json&lt;/code&gt; file hasn't changed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ARG NODE_ENV&lt;/code&gt;: This line declares an argument named &lt;code&gt;NODE_ENV&lt;/code&gt;. Arguments can be passed to the Docker build command using the &lt;code&gt;--build-arg&lt;/code&gt; option. It allows us to specify whether we are building the container for a development or production environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;RUN if [ "$NODE_ENV" = "development" ]; ...&lt;/code&gt;: This conditional statement checks the value of the &lt;code&gt;NODE_ENV&lt;/code&gt; argument. If it is set to "development," it runs &lt;code&gt;npm install&lt;/code&gt;, installing all dependencies including development ones. Otherwise (e.g., "production"), it installs only production dependencies using &lt;code&gt;npm install --only=production&lt;/code&gt;, which keeps the production image lean.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;COPY . ./&lt;/code&gt;: This line copies all the files and directories from our local directory (the same directory as the &lt;code&gt;Dockerfile&lt;/code&gt;) into the container's working directory (&lt;code&gt;/app&lt;/code&gt;). This includes our application code, configuration files, and any other necessary files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;EXPOSE 3000&lt;/code&gt;: This instruction specifies that the container will listen on port 3000. It doesn't actually publish the port to the host machine; it's merely a way to document the port that the container exposes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;CMD ["node", "app.js"]&lt;/code&gt;: This sets the default command to be executed when the container starts. In this case, it runs the Node.js application using the &lt;code&gt;node&lt;/code&gt; command with the entry point file &lt;code&gt;app.js&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, the &lt;code&gt;Dockerfile&lt;/code&gt; is a set of instructions to build a Docker image for our Node.js application. It starts from the official Node.js image, sets up the working directory, installs dependencies based on the environment (development or production), copies our application code, specifies the exposed port, and defines the command to start our application. With this configuration, we can create a containerized version of our Node.js application that can be easily deployed and run consistently across different environments.&lt;/p&gt;
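&lt;p&gt;To see the &lt;code&gt;NODE_ENV&lt;/code&gt; build argument in action, you could build and run the image like this (the image tag &lt;code&gt;using-docker&lt;/code&gt; is just an example name, and the &lt;code&gt;.env&lt;/code&gt; file is assumed to hold the variables shown earlier):&lt;/p&gt;

```shell
# development build: installs all dependencies, including devDependencies
docker build --build-arg NODE_ENV=development -t using-docker:dev .

# production build: installs production dependencies only
docker build --build-arg NODE_ENV=production -t using-docker:prod .

# run the image, publishing container port 3000 on the host
docker run -p 3000:3000 --env-file .env using-docker:prod
```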

&lt;p&gt;Next, let us create and populate three docker-compose files in the root directory of our application: &lt;/p&gt;

&lt;p&gt;First &lt;code&gt;docker-compose.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3" # specify docker-compose version
services:
  nginx:
    image: nginx:stable-alpine # specify image to build container from
    ports:
      - "5000:80" # specify port mapping
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf # mount nginx config
  node-app:
    build: . # use the Dockerfile in the current directory
    environment:
      - PORT=3000 # port the app listens on inside the container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second &lt;code&gt;docker-compose.dev.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  nginx:
    image: nginx:stable-alpine # specify image to build container from
    ports:
      - "3000:80" # specify port mapping
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro # mount nginx config file
  node-app:
    build:
      context: . # current directory
      args:
        - NODE_ENV=development
    volumes:
      - ./:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Third &lt;code&gt;docker-compose.prod.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  node-app:
    deploy:
      restart_policy:
        condition: on-failure
    build: 
      context: .
      args:
        - NODE_ENV=${NODE_ENV}
    volumes:
      - ./:/app
      - /app/node_modules
    command: npm start
    environment:
      - MONGO_USERNAME=${MONGO_USERNAME}
      - MONGO_PASSWORD=${MONGO_PASSWORD}
      - REDIS_HOST=${REDIS_HOST}
      - REDIS_PORT=${REDIS_PORT}
      - SESSION_SECRET=${SESSION_SECRET}
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      - NODE_ENV=${NODE_ENV}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using these docker-compose files, we can easily manage our containers and define different configurations for development and production environments. The combination of Docker and docker-compose simplifies the process of containerizing and deploying our application, making it more efficient and scalable in real-world scenarios.&lt;/p&gt;

&lt;p&gt;Now let us break down the contents of all three files.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker-compose.yml&lt;/code&gt; file:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;docker-compose.yml&lt;/code&gt; file is the main configuration file for our application. It allows us to define and manage multiple services, each running in its own container. Let's go through its contents:&lt;br&gt;
&lt;code&gt;version: "3"&lt;/code&gt;: This line specifies the version of the docker-compose syntax that we are using. In this case, we are using version 3.&lt;br&gt;
&lt;code&gt;services&lt;/code&gt;: This section defines the different services (containers) that compose our application.&lt;br&gt;
&lt;code&gt;nginx&lt;/code&gt;: This service is responsible for running the Nginx web server.&lt;br&gt;
&lt;code&gt;image: nginx:stable-alpine&lt;/code&gt;: This specifies the base image for the nginx container, which will be pulled from Docker Hub. We are using the stable Alpine version of Nginx, a lightweight and efficient web server.&lt;br&gt;
&lt;code&gt;ports&lt;/code&gt;: This line maps port 5000 on the host machine to port 80 inside the nginx container, which allows us to access the Nginx server through port 5000 on our local machine.&lt;br&gt;
&lt;code&gt;volumes&lt;/code&gt;: Here, we mount the &lt;code&gt;./nginx/default.conf&lt;/code&gt; file from the host machine to the container's &lt;code&gt;/etc/nginx/conf.d/default.conf&lt;/code&gt; path. This file is used to configure Nginx.&lt;br&gt;
&lt;code&gt;node-app&lt;/code&gt;: This service represents our Node.js application.&lt;br&gt;
&lt;code&gt;build: .&lt;/code&gt;: This tells Docker to build the node-app container using the Dockerfile located in the current directory (&lt;code&gt;.&lt;/code&gt;).&lt;br&gt;
&lt;code&gt;environment&lt;/code&gt;: In this line, we set the &lt;code&gt;PORT&lt;/code&gt; environment variable to 3000 inside the container, so our Node.js application listens on port 3000.&lt;br&gt;
These settings in the &lt;code&gt;docker-compose.yml&lt;/code&gt; file allow us to run both Nginx and our Node.js application together, making them work seamlessly in tandem.&lt;/p&gt;

&lt;p&gt;Since the compose file mounts a custom Nginx configuration as a volume, we will later need to create that file in our development environment and ensure it contains accurate configuration settings (more on this later).&lt;/p&gt;

&lt;p&gt;Next, we'll look at the other two docker-compose files used for different scenarios - development and production environments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker-compose.dev.yml&lt;/code&gt; file :&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;docker-compose.dev.yml&lt;/code&gt; file is used for the development environment. It allows us to set up our application with configurations optimized for development purposes. Let's go through its contents:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;version: "3":&lt;/code&gt; Same as in the previous file, this specifies the version of the docker-compose syntax used.&lt;br&gt;
&lt;code&gt;services&lt;/code&gt;: This section defines the services (containers) specific to the development environment.&lt;br&gt;
&lt;code&gt;nginx&lt;/code&gt;: This service runs the Nginx web server, just like in the previous file.&lt;br&gt;
&lt;code&gt;image&lt;/code&gt;: nginx:stable-alpine: The same base image for Nginx.&lt;br&gt;
&lt;code&gt;ports&lt;/code&gt;: Here, we map port 3000 on the host machine to port 80 inside the nginx container. This allows us to access the Nginx server through port 3000 on our local machine.&lt;br&gt;
&lt;code&gt;volumes&lt;/code&gt;: We mount the same ./nginx/default.conf file, but this time with the ro (read-only) option, as we don't need to modify it during development.&lt;br&gt;
&lt;code&gt;node-app&lt;/code&gt;: This service represents our Node.js application specifically for development.&lt;br&gt;
&lt;code&gt;build&lt;/code&gt;: It tells Docker to build the my-node-app container using the Dockerfile in the current directory (.). Additionally, we pass the &lt;code&gt;NODE_ENV=development&lt;/code&gt; argument to the build process, allowing our application to use development-specific configurations.&lt;br&gt;
&lt;code&gt;volumes&lt;/code&gt;: Here, we mount the current directory (./) to the /app directory inside the container. This allows us to have real-time code changes reflected in the container without rebuilding it. We also mount /app/node_modules to prevent overriding the node_modules directory in the container and ensure our installed dependencies are available.&lt;br&gt;
environment: We set the NODE_ENV environment variable to development inside the container to activate development-specific behavior in our Node.js application.&lt;br&gt;
&lt;code&gt;command&lt;/code&gt;: This line specifies the command to run when the container starts. In this case, we execute the npm run dev command, which usually starts our application in development mode.&lt;br&gt;
The &lt;code&gt;docker-compose.dev.yml&lt;/code&gt; file enables us to set up our development environment with the necessary configurations, ensuring the smooth and efficient development of our application.&lt;br&gt;
Now, let's proceed to the last docker-compose file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker-compose.prod.yml&lt;/code&gt; file:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The docker-compose.prod.yml file is designed for the production environment. It defines the configurations optimized for running the application in a production setting, where reliability and scalability are crucial. Let's examine its contents:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;version: "3"&lt;/code&gt;: As before, this specifies the version of the docker-compose syntax used.&lt;br&gt;
&lt;code&gt;services&lt;/code&gt;: This section defines the services (containers) specific to the production environment.&lt;br&gt;
&lt;code&gt;nginx&lt;/code&gt;: This service runs the Nginx web server, just like in the previous files.&lt;br&gt;
&lt;code&gt;image&lt;/code&gt;: nginx:stable-alpine: The same base image for Nginx.&lt;br&gt;
&lt;code&gt;ports&lt;/code&gt;: Here, we map port 80 on the host machine to port 80 inside the nginx container, allowing HTTP traffic to reach the Nginx server on port 80.&lt;br&gt;
&lt;code&gt;volumes&lt;/code&gt;: Again, we mount the ./nginx/default.conf file, but this time with the ro (read-only) option, as we don't need to modify it during production.&lt;br&gt;
&lt;code&gt;node-app&lt;/code&gt;: This service represents our Node.js application specifically for production.&lt;br&gt;
&lt;code&gt;deploy&lt;/code&gt;: This section specifies deployment-related configurations for the service.&lt;br&gt;
&lt;code&gt;restart_policy&lt;/code&gt;: We set the restart policy to "on-failure," which means the container will automatically restart if it fails.&lt;br&gt;
&lt;code&gt;build&lt;/code&gt;: Similar to previous files, it tells Docker to build the node-app container using the Dockerfile in the current directory (.). Additionally, we use the NODE_ENV=${NODE_ENV} argument, allowing our application to use production-specific configurations.&lt;br&gt;
&lt;code&gt;volumes&lt;/code&gt;: We mount the current directory (./) to the /app directory inside the container, along with mounting /app/node_modules to preserve installed dependencies.&lt;br&gt;
&lt;code&gt;command&lt;/code&gt;: This line specifies the command to run when the container starts. In this case, we execute the npm start command, which usually starts our application in production mode.&lt;br&gt;
&lt;code&gt;environment&lt;/code&gt;: We set various environment variables (MONGO_USERNAME, MONGO_PASSWORD, REDIS_HOST, REDIS_PORT, SESSION_SECRET, REDIS_PASSWORD, and NODE_ENV) required by our Node.js application for production-specific settings.&lt;br&gt;
The &lt;code&gt;docker-compose.prod.yml&lt;/code&gt; file ensures that our application is optimally configured for a production environment, with reliability, scalability, and automatic restarts on failure. It allows us to deploy our application confidently, knowing that it is running efficiently and can handle real-world production scenarios.&lt;/p&gt;
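&lt;p&gt;Putting those descriptions together, the production file might look roughly like this (a sketch; the container path for the Nginx config and the exact variable list are assumptions based on this walkthrough):&lt;/p&gt;

```yaml
# docker-compose.prod.yml (sketch) - production overrides
version: "3"
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "80:80"
    volumes:
      # mounted read-only: we don't modify the config in production
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  node-app:
    deploy:
      restart_policy:
        condition: on-failure  # restart automatically if the container fails
    build:
      context: .
      args:
        NODE_ENV: ${NODE_ENV}  # pass through production-specific configuration
    volumes:
      - ./:/app
      - /app/node_modules      # preserve installed dependencies
    command: npm start         # start the app in production mode
    environment:
      - NODE_ENV=${NODE_ENV}
      - MONGO_USERNAME=${MONGO_USERNAME}
      - MONGO_PASSWORD=${MONGO_PASSWORD}
      - REDIS_HOST=${REDIS_HOST}
      - REDIS_PORT=${REDIS_PORT}
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      - SESSION_SECRET=${SESSION_SECRET}
```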

&lt;p&gt;At this point, we are almost done with the file setup; we now need to write a custom Nginx configuration to enable effective load balancing across our containers. &lt;/p&gt;

&lt;p&gt;Create the Nginx config file at the path we declared as a volume in our docker-compose file, &lt;code&gt;./nginx/default.conf&lt;/code&gt;, and add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream backend { # this is the name of the upstream block
    server using_docker_node-app_1:3000;
    server using_docker_node-app_2:3000;
    server using_docker_node-app_3:3000;
}

server {
    listen 80; # this is the port that the server will listen on

    location /api/ {
        proxy_set_header X-Real-IP $remote_addr; # this is required to pass on the client's IP to the node app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # this is required to pass on the client's IP to the node app
        proxy_set_header Host $http_host; # this is required to pass on the client's IP to the node app
        proxy_set_header X-NginX-Proxy true; # this is required to pass on the client's IP to the node app
        proxy_pass http://backend; # this is the name of the upstream block
        proxy_redirect off; # this is required to pass on the client's IP to the node app
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let us explain this configuration:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;upstream backend&lt;/strong&gt;: This block defines an upstream group named "backend." It is used to define a list of backend servers that Nginx will load balance requests to. In this case, we have three servers (using_docker_node-app_1, using_docker_node-app_2, and using_docker_node-app_3) running our Node.js application on port 3000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;server&lt;/strong&gt;: This block defines the server configuration for Nginx.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;listen 80&lt;/strong&gt;: This line specifies that the Nginx server will listen on port 80 for incoming HTTP requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;location /api/&lt;/strong&gt;: This block defines a location for Nginx to handle requests that start with /api/. We use this location to route requests to our backend Node.js application for API calls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;proxy_set_header&lt;/strong&gt;: These lines set various headers to pass on information to the Node.js application:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;X-Real-IP&lt;/strong&gt;: Sets the client's IP address as seen by the Nginx server.&lt;br&gt;
&lt;strong&gt;X-Forwarded-For&lt;/strong&gt;: Appends the client's IP address to the X-Forwarded-For header, indicating the chain of proxy servers.&lt;br&gt;
&lt;strong&gt;Host&lt;/strong&gt;: Sets the original host header to preserve the client's hostname.&lt;br&gt;
&lt;strong&gt;X-NginX-Proxy&lt;/strong&gt;: Sets a header to indicate that the request is being proxied by Nginx.&lt;br&gt;
&lt;strong&gt;proxy_pass &lt;a href="http://backend" rel="noopener noreferrer"&gt;http://backend&lt;/a&gt;;&lt;/strong&gt;: This line directs Nginx to pass the incoming requests to the backend group named "backend" that we defined earlier. Nginx will automatically load balance the requests among the three servers specified in the "backend" group.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;proxy_redirect off;:&lt;/strong&gt; This line disables any automatic rewriting of HTTP redirects.&lt;/p&gt;

&lt;p&gt;This custom Nginx configuration enables load-balancing across multiple instances of our Node.js application, ensuring better performance, high availability, and efficient utilization of resources. With this configuration, Nginx acts as a reverse proxy, directing incoming requests to one of the backend servers in the "backend" group, effectively distributing the load and improving overall application responsiveness.&lt;/p&gt;
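&lt;p&gt;One design note: the configuration above relies on Nginx's default round-robin strategy. Other standard upstream directives can change how traffic is distributed; none of these are used in this tutorial, but they are worth knowing:&lt;/p&gt;

```nginx
# Optional variations (not used in this tutorial):
upstream backend {
    least_conn;                                      # prefer the server with the fewest active connections
    server learningdocker-node-app-1:3000 weight=2;  # receives roughly twice the share of requests
    server learningdocker-node-app-2:3000;
    server learningdocker-node-app-3:3000 backup;    # only used when the other servers are unavailable
}
```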

&lt;p&gt;Our folder and file structure should now look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktehw0d1hla6i5oz2vs7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktehw0d1hla6i5oz2vs7.png" alt="Folder structure" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it is time to build our Docker image. Since we are still working locally in VS Code, we will start by building with the &lt;code&gt;docker-compose.dev.yml&lt;/code&gt; file. Afterward, when we deploy our virtual machine using Azure or any other cloud provider of your choice, we will run the &lt;code&gt;docker-compose.prod.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;To build our Docker image and work with Docker Compose, you will need to have Docker and Docker Compose installed on your machine. You can follow the links below to find the installation instructions that work best for your operating system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Docker Installation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Windows: &lt;a href="https://docs.docker.com/desktop/windows/install/" rel="noopener noreferrer"&gt;Install Docker Desktop on Windows&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;For macOS: &lt;a href="https://docs.docker.com/desktop/mac/install/" rel="noopener noreferrer"&gt;Install Docker Desktop on Mac&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;For Linux: &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;Install Docker Engine on Linux&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Docker Compose Installation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For all platforms: &lt;a href="https://docs.docker.com/compose/install/" rel="noopener noreferrer"&gt;Install Docker Compose&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Please choose the appropriate link for your operating system and follow the step-by-step instructions provided to install Docker and Docker Compose. Once installed, you will be able to proceed with containerizing and deploying your applications using Docker and Docker Compose.&lt;/p&gt;

&lt;p&gt;Let's proceed with building our container (and image, if one doesn't exist yet).&lt;/p&gt;

&lt;p&gt;To do this, open your terminal and execute the following command: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose -f docker-compose.yml -f docker-compose.dev.yml up --scale node-app=3 -d --build&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, let's break down and understand this command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;docker-compose&lt;/strong&gt;: This is the command-line tool we use to interact with Docker Compose.&lt;br&gt;
&lt;strong&gt;-f docker-compose.yml -f docker-compose.dev.yml&lt;/strong&gt;: We are using two Compose files here, docker-compose.yml and docker-compose.dev.yml, to define configurations for both the general compose configuration and the development environment.&lt;br&gt;
&lt;strong&gt;up&lt;/strong&gt;: This option tells Compose to create and start the containers.&lt;br&gt;
&lt;strong&gt;--scale node-app=3&lt;/strong&gt;: It scales the node-app service to run three instances, effectively setting up load balancing across these instances.&lt;br&gt;
&lt;strong&gt;-d&lt;/strong&gt;: The containers run in detached mode, meaning they will continue to run in the background.&lt;br&gt;
&lt;strong&gt;--build&lt;/strong&gt;: This flag ensures that Docker builds the image from the Dockerfile before starting the container.&lt;/p&gt;

&lt;p&gt;By running this command, we initiate the process of creating and launching our containers based on the configurations we defined in the Compose files. The --scale option ensures that three instances of our Node.js application will be running concurrently, allowing us to efficiently handle incoming traffic and improve performance through load balancing.&lt;/p&gt;

&lt;p&gt;If everything has been set up correctly, your terminal should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb79lhwn2fq2px2o7l9a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb79lhwn2fq2px2o7l9a9.png" alt="build" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's check the status of our running containers by executing the &lt;code&gt;docker ps&lt;/code&gt; command. After scaling our Node.js application with three instances, we should observe three containers running.&lt;/p&gt;

&lt;p&gt;However, there might be an issue with the Nginx service, which can be identified by running the &lt;code&gt;docker-compose logs -f&lt;/code&gt; command. The logs are likely to reveal an error from the Nginx container, caused by the server names we used in the Nginx configuration (more on this shortly). The error will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fus66tgwqh2op3p3r6whu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fus66tgwqh2op3p3r6whu.png" alt="nginx error" width="800" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To resolve this error, we need to ensure that the server names in our Nginx configuration file match the container names created by Docker Compose. After making these adjustments, we can rebuild our containers using the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose -f docker-compose.yml -f docker-compose.dev.yml up --scale node-app=3 -d --build&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
By running this updated build command, all our containers, including Nginx, will be up and running without any issues.&lt;/p&gt;
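&lt;p&gt;How do you find the correct names? The simplest way is to list the running containers with &lt;code&gt;docker ps --format '{{.Names}}'&lt;/code&gt; on the host. The convention itself is also easy to sketch: Compose v1 joins the project name (which defaults to the lowercased directory name), the service name, and an instance index with underscores, while Compose v2 uses hyphens. A quick illustration, with a project name assumed to match this tutorial's folder:&lt;/p&gt;

```shell
# Compose v1 names containers <project>_<service>_<index>;
# the project defaults to the lowercased current directory name.
project="using_docker"   # hypothetical: a folder named USING_DOCKER, lowercased
service="node-app"

# Print the entries our Nginx upstream block must list
for i in 1 2 3; do
  echo "${project}_${service}_${i}:3000"
done
```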

&lt;p&gt;As a reminder, we changed the server names in our Nginx configuration file to match the container names that Docker Compose actually generates. The updated Nginx configuration file should now look like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./nginx/default.conf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream backend { # this is the name of the upstream block
    server learningdocker-node-app-1:3000;
    server learningdocker-node-app-2:3000;
    server learningdocker-node-app-3:3000;
}

server {
    listen 80; # this is the port that the server will listen on

    location /api/ {
        proxy_set_header X-Real-IP $remote_addr; # this is required to pass on the client's IP to the node app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # this is required to pass on the client's IP to the node app
        proxy_set_header Host $http_host; # this is required to pass on the client's IP to the node app
        proxy_set_header X-NginX-Proxy true; # this is required to pass on the client's IP to the node app
        proxy_pass http://backend; # this is the name of the upstream block
        proxy_redirect off; # this is required to pass on the client's IP to the node app
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if we run &lt;code&gt;docker ps&lt;/code&gt; again, we should see four running containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpllre6cj1ypx5j4p7prf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpllre6cj1ypx5j4p7prf.png" alt="Nginx running.." width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's verify if our application is effectively load-balancing API calls among the three instances of our Node-API.&lt;/p&gt;

&lt;p&gt;To do this, I have added a &lt;code&gt;console.log("testing nginx")&lt;/code&gt; statement in the "get all users" endpoint of our Node.js application. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdnwtzj7118das6acixg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdnwtzj7118das6acixg.png" alt="Nginx test" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will now make multiple requests to this endpoint to observe how well Nginx distributes these requests among the instances that have been created.&lt;/p&gt;

&lt;p&gt;By running the load-balanced setup, we can assess the even distribution of API calls and ensure that our system is effectively utilizing the resources provided by the three instances. This testing will help us validate that Nginx is indeed handling load balancing as expected, improving the overall performance and scalability of our application.&lt;/p&gt;

&lt;p&gt;Don't forget that we are working with sessions, so we must log in again before we can access the get-all-users endpoint.&lt;/p&gt;

&lt;p&gt;LOGIN:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86iczt3drjv5itcrdti2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86iczt3drjv5itcrdti2.png" alt="login" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GET ALL USERS: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38k2jicfcaei5ptsomyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38k2jicfcaei5ptsomyp.png" alt="All users" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the "get-all-users" endpoint, I have triggered an API call 8 times consecutively to simulate multiple requests being made to our application.&lt;/p&gt;

&lt;p&gt;To observe the real-time results of our experiment, I will open three separate terminal instances. In each terminal, I will run the following commands: &lt;code&gt;docker logs -f learningdocker-node-app-1&lt;/code&gt;, &lt;code&gt;docker logs -f learningdocker-node-app-2&lt;/code&gt;, and &lt;code&gt;docker logs -f learningdocker-node-app-3&lt;/code&gt;. These commands will allow me to continuously follow the log outputs of each container to see how our application is load-balancing the API calls among the three instances of our Node-API.&lt;/p&gt;

&lt;p&gt;RESULT:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rj07zeom5x27f369hc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rj07zeom5x27f369hc3.png" alt="load balancing result" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The outcome of the experiment indicates that our load balancer is functioning as expected. It has effectively distributed the API requests among the Node instances that our container created. This demonstrates that our application is successfully load balancing and handling the requests in a balanced and efficient manner.&lt;/p&gt;

&lt;p&gt;Excellent! Up to this point, our application is running smoothly in the development environment. However, to make it production-ready, we'll need to deploy it on a virtual machine. Creating a virtual machine is a straightforward process. For this tutorial, I'll demonstrate using Microsoft Azure as the cloud provider. However, keep in mind that you have the flexibility to choose any cloud provider you prefer, such as Google Cloud, AWS, UpCloud, or others. The essential requirement is to set up a Linux server, and any of these providers will be suitable for the task at hand. Let's proceed with the deployment process!&lt;/p&gt;

&lt;p&gt;Sign in or sign up for your Microsoft Azure account using the Azure portal (&lt;a href="https://portal.azure.com/" rel="noopener noreferrer"&gt;https://portal.azure.com/&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Once you're signed in, click on "Create a resource" in the top-left corner of the dashboard.&lt;/p&gt;

&lt;p&gt;In the search bar, type "Virtual Machine" and select "Virtual Machines" from the suggested results.&lt;/p&gt;

&lt;p&gt;Click on "Add" to create a new virtual machine.&lt;/p&gt;

&lt;p&gt;Now, let's configure the virtual machine:&lt;/p&gt;

&lt;p&gt;a. &lt;strong&gt;Basics&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Choose your subscription.&lt;br&gt;
   Create a new resource group or use an existing one.&lt;br&gt;
   Enter a unique virtual machine name.&lt;br&gt;
   Choose a region close to your target audience for better performance.&lt;br&gt;
   Select "Ubuntu Server" as the image.&lt;/p&gt;

&lt;p&gt;b. &lt;strong&gt;Instance details&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Choose a virtual machine size based on your needs (e.g., Standard B2s).&lt;br&gt;
   Enable "SSH public key" authentication and provide your public SSH key. This allows you to sign in securely over SSH.&lt;/p&gt;

&lt;p&gt;c. &lt;strong&gt;Disks&lt;/strong&gt;:&lt;br&gt;
   Choose your preferred OS disk settings; the default settings are usually sufficient.&lt;/p&gt;

&lt;p&gt;d. &lt;strong&gt;Networking&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Create a new virtual network or select an existing one.&lt;br&gt;
   Choose a subnet within the virtual network.&lt;br&gt;
   Enable "Public IP" and choose "Static" for a consistent IP address.&lt;br&gt;
   Open port 22 for SSH (necessary for remote login), port 80 for HTTP, and port 443 for HTTPS.&lt;/p&gt;

&lt;p&gt;e. &lt;strong&gt;Management&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Choose "Enable" for Boot diagnostics to troubleshoot startup &lt;br&gt;
   issues if necessary.&lt;/p&gt;

&lt;p&gt;f. &lt;strong&gt;Advanced&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Customize any additional settings according to your requirements.&lt;br&gt;
   Once you've completed the configuration, click on "Review + create" to review your choices.&lt;/p&gt;

&lt;p&gt;Review the details to ensure everything is correct, and then click on "Create" to start deploying your virtual machine.&lt;/p&gt;

&lt;p&gt;Azure will now create the virtual machine based on your configuration. This process may take a few minutes.&lt;/p&gt;

&lt;p&gt;If everything works fine, your virtual machine should be up and running like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5krgq48mw7snytvq9y6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5krgq48mw7snytvq9y6k.png" alt="VM up and running" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the virtual machine is successfully deployed, you can access it using SSH. To log in to the Ubuntu server, open your terminal and execute the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh -i /path/to/your/sshkey.pem azureuser@your_external_ip&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Replace &lt;code&gt;/path/to/your/sshkey.pem&lt;/code&gt; with the path to your SSH private key file and &lt;code&gt;azureuser&lt;/code&gt; with your SSH username. Replace &lt;code&gt;your_external_ip&lt;/code&gt; with the public IP address assigned to your virtual machine.&lt;/p&gt;

&lt;p&gt;Once connected, your terminal prompt will look like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;azureuser@your_virtual_machine_name:~$&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here is a visual representation : &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjeqg4vgglqt7mpgmff7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjeqg4vgglqt7mpgmff7.png" alt="Ubuntu server" width="800" height="773"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you have secure access to your Ubuntu server, and you can perform various configurations and deploy your applications as needed. Remember to keep your server secure by using SSH keys and regularly updating your system packages.&lt;/p&gt;

&lt;p&gt;Our next step is to install Docker and Docker Compose on the Ubuntu server we just created.&lt;/p&gt;

&lt;p&gt;To install the latest stable versions of the Docker CLI, Docker Engine, and their dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. download the script
#
#   $ curl -fsSL https://get.docker.com -o install-docker.sh
#
# 2. verify the script's content
#
#   $ cat install-docker.sh
#
# 3. run the script with --dry-run to verify the steps it executes
#
#   $ sh install-docker.sh --dry-run
#
# 4. run the script either as root, or using sudo to perform the installation.
#
#   $ sudo sh install-docker.sh
#

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the installation, verify that Docker installed successfully by running &lt;code&gt;docker -v&lt;/code&gt; in the terminal.&lt;/p&gt;

&lt;p&gt;Next, we need to download Docker Compose, just as we did during development.&lt;/p&gt;

&lt;p&gt;To download docker-compose, simply copy and paste the following commands into your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have successfully installed docker and docker-compose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ewaaxzh806hqg1zbbn3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ewaaxzh806hqg1zbbn3.png" alt="Docker compose" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we need to set up our environment variables. Remember that our API expects specific variables that we deliberately didn't commit to GitHub.&lt;/p&gt;

&lt;p&gt;To set up the environment variables, we will create a &lt;code&gt;.env&lt;/code&gt; file in the home directory of our server and add them there. You can use the command &lt;code&gt;sudo nano .env&lt;/code&gt; to open and edit the file. After making the necessary changes, press Ctrl + X, then Y, then Enter to save them. This ensures that the environment variables are correctly configured and saved.&lt;/p&gt;

&lt;p&gt;To verify that your changes were saved, use the command &lt;code&gt;cat .env&lt;/code&gt;, which will display the contents of the &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;You should get something that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REDIS_HOST= visit https://app.redislabs.com to get your redis host
REDIS_PORT= visit https://app.redislabs.com to get your redis port
REDIS_PASSWORD= visit https://app.redislabs.com to get your redis password
MONGO_USERNAME= visit https://cloud.mongodb.com to get your mongo username
MONGO_PASSWORD= visit https://cloud.mongodb.com to get your mongo password
SESSION_SECRET= use any random string
NODE_ENV= development or production

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, there is one issue to address. If we build the container now, our API will be unable to read the &lt;code&gt;.env&lt;/code&gt; file on our host machine unless we export its variables into the shell environment and persist them across reboots. To tackle this, we will edit the &lt;code&gt;.profile&lt;/code&gt; file and add the following code at the bottom:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set -o allexport
source /home/azureuser/.env
set +o allexport
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, our API will have access to the required environment variables, and we can keep them confidential and isolated from the codebase. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: &lt;code&gt;/home/azureuser/.env&lt;/code&gt; is the path to my env file. Replace it with the absolute path of the &lt;code&gt;.env&lt;/code&gt; file on your host machine.&lt;/p&gt;

&lt;p&gt;To edit the profile file, first make sure you are in your home directory; you can check your current location with the following command. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;pwd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then use the &lt;code&gt;sudo nano .profile&lt;/code&gt; command to open the profile file in a text editor.&lt;/p&gt;

&lt;p&gt;After editing, make sure you save your file and exit the editor. &lt;/p&gt;

&lt;p&gt;When you type &lt;code&gt;cat .profile&lt;/code&gt; in your terminal, it should be displayed like this :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesrfn4tsqbmdlf78kadg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesrfn4tsqbmdlf78kadg.png" alt="profile" width="800" height="693"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To apply the changes we made to the &lt;code&gt;.profile&lt;/code&gt; file, you'll need to log out of your server. After logging back in, you can confirm if the &lt;code&gt;.env&lt;/code&gt; file is now persistent and readable by the Node.js application by using the command &lt;code&gt;printenv&lt;/code&gt;. In the output, if you find all the environment variables you added in the &lt;code&gt;.env&lt;/code&gt; file, then everything is set up correctly. However, if some variables are missing, you should troubleshoot the issue until all your environment variables are displayed when you use the &lt;code&gt;printenv&lt;/code&gt; command. &lt;/p&gt;
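&lt;p&gt;To see why the allexport trick works, here is a tiny self-contained sketch you can run anywhere; it uses a throwaway file under &lt;code&gt;/tmp&lt;/code&gt; rather than the real &lt;code&gt;.env&lt;/code&gt;:&lt;/p&gt;

```shell
# Create a throwaway env file (a stand-in for /home/<user>/.env)
cat > /tmp/demo.env <<'EOF'
NODE_ENV=production
SESSION_SECRET=changeme
EOF

# With allexport on, every variable assigned while sourcing the file is
# exported, so child processes (like our Node.js app) can read it.
set -a                 # same as `set -o allexport`
. /tmp/demo.env
set +a

# Both variables now show up in the environment.
printenv | grep -E '^(NODE_ENV|SESSION_SECRET)='
```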

&lt;p&gt;Now we will clone the app we developed and pushed to &lt;a href="https://github.com/REALSTEVEIG/USING_DOCKER" rel="noopener noreferrer"&gt;Github&lt;/a&gt;: &lt;/p&gt;

&lt;p&gt;Simply type &lt;code&gt;git clone https://github.com/REALSTEVEIG/USING_DOCKER&lt;/code&gt; and CD into the project. In my case, the project name will be &lt;code&gt;USING_DOCKER&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Since we have already installed Docker and set up our environment variables, all we need to do now is run the build command. But this time, we will build using the &lt;code&gt;docker-compose.prod.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --scale node-app=3 -d --build&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command will build the image and create the required containers on the host machine. But now we run into another error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpkr3xrla8rnecyvlf02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpkr3xrla8rnecyvlf02.png" alt="Service name error" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Upon closer inspection, you'll notice that the container names have changed: Docker Compose prefixes container names with the project name, which defaults to the directory name, and that directory name changed when we cloned the project from GitHub. As a result, Nginx can no longer resolve the server names in its configuration.&lt;/p&gt;

&lt;p&gt;To resolve this error, we need to go to our VsCode and modify the server names to match what Nginx can recognize. After making the necessary changes, we will push the updates to GitHub. On our Ubuntu server, we'll pull these changes and then run the build command again. This way, Nginx will correctly recognize the server names, and the issue will be resolved.&lt;/p&gt;

&lt;p&gt;Our &lt;code&gt;./nginx/default.conf&lt;/code&gt; file should now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream backend { # this is the name of the upstream block
    server using_docker_node-app_1:3000;
    server using_docker_node-app_2:3000;
    server using_docker_node-app_3:3000;
}

server {
    listen 80; # this is the port that the server will listen on

    location /api/ {
        proxy_set_header X-Real-IP $remote_addr; # this is required to pass on the client's IP to the node app
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # this is required to pass on the client's IP to the node app
        proxy_set_header Host $http_host; # this is required to pass on the client's IP to the node app
        proxy_set_header X-NginX-Proxy true; # this is required to pass on the client's IP to the node app
        proxy_pass http://backend; # this is the name of the upstream block
        proxy_redirect off; # this is required to pass on the client's IP to the node app
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we rebuild the container on our Ubuntu server using the same command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --scale node-app=3 -d --build&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If we run &lt;code&gt;docker ps&lt;/code&gt;, Nginx should now be running alongside three instances of our Node.js API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln47pqwh1x6fx11yybnq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln47pqwh1x6fx11yybnq.png" alt="All containers running" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let us test our API with the external IP provided by the cloud provider. In my case, &lt;code&gt;20.69.20.104&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftafkoh0qfhegdcs0106u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftafkoh0qfhegdcs0106u.png" alt="Login" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can observe, the Login route is functioning correctly. Now, let's verify if our Load-balancing is working as intended.&lt;/p&gt;

&lt;p&gt;Send 8 requests to the "get all users" endpoint, similar to what we did during development, and observe whether Nginx distributes them across the different Node instances. This confirms that the load-balancing mechanism is functioning as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpljbx6afdkh3gyq037se.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpljbx6afdkh3gyq037se.png" alt="load-balancing" width="646" height="756"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the image above, we can confidently conclude that our load balancing works impeccably, efficiently distributing API requests among the different Node instances as expected.&lt;/p&gt;

&lt;p&gt;With this, we have reached the conclusion of this comprehensive tutorial. Throughout this guide, we have covered a wide array of topics, including caching using Redis, load-balancing with Nginx, containerizing our application using Docker, and migrating our API to a Linux Ubuntu server on the Microsoft Azure cloud service. By following this tutorial, you have acquired valuable skills that can greatly enhance your application's performance, scalability, and deployment process.&lt;/p&gt;

&lt;p&gt;As you continue your journey in the world of DevOps and cloud computing, there are endless possibilities to explore. You can dive deeper into deploying your API, attaching a custom domain, and implementing advanced load-balancing strategies. Additionally, learning about Kubernetes, a powerful container orchestration tool, can further boost your expertise in managing containerized applications at scale.&lt;/p&gt;

&lt;p&gt;Remember, continuous learning and experimentation are vital in the ever-evolving tech landscape. Don't hesitate to explore new technologies, best practices, and industry trends to stay ahead in your journey as a skilled developer.&lt;/p&gt;

&lt;p&gt;Thank you for embarking on this learning journey with me, and I wish you all the best in your future projects and endeavors! Happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Integrating a chatbot into your Nodejs API using Dialogflow</title>
      <dc:creator>STEVE</dc:creator>
      <pubDate>Mon, 10 Jul 2023 13:23:36 +0000</pubDate>
      <link>https://dev.to/realsteveig/integrating-a-chatbot-into-your-nodejs-api-using-dialogflow-1dpn</link>
      <guid>https://dev.to/realsteveig/integrating-a-chatbot-into-your-nodejs-api-using-dialogflow-1dpn</guid>
      <description>&lt;p&gt;&lt;strong&gt;INTRODUCTION&lt;/strong&gt;&lt;br&gt;
Welcome. This tutorial will guide you through the process of integrating a chatbot into your Node.js API using Dialogflow. With the rapid advancement of conversational AI, chatbots have become an essential component for providing interactive and personalized experiences to users. By integrating a chatbot into your Node.js API, you can enhance the functionality and engagement of your application, allowing users to interact seamlessly with automated conversational agents.&lt;/p&gt;

&lt;p&gt;Dialogflow, powered by Google Cloud, is a powerful natural language processing (NLP) platform that enables developers to build intelligent chatbots and virtual assistants. It offers a wide range of features, including language understanding, context management, and intent recognition, making it an ideal choice for creating conversational interfaces. By combining Dialogflow with your Node.js API, you can leverage its capabilities to understand user queries, provide accurate responses, and deliver a smooth conversational experience.&lt;/p&gt;

&lt;p&gt;Throughout this tutorial, we will explore the step-by-step process of integrating Dialogflow into your Node.js API. We will cover the necessary setup, configuration, and implementation steps, allowing you to seamlessly connect your chatbot with your API endpoints. Whether you are building a customer support system, an e-commerce platform, or any other application that requires interactive communication, this tutorial will equip you with the knowledge and tools to integrate a chatbot effectively.&lt;/p&gt;

&lt;p&gt;By the end of this tutorial, you will have a comprehensive understanding of how to integrate Dialogflow into your Node.js API, enabling your application to handle user queries, provide intelligent responses, and offer a more engaging and dynamic user experience. So, let's dive in and embark on this exciting journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CREATING AN AGENT&lt;/strong&gt;&lt;br&gt;
Before we use Dialogflow, we need to create an agent. So visit &lt;a href="https://dialogflow.cloud.google.com/" rel="noopener noreferrer"&gt;Dialog flow&lt;/a&gt; to get started. &lt;/p&gt;

&lt;p&gt;Next, create an agent and assign a name to it e.g. my-first-chat-bot.&lt;/p&gt;

&lt;p&gt;In the menu that appears, find and click the gear icon (Settings).&lt;/p&gt;

&lt;p&gt;Under the general menu, scroll down to the project ID field and click on your project ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka3bjnec9q4wcqxgk3ik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fka3bjnec9q4wcqxgk3ik.png" alt="project ID" width="800" height="496"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This action will redirect you to the Google Cloud console at &lt;a href="https://console.cloud.google.com/" rel="noopener noreferrer"&gt;Google-cloud&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, navigate to the IAM &amp;amp; Admin menu and locate the service accounts sub-menu. If you haven't registered a service account yet, you will need to create a new one. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvzqcyeqy2us35b4s17h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvzqcyeqy2us35b4s17h.png" alt="Creating a service account" width="703" height="966"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create a new service account, enter a suitable name for your account. By default, a service ID will be automatically generated for you, but you have the option to modify it if needed. Once you have entered the necessary details, click on the "create" button to proceed. After creating the service account, the next step is to assign a role to it. This step is crucial for granting the necessary permissions. In the role selection process, search for "Dialogflow API client" under the Dialogflow section, and select it. Once you have selected the appropriate role, click on "continue" to proceed. Finally, click on "done" to complete the process. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Please note&lt;/strong&gt; that the images provided in this tutorial serve as a visual reference to assist you in locating the correct menu and sub-menu options. The actual appearance and arrangement of the interface may vary based on the version or configuration of the platform you are using.&lt;/p&gt;

&lt;p&gt;Once you have successfully created a service account, the next step is to obtain your configuration keys for further integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuex88z3ilgv77ntlx5u9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuex88z3ilgv77ntlx5u9.png" alt="Manage keys" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To download the configuration keys, click on the "Manage Keys" option. From the dropdown menu, select "Create new key". Choose the JSON format and click on the "create" button. This action will initiate the download of a JSON file containing all the necessary configuration properties required for the integration process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsrrd16ke5tb7xslec81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsrrd16ke5tb7xslec81.png" alt="configuration file" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is crucial to note that the downloaded JSON file serves as your configuration file, holding essential information needed for successful integration. Therefore, it is vital to keep this file secure and not share its contents with anyone. Safeguarding this sensitive data ensures the integrity and security of your chatbot integration.&lt;/p&gt;

&lt;p&gt;Great! now let's jump into our code.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;package.json&lt;/code&gt; file and initialize our Node.js project using this command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm init -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install the following packages:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm i express dotenv uuid dialogflow&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Import the configuration file you downloaded earlier into your project. Your project setup should now look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb65lil72rwx0rxwo7qhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb65lil72rwx0rxwo7qhg.png" alt="Project set up" width="768" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we set up our server and import the necessary dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dotenv = require("dotenv").config()

const express = require("express")

const dialogflow = require("dialogflow")

const uuid = require('uuid');

const app = express() // create the Express app used by app.listen() below

const PORT = process.env.PORT || 7000


const projectId = process.env.PROJECT_ID || "small-talk-2-2-2-2"
const credentialsPath = process.env.CREDENTIALS_PATH || "./small-talk-2-2-2-2-5b3b3b3b3b3b.json"

process.env.GOOGLE_APPLICATION_CREDENTIALS = credentialsPath

const start = async () =&amp;gt; {
    try {
        app.listen(PORT, () =&amp;gt; {
            console.log(`Server has been started on port ${PORT}`)
        })
    } catch (error) {
      console.log(error)  
    }
}

start()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's explain what is happening here:&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;projectId&lt;/strong&gt; variable is being set to either the value of the &lt;code&gt;process.env.PROJECT_ID&lt;/code&gt; environment variable or "small-talk-2-2-2-2" if the variable is not defined.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;credentialsPath&lt;/strong&gt; variable is being set to either the value of the &lt;code&gt;process.env.CREDENTIALS_PATH&lt;/code&gt; environment variable or "./small-talk-2-2-2-2-5b3b3b3b3b3b.json" if the variable is not defined.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/strong&gt; environment variable is being set to the value of credentialsPath. This variable is used to specify the path to the Google Cloud service account credentials JSON file that we downloaded and imported.&lt;/p&gt;

&lt;p&gt;Since we are loading the credentials from our &lt;code&gt;.env&lt;/code&gt; file, it is important to copy the credentials from the JSON file into the &lt;code&gt;.env&lt;/code&gt; file, combining all the credential properties into a single variable.&lt;/p&gt;

&lt;p&gt;Spreading the credential properties across one variable, such as CREDENTIALS, in the .env file ensures that the credentials are assigned to a single environment variable. This approach simplifies the code and enhances readability. It also allows the application to access the credentials easily by referencing the CREDENTIALS environment variable, rather than handling each credential property individually.&lt;/p&gt;
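Assuming the entire service-account JSON is stored in one variable (here called CREDENTIALS, which is an assumption, not a Dialogflow requirement), one hypothetical way to turn it back into a usable object at runtime:

```javascript
// Hypothetical helper: parse the service-account JSON stored in a
// single environment variable back into an object. The variable name
// CREDENTIALS is an example chosen for this sketch.
function loadCredentials(raw) {
  if (!raw) {
    throw new Error("CREDENTIALS environment variable is not set");
  }
  return JSON.parse(raw); // throws if the stored value is not valid JSON
}

// e.g. const credentials = loadCredentials(process.env.CREDENTIALS);
```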

&lt;p&gt;Your &lt;code&gt;.env&lt;/code&gt; file should now look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcbfjdyzlengbxhi3v3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcbfjdyzlengbxhi3v3j.png" alt="env file" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we will establish a connection to Dialogflow by creating a function and configuring a route to handle incoming requests and responses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function runSample() {
  // A unique identifier for the given session
  const sessionId = uuid.v4();

  // Create a new session
  const sessionClient = new dialogflow.SessionsClient();
  const sessionPath = sessionClient.sessionPath(projectId, sessionId);

  // The text query request.
  const request = {
    session: sessionPath,
    queryInput: {
      text: {
        // The query to send to the dialogflow agent
        text: 'Who are you?',
        // The language used by the client (en-US)
        languageCode: 'en-US',
      },
    },
  };

  // Send request and log result
  const responses = await sessionClient.detectIntent(request);
  const result = responses[0].queryResult.fulfillmentText;
  const queryText = responses[0].queryResult.queryText;

  if (result) {
        return {
            user: queryText,
            bot: result
        }
} else {
    throw new Error("No intent matched")
  }
}

const app = express()

app.get("/", async (req, res) =&amp;gt; {
    try {
        const result = await runSample()
        return res.status(200).json({message: "Success", result})
    } catch (error) {
        console.log(error)
        return res.status(500).json({message: "Server error", error})
    }
})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, the following actions are taking place:&lt;/p&gt;

&lt;p&gt;An asynchronous function named &lt;code&gt;runSample()&lt;/code&gt; is defined. It performs the following tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Generates a unique identifier, sessionId, for the session.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creates a new session using dialogflow.SessionsClient().&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Constructs the session path using the projectId and sessionId.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defines a text query request containing the query to send to the &lt;br&gt;
Dialogflow agent and the language code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sends the request to Dialogflow using sessionClient.detectIntent() and awaits the response.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Extracts the fulfillment text and query text from the response, and returns an object with the user query and the bot's response if a fulfillment text is present. Otherwise, it reports an error indicating that no intent matched the query.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An Express.js application is created using express() and stored in the app variable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A route is set up for the root URL ("/") using app.get(). The route is configured as an asynchronous function that performs the following tasks:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Calls the &lt;strong&gt;runSample()&lt;/strong&gt; function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Returns a JSON response with a success message and the result of the &lt;strong&gt;runSample()&lt;/strong&gt; function if successful.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logs any errors that occur during the process and returns a JSON response with an error message if there's a server error.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, this code sets up a route in an Express.js application that connects to Dialogflow, sends a query, and retrieves a response. The result is then returned as a JSON response to the client. Any errors that occur during the process are logged and returned as error messages.&lt;/p&gt;

&lt;p&gt;Our &lt;code&gt;app.js&lt;/code&gt; file should now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dotenv = require("dotenv").config()

const express = require("express")

const dialogflow = require("dialogflow")

const uuid = require('uuid');

const PORT = process.env.PORT || 7000

const projectId = process.env.PROJECT_ID || "small-talk-2-2-2-2"
const credentialsPath = process.env.CREDENTIALS_PATH || "./small-talk-2-2-2-2-5b3b3b3b3b3b.json"

process.env.GOOGLE_APPLICATION_CREDENTIALS = credentialsPath

async function runSample() {
  // A unique identifier for the given session
  const sessionId = uuid.v4();

  // Create a new session
  const sessionClient = new dialogflow.SessionsClient();
  const sessionPath = sessionClient.sessionPath(projectId, sessionId);

  // The text query request.
  const request = {
    session: sessionPath,
    queryInput: {
      text: {
        // The query to send to the dialogflow agent
        text: 'Who are you?',
        // The language used by the client (en-US)
        languageCode: 'en-US',
      },
    },
  };

  // Send request and log result
  const responses = await sessionClient.detectIntent(request);
  const result = responses[0].queryResult.fulfillmentText;
  const queryText = responses[0].queryResult.queryText;

  if (result) {
        return {
            user: queryText,
            bot: result
        }
} else {
    throw new Error("No intent matched")
  }
}

const app = express()

app.get("/", async (req, res) =&amp;gt; {
    try {
        const result = await runSample()
        return res.status(200).json({message: "Success", result})
    } catch (error) {
        console.log(error)
        return res.status(500).json({message: "Server error", error})
    }
})

const start = async () =&amp;gt; {
    try {
        app.listen(PORT, () =&amp;gt; {
            console.log(`Server has been started on port ${PORT}`)
        })
    } catch (error) {
      console.log(error)  
    }
}

start()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's start our server and test it. If everything was set up correctly, we will get this response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk0ys2jkj5p63eqiioz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk0ys2jkj5p63eqiioz2.png" alt="API RESPONSE" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The response is a JSON object with two key-value pairs:&lt;/p&gt;

&lt;p&gt;"&lt;strong&gt;message&lt;/strong&gt;": "Success" - This indicates that the request to Dialogflow was successful and a response was received.&lt;/p&gt;

&lt;p&gt;"&lt;strong&gt;result&lt;/strong&gt;": This key holds another JSON object with two key-value pairs:&lt;/p&gt;

&lt;p&gt;"&lt;strong&gt;user&lt;/strong&gt;": "Who are you?" - This represents the user's query or input sent to Dialogflow. In this case, the user asked, "Who are you?"&lt;br&gt;
"&lt;strong&gt;bot&lt;/strong&gt;": "You can call me Sofia. How can I help you today?" - This is the response generated by Dialogflow's natural language processing. It indicates that the bot's name is Sofia and asks how it can assist the user.&lt;/p&gt;

&lt;p&gt;Overall, the response demonstrates a successful interaction with Dialogflow, where the user's query is processed, and the bot responds with an appropriate answer. &lt;/p&gt;

&lt;p&gt;At this point, our API is fully functional and ready to be integrated with a user interface. However, to optimize the performance of our chatbot, continuous training is essential. This involves adding new intents and responses in the Dialogflow dashboard, allowing the chatbot to handle a wider range of queries and interactions.&lt;/p&gt;
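For a real user interface, the query text would come from the client instead of the hard-coded "Who are you?" inside runSample(). A minimal sketch of building the detectIntent request from a parameter (the helper name is my own, not part of the Dialogflow API):

```javascript
// Hypothetical helper: build the detectIntent request object for any
// user query, rather than the fixed text used in runSample().
function buildQueryRequest(sessionPath, text, languageCode = "en-US") {
  return {
    session: sessionPath,
    queryInput: {
      text: { text, languageCode },
    },
  };
}

// e.g. inside the route handler:
//   const request = buildQueryRequest(sessionPath, req.query.text)
//   const responses = await sessionClient.detectIntent(request)
```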

&lt;p&gt;Let us change the user request and see what response we receive from the agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Request&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwdqj41j3hcun1fhrrb0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwdqj41j3hcun1fhrrb0.png" alt="request changed" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zn3z1cxs8nvmeofksgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zn3z1cxs8nvmeofksgf.png" alt="new response" width="636" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What happens when the user asks a question that our agent does not understand? &lt;br&gt;
Well, there are predefined fallback responses like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9my6e049cew7h1kggbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9my6e049cew7h1kggbf.png" alt="unknown" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dialogflow offers extensive possibilities for customization and integration. We have the flexibility to incorporate the chatbot into various platforms such as Facebook, WhatsApp, and more. By leveraging these integrations, we can reach users across different channels and provide them with personalized conversational experiences.&lt;/p&gt;

&lt;p&gt;Additionally, Dialogflow supports multi-language capabilities, enabling us to serve users in their preferred languages. Activating multi-language support ensures that our chatbot can communicate effectively with users from diverse linguistic backgrounds.&lt;/p&gt;

&lt;p&gt;It's crucial to emphasize that the effectiveness of our chatbot relies on the quality of information we provide. Regularly updating and refining the training data, intents, and responses helps our agent improve its understanding and accuracy in delivering meaningful and helpful interactions.&lt;/p&gt;

&lt;p&gt;In summary, by continuously training our agent and exploring the extensive features of Dialogflow, we can unlock a multitude of possibilities to enhance the functionality and reach of our chatbot. Taking advantage of different integration options, such as social media platforms and multi-language support, allows us to create engaging user experiences and effectively address the needs of a diverse user base. Remember, the success of our chatbot lies in the knowledge and information we feed it, so ongoing improvement and adaptation are key.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>ai</category>
      <category>google</category>
    </item>
    <item>
      <title>GETTING STARTED WITH CACHING: USING REDIS AND TYPESCRIPT</title>
      <dc:creator>STEVE</dc:creator>
      <pubDate>Tue, 04 Jul 2023 18:18:34 +0000</pubDate>
      <link>https://dev.to/realsteveig/getting-started-with-caching-using-redis-and-typescript-2c4n</link>
      <guid>https://dev.to/realsteveig/getting-started-with-caching-using-redis-and-typescript-2c4n</guid>
      <description>&lt;p&gt;&lt;strong&gt;INTRODUCTION:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial, we are going to learn the fundamentals of caching and how to implement them using Redis and TypeScript/Node.js. But before we begin, let's start with the basics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Caching?&lt;/strong&gt; &lt;br&gt;
Caching is a technique used in computer systems and software applications to store and retrieve data quickly. It involves storing a copy of frequently accessed or expensive-to-compute data in a temporary storage location, called a cache, so that future requests for the same data can be served faster.&lt;/p&gt;

&lt;p&gt;The purpose of caching is to improve the performance and efficiency of a system by reducing the time and resources required to fetch data from its original source. Instead of retrieving the data from the original location, which may involve time-consuming operations like disk access or network communication, the data is retrieved from the cache, which is typically located closer to the requester and provides faster access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PREREQUISITES&lt;/strong&gt; &lt;br&gt;
To follow along with this tutorial, you will need a basic understanding of Node.js and TypeScript. I will explain each required step thoroughly to guide you through the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Redis?&lt;/strong&gt;&lt;br&gt;
When it comes to caching, Redis stands out as an exceptional choice for several reasons. Its features and capabilities make it an ideal solution for optimizing performance and improving overall system efficiency.&lt;/p&gt;

&lt;p&gt;Redis is renowned for its incredible speed and performance. By storing data in memory, Redis enables lightning-fast data access and retrieval, making it suitable for applications that require real-time data processing and high throughput. With the ability to handle millions of requests per second and provide low-latency response times, Redis excels in scenarios where speed is of utmost importance.&lt;/p&gt;

&lt;p&gt;By leveraging Redis as a cache, applications can store frequently accessed data in memory, eliminating the need for repetitive and resource-intensive operations. This results in improved performance, reduced response times, and a more seamless user experience.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will employ Redis for caching data retrieved from an external API with a noticeably slow response time. We will store this data in our cache and utilize it for subsequent requests made to the API. However, we will also address the scenario when the data from the original API undergoes changes. We will implement a mechanism to ensure that our cache consistently provides up-to-date data from the API, ensuring that our system remains synchronized with the latest information.&lt;/p&gt;
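The pattern just described is commonly called cache-aside: check the cache first, fall back to the slow source on a miss, and expire entries so upstream changes eventually show up. A minimal in-memory sketch of the idea (a plain Map stands in for Redis here purely for illustration; the real Redis client is wired up below):

```javascript
// Cache-aside sketch: the Map stands in for Redis, and ttlMs plays the
// role of Redis's expiry (EX) option.
const cache = new Map();

async function getWithCache(key, fetchFn, ttlMs) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // cache hit: skip the slow source entirely
  }
  const value = await fetchFn(); // cache miss: call the slow API
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

With a sensible TTL, repeated requests within the window are served from memory, while stale entries are refetched once they expire.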

&lt;p&gt;&lt;strong&gt;Let's get started.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, let's initialize our Node.js project and install the required dependencies.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm init -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm i express axios cors dotenv redis&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now install TypeScript and the type definitions. Note that &lt;code&gt;axios&lt;/code&gt;, &lt;code&gt;dotenv&lt;/code&gt;, and &lt;code&gt;redis&lt;/code&gt; ship with their own type definitions, so only Express and CORS need separate &lt;code&gt;@types&lt;/code&gt; packages:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm i -D typescript @types/express @types/cors&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Having a tsconfig.json file ensures consistency in the compilation process across different environments and allows for easy project configuration and maintenance.&lt;/p&gt;

&lt;p&gt;So go ahead and create one using :&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tsc --init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then configure your &lt;strong&gt;outDir&lt;/strong&gt; and &lt;strong&gt;rootDir&lt;/strong&gt; options to match the following structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2nh3nrx49k5y1kd7n1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2nh3nrx49k5y1kd7n1q.png" alt="file structure" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don't forget to watch for changes in your TypeScript code by running &lt;code&gt;tsc -w&lt;/code&gt; in a dedicated terminal.&lt;/p&gt;

&lt;p&gt;Next, let us connect to Redis. To do this, you have two options.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect to a local Redis server via localhost.&lt;/li&gt;
&lt;li&gt;Connect to a cloud instance created via &lt;a href="https://app.redislabs.com/#/" rel="noopener noreferrer"&gt;Redis Cloud&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this tutorial, we will go with option 2. (If you don't have an account yet, create one and set up a database instance.)&lt;/p&gt;

&lt;p&gt;Now let's connect to our database. Our &lt;code&gt;./src/config/connect.ts&lt;/code&gt; file should now have the following lines of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { createClient } from 'redis';
import dotenv from 'dotenv';

dotenv.config();

export const client = createClient({
    password: process.env.REDIS_PASSWORD,
    socket: {
        host: process.env.REDIS_HOST,
        port: parseInt(process.env.REDIS_PORT || '6379', 10)
    }
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above sets up a Redis client using the redis package, loads environment variables from a .env file using dotenv, and exports the Redis client for use in other parts of the codebase. The environment variables are used to configure the Redis connection, including the host, port, and password. (These variables are available on your database instance once you create an account here : &lt;a href="https://app.redislabs.com/#/" rel="noopener noreferrer"&gt;Redis-cloud&lt;/a&gt; )&lt;/p&gt;
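&lt;p&gt;For reference, the &lt;code&gt;.env&lt;/code&gt; file would look something like this; every value below is a placeholder, so copy the real ones from your own Redis Cloud database page:&lt;/p&gt;

```shell
# Placeholder values - use the credentials from your own Redis Cloud instance
REDIS_HOST=redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com
REDIS_PORT=12345
REDIS_PASSWORD=your-database-password
PORT=7000
```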

&lt;p&gt;Next, let's import the &lt;code&gt;client&lt;/code&gt; variable in our &lt;code&gt;app.ts&lt;/code&gt; file and connect to the database.&lt;/p&gt;

&lt;p&gt;Configure your &lt;code&gt;app.ts&lt;/code&gt; file to look like this :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import dotenv from "dotenv"
import express, { Request, Response } from "express"
import axios from "axios"
import { client } from "./config/connect"
import cors from "cors"

dotenv.config()

const app : express.Application = express()

const PORT = process.env.PORT || 7000

app.use(cors())
app.use(express.json())

const start = async () =&amp;gt; {
    try {
        await client.connect()
        app.listen(PORT, () =&amp;gt; {
            console.log(`Server is connected to redis and is listening on port ${PORT}`)
        })
    } catch (error) {
        console.log(error)
    }
}

start()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above imports all the installed dependencies we will need for this project, sets up an Express server, configures it to handle JSON requests, enables CORS, connects to the Redis client, and starts the server to listen for incoming requests if the connection is successful.&lt;/p&gt;

&lt;p&gt;Now for this tutorial, I have created and deployed an API whose response is intentionally delayed by 5 seconds to demonstrate how caching can be useful in a real-world scenario.&lt;/p&gt;

&lt;p&gt;So in our &lt;code&gt;app.ts&lt;/code&gt; file, we will create two functions that make API calls like this :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function isDataModified () {
    const response = await axios.get("https://pleasant-newt-girdle.cyclic.app/api/modified")
    return response.data.modified
}

async function getAllUsers () {
    const response = await axios.get("https://pleasant-newt-girdle.cyclic.app/api/users")
    return response.data
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's examine each of these functions separately. The first function sends a request to an endpoint that interacts with its database. It retrieves a &lt;code&gt;boolean&lt;/code&gt; result, which evaluates to true if there have been any recent changes. These changes encompass scenarios such as the addition of a new item &lt;code&gt;POST&lt;/code&gt;, modification of an existing item &lt;code&gt;PUT&lt;/code&gt;, or deletion of an item &lt;code&gt;DELETE&lt;/code&gt;. In the absence of any changes, the result will be false.&lt;/p&gt;

&lt;p&gt;The second function, on the other hand, straightforwardly retrieves a list of all items (in this case, users) stored in the database of that particular API.&lt;/p&gt;

&lt;p&gt;Now, let's understand the rationale behind this approach. Why are we making two requests to an API? Remember when we mentioned the need to cache only recent information? Exactly! So, what happens when a user decides to update specific details on their profile? In such cases, we also need to update the cache, right? Absolutely correct. That's precisely what the first function accomplishes. Before serving our response, we verify if any changes have been made in the API's database to ensure that our cache is up-to-date.&lt;/p&gt;
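&lt;p&gt;The branching just described can be distilled into a tiny pure function. This is only an illustrative sketch; the names are mine and not part of any API:&lt;/p&gt;

```typescript
// Toy sketch of the cache decision described above.
// "modified" is what isDataModified() returns; "cached" is whatever
// the cache lookup returned (null when the key is absent).
type Decision = { source: "api" | "cache"; isCached: boolean };

function decide(modified: boolean, cached: string | null): Decision {
    // Any change in the upstream database invalidates the cache.
    if (modified) {
        return { source: "api", isCached: false };
    }
    // No change: serve from the cache when we have an entry...
    if (cached !== null) {
        return { source: "cache", isCached: true };
    }
    // ...otherwise fall through to the API and populate the cache.
    return { source: "api", isCached: false };
}
```

&lt;p&gt;The endpoint we build next follows exactly this decision table, with the Redis client providing the cache reads and writes.&lt;/p&gt;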

&lt;p&gt;Now let's create an endpoint in this API to store and retrieve information from our cache. Update the &lt;code&gt;app.ts&lt;/code&gt; file with this endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.use("/get-users", async (req : Request, res : Response) =&amp;gt; {

    let result;
    let isCahed;

    try {

        const data = await isDataModified()

        if (data === true) {
            result = await getAllUsers()
            isCahed = false
            await client.set("all_users", JSON.stringify(result))
        } 

        else {

            const isCahedInRedis = await client.get("all_users");

            if (isCahedInRedis) {

                isCahed = true
                result = JSON.parse(isCahedInRedis)
            }

           else {
                result = await getAllUsers()
                isCahed = false

                await client.set("all_users", JSON.stringify(result))
           }

        }

        return res.status(200).json({
            isCahed,
            result : result
        })
    } catch (error) {
        console.log(error)
        return res.status(500).json({error})
    }
})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's explain what is happening here step by step:&lt;/p&gt;

&lt;p&gt;Inside the route handler function:&lt;/p&gt;

&lt;p&gt;a. Two variables, &lt;code&gt;result&lt;/code&gt; and &lt;code&gt;isCached&lt;/code&gt;, are created to store the API request result and caching status.&lt;/p&gt;

&lt;p&gt;b. The &lt;code&gt;isDataModified()&lt;/code&gt; function is called to check if there have been any modifications in the database. The result is stored in the &lt;code&gt;data&lt;/code&gt; variable.&lt;/p&gt;

&lt;p&gt;c. If modifications are detected (when data is true), it means the cache needs to be updated. The &lt;code&gt;getAllUsers()&lt;/code&gt; function is called to retrieve all user data from the API. The result is assigned to the &lt;code&gt;result&lt;/code&gt; variable, and the &lt;code&gt;isCached&lt;/code&gt; variable is set to &lt;code&gt;false&lt;/code&gt;. The retrieved data is then stored in the Redis cache using the &lt;code&gt;client.set()&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;d. If no modifications are detected, it means the cached data is still valid. The code checks if the data is already cached in Redis using the &lt;code&gt;client.get()&lt;/code&gt; method. If cached data exists, it is assigned to the &lt;code&gt;result&lt;/code&gt; variable, and the &lt;code&gt;isCached&lt;/code&gt; variable is set to true.&lt;/p&gt;

&lt;p&gt;e. If no cached data exists, the &lt;code&gt;getAllUsers()&lt;/code&gt; function is called to retrieve the user data from the API. The result is assigned to the &lt;code&gt;result&lt;/code&gt; variable, and the &lt;code&gt;isCached&lt;/code&gt; variable is set to &lt;code&gt;false&lt;/code&gt;. The retrieved data is then stored in the Redis cache using the &lt;code&gt;client.set()&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;f. Finally, the code sends a JSON response with a status of 200. The response includes the isCached status and the result data.&lt;/p&gt;

&lt;p&gt;If any errors occur during the process, they are caught in the catch block. A JSON response with a status of 500 and the error message is returned.&lt;/p&gt;

&lt;p&gt;To summarize, this code sets up an endpoint that fetches user data from an API. It checks if the data has been modified, updates the cache if needed, and returns the cached data or retrieves fresh data from the API.&lt;/p&gt;

&lt;p&gt;And that pretty much does the job. Our &lt;code&gt;app.ts&lt;/code&gt; file should now look like this :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import dotenv from "dotenv"
import express, { Request, Response } from "express"
import axios from "axios"
import { client } from "./config/connect"
import cors from "cors"

dotenv.config()

const app : express.Application = express()

const PORT = process.env.PORT || 7000

app.use(cors())
app.use(express.json())

async function isDataModified () {
    const response = await axios.get("https://pleasant-newt-girdle.cyclic.app/api/modified")
    return response.data.modified
}

async function getAllUsers () {
    const response = await axios.get("https://pleasant-newt-girdle.cyclic.app/api/users")
    return response.data
}

app.use("/get-users", async (req : Request, res : Response) =&amp;gt; {

    let result;
    let isCached;

    try {

        const data = await isDataModified()

        if (data === true) {
            result = await getAllUsers()
            isCached = false
            await client.set("all_users", JSON.stringify(result))
        } 

        else {

            const isCachedInRedis = await client.get("all_users");

            if (isCachedInRedis) {

                isCached = true
                result = JSON.parse(isCachedInRedis)
            }

           else {
                result = await getAllUsers()
                isCached = false

                await client.set("all_users", JSON.stringify(result))
           }

        }

        return res.status(200).json({
            isCached,
            result : result
        })
    } catch (error) {
        console.log(error)
        return res.status(500).json({error})
    }
})

const start = async () =&amp;gt; {
    try {
        await client.connect()
        app.listen(PORT, () =&amp;gt; {
            console.log(`Server is connected to redis and is listening on port ${PORT}`)
        })
    } catch (error) {
        console.log(error)
    }
}

start()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the server and let us test our endpoints :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpqy48kf63t33qf33vy4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpqy48kf63t33qf33vy4.png" alt="GET USERS" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first thing you will notice when you hit the endpoint &lt;code&gt;http://localhost:7000/get-users&lt;/code&gt; is how long it takes to get a response. As I mentioned before, that is intentional: it simulates how CPU-intensive, real-world applications behave when caching is not implemented.&lt;/p&gt;

&lt;p&gt;Next, you will notice that the first property in the response, &lt;code&gt;isCached&lt;/code&gt;, reads &lt;code&gt;false&lt;/code&gt;. This means the data did not exist in our cache but has just been added. How can we confirm that? Well, let's make the same request again by simply refreshing the browser. This is what we get: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee0nuygz5ijj5r8v8tcu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee0nuygz5ijj5r8v8tcu.png" alt="IsCached True" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice how the response time drops as you keep refreshing the page? Also notice that the &lt;code&gt;isCached&lt;/code&gt; property now reads &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;
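&lt;p&gt;You can reproduce this speed-up in isolation with a tiny in-memory stand-in for Redis. The sketch below is purely illustrative and not part of the project code:&lt;/p&gt;

```typescript
// In-memory stand-in for the Redis cache, for illustration only.
const cache = new Map();
let upstreamCalls = 0;

// Pretend this is the slow external API.
async function slowFetchUsers() {
    upstreamCalls += 1;
    await new Promise((resolve) => setTimeout(resolve, 50)); // simulated latency
    return [{ id: 1, name: "Ada" }];
}

async function getUsersCached() {
    const hit = cache.get("all_users");
    if (hit !== undefined) {
        // Cache hit: no latency, no upstream call.
        return { isCached: true, result: JSON.parse(hit) };
    }
    // Cache miss: pay the latency once, then store the serialized result.
    const result = await slowFetchUsers();
    cache.set("all_users", JSON.stringify(result));
    return { isCached: false, result };
}
```

&lt;p&gt;The first call pays the simulated latency and populates the cache; every later call is answered from memory, which is exactly what the screenshots above show at the HTTP level.&lt;/p&gt;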

&lt;p&gt;To test what will happen when we alter the database, I invite you to modify the response by creating, editing, or deleting users using any of the endpoints below : &lt;/p&gt;

&lt;p&gt;Create user (POST): &lt;code&gt;https://pleasant-newt-girdle.cyclic.app/api/user&lt;/code&gt;&lt;br&gt;
Update user (PUT)/Delete user (DELETE): &lt;code&gt;https://pleasant-newt-girdle.cyclic.app/api/user/:id&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Every time you make one of these requests successfully (POST, PUT, or DELETE), the &lt;code&gt;isCached&lt;/code&gt; value becomes &lt;code&gt;false&lt;/code&gt;, prompting a delayed response while current data is fetched and the cache is updated. Remember, you can always refresh &lt;code&gt;http://localhost:7000/get-users&lt;/code&gt; to watch the changes happen live.&lt;/p&gt;

&lt;p&gt;Here is an example of how to make these requests on Postman. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CREATE NEW USER&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmwra48i8za2zknmzuob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmwra48i8za2zknmzuob.png" alt="CREATE A NEW USER" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UPDATE USER&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw5ad1r1s4wm3zmavkru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw5ad1r1s4wm3zmavkru.png" alt="UPADTE USER" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DELETE USER&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfx6slnywui5zfnm50tx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfx6slnywui5zfnm50tx.png" alt="DELETE USER" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If, for some reason, the API &lt;code&gt;https://pleasant-newt-girdle.cyclic.app/api/user&lt;/code&gt; ceases to exist in the future, you can try the similar public API &lt;code&gt;https://reqres.in/api/users?delay=3&lt;/code&gt;. Its response time is controlled by the &lt;code&gt;delay&lt;/code&gt; query parameter: with &lt;code&gt;delay=3&lt;/code&gt;, the API waits three seconds before responding, so you can practice the same caching mechanism against it.&lt;/p&gt;

&lt;p&gt;But wait, how do we actually visualize the data in our cache? Well, you can download a tool like &lt;strong&gt;RedisInsight&lt;/strong&gt;, connect your database instance, and voila, you can now visualize and query your cache.&lt;/p&gt;

&lt;p&gt;Here is a link to the complete code on &lt;a href="https://github.com/REALSTEVEIG/REDIS-API2" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Thank you for sticking around till the end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FINAL NOTES:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Congratulations on making it this far. By now, you should have a solid understanding of how Redis can be leveraged as a powerful caching solution. However, caching with Redis extends beyond just speeding up data retrieval. In this final note, let's explore some additional use cases and discuss the remarkable usefulness of caching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session Caching&lt;/strong&gt;: Redis is an excellent choice for storing session data. By caching session information, you can achieve high-performance session management, improve user experience, and reduce database load. Redis's ability to set expiration times on keys makes it perfect for managing session timeouts and automatically cleaning up expired sessions.&lt;/p&gt;
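&lt;p&gt;Expiry works roughly like the toy version below. In real Redis you would simply give SET an expiry in seconds and let the server evict the key; this in-memory sketch with an injectable clock is an illustrative assumption, not a Redis API:&lt;/p&gt;

```typescript
// Toy TTL cache; Redis provides this natively (SET key value EX seconds).
// The clock is injectable so the behaviour is easy to test.
function makeTtlCache(now: () => number) {
    const store = new Map();
    return {
        set(key: string, value: string, ttlMs: number) {
            store.set(key, { value, expiresAt: now() + ttlMs });
        },
        get(key: string): string | null {
            const entry = store.get(key);
            if (entry === undefined) return null;
            if (now() >= entry.expiresAt) {
                store.delete(key); // lazy eviction, like an expired Redis key
                return null;
            }
            return entry.value;
        },
    };
}
```

&lt;p&gt;This is what makes Redis a good fit for session timeouts: set the key once with a TTL, and stale sessions disappear on their own.&lt;/p&gt;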

&lt;p&gt;&lt;strong&gt;Full-page Caching&lt;/strong&gt;: Redis can be used to cache entire HTML pages, eliminating the need to regenerate them on each request. By serving cached pages directly from Redis, you can dramatically reduce the response time and alleviate the load on your application servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result Caching&lt;/strong&gt;: Redis enables you to cache the results of complex or time-consuming computations. For example, if your application involves heavy calculations or data processing, you can store the computed results in the Redis cache and retrieve them when needed, avoiding redundant computations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leaderboards and Counters&lt;/strong&gt;: Redis's sorted sets and atomic increment operations make it an excellent choice for implementing leaderboards, vote counters, or popularity rankings. By caching these frequently changing metrics, you can efficiently update and display them in real-time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pub/Sub Messaging&lt;/strong&gt;: Redis supports Publish/Subscribe (Pub/Sub) messaging, allowing you to build real-time communication channels, notifications, and event-driven architectures. By caching messages or maintaining subscription lists, Redis facilitates the implementation of scalable, high-performance messaging systems.&lt;/p&gt;

&lt;p&gt;The usefulness of caching with Redis cannot be overstated. By intelligently caching data, you can achieve significant performance improvements, reduce latency, and enhance the overall scalability of your applications. However, it's crucial to consider cache invalidation and maintain data consistency when working with caching systems.&lt;/p&gt;

</description>
      <category>backend</category>
      <category>typescript</category>
      <category>redis</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Node.js and TypeScript Tutorial: Build a rest API with Typescript, NodeJS, and a file-based storage system.</title>
      <dc:creator>STEVE</dc:creator>
      <pubDate>Mon, 19 Jun 2023 20:57:52 +0000</pubDate>
      <link>https://dev.to/realsteveig/nodejs-and-typescript-tutorial-build-a-rest-api-with-typescript-nodejs-and-a-file-based-storage-system-2l61</link>
      <guid>https://dev.to/realsteveig/nodejs-and-typescript-tutorial-build-a-rest-api-with-typescript-nodejs-and-a-file-based-storage-system-2l61</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Welcome to my blog! In this tutorial, I will guide you through the process of building a robust micro e-commerce API using Node.js, Express, and TypeScript. Together, we will explore various features and techniques that will empower you to create a powerful API for your e-commerce applications.&lt;/p&gt;

&lt;p&gt;One of our key decisions in this project was to implement a file-based storage system instead of relying on traditional databases like MongoDB. This approach offers simplicity and ease of implementation, making it ideal for smaller-scale applications or scenarios where a full-fledged database management system may be unnecessary.&lt;/p&gt;

&lt;p&gt;The tutorial will cover essential topics such as user management, product handling, and authentication.&lt;/p&gt;

&lt;p&gt;You'll gain hands-on experience working with features that span both user and product data, demonstrating how these entities interact within an e-commerce API. By the end of this tutorial, you'll have a comprehensive understanding of building a powerful API that enables seamless interactions with user and product resources.&lt;/p&gt;

&lt;p&gt;So, join me on this exciting journey as we dive into creating a micro e-commerce API using Node.js, Express, and TypeScript.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Started with TypeScript in Node.js&lt;/strong&gt;&lt;br&gt;
Start by creating a project directory that looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xldg3z8mwzg0fpfmc1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xldg3z8mwzg0fpfmc1h.png" alt=" " width="360" height="768"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, initialize a Node.js project within the project directory by creating a package.json file with default settings, using this command : &lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm init -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Project Dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your Node.js project requires a couple of dependencies to create a secure Express server with TypeScript. Install them like so:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm i express dotenv helmet cors http-status-codes uuid bcryptjs&lt;/code&gt;&lt;br&gt;
To use TypeScript, you also need to install it as a development dependency:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm i -D typescript&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To use TypeScript effectively, install type definitions for the packages that don't bundle their own (&lt;code&gt;dotenv&lt;/code&gt;, &lt;code&gt;helmet&lt;/code&gt;, and &lt;code&gt;http-status-codes&lt;/code&gt; already ship with types):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i -D @types/express @types/cors @types/uuid @types/bcryptjs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Populate the .env hidden file with the following variable that defines the port your server can use to listen for requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PORT=7000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, locate the app.ts file in the root of the src folder, import the project dependencies you installed earlier, and load environment variables from the local .env file using the dotenv.config() method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express from "express"
import * as dotevnv from "dotenv"
import cors from "cors"
import helmet from "helmet"

dotevnv.config()

if (!process.env.PORT) {
    console.log(`No port value specified...`)
}

const PORT = parseInt(process.env.PORT as string, 10)

const app = express()

app.use(express.json())
app.use(express.urlencoded({extended : true}))
app.use(cors())
app.use(helmet())

app.listen(PORT, () =&amp;gt; {
    console.log(`Server is listening on port ${PORT}`)
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code snippet, a Node.js application is being set up using the Express framework. Here's a breakdown of what's happening:&lt;/p&gt;

&lt;p&gt;The required modules are imported:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;express&lt;/strong&gt; is imported as the main framework for building the web application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;dotenv&lt;/strong&gt; is imported to handle environment variables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cors&lt;/strong&gt; is imported to enable Cross-Origin Resource Sharing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;helmet&lt;/strong&gt; is imported to add security headers to HTTP responses.&lt;/p&gt;

&lt;p&gt;The code checks if the PORT environment variable is defined. If not, a message is logged to the console.&lt;/p&gt;

&lt;p&gt;The PORT variable is parsed from a string to an integer using parseInt().&lt;/p&gt;

&lt;p&gt;An instance of the Express application is created using express() and assigned to the app variable.&lt;/p&gt;

&lt;p&gt;Middleware functions are added to the Express application:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;express.json()&lt;/strong&gt; is used to parse JSON bodies of incoming requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;express.urlencoded({extended : true})&lt;/strong&gt; is used to parse URL-encoded bodies of incoming requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cors()&lt;/strong&gt; is used to enable Cross-Origin Resource Sharing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;helmet()&lt;/strong&gt; is used to enhance the security of the application by setting various HTTP headers.&lt;/p&gt;

&lt;p&gt;The Express application starts listening on the specified PORT by calling app.listen(). Once the server is running, a message indicating the port number is logged to the console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improve TypeScript Development Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The TypeScript compilation process can increase the bootstrapping time of an application. However, you don't need to recompile the entire project whenever there's a change in its source code. You can set up ts-node-dev to significantly decrease the time it takes to restart your application when you make a change.&lt;/p&gt;

&lt;p&gt;Start by installing this package to power up your development workflow:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm i -D ts-node-dev&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;ts-node-dev restarts a target Node.js process when any of the required files change. However, it shares the TypeScript compilation process between restarts, which can significantly increase restart speed.&lt;/p&gt;

&lt;p&gt;You can create a dev npm script in package.json to run your server. Update your package.json file like this. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "typescript-nodejs",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;amp;&amp;amp; exit 1",
    "dev": "ts-node-dev --pretty --respawn ./src/app.ts"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@types/nanoid": "^3.0.0",
    "@types/uuid": "^9.0.2",
    "bcryptjs": "^2.4.3",
    "cors": "^2.8.5",
    "dotenv": "^16.3.0",
    "express": "^4.18.2",
    "helmet": "^7.0.0",
    "http-status-codes": "^2.2.0",
    "nanoid": "^4.0.2",
    "uuid": "^9.0.0"
  },
  "devDependencies": {
    "@types/bcryptjs": "^2.4.2",
    "@types/cors": "^2.8.13",
    "@types/dotenv": "^8.2.0",
    "@types/express": "^4.17.17",
    "@types/helmet": "^4.0.0",
    "@types/http-status-codes": "^1.2.0",
    "ts-node-dev": "^2.0.0"
  }
}&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's briefly break down the options that ts-node-dev takes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;--respawn&lt;/strong&gt;: Keep watching for changes after the script has exited.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;--pretty&lt;/strong&gt;: Use pretty diagnostic formatter (TS_NODE_PRETTY).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;./src/app.ts&lt;/strong&gt;: This is the application's entry file.&lt;/p&gt;

&lt;p&gt;Now, simply run the dev script to launch your project:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm run dev&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If everything is working correctly, you'll see a message indicating that the server is listening for requests on port 7000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model Data with TypeScript Interfaces&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before creating any routes, define the structure of the data you want to manage. Our user database will have the following properties:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;id&lt;/strong&gt; : (string) Unique identifier for the user record.&lt;br&gt;
&lt;strong&gt;username&lt;/strong&gt; : (string) Name of the user.&lt;br&gt;
&lt;strong&gt;email&lt;/strong&gt; : (string) Email address of the user.&lt;br&gt;
&lt;strong&gt;password&lt;/strong&gt; : (string) Password of the user (stored as a bcrypt hash).&lt;/p&gt;

&lt;p&gt;Populate src/users/user.interface.ts with the following definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export interface User {
    username : string,
    email : string,
    password : string
}

export interface UnitUser extends User {
    id : string
}

export interface Users {
    [key : string] : UnitUser
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code defines three TypeScript interfaces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
The User interface represents a basic user object with three properties:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;username&lt;/strong&gt;, which is a string representing the username of the user.&lt;br&gt;
&lt;strong&gt;email&lt;/strong&gt;, which is a string representing the email address of the user.&lt;br&gt;
&lt;strong&gt;password&lt;/strong&gt;, which is a string representing the password of the user.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
The UnitUser interface extends the User interface and adds an id property:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;id, which is a string representing the unique identifier of the user.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
The Users interface represents a collection of user objects with dynamic keys:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;[key: string]&lt;/strong&gt; indicates that the keys of the Users object can be any string.&lt;br&gt;
The values of the Users object are of type UnitUser, which means each user object in the collection should conform to the UnitUser interface.&lt;br&gt;
In simpler terms, these interfaces define the structure and types of user objects. The User interface defines the basic properties of a user, while the UnitUser interface adds an id property to represent a user with a unique identifier. The Users interface represents a collection of user objects, where the keys are strings and the values are UnitUser objects.&lt;/p&gt;

&lt;p&gt;Next, we will create the logic for our data storage (you can call it a database if you like).&lt;br&gt;
Populate src/users/user.database.ts with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { User, UnitUser, Users } from "./user.interface";
import bcrypt from "bcryptjs"
import {v4 as random} from "uuid"
import fs from "fs"

let users: Users = loadUsers() 

function loadUsers () : Users {
  try {
    const data = fs.readFileSync("./users.json", "utf-8")
    return JSON.parse(data)
  } catch (error) {
    console.log(`Error ${error}`)
    return {}
  }
}

function saveUsers () {
  try {
    fs.writeFileSync("./users.json", JSON.stringify(users), "utf-8")
    console.log(`User saved successfully!`)
  } catch (error) {
    console.log(`Error : ${error}`)
  }
}

export const findAll = async (): Promise&amp;lt;UnitUser[]&amp;gt; =&amp;gt; Object.values(users);

export const findOne = async (id: string): Promise&amp;lt;UnitUser&amp;gt; =&amp;gt; users[id];

export const create = async (userData: UnitUser): Promise&amp;lt;UnitUser | null&amp;gt; =&amp;gt; {

  let id = random()

  let check_user = await findOne(id);

  while (check_user) {
    id = random()
    check_user = await findOne(id)
  }

  const salt = await bcrypt.genSalt(10);

  const hashedPassword = await bcrypt.hash(userData.password, salt);

  const user : UnitUser = {
    id : id,
    username : userData.username,
    email : userData.email,
    password: hashedPassword
  };

  users[id] = user;

  saveUsers()

  return user;
};

export const findByEmail = async (user_email: string): Promise&amp;lt;null | UnitUser&amp;gt; =&amp;gt; {

  const allUsers = await findAll();

  const getUser = allUsers.find(result =&amp;gt; user_email === result.email);

  if (!getUser) {
    return null;
  }

  return getUser;
};

export const comparePassword = async (email : string, supplied_password : string) : Promise&amp;lt;null | UnitUser&amp;gt; =&amp;gt; {

    const user = await findByEmail(email)

    if (!user) {
        return null
    }

    const decryptPassword = await bcrypt.compare(supplied_password, user.password)

    if (!decryptPassword) {
        return null
    }

    return user
}

export const update = async (id : string, updateValues : User) : Promise&amp;lt;UnitUser | null&amp;gt; =&amp;gt; {

    const userExists = await findOne(id)

    if (!userExists) {
        return null
    }

    if(updateValues.password) {
        const salt = await bcrypt.genSalt(10)
        const newPass = await bcrypt.hash(updateValues.password, salt)

        updateValues.password = newPass
    }

    users[id] = {
        ...userExists,
        ...updateValues
    }

    saveUsers()

    return users[id]
}

export const remove = async (id : string) : Promise&amp;lt;null | void&amp;gt; =&amp;gt; {

    const user = await findOne(id)

    if (!user) {
        return null
    }

    delete users[id]

    saveUsers()
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me explain every function in the code above:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;loadUsers&lt;/strong&gt;: This function reads the data from a file called "users.json" using the fs module. It attempts to parse the data as JSON and returns it as the users object. If an error occurs during the process, it logs the error and returns an empty object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;saveUsers&lt;/strong&gt;: This function saves the users object to the "users.json" file by writing the JSON string representation of the users object using the fs module's writeFileSync method. If an error occurs during the process, it logs the error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;findAll&lt;/strong&gt;: This function returns a promise that resolves to an array of UnitUser objects. It uses Object.values(users) to extract the values (users) from the users object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;findOne&lt;/strong&gt;: This function takes an id parameter and returns a promise that resolves to the UnitUser object corresponding to that id in the users object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;create&lt;/strong&gt;: This function takes a userData object as input and returns a promise that resolves to the newly created UnitUser object. It generates a random id using the uuid package and checks if a user with that id already exists. If a user with that id exists, it generates a new id until a unique one is found. It then hashes the userData object's password using bcrypt and saves the hashed password in the UnitUser object. The UnitUser object is added to the users object, saved using saveUsers, and returned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;findByEmail&lt;/strong&gt;: This function takes a user_email parameter and returns a promise that resolves to a UnitUser object if a user with the specified email exists, or null otherwise. It retrieves all users using findAll and finds the user with the matching email using the find method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;comparePassword&lt;/strong&gt;: This function takes an email and supplied_password as parameters and returns a promise that resolves to a UnitUser object if the supplied password matches the user's stored password, or null otherwise. It calls findByEmail to retrieve the user by email and then uses bcrypt.compare to compare the hashed stored password with the supplied password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;update&lt;/strong&gt;: This function takes an id and updateValues as parameters and returns a promise that resolves to the updated UnitUser object if the user with the specified id exists. It checks if the user exists using findOne and updates the user's password if updateValues contains a new password. The user's properties are updated with the values from updateValues, and the users object is saved using saveUsers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;remove&lt;/strong&gt;: This function takes an id parameter and returns a promise that resolves to null if the user with the specified id doesn't exist, or void otherwise. It uses findOne to check if the user exists and deletes the user from the users object using the delete keyword. The updated users object is then saved using saveUsers.&lt;/p&gt;

&lt;p&gt;These functions serve as the methods our API can use to process and retrieve information from the database.&lt;/p&gt;
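&lt;p&gt;To illustrate the call pattern, here is a minimal in-memory sketch of how these helpers are used together (no bcrypt hashing, uuid, or file I/O here; the names and values are illustrative only):&lt;/p&gt;

```typescript
// Minimal in-memory sketch of the database helpers' call pattern.
// No hashing or file persistence -- purely illustrative.
interface UnitUser { id : string, username : string, email : string, password : string }

const users : { [key : string] : UnitUser } = {}

const create = async (userData : Omit<UnitUser, "id">) : Promise<UnitUser> => {
    const id = `user-${Object.keys(users).length + 1}` // stand-in for uuid's v4()
    const user : UnitUser = { id, ...userData }
    users[id] = user
    return user
}

const findByEmail = async (email : string) : Promise<UnitUser | null> => {
    return Object.values(users).find(u => u.email === email) ?? null
}

const main = async () => {
    const created = await create({ username : "jane", email : "jane@example.com", password : "secret" })
    const found = await findByEmail("jane@example.com")
    console.log(found?.id === created.id) // true
}

main()
```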

&lt;p&gt;Next, let us import all the required functions and modules into the routes file ./src/users/users.routes.ts and populate it as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express, {Request, Response} from "express"
import { UnitUser, User } from "./user.interface"
import {StatusCodes} from "http-status-codes"
import * as database from "./user.database"

export const userRouter = express.Router()

userRouter.get("/users", async (req : Request, res : Response) =&amp;gt; {
    try {
        const allUsers : UnitUser[] = await database.findAll()

        if (!allUsers.length) {
            return res.status(StatusCodes.NOT_FOUND).json({msg : `No users at this time..`})
        }

        return res.status(StatusCodes.OK).json({total_user : allUsers.length, allUsers})
    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})

userRouter.get("/user/:id", async (req : Request, res : Response) =&amp;gt; {
    try {
        const user : UnitUser = await database.findOne(req.params.id)

        if (!user) {
            return res.status(StatusCodes.NOT_FOUND).json({error : `User not found!`})
        }

        return res.status(StatusCodes.OK).json({user})
    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})

userRouter.post("/register", async (req : Request, res : Response) =&amp;gt; {
    try {
        const { username, email, password } = req.body

        if (!username || !email || !password) {
            return res.status(StatusCodes.BAD_REQUEST).json({error : `Please provide all the required parameters..`})
        }

        const user = await database.findByEmail(email) 

        if (user) {
            return res.status(StatusCodes.BAD_REQUEST).json({error : `This email has already been registered..`})
        }

        const newUser = await database.create(req.body)

        return res.status(StatusCodes.CREATED).json({newUser})

    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})

userRouter.post("/login", async (req : Request, res : Response) =&amp;gt; {
    try {
        const {email, password} = req.body

        if (!email || !password) {
            return res.status(StatusCodes.BAD_REQUEST).json({error : "Please provide all the required parameters.."})
        }

        const user = await database.findByEmail(email)

        if (!user) {
            return res.status(StatusCodes.NOT_FOUND).json({error : "No user exists with the email provided.."})
        }

        const comparePassword = await database.comparePassword(email, password)

        if (!comparePassword) {
            return res.status(StatusCodes.BAD_REQUEST).json({error : `Incorrect Password!`})
        }

        return res.status(StatusCodes.OK).json({user})

    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})


userRouter.put('/user/:id', async (req : Request, res : Response) =&amp;gt; {

    try {

        const {username, email, password} = req.body

        const getUser = await database.findOne(req.params.id)

        if (!username || !email || !password) {
            return res.status(StatusCodes.BAD_REQUEST).json({error : `Please provide all the required parameters..`})
        }

        if (!getUser) {
            return res.status(StatusCodes.NOT_FOUND).json({error : `No user with id ${req.params.id}`})
        }

        const updateUser = await database.update(req.params.id, req.body)

        return res.status(StatusCodes.OK).json({updateUser})
    } catch (error) {
        console.log(error)
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})

userRouter.delete("/user/:id", async (req : Request, res : Response) =&amp;gt; {
    try {
        const id = (req.params.id)

        const user = await database.findOne(id)

        if (!user) {
            return res.status(StatusCodes.NOT_FOUND).json({error : `User does not exist`})
        }

        await database.remove(id)

        return res.status(StatusCodes.OK).json({msg : "User deleted"})
    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what each function does:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;userRouter.get("/users")&lt;/strong&gt;: This function handles a GET request to "/users". It calls the findAll function from the database module to retrieve all users. If no users are found, it returns a 404 status code with a message. If users are found, it returns a 200 status code with the total number of users and the array of all users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;userRouter.get("/user/:id")&lt;/strong&gt;: This function handles a GET request to "/user/:id" where :id represents a specific user's ID. It calls the findOne function from the database module to retrieve the user with the specified ID. If the user is not found, it returns a 404 status code with an error message. If the user is found, it returns a 200 status code with the user object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;userRouter.post("/register")&lt;/strong&gt;: This function handles a POST request to "/register" for user registration. It extracts the username, email, and password from the request body. If any of these fields are missing, it returns a 400 status code with an error message. It calls the findByEmail function from the database module to check if the email is already registered. If the email is found, it returns a 400 status code with an error message. If the email is not found, it calls the create function from the database module to create a new user and returns a 201 status code with the newly created user object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;userRouter.post("/login")&lt;/strong&gt;: This function handles a POST request to "/login" for user login. It extracts the email and password from the request body. If any of these fields are missing, it returns a 400 status code with an error message. It calls the findByEmail function from the database module to check if the email exists. If the email is not found, it returns a 404 status code with an error message. If the email is found, it calls the comparePassword function from the database module to check if the supplied password matches the stored password. If the passwords don't match, it returns a 400 status code with an error message. If the passwords match, it returns a 200 status code with the user object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;userRouter.put('/user/:id')&lt;/strong&gt;: This function handles a PUT request to "/user/:id" where :id represents a specific user's ID. It extracts the username, email, and password from the request body. If any of these fields are missing, it responds with an error message. It calls the findOne function from the database module to check if the user with the specified ID exists. If the user is not found, it returns a 404 status code with an error message. If the user is found, it calls the update function from the database module to update the user's details and returns the updated user object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;userRouter.delete("/user/:id")&lt;/strong&gt;: This function handles a DELETE request to "/user/:id" where :id represents a specific user's ID. It extracts the id from the request parameters. It calls the findOne function from the database module to check if the user with the specified ID exists. If the user is not found, it returns a 404 status code with an error message. If the user is found, it calls the remove function from the database module to delete the user and returns a 200 status code with a success message.&lt;/p&gt;

&lt;p&gt;Together, these functions define the routes and corresponding logic for user-related operations such as retrieving all users, retrieving a specific user, registering a new user, logging in a user, updating a user's details, and deleting a user.&lt;/p&gt;
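&lt;p&gt;The validation flow of the /register route can be sketched as a pure function (a hypothetical helper, separate from the actual route, that mirrors its checks):&lt;/p&gt;

```typescript
// Hypothetical pure helper mirroring the /register route's checks.
interface RegisterBody { username? : string, email? : string, password? : string }

const validateRegister = (body : RegisterBody, emailTaken : boolean) : { status : number, error? : string } => {
    if (!body.username || !body.email || !body.password) {
        return { status : 400, error : "Please provide all the required parameters.." }
    }
    if (emailTaken) {
        return { status : 400, error : "This email has already been registered.." }
    }
    return { status : 201 }
}

console.log(validateRegister({ username : "jane" }, false).status) // 400 (missing fields)
console.log(validateRegister({ username : "jane", email : "jane@example.com", password : "pw" }, true).status) // 400 (email taken)
console.log(validateRegister({ username : "jane", email : "jane@example.com", password : "pw" }, false).status) // 201 (created)
```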

&lt;p&gt;Finally, to make API calls to these routes, we need to import the router into our app.ts file and update our code like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express from "express"
import * as dotenv from "dotenv"
import cors from "cors"
import helmet from "helmet"
import { userRouter } from "./users/users.routes"

dotenv.config()

if (!process.env.PORT) {
    console.log(`No port value specified...`)
    process.exit(1)
}

const PORT = parseInt(process.env.PORT as string, 10)

const app = express()

app.use(express.json())
app.use(express.urlencoded({extended : true}))
app.use(cors())
app.use(helmet())

app.use('/', userRouter)

app.listen(PORT, () =&amp;gt; {
    console.log(`Server is listening on port ${PORT}`)
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great! Now let's start our server and test our API using Postman.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;npm run dev&lt;/code&gt; in your terminal.&lt;/p&gt;

&lt;p&gt;Your terminal output should be similar to this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[INFO] 20:55:40 ts-node-dev ver. 2.0.0 (using ts-node ver. 10.9.1, typescript ver. 5.1.3)&lt;br&gt;
Server is listening on port 7000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Great! Let's make calls to our endpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Register users&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87z1my9odcntkp41g9o1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87z1my9odcntkp41g9o1.png" alt="register user" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Login users&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodtvpg6amsx1gvieitoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodtvpg6amsx1gvieitoi.png" alt="login user" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get all users&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F461eaufanry0f6ixrmaz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F461eaufanry0f6ixrmaz.png" alt="All users" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get a single user&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gs9yn8bx8e2diuj8u0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gs9yn8bx8e2diuj8u0v.png" alt="single user" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update user&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0mhoj641lnzb4uhsr0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0mhoj641lnzb4uhsr0b.png" alt="update user" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delete user :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flt3ioqaspu2rhybntwpb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flt3ioqaspu2rhybntwpb.png" alt="delete user" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: As you add users, new records are appended to your users.json file, which should look like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Users-data-storage-file&lt;/strong&gt; :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud38cjk7r3vm0e5opp7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud38cjk7r3vm0e5opp7y.png" alt="database" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;
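&lt;p&gt;For reference, users.json holds a single JSON object keyed by user id. A hypothetical one-user file has the following shape (the id and hash below are placeholders):&lt;/p&gt;

```typescript
// Hypothetical shape of users.json (values are placeholders, not real data).
const persisted = {
    "3b9a0f1c-0000-0000-0000-000000000000" : {
        id : "3b9a0f1c-0000-0000-0000-000000000000",
        username : "jane",
        email : "jane@example.com",
        password : "$2a$10$placeholderbcrypthash" // bcrypt hash, never plaintext
    }
}

console.log(JSON.stringify(persisted, null, 2))
```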

&lt;p&gt;Finally, let us create the logic and routes for our products. &lt;br&gt;
So let's duplicate the contents of our users interface, with minor changes, into the file &lt;code&gt;./src/products/product.interface.ts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export interface Product {
    name : string,
    price : number;
    quantity : number;
    image : string;
}

export interface UnitProduct extends Product {
    id : string
}

export interface Products {
    [key : string] : UnitProduct
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can reference the section on the Users interface for details about what these interfaces do.&lt;/p&gt;

&lt;p&gt;Next, just like in the &lt;code&gt;./src/users/user.database.ts&lt;/code&gt; file, let us populate &lt;code&gt;./src/products/product.database.ts&lt;/code&gt; with similar logic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Product, Products, UnitProduct } from "./product.interface";
import { v4 as random } from "uuid";
import fs from "fs";

let products: Products = loadProducts();

function loadProducts(): Products {
  try {
    const data = fs.readFileSync("./products.json", "utf-8");
    return JSON.parse(data);
  } catch (error) {
    console.log(`Error ${error}`);
    return {};
  }
}

function saveProducts() {
    try {
        fs.writeFileSync("./products.json", JSON.stringify(products), "utf-8");
        console.log("Products saved successfully!")
    } catch (error) {
        console.log("Error", error)
    }
}


export const findAll = async () : Promise&amp;lt;UnitProduct[]&amp;gt; =&amp;gt; Object.values(products)

export const findOne = async (id : string) : Promise&amp;lt;UnitProduct&amp;gt; =&amp;gt; products[id]

export const create = async (productInfo : Product) : Promise&amp;lt;null | UnitProduct&amp;gt; =&amp;gt; {

    let id = random()

    let product = await findOne(id)

    while (product) {
        id = random()
        product = await findOne(id)
    }

    products[id] = {
        id : id,
        ...productInfo
    }

    saveProducts()

    return products[id]
}

export const update = async (id : string, updateValues : Product) : Promise&amp;lt;UnitProduct | null&amp;gt; =&amp;gt; {

    const product = await findOne(id) 

    if (!product) {
        return null
    }

    products[id] = {
        id,
        ...updateValues
    }

    saveProducts()

    return products[id]
}

export const remove = async (id : string) : Promise&amp;lt;null | void&amp;gt; =&amp;gt; {

    const product = await findOne(id)

    if (!product) {
        return null
    }

    delete products[id]

    saveProducts()

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, you can reference the user's section for more details on what these functions provide to our API.&lt;/p&gt;
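&lt;p&gt;One detail worth calling out in the create function is the draw-until-unique id loop: a new id is drawn and re-checked on every iteration until it is unused. A standalone sketch of the pattern (with a hypothetical counter in place of uuid):&lt;/p&gt;

```typescript
// Standalone sketch of the draw-until-unique id loop.
const taken = new Set(["id-1", "id-2"]) // hypothetical ids already in use

let counter = 0
const random = () : string => `id-${++counter}` // stand-in for uuid's v4()

let id = random()
while (taken.has(id)) {
    id = random() // re-check the newly drawn id each iteration
}

console.log(id) // "id-3"
```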

&lt;p&gt;Once our logic checks out, it's time to implement the routes for our products.&lt;/p&gt;

&lt;p&gt;Populate the &lt;code&gt;./src/products/product.routes.ts&lt;/code&gt; file with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express, {Request, Response} from "express"
import { Product, UnitProduct } from "./product.interface"
import * as database from "./product.database"
import {StatusCodes} from "http-status-codes"

export const productRouter = express.Router()

productRouter.get('/products', async (req : Request, res : Response) =&amp;gt; {
    try {
       const allProducts = await database.findAll()

       if (!allProducts.length) {
        return res.status(StatusCodes.NOT_FOUND).json({error : `No products found!`})
       }

       return res.status(StatusCodes.OK).json({total : allProducts.length, allProducts})
    } catch (error) {
       return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error}) 
    }
})

productRouter.get("/product/:id", async (req : Request, res : Response) =&amp;gt; {
    try {
        const product = await database.findOne(req.params.id)

        if (!product) {
            return res.status(StatusCodes.NOT_FOUND).json({error : "Product does not exist"})
        }

        return res.status(StatusCodes.OK).json({product})
    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})


productRouter.post("/product", async (req : Request, res : Response) =&amp;gt; {
    try {
        const {name, price, quantity, image} = req.body

        if (!name || !price || !quantity || !image) {
            return res.status(StatusCodes.BAD_REQUEST).json({error : `Please provide all the required parameters..`})
        }
        const newProduct = await database.create({...req.body})
        return res.status(StatusCodes.CREATED).json({newProduct})
    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})

productRouter.put("/product/:id", async (req : Request, res : Response) =&amp;gt; {
    try {
        const id = req.params.id

        const newProduct = req.body

        const findProduct = await database.findOne(id)

        if (!findProduct) {
            return res.status(StatusCodes.NOT_FOUND).json({error : `Product does not exist..`})
        }

        const updateProduct = await database.update(id, newProduct)

        return res.status(StatusCodes.OK).json({updateProduct})
    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})


productRouter.delete("/product/:id", async (req : Request, res : Response) =&amp;gt; {
    try {
        const getProduct = await database.findOne(req.params.id)

        if (!getProduct) {
            return res.status(StatusCodes.NOT_FOUND).json({error : `No product with ID ${req.params.id}`})
        }

        await database.remove(req.params.id)

        return res.status(StatusCodes.OK).json({msg : `Product deleted..`})

    } catch (error) {
        return res.status(StatusCodes.INTERNAL_SERVER_ERROR).json({error})
    }
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't forget to import and call the product's route in our app.ts file, which should now look like this :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express from "express"
import * as dotenv from "dotenv"
import cors from "cors"
import helmet from "helmet"
import { userRouter } from "./users/users.routes"
import { productRouter } from "./products/product.routes"

dotenv.config()

if (!process.env.PORT) {
    console.log(`No port value specified...`)
    process.exit(1)
}

const PORT = parseInt(process.env.PORT as string, 10)

const app = express()

app.use(express.json())
app.use(express.urlencoded({extended : true}))
app.use(cors())
app.use(helmet())

app.use('/', userRouter)
app.use('/', productRouter)

app.listen(PORT, () =&amp;gt; {
    console.log(`Server is listening on port ${PORT}`)
})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perfect. We now have a full-fledged API built with TypeScript and Node.js. Hurray!&lt;/p&gt;

&lt;p&gt;Let's test our endpoints. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create product&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg8nmf9sg8w0oxnjwlcc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg8nmf9sg8w0oxnjwlcc.png" alt="Create product" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All products&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau4clig149pl1awzuveo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau4clig149pl1awzuveo.png" alt="All products" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single product&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fteoekr6sgsllyo7xrz9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fteoekr6sgsllyo7xrz9u.png" alt="Single product" width="800" height="498"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update product&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhaxn7q7q093nwkbg8jr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhaxn7q7q093nwkbg8jr.png" alt="Update product" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delete product&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6mrja3upck7t2hfyubr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6mrja3upck7t2hfyubr.png" alt="Delete product" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you add new products, they will be appended to the &lt;code&gt;products.json&lt;/code&gt; file, which will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0hlb7d0jv4bidkir7vh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0hlb7d0jv4bidkir7vh.png" alt="Products.json file" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it, we are done. If you made it this far, congratulations and thank you!&lt;/p&gt;

&lt;p&gt;Comments and recommendations are welcome.&lt;/p&gt;

&lt;p&gt;You can find the complete code on GitHub here -&amp;gt; &lt;a href="https://github.com/REALSTEVEIG/REST-API-WITH-TYPESCRIPT-NODEJS-AND-A-FILE-BASED-STORAGE-SYSTEM" rel="noopener noreferrer"&gt;GITHUB&lt;/a&gt;&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>javascript</category>
      <category>backend</category>
      <category>webdev</category>
    </item>
    <item>
      <title>BUILDING MY BLOGGING API (CHALLENGES FACED, LESSONS LEARNT AND OPTIMIZATION)</title>
      <dc:creator>STEVE</dc:creator>
      <pubDate>Thu, 05 Jan 2023 11:42:11 +0000</pubDate>
      <link>https://dev.to/realsteveig/building-my-blogging-api-challenges-faced-lessons-learnt-and-optimization-4pfb</link>
      <guid>https://dev.to/realsteveig/building-my-blogging-api-challenges-faced-lessons-learnt-and-optimization-4pfb</guid>
      <description>&lt;p&gt;The project at hand was a blogging API built with JavaScript, Node.js, Express, and MongoDB as part of a second semester exam. The initial scope of the project included various features such as creating and publishing blog posts, managing user accounts, and implementing authentication and authorization. However, during the development process, there were certain features that were either wrongly implemented or not implemented at all.&lt;/p&gt;

&lt;p&gt;One of the main challenges faced while working on the project was ensuring the correctness and reliability of the implemented features. To address this, a decision was made to re-implement all the wrongly implemented features and to complete the implementation of all the features that were not implemented at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New Feature&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to re-implementing the wrongly implemented features, a new feature was also added to the project: user validation using Joi. Joi is a powerful and flexible schema validation library for JavaScript objects. It allows developers to define a schema for an object and then validate the object against that schema. This helps ensure that the object adheres to the required format and structure, and helps prevent errors and bugs in the application.&lt;/p&gt;

&lt;p&gt;The new feature was implemented by defining a schema for the user object using Joi, and then calling Joi's &lt;code&gt;validateAsync&lt;/code&gt; method to validate the user object against the schema. Any errors or deviations from the schema were then handled, and appropriate feedback was provided to the user. This added an extra layer of security and reliability to the application, ensuring that only valid and properly formatted user data was processed and stored.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const Joi = require('joi');

const schema = Joi.object({
    email: Joi.string()
        .email({ minDomainSegments: 2, tlds: { allow: ['com', 'net'] } })
        .required(),

    password: Joi.string()
        .pattern(new RegExp('^[a-zA-Z0-9]{3,30}$'))
});

const userValidation = async (req, res, next) =&amp;gt; {
    try {
        const payload = req.body;
        await schema.validateAsync(payload);
        next();
    } catch (error) {
        console.log(error);
        // A failed validation is a client error, so respond with 400 rather than 500
        return res
            .status(400)
            .json({ error: error.details[0].message });
    }
};

module.exports = userValidation;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Re-implementation of Wrongly Implemented Features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the features that was wrongly implemented was the authentication and authorization system. The initial implementation had several security vulnerabilities and did not properly enforce the required permissions and access controls. To fix this, the authentication and authorization system was completely re-implemented from scratch.&lt;/p&gt;

&lt;p&gt;The re-implementation process involved designing a new authentication and authorization flow that addressed the security vulnerabilities of the initial implementation. This included the use of secure hashes and salts for storing passwords, as well as the implementation of proper access controls and permissions. The re-implemented system was thoroughly tested to ensure that it was reliable and secure.&lt;/p&gt;

&lt;p&gt;Another feature that was wrongly implemented was the user account management system. The initial implementation had several issues with data consistency and reliability, which led to errors and inconsistencies in the application. To fix this, the user account management system was re-implemented to ensure that all data was properly validated, stored, and retrieved. This included the implementation of proper data validation and error handling, as well as the use of transactions to ensure data consistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation of Missing Features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to re-implementing the wrongly implemented features, there were also several features that were not implemented at all in the initial implementation. These features were completed as part of the re-implementation process.&lt;/p&gt;

&lt;p&gt;One of the missing features that was implemented was the ability to delete blog posts. This feature was implemented by adding a new route and corresponding logic to the application, which allowed users with the appropriate permissions to delete their own blog posts. The delete functionality was thoroughly tested to ensure that it worked as expected.&lt;/p&gt;
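&lt;p&gt;The permission check behind such a delete route can be sketched as a plain function. The in-memory &lt;code&gt;posts&lt;/code&gt; map and the function name below are illustrative only, standing in for the project's MongoDB collection and route handler.&lt;/p&gt;

```javascript
// Hypothetical sketch: only the author of a post may delete it.
// A Map stands in for the MongoDB collection used in the actual project.
const posts = new Map(); // id -> { authorId, title, body }

function deletePost(postId, requesterId) {
    const post = posts.get(postId);
    if (!post) {
        return { status: 404, error: 'Post not found' };
    }
    if (post.authorId !== requesterId) {
        // Enforce the permission: requester must be the post's author.
        return { status: 403, error: 'Forbidden: only the author may delete this post' };
    }
    posts.delete(postId);
    return { status: 200 };
}
```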

&lt;p&gt;Another missing feature that was implemented was the ability to edit blog posts. This feature was implemented by adding a new route and corresponding logic to the application, which allowed users with the appropriate permissions to edit their own blog posts. The edit functionality was also thoroughly tested to ensure that it worked.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
