<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joel Ndoh</title>
    <description>The latest articles on DEV Community by Joel Ndoh (@ndohjapan).</description>
    <link>https://dev.to/ndohjapan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F619592%2F5484df7c-49b9-4585-bc28-8d45790999e5.png</url>
      <title>DEV Community: Joel Ndoh</title>
      <link>https://dev.to/ndohjapan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ndohjapan"/>
    <language>en</language>
    <item>
      <title>Running Unit Tests with MongoDB in a Node.js Express Application using Jest</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Sun, 26 May 2024 13:41:12 +0000</pubDate>
      <link>https://dev.to/ndohjapan/running-unit-tests-with-mongodb-in-a-nodejs-express-application-using-jest-42de</link>
      <guid>https://dev.to/ndohjapan/running-unit-tests-with-mongodb-in-a-nodejs-express-application-using-jest-42de</guid>
      <description>&lt;p&gt;Hey everyone!&lt;/p&gt;

&lt;p&gt;It's been a while since I last posted, and I'm excited to share something new today. Let's dive into running unit tests with MongoDB in a Node.js Express application. If you've ever struggled to set this up, this post is for you!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Post?
&lt;/h2&gt;

&lt;p&gt;Online resources often cover automated test setups for Node.js with SQL databases like PostgreSQL. However, setting up unit tests for Node.js, Express, and MongoDB is less common and can be quite challenging. Issues with running MongoDB in-memory for each test and the speed of test execution are just a couple of the hurdles.&lt;/p&gt;

&lt;p&gt;Having used MongoDB for over three years, I've developed a robust method for running tests with MongoDB. My goal here is to help you set up your Node.js Express app to run tests smoothly with MongoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing the Necessary Packages
&lt;/h2&gt;

&lt;p&gt;Let's start by installing the essential packages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;jest&lt;/code&gt;: Jest is our test runner for Node.js Express applications.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;jest-watch-typeahead&lt;/code&gt;: This package makes it easier to run specific tests. It allows us to select and search for test files and rerun failed tests quickly.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;supertest&lt;/code&gt;: Supertest allows us to simulate API requests in our tests.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mongodb&lt;/code&gt;: The MongoDB driver for JavaScript.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;uuid&lt;/code&gt;: This package generates unique IDs for each database used in our tests.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting Up the Jest Config File
&lt;/h2&gt;

&lt;p&gt;Here's where we configure Jest to work with our setup. We include jest-watch-typeahead for flexibility and specify the test folder.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;jest.config.js&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
  testRegex: './__tests__/.*\\.spec\\.js$',
  watchPlugins: [
    'jest-watch-typeahead/filename',
    'jest-watch-typeahead/testname',
  ],
  testPathIgnorePatterns: ['&amp;lt;rootDir&amp;gt;/node_modules/', '&amp;lt;rootDir&amp;gt;/config/'],
  testEnvironment: 'node',
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Handling Database Setup and Cleanup
&lt;/h2&gt;

&lt;p&gt;Understanding the problem: with SQL, tests can run against a true in-memory database (for example, SQLite's in-memory mode), which is fast and isolated. MongoDB, however, lacks built-in in-memory support for isolated tests. Packages like mongodb-memory-server exist, but they have limitations around isolation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations of mongodb-memory-server
&lt;/h2&gt;

&lt;p&gt;The mongodb-memory-server package allows us to spin up an in-memory instance of MongoDB for testing purposes. However, it comes with several limitations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shared Instance&lt;/strong&gt;: In a typical setup, mongodb-memory-server does not give each test case its own instance; tests share the same in-memory database, which can lead to data collisions and inconsistent test results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Overhead&lt;/strong&gt;: Running an in-memory MongoDB instance can be resource-intensive. For large test suites, this can slow down the overall execution time and lead to performance bottlenecks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Features&lt;/strong&gt;: Some features and configurations available in a full MongoDB instance may not be supported or fully functional in the in-memory server. This can lead to discrepancies between test and production environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability Issues&lt;/strong&gt;: As the number of tests grows, managing the in-memory database becomes increasingly complex, and ensuring data isolation and cleanup becomes a significant challenge.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Given these limitations, we need a solution that ensures each test runs in an isolated environment, mimicking the behavior of the in-memory databases used with SQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Solution
&lt;/h2&gt;

&lt;p&gt;We'll create a new database for each test and drop it afterward. This approach uses UUIDs to ensure uniqueness and avoids database conflicts.&lt;/p&gt;

&lt;p&gt;Setup File: &lt;code&gt;setup.js&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mongoose = require('mongoose');
const { connectToDatabase } = require('../connection/db-conn');
const { redis_client } = require('../connection/redis-conn');
const { v4: uuidv4 } = require('uuid');

const setup = () =&amp;gt; {
  beforeEach(async () =&amp;gt; {
    await connectToDatabase(`mongodb://127.0.0.1:27017/test_${uuidv4()}`);
  });

  afterEach(async () =&amp;gt; {
    await mongoose.connection.dropDatabase();
    await mongoose.connection.close();
    await redis_client.flushDb();
  });
};

module.exports = { setup }; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Database Cleanup Script
&lt;/h2&gt;

&lt;p&gt;To handle databases that may not be dropped properly after tests, we create a cleanup script. This script deletes all test databases whose names start with "test_".&lt;/p&gt;

&lt;p&gt;Database Cleanup File: &lt;code&gt;testdb-cleanup.js&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { MongoClient } = require('mongodb');

const deleteTestDatabases = async () =&amp;gt; {
  const url = 'mongodb://127.0.0.1:27017';
  const client = new MongoClient(url);

  try {
    await client.connect();
    const databaseNames = await client.db().admin().listDatabases();
    const testDatabaseNames = databaseNames.databases.filter(db =&amp;gt; db.name.startsWith('test_'));

    for (const database of testDatabaseNames) {
      await client.db(database.name).dropDatabase();
      console.log(`Deleted database: ${database.name}`);
    }
  } catch (error) {
    console.error('Error deleting test databases:', error);
  } finally {
    await client.close();
  }
};

deleteTestDatabases();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Updating package.json to Include Database Cleanup
&lt;/h2&gt;

&lt;p&gt;To ensure cleanup happens after the tests run, we update &lt;code&gt;package.json&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;package.json&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "scripts": {
    "start": "node index.js",
    "dev": "nodemon index.js",
    "test": "jest --runInBand --watch ./__tests__ &amp;amp;&amp;amp; npm run test:cleanup",
    "test:cleanup": "node ./__tests__/testdb-cleanup.js"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running Tests
&lt;/h2&gt;

&lt;p&gt;We'll place our tests in a folder called &lt;code&gt;__tests__&lt;/code&gt;. Inside, we'll have an &lt;code&gt;apis&lt;/code&gt; folder for API tests and a &lt;code&gt;functions&lt;/code&gt; folder for unit tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Writing Tests
&lt;/h2&gt;

&lt;p&gt;Use &lt;code&gt;describe&lt;/code&gt; to group related tests.&lt;/p&gt;

&lt;p&gt;Function Tests: &lt;code&gt;./__tests__/functions/Rizz.spec.js&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const RizzService = require('../../controllers/rizz-controller');
const Rizz = require('../../database/model/Rizz');
const { setup } = require('../setup');
setup();

describe('Getting the rizz', () =&amp;gt; {
  it('should return the rizzes added to the database', async () =&amp;gt; {
    await Rizz.create([{ text: 'First Rizz' }, { text: 'Second Rizz' }]);
    const result = await RizzService.GetLatestRizz();
    expect(result.total_docs).toBe(2);
  });
});

describe('Liking a rizz', () =&amp;gt; {
  it('should increase the likes count of a rizz', async () =&amp;gt; {
    const rizz = await Rizz.create({ text: 'First Rizz' });
    await RizzService.LikeRizz(rizz._id);
    const updatedRizz = await Rizz.findById(rizz._id);
    expect(updatedRizz.likes).toBe(1);
  });
}); 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;API Tests: &lt;code&gt;./__tests__/apis/Rizz.spec.js&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const request = require('supertest');
const { app } = require('../../app');
const { setup } = require('../setup');
setup();

describe('Rizz API', () =&amp;gt; {
  it('should return the latest rizzes', async () =&amp;gt; {
    await request(app)
      .post('/api/v1/rizz')
      .send({ text: 'First Rizz' });

    const response = await request(app)
      .get('/api/v1/rizz/latest?page=1&amp;amp;limit=100')
      .send();

    expect(response.body.data.total_docs).toBe(1);
  });

  it('should like a rizz', async () =&amp;gt; {
    const rizzResponse = await request(app)
      .post('/api/v1/rizz')
      .send({ text: 'First Rizz' });

    const rizzId = rizzResponse.body.data._id;

    const response = await request(app)
      .post(`/api/v1/rizz/${rizzId}/like`)
      .send();

    expect(response.body.data.likes).toBe(1);
  });
}); 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Setting up unit tests with MongoDB in a Node.js Express app doesn't have to be daunting. By following these steps, you can ensure isolated, efficient tests. For the complete code, check out my GitHub repository: &lt;/p&gt;

&lt;p&gt;GitHub - &lt;a href="https://github.com/Ndohjapan/get-your-rizz"&gt;https://github.com/Ndohjapan/get-your-rizz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy testing! 🚀 #NodeJS #Express #MongoDB #AutomatedTesting #SoftwareDevelopment&lt;/p&gt;

</description>
      <category>testing</category>
      <category>express</category>
      <category>mongodb</category>
      <category>mongoose</category>
    </item>
    <item>
      <title>Implementing Clean Architecture and Adding a Caching Layer</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Sun, 05 May 2024 17:02:42 +0000</pubDate>
      <link>https://dev.to/ndohjapan/implementing-clean-architecture-and-adding-a-caching-layer-jkn</link>
      <guid>https://dev.to/ndohjapan/implementing-clean-architecture-and-adding-a-caching-layer-jkn</guid>
      <description>&lt;p&gt;In this article, we'll discuss implementing Clean Architecture and taking it a step further by adding a caching mechanism in front of our repository. This allows us to navigate between the database (Db) and Redis while performing a one-way data flow: read-only from Redis and write-only directly to the Db.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building a Web Application with Clean Architecture and Caching&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So far, for our final year project, we've been able to draft the database schema, which I shared in a previous LinkedIn post: &lt;a href="https://www.linkedin.com/posts/joel-ndoh_dbdiagramio-database-relationship-diagrams-activity-7176510355722956800-m30p?utm_source=share&amp;amp;utm_medium=member_desktop"&gt;https://www.linkedin.com/posts/joel-ndoh_dbdiagramio-database-relationship-diagrams-activity-7176510355722956800-m30p?utm_source=share&amp;amp;utm_medium=member_desktop&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I'd like to give a big shoutout to Rufai Quadri for designing the entire user interface.&lt;/p&gt;

&lt;p&gt;We've begun development, with me handling the backend while Promise Sheggsmann and Jay Stance work on the dashboards and mobile app, respectively.&lt;/p&gt;

&lt;p&gt;Since our system's requirements are dynamic, we aimed to create a plug-and-play system that can be easily adapted for any feature at any time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Clean Architecture for a Modular and Maintainable System&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Uncle Bob's Clean Architecture has been my go-to approach for every project I've worked on in the past eight months. There was no question about using it for this project either.&lt;/p&gt;

&lt;p&gt;Over the past few months, I've observed various implementations of Clean Architecture, all adhering to the core concepts. Here's how I've structured mine:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn7vo0jmx0uf2rajg8ze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn7vo0jmx0uf2rajg8ze.png" alt="Image description" width="711" height="686"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Resource/Entity: These are the fundamental objects of the entire project, defining the basic data structure and properties.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database: Here, I've modeled the schema for each resource and exposed a handful of database queries for each schema in a dedicated "schema-repository.js" file. All database queries throughout the application are therefore issued from a single location rather than scattered across the project, which makes it much easier to identify and fix any query that is hurting database or application performance. This approach comes from my experience of keeping the database layer inside the service layer, which proved cumbersome and is something I wouldn't repeat or recommend to others.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cache Layer: This layer consists of a single file containing a list of frequently used Redis queries. Similar to the database queries, this promotes code organization by keeping all Redis queries in one place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database Service Layer: This layer is crucial for creating a well-cached application. By "cache," I mean the application caches static data whenever possible to prevent serving stale data to users. Here, we have a database service file for all our schemas. This implementation effectively bridges the gap between achieving a well-cached system and avoiding the issue of stale data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service Layer: This layer houses all our business logic. Each resource has its own dedicated service file named "resource_name-service.js". Previously, I used to combine the middleware, database, and service layers into a single "controller" layer. However, over time, while working on larger projects, this approach proved to be difficult to manage and maintain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Routing/Middleware and Helper&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Routing/Middleware: In this layer, we've ensured that no business logic runs here. It's solely for routing and validating user input. We then use middleware for tasks like rate limiting and authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helper: This layer contains functions that are not tightly coupled with our project's core business logic. Here, you might find functions for:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Handling external API calls&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sending messages to message queues&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Receiving messages from message queues&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Executing routine tasks that ultimately call methods within the service layer.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
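&lt;p&gt;To make the data flow between these layers concrete, here is a minimal sketch of a database-service method pair, with in-memory &lt;code&gt;Map&lt;/code&gt;s standing in for the Mongo repository and the Redis client (the class and method names are illustrative, not from the project, and the real methods would be async). Writes go straight to the Db and then refresh the cache entry, so reads served from the cache are never stale:&lt;/p&gt;

```javascript
// In-memory stand-ins for the Mongo repository and the Redis client;
// the class and method names below are illustrative, not the project's.
class StudentDbService {
  constructor(db, cache) {
    this.db = db;       // source of truth: all writes land here
    this.cache = cache; // read path: serve from here when possible
  }

  create(id, data) {
    this.db.set(id, data);    // write directly to the Db...
    this.cache.set(id, data); // ...then refresh the cache entry
    return data;
  }

  get(id) {
    if (this.cache.has(id)) return this.cache.get(id); // read from cache
    const doc = this.db.get(id);                       // miss: hit the Db
    if (doc !== undefined) this.cache.set(id, doc);    // warm the cache
    return doc;
  }
}

const db = new Map();
const cache = new Map();
const students = new StudentDbService(db, cache);
students.create('s1', { name: 'Ada' });
console.log(students.get('s1').name); // Ada, served from the cache
```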

&lt;p&gt;This concludes my breakdown of how I implemented Clean Architecture in my final year project.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Scalable Notification System Design for 50 Million Users (Database Design)</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Mon, 15 Apr 2024 09:01:49 +0000</pubDate>
      <link>https://dev.to/ndohjapan/scalable-notification-system-design-for-50-million-users-database-design-4cl</link>
      <guid>https://dev.to/ndohjapan/scalable-notification-system-design-for-50-million-users-database-design-4cl</guid>
      <description>&lt;p&gt;Hey everyone, I will be writing about building a scalable notification system for an academic records management system I designed for my final year project.&lt;/p&gt;

&lt;p&gt;This system needs to handle a whopping 50 million users, so scalability is key!&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Constraints
&lt;/h2&gt;

&lt;p&gt;Imagine this: students, lecturers, and a super admin (think academic head) are all buzzing around on the platform. The system needs to send out different notifications to these users.&lt;/p&gt;

&lt;p&gt;For instance, a super admin might need to send announcements to a specific group of students. Students, on the other hand, might get reminders about upcoming classes, assignment deadlines, or new results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;To handle this, we'll use a NoSQL database like MongoDB. Here's how we'll structure things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NotificationEntity Collection&lt;/strong&gt;: This collection stores the core information about each notification type. Think of it as a notification template. We'll have details like the entity (e.g., coursework, class), the kind of event (e.g., creation, update), the notification type (reminder, announcement), and a message template with placeholders like &lt;code&gt;lecturer_name&lt;/code&gt; or &lt;code&gt;course_codes&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's a mongoose schema snippet for this collection:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

const NotificationEntitiesSchema = new mongoose.Schema(
  {
    entity: {
      type: String,
      required: true,
    },
    entity_kind: {
      type: String,
      required: true,
    },
    type: {
      type: String,
      required: true,
      enum: notification_types,
    },
    template: {
      type: String,
      required: true,
    },
    is_deleted: {
      type: Boolean,
      default: false,
      select: false,
    },
    __v: { type: Number, select: false },
  },
  {
    timestamps: {
      createdAt: 'created_at',
      updatedAt: 'updated_at',
    },
  },
);



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;Notification Collection&lt;/strong&gt;: This collection stores individual notifications fired off based on the templates in the NotificationEntity collection. We'll have the actual message, a reference to the notification entity it's based on, the type of actor who triggered it (super admin, lecturer), and their ID.&lt;br&gt;
Here's another mongoose schema snippet for this collection:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

const NotificationsSchema = new mongoose.Schema(
  {
    message: {
      type: String,
      required: true,
    },
    notification_entity: {
      type: mongoose.Schema.Types.ObjectId,
      ref: 'notification-entity',
      required: true,
    },
    actor_type: {
      type: String,
      enum: actors,
    },
    actor_id: {
      type: mongoose.Schema.Types.ObjectId,
      refPath: 'actor_type',
    },
  },
  {
    timestamps: {
      createdAt: 'created_at',
      updatedAt: 'updated_at'
    },
  }
);


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;NotificationReceivers Collection&lt;/strong&gt;: This collection keeps track of who received a particular notification and their read status (read/unread, and if read, the date). Essentially, it links Notifications to Students.&lt;br&gt;
Here's the mongoose schema snippet for this last collection:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

const NotificationReceiversSchema = new mongoose.Schema(
  {
    notification: {
      type: mongoose.Schema.Types.ObjectId,
      ref: 'notification',
      required: [true, en['notification-id-required']],
    },
    student: {
      type: mongoose.Schema.Types.ObjectId,
      ref: 'student',
      required: [true, en['student-id-required']],
    },
    read_status: {
      type: {
        status: {
          type: Boolean,
          default: false,
        },
        date_read: {
          type: Date,
        },
      },
    },
    is_deleted: {
      type: Boolean,
      default: false,
      select: false,
    },
    __v: { type: Number, select: false },
  },
  {
    timestamps: true,
  }
);


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  Alright, so how do notifications get sent? Let's break it down:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Fetch the notification template: We'll grab the relevant notification entity from the &lt;code&gt;NotificationEntity&lt;/code&gt; collection that matches the notification we want to send.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Craft the message: We'll use the template's message format and fill in any placeholders with relevant information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the notification document: A new document is created in the &lt;code&gt;Notification&lt;/code&gt; collection to store the actual notification message and references.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify recipients: We'll figure out who needs to receive this notification based on the notification type and any specific criteria (e.g., all students enrolled in a particular course).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Store notification receivers: For each recipient, a document is created in the &lt;code&gt;NotificationReceivers&lt;/code&gt; collection linking them to the notification and marking the initial read status as unread.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
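&lt;p&gt;Step 2 above (crafting the message) can be sketched with a small placeholder-filling helper. The &lt;code&gt;{{name}}&lt;/code&gt; placeholder syntax and the helper name are assumptions for illustration; the article doesn't pin a syntax down:&lt;/p&gt;

```javascript
// Fill a NotificationEntity template's placeholders with values.
// Unknown placeholders are left untouched rather than dropped.
function craftMessage(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? String(values[key]) : match,
  );
}

const template = 'New coursework from {{lecturer_name}} for {{course_codes}}';
const message = craftMessage(template, {
  lecturer_name: 'Dr. Ada',
  course_codes: 'CSC401',
});
console.log(message); // New coursework from Dr. Ada for CSC401
```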

&lt;p&gt;This way, we can efficiently track who has seen what notification and scale the system to handle a massive number of users because each notification is stored separately with references to recipients.&lt;/p&gt;

&lt;p&gt;Below is a simple illustration of these collections, drawn as tables in the style of a Relational Database Management System (RDBMS):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub1i7ke4b95uy6uz0iuk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub1i7ke4b95uy6uz0iuk.png" alt="Database Schema for Scalable Notification System"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, that's the blueprint for a scalable notification system that can handle millions of users on our academic management platform. Hope this walkthrough was helpful!&lt;/p&gt;

</description>
      <category>database</category>
      <category>mongodb</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Unraveling the Layers of System Requirements in Software Architecture</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Sun, 11 Feb 2024 20:55:39 +0000</pubDate>
      <link>https://dev.to/ndohjapan/unraveling-the-layers-of-system-requirements-in-software-architecture-api</link>
      <guid>https://dev.to/ndohjapan/unraveling-the-layers-of-system-requirements-in-software-architecture-api</guid>
      <description>&lt;p&gt;Greetings, Tech Enthusiasts! 👋&lt;/p&gt;

&lt;p&gt;It's been a while since I've delved into the world of blogging, and what better way to make a comeback than by unraveling the intricate layers of System Requirements in Software Architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Performance 🚀
&lt;/h2&gt;

&lt;p&gt;In the ever-evolving landscape of software design, the performance of a system is paramount. We're not just talking about how fast or responsive it should be, but breaking it down into measurable goals. Consider aiming for 90% of requests to be responded to in a crisp 50ms.&lt;/p&gt;

&lt;p&gt;To truly grasp this, let's introduce a coffee bar analogy:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;: Picture the time from ordering a coffee to the barista handing it over. It's the wait time plus the processing time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;: Think of throughput as how many cups of coffee a barista can whip up in a given timeframe.&lt;/p&gt;
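&lt;p&gt;A target like "90% of requests within 50ms" is just a percentile over measured latencies, which is easy to check. The sample numbers below are made up for illustration:&lt;/p&gt;

```javascript
// Nearest-rank percentile: the value below which p% of samples fall.
function percentile(samples, p) {
  const sorted = [...samples].sort((x, y) => x - y);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const latenciesMs = [12, 18, 22, 25, 30, 34, 41, 47, 49, 180];
const p90 = percentile(latenciesMs, 90);
console.log(p90); // 49 ms: the 50ms target is met despite one slow outlier
```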

&lt;h2&gt;
  
  
  Scaling Up: The Art of Scalability 📈
&lt;/h2&gt;

&lt;p&gt;Next on our exploration is Scalability. It's not just about handling a multitude of users simultaneously; it's about doing so gracefully. Think of it as ensuring your system can seamlessly dance through the traffic of a large user base without missing a beat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reliability in the Face of Challenges 🛡️
&lt;/h2&gt;

&lt;p&gt;Now, let's talk about Reliability. A robust system isn't just efficient; it's resilient. It withstands failures, data center downtimes, and unexpected events with a stoic demeanor.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fort Knox of Software: Security 🔒
&lt;/h2&gt;

&lt;p&gt;Security is a non-negotiable aspect of system requirements. We aim to send and store data securely, warding off any unauthorized attempts to breach our digital fortress.&lt;/p&gt;

&lt;h2&gt;
  
  
  Smooth Sailing through Deployment Challenges ⚙️
&lt;/h2&gt;

&lt;p&gt;Enter Deployment—a formidable challenge in managing large-scale, distributed systems. We aim for frictionless deployments of our complex architectures, comprising microservices, databases, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right Tools: Technology Stack ⚖️
&lt;/h2&gt;

&lt;p&gt;Last but not least, the Technology Stack. Depending on our system requirements, such as performance and scalability, choosing the right tech stack becomes crucial. It's akin to selecting the tools that best fit the job when crafting a large-scale system.&lt;/p&gt;

&lt;p&gt;Understanding and optimizing these six aspects will pave the way for robust and efficient systems. Stay tuned as we delve deeper into each element in the upcoming posts.&lt;/p&gt;

&lt;p&gt;Are you ready to elevate your understanding of Software Architecture? Let's embark on this journey together! 💻✨&lt;/p&gt;

&lt;p&gt;#SystemRequirements #TechInsights #SoftwareArchitecture #Performance #Scalability #Reliability #Security #Deployment #TechStack&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>The Importance of JSON Web Tokens in Microservices Architecture</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Mon, 27 Mar 2023 11:19:38 +0000</pubDate>
      <link>https://dev.to/ndohjapan/the-importance-of-json-web-tokens-in-microservices-architecture-4llo</link>
      <guid>https://dev.to/ndohjapan/the-importance-of-json-web-tokens-in-microservices-architecture-4llo</guid>
      <description>&lt;p&gt;JWTs are a compact, URL-safe means of representing claims to be transferred between two parties. In the context of microservices architecture, JWTs can be used to securely authenticate and authorize requests between services.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Here's how it works:&lt;/strong&gt; when a user logs into your application, &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The authentication service generates a JWT containing the user's identity and any relevant permissions. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This JWT is then passed to the user's client application, which can then use it to make requests to other microservices in the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each microservice can verify the authenticity of the JWT using a shared secret key. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This key is securely stored on each microservice and is used to verify the signature of the JWT. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the JWT is verified, the microservice can extract the user's identity and any relevant permissions from the token and use them to authorize the request.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Statelessness&lt;/strong&gt;: One of the key benefits of using JWTs in microservices architecture is that they are stateless. Unlike traditional session-based authentication, where a server has to maintain session information for each user, JWTs contain all the necessary information within the token itself.&lt;/p&gt;

&lt;p&gt;This means that each microservice can independently verify the token without relying on a central authentication server, making the system more resilient to failures and easier to scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-domain use&lt;/strong&gt;: Another benefit of JWTs is that they can be used across multiple domains. Because JWTs are self-contained, they can easily be passed between different services or even different applications. This makes it easier to integrate with third-party services or to build a system that spans multiple domains.&lt;/p&gt;




&lt;p&gt;In summary, JSON Web Tokens are an important tool in microservices architecture. They provide a secure, stateless way of authenticating and authorizing requests between services, making it easier to build complex systems that are scalable and resilient. If you're building a microservices architecture, be sure to consider using JWTs as part of your authentication and authorization strategy.&lt;/p&gt;

&lt;p&gt;Also, if you found this post helpful, please consider giving it a like, sharing what you think about it in the comment or sharing it with others. Thanks for reading!&lt;/p&gt;

</description>
      <category>jwt</category>
      <category>microservices</category>
      <category>authentication</category>
    </item>
    <item>
      <title>My takes on EsLint and Prettier against TypeScript</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Mon, 27 Mar 2023 00:33:17 +0000</pubDate>
      <link>https://dev.to/ndohjapan/my-takes-on-eslint-and-prettier-against-typescript-4p27</link>
      <guid>https://dev.to/ndohjapan/my-takes-on-eslint-and-prettier-against-typescript-4p27</guid>
<description>&lt;p&gt;As a developer, I've always been a strong advocate for using ESLint and Prettier for type safety in JavaScript, even over the TypeScript programming language. I know this might be a bit controversial, but hear me out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Number 1
&lt;/h2&gt;

&lt;p&gt;First of all, ESLint and Prettier offer a lot of flexibility and customization when it comes to enforcing coding standards and style. This not only improves code readability and consistency but also helps catch potential errors and bugs before they become a problem. TypeScript, while it offers type safety, can be more rigid and may not let you customize and enforce coding standards in the same way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Number 2
&lt;/h2&gt;

&lt;p&gt;Secondly, ESLint and Prettier are very easy to integrate into any JavaScript project, regardless of the framework or libraries being used. This can save a lot of time and effort, especially when working on multiple projects with different setups. TypeScript has a steeper learning curve and takes more effort to integrate properly into a project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Number 3
&lt;/h2&gt;

&lt;p&gt;Finally, ESLint and Prettier have a large and active community that constantly updates and improves the tools with new features and fixes. This keeps the tools aligned with the latest best practices and lets them adapt to changing development trends. While TypeScript also has a large community, the language itself tends to evolve more slowly than lint rules and formatting conventions.&lt;/p&gt;

&lt;p&gt;Of course, this is just my personal preference, and I understand that different developers have different opinions and experiences. However, I believe that ESLint and Prettier provide a great solution for type safety in JavaScript and should be considered a viable option, even over TypeScript.&lt;/p&gt;

&lt;p&gt;What are your thoughts on this? Do you prefer ESLint and Prettier or TypeScript for type safety in JavaScript? Let's start a conversation in the comments!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TH5zQ1JZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmyzrvm8mykq6jph0a94.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TH5zQ1JZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmyzrvm8mykq6jph0a94.gif" alt="What ever you say guys" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>javascript</category>
      <category>eslint</category>
      <category>prettier</category>
    </item>
    <item>
      <title>How to prevent data loss when a Postgres container is killed or shut down.</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Sun, 26 Mar 2023 21:27:20 +0000</pubDate>
      <link>https://dev.to/ndohjapan/how-to-prevent-data-loss-when-a-postgres-container-is-killed-or-shut-down-p8d</link>
      <guid>https://dev.to/ndohjapan/how-to-prevent-data-loss-when-a-postgres-container-is-killed-or-shut-down-p8d</guid>
<description>&lt;p&gt;I don't like to drag my blog posts out when I can go straight to the point.&lt;/p&gt;

&lt;p&gt;The short answer to this problem is &lt;code&gt;docker volume&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Docker Volume
&lt;/h2&gt;

&lt;p&gt;The main idea behind using Docker volumes is to keep the data outside the container's writable layer. The volume lives on the host and is mounted into the container's filesystem each time the container starts, so the data persists across restarts and always reflects the latest changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How To Create Docker Volume for PostgreSQL
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;To create a Docker volume for PostgreSQL, use the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; docker volume create pgdata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a new Docker volume called &lt;code&gt;pgdata&lt;/code&gt; that you can use to store the data for your PostgreSQL container. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start a new PostgreSQL container using the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name postgresql -v pgdata:/var/lib/postgresql/data postgres:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command starts a new PostgreSQL container called &lt;code&gt;postgresql&lt;/code&gt; and maps the &lt;code&gt;pgdata&lt;/code&gt; volume to the &lt;code&gt;/var/lib/postgresql/data&lt;/code&gt; directory in the container. This means that the data stored in the container will be stored in the &lt;code&gt;pgdata&lt;/code&gt; volume, which is separate from the container and will persist even if the container is deleted or restarted.&lt;/p&gt;

&lt;p&gt;If the container is deleted or restarted, you can simply start a new container and map the &lt;code&gt;pgdata&lt;/code&gt; volume to the &lt;code&gt;/var/lib/postgresql/data&lt;/code&gt; directory again, and your data will be restored. You do not need to create backups of your data or copy the data back into the container manually.&lt;/p&gt;
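
&lt;p&gt;If you manage your containers with Docker Compose, the same setup can be declared in a few lines. The service name and password below are placeholders for illustration, not values from this post:&lt;/p&gt;

```yaml
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example   # placeholder; set your own
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```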

&lt;p&gt;Also you can check the &lt;a href="https://docs.docker.com/storage/volumes/"&gt;docs&lt;/a&gt; for more information on docker volumes.&lt;/p&gt;

&lt;p&gt;I usually use Docker volumes in development, but I haven't given much thought to how important they are in production, where I might not run Postgres in a container at all. What about you? Tell me what you think in the comment section.&lt;/p&gt;

&lt;p&gt;Also, if you found this post helpful, please consider giving it a like or sharing it with others who may be facing a similar issue. Thanks for reading!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>pods</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>How To Prevent HPP and XSS Attacks In Nodejs</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Sun, 19 Mar 2023 19:07:21 +0000</pubDate>
      <link>https://dev.to/ndohjapan/how-to-prevent-hpp-and-xss-attacks-in-nodejs-2ana</link>
      <guid>https://dev.to/ndohjapan/how-to-prevent-hpp-and-xss-attacks-in-nodejs-2ana</guid>
      <description>&lt;p&gt;In today's world, cyber attacks are becoming more and more sophisticated. Two common types of attacks that websites and applications face are: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;HPP (HTTP Parameter Pollution) &lt;/li&gt;
&lt;li&gt;XSS (Cross-Site Scripting). &lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  HPP
&lt;/h2&gt;

&lt;p&gt;HPP attacks occur when the HTTP parameters are polluted with duplicate or malicious values. &lt;/p&gt;

&lt;h2&gt;
  
  
  XSS
&lt;/h2&gt;

&lt;p&gt;XSS attacks occur when attackers inject malicious scripts into a website or application. They happen most often when user input, such as URL query parameters, is rendered without being sanitized.&lt;/p&gt;

&lt;p&gt;Fortunately, there are modules available in Node.js that can help prevent these types of attacks. The &lt;code&gt;"hpp"&lt;/code&gt; module can prevent HPP attacks, while the &lt;code&gt;"xss-clean"&lt;/code&gt; module can prevent XSS attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prevent HPP
&lt;/h2&gt;

&lt;p&gt;The "hpp" module works by preventing the duplication of HTTP parameters. It does this by checking each parameter and removing duplicates before passing the request to the next middleware. This ensures that the server receives only one instance of each parameter, preventing any HPP attacks that may be attempted.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To use the "hpp" module, simply install it using NPM
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install hpp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Require it in your code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const hpp = require('hpp');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Then add the middleware to your application:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.use(hpp());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Prevent XSS
&lt;/h2&gt;

&lt;p&gt;The "xss-clean" module, on the other hand, prevents XSS attacks by sanitizing user input. It does this by escaping characters that could be used to execute scripts, such as "&amp;lt;" and "&amp;gt;". This ensures that any user input is safe to use and cannot be used to execute malicious scripts.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To use the "xss-clean" module, install it using NPM
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install xss-clean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Require it in your code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const xss = require('xss-clean');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Then add the middleware to your application:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.use(xss());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In conclusion, HPP and XSS attacks are two common types of attacks that websites and applications face. Fortunately, modules such as "hpp" and "xss-clean" are available in Node.js to prevent these attacks. By using these modules in your Node.js application, you can help ensure that your application is secure and protected from these types of attacks.&lt;/p&gt;

&lt;p&gt;I post stuff around DevOps and Backend Engineering, you can follow me if you found this helpful.&lt;/p&gt;

</description>
      <category>xss</category>
      <category>hpp</category>
      <category>security</category>
      <category>node</category>
    </item>
    <item>
      <title>How To Fix "Starting broker... completed with 0 plugins." In RabbitMq</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Sat, 18 Mar 2023 18:00:17 +0000</pubDate>
      <link>https://dev.to/ndohjapan/how-to-fix-starting-broker-completed-with-0-plugins-in-rabbitmq-22p</link>
      <guid>https://dev.to/ndohjapan/how-to-fix-starting-broker-completed-with-0-plugins-in-rabbitmq-22p</guid>
<description>&lt;p&gt;When you install RabbitMQ, it ships with a set of plugins, but none of them are enabled by default. If you start RabbitMQ without any additional configuration, you may see a message like the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logs: &amp;lt;stdout&amp;gt;
        c:/Users/&amp;lt;COMPUTER NAME&amp;gt;/AppData/Roaming/RabbitMQ/log/rabbit@Japan.log
        c:/Users/&amp;lt;COMPUTER NAME&amp;gt;/AppData/Roaming/RabbitMQ/log/rabbit@Japan_upgrade.log

  Config file(s): (none)

  Starting broker... completed with 0 plugins.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Despite how it reads, this is not a startup failure: the broker started successfully, just with no plugins enabled. If you want the web management UI, the fix is to enable the management plugin.&lt;/p&gt;

&lt;p&gt;To install the management plugin, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open a command prompt or terminal window.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the RabbitMQ sbin directory. For example, on Windows, the command may look like:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd "C:\Users\ndohj\Downloads\rabbitmq-server-windows-3.11.10\rabbitmq_server-3.11.10\sbin"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run the following command to enable the management plugin:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rabbitmq-plugins enable rabbitmq_management
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command enables the management plugin, which ships with RabbitMQ and provides a web-based interface for managing the broker.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Restart the RabbitMQ server by running the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rabbitmq-server.bat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command starts the RabbitMQ server with the newly installed management plugin.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a web browser and navigate to &lt;code&gt;http://localhost:15672&lt;/code&gt;. This will bring up the RabbitMQ management interface.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If everything was installed and configured correctly, you should see a login page where you can enter your RabbitMQ credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk88vbldera0ock1hr57p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk88vbldera0ock1hr57p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it! You should now be able to use RabbitMQ with the management plugin enabled. If you encounter any further issues, leave a comment, check the RabbitMQ documentation, or seek help from the RabbitMQ community.&lt;/p&gt;

&lt;p&gt;And don't forget to follow and react to the post 😎&lt;/p&gt;

</description>
      <category>rabbitmq</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Why Is EsLint Important In Nodejs</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Thu, 16 Mar 2023 18:32:27 +0000</pubDate>
      <link>https://dev.to/ndohjapan/why-is-eslint-important-in-nodejs-1mfe</link>
      <guid>https://dev.to/ndohjapan/why-is-eslint-important-in-nodejs-1mfe</guid>
      <description>&lt;p&gt;ESLint is a popular tool used in Node.js development as a devDependency. Its main function is to analyze and find issues in your code that may cause errors or affect code quality. ESLint is a static analysis tool that checks your JavaScript code for syntax and style errors, as well as other common issues.&lt;/p&gt;

&lt;p&gt;When ESLint is added to a Node.js project as a devDependency, it can help improve code quality, consistency, and maintainability. It enforces coding standards and helps prevent mistakes that can lead to bugs and security vulnerabilities. By detecting issues early in the development process, it can save time and effort in the long run by reducing the time spent on debugging and maintenance.&lt;/p&gt;

&lt;p&gt;ESLint can be configured to match your team's coding style and preferences, which ensures that everyone adheres to the same coding standards. It can be integrated into your development workflow, such as your text editor or Continuous Integration (CI) pipeline, to provide immediate feedback as you write code.&lt;/p&gt;
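
&lt;p&gt;As a sketch of what that configuration looks like, a minimal &lt;code&gt;.eslintrc.json&lt;/code&gt; might be as simple as the following; the specific rules are just examples, not a recommendation from this post:&lt;/p&gt;

```json
{
  "env": { "node": true, "es2021": true },
  "extends": "eslint:recommended",
  "rules": {
    "no-unused-vars": "error",
    "eqeqeq": "error"
  }
}
```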

&lt;p&gt;Overall, ESLint is an essential devDependency for Node.js development, as it helps ensure that your code is high-quality, maintainable, and free from errors.&lt;/p&gt;

&lt;p&gt;Don't forget to like, and follow for more simple analogies of complex concepts in web development and DevOps.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How To Use "--save-dev" when installing Nodejs Modules</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Tue, 14 Mar 2023 19:26:23 +0000</pubDate>
      <link>https://dev.to/ndohjapan/how-to-use-save-dev-when-installing-nodejs-modules-52ae</link>
      <guid>https://dev.to/ndohjapan/how-to-use-save-dev-when-installing-nodejs-modules-52ae</guid>
      <description>&lt;h2&gt;
  
  
  Why Use &lt;code&gt;--save-dev&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;When installing npm packages, the &lt;code&gt;--save-dev&lt;/code&gt; option is used to indicate that the package being installed is a development dependency.&lt;/p&gt;

&lt;p&gt;Development dependencies are packages that are only needed during the development process, such as testing libraries or build tools. They are not required for the production environment where the application will be deployed.&lt;/p&gt;
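
&lt;p&gt;For example, running &lt;code&gt;npm install express&lt;/code&gt; and &lt;code&gt;npm install --save-dev jest&lt;/code&gt; records the packages in separate sections of &lt;code&gt;package.json&lt;/code&gt; (package names and version numbers here are illustrative):&lt;/p&gt;

```json
{
  "dependencies": {
    "express": "^4.18.0"
  },
  "devDependencies": {
    "jest": "^29.0.0"
  }
}
```

&lt;p&gt;A production install (&lt;code&gt;npm install --omit=dev&lt;/code&gt; in recent npm versions) then skips everything under &lt;code&gt;devDependencies&lt;/code&gt;.&lt;/p&gt;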

&lt;h2&gt;
  
  
  Importance
&lt;/h2&gt;

&lt;p&gt;The importance of using &lt;code&gt;--save-dev&lt;/code&gt; when installing npm packages is that it helps you manage dependencies more efficiently. By separating production dependencies from development dependencies, it is easier to track which packages each environment needs. This makes it easier to deploy the application to production, since only the necessary dependencies are installed.&lt;/p&gt;

&lt;p&gt;Additionally, when other developers work on the project, they can easily see which packages are used for development purposes and which are used for production purposes. This helps to prevent confusion and reduces the risk of including unnecessary packages in the production environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Overall, using &lt;code&gt;--save-dev&lt;/code&gt; when installing npm packages is an important best practice for managing dependencies and ensuring that applications are properly configured for deployment.&lt;/p&gt;

&lt;p&gt;Don't forget to leave a comment, like and even follow if you found this interesting. 😎&lt;/p&gt;

</description>
      <category>node</category>
      <category>testing</category>
      <category>devops</category>
      <category>express</category>
    </item>
    <item>
      <title>Great Analogy To Explain ID Tokens Vs Access Tokens</title>
      <dc:creator>Joel Ndoh</dc:creator>
      <pubDate>Mon, 06 Mar 2023 14:34:33 +0000</pubDate>
      <link>https://dev.to/ndohjapan/great-analogy-to-explain-id-tokens-vs-access-tokens-5an4</link>
      <guid>https://dev.to/ndohjapan/great-analogy-to-explain-id-tokens-vs-access-tokens-5an4</guid>
      <description>&lt;h2&gt;
  
  
  Analogy For ID Token
&lt;/h2&gt;

&lt;p&gt;Imagine you're planning to attend a fancy party at a private club. Before you can enter the club, you need to show the bouncer your ID to prove that you're on the guest list. In this analogy, the &lt;strong&gt;bouncer&lt;/strong&gt; represents the &lt;strong&gt;server&lt;/strong&gt; that's responsible for controlling access to the club, and your &lt;strong&gt;ID&lt;/strong&gt; represents your &lt;strong&gt;ID token&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is ID Token
&lt;/h2&gt;

&lt;p&gt;An ID token is a type of token that's used in authentication protocols like OAuth 2.0 and OpenID Connect. It's a JSON Web Token (JWT) that contains information about the authenticated user, such as their user ID and email address. When a user logs in to an application using a third-party identity provider (like Google or Facebook), the identity provider sends an ID token to the application server to verify the user's identity. The application server can then use the information in the ID token to create a user account or grant access to certain resources.&lt;/p&gt;
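
&lt;p&gt;Because an ID token is just a JWT (&lt;code&gt;header.payload.signature&lt;/code&gt;), its claims can be read by base64url-decoding the middle segment. This is a sketch only; a real application must verify the signature before trusting any of these claims.&lt;/p&gt;

```javascript
// Decode the claims (payload) segment of a JWT-format ID token.
// Signature verification is deliberately omitted in this sketch.
function decodeClaims(idToken) {
  const payload = idToken.split('.')[1];
  return JSON.parse(Buffer.from(payload, 'base64url').toString());
}
```

&lt;p&gt;Typical claims in the decoded object include &lt;code&gt;sub&lt;/code&gt; (the user ID), &lt;code&gt;email&lt;/code&gt;, &lt;code&gt;iss&lt;/code&gt; (the identity provider), and &lt;code&gt;exp&lt;/code&gt; (expiry).&lt;/p&gt;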

&lt;h2&gt;
  
  
  Analogy For Access Token
&lt;/h2&gt;

&lt;p&gt;Now let's say you're inside the club and you want to order a drink from the bar. You need to show the bartender your membership card to prove that you're allowed to order drinks. In this analogy, the bartender represents the server that's responsible for controlling access to the club's resources (like drinks), and your membership card represents your access token.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Access Token
&lt;/h2&gt;

&lt;p&gt;An access token is a type of token that's used in authorization protocols like OAuth 2.0. It's a string of characters that represents a user's permission to access certain resources. When a user logs in to an application using a third-party identity provider, the application can request an access token that allows the user to access certain resources (like their Google Drive files). The application can then include the access token in API requests to prove that the user has permission to access those resources.&lt;/p&gt;
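
&lt;p&gt;Showing the membership card on each request usually means attaching the access token in the standard &lt;code&gt;Authorization&lt;/code&gt; header with the Bearer scheme. A tiny helper sketch:&lt;/p&gt;

```javascript
// Return a copy of the request options with the access token attached
// using the Bearer scheme, leaving the original options untouched.
function withBearerToken(options, accessToken) {
  const headers = Object.assign({}, options.headers, {
    Authorization: 'Bearer ' + accessToken,
  });
  return Object.assign({}, options, { headers });
}
```

&lt;p&gt;You would then call something like &lt;code&gt;fetch(url, withBearerToken({ method: 'GET' }, token))&lt;/code&gt;, where &lt;code&gt;url&lt;/code&gt; and &lt;code&gt;token&lt;/code&gt; come from your own application.&lt;/p&gt;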

&lt;p&gt;So, to summarize:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;ID tokens are like IDs that prove your identity and allow you to enter the party (authenticate).&lt;/li&gt;
&lt;li&gt;Access tokens are like membership cards that grant you permission to access certain resources (authorize).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Don't forget to leave a comment if you found this interesting. 😎&lt;/p&gt;

</description>
      <category>security</category>
      <category>node</category>
      <category>programming</category>
      <category>identity</category>
    </item>
  </channel>
</rss>
